Dataset columns: id (int64, 39 to 79M), url (string, lengths 31 to 227), text (string, lengths 6 to 334k), source (string, lengths 1 to 150), categories (list, lengths 1 to 6), token_count (int64, 3 to 71.8k), subcategories (list, lengths 0 to 30).
61,229
https://en.wikipedia.org/wiki/Straightedge%20and%20compass%20construction
In geometry, straightedge-and-compass construction – also known as ruler-and-compass construction, Euclidean construction, or classical construction – is the construction of lengths, angles, and other geometric figures using only an idealized ruler and a pair of compasses. The idealized ruler, known as a straightedge, is assumed to be infinite in length, have only one edge, and no markings on it. The compass is assumed to have no maximum or minimum radius, and is assumed to "collapse" when lifted from the page, so it may not be directly used to transfer distances. (This is an unimportant restriction since, using a multi-step procedure, a distance can be transferred even with a collapsing compass; see compass equivalence theorem. Note however that whilst a non-collapsing compass held against a straightedge might seem to be equivalent to marking it, the neusis construction is still impermissible and this is what unmarked really means: see Markable rulers below.) More formally, the only permissible constructions are those granted by the first three postulates of Euclid's Elements. It turns out to be the case that every point constructible using straightedge and compass may also be constructed using compass alone, or by straightedge alone if given a single circle and its center. Ancient Greek mathematicians first conceived straightedge-and-compass constructions, and a number of ancient problems in plane geometry impose this restriction. The ancient Greeks developed many constructions, but in some cases were unable to do so. Gauss showed that some polygons are constructible but that most are not. Some of the most famous straightedge-and-compass problems were proved impossible by Pierre Wantzel in 1837 using field theory, namely trisecting an arbitrary angle and doubling the volume of a cube (see § impossible constructions). Many of these problems are easily solvable provided that other geometric transformations are allowed; for example, neusis construction can be used to solve the former two problems. In terms of algebra, a length is constructible if and only if it represents a constructible number, and an angle is constructible if and only if its cosine is a constructible number. A number is constructible if and only if it can be written using the four basic arithmetic operations and the extraction of square roots but of no higher-order roots. Straightedge and compass tools The "straightedge" and "compass" of straightedge-and-compass constructions are idealized versions of real-world rulers and compasses. The straightedge is an infinitely long edge with no markings on it. It can only be used to draw a line segment between two points, or to extend an existing line segment. The compass can have an arbitrarily large radius with no markings on it (unlike certain real-world compasses). Circles and circular arcs can be drawn starting from two given points: the centre and a point on the circle. The compass may or may not collapse (i.e. fold after being taken off the page, erasing its 'stored' radius). Lines and circles constructed have infinite precision and zero width. Actual compasses do not collapse and modern geometric constructions often use this feature. A 'collapsing compass' would appear to be a less powerful instrument. However, by the compass equivalence theorem in Proposition 2 of Book 1 of Euclid's Elements, no power is lost by using a collapsing compass. Although the proposition is correct, its proofs have a long and checkered history. 
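To make the compass equivalence concrete, here is an outline of the multi-step transfer procedure (Euclid's Elements, Book I, Proposition 2, paraphrased here as an illustration rather than quoted from the article): to copy the length BC to a point A with a collapsing compass, join AB and erect an equilateral triangle ABD on it; draw the circle centred at B with radius BC and let it cut the ray DB beyond B at G; then draw the circle centred at D with radius DG and let it cut the ray DA beyond A at L. Since DA = DB and DL = DG, the remainder AL equals BG = BC, so the distance BC has been transferred to A without ever carrying a fixed opening off the page.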
In any case, the equivalence is why this feature is not stipulated in the definition of the ideal compass. Each construction must be mathematically exact. "Eyeballing" distances (looking at the construction and guessing at its accuracy) or using markings on a ruler, are not permitted. Each construction must also terminate. That is, it must have a finite number of steps, and not be the limit of ever closer approximations. (If an unlimited number of steps is permitted, some otherwise-impossible constructions become possible by means of infinite sequences converging to a limit.) Stated this way, straightedge-and-compass constructions appear to be a parlour game, rather than a serious practical problem; but the purpose of the restriction is to ensure that constructions can be proved to be exactly correct. History The ancient Greek mathematicians first attempted straightedge-and-compass constructions, and they discovered how to construct sums, differences, products, ratios, and square roots of given lengths. They could also construct half of a given angle, a square whose area is twice that of another square, a square having the same area as a given polygon, and regular polygons of 3, 4, or 5 sides (or one with twice the number of sides of a given polygon). But they could not construct one third of a given angle except in particular cases, or a square with the same area as a given circle, or regular polygons with other numbers of sides. Nor could they construct the side of a cube whose volume is twice the volume of a cube with a given side. Hippocrates and Menaechmus showed that the volume of the cube could be doubled by finding the intersections of hyperbolas and parabolas, but these cannot be constructed by straightedge and compass. In the fifth century BCE, Hippias used a curve that he called a quadratrix to both trisect the general angle and square the circle, and Nicomedes in the second century BCE showed how to use a conchoid to trisect an arbitrary angle; but these methods also cannot be followed with just straightedge and compass. No progress on the unsolved problems was made for two millennia, until in 1796 Gauss showed that a regular polygon with 17 sides could be constructed; five years later he showed the sufficient criterion for a regular polygon of n sides to be constructible. In 1837 Pierre Wantzel published a proof of the impossibility of trisecting an arbitrary angle or of doubling the volume of a cube, based on the impossibility of constructing cube roots of lengths. He also showed that Gauss's sufficient constructibility condition for regular polygons is also necessary. Then in 1882 Lindemann showed that π is a transcendental number, and thus that it is impossible by straightedge and compass to construct a square with the same area as a given circle. The basic constructions All straightedge-and-compass constructions consist of repeated application of five basic constructions using the points, lines and circles that have already been constructed. These are: Creating the line through two points Creating the circle that contains one point and has a center at another point Creating the point at the intersection of two (non-parallel) lines Creating the one point or two points in the intersection of a line and a circle (if they intersect) Creating the one point or two points in the intersection of two circles (if they intersect).
For example, starting with just two distinct points, we can create a line or either of two circles (in turn, using each point as centre and passing through the other point). If we draw both circles, two new points are created at their intersections. Drawing lines between the two original points and one of these new points completes the construction of an equilateral triangle. Therefore, in any geometric problem we have an initial set of symbols (points and lines), an algorithm, and some results. From this perspective, geometry is equivalent to an axiomatic algebra, replacing its elements by symbols. Probably Gauss first realized this, and used it to prove the impossibility of some constructions; only much later did Hilbert find a complete set of axioms for geometry. Common straightedge-and-compass constructions The most-used straightedge-and-compass constructions include: Constructing the perpendicular bisector of a segment Finding the midpoint of a segment. Drawing a perpendicular line from a point to a line. Bisecting an angle Mirroring a point in a line Constructing a line through a point tangent to a circle Constructing a circle through 3 noncollinear points Drawing a line through a given point parallel to a given line. Constructible points One can associate an algebra to our geometry using a Cartesian coordinate system made of two lines, and represent points of our plane by vectors. Finally we can write these vectors as complex numbers. Using the equations for lines and circles, one can show that the points at which they intersect lie in a quadratic extension of the smallest field F containing two points on the line, the center of the circle, and the radius of the circle. That is, they are of the form x + y√k, where x, y, and k are in F. Since the field of constructible points is closed under square roots, it contains all points that can be obtained by a finite sequence of quadratic extensions of the field of complex numbers with rational coefficients. By the above paragraph, one can show that any constructible point can be obtained by such a sequence of extensions. As a corollary of this, one finds that the degree of the minimal polynomial for a constructible point (and therefore of any constructible length) is a power of 2. In particular, any constructible point (or length) is an algebraic number, though not every algebraic number is constructible; for example, the cube root of 2 is algebraic but not constructible. Constructible angles There is a bijection between the angles that are constructible and the points that are constructible on any constructible circle. The angles that are constructible form an abelian group under addition modulo 2π (which corresponds to multiplication of the points on the unit circle viewed as complex numbers). The angles that are constructible are exactly those whose tangent (or equivalently, sine or cosine) is constructible as a number. For example, the regular heptadecagon (the seventeen-sided regular polygon) is constructible because its cosine, cos(2π/17), is a constructible number, as discovered by Gauss. The group of constructible angles is closed under the operation that halves angles (which corresponds to taking square roots in the complex numbers). The only angles of finite order that may be constructed starting with two points are those whose order is either a power of two, or a product of a power of two and one or more distinct Fermat primes. In addition there is a dense set of constructible angles of infinite order.
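As a numerical companion to the equilateral-triangle example and the quadratic-extension description above (an added sketch, not part of the article; the function name and the floating-point handling are illustrative), the following Python fragment performs the fifth basic construction, intersecting the two circles drawn around A and B, and recovers the apex of the equilateral triangle:

import math

def circle_circle_intersection(c0, r0, c1, r1):
    """Intersection points of two circles -- the fifth basic construction.
    Returns a list of 0, 1 or 2 points."""
    (x0, y0), (x1, y1) = c0, c1
    d = math.hypot(x1 - x0, y1 - y0)
    if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
        return []                                  # no intersection (or identical centres)
    a = (r0 ** 2 - r1 ** 2 + d ** 2) / (2 * d)     # distance from c0 to the chord's midpoint
    h = math.sqrt(max(r0 ** 2 - a ** 2, 0.0))      # half the chord length
    mx, my = x0 + a * (x1 - x0) / d, y0 + a * (y1 - y0) / d
    ox, oy = h * (y1 - y0) / d, h * (x1 - x0) / d
    pts = [(mx + ox, my - oy), (mx - ox, my + oy)]
    return pts[:1] if h == 0 else pts

# Equilateral triangle on the segment from A = (0, 0) to B = (1, 0):
A, B = (0.0, 0.0), (1.0, 0.0)
r = math.dist(A, B)                                # compass opening |AB|
print(circle_circle_intersection(A, r, B, r))      # [(0.5, -0.866...), (0.5, 0.866...)]

The two printed apexes (1/2, ±√3/2) involve only rational numbers and the single square root √3, exactly the kind of quadratic-extension coordinates described above; in the same spirit, the regular pentagon is constructible because cos(2π/5) = (√5 - 1)/4 is built from rationals and one square root.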
Relation to complex arithmetic Given a set of points in the Euclidean plane, selecting any one of them to be called 0 and another to be called 1, together with an arbitrary choice of orientation allows us to consider the points as a set of complex numbers. Given any such interpretation of a set of points as complex numbers, the points constructible using valid straightedge-and-compass constructions alone are precisely the elements of the smallest field containing the original set of points and closed under the complex conjugate and square root operations (to avoid ambiguity, we can specify the square root with complex argument less than π). The elements of this field are precisely those that may be expressed as a formula in the original points using only the operations of addition, subtraction, multiplication, division, complex conjugate, and square root, which is easily seen to be a countable dense subset of the plane. Each of these six operations corresponds to a simple straightedge-and-compass construction. From such a formula it is straightforward to produce a construction of the corresponding point by combining the constructions for each of the arithmetic operations. More efficient constructions of a particular set of points correspond to shortcuts in such calculations. Equivalently (and with no need to arbitrarily choose two points) we can say that, given an arbitrary choice of orientation, a set of points determines a set of complex ratios given by the ratios of the differences between any two pairs of points. The set of ratios constructible using straightedge and compass from such a set of ratios is precisely the smallest field containing the original ratios and closed under taking complex conjugates and square roots. For example, the real part, imaginary part and modulus of a point or ratio z (taking one of the two viewpoints above) are constructible, as these may be expressed as Re(z) = (z + z̄)/2, Im(z) = (z - z̄)/(2i) and |z| = √(z·z̄), where z̄ denotes the complex conjugate of z. Doubling the cube and trisection of an angle (except for special angles such as any φ such that φ/(2π) is a rational number with denominator not divisible by 3) require ratios which are the solution to cubic equations, while squaring the circle requires a transcendental ratio. None of these are in the fields described, hence no straightedge-and-compass construction for these exists. Impossible constructions The ancient Greeks thought that the construction problems they could not solve were simply obstinate, not unsolvable. With modern methods, however, these straightedge-and-compass constructions have been shown to be logically impossible to perform. (The problems themselves, however, are solvable, and the Greeks knew how to solve them without the constraint of working only with straightedge and compass.) Squaring the circle The most famous of these problems, squaring the circle, otherwise known as the quadrature of the circle, involves constructing a square with the same area as a given circle using only straightedge and compass. Squaring the circle has been proved impossible, as it involves generating a transcendental number, that is, √π. Only certain algebraic numbers can be constructed with ruler and compass alone, namely those constructed from the integers with a finite sequence of operations of addition, subtraction, multiplication, division, and taking square roots. The phrase "squaring the circle" is often used to mean "doing the impossible" for this reason.
Without the constraint of requiring solution by ruler and compass alone, the problem is easily solvable by a wide variety of geometric and algebraic means, and was solved many times in antiquity. A method which comes very close to approximating the "quadrature of the circle" can be achieved using a Kepler triangle. Doubling the cube Doubling the cube is the construction, using only a straightedge and compass, of the edge of a cube that has twice the volume of a cube with a given edge. This is impossible because the cube root of 2, though algebraic, cannot be computed from integers by addition, subtraction, multiplication, division, and taking square roots. This follows because its minimal polynomial over the rationals has degree 3. This construction is possible using a straightedge with two marks on it and a compass. Angle trisection Angle trisection is the construction, using only a straightedge and a compass, of an angle that is one-third of a given arbitrary angle. This is impossible in the general case. For example, the angle 2π/5 radians (72° = 360°/5) can be trisected, but the angle of π/3 radians (60°) cannot be trisected. The general trisection problem is also easily solved when a straightedge with two marks on it is allowed (a neusis construction). Distance to an ellipse The line segment from any point in the plane to the nearest point on a circle can be constructed, but the segment from any point in the plane to the nearest point on an ellipse of positive eccentricity cannot in general be constructed. Note that the results proven here are mostly a consequence of the non-constructibility of conics. If the initial conic is considered as given, then the proof must be reviewed to check whether another distinct conic needs to be generated. As an example, constructions for normals of a parabola are known, but they need to use an intersection between a circle and the parabola itself. So they are not constructible in the sense that the parabola itself is not constructible. Alhazen's problem In 1997, the Oxford mathematician Peter M. Neumann proved the theorem that there is no ruler-and-compass construction for the general solution of the ancient Alhazen's problem (billiard problem or reflection from a spherical mirror). Constructing regular polygons Some regular polygons (e.g. a pentagon) are easy to construct with straightedge and compass; others are not. This led to the question: Is it possible to construct all regular polygons with straightedge and compass? Carl Friedrich Gauss in 1796 showed that a regular 17-sided polygon can be constructed, and five years later showed that a regular n-sided polygon can be constructed with straightedge and compass if the odd prime factors of n are distinct Fermat primes. Gauss conjectured that this condition was also necessary; the conjecture was proven by Pierre Wantzel in 1837. The first few constructible regular polygons have the following numbers of sides: 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40, 48, 51, 60, 64, 68, 80, 85, 96, 102, 120, 128, 136, 160, 170, 192, 204, 240, 255, 256, 257, 272... There are known to be an infinitude of constructible regular polygons with an even number of sides (because if a regular n-gon is constructible, then so is a regular 2n-gon and hence a regular 4n-gon, 8n-gon, etc.). However, there are only 5 known Fermat primes, giving only 31 known constructible regular n-gons with an odd number of sides.
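A sketch of the standard impossibility argument for the 60° case mentioned above (added here as an illustration of the field-theoretic criterion; it is the usual textbook argument, not a quotation from the article): trisecting 60° would make x = cos 20° constructible. The triple-angle identity cos 3θ = 4cos³θ - 3cos θ with θ = 20° gives 1/2 = 4x³ - 3x, that is, 8x³ - 6x - 1 = 0. By the rational root test the only possible rational roots are ±1, ±1/2, ±1/4 and ±1/8, and none of them satisfies the equation, so the cubic is irreducible over the rationals and cos 20° has degree 3 over the rationals. Since 3 is not a power of 2, cos 20° is not a constructible number, and the 60° angle cannot be trisected with straightedge and compass.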
Constructing a triangle from three given characteristic points or lengths Sixteen key points of a triangle are its vertices, the midpoints of its sides, the feet of its altitudes, the feet of its internal angle bisectors, and its circumcenter, centroid, orthocenter, and incenter. These can be taken three at a time to yield 139 distinct nontrivial problems of constructing a triangle from three points. Of these problems, three involve a point that can be uniquely constructed from the other two points; 23 can be non-uniquely constructed (in fact for infinitely many solutions) but only if the locations of the points obey certain constraints; in 74 the problem is constructible in the general case; and in 39 the required triangle exists but is not constructible. Twelve key lengths of a triangle are the three side lengths, the three altitudes, the three medians, and the three angle bisectors. Together with the three angles, these give 95 distinct combinations, 63 of which give rise to a constructible triangle, 30 of which do not, and two of which are underdefined. Restricted constructions Various attempts have been made to restrict the allowable tools for constructions under various rules, in order to determine what is still constructible and how it may be constructed, as well as determining the minimum criteria necessary to still be able to construct everything that compass and straightedge can. Constructing with only ruler or only compass It is possible (according to the Mohr–Mascheroni theorem) to construct anything with just a compass if it can be constructed with a ruler and compass, provided that the given data and the data to be found consist of discrete points (not lines or circles). The truth of this theorem depends on the truth of Archimedes' axiom, which is not first-order in nature. Examples of compass-only constructions include Napoleon's problem. It is impossible to take a square root with just a ruler, so some things that cannot be constructed with a ruler can be constructed with a compass; but (by the Poncelet–Steiner theorem) given a single circle and its center, they can be constructed. Extended constructions The ancient Greeks classified constructions into three major categories, depending on the complexity of the tools required for their solution. If a construction used only a straightedge and compass, it was called planar; if it also required one or more conic sections (other than the circle), then it was called solid; the third category included all constructions that did not fall into either of the other two categories. This categorization meshes nicely with the modern algebraic point of view. A complex number that can be expressed using only the field operations and square roots (as described above) has a planar construction. A complex number that includes also the extraction of cube roots has a solid construction. In the language of fields, a complex number that is planar has degree a power of two, and lies in a field extension that can be broken down into a tower of fields where each extension has degree two. A complex number that has a solid construction has degree with prime factors of only two and three, and lies in a field extension that is at the top of a tower of fields where each extension has degree 2 or 3. Solid constructions A point has a solid construction if it can be constructed using a straightedge, compass, and a (possibly hypothetical) conic drawing tool that can draw any conic with already constructed focus, directrix, and eccentricity. 
The same set of points can often be constructed using a smaller set of tools. For example, using a compass, straightedge, and a piece of paper on which we have the parabola y = x^2 together with the points (0,0) and (1,0), one can construct any complex number that has a solid construction. Likewise, a tool that can draw any ellipse with already constructed foci and major axis (think two pins and a piece of string) is just as powerful. The ancient Greeks knew that doubling the cube and trisecting an arbitrary angle both had solid constructions. Archimedes gave a neusis construction of the regular heptagon, which was interpreted by medieval Arabic commentators, Bartel Leendert van der Waerden, and others as being based on a solid construction, but this has been disputed, as other interpretations are possible. The quadrature of the circle does not have a solid construction. A regular n-gon has a solid construction if and only if n = 2^a 3^b m, where a and b are some non-negative integers and m is a product of zero or more distinct Pierpont primes (primes of the form 2^r 3^s + 1). Therefore, a regular n-gon admits a solid, but not planar, construction if and only if n is in the sequence 7, 9, 13, 14, 18, 19, 21, 26, 27, 28, 35, 36, 37, 38, 39, 42, 45, 52, 54, 56, 57, 63, 65, 70, 72, 73, 74, 76, 78, 81, 84, 90, 91, 95, 97... The set of n for which a regular n-gon has no solid construction is the sequence 11, 22, 23, 25, 29, 31, 33, 41, 43, 44, 46, 47, 49, 50, 53, 55, 58, 59, 61, 62, 66, 67, 69, 71, 75, 77, 79, 82, 83, 86, 87, 88, 89, 92, 93, 94, 98, 99, 100... Like the question with Fermat primes, it is an open question whether there are infinitely many Pierpont primes. Angle trisection What if, together with the straightedge and compass, we had a tool that could (only) trisect an arbitrary angle? Such constructions are solid constructions, but there exist numbers with solid constructions that cannot be constructed using such a tool. For example, we cannot double the cube with such a tool. On the other hand, every regular n-gon that has a solid construction can be constructed using such a tool. Origami The mathematical theory of origami is more powerful than straightedge-and-compass construction. Folds satisfying the Huzita–Hatori axioms can construct exactly the same set of points as the extended constructions using a compass and conic drawing tool. Therefore, origami can also be used to solve cubic equations (and hence quartic equations), and thus solve two of the classical problems. Markable rulers Archimedes, Nicomedes and Apollonius gave constructions involving the use of a markable ruler. This would permit them, for example, to take a line segment, two lines (or circles), and a point; and then draw a line which passes through the given point and intersects the two given lines, such that the distance between the points of intersection equals the given segment. This the Greeks called neusis ("inclination", "tendency" or "verging"), because the new line tends to the point. In this expanded scheme, we can trisect an arbitrary angle (see Archimedes' trisection) or extract an arbitrary cube root (due to Nicomedes). Hence, any distance whose ratio to an existing distance is the solution of a cubic or a quartic equation is constructible. Using a markable ruler, regular polygons with solid constructions, like the heptagon, are constructible; and John H. Conway and Richard K. Guy give constructions for several of them.
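The two number-theoretic criteria quoted above, the Gauss–Wantzel condition for planar constructions and the Pierpont-prime condition n = 2^a 3^b m for solid constructions, lend themselves to a mechanical check. The Python sketch below is an added illustration, not part of the article; the function names are mine, and the planar test deliberately uses only the five known Fermat primes, mirroring the remark that only 31 constructible regular n-gons with an odd number of sides are known:

def is_prime(n):
    """Trial-division primality test (adequate for small n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def strip_factors(n, primes):
    """Divide out the given prime factors as often as possible."""
    for p in primes:
        while n % p == 0:
            n //= p
    return n

def is_planar_ngon(n):
    """Gauss-Wantzel: a regular n-gon (n >= 3) is constructible with straightedge
    and compass iff n is a power of 2 times a product of distinct Fermat primes."""
    if n < 3:
        return False
    m = strip_factors(n, [2])
    for p in (3, 5, 17, 257, 65537):        # the five known Fermat primes
        if m % p == 0:
            m //= p
            if m % p == 0:                  # a repeated Fermat prime is not allowed
                return False
    return m == 1

def is_pierpont_prime(p):
    """A prime of the form 2^r * 3^s + 1."""
    return is_prime(p) and strip_factors(p - 1, [2, 3]) == 1

def has_solid_construction(n):
    """A regular n-gon has a solid construction iff n = 2^a 3^b m with m a
    product of distinct Pierpont primes greater than 3."""
    if n < 3:
        return False
    m = strip_factors(n, [2, 3])
    p = 5
    while m > 1:
        if m % p == 0:
            if not is_pierpont_prime(p):
                return False
            m //= p
            if m % p == 0:                  # a repeated prime factor is not allowed
                return False
        p += 2
    return True

print([n for n in range(3, 21) if is_planar_ngon(n)])
# [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20]
print([n for n in range(3, 30) if has_solid_construction(n) and not is_planar_ngon(n)])
# [7, 9, 13, 14, 18, 19, 21, 26, 27, 28]

Run as written, the first list reproduces the opening of the constructible-polygon sequence given earlier, and the second reproduces the opening of the solid-but-not-planar sequence above.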
The neusis construction is more powerful than a conic drawing tool, as one can construct complex numbers that do not have solid constructions. In fact, using this tool one can solve some quintics that are not solvable using radicals. It is known that one cannot solve an irreducible polynomial of prime degree greater than or equal to 7 using the neusis construction, so it is not possible to construct a regular 23-gon or 29-gon using this tool. Benjamin and Snyder proved that it is possible to construct the regular 11-gon, but did not give a construction. It is still open as to whether a regular 25-gon or 31-gon is constructible using this tool. Trisect a straight segment Given a straight line segment AB, it can be divided into three equal segments, or into as many equal parts as required, by use of the intercept theorem. Computation of binary digits In 1998 Simon Plouffe gave a ruler-and-compass algorithm that can be used to compute binary digits of certain numbers. The algorithm involves the repeated doubling of an angle and becomes physically impractical after about 20 binary digits. See also Carlyle circle Geometric cryptography Geometrography List of interactive geometry software, most of which show straightedge-and-compass constructions Mathematics of paper folding Underwood Dudley, a mathematician who has made a sideline of collecting false straightedge-and-compass proofs. References External links Regular polygon constructions by Dr. Math at The Math Forum @ Drexel Construction with the Compass Only at cut-the-knot Angle Trisection by Hippocrates at cut-the-knot
Straightedge and compass construction
[ "Mathematics" ]
5,509
[ "Straightedge and compass constructions", "Planes (geometry)", "Euclidean plane geometry" ]
61,255
https://en.wikipedia.org/wiki/Bacterial%20artificial%20chromosome
A bacterial artificial chromosome (BAC) is a DNA construct, based on a functional fertility plasmid (or F-plasmid), used for transforming and cloning in bacteria, usually E. coli. F-plasmids play a crucial role because they contain partition genes that promote the even distribution of plasmids after bacterial cell division. The bacterial artificial chromosome's usual insert size is 150–350 kbp. A similar cloning vector called a PAC has also been produced from the DNA of P1 bacteriophage. BACs were often used to sequence the genomes of organisms in genome projects, for example the Human Genome Project, though they have been replaced by more modern technologies. In BAC sequencing, a short piece of the organism's DNA is amplified as an insert in BACs, and then sequenced. Finally, the sequenced parts are rearranged in silico, resulting in the genomic sequence of the organism. BACs were replaced with faster and less laborious sequencing methods like whole genome shotgun sequencing and, more recently, next-generation sequencing. Common gene components repE for plasmid replication and regulation of copy number. parA and parB for partitioning F plasmid DNA to daughter cells during division and ensuring stable maintenance of the BAC. A selectable marker for antibiotic resistance; some BACs also have lacZ at the cloning site for blue/white selection. T7 & Sp6 phage promoters for transcription of inserted genes. Disease modeling Inherited BACs are now being utilized to a greater extent in modeling genetic disease, often alongside transgenic mice. BACs have been useful in this field as complex genes may have several regulatory sequences upstream of the encoding sequence, including various promoter sequences that will govern a gene's expression level. BACs have been used with some degree of success with mice when studying neurological diseases such as Alzheimer's disease, or in the case of aneuploidy associated with Down syndrome. There have also been instances when they have been used to study specific oncogenes associated with cancers. They are transferred over to these genetic disease models by electroporation/transformation, transfection with a suitable virus or microinjection. BACs can also be utilized to detect genes or large sequences of interest and then used to map them onto the human chromosome using BAC arrays. BACs are preferred for these kinds of genetic studies because they accommodate much larger sequences without the risk of rearrangement, and are therefore more stable than other types of cloning vectors.  Infectious The genomes of several large DNA viruses and RNA viruses have been cloned as BACs. These constructs are referred to as "infectious clones", as transfection of the BAC construct into host cells is sufficient to initiate viral infection. The infectious property of these BACs has made the study of many viruses such as the herpesviruses, poxviruses and coronaviruses more accessible. Molecular studies of these viruses can now be achieved using genetic approaches to mutate the BAC while it resides in bacteria. Such genetic approaches rely on either linear or circular targeting vectors to carry out homologous recombination.
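The phrase "rearranged in silico" can be illustrated with a deliberately simplified greedy overlap-merge sketch (an added toy example in Python, not the method used by real genome projects: actual assemblers must handle sequencing errors, repeats, unknown read orientation and uneven coverage, all of which this toy ignores):

def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that equals a prefix of b."""
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def greedy_assemble(frags):
    """Repeatedly merge the pair of fragments with the largest overlap."""
    frags = list(frags)
    while len(frags) > 1:
        best = (0, 0, 1)
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j:
                    k = overlap(a, b)
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        if k == 0:                      # nothing overlaps any more
            break
        merged = frags[i] + frags[j][k:]
        frags = [f for idx, f in enumerate(frags) if idx not in (i, j)] + [merged]
    return frags

# Toy reads covering the sequence "ATGGCCATTGTAATG"
print(greedy_assemble(["ATGGCCATT", "CCATTGTAA", "TGTAATG"]))

The three toy "reads" overlap pairwise, and the greedy merge recovers the original 15-base sequence; BAC-by-BAC sequencing applies the same idea at a much larger scale, assembling the reads within each BAC insert and then ordering the overlapping inserts along the chromosome.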
See also Cosmid End-sequence profiling Fosmid Human artificial chromosome Secondary chromosome Yeast artificial chromosome References External links The Big Bad BAC: Bacterial Artificial Chromosomes — a review from the Science Creative Quarterly Empire Genomics (company that sells BAC clones from genomic libraries) Amplicon Express (company that makes custom BAC libraries) Genomics techniques Molecular biology techniques
Bacterial artificial chromosome
[ "Chemistry", "Biology" ]
734
[ "Genetics techniques", "Genomics techniques", "Molecular biology techniques", "Molecular biology" ]
61,260
https://en.wikipedia.org/wiki/Filling%20station
A filling station (also known as a gas station or petrol station) is a facility that sells fuel and engine lubricants for motor vehicles. The most common fuels sold are gasoline (or petrol) and diesel fuel. Fuel dispensers are used to pump gasoline, diesel, compressed natural gas, compressed hydrogen, hydrogen compressed natural gas, liquefied petroleum gas, liquid hydrogen, kerosene, alcohol fuels (like methanol, ethanol, butanol, and propanol), biofuels (like straight vegetable oil and biodiesel), or other types of fuel into the tanks within vehicles and calculate the financial cost of the fuel transferred to the vehicle. Besides fuel dispensers, another significant device found in filling stations is an air compressor, which can refuel certain compressed-air vehicles, although it is generally just used to inflate car tires. Many filling stations provide convenience stores, which may sell convenience food, beverages, tobacco products, lottery tickets, newspapers, magazines, and, in some cases, a small selection of grocery items, such as milk or eggs. Some also sell propane or butane and have added shops to their primary business. Conversely, some chain stores, such as supermarkets, discount stores, warehouse clubs, or traditional convenience stores, have provided fuel pumps on the premises. Terminology In North America the fuel is known as "gasoline" or "gas" for short, and the terms "gas station" and "service station" are used in the United States, Canada, and the Caribbean. In some regions of Canada, the term "gas bar" (or "gasbar") is used. In the rest of the English-speaking world the fuel is known as "petrol". As a result, the term "petrol station" or "petrol pump" is used in the United Kingdom. In Ireland, New Zealand and South Africa "garage" and "forecourt" are still commonly used. Similarly, in Australia, New Zealand, the United Kingdom, and Ireland, the term "service station" describes any petrol station; Australians and New Zealanders also call it a "servo". In India, Pakistan and Bangladesh, it is called a "petrol pump" or a "petrol bunk". In Japanese, the abbreviation SS (for service station) is also in common use. History The first known filling station was the city pharmacy in Wiesloch, Germany, where Bertha Benz refilled the tank of the first automobile on its maiden trip from Mannheim to Pforzheim back in 1888. Shortly thereafter other pharmacies sold gasoline as a side business. Since 2008, the Bertha Benz Memorial Route has commemorated this event. Brazil The first "posto de gasolina" of South America was opened in Santos, São Paulo, Brazil, in 1920. It was located on Ana Costa Avenue, in front of the beach, on a corner next to the Hotel Atlântico, which occupies the site today. It was owned by Esso and brought by Antonio Duarte Moreira, a taxi entrepreneur. Russia In Russia, the first filling stations appeared in 1911, when the Imperial Automobile Society signed an agreement with the partnership "Br. Nobel". By 1914 about 440 stations functioned in major cities across the country. In the mid-1960s in Moscow there were about 250 stations. A significant boost in retail network development occurred with the mass launch of the car "Zhiguli" at the Volga Automobile Plant, which was built in Tolyatti in 1970. Gasoline for other than non-private cars was sold for ration cards only. This type of payment system stopped in the midst of perestroika in the early 1990s.
Since the density of filling stations in Russia is insufficient and lags behind the leading countries of the world, there is a need to build new stations in cities and along roads of different classes. United States The increase in automobile ownership after Henry Ford started to sell automobiles that the middle class could afford resulted in an increased demand for filling stations. The world's first purpose-built gas station was constructed in St. Louis, Missouri, in 1905 at 420 South Theresa Avenue. The second station was constructed in 1907 by Standard Oil of California (now Chevron) in Seattle, Washington, at what is now Pier 32. Reighard's Gas Station in Altoona, Pennsylvania claims that it dates from 1909 and is the oldest existing filling station in the United States. Early on, they were known to motorists as "filling stations" and often washed vehicle windows for free. The first "drive-in" filling station, Gulf Refining Company, opened to the motoring public in Pittsburgh on December 1, 1913, at Baum Boulevard and St Clair's Street. Prior to this, automobile drivers pulled into almost any general or hardware store, or even blacksmith shops in order to fill up their tanks. On its first day, the station sold gasoline at 27 cents per gallon (7 cents per litre). This was also the first architect-designed station and the first to distribute free road maps. The first alternative fuel station was opened in San Diego, California, by Pearson Fuels in 2003. Maryland officials said that on September 26, 2019, RS Automotive in Takoma Park, Maryland became the first filling station in the country to convert to an EV charging station. Design and function The majority of filling stations are built in a similar manner, with most of the fueling installation underground, pump machines in the forecourt and a point of service inside a building. Single or multiple fuel tanks are usually deployed underground. Local regulations and environmental concerns may require a different method, with some stations storing their fuel in container tanks, entrenched surface tanks or unprotected fuel tanks deployed on the surface. Fuel is usually offloaded from a tanker truck into each tank by gravity through a separate capped opening located on the station's perimeter. Fuel from the tanks travels to the dispenser pumps through underground pipes. For every fuel tank, direct access must be available at all times. Most tanks can be accessed through a service canal directly from the forecourt. Older stations tend to use a separate pipe for every kind of available fuel and for every dispenser. Newer stations may employ a single pipe for every dispenser. This pipe houses a number of smaller pipes for the individual fuel types. Fuel tanks, dispensers and nozzles used to fill car tanks employ vapor recovery systems, which prevent releases of vapor into the atmosphere with a system of pipes. The exhausts are placed as high as possible. A vapor recovery system may be employed at the exhaust pipe. This system collects the vapors, liquefies them and releases them back into the lowest grade fuel tank available. The forecourt is the part of a filling station where vehicles are refueled. Gasoline pumps are placed on concrete plinths, as a precautionary measure against collision by motor vehicles. Additional elements may be employed, including metal barriers. The area around the gasoline pumps must have a drainage system.
Since fuel sometimes spills onto the pavement, as little of it as possible should remain. Any liquids present on the forecourt will flow into a channel drain before they enter a petrol interceptor, which is designed to capture any hydrocarbon pollutants and filter these from rainwater which may then proceed to a sanitary sewer, stormwater drain, or to ground. If a filling station allows customers to pay at the dispenser, the data from the dispenser may be transmitted via RS-232, RS-485 or Ethernet to the point of sale, usually inside the filling station's building, and fed into the station's cash register operating system. The cash register system gives a limited control over the gasoline pump, and is usually limited to allowing the clerks to turn the pumps on and off. A separate system is used to monitor the fuel tank's status and quantities of fuel. With sensors directly in the fuel tank, the data is fed to a terminal in the back room, where it can be downloaded or printed out. Sometimes this method is bypassed, with the fuel tank data transmitted directly to an external database. Underground filling stations The underground modular filling station is a construction model for filling stations that was developed and patented by U-Cont Oy Ltd in Finland in 1993. Afterwards the same system was used in Florida, US. Above-ground modular stations were built in the 1980s in Eastern Europe and especially in the Soviet Union, but they were not built in other parts of Europe due to the stations' lack of safety in case of fire. The construction model for the underground modular filling station makes installation faster, design easier and manufacturing less expensive. As proof of the model's installation speed, an unofficial world record for filling station installation was set by U-Cont Oy Ltd when a modular filling station was built in Helsinki, Finland in less than three days, including groundwork. The safety of modular filling stations has been tested in a filling station simulator, in Kuopio, Finland. These tests have included, for instance, burning cars and explosions in the station simulator. Negative impacts Human health Gasoline contains a mixture of BTEX hydrocarbons (benzene, toluene, ethylbenzene, xylenes). Prolonged exposure to toluene can cause permanent damage to the central nervous system, and chlorinated solvents can cause liver and kidney problems. Benzene in particular causes leukemia and is associated with non-Hodgkin lymphoma and multiple myeloma. People who work in filling stations, live near them, or attend school close to them are exposed to fumes and are at increased lifetime risk of cancer, with risk increased if there are multiple stations nearby. There is some evidence that living near a filling station is a risk factor for childhood leukemia. In addition to long-term exposure, there are bursts of short-term exposures to benzene when tanker trucks deliver fuel. High levels of benzene have been detected near stations across urban, suburban, and rural environments, though the causes (such as road traffic or congestion) can vary by location. Gas station attendants have suffered adverse health consequences depending on the type of fuel used, exposure to vehicle exhaust, and types of personal protective equipment (PPE) offered. Studies have noted higher levels of chromosomal deletions and higher rates of miscarriage, and workers have reported headaches, fatigue, throat irritation and depression.
Exposure to exhaust and fumes has been associated with eye irritation, nausea, dizziness, and cough. Environment Gasoline can leak into the surrounding soil and water, posing health risks. Areas formerly occupied by stations are often contaminated, resulting in brownfields and urban blight. Underground storage tanks (USTs) were typically made of steel and were common in the United States, but were prone to corrosion. They received national attention in 1983 after an episode of 60 Minutes documented significant drinking water contamination from a Mobil station in Canob Park in Richmond, Rhode Island. This led to regulations banning these types of tanks in 1985. However, tanks that ceased operation before 1986 are unlikely to have been recorded, and many underground tanks are thus unknowingly hidden beneath redeveloped land, contributing to soil, groundwater, and indoor air pollution. Because of the relatively small size of former stations (compared to larger brownfields), the cost-per-acre to rehabilitate the land is higher; the total cost in the United States is not known but is in the billions of dollars. Individual cleanups may be complex, with some in Canada taking decades and costing millions of dollars both for the cleanup efforts and in legal fees to determine whether individuals, governments, or corporations are liable for costs. Economic costs The cost of potential cleanup of a former filling station can lower property values, discourage development of land, and depress neighbouring property values and potential tax revenue. When areas are known to be contaminated by leaking underground storage tanks, the sale value of the land and neighbouring area drops. An analysis of residential properties in Cuyahoga County, Ohio estimated the loss at about 17% for properties within one block of a registered leaking tank. Active filling stations have similar negative effects on property values, with an analysis in Xuancheng, China finding a loss of about 16% for the closest properties and about 9% for those somewhat farther away. Marketing North America In the United States and Canada, there are generally two marketing types of filling stations: premium brands and discount brands. Premium brands Filling stations with premium brands sell well-recognized and often international brands of fuel, including Exxon/Mobil and its Esso brand, Phillips 66/Conoco/76, Chevron, Mobil, Shell, Husky Energy, Sunoco (US), BP, Valero and Texaco. Non-international premium brands include Petrobras, Petro-Canada (owned by Suncor Energy Canada), QuikTrip, Hess, Sinclair, and Pemex. Premium-brand stations accept credit cards, often issue their own company cards (fuel cards or fleet cards) and may charge higher prices. In some cases, fuel cards for customers with a lower fuel consumption are ordered not directly from an oil company, but from an intermediary. Many premium brands have fully automated pay-at-the-pump facilities. Premium stations tend to be highly visible from highway and freeway exits, utilizing tall signs to display their brand logos. Discount brands Discount brands are often smaller, regional chains or independent stations, offering lower fuel prices. Most purchase wholesale commodity gasoline from independent suppliers or the major petroleum companies.
Lower-priced stations are also found at some supermarkets (Albertsons, Kroger, Big Y, Ingles, Lowes Foods, Giant, Weis Markets, Safeway, Hy-Vee, Vons, Meijer, Loblaws/Real Canadian Superstore, and Giant Eagle), convenience stores (7-Eleven, Circle K, Cumberland Farms, QuickChek, Road Ranger, Sheetz, Speedway and Wawa), discount stores (Walmart, Canadian Tire) and warehouse clubs (Costco, Sam's Club, and BJ's Wholesale Club). At some stations (such as Vons, Costco gas stations, BJ's Wholesale Club, or Sam's Club), consumers are required to hold a special membership card to be eligible for the discounted price, or pay only with the chain's cash card, debit card or a credit card from an issuer exclusive to that chain. In some areas, such as New Jersey, this practice is illegal, and stations are required to sell to all at the same price. Some convenience stores, such as 7-Eleven and Circle K, have co-branded their stations with one of the premium brands. After the Gulf Oil company was sold to Chevron, northeastern retail units were sold off as a chain, with Cumberland Farms controlling the remaining Gulf Oil outlets in the United States. State-controlled stations Some countries have only one brand of filling station. In Malaysia, Shell is the dominant player by number of stations, with government-owned Petronas coming in second. In Indonesia, the dominant player by number of stations is the government-owned Pertamina, although other companies such as TotalEnergies and Shell are increasingly found in big cities such as the capital Jakarta or Surabaya. In Taiwan, the government-owned CPC Corporation is the dominant player by number of stations, with the privately owned and operated Formosa Petrochemical Corporation in second place. Global and local branding Some companies, such as Shell, use their brand worldwide; however, Chevron uses its inherited brand Caltex in Asia Pacific, Australia and Africa, and its Texaco brand in Europe and Latin America. ExxonMobil uses its Exxon and Mobil brands but is still known as Esso (the forerunner company name, Standard Oil or S.O.) in many places, most notably in Canada and Singapore. In Brazil, the main operators are Vibra Energia and Ipiranga, but Esso and Shell (Raízen) are also present. In Mexico, the historical monopoly filling station operator, and still the largest, is Pemex, but ever since Mexico's energy laws were gradually liberalized starting from 2013, foreign brands such as Shell, BP, Mobil and Chevron, as well as the country's largest convenience store chain Oxxo, have also started operating filling stations. In the United Kingdom, the three largest are BP, Esso and Shell; the "Big Four" supermarket chains, Morrisons, Sainsbury's, Asda and Tesco, also operate filling stations, as well as some smaller supermarket chains such as The Co-operative Group and Waitrose. In Poland, the three largest operators are the partially state-owned PKN Orlen (including acquisitions Grupa Lotos and PGNiG), followed by Shell and BP. Smaller operators include Auchan, Circle K, MOL Group and Żabka. In Australia, the major operators are Ampol, BP, Chevron Australia (Caltex), EG Australia, ExxonMobil Australia (Mobil), Puma Energy, United Petroleum and Viva Energy (mostly under the dual Shell-Coles Express branding). Smaller operators include Costco, Liberty Oil, Seven & i Holdings (operates servos under the 7-Eleven convenience store branding) and Shell Australia.
In India, the three major operators are the state-owned Hindustan Petroleum, Bharat Petroleum and Indian Oil Corporation, which together control approximately 87% of the market. Foreign brands such as BP (joint venture with Reliance Industries, branded as Jio-bp) and Shell are also present. In Japan, the four major operators are: Cosmo Oil, Idemitsu (under the brand names apollostation and Idemitsu), ENEOS Corporation (under the brand names ENEOS, Express and General) and San-Ai Oil (under the brand name Kygnus). Smaller operators include: Japan Agricultural Cooperatives (under its own brand name, except in Hokkaido where the brand is run by a separate operator) and Mitsubishi Group (operates self-service stations under the Lawson convenience store branding). Previously, foreign filling station brands were also present in Japan: mainly Shell (operated by Idemitsu since its acquisition of Showa Shell Sekiyu in 2018–19, all rebranded to apollostation by 2023), Esso and Mobil (last operated by ENEOS Corporation under license from ExxonMobil, all rebranded to ENEOS in 2019). Payment methods Australia and New Zealand Most service stations allow the customer to pump the fuel before paying. In recent years, some service stations have required customers to purchase their fuel first. In some small towns, the customer may hand the cash to the attendant on the forecourt if they are paying for a set amount of fuel and have no change; but usually customers will enter the service centre to pay a cashier. Some supermarkets have their own forecourts which are unmanned and payment is pay-at-pump only. Customers at the supermarket will receive a discount voucher which offers discounted fuel at their forecourt. The amount of discount varies depending on the amount spent on groceries at the supermarket, but normally starts at 4 cents a litre. In New Zealand, BP has an app for smartphones that detects a user's location, then allows one to select the type of fuel, which pump, and how much to spend. The amount is then deducted from the user's account. Canada In British Columbia and Alberta, it is a legal requirement that customers either pre-pay for the fuel or pay at the pump. The law is called "Grant's Law" and is intended to prevent "gas-and-dash" crimes, where a customer refuels and then drives away without paying for it. In other provinces, payment after filling is permitted and widely available, though some stations may require either a pre-payment or a payment at the pump during night hours. Ireland In the Republic of Ireland, most stations allow customers to pump fuel before paying. Some stations have pay-at-the-pump facilities. United Kingdom A large majority of stations allow customers to pay with a chip and pin payment card or pay in the shop. Many have a pay at the pump system, where customers can enter their PIN prior to refueling. United States Pre-payment is the norm in the US and customers may typically pay either at the pump or inside the gas station. Modern stations have pay-at-the-pump functions: in most cases credit, debit, ATM cards, fuel cards and fleet cards are accepted. Occasionally a station will have a pay-at-the-pump-only period per day, when attendants are not present, often at night, and some stations are pay-at-the-pump only 24 hours a day. Types of service Filling stations typically offer one of three types of service to their customers: full service, minimum service or self-service.
Full service An attendant operates the pumps, often wipes the windshield, and sometimes checks the vehicle's oil level and tire pressure, then collects payment and perhaps a small tip. Minimum service An attendant operates the pumps. This is often required due to legislation that prohibits customers from operating the pumps. Self service The customer performs all required service. Signs informing the customer of filling procedures and cautions are displayed on each pump. Customers can still enter a store or go to a booth to give payment to a person. Unstaffed Using a cardlock (or pay-at-the-pump) system, these are completely unstaffed. Brazil In Brazil, self-service fuel filling is illegal, due to a federal law enacted in 2000. The law was introduced by Federal Deputy Aldo Rebelo, who claims it saved 300,000 fuel attendant jobs across the country. Japan Before 1998, filling stations in Japan were entirely full-service stations. Self-service stations were legalized in Japan in 1998 following the abolition of the Special Petroleum Law, which led to the deregulation of the petroleum industry in Japan. Under current safety regulations, while motorists are able to self-dispense fuel at self-service stations, at least one fuel attendant must be on hand to keep watch over potential safety violations and to render assistance to motorists whenever necessary. South Korea Filling stations in South Korea offer a variety of services, such as providing bottled water or tissues, and cleaning free of charge. But most have switched to self-service. Some large full-service stations have many services, such as tire inflation, automatic car washing, and self-cleaning. Some of them are free to gas customers who spend more than a certain amount. North America In the past, filling stations in the United States offered a choice between full service and self service. Before 1970, full service was the norm, and self-service was rare. Today, few stations advertise or provide full service. Full service stations are more common in wealthy and upscale areas. The cost of full service is usually assessed as a fixed amount per US gallon. The first self-service station in the United States was in Los Angeles, opened in 1947 by Frank Urich. In Canada, the first self-service station opened in Winnipeg, Manitoba, in 1949. It was operated by the independent company Henderson Thriftway Petroleum, owned by Bill Henderson. In New Jersey, filling stations offer only full service (and mini service); attendants there are required to pump gasoline for customers. Customers, in fact, are prohibited by law from pumping their own gasoline. The only exception to this within New Jersey is at the filling station next to Joint Base McGuire-Dix-Lakehurst in Wrightstown. New Jersey prohibited self-service in 1949, with the passage of the "Retail Gasoline Dispensing Safety Act", after lobbying by service station owners. That law states that "Because of the fire hazards directly associated with dispensing fuel, it is in the public interest that gasoline station operators have the control needed over that activity to ensure compliance with appropriate safety procedures, including turning off vehicle engines and refraining from smoking while fuel is dispensed." Proponents of the prohibition cite safety and jobs as reasons to keep the ban.
Of note, the ban does not apply to the pumping of diesel fuel at filling stations (though individual filling stations may prohibit this); nor does it apply to the pumping of gasoline into boats or aircraft. Oregon prohibited self-service in a 1951 statute that listed 17 different justifications, including flammability, the risk of crime from customers leaving their vehicles, toxic fumes, and the jobs created by requiring mini service. In 1982 Oregon voters rejected a ballot measure sponsored by the service station owners, which would have legalized self-service. Oregon legislators passed a bill that was signed into law by the Governor in May 2017 to allow self-service for counties with a total population of 40,000 or less beginning in January 2018. Governor Tina Kotek signed a law allowing it in 2023, but stations are still required to provide full service for customers who want it. The constitutionality of the self-service bans has been disputed. The Oregon statute was brought into court in 1989 by ARCO, and the New Jersey statute was challenged in court in 1950 by a small independent service station, Rein Motors. Both challenges failed. Former New Jersey governor Jon Corzine sought to lift the ban on self-service for New Jersey. He asserted that it would lower gas prices, but some New Jerseyans argued that it could cause drawbacks, especially unemployment. The town of Huntington, New York has prohibited self-service stations since the early 1970s, at first to prevent theft and later due to safety concerns. The towns of Arlington, Massachusetts and Weymouth, Massachusetts have also prohibited self-service stations since 1975 and 1977, respectively. Contrary to popular belief, lit cigarettes are not capable of igniting gasoline. However, several states outlaw smoking at gas stations as the fire from the ignition source used to light the cigarette can ignite gasoline vapors. Most gas stations and many municipalities will also explicitly ban any smoking activity within certain distances of gasoline pumps. Other goods and services commonly available Many filling stations provide toilet facilities for customer use, as well as squeegees and paper towels for customers to clean their vehicle's windows. Discount stations may not provide these amenities in some countries. Stations typically have an air compressor, often with a built-in or handheld tire-pressure gauge, to inflate tires and a hose to add water to vehicle radiators. Some air compressor machines are free of charge, while others charge a small fee to use (typically 50 cents to a dollar in North America). In some US states, such as California, state law requires that paying customers be provided with free air compressor service and radiator water. In some regions of America and Australia, many filling stations have a mechanic on duty, but this practice has died out in other parts of the world. Many filling stations have integrated convenience stores which sell food, beverages, and often cigarettes, lottery tickets, motor oil, and auto parts. Prices for these items tend to be higher than they would be at a supermarket or discount store. Many stations, particularly in the United States, have a fast food outlet inside. These are usually "express" versions with limited seating and limited menus, though some may be regular-sized and have spacious seating. Larger restaurants are common at truck stops and toll road service plazas.
In some US states, beer, wine, and liquor are sold at filling stations, though this practice varies according to state law (see Alcohol laws of the United States by state). Nevada also allows the operation of slot and video poker machines without time restrictions. Vacuum cleaners, often coin-operated, are a common amenity to allow the cleaning of vehicle interiors, either by the customer or by an attendant. Some stations are equipped with car washes. Car washes are sometimes offered free of charge or at a discounted price with a certain amount of fuel purchased. Conversely, some car washes operate filling stations to supplement their businesses. From approximately 1920 to 1980, many service stations in the US provided customers with free road maps branded by their parent oil companies. This practice fell out of favor due to the 1970s energy crisis. Fuel prices Europe In European Union member states, gasoline prices are much higher than in North America due to higher fuel excise or taxation, although the base price is also higher than in the US. Occasionally, price rises trigger national protests. In the UK, a large-scale protest in August and September 2000, known as 'The Fuel Crisis', caused widespread havoc not only across the UK, but also in some other EU countries. The UK Government eventually backed down by indefinitely postponing a planned increase in fuel duty. This was partially reversed during December 2006 when then-Chancellor of the Exchequer Gordon Brown raised fuel duty by 1.25 pence per liter. Between 2007 and 2012, gasoline prices in the UK rose by nearly 40 pence per liter, from 97.3 pence per liter to 136.8 pence per liter. In much of Europe, including the UK, France and Germany, stations operated by large supermarket chains usually price fuel lower than stand-alone stations. In most of mainland Europe, sales tax is lower on diesel fuel than on gasoline, and diesel is accordingly the cheaper fuel; in the UK and Switzerland, diesel has no tax advantage and retails at a higher price by quantity than gasoline (offset by its higher energy yield). In 2014, according to Eurostat, the mean EU28 price was €1.38/L for euro-super 95 (gasoline) and €1.26/L for diesel. The least expensive gasoline was in Estonia at €1.10/L, and the most expensive in Italy at €1.57/L. The least expensive diesel was in Estonia at €1.14/L, and the most expensive in the UK at €1.54/L. The least expensive LPG was in Belgium at €0.50/L, and the most expensive in France at €0.83/L. North America Nearly all filling stations in North America advertise their prices on large signs outside the stations. Some locations have laws requiring such signage. In Canada and the United States, federal, state or provincial, and local sales taxes are usually included in the price, although tax details are often posted at the pump and some stations may provide details on sales receipts. Gasoline taxes are often ring-fenced (dedicated) to fund transportation projects such as the maintenance of existing roads and the construction of new ones. Individual filling stations in the United States have little if any control over gasoline prices. The wholesale price of gasoline is determined according to area by the oil companies which supply the gasoline, and their prices are largely determined by the world markets for oil. 
Individual stations are unlikely to sell gasoline at a loss, and the profit margin they make from gasoline sales, typically between 7 and 11 cents per US gallon (2–3 cents per liter), is limited by competitive pressures: a gas station which charges more than others will lose customers to them. Most stations try to compensate by selling higher-margin food products in their convenience stores. Even with oil market fluctuations, prices for gasoline in the United States are among the lowest in the industrialized world; this is principally due to lower taxes. While the sales price of gasoline in Europe is more than twice that in the United States, the price excluding taxes is nearly identical in the two areas. Some Canadians and Mexicans in communities close to the US border drive into the United States to purchase cheaper gasoline. Due to heavy fluctuations in price in the United States, some stations offer their customers the option to buy and store gasoline for future use, such as the service provided by First Fuel Bank. To save money, some consumers in Canada and the United States inform each other about low and high prices through gasoline price websites. Such websites allow users to share prices advertised at filling stations with each other by posting them to a central server. Consumers may then check the prices listed in their geographic area in order to select the station with the lowest price available at the time. Some television and radio stations also compile pricing information via viewer and listener reports of pricing or reporter observations and present it as a regular segment of their newscasts, usually before or after traffic reports. These price observations must usually be made by reading the pricing signs outside stations, as many companies do not give their prices by telephone due to competitive concerns. It is a criminal offense to have written or verbal arrangements with competitors, suppliers or customers for fixing prices or exchanging information on prices or costs (including discounts and rebates), limiting or restraining competition unduly, or engaging in misleading or deceptive practices. Filling stations are likewise barred from holding discussions with competitors regarding pricing policies and methods, terms of sale, costs, allocation of markets, or boycotts of petroleum products. Rest of the world In other energy-importing countries such as Japan, gasoline and petroleum product prices are higher than in the United States because of fuel transportation costs and taxes. On the other hand, some of the major oil-producing countries such as the Gulf states, Iran, Iraq, and Venezuela provide subsidized fuel at well below world market prices. This practice tends to encourage heavy consumption. Hong Kong has some of the highest pump prices in the world, but most customers are given discounts as card members. Singapore, like Hong Kong, also has similarly high pump prices, which are largely based on a pricing strategy called Mean of Platts Singapore (MOPS). Because Singapore does not have any oil reserves of its own, the city-state has instead built several offshore refineries that process oil imported mostly from Indonesian oil fields, since Indonesia does not have enough refining capacity and capability of its own. 
Because neighbouring Malaysia has cheaper pump prices than Singapore, cars registered in Singapore crossing into Malaysia have, since 1991, been legally required to have at least three-quarters of a tank of fuel, to prevent evasion of fuel duties. When filling up in Malaysia, Singaporean- and Thai-registered hybrid and petrol-powered vehicles are legally restricted to unsubsidised, premium-grade RON97-100 petrol, as RON95 petrol in Malaysia is partially subsidised by the Government of Malaysia for the benefit of lower-income Malaysian residents. In Western Australia a program called FuelWatch requires most filling stations to notify their "tomorrow prices" by 2pm each day; prices are changed at 6am each morning and must be held for 24 hours. Each afternoon, the prices for the next day are released to the public and the media, allowing consumers to decide when to fill up. Service stations "Service station" or "servo" is the terminology often used in Australia, along with "petrol station", to describe any facility where motorists can refuel their cars. In New Zealand a filling station is often referred to as a service station, petrol station or garage, even though it may not offer mechanical repairs or assistance with dispensing fuel. Levels of service available include full service, for which assistance in dispensing fuel is offered, as well as offers to check tire pressure or clean vehicle windscreens. That type of service is becoming uncommon in New Zealand, particularly in Auckland. Further south of Auckland, many filling stations still offer full service. There is also help service or assisted service, for which customers must request assistance before it is given, and self-service, for which no assistance is available. In the US, a filling station that also offers services such as oil changes and mechanical repairs to automobiles is called a service station. Until the 1970s the vast majority of filling stations were service stations. These stations typically offered free air for inflating tires, as compressed air was already on hand to operate the repair garage's pneumatic tools. While a few filling stations with a service station remain, many in the 1980s and 1990s were converted to convenience stores while still selling fuel, while others continued to offer services but discontinued offering fuel. This kind of business provided the name for the US comic strip Gasoline Alley, where a number of the characters worked. In the UK and Ireland, a "service station" refers to much larger facilities, usually attached to motorways (see rest area) or major truck routes, which provide food outlets, large parking areas, and often other services such as hotels, arcade games, and shops in addition to 24-hour fuel supplies and a higher standard of restrooms. Fuel is typically more expensive from these outlets due to their premium locations. UK or Irish service stations do not usually repair automobiles. Highway service centre On many toll roads and some interstate freeways, a similar arrangement of facilities directly connected to the roadway is called an oasis or service plaza. In many cases, these centers might have a food court or restaurants. In the United Kingdom and Ireland these are called motorway service areas. 
Often, the state government maintains public rest areas directly connected to freeways, but does not rent out space to private businesses, as this is specifically prohibited by law via the Interstate Highway Act of 1956, which created the national Interstate Highway System, except for sites on freeways built before January 1, 1960, and toll highways that are self-supporting but have Interstate designation, which are covered by a grandfather clause. As a result, such areas often provide only minimal services such as restrooms and vending machines. Private entrepreneurs develop additional facilities, such as truck stops or travel centers, restaurants, gas stations, and motels in clusters on private land adjacent to major interchanges. In the US, Pilot Flying J and TravelCenters of America are two of the most common full-service chains of truck stops. Because these facilities are not directly connected to the freeway, they usually have huge signs on poles high enough to be visible to motorists in time to exit from the freeway. Sometimes, the state also posts small official signs (normally blue) indicating what types of filling stations, restaurants, and hotels are available at an upcoming exit; businesses may add their logos to these signs for a fee. In Canada, the province of Ontario has stops along two of its 400-series highways, the 401 and the 400, traditionally referred to as "Service Centres", but recently renamed "ONroute" as part of a full rebuild of the sites. Owned by the provincial government, but leased to private operator Host Kilmer Service Centres, they contain food courts, convenience stores, washrooms, and co-located gas and diesel bars with attached convenience stores. Food providers include Tim Hortons (at all sites), A&W, Wendy's and Pizza Pizza. At most sites fuel is sold by Canadian Tire, with a few older Esso gas bars at earlier renovated locations. Octane In Australia, gasoline is unleaded and available in 91, 95, 98 and 100 octane (names differ from brand to brand). Fuel additives for use in cars designed for leaded fuel are available at most filling stations. In Canada, the most commonly found octane grades are 87 (regular), 89 (mid-grade) and 91 (premium), using the same "(R+M)/2" method used in the US (see below). In China, the most commonly found octane grades are RON 91 (regular), 93 (mid-grade) and 97 (premium). Almost all of the fuel has been unleaded since 2000. At some premium filling stations in large cities, such as those operated by PetroChina and Sinopec, RON 98 gasoline is sold for racing cars. In Europe, gasoline is unleaded and available in 95 RON (Eurosuper) and, in nearly all countries, 98 RON (Super Plus) octanes; in some countries 91 RON octane gasoline is offered as well. In addition, 100 RON is offered in some countries in continental Europe (Shell markets this as V-Power Racing). Some stations offer 98 RON with lead substitute (often called Lead-Replacement Petrol, or LRP). In New Zealand, gasoline is unleaded, and most commonly available in 91 RON ("Regular") and 95 RON ("Premium"). 98 RON is available at selected BP ("Ultimate") and Mobil ("Synergy 8000") service stations instead of the standard 95 RON. 96 RON was replaced by 95 RON and was subsequently withdrawn in 2006. Leaded fuel was abolished in 1996. In the UK the most common gasoline grade (and lowest octane generally available) is 'Premium' 95 RON unleaded. 'Super' is widely available at 97 RON (for example Shell V-Power, BP Ultimate). Leaded fuel is no longer available. 
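As a worked illustration of the "(R+M)/2" anti-knock index mentioned above for Canada and described for the United States below, a hypothetical fuel rated RON 95 and MON 87 (these figures are assumed purely for the example) would be posted at a North American pump as octane 91:

\[
\text{pump octane (AKI)} = \frac{\text{RON} + \text{MON}}{2} = \frac{95 + 87}{2} = 91
\]

This is why a European "95 RON" grade and a North American "91" grade can refer to broadly comparable fuel, since the MON value is typically several points below the RON value.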
In the United States all motor vehicle gasoline is unleaded and is available in several grades with different octane ratings; 87 (Regular), 89 (Mid-Grade), and 93 (Premium) are typical grades. At high altitudes in the Mountain States and the Black Hills of South Dakota, regular unleaded can be as low as 85 octane; this practice has become increasingly controversial, since it was instituted when most cars had carburetors instead of the fuel injection and electronic engine controls standard in recent decades. In the US, gasoline is described in terms of its "pump octane", which is the mean of the "RON" (Research Octane Number) and "MON" (Motor Octane Number). Labels on pumps in the US typically describe this as the "(R+M)/2 Method". Some nations describe fuels according to the traditional RON or MON ratings, so octane ratings cannot always be compared directly with the equivalent US rating by the "(R+M)/2" method. Differences in gasoline pumps In Europe, New Zealand and Australia, the customer selects one of several colour-coded nozzles depending on the type of fuel required. The nozzle for unleaded fuel is smaller than the one for fuel intended for engines designed to run on leaded fuel. The tank filler opening has a corresponding diameter; this prevents inadvertently using leaded fuel in an engine not designed for it, which can damage a catalytic converter. In most stations in Canada and the US, the pump has a single nozzle and the customer selects the desired octane grade by pushing a button. Some pumps require the customer to pick up the nozzle first, then lift a lever underneath it; others are designed so that lifting the nozzle automatically releases a switch. Some newer stations have separate nozzles for different types of fuel. Where diesel fuel is provided, it is usually dispensed from a separate nozzle even if the various grades of gasoline share the same nozzle. Motorists occasionally pump gasoline into a diesel car by accident. The converse is almost impossible because diesel pumps have a larger nozzle whose diameter does not fit the gasoline filler neck, and the nozzles are protected by a lock mechanism or a liftable flap. Diesel fuel in a gasoline engine, while creating large amounts of smoke, does not normally cause permanent damage if it is drained once the mistake is realized. However, even a liter of gasoline added to the tank of a modern diesel car can cause irreversible damage to the injection pump and other components through a lack of lubrication. In some cases, the car has to be scrapped because the cost of repairs exceeds its residual value. The issue is not clear-cut, as older diesels using completely mechanical injection can tolerate some gasoline, which has historically been used to "thin" diesel fuel in winter. Legislation In most countries, stations are subject to guidelines and regulations intended to minimize the potential for fires and increase safety. It is prohibited to use open flames on the forecourt of a filling station because of the risk of igniting gasoline vapor. In the United States, establishing fire codes and enforcing compliance with them is the responsibility of state governments. Most localities ban smoking, open flames and running engines. Since an increase in the occurrence of static-related fires, many stations have posted warnings about leaving the refueling point while fuel is being dispensed. Cars can build up static charge by driving on dry road surfaces. However, many tire compounds contain enough carbon black to provide an electrical ground which prevents charge build-up. 
Newer "high mileage" tires use more silica and can increase the buildup of static. A driver who does not discharge static by contacting a conductive part of the car will carry it to the insulated handle of the nozzle and the static potential will eventually be discharged when this purposely-grounded arrangement is put into contact with the metallic filler neck of the vehicle. Ordinarily, vapor concentrations in the area of this filling operation are below the lower explosive limit (LEL) of the product being dispensed, so the static discharge causes no problem. The problem with ungrounded gasoline cans results from a combination of vehicular static charge, the potential between the container and the vehicle, and the loose fit between the grounded nozzle and the gas can. This last condition causes a rich vapor concentration in the ullage (the unfilled volume) of the gas can, and a discharge from the can to the grounded hanging hardware (the nozzle, hose, swivels and break-a-ways) can thus occur at a most inopportune point. The Petroleum Equipment Institute has recorded incidents of static-related ignition at refueling sites since early 2000. Although urban legends persist that using a mobile phone while pumping gasoline can cause sparks or explosion, this has not been duplicated under any controlled condition. Nevertheless, mobile phone manufacturers and gas stations ask users to switch off their phones. One suggested origin of this myth is said to have been started by gas station companies because the cell phone signal would interfere with the fuel counter on some older model fuel pumps causing it to give a lower reading. In the MythBusters episode "Cell Phone Destruction", investigators concluded that explosions attributed to cell phones could be caused by static discharges from clothing instead and also observed that such incidents seem to involve women more often than men. The US National Fire Protection Association does most of the research and code writing to address the potential for explosions of gasoline vapor. The customer fueling area, up to above the surface, normally does not have explosive concentrations of vapors, but may from time to time. Above this height, where most fuel filler necks are located, there is no expectation of an explosive concentration of gasoline vapor in normal operating conditions. Electrical equipment in the fueling area may be specially certified for use around gasoline vapors. Worldwide numbers The UK has 8,385 filling stations , down from about 18,000 in 1992 and a peak of around 40,000 in the mid-1960s. The US had 114,474 stations in 2012, according to the US Census Bureau, down from 118,756 in 2007 and 121,446 in 2002. In Canada, the number is on the decline. As of December 2008, 12,684 were in operation, significantly down from about 20,000 stations recorded in 1989. In Japan, the number dropped from a peak of 60,421 in 1994 to 40,357 at the end of 2009. In Germany, the number dropped down to 14,300 in 2011. In China, according to different reports, the total number of gas/oil stations (at the end of 2018) is about 106,000. India—60,799 (as of November 2017) Russia—there were about 25,000 stations in the Russian Federation (2011) In Argentina, as of 2021, there are more than 5,000 stations. 
The largest filling station networks in Europe (2017) TotalEnergies—8,200 stations Shell—7,800 stations BP—7,000 stations Esso—6,100 stations Eni—5,500 stations Repsol—4,700 stations Q8—4,600 stations Avia—3,000 stations PKN Orlen—2,800 stations Circle K—2,700 stations See also Autogas (LPG) Automated fueling Biofuels Convenience store Ethanol Filling station attendant Gas pump Gasoline usage and pricing Gasoline Highway oasis Hydrogen station List of automotive fuel retailers LPG tank connections National Association of Convenience Stores Petroleum Propellant depot (a gas station in space) Road trip Explanatory notes References Further reading External links Fill'er Up—Documentary produced by Wisconsin Public Television 1888 in Germany Fuels
Filling station
[ "Chemistry" ]
9,960
[ "Fuels", "Chemical energy sources" ]
61,271
https://en.wikipedia.org/wiki/Auxiliary%20power%20unit
An auxiliary power unit (APU) is a device on a vehicle that provides energy for functions other than propulsion. They are commonly found on large aircraft and naval ships as well as some large land vehicles. Aircraft APUs generally produce 115 V AC power at 400 Hz (rather than the 50/60 Hz of mains supply) to run the electrical systems of the aircraft; others can produce 28 V DC. APUs can provide power through single- or three-phase systems. A jet fuel starter (JFS) is a similar device to an APU but directly linked to the main engine and started by an onboard compressed air bottle. Transport aircraft History During World War I, the British Coastal class blimps, one of several types of airship operated by the Royal Navy, carried an ABC auxiliary engine. These powered a generator for the craft's radio transmitter and, in an emergency, could power an auxiliary air blower. One of the first military fixed-wing aircraft to use an APU was the British Supermarine Nighthawk of World War I, an anti-Zeppelin night fighter. During World War II, a number of large American military aircraft were fitted with APUs. These were typically known as putt-putts, even in official training documents. The putt-putt on the B-29 Superfortress bomber was fitted in the unpressurised section at the rear of the aircraft. Various models of four-stroke flat-twin or V-twin engines were used. The engine drove a P2 DC generator, rated at 28.5 volts and 200 amps (several of the same P2 generators, driven by the main engines, were the B-29's DC power source in flight). The putt-putt provided power for starting the main engines and was used after take-off until the aircraft reached a specified altitude. The putt-putt was restarted when the B-29 was descending to land. Some models of the B-24 Liberator had a putt-putt fitted at the front of the aircraft, inside the nose-wheel compartment. Some models of the Douglas C-47 Skytrain transport aircraft carried a putt-putt under the cockpit floor. As mechanical "startup" APUs for jet engines The first German jet engines built during the Second World War used a mechanical APU starting system designed by the German engineer Norbert Riedel. It consisted of a two-stroke flat engine, which for the Junkers Jumo 004 design was hidden in the engine nose cone, essentially functioning as a pioneering example of an auxiliary power unit for starting a jet engine. A hole in the extreme nose of the cone contained a manual pull-handle which started the piston engine, which in turn rotated the compressor. Two spark plug access ports existed in the Jumo 004's nose cone to service the Riedel unit's cylinders in situ, for maintenance purposes. Two small "premix" tanks for the Riedel's petrol/oil fuel were fitted in the annular intake. The engine was an extreme short-stroke design (bore/stroke: 70 mm/35 mm = 2:1) so that it could fit within the nose cone of jet engines like the Jumo 004. For speed reduction it had an integrated planetary gear. It was produced by Victoria in Nuremberg and served as a mechanical APU-style starter for all three German jet engine designs to have made it to at least the prototype stage before May 1945 – the Junkers Jumo 004, the BMW 003 (which uniquely appears to use an electric starter for the Riedel APU), and the prototypes (19 built) of the more advanced Heinkel HeS 011 engine, which mounted it just above the intake passage in the Heinkel-crafted sheetmetal of the engine nacelle nose. 
The Boeing 727 in 1963 was the first jetliner to feature a gas turbine APU, allowing it to operate at smaller airports, independent of ground facilities. The APU can be identified on many modern airliners by an exhaust pipe at the aircraft's tail. Sections A typical gas-turbine APU for commercial transport aircraft comprises three main sections: Power section The power section is the gas-generator portion of the engine and produces all the shaft power for the APU. In this section of the engine, air is compressed, mixed with fuel and ignited to create hot, expanding gases. This gas is highly energetic and is used to spin the turbine, which in turn powers other sections of the engine, such as auxiliary gearboxes, pumps, electrical generators, and in the case of a turbofan engine, the main fan. Load compressor section The load compressor is generally a shaft-mounted compressor that provides pneumatic power for the aircraft, though some APUs extract bleed air from the power section compressor. There are two actuated devices to help control the flow of air: the inlet guide vanes that regulate airflow to the load compressor and the surge control valve that maintains stable, surge-free operation of the turbomachine. Gearbox section The gearbox transfers power from the main shaft of the engine to an oil-cooled generator for electrical power. Within the gearbox, power is also transferred to engine accessories such as the fuel control unit, the lubrication module, and the cooling fan. There is also a starter motor connected through the gear train to perform the starting function of the APU. Some APU designs use a combination starter/generator for APU starting and electrical power generation to reduce complexity. On the Boeing 787, an aircraft which has greater reliance on its electrical systems, the APU delivers only electricity to the aircraft. The absence of a pneumatic system simplifies the design, but high demand for electricity requires heavier generators. Onboard solid oxide fuel cell (SOFC) APUs are being researched. Manufacturers The market for auxiliary power units is dominated by Honeywell, followed by Pratt & Whitney, Motor Sich and other manufacturers such as PBS Velká Bíteš, Safran Power Units, Aerosila and Klimov. Local manufacturers include Bet Shemesh Engines and Hanwha Aerospace. The 2018 market share varied according to the application platform: Large commercial aircraft: Honeywell 70–80%, Pratt & Whitney 20–30%, others 0–5% Regional aircraft: Pratt & Whitney 50–60%, Honeywell 40–50%, others 0–5% Business jets: Honeywell 90–100%, others 0–5% Helicopters: Pratt & Whitney 40–50%, Motor Sich 40–50%, Honeywell 5–10%, Safran Power Units 5–10%, others 0–5% On June 4, 2018, Boeing and Safran announced their 50–50 partnership to design, build and service APUs after regulatory and antitrust clearance in the second half of 2018. Boeing produced several hundred T50/T60 small turboshafts and their derivatives in the early 1960s. Safran produces helicopter and business jet APUs but has not produced large APUs since Labinal exited the APIC joint venture with Sundstrand in 1996. This could threaten the dominance of Honeywell and United Technologies. Honeywell has a 65% share of the mainliner APU market and is the sole supplier for the Airbus A350, the Boeing 777 and all single-aisles: the Boeing 737 MAX, Airbus A220 (formerly Bombardier CSeries), Comac C919, Irkut MC-21 and Airbus A320neo since Airbus eliminated the P&WC APS3200 option. 
P&WC claims the remaining 35% with the Airbus A380, Boeing 787 and Boeing 747-8. It should take at least a decade for the Boeing/Safran JV to reach $100 million in service revenue. The 2017 market for production was worth $800 million (88% civil and 12% military), while the MRO market was worth $2.4 billion, spread equally between civil and military. Spacecraft The Space Shuttle APUs provided hydraulic pressure. The Space Shuttle had three redundant APUs, powered by hydrazine fuel. They were only powered up for ascent, re-entry, and landing. During ascent, the APUs provided hydraulic power for gimballing of the Shuttle's three engines and control of their large valves, and for movement of the control surfaces. During landing, they moved the control surfaces, lowered the wheels, and powered the brakes and nose-wheel steering. Landing could be accomplished with only one APU working. In the early years of the Shuttle there were problems with APU reliability, with malfunctions on three of the first nine Shuttle missions. Armored vehicles APUs are fitted to some tanks to provide electrical power without the high fuel consumption and large infrared signature of the main engine. As early as World War II, the American M4 Sherman had a small, piston-engine powered APU for charging the tank's batteries, a feature the Soviet-produced T-34 tank did not have. Commercial vehicles A refrigerated or frozen food semi trailer or train car may be equipped with an independent APU and fuel tank to maintain low temperatures while in transit, without the need for an external transport-supplied power source. On some older diesel engined-equipment, a small gasoline engine (often called a "pony engine") was used instead of an electric motor to start the main engine. The exhaust path of the pony engine was typically arranged so as to warm the intake manifold of the diesel, to ease starting in colder weather. These were primarily used on large pieces of construction equipment. Fuel cells In recent years, truck and fuel cell manufacturers have teamed up to create, test and demonstrate a fuel cell APU that eliminates nearly all emissions and uses diesel fuel more efficiently. In 2008, a DOE sponsored partnership between Delphi Electronics and Peterbilt demonstrated that a fuel cell could provide power to the electronics and air conditioning of a Peterbilt Model 386 under simulated "idling" conditions for ten hours. Delphi has said the 5 kW system for Class 8 trucks will be released in 2012, at an $8000–9000 price tag that would be competitive with other "midrange" two-cylinder diesel APUs, should they be able to meet those deadlines and cost estimates. See also Air-start system Auxiliary hydraulic system Coffman engine starter Ram air turbine Uninterruptible power supply Notes References External links "Space Shuttle Orbiter APU" "Sound of an APU from inside a Boeing 737 cabin" The Riedel Starter Motor In: Messerschmitt Me 262B in Detail; The airframe, engines and canopy YouTube video of restored Junkers Jumo 004 jet engine, being started with "integral" Riedel APU, from September 2019 Starting systems Electrical generators Aircraft components
Auxiliary power unit
[ "Physics", "Technology" ]
2,230
[ "Physical systems", "Electrical generators", "Machines" ]
61,273
https://en.wikipedia.org/wiki/Supersonic%20speed
Supersonic speed is the speed of an object that exceeds the speed of sound (Mach 1). For objects traveling in dry air at a temperature of 20 °C (68 °F) at sea level, this speed is approximately 343 m/s (1,235 km/h; 767 mph). Speeds greater than five times the speed of sound (Mach 5) are often referred to as hypersonic. Flights during which only some parts of the air surrounding an object, such as the ends of rotor blades, reach supersonic speeds are called transonic. This occurs typically somewhere between Mach 0.8 and Mach 1.2. Sounds are traveling vibrations in the form of pressure waves in an elastic medium. Objects move at supersonic speed when they move faster than the speed at which sound propagates through the medium. In gases, sound travels longitudinally at different speeds, mostly depending on the molecular mass and temperature of the gas; pressure has little effect. Since air temperature and composition vary significantly with altitude, the speed of sound, and hence the Mach number of a steadily moving object, may change. In water at room temperature supersonic speed means any speed greater than 1,440 m/s (4,724 ft/s). In solids, sound waves can be polarized longitudinally or transversely and have higher velocities. Supersonic fracture is crack formation faster than the speed of sound in a brittle material. Early meaning The word supersonic comes from two Latin-derived roots: super (above) and sonus (sound), which together mean above sound, or faster than sound. At the beginning of the 20th century, the term "supersonic" was used as an adjective to describe sound whose frequency is above the range of normal human hearing. The modern term for this meaning is "ultrasonic", but the older sense sometimes still lives on, as in the word superheterodyne. Supersonic objects The tip of a bullwhip is generally seen as the first object designed to reach the speed of sound. This action results in its telltale "crack", which is actually just a sonic boom. The first human-made supersonic boom was likely caused by a piece of common cloth, leading to the whip's eventual development. It is the wave motion travelling through the bullwhip that makes it capable of achieving supersonic speeds. Most modern firearm bullets are supersonic, with rifle projectiles often travelling at speeds approaching and in some cases well exceeding Mach 3. Most spacecraft are supersonic at least during portions of their reentry, though the effects on the spacecraft are reduced by low air densities. During ascent, launch vehicles generally avoid going supersonic below 30 km (~98,400 feet) to reduce air drag. Note that the speed of sound decreases somewhat with altitude, due to the lower temperatures found there (typically up to 25 km). At even higher altitudes the temperature starts increasing, with a corresponding increase in the speed of sound. When an inflated balloon is burst, the torn pieces of latex contract at supersonic speed, which contributes to the sharp and loud popping noise. Supersonic land vehicles To date, only one land vehicle has officially travelled at supersonic speed, the ThrustSSC. The vehicle, driven by Andy Green, holds the world land speed record, having achieved an average speed of 763 mph (1,228 km/h) on its bi-directional run in the Black Rock Desert on 15 October 1997. The Bloodhound LSR project planned an attempt on the record in 2020 at Hakskeenpan in South Africa with a car propelled by a combination of a jet engine and a hybrid rocket. 
The aim was to break the existing record, then make further attempts during which the team hoped to reach speeds of up to 1,000 mph (about 1,600 km/h). The effort was originally run by Richard Noble, who was the leader of the ThrustSSC project; however, following funding issues in 2018, the team was bought by Ian Warhurst and renamed Bloodhound LSR. Later the project was indefinitely delayed due to the COVID-19 pandemic and the vehicle was put up for sale. Supersonic flight Most modern fighter aircraft are supersonic aircraft. No modern-day passenger aircraft are capable of supersonic speed, but there have been supersonic passenger aircraft, namely Concorde and the Tupolev Tu-144. Both of these passenger aircraft and some modern fighters are also capable of supercruise, a condition of sustained supersonic flight without the use of an afterburner. Due to its ability to supercruise for several hours and the relatively high frequency of flight over several decades, Concorde spent more time flying supersonically than all other aircraft combined by a considerable margin. Since Concorde's final retirement flight on November 26, 2003, there are no supersonic passenger aircraft left in service. Some large bombers, such as the Tupolev Tu-160 and Rockwell B-1 Lancer, are also supersonic-capable. The aerodynamics of supersonic aircraft is simpler than subsonic aerodynamics because the air at different points along the plane often cannot affect each other. Supersonic jets and rocket vehicles require several times greater thrust to push through the extra aerodynamic drag experienced within the transonic region (around Mach 0.85–1.2). At these speeds aerospace engineers can gently guide air around the fuselage of the aircraft without producing new shock waves, but any change in cross-sectional area farther down the vehicle leads to shock waves along the body. Designers use the supersonic area rule and the Whitcomb area rule to minimize sudden changes in size. However, in practical applications, a supersonic aircraft must operate stably in both subsonic and supersonic profiles, hence aerodynamic design is more complex. The main key to having low supersonic drag is to properly shape the overall aircraft to be long and thin, and close to a "perfect" shape, the von Karman ogive or Sears-Haack body. This has led to almost every supersonic cruising aircraft looking very similar to every other, with a very long and slender fuselage and large delta wings, cf. SR-71, Concorde, etc. Although not ideal for passenger aircraft, this shaping is quite adaptable for bomber use. See also Area rule Hypersonic speed Sonic boom Supersonic aircraft Supersonic airfoils Transonic speed Vapor cone Prandtl–Glauert singularity Supersonic (Oasis song) References External links "Can We Ever Fly Faster Than the Speed of Sound", October 1944, Popular Science, one of the earliest articles on shock waves and flying the speed of sound "Britain Goes Supersonic", January 1946, Popular Science, 1946 article trying to explain supersonic flight to the general public MathPages – The Speed of Sound Supersonic sound pressure levels Aerodynamics Aerospace engineering Airspeed Sound Temporal rates
Supersonic speed
[ "Physics", "Chemistry", "Engineering" ]
1,359
[ "Temporal quantities", "Physical quantities", "Temporal rates", "Aerodynamics", "Airspeed", "Aerospace engineering", "Wikipedia categories named after physical quantities", "Fluid dynamics" ]
61,274
https://en.wikipedia.org/wiki/Unisys
Unisys Corporation is an American multinational information technology (IT) services and consulting company founded in 1986 and headquartered in Blue Bell, Pennsylvania. The company provides digital workplace, cloud applications and infrastructure, enterprise computing, business process, AI technology, and data analytics services. History Founding Unisys was formed in 1986 through the merger of mainframe corporations Sperry and Burroughs, with Burroughs buying Sperry for $4.8 billion. The new company's name was chosen from over 31,000 submissions in an internal competition when Christian Machen submitted the word "Unisys", which was composed of parts of the words "united", "information", and "systems". The merger was the largest in the computer industry at the time and made Unisys the second-largest computer company, with annual revenue of $10.5 billion. W. Michael Blumenthal became CEO and Chairman. 20th century Soon after the merger, the market for proprietary mainframe-class systems, the mainstream product of Unisys and its competitors such as IBM, began a long-term decline that continues, at a lesser rate, today. Unisys responded by making the strategic decision to shift into high-end servers, including 32-bit processor Windows servers, and information technology (IT) services, such as systems integration, outsourcing, and related technical services, while holding onto the profitable revenue stream from maintaining its installed base of proprietary mainframe hardware and applications. In 1988, the company acquired Convergent Technologies, creators of the Convergent Technologies Operating System (CTOS). In 1990, Blumenthal resigned. James Unruh, formerly of Memorex and Honeywell, became the new CEO and Chairman after Blumenthal's departure and continued in both roles until 1997, when Larry Weinbach of Arthur Andersen became the new CEO. 21st century Joseph McGrath served as CEO and President from January 2005 until September 2008. On October 7, 2008, J. Edward Coleman replaced McGrath as CEO and Chairman. On November 10, 2008, the company was removed from the S&P 500 index when its market capitalization fell below the S&P 500 minimum of $4 billion. In 2010, Unisys sold its Medicare processing Health Information Management service to Molina Healthcare for $135 million. On October 6, 2014, after six years as CEO and chairman, Unisys announced that Coleman was stepping down effective December 1, 2014. On January 1, 2015, Unisys named Peter Altabef as its new president and CEO, replacing Edward Coleman. Paul Weaver, who was formerly Lead Independent Director, was named Chairman. In February 2020, SAIC announced plans to acquire Unisys Federal, the company's federal defense contracting operation, for $1.2 billion. The company's federal customer list included over a dozen military and civilian agencies. As part of the acquisition, Unisys has a licensing agreement with SAIC to continue providing its software to federal clients. In June 2020, the biometric identification system of Australia's Department of Home Affairs, built in part through a partnership with Unisys, was launched. In June 2021, the company announced the acquisition of Unify Square, which provides software and services that help enterprises manage collaboration and communication platforms like Zoom and Microsoft Teams. In November 2021, Mobinergy, a mobile device management software company, was acquired, and in December Unisys acquired CompuGain, an Amazon Web Services Advanced Consulting Partner. 
In July 2021, Unisys partnered with Vodafone to help the company boost its IT services. The two launched "Vodafone Digital Factory," and Unisys helped Vodafone clients with technologies like AI, virtual and augmented reality, and blockchain. In May 2022, the company joined the Plug and Play Enterprise Tech program. This allowed Unisys to source and partner with technology startups to access and use early-stage emerging technology. Recognition and awards In 2018, 2019, and 2020, Unisys was named an overall market segment leader in the NelsonHall Evaluation & Assessment Tool Vendor Evaluation for Advanced Digital Workplace Services. NelsonHall: 2021 NEAT Assessment, Leader, Cognitive and Self-Healing IT Infrastructure Services In August 2020, Unisys Corporation reported that for the third straight year, NelsonHall had listed the organization as the regional market sector leader in the Evaluation & Assessment Tool (NEAT) Vendor Analysis report for Advanced Digital Workplace Services. Avasant: 2021 RadarView – Digital Workplace Services NelsonHall: 2022 NEAT Assessment, Leader, End-to-end Cloud Infrastructure Services Everest Group: 2022 PEAK Matrix - Cloud Services, Major Contender - North America and Europe In 2022, the company was named one of Forbes' America's Best Employers for Women and a Leader in the Advanced Digital Workplace Services Assessment. Products and services Unisys offers outsourcing managed services, systems integration and consulting services, application management and device management software, high-end server technology, maintenance and support services, and cybersecurity services. In line with larger trends in the information technology industry, an increasing amount of Unisys revenue comes from services rather than equipment sales; in 2014, the ratio was 86% for services, up from 65% in 1997. The company maintains a portfolio of over 1,500 U.S. and non-U.S. patents, and in the 1990s controversially monetized its patent on technology underlying the GIF image file format. In 2014, Unisys phased out its CMOS processors, completing the migration of its ClearPath mainframes to Intel x86 chips, allowing clients to run the company's OS 2200 and MCP operating systems alongside more recent Windows and Linux workloads on Intel-based systems that support cloud and virtualization. The company announced its new ClearPath Dorado 8380 and 8390 systems in May 2015. These new systems allowed the company to transition its ClearPath server families from proprietary CMOS processor technology to a software-based fabric architecture running on Intel processors. Unisys operates data centers around the world. Digital Workplace Services (DWS) In March 2022, Vision-Box awarded Unisys two digital workplace solutions contracts to help build automated "SmartGates," electronic security gates, at New Zealand's Auckland International Airport and Australia's 10 international airports. Cloud, Applications, and Infrastructure (CA & I) California State University used Unisys' CloudForte and Managed Security Services to integrate its hybrid-cloud environment. After acquiring CompuGain, Unisys furthered its cloud capabilities, including hybrid cloud and cloud optimization, agile cloud migration, cloud-native capabilities, and data governance. Cybersecurity In November 2020, Unisys updated its Stealth platform to include visualization and dashboard tools to make it easier for an organization to track security in real time. 
The new version made it possible for cybersecurity teams to see relationships between all network endpoints, including multiple clouds and edge computing platforms. Enterprise Computing (ECS) Unisys was the first to develop a server architecture that supported four operating environments running simultaneously on the same computer system in a single virtualized partition. In 2013, Unisys won a $650 million Enterprise Computing Center Support contract to support the computer systems used by the Internal Revenue Service. Business Processes (BPS) Unisys launched its business process consulting service in 2004. This service, called Business Blueprints, helped developers create high-level models of their own software. The company partners with Rubicon Technologies to deliver business process solutions. Partnerships Unisys' partnerships include: VMware Oracle British Telecom Dell Technologies Amazon Web Services (AWS) MSP Partner and an AWS Government Competency Microsoft Azure Expert MSP partner and a Microsoft Gold Partner Global managed services partner to ServiceNow Google Cloud Partner Advantage Program as a Google Cloud and Google Workspace Resell Partner Clients Clients include Bank ABC, Hershey, the Bank of China, Somos, Henkel, Flowserve, The Philippine Statistics Authority (PSA), MASkargo (the cargo division of Malaysia Airlines), Nutreco, California State University (CSU), Air India, RAMS Home Loans, and the Georgia Technology Authority. Unisys systems are used for many industrial and government purposes, including banking, check processing, income tax processing, airline passenger reservations, biometric identification, newspaper content management, and shipping port management, as well as providing weather data services. Projects Additional projects include the following: Consumerization of IT A study sponsored by Unisys and conducted by IDC revealed the gap between the activities and expectations of the new generation of "iWorkers" and the ability of organizations to support their needs. The results showed that organizations continue to work with the standardized command-and-control IT models of the past and are not able to profit from the widespread use of newer networked technologies. Cloud 20/20 Cloud 20/20 is an annual technical paper contest for tertiary students in India, launched in October 2009. The contest allows students to explore the possibilities and complexities of cloud computing in areas such as automation, virtualization, application development, security, consumerization of IT and airports. The contest has drawn participation from universities across India, with over 570 institutes taking part in 2009 and more than a thousand in 2010. The contest culminates in an event where five finalists present their papers before a panel of judges comprising academics and technologists. Prizes include the latest technology gadgets, internship projects and career opportunities with Unisys. People and Culture Unisys earned a score of 100% on the 2021 Disability Equality Index. The company was recognized as a "Best Place to Work for Disability Inclusion." The Disability Equality Index is a joint initiative of Disability:IN and the American Association of People with Disabilities. It is a "comprehensive benchmarking tool to measure disability workplace inclusion." In November 2021, Unisys launched its UGrow program to help its employees grow internally. The program makes different courses available; each one focuses on skills needed by Unisys employees. 
Company employees also have access to Unisys University, which offers free certifications for over 100 different skills. Examples include courses on management and team leadership, communication skills, and company culture. The courses are organized around Unisys' core business functions. Supplier Diversity Program Unisys has a supplier diversity program, which "encourages using companies owned by either minorities, veterans, women, or historically underutilized companies as suppliers." Carbon Footprint Reduction In 2006, Unisys committed to reducing its carbon footprint by 75% by 2026. It achieved this goal five years early, in 2021. A year later, the company announced a new goal of net zero carbon emissions by 2030. The company also participates in the Carbon Disclosure Project and UN Global Compact. Controversies In 1987, Unisys, along with Rockwell Shuttle Operations Company, was sued for $5.2 million by two former employees of Unisys Corporation, a subcontractor responsible for the computer programs for the Space Shuttle. The suit, filed by Sylvia Robins, a former Unisys engineer, and Ria Solomon, who worked for Robins, charged that the two were forced from their jobs and harassed after complaining about safety violations and inflated costs. Unisys overcharged the U.S. government and was found guilty of failure to supply adequate equipment in 1998. In 1998, Unisys Corporation agreed to pay the government $2.25 million to settle allegations that it supplied refurbished, rather than new, computer materials to several federal agencies in violation of the terms of its contract. Unisys admitted to supplying re-worked or refurbished computer components to various civilian and military agencies in the early 1990s, when the contract required the company to provide new equipment. The market price for the refurbished material was less than the price the government paid for new material. In 1998, Unisys was found guilty of price inflation and government contract fraud, with the company settling to avoid further prosecution. Lockheed Martin and Unisys paid the government $3.15 million to settle allegations that Unisys inflated the prices of spare parts sold to the U.S. Department of Commerce for its NEXRAD Doppler Radar System, in violation of the False Claims Act, 31 U.S.C. § 3729, et seq. "[T]he settlement resolves allegations that Unisys knew that prices it paid Concurrent Computer Corporation for the spare parts were inflated when it passed on those prices to the government. Unisys had obtained price discounts from Concurrent on other items Unisys was purchasing from Concurrent at Unisys' own expense in exchange for agreeing to pay Concurrent the inflated prices". In October 2005, The Washington Post reported that the company had allegedly overbilled on the $1-to-3-billion Transportation Security Administration contract for almost 171,000 hours of labor and overtime at up to the maximum rate of $131.13 per hour, including 24,983 hours not allowed by the contract. Unisys denied wrongdoing. In 2006, The Washington Post reported that the FBI was investigating Unisys for alleged cybersecurity lapses under the company's contract with the U.S. Department of Homeland Security. A number of security lapses supposedly occurred during the contract, including incidents in which data was transmitted to Chinese servers. Unisys denied all charges and said it had documentation disproving the allegations. 
In 2007, Unisys was found guilty of misrepresentation of retiree benefits. A federal judge in Pennsylvania ordered Unisys to reinstate within 60 days free lifetime retiree medical benefits to 12 former employees who were employed by a Unisys predecessor, the Burroughs Corporation. The judge ruled that Unisys "misrepresented the cost and duration of retiree medical benefits" at a time "trial plaintiffs were making retirement decisions" and while it was advising them about the benefits the company would provide during retirement. Also in 2007, Unisys was found guilty of willful trademark infringement in Visible Systems v. Unisys. Computer company Visible Systems prevailed over Unisys Corp. in a trademark infringement lawsuit filed in Massachusetts federal court. In November 2007, the court entered an injunction and final judgment ordering Unisys to discontinue its use of the "Visible" trademark, upholding the jury's award to Visible Systems of $250,000 in damages, and awarding an additional $17,555 in interest. Visible Systems claimed Unisys wrongfully used the name "Visible" in marketing its software and services. The jury found the infringement by Unisys was willful. Visible Systems appealed the final judgment, believing the court wrongly excluded from the jury's consideration the issues of bad faith and disgorgement of an estimated $17 billion in unjust profits. In 2010, Unisys Hungary terminated the local Workers' Union representative Gabor Pinter's employment contract with immediate effect for raising concerns about unpaid overtime and the company's non-compliance with health regulations in its local Shared Services Center. According to the 2012 verdict of the Labour Court of Budapest, Unisys acted illegally and was ordered to pay unpaid wages and benefits, legal costs, and three months' average salary as compensation. See also References External links Business process outsourcing companies 1986 establishments in Pennsylvania Companies based in Montgomery County, Pennsylvania Companies listed on the New York Stock Exchange Computer companies of the United States American companies established in 1986 Consulting firms established in 1986 Computer companies established in 1986 Computer hardware companies Computer systems companies Information technology companies of the United States Information technology consulting firms of the United States Outsourcing companies International information technology consulting firms Technology companies established in 1986 Financial technology companies Collaborative software Cloud storage
Unisys
[ "Technology" ]
3,229
[ "Computer hardware companies", "Computer systems companies", "Computers", "Computer systems" ]
61,275
https://en.wikipedia.org/wiki/Cathodoluminescence
Cathodoluminescence is an optical and electromagnetic phenomenon in which electrons impacting on a luminescent material, such as a phosphor, cause the emission of photons which may have wavelengths in the visible spectrum. A familiar example is the generation of light by an electron beam scanning the phosphor-coated inner surface of the screen of a television that uses a cathode-ray tube. Cathodoluminescence is the inverse of the photoelectric effect, in which electron emission is induced by irradiation with photons. Origin Luminescence in a semiconductor results when an electron in the conduction band recombines with a hole in the valence band. The energy difference (band gap) of this transition can be emitted in the form of a photon. The energy (color) of the photon, and the probability that a photon and not a phonon will be emitted, depends on the material, its purity, and the presence of defects. First, the electron has to be excited from the valence band into the conduction band. In cathodoluminescence, this occurs as the result of a high-energy electron beam impinging onto a semiconductor. However, these primary electrons carry far too much energy to directly excite electrons. Instead, the inelastic scattering of the primary electrons in the crystal leads to the emission of secondary electrons, Auger electrons and X-rays, which in turn can scatter as well. Such a cascade of scattering events leads to up to 10³ secondary electrons per incident electron. These secondary electrons can excite valence electrons into the conduction band when they have a kinetic energy of about three times the band gap energy of the material. From there the electron recombines with a hole in the valence band and creates a photon. The excess energy is transferred to phonons and thus heats the lattice. One of the advantages of excitation with an electron beam is that the band gap energy of the materials being investigated is not limited by the energy of the incident light, as in the case of photoluminescence. Therefore, in cathodoluminescence, the "semiconductor" examined can, in fact, be almost any non-metallic material. In terms of band structure, classical semiconductors, insulators, ceramics, gemstones, minerals, and glasses can be treated the same way. Microscopy In geology, mineralogy, materials science and semiconductor engineering, a scanning electron microscope (SEM) fitted with a cathodoluminescence detector, or an optical cathodoluminescence microscope, may be used to examine internal structures of semiconductors, rocks, ceramics, glass, etc. in order to get information on the composition, growth and quality of the material. Optical cathodoluminescence microscope A cathodoluminescence (CL) microscope combines a regular (light optical) microscope with a cathode-ray tube. It is designed to image the luminescence characteristics of polished thin sections of solids irradiated by an electron beam. Using a cathodoluminescence microscope, structures within crystals or fabrics can be made visible which cannot be seen in normal light conditions. Thus, for example, valuable information on the growth of minerals can be obtained. CL-microscopy is used in geology, mineralogy and materials science for the investigation of rocks, minerals, volcanic ash, glass, ceramic, concrete, fly ash, etc. CL color and intensity are dependent on the characteristics of the sample and on the working conditions of the electron gun. Here, acceleration voltage and beam current of the electron beam are of major importance. 
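As a worked illustration of the band-gap relation described in the Origin section above, assuming a GaN sample with a band-gap energy of roughly 3.4 eV (a representative value chosen only as an example), the near-band-edge emission wavelength follows from λ = hc/E_g:

\[
\lambda = \frac{hc}{E_g} \approx \frac{1240\ \text{eV}\cdot\text{nm}}{3.4\ \text{eV}} \approx 365\ \text{nm}
\]

i.e. near-ultraviolet emission for such a material, and the impact-ionization threshold of roughly three times the band-gap energy mentioned above would correspond to secondary-electron kinetic energies of about 10 eV.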
Today, two types of CL microscopes are in use. One works with a "cold cathode" that generates an electron beam with a corona discharge tube; the other produces a beam using a "hot cathode". Cold-cathode CL microscopes are the simplest and most economical type. Unlike other electron bombardment techniques such as electron microscopy, cold-cathode cathodoluminescence microscopy provides positive ions along with the electrons, which neutralize surface charge buildup and eliminate the need for conductive coatings to be applied to the specimens. The "hot cathode" type generates an electron beam with an electron gun with a tungsten filament. The advantage of a hot cathode is the precisely controllable high beam intensity, which allows the emission of light to be stimulated even in weakly luminescing materials (e.g. quartz – see picture). To prevent charging of the sample, the surface must be coated with a conductive layer of gold or carbon. This is usually done with a sputter deposition device or a carbon coater. Cathodoluminescence from a scanning electron microscope In scanning electron microscopes a focused beam of electrons impinges on a sample and induces it to emit light that is collected by an optical system, such as an elliptical mirror. From there, a fiber optic transfers the light out of the microscope, where it is separated into its component wavelengths by a monochromator and then detected with a photomultiplier tube. By scanning the microscope's beam in an X-Y pattern and measuring the light emitted with the beam at each point, a map of the optical activity of the specimen can be obtained (cathodoluminescence imaging). Alternatively, by measuring the wavelength dependence for a fixed point or a certain area, the spectral characteristics can be recorded (cathodoluminescence spectroscopy). Furthermore, if the photomultiplier tube is replaced with a CCD camera, an entire spectrum can be measured at each point of a map (hyperspectral imaging). Moreover, the optical properties of an object can be correlated to structural properties observed with the electron microscope. The primary advantage of the electron-microscope-based technique is its spatial resolution. In a scanning electron microscope, the attainable resolution is on the order of a few tens of nanometers, while in a (scanning) transmission electron microscope (TEM), nanometer-sized features can be resolved. Additionally, it is possible to perform nanosecond- to picosecond-level time-resolved measurements if the electron beam can be "chopped" into nano- or picosecond pulses by a beam blanker or with a pulsed electron source. These advanced techniques are useful for examining low-dimensional semiconductor structures, such as quantum wells or quantum dots. While an electron microscope with a cathodoluminescence detector provides high magnification, an optical cathodoluminescence microscope benefits from its ability to show actual visible color features directly through the eyepiece. More recently developed systems try to combine an optical and an electron microscope to take advantage of both techniques. Extended applications Although direct bandgap semiconductors such as GaAs or GaN are most easily examined by these techniques, indirect semiconductors such as silicon also emit weak cathodoluminescence and can be examined as well. In particular, the luminescence of dislocated silicon is different from that of intrinsic silicon, and can be used to map defects in integrated circuits.
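The imaging, spectroscopy and hyperspectral modes described above can be pictured as filling an array with one measurement per beam position. The short Python sketch below is a minimal illustration with synthetic data and no real instrument or acquisition API; the array sizes and wavelength range are arbitrary assumptions. It shows how a hyperspectral CL data cube can be organised and how panchromatic maps, monochromatic maps and region-averaged spectra fall out of it.

```python
# Minimal sketch: organising SEM-cathodoluminescence data as a 3-D data cube
# (scan y, scan x, wavelength), using synthetic placeholder counts.

import numpy as np

ny, nx, n_wavelengths = 64, 64, 512
wavelengths_nm = np.linspace(350.0, 850.0, n_wavelengths)

# Placeholder for photon counts recorded by the spectrometer at every beam position.
cube = np.random.poisson(lam=5.0, size=(ny, nx, n_wavelengths)).astype(float)

# Panchromatic CL image: integrate the spectrum at each pixel.
panchromatic_map = cube.sum(axis=-1)

# Monochromatic map: pick the band closest to a wavelength of interest (e.g. 550 nm).
idx = int(np.abs(wavelengths_nm - 550.0).argmin())
monochromatic_map = cube[:, :, idx]

# Spectrum averaged over a selected region (cathodoluminescence spectroscopy).
region_spectrum = cube[10:20, 30:40, :].mean(axis=(0, 1))

print(panchromatic_map.shape, monochromatic_map.shape, region_spectrum.shape)
```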
Recently, cathodoluminescence performed in electron microscopes has also been used to study surface plasmon resonances in metallic nanoparticles. Surface plasmons in metal nanoparticles can absorb and emit light, though the process is different from that in semiconductors. Similarly, cathodoluminescence has been exploited as a probe to map the local density of states of planar dielectric photonic crystals and nanostructured photonic materials. See also Electron-stimulated luminescence Luminescence Photoluminescence Scanning electron microscopy References Further reading Electron beams set nanostructures aglow, E. S. Reich, Nature 493, 143 (2013) Scanning Cathodoluminescence Microscopy, C. M. Parish and P. E. Russell, in Advances in Imaging and Electron Physics, V. 147, ed. P. W. Hawkes, p. 1 (2007) Quick look cathodoluminescence analyses and their impact on the interpretation of carbonate reservoirs. Case study of mid-Jurassic oolitic reservoirs in the Paris Basin, B. Granier and C. Staffelbach (2009) Cathodoluminescence Microscopy of Inorganic Solids, B. G. Yacobi and D. B. Holt, New York, Springer (1990) External links Application laboratory time-resolved cathodoluminescence spectroscopy at Paul-Drude-Institut LumiSpy – Luminescence spectroscopy data analysis with python Scientific Results about High Spatial Resolution Cathodoluminescence Electron beam Light sources Luminescence Materials science Scientific techniques
Cathodoluminescence
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,799
[ "Electron", "Luminescence", "Molecular physics", "Applied and interdisciplinary physics", "Electron beam", "Materials science", "nan" ]
61,309
https://en.wikipedia.org/wiki/Mosaic
A mosaic is a pattern or image made of small regular or irregular pieces of colored stone, glass or ceramic, held in place by plaster or mortar, and covering a surface. Mosaics are often used as floor and wall decoration, and were particularly popular in the Ancient Roman world. Mosaic today includes not just murals and pavements, but also artwork, hobby crafts, and industrial and construction forms. Mosaics have a long history, starting in Mesopotamia in the 3rd millennium BC. Pebble mosaics were made in Tiryns in Mycenean Greece; mosaics with patterns and pictures became widespread in classical times, both in Ancient Greece and Ancient Rome. Early Christian basilicas from the 4th century onwards were decorated with wall and ceiling mosaics. Mosaic art flourished in the Byzantine Empire from the 6th to the 15th centuries; that tradition was adopted by the Norman Kingdom of Sicily in the 12th century, by the eastern-influenced Republic of Venice, and among the Rus. Mosaic fell out of fashion in the Renaissance, though artists like Raphael continued to practice the old technique. Roman and Byzantine influence led Jewish artists to decorate 5th- and 6th-century synagogues in the Middle East with floor mosaics. Figurative mosaic, but mostly without human figures, was widely used on religious buildings and palaces in early Islamic art, including Islam's first great religious building, the Dome of the Rock in Jerusalem, and the Umayyad Mosque in Damascus. Such mosaics went out of fashion in the Islamic world after the 8th century, except for geometrical patterns in techniques such as zellij, which remain popular in many areas. Modern mosaics are made by artists and craftspeople around the world. Many materials other than traditional stone, ceramic tesserae, enameled and stained glass may be employed, including shells, beads, charms, chains, gears, coins, and pieces of costume jewelry. Mosaic materials Traditional mosaics are made of small, roughly square cubes of stone or hand-made glass enamel of different colours, known as tesserae. Some of the earliest mosaics were made of natural pebbles, originally used to reinforce floors. Mosaic skinning (covering objects with mosaic glass) is done with thin enameled glass and opaque stained glass. Modern mosaic art is made from any material in any size, ranging from carved stone and bottle caps to found objects. History The earliest known examples of mosaics made of different materials were found at a temple building in Abra, Mesopotamia, and are dated to the second half of the 3rd millennium BC. They consist of pieces of colored stones, shells and ivory. Excavations at Susa and Chogha Zanbil show evidence of the first glazed tiles, dating from around 1500 BC. However, mosaic patterns were not used until the times of the Sassanid Empire and Roman influence. Greek and Roman Bronze Age pebble mosaics have been found at Tiryns; mosaics of the 4th century BC are found in the Macedonian palace-city of Aegae, and the 4th-century BC mosaic of The Beauty of Durrës, discovered in Durrës, Albania, in 1916, is an early figural example; the Greek figural style was mostly formed in the 3rd century BC. Mythological subjects, or scenes of hunting or other pursuits of the wealthy, were popular as the centrepieces of a larger geometric design, with strongly emphasized borders. Pliny the Elder mentions the artist Sosus of Pergamon by name, describing his mosaics of the food left on a floor after a feast and of a group of doves drinking from a bowl.
Both of these themes were widely copied. Greek figural mosaics may have copied or adapted paintings, a far more prestigious art form, and the style was enthusiastically adopted by the Romans, so that large floor mosaics enriched the floors of Hellenistic villas and Roman dwellings from Britain to Dura-Europos. Most recorded names of Roman mosaic workers are Greek, suggesting they dominated high-quality work across the empire; no doubt most ordinary craftsmen were slaves. Splendid mosaic floors are found in Roman villas across North Africa, in places such as Carthage, and can still be seen in the extensive collection in the Bardo Museum in Tunis, Tunisia. There were two main techniques in Greco-Roman mosaic: opus vermiculatum used tiny tesserae, typically cubes of 4 millimeters or less, and was produced in workshops in relatively small panels which were transported to the site glued to some temporary support. The tiny tesserae allowed very fine detail, and an approach to the illusionism of painting. Often small panels called emblemata were inserted into walls or as the highlights of larger floor mosaics in coarser work. The normal technique was opus tessellatum, using larger tesserae, which was laid on site. There was a distinct native Italian style using black on a white background, which was no doubt cheaper than fully coloured work. In Rome, Nero and his architects used mosaics to cover some surfaces of walls and ceilings in the Domus Aurea, built in 64 AD, and wall mosaics are also found at Pompeii and neighbouring sites. However, it seems that it was not until the Christian era that figural wall mosaics became a major form of artistic expression. The Roman church of Santa Costanza, which served as a mausoleum for one or more of the Imperial family, has both religious mosaic and decorative secular ceiling mosaics on a round vault, which probably represent the style of contemporary palace decoration. The mosaics of the Villa Romana del Casale near Piazza Armerina in Sicily are the largest collection of late Roman mosaics in situ in the world, and are protected as a UNESCO World Heritage Site. The large villa rustica, which was probably owned by Emperor Maximian, was built largely in the early 4th century. The mosaics were covered and protected for 700 years by a landslide that occurred in the 12th century. The most important pieces are the Circus Scene, the 64 m long Great Hunting Scene, the Little Hunt, the Labours of Hercules and the famous Bikini Girls, showing women undertaking a range of sporting activities in garments that resemble 20th-century bikinis. The peristyle, the imperial apartments and the thermae were also decorated with ornamental and mythological mosaics. Other important examples of Roman mosaic art in Sicily were unearthed on the Piazza Vittoria in Palermo, where two houses were discovered. The most important scenes depicted there are an Orpheus mosaic, Alexander the Great's Hunt and the Four Seasons. In 1913 the Zliten mosaic, a Roman mosaic famous for its many scenes from gladiatorial contests, hunting and everyday life, was discovered in the Libyan town of Zliten. In 2000 archaeologists working in Leptis Magna, Libya, uncovered a 30 ft length of five colorful mosaics created during the 1st or 2nd century AD. The mosaics show a warrior in combat with a deer, four young men wrestling a wild bull to the ground, and a gladiator resting in a state of fatigue, staring at his slain opponent.
The mosaics decorated the walls of a cold plunge pool in a bath house within a Roman villa. The gladiator mosaic is noted by scholars as one of the finest examples of mosaic art ever seen – a "masterpiece comparable in quality with the Alexander Mosaic in Pompeii." A specific genre of Roman mosaic was called asaroton (Greek for "unswept floor"). It depicted in trompe-l'œil style the feast leftovers on the floors of wealthy houses. Christian mosaics Early Christian art With the building of Christian basilicas in the late 4th century, wall and ceiling mosaics were adopted for Christian uses. The earliest examples of Christian basilicas have not survived, but the mosaics of Santa Costanza and Santa Pudenziana, both from the 4th century, still exist. The winemaking putti in the ambulatory of Santa Costanza still follow the classical tradition in that they represent the feast of Bacchus, which symbolizes transformation or change, and are thus appropriate for a mausoleum, the original function of this building. In another great Constantinian basilica, the Church of the Nativity in Bethlehem, the original mosaic floor with typical Roman geometric motifs is partially preserved. The so-called Tomb of the Julii, near the crypt beneath St Peter's Basilica, is a 4th-century vaulted tomb with wall and ceiling mosaics that are given Christian interpretations. The Rotunda of Galerius in Thessaloniki, converted into a Christian church during the course of the 4th century, was embellished with mosaics of very high artistic quality. Only fragments survive of the original decoration, especially a band depicting saints with hands raised in prayer, in front of complex architectural fantasies. In the following century Ravenna, the capital of the Western Roman Empire, became the center of late Roman mosaic art (see details in the Ravenna section). Milan also served as the capital of the western empire in the 4th century. In the St Aquilinus Chapel of the Basilica of San Lorenzo, mosaics executed in the late 4th and early 5th centuries depict Christ with the Apostles and the Abduction of Elijah; these mosaics are outstanding for their bright colors, naturalism and adherence to the classical canons of order and proportion. The surviving apse mosaic of the Basilica of Sant'Ambrogio, which shows Christ enthroned between Saint Gervasius and Saint Protasius and angels before a golden background, dates back to the 5th and the 8th centuries, although it was restored many times later. The baptistery of the basilica, which was demolished in the 15th century, had a vault covered with gold-leaf tesserae, large quantities of which were found when the site was excavated. In the small shrine of San Vittore in ciel d'oro, now a chapel of Sant'Ambrogio, every surface is covered with mosaics from the second half of the 5th century. Saint Victor is depicted in the center of the golden dome, while figures of saints are shown on the walls before a blue background. The low spandrels give space for the symbols of the four Evangelists. Albingaunum was the main Roman port of Liguria. The octagonal baptistery of the town was decorated in the 5th century with high-quality blue and white mosaics representing the Apostles. The surviving remains are somewhat fragmented. Massilia remained a thriving port and a Christian spiritual center in Southern Gaul, where favourable societal and economic conditions ensured the survival of mosaic art in the 5th and 6th centuries.
The large baptistery, once the grandest building of its kind in Western Europe, had a geometric floor mosaic which is only known from 19th-century descriptions. Other parts of the episcopal complex were also decorated with mosaics, as new finds, unearthed in the 2000s, attest. The funerary basilica of Saint Victor, built in a quarry outside the walls, was decorated with mosaics, but only a small fragment with blue and green scrolls survived on the intrados of an arch (the basilica was later buried under a medieval abbey). A mosaic pavement depicting humans, animals and plants from the original 4th-century cathedral of Aquileia has survived in the later medieval church. This mosaic adopts pagan motifs such as the Nilotic scene, but behind the traditional naturalistic content is Christian symbolism such as the ichthys. The 6th-century early Christian basilicas of Sant'Eufemia and Santa Maria delle Grazie in Grado also have mosaic floors. Ravenna In the 5th century Ravenna, the capital of the Western Roman Empire, became the center of late Roman mosaic art. The Mausoleum of Galla Placidia was decorated with mosaics of high artistic quality in 425–430. The vaults of the small, cross-shaped structure are clad with mosaics on a blue background. The central motif above the crossing is a golden cross in the middle of the starry sky. Another great building established by Galla Placidia was the church of San Giovanni Evangelista. She erected it in fulfillment of a vow that she made having escaped from a deadly storm in 425 on the sea voyage from Constantinople to Ravenna. The mosaics depicted the storm, portraits of members of the western and eastern imperial family and the bishop of Ravenna, Peter Chrysologus. They are known only from Renaissance sources because almost all were destroyed in 1747. The Ostrogoths kept the tradition alive in the 6th century, as the mosaics of the Arian Baptistery, the Baptistery of Neon, the Archbishop's Chapel, and the earlier-phase mosaics in the Basilica of San Vitale and the Basilica of Sant'Apollinare Nuovo testify. After 539, Ravenna was reconquered by the Romans in the form of the Eastern Roman Empire (Byzantine Empire) and became the seat of the Exarchate of Ravenna. The greatest development of Christian mosaics unfolded in the second half of the 6th century. Outstanding examples of Byzantine mosaic art are the later-phase mosaics in the Basilica of San Vitale and the Basilica of Sant'Apollinare Nuovo. The mosaics depicting Emperor Saint Justinian I and Empress Theodora in the Basilica of San Vitale were executed shortly after the Byzantine conquest. The mosaics of the Basilica of Sant'Apollinare in Classe were made around 549. The anti-Arian theme is obvious in the apse mosaic of San Michele in Affricisco, executed in 545–547 (largely destroyed; the remains are in Berlin). The last example of Byzantine mosaics in Ravenna was commissioned by bishop Reparatus between 673 and 679 in the Basilica of Sant'Apollinare in Classe. The mosaic panel in the apse showing the bishop with Emperor Constantine IV is obviously an imitation of the Justinian panel in San Vitale. Butrint The mosaic pavement of the Vrina Plain basilica of Butrint, Albania, appears to pre-date that of the Baptistery by almost a generation, dating to the last quarter of the 5th or the first years of the 6th century.
The mosaic displays a variety of motifs including sea-creatures, birds, terrestrial beasts, fruits, flowers, trees and abstracts – designed to depict a terrestrial paradise of God's creation. Superimposed on this scheme are two large tablets, tabulae ansatae, carrying inscriptions. A variety of fish, a crab, a lobster, shrimps, mushrooms, flowers, a stag and two cruciform designs surround the smaller of the two inscriptions, which reads: In fulfilment of the vow (prayer) of those whose names God knows. This anonymous dedicatory inscription is a public demonstration of the benefactors' humility and an acknowledgement of God's omniscience. The abundant variety of natural life depicted in the Butrint mosaics celebrates the richness of God's creation; some elements also have specific connotations. The kantharos vase and vine refer to the eucharist, the symbol of the sacrifice of Christ leading to salvation. Peacocks are symbols of paradise and resurrection; shown eating or drinking from the vase, they indicate the route to eternal life. Deer or stags were commonly used as images of the faithful aspiring to Christ: "As the hart panteth after the water brooks, so panteth my soul after thee, O God." Water-birds and fish and other sea-creatures can indicate baptism as well as the members of the Church who are christened. Late Antique and Early Medieval Rome Christian mosaic art also flourished in Rome, gradually declining as conditions became more difficult in the Early Middle Ages. 5th-century mosaics can be found over the triumphal arch and in the nave of the basilica of Santa Maria Maggiore. The 27 surviving panels of the nave are the most important mosaic cycle in Rome of this period. Two other important 5th-century mosaics are lost, but we know them from 17th-century drawings. In the apse mosaic of Sant'Agata dei Goti (462–472, destroyed in 1589) Christ was seated on a globe with the twelve Apostles flanking him, six on either side. At Sant'Andrea in Catabarbara (468–483, destroyed in 1686) Christ appeared in the center, flanked on either side by three Apostles. Four streams flowed from the little mountain supporting Christ. The original 5th-century apse mosaic of the Santa Sabina was replaced by a very similar fresco by Taddeo Zuccari in 1559. The composition probably remained unchanged: Christ flanked by male and female saints, seated on a hill while lambs drink from a stream at its feet. All three mosaics had a similar iconography. 6th-century pieces are rare in Rome, but the mosaics inside the triumphal arch of the basilica of San Lorenzo fuori le mura belong to this era. The Chapel of Ss. Primo e Feliciano in Santo Stefano Rotondo has very interesting and rare mosaics from the 7th century. This chapel was built by Pope Theodore I as a family burial place. In the 7th–9th centuries Rome fell under the influence of Byzantine art, noticeable on the mosaics of Santa Prassede, Santa Maria in Domnica, Sant'Agnese fuori le Mura, Santa Cecilia in Trastevere, Santi Nereo e Achilleo and the San Venanzio chapel of San Giovanni in Laterano. The great dining hall of Pope Leo III in the Lateran Palace was also decorated with mosaics. They were all destroyed later except for one example, the so-called Triclinio Leoniano, of which a copy was made in the 18th century. Another great work of Pope Leo, the apse mosaic of Santa Susanna, depicted Christ with the Pope and Charlemagne on one side, and SS. Susanna and Felicity on the other. It was plastered over during a renovation in 1585.
Pope Paschal I (817–824) embellished the church of Santo Stefano del Cacco with an apsidal mosaic which depicted the pope with a model of the church (destroyed in 1607). The fragment of an 8th-century mosaic, the Epiphany, is one of the very rare remaining pieces of the medieval decoration of Old St. Peter's Basilica, demolished in the late 16th century. The precious fragment is kept in the sacristy of Santa Maria in Cosmedin. It proves the high artistic quality of the destroyed St. Peter's mosaics. Byzantine mosaics Mosaics were more central to Byzantine culture than to that of Western Europe. Byzantine church interiors were generally covered with golden mosaics. Mosaic art flourished in the Byzantine Empire from the 6th to the 15th centuries. The majority of Byzantine mosaics were destroyed without trace during wars and conquests, but the surviving remains still form a fine collection. The great buildings of Emperor Justinian, like the Hagia Sophia in Constantinople, the Nea Church in Jerusalem and the rebuilt Church of the Nativity in Bethlehem, were certainly embellished with mosaics, but none of these survived. Important fragments survived from the mosaic floor of the Great Palace of Constantinople, which was commissioned during Justinian's reign. The figures, animals and plants are all entirely classical, but they are scattered before a plain background. The portrait of a moustached man, probably a Gothic chieftain, is considered the most important surviving mosaic of the Justinianian age. The so-called small sekreton of the palace was built during Justin II's reign around 565–577. Some fragments survive from the mosaics of this vaulted room. The vine scroll motifs are very similar to those in Santa Costanza, and they still closely follow the Classical tradition. There are remains of floral decoration in the Church of the Acheiropoietos in Thessaloniki (5th–6th centuries). In the 6th century, Ravenna, the capital of Byzantine Italy, became the center of mosaic making. Istria also boasts some important examples from this era. The Euphrasian Basilica in Parentium was built in the middle of the 6th century and decorated with mosaics depicting the Theotokos flanked by angels and saints. Fragments remain from the mosaics of the Church of Santa Maria Formosa in Pola. These pieces were made during the 6th century by artists from Constantinople. Their pure Byzantine style is different from the contemporary Ravennate mosaics. Very few early Byzantine mosaics survived the Iconoclastic destruction of the 8th century. Among the rare examples are the 6th-century Christ in majesty (or Ezekiel's Vision) mosaic in the apse of the Church of Hosios David in Thessaloniki, which was hidden behind mortar during those dangerous times. Nine mosaic panels in the Hagios Demetrios Church, which were made between 634 and 730, also escaped destruction. Unusually, almost all represent Saint Demetrius of Thessaloniki, often with suppliants before him. This iconoclasm was almost certainly because of nearby Muslims' beliefs. In the Iconoclastic era, figural mosaics were also condemned as idolatry. The Iconoclastic churches were embellished with plain gold mosaics with only one great cross in the apse, like the Hagia Irene in Constantinople (after 740). There were similar crosses in the apses of the Hagia Sophia Church in Thessaloniki and in the Church of the Dormition in Nicaea.
The crosses were replaced with images of the Theotokos in both churches after the victory of the Iconodules (in 787–797 and in the 8th–9th centuries respectively; the Dormition church was totally destroyed in 1922). A similar Theotokos image flanked by two archangels was made for the Hagia Sophia in Constantinople in 867. The dedication inscription says: "The images which the impostors had cast down here pious emperors have again set up." In the 870s the so-called large sekreton of the Great Palace of Constantinople was decorated with the images of the four great iconodule patriarchs. The post-Iconoclastic era was the heyday of Byzantine art, with the most beautiful mosaics executed. The mosaics of the Macedonian Renaissance (867–1056) carefully mingled traditionalism with innovation. Constantinopolitan mosaics of this age followed the decoration scheme first used in Emperor Basil I's Nea Ekklesia. Not only was this prototype later totally destroyed, but each surviving composition is battered, so it is necessary to move from church to church to reconstruct the system. An interesting set of Macedonian-era mosaics make up the decoration of the Hosios Loukas Monastery. In the narthex there is the Crucifixion, the Pantokrator and the Anastasis above the doors, while in the church are the Theotokos (apse), Pentecost, scenes from Christ's life and the hermit St Loukas (all executed before 1048). The scenes are treated with a minimum of detail and the panels are dominated by the gold setting. The Nea Moni Monastery on Chios was established by Constantine Monomachos in 1043–1056. The exceptional mosaic decoration of the dome showing probably the nine orders of the angels was destroyed in 1822 but other panels survived (Theotokos with raised hands, four evangelists with seraphim, scenes from Christ's life and an interesting Anastasis where King Solomon bears resemblance to Constantine Monomachos). In comparison with Hosios Loukas, the Nea Moni mosaics contain more figures, detail, landscape and setting. Another great undertaking by Constantine Monomachos was the restoration of the Church of the Holy Sepulchre in Jerusalem between 1042 and 1048. Nothing survived of the mosaics which covered the walls and the dome of the edifice, but the Russian abbot Daniel, who visited Jerusalem in 1106–1107, left a description: "Lively mosaics of the holy prophets are under the ceiling, over the tribune. The altar is surmounted by a mosaic image of Christ. In the main altar one can see the mosaic of the Exaltation of Adam. In the apse the Ascension of Christ. The Annunciation occupies the two pillars next to the altar." The Daphni Monastery houses the best preserved complex of mosaics from the early Comnenian period (ca. 1100), when the austere and hieratic manner typical of the Macedonian epoch, represented by the awesome Christ Pantocrator image inside the dome, was metamorphosing into a more intimate and delicate style, of which The Angel before St Joachim, with its pastoral backdrop, harmonious gestures and pensive lyricism, is considered a superb example. The 9th- and 10th-century mosaics of the Hagia Sophia in Constantinople are truly classical Byzantine artworks. The north and south tympana beneath the dome were decorated with figures of prophets, saints and patriarchs. Above the principal door from the narthex we can see an Emperor kneeling before Christ (late 9th or early 10th century). Above the door from the southwest vestibule to the narthex another mosaic shows the Theotokos with Justinian and Constantine.
Justinian I is offering the model of the church to Mary while Constantine is holding a model of the city in his hand. Both emperors are beardless – this is an example of conscious archaization, as contemporary Byzantine rulers were bearded. A mosaic panel on the gallery shows Christ with Constantine Monomachos and Empress Zoe (1042–1055). The emperor gives a bulging money sack to Christ as a donation for the church. The dome of the Hagia Sophia Church in Thessaloniki is decorated with an Ascension mosaic (c. 885). The composition resembles the great baptisteries in Ravenna, with apostles standing between palms and Christ in the middle. The scheme is somewhat unusual as the standard post-Iconoclastic formula for domes contained only the image of the Pantokrator. There are very few existing mosaics from the Komnenian period, but this paucity must be due to accidents of survival and gives a misleading impression. The only surviving 12th-century mosaic work in Constantinople is a panel in Hagia Sophia depicting Emperor John II and Empress Eirene with the Theotokos (1122–34). The empress, with her long braided hair and rosy cheeks, is especially captivating. It must be a lifelike portrayal because Eirene was really a redhead, as her original Hungarian name, Piroska, shows. The adjacent portrait of Emperor Alexios I Komnenos on a pier (from 1122) is similarly personal. The imperial mausoleum of the Komnenos dynasty, the Pantokrator Monastery, was certainly decorated with great mosaics but these were later destroyed. The lack of Komnenian mosaics outside the capital is even more apparent. There is only a "Communion of the Apostles" in the apse of the cathedral of Serres. A striking technical innovation of the Komnenian period was the production of very precious, miniature mosaic icons. In these icons the small tesserae (with sides of 1 mm or less) were set on wax or resin on a wooden panel. These products of extraordinary craftsmanship were intended for private devotion. The Louvre Transfiguration is a very fine example from the late 12th century. The miniature mosaic of Christ in the Museo Nazionale at Florence illustrates the more gentle, humanistic conception of Christ which appeared in the 12th century. The sack of Constantinople in 1204 caused the decline of mosaic art for the next five decades. After the reconquest of the city by Michael VIII Palaiologos in 1261 the Hagia Sophia was restored and a beautiful new Deesis was made on the south gallery. This huge mosaic panel, with figures two and a half times life-size, is really overwhelming due to its grand scale and superlative craftsmanship. The Hagia Sophia Deesis is probably the most famous Byzantine mosaic in Constantinople. The Pammakaristos Monastery was restored by Michael Glabas, an imperial official, in the late 13th century. Only the mosaic decoration of the small burial chapel (parekklesion) of Glabas survived. This domed chapel was built by his widow, Martha, around 1304–08. In the miniature dome the traditional Pantokrator can be seen with twelve prophets beneath. Unusually, the apse is decorated with a Deesis, probably due to the funerary function of the chapel. The Church of the Holy Apostles in Thessaloniki was built in 1310–14. Although a vandal systematically removed the gold tesserae of the background, it can be seen that the Pantokrator and the prophets in the dome follow the traditional Byzantine pattern. Many details are similar to the Pammakaristos mosaics, so it is supposed that the same team of mosaicists worked in both buildings.
Another building with a related mosaic decoration is the Theotokos Paregoritissa Church in Arta. The church was established by the Despot of Epirus in 1294–96. In the dome is the traditional stern Pantokrator, with prophets and cherubim below. The greatest mosaic work of the Palaeologan renaissance in art is the decoration of the Chora Church in Constantinople. Although the mosaics of the naos have not survived except for three panels, the decoration of the exonarthex and the esonarthex constitutes the most important full-scale mosaic cycle in Constantinople after the Hagia Sophia. They were executed around 1320 by the command of Theodore Metochites. The esonarthex has two fluted domes, specially created to provide the ideal setting for the mosaic images of the ancestors of Christ. The southern one is called the Dome of the Pantokrator while the northern one is the Dome of the Theotokos. The most important panel of the esonarthex depicts Theodore Metochites wearing a huge turban, offering the model of the church to Christ. The walls of both narthexes are decorated with mosaic cycles from the life of the Virgin and the life of Christ. These panels show the influence of the Italian trecento on Byzantine art, especially in the more natural settings, landscapes and figures. The last Byzantine mosaic work was created for the Hagia Sophia, Constantinople, in the middle of the 14th century. The great eastern arch of the cathedral collapsed in 1346, bringing down a third of the main dome. By 1355 not only had the big Pantokrator image been restored but new mosaics had been set on the eastern arch depicting the Theotokos, the Baptist and Emperor John V Palaiologos (discovered only in 1989). In addition to the large-scale monuments, several miniature mosaic icons of outstanding quality were produced for the Palaiologos court and nobles. The loveliest examples from the 14th century are the Annunciation in the Victoria and Albert Museum and a mosaic diptych in the Cathedral Treasury of Florence representing the Twelve Feasts of the Church. In the troubled years of the 15th century the fatally weakened empire could not afford luxurious mosaics. Churches were decorated with wall-paintings in this era and after the Turkish conquest. Rome in the High Middle Ages The last great period of Roman mosaic art was the 12th–13th century, when Rome developed its own distinctive artistic style, free from the strict rules of eastern tradition and with a more realistic portrayal of figures in the space. Well-known works of this period are the floral mosaics of the Basilica di San Clemente, the façade of Santa Maria in Trastevere and San Paolo fuori le Mura. The beautiful apse mosaic of Santa Maria in Trastevere (1140) depicts Christ and Mary sitting next to each other on the heavenly throne, the first example of this iconographic scheme. A similar mosaic, the Coronation of the Virgin, decorates the apse of Santa Maria Maggiore. It is a work of Jacopo Torriti from 1295. The mosaics of Torriti and Jacopo da Camerino in the apse of San Giovanni in Laterano from 1288 to 1294 were thoroughly restored in 1884. The apse mosaic of San Crisogono is attributed to Pietro Cavallini, the greatest Roman painter of the 13th century. Six scenes from the life of Mary in Santa Maria in Trastevere were also executed by Cavallini in 1290. These mosaics are praised for their realistic portrayal and attempts at perspective.
There is an interesting mosaic medallion from 1210 above the gate of the church of San Tommaso in Formis showing Christ enthroned between a white and a black slave. The church belonged to the Order of the Trinitarians, which was devoted to ransoming Christian slaves. The great Navicella mosaic (1305–1313) in the atrium of the Old St. Peter's is attributed to Giotto di Bondone. The giant mosaic, commissioned by Cardinal Jacopo Stefaneschi, was originally situated on the eastern porch of the old basilica and occupied the whole wall above the entrance arcade facing the courtyard. It depicted St. Peter walking on the waters. This extraordinary work was mainly destroyed during the construction of the new St. Peter's in the 17th century. Navicella means "little ship", referring to the large boat which dominated the scene, and whose sail, filled by the storm, loomed over the horizon. Such a natural representation of a seascape was known only from ancient works of art. Sicily The heyday of mosaic making in Sicily was the age of the independent Norman kingdom in the 12th century. The Norman kings adopted the Byzantine tradition of mosaic decoration to enhance the somewhat dubious legality of their rule. Greek masters working in Sicily developed their own style, which shows the influence of Western European and Islamic artistic tendencies. The best examples of Sicilian mosaic art are the Cappella Palatina of Roger II, the Martorana church in Palermo and the cathedrals of Cefalù and Monreale. The Cappella Palatina clearly shows evidence for the blending of eastern and western styles. The dome (1142–42) and the eastern end of the church (1143–1154) were decorated with typical Byzantine mosaics, i.e. the Pantokrator, angels and scenes from the life of Christ. Even the inscriptions are written in Greek. The narrative scenes of the nave (Old Testament, life of Sts Peter and Paul) resemble the mosaics of Old St. Peter's and St. Paul's Basilica in Rome (Latin inscriptions, 1154–66). The Martorana church (decorated around 1143) originally looked even more Byzantine, although important parts were later demolished. The dome mosaic is similar to that of the Cappella Palatina, with Christ enthroned in the middle and four bowed, elongated angels. The Greek inscriptions, decorative patterns, and evangelists in the squinches are obviously executed by the same Greek masters who worked on the Cappella Palatina. The mosaic depicting Roger II of Sicily, dressed in Byzantine imperial robes and receiving the crown from Christ, was originally in the demolished narthex together with another panel, the Theotokos with Georgios of Antiochia, the founder of the church. In Cefalù (1148) only the high, French Gothic presbytery was covered with mosaics: the Pantokrator on the semidome of the apse and cherubim on the vault. On the walls are Latin and Greek saints, with Greek inscriptions. The Monreale mosaics constitute the largest decoration of this kind in Italy, covering 0.75 hectares with at least 100 million glass and stone tesserae. This huge work was executed between 1176 and 1186 by the order of King William II of Sicily. The iconography of the mosaics in the presbytery is similar to that of Cefalù, while the pictures in the nave are almost the same as the narrative scenes in the Cappella Palatina. The Martorana mosaic of Roger II blessed by Christ was repeated with the figure of King William II instead of his predecessor. Another panel shows the king offering the model of the cathedral to the Theotokos.
The Cathedral of Palermo, rebuilt by Archbishop Walter at the same time (1172–85), was also decorated with mosaics, but none of these survived except the 12th-century image of the Madonna del Tocco above the western portal. The cathedral of Messina, consecrated in 1197, was also decorated with a great mosaic cycle, originally on par with Cefalù and Monreale, but heavily damaged and restored many times later. In the left apse of the same cathedral 14th-century mosaics survived, representing the Madonna and Child between Saints Agata and Lucy, the Archangels Gabriel and Michael and Queens Eleonora and Elisabetta. Southern Italy was also part of the Norman kingdom, but great mosaics did not survive in this area except for the fine mosaic pavement of the Otranto Cathedral from 1166, with mosaics tied into a tree of life, mostly still preserved. The scenes depict biblical characters, warrior kings, medieval beasts, allegories of the months and working activity. Only fragments survived from the original mosaic decoration of Amalfi's Norman Cathedral. The mosaic ambos in the churches of Ravello prove that mosaic art was widespread in Southern Italy during the 11th–13th centuries. The palaces of the Norman kings were decorated with mosaics depicting animals and landscapes. The secular mosaics are seemingly more Eastern in character than the great religious cycles and show a strong Persian influence. The most notable examples are the Sala di Ruggero in the Palazzo dei Normanni, Palermo and the Sala della Fontana in the Zisa summer palace, both from the 12th century. Venice In the parts of Italy that were under eastern artistic influence, such as Sicily and Venice, mosaic making never went out of fashion in the Middle Ages. The whole interior of the St Mark's Basilica in Venice is clad with elaborate, golden mosaics. The oldest scenes were executed by Greek masters in the late 11th century but the majority of the mosaics are works of local artists from the 12th–13th centuries. The decoration of the church was finished only in the 16th century. One hundred and ten scenes of mosaics in the atrium of St Mark's were based directly on the miniatures of the Cotton Genesis, a Byzantine manuscript that was brought to Venice after the sack of Constantinople (1204). The mosaics were executed in the 1220s. Other important Venetian mosaics can be found in the Cathedral of Santa Maria Assunta in Torcello from the 12th century, and in the Basilica of Santi Maria e Donato in Murano with a restored apse mosaic from the 12th century and a beautiful mosaic pavement (1140). The apse of the San Cipriano Church in Murano was decorated with an impressive golden mosaic from the early 13th century showing Christ enthroned with Mary, St John and the two patron saints, Cipriano and Cipriana. When the church was demolished in the 19th century, the mosaic was bought by Frederick William IV of Prussia. It was reassembled in the Friedenskirche of Potsdam in the 1840s. Trieste was also an important center of mosaic art. The mosaics in the apse of the Cathedral of San Giusto were laid by master craftsmen from Veneto in the 12th–13th centuries. Medieval Italy The monastery of Grottaferrata, founded by Greek Basilian monks and consecrated by the Pope in 1024, was decorated with Italo-Byzantine mosaics, some of which survived in the narthex and the interior. The mosaics on the triumphal chancel arch portray the Twelve Apostles sitting beside an empty throne, evoking Christ's ascent to Heaven. It is a Byzantine work of the 12th century.
There is a beautiful 11th-century Deesis above the main portal. Desiderius, the Abbot of Monte Cassino, sent envoys to Constantinople some time after 1066 to hire expert Byzantine mosaicists for the decoration of the rebuilt abbey church. According to the chronicler Leo of Ostia, the Greek artists decorated the apse, the arch and the vestibule of the basilica. Their work was admired by contemporaries but was totally destroyed in later centuries except for two fragments depicting greyhounds (now in the Monte Cassino Museum). "The abbot in his wisdom decided that great number of young monks in the monastery should be thoroughly initiated in these arts" – says the chronicler about the role of the Greeks in the revival of mosaic art in medieval Italy. In Florence a magnificent mosaic of the Last Judgement decorates the dome of the Baptistery. The earliest mosaics, the work of many unknown Venetian craftsmen (probably including Cimabue), date from 1225. The covering of the ceiling was probably not completed until the 14th century. The impressive mosaic of Christ in Majesty, flanked by the Virgin Mary and St. John the Evangelist, in the apse of the cathedral of Pisa was designed by Cimabue in 1302. It evokes the Monreale mosaics in style. It survived the great fire of 1595 which destroyed most of the medieval interior decoration. Sometimes not only church interiors but also façades were decorated with mosaics in Italy, as in the case of St Mark's Basilica in Venice (mainly from the 17th–19th centuries, but the oldest one from 1270 to 1275, "The burial of St Mark in the first basilica"), the Cathedral of Orvieto (golden Gothic mosaics from the 14th century, many times redone) and the Basilica di San Frediano in Lucca (a huge, striking golden mosaic representing the Ascension of Christ with the apostles below, designed by Berlinghiero Berlinghieri in the 13th century). The Cathedral of Spoleto is also decorated on the upper façade with a huge mosaic portraying the Blessing Christ (signed by one Solsternus from 1207). Western and Central Europe Beyond the Alps the first important example of mosaic art was the decoration of the Palatine Chapel in Aachen, commissioned by Charlemagne. It was completely destroyed in a fire in 1650. A rare example of surviving Carolingian mosaics is the apse semi-dome decoration of the oratory of Germigny-des-Prés, built in 805–806 by Theodulf, bishop of Orléans, a leading figure of the Carolingian Renaissance. This unique work of art, rediscovered only in the 19th century, had no followers. Only scant remains prove that mosaics were still used in the Early Middle Ages. The Abbey of Saint-Martial in Limoges, originally an important place of pilgrimage, was totally demolished during the French Revolution except for its crypt, which was rediscovered in the 1960s. A mosaic panel was unearthed which was dated to the 9th century. It somewhat incongruously uses cubes of gilded glass and deep green marble, probably taken from antique pavements. This could also be the case with the early 9th-century mosaic found under the Basilica of Saint-Quentin in Picardy, where antique motifs are copied using only simple colors. The mosaics in the Cathedral of Saint-Jean at Lyon have been dated to the 11th century because they employ the same non-antique simple colors. More fragments were found on the site of Saint-Croix at Poitiers, which might be from the 6th or 9th century.
Later, fresco replaced the more labor-intensive technique of mosaic in Western Europe, although mosaics were sometimes used as decoration on medieval cathedrals. The Royal Basilica of the Hungarian kings in Székesfehérvár (Alba Regia) had a mosaic decoration in the apse. It was probably a work of Venetian or Ravennese craftsmen, executed in the first decades of the 11th century. The mosaic was almost totally destroyed together with the basilica in the 17th century. The Golden Gate of the St. Vitus Cathedral in Prague got its name from the golden 14th-century mosaic of the Last Judgement above the portal. It was executed by Venetian craftsmen. The Crusaders in the Holy Land also adopted mosaic decoration under local Byzantine influence. During their 12th-century reconstruction of the Church of the Holy Sepulchre in Jerusalem they complemented the existing Byzantine mosaics with new ones. Almost nothing of them survived except for the "Ascension of Christ" in the Latin Chapel (now confusingly surrounded by many 20th-century mosaics). More substantial fragments were preserved from the 12th-century mosaic decoration of the Church of the Nativity in Bethlehem. The mosaics in the nave are arranged in five horizontal bands with the figures of the ancestors of Christ, Councils of the Church and angels. In the apses the Annunciation, the Nativity, the Adoration of the Magi and the Dormition of the Blessed Virgin can be seen. The program of redecoration of the church was completed in 1169 as a unique collaboration of the Byzantine emperor, the king of Jerusalem and the Latin Church. In 2003, the remains of a mosaic pavement were discovered under the ruins of the Bizere Monastery near the River Mureş in present-day Romania. The panels depict real or fantastic animal, floral, solar and geometric representations. Some archeologists supposed that it was the floor of an Orthodox church, built some time between the 10th and 11th centuries. Other experts claim that it was part of the later Catholic monastery on the site because it shows the signs of strong Italianate influence. The monastery was situated at that time in the territory of the Kingdom of Hungary. Renaissance and Baroque Although mosaics went out of fashion and were supplanted by frescoes, some of the great Renaissance artists also worked with the old technique. Raphael's Creation of the World in the dome of the Chigi Chapel in Santa Maria del Popolo is a notable example that was executed by a Venetian craftsman, Luigi di Pace. During the papacy of Clement VIII (1592–1605), the "Congregazione della Reverenda Fabbrica di San Pietro" was established, providing an independent organisation charged with completing the decorations in the newly built St. Peter's Basilica. Instead of frescoes the cavernous Basilica was mainly decorated with mosaics. Among the explanations are the following: the old St. Peter's Basilica had been decorated with mosaic, as was common in churches built during the early Christian era, and the 17th century followed this tradition to enhance continuity; in a church like this, with high walls and few windows, mosaics were brighter and reflected more light; mosaics had greater intrinsic longevity than either frescoes or canvases; and mosaics had an association with bejeweled decoration, flaunting richness. The mosaics of St. Peter's often show lively Baroque compositions based on designs or canvases by artists such as Ciro Ferri, Guido Reni, Domenichino, Carlo Maratta, and many others. Raphael is represented by a mosaic replica of his last painting, the Transfiguration.
Many different artists contributed to the 17th- and 18th-century mosaics in St. Peter's, including Giovanni Battista Calandra, Fabio Cristofari (died 1689), and Pietro Paolo Cristofari (died 1743). Works of the Fabbrica were often used as papal gifts. The Christian East The eastern provinces of the Eastern Roman and later the Byzantine Empires inherited a strong artistic tradition from Late Antiquity. Similar to Italy and Constantinople, churches and important secular buildings in the region of Syria and Egypt were decorated with elaborate mosaic panels between the 5th and 8th centuries. The great majority of these works of art were later destroyed, but archeological excavations unearthed many surviving examples. The single most important piece of Byzantine Christian mosaic art in the East is the Madaba Map, made between 542 and 570 as the floor of the church of Saint George at Madaba, Jordan. It was rediscovered in 1894. The Madaba Map is the oldest surviving cartographic depiction of the Holy Land. It depicts an area from Lebanon in the north to the Nile Delta in the south, and from the Mediterranean Sea in the west to the Eastern Desert. The largest and most detailed element of the topographic depiction is Jerusalem, at the center of the map. The map is enriched with many naturalistic features, like animals, fishing boats, bridges and palm trees. One of the earliest examples of Byzantine mosaic art in the region can be found on Mount Nebo, an important place of pilgrimage in the Byzantine era where Moses died. Among the many 6th-century mosaics in the church complex (discovered after 1933) the most interesting one is located in the baptistery. The intact floor mosaic covers an area of 9 × 3 m and was laid down in 530. It depicts hunting and pastoral scenes with rich Middle Eastern flora and fauna. The Church of Sts. Lot and Procopius was founded in 567 in Nebo village under Mount Nebo (now Khirbet Mukhayyat). Its floor mosaic depicts everyday activities like the grape harvest. Another two spectacular mosaics were discovered in the ruined Church of Preacher John nearby. One of the mosaics was placed above the other, which was completely covered and unknown until the modern restoration. The figures on the older mosaic have thus escaped the iconoclasts. The town of Madaba remained an important center of mosaic making during the 5th–8th centuries. In the Church of the Apostles, in the middle of the main panel, Thalassa, goddess of the sea, can be seen surrounded by fish and other sea creatures. Native Middle Eastern birds, mammals, plants and fruits were also added. Important Justinian-era mosaics decorated the Saint Catherine's Monastery on Mount Sinai in Egypt. Generally wall mosaics have not survived in the region because of the destruction of buildings, but St. Catherine's Monastery is exceptional. On the upper wall Moses is shown in two panels on a landscape background. In the apse we can see the Transfiguration of Jesus on a golden background. The apse is surrounded with bands containing medallions of apostles and prophets, and two contemporary figures, "Abbot Longinos" and "John the Deacon". The mosaic was probably created in 565/6. Jerusalem with its many holy places probably had the highest concentration of mosaic-covered churches, but very few of them survived the subsequent waves of destruction. The present remains do not do justice to the original richness of the city.
The most important is the so-called "Armenian Mosaic", which was discovered in 1894 on the Street of the Prophets near Damascus Gate. It depicts a vine with many branches and grape clusters, which springs from a vase. Populating the vine's branches are peacocks, ducks, storks, pigeons, an eagle, a partridge, and a parrot in a cage. The inscription reads: "For the memory and salvation of all those Armenians whose name the Lord knows." Beneath a corner of the mosaic is a small, natural cave which contained human bones dating to the 5th or 6th centuries. The symbolism of the mosaic and the presence of the burial cave indicate that the room was used as a mortuary chapel. An exceptionally well preserved, carpet-like mosaic floor was uncovered in 1949 in Bethany, the early Byzantine church of the Lazarium which was built between 333 and 390. Because of its purely geometrical pattern, the church floor is to be grouped with other mosaics of the time in Palestine and neighboring areas, especially the Constantinian mosaics in the central nave at Bethlehem. A second church was built above the older one during the 6th century with another, more simple geometric mosaic floor. The monastic communities of the Judean Desert also decorated their monasteries with mosaic floors. The Monastery of Martyrius was founded at the end of the 5th century and was rediscovered in 1982–85. The most important work of art here is the intact geometric mosaic floor of the refectory, although the severely damaged church floor was similarly rich. The mosaics in the church of the nearby Monastery of Euthymius are of later date (discovered in 1930). They were laid down in the Umayyad era, after a devastating earthquake in 659. Two six-pointed stars and a red chalice are the most important surviving features. Mosaic art also flourished in Christian Petra, where three Byzantine churches were discovered. The most important one was uncovered in 1990. It is known that the walls were also covered with golden glass mosaics but, as usual, only the floor panels survived. The mosaic of the seasons in the southern aisle is from this first building period from the middle of the 5th century. In the first half of the 6th century the mosaics of the northern aisle and the eastern end of the southern aisle were installed. They depict native as well as exotic or mythological animals, and personifications of the Seasons, Ocean, Earth and Wisdom. The Arab conquest of the Middle East in the 7th century did not break off the art of mosaic making. Arabs learned and accepted the craft as their own and carried on the classical tradition. During the Umayyad era Christianity retained its importance, churches were built and repaired, and some of the most important mosaics of the Christian East were made during the 8th century when the region was under Islamic rule. The mosaics of the Church of St Stephen in ancient Kastron Mefaa (now Umm ar-Rasas) were made in 785 (discovered after 1986). The perfectly preserved mosaic floor is the largest one in Jordan. On the central panel hunting and fishing scenes are depicted, while another panel illustrates the most important cities of the region. The frame of the mosaic is especially decorative. Six mosaic masters signed the work: Staurachios from Esbus, Euremios, Elias, Constantinus, Germanus and Abdela. It overlays another, damaged, mosaic floor of the earlier (587) "Church of Bishop Sergius." Another four churches were excavated nearby with traces of mosaic decoration.
The last great mosaics in Madaba were made in 767 in the Church of the Virgin Mary (discovered in 1887). It is a masterpiece of the geometric style with a Greek inscription in the central medallion. With the fall of the Umayyad dynasty in 750, the Middle East went through deep cultural changes. No great mosaics were made after the end of the 8th century and the majority of churches gradually fell into disrepair and were eventually destroyed. The tradition of mosaic making died out among the Christians and also in the Islamic community. Orthodox countries The craft has also been popular in early medieval Rus, inherited as part of the Byzantine tradition. Yaroslav, the Grand Prince of the Kievan Rus', built a large cathedral in his capital, Kyiv. The model of the church was the Hagia Sophia in Constantinople, and it was also called Saint Sophia Cathedral. It was built mainly by Byzantine master craftsmen, sent by Constantine Monomachos, between 1037 and 1046. Naturally the more important surfaces in the interior were decorated with golden mosaics. In the dome we can see the traditional stern Pantokrator supported by angels. Between the 12 windows of the drum were apostles and the four evangelists on the pendentives. The apse is dominated by an orant Theotokos with a Deesis in three medallions above. Below is a Communion of the Apostles. Prince Sviatopolk II built St. Michael's Golden-Domed Monastery in Kyiv in 1108. The mosaics of the church are undoubtedly works of Byzantine artists. Although the church was destroyed by Soviet authorities, the majority of the panels were preserved. Small parts of ornamental mosaic decoration from the 12th century survived in the Saint Sophia Cathedral in Novgorod, but this church was largely decorated with frescoes. Using mosaics and frescoes in the same building was a unique practice in Ukraine. Harmony was achieved by using the same dominant colors in mosaic and fresco. Both Saint Sophia Cathedral and Saint Michael's Golden-Domed Monastery in Kyiv use this technique. Mosaics stopped being used for church decoration as early as the 12th century in the eastern Slavic countries. Later Russian churches were decorated with frescoes, similarly to Orthodox churches in the Balkans. The apse mosaic of the Gelati Monastery is a rare example of mosaic use in Georgia. Begun by King David IV and completed by his son Demetrius I of Georgia, the fragmentary panel depicts the Theotokos flanked by two archangels. The use of mosaic in Gelati attests to some Byzantine influence in the country and was a demonstration of the imperial ambition of the Bagrationids. The mosaic-covered church could compete in magnificence with the churches of Constantinople. Gelati is one of the few mosaic creations which survived in Georgia, but fragments prove that the early churches of Pitsunda and Tsromi were also decorated with mosaic, as well as other, lesser known sites. The destroyed 6th-century mosaic floors in the Pitsunda Cathedral were inspired by Roman prototypes. In Tsromi the tesserae are still visible on the walls of the 7th-century church but only faint lines hint at the original scheme. Its central figure was Christ standing and displaying a scroll with Georgian text. Jewish mosaics Under Roman and Byzantine influence the Jews also decorated their synagogues with classical floor mosaics. Many interesting examples were discovered in Galilee and the Judean Desert. 
The remains of a 6th-century synagogue have been uncovered in Sepphoris, which was an important centre of Jewish culture between the 3rd–7th centuries and a multicultural town inhabited by Jews, Christians and pagans. The mosaic reflects an interesting fusion of Jewish and pagan beliefs. In the center of the floor the zodiac wheel was depicted. Helios sits in the middle, in his sun chariot, and each zodiac sign is matched with a Jewish month. Along the sides of the mosaic are strips depicting Biblical scenes, such as the binding of Isaac, as well as traditional rituals, including a burnt sacrifice and the offering of fruits and grains. Another zodiac mosaic decorated the floor of the Beit Alfa synagogue which was built during the reign of Justin I (518–27). It is regarded as one of the most important mosaics discovered in Israel. Each of its three panels depicts a scene – the Holy Ark, the zodiac, and the story of the sacrifice of Isaac. In the center of the zodiac is Helios, the sun god, in his chariot. The four women in the corners of the mosaic represent the four seasons. A third superbly preserved zodiac mosaic was discovered in the Severus synagogue in the ancient resort town of Hammat Tiberias. In the center of the 4th-century mosaic, the sun god Helios sits in his chariot, holding the celestial sphere and a whip. Nine of the 12 signs of the zodiac survived intact. Another panel shows the Ark of the Covenant and Jewish cultic objects used in the Temple at Jerusalem. In 1936, a synagogue was excavated in Jericho, which was named the Shalom Al Yisrael Synagogue after an inscription on its mosaic floor ("Peace on Israel"). It appears to have been in use from the 5th to 8th centuries and contained a big mosaic on the floor with drawings of the Ark of the Covenant, the Menorah, a Shofar and a Lulav. Nearby in Naaran, there is another synagogue (discovered in 1918) from the 6th century that also has a mosaic floor. The synagogue in Eshtemoa (As-Samu) was built around the 4th century. The mosaic floor is decorated with only floral and geometric patterns. The synagogue in Khirbet Susiya (excavated in 1971–72, founded at the end of the 4th century) has three mosaic panels, the eastern one depicting a Torah shrine, two menorahs, a lulav and an etrog with columns, deer and rams. The central panel is geometric, while the western one is seriously damaged, but it has been suggested that it depicted Daniel in the lion's den. The Roman synagogue in Ein Gedi was remodeled in the Byzantine era and a more elaborate mosaic floor was laid down above the older white panels. The usual geometric design was enriched with birds in the center. It includes the names of the signs of the zodiac and important figures from the Jewish past but not their images, suggesting that it served a rather conservative community. The ban on figurative depiction was not taken so seriously by the Jews living in Byzantine Gaza. In 1966 remains of a synagogue were found in the ancient harbour area. Its mosaic floor depicts King David as Orpheus, identified by his name in Hebrew letters. Near him were lion cubs, a giraffe and a snake listening to him playing a lyre. A further portion of the floor was divided by medallions formed by vine leaves, each of which contains an animal: a lioness suckling her cub, a giraffe, peacocks, panthers, bears, a zebra and so on. The floor was paved in 508/509. 
It is very similar to that of the synagogue at Maon (Menois) and the Christian church at Shellal, suggesting that the same artist most probably worked at all three places. The House of Leontius in Bet She'an (excavated in 1964–72) is a rare example of a synagogue which was part of an inn. It was built in the Byzantine period. The colorful mosaic floor of the synagogue room had an outer stripe decorated with flowers and birds, around medallions with animals, created by vine trellises emerging from an amphora. The central medallion enclosed a menorah (candelabrum) beneath the word shalom (peace). A 5th-century building in Huldah may be a Samaritan synagogue. Its mosaic floor contains typical Jewish symbols (menorah, lulav, etrog) but the inscriptions are Greek. Another Samaritan synagogue with a mosaic floor was located in Bet She'an (excavated in 1960). The floor had only decorative motifs and an aedicule (shrine) with cultic symbols. The ban on human or animal images was more strictly observed by the Samaritans than their Jewish neighbours in the same town (see above). The mosaic was laid by the same masters who made the floor of the Beit Alfa synagogue. One of the inscriptions was written in Samaritan script. In 2003, a synagogue of the 5th or 6th century was uncovered in the coastal Ionian town of Saranda, Albania. It had exceptional mosaics depicting items associated with Jewish holidays, including a menorah, ram's horn, and lemon tree. Mosaics in the basilica of the synagogue show the facade of what resembles a Torah, animals, trees, and other biblical symbols. The structure measures 20 by 24 m and was probably last used in the 6th century as a church. Middle Eastern and Western Asian art Christian Arabia In South Arabia, two mosaic works from the late 3rd century were excavated at a Qatabanian site; the two plates bear geometric and grapevine designs reflecting the traditions of that culture. In the Ghassanid era religious mosaic art flourished in their territory; so far five churches with mosaics have been recorded from that era, two built by Ghassanid rulers and the other three by the Christian Arab community, who wrote their names and dedications. Zoroastrian Persia Tilework had been known there for about two thousand years when cultural exchange between the Sassanid Empire and the Romans influenced Persian artists to create mosaic patterns. Shapur I decorated his palace with tile compositions depicting dancers, musicians, courtesans, etc. This was the only significant example of figurative Persian mosaic, which became prohibited after the Arab conquest and the arrival of Islam. Islamic art Arab Islamic architecture used the mosaic technique to decorate religious buildings and palaces after the Muslim conquests of the eastern provinces of the Byzantine Empire. In Syria and Egypt the Arabs were influenced by the great tradition of Roman and Early Christian mosaic art. During the Umayyad Dynasty mosaic making remained a flourishing art form in Islamic culture, and it continued in the art of zellige and azulejo in various parts of the Arab world, although tile was to become the main Islamic form of wall decoration. The first great religious building of Islam, the Dome of the Rock in Jerusalem, which was built between 688 and 692, was decorated with glass mosaics both inside and outside, by craftsmen of the Byzantine tradition. Only parts of the original interior decoration survive. 
The rich floral motifs follow Byzantine traditions, and are "Islamic only in the sense that the vocabulary is syncretic and does not include representation of men or animals." The most important early Islamic mosaic work is the decoration of the Umayyad Mosque in Damascus, then capital of the Arab Caliphate. The mosque was built between 706 and 715. The caliph obtained 200 skilled workers from the Byzantine Emperor to decorate the building. This is evidenced by the partly Byzantine style of the decoration. The mosaics of the inner courtyard depict Paradise with beautiful trees, flowers and small hill towns and villages in the background. The mosaics include no human figures, which makes them different from the otherwise similar contemporary Byzantine works. The biggest continuous section survives under the western arcade of the courtyard, called the "Barada Panel" after the river Barada. It is thought that the mosque used to have the largest gold mosaic in the world, at over 4,000 m2. In 1893 a fire damaged the mosque extensively, and many mosaics were lost, although some have been restored since. The mosaics of the Umayyad Mosque gave inspiration to later Damascene mosaic works. The Dome of the Treasury, which stands in the mosque courtyard, is covered with fine mosaics, probably dating from 13th- or 14th-century restoration work. Their style is strikingly similar to that of the Barada Panel. The mausoleum of Sultan Baibars, Madrassa Zahiriyah, which was built after 1277, is also decorated with a band of golden floral and architectural mosaics, running around inside the main prayer hall. Non-religious Umayyad mosaic works were mainly floor panels which decorated the palaces of the caliphs and other high-ranking officials. They were closely modeled after the mosaics of the Roman country villas, once common in the Eastern Mediterranean. The most superb example can be found in the bath house of Hisham's Palace in Palestine, which was made around 744. The main panel depicts a large tree and underneath it a lion attacking a deer (right side) and two deer peacefully grazing (left side). The panel probably represents good and bad governance. Mosaics with classical geometric motifs survived in the bath area of the 8th-century Umayyad palace complex in Anjar, Lebanon. The luxurious desert residence of Al-Walid II at Qasr al-Hallabat (in present-day Jordan) was also decorated with floor mosaics that show a high level of technical skill. The best preserved panel at Hallabat is divided by a Tree of Life flanked by "good" animals on one side and "bad" animals on the other. Among the Hallabat representations are vine scrolls, grapes, pomegranates, oryx, wolves, hares, a leopard, pairs of partridges, fish, bulls, ostriches, rabbits, rams, goats, lions and a snake. At Qastal, near Amman, excavations in 2000 uncovered the earliest known Umayyad mosaics in present-day Jordan, dating probably from the caliphate of Abd al-Malik ibn Marwan (685–705). They cover much of the floor of a finely decorated building that probably served as the palace of a local governor. The Qastal mosaics depict geometrical patterns, trees, animals, fruits and rosettes. Except for the open courtyard, entrance and staircases, the floors of the entire palace were covered in mosaics. Some of the best examples of later Islamic mosaics were produced in Moorish Spain. The golden mosaics in the mihrab and the central dome of the Great Mosque in Córdoba have a decidedly Byzantine character. 
They were made between 965 and 970 by local craftsmen, supervised by a master mosaicist from Constantinople, who was sent by the Byzantine Emperor to the Umayyad Caliph of Spain. The decoration is composed of colorful floral arabesques and wide bands of Arab calligraphy. The mosaics were intended to evoke the glamour of the Great Mosque in Damascus, which had been lost to the Umayyad family. Mosaics generally went out of fashion in the Islamic world after the 8th century. Similar effects were achieved by the use of painted tilework, either geometric with small tiles, sometimes called mosaic, like the zillij of North Africa, or larger tiles painted with parts of a large decorative scheme (Qashani) in Persia, Turkey and further east. Modern mosaics Noted 19th-century mosaics include those by Edward Burne-Jones at St Paul's Within the Walls in Rome. Another modern mosaic of note is the world's largest mosaic installation, at the Cathedral Basilica of St. Louis in St. Louis, Missouri. A modern example of mosaic is the Museum of Natural History station of the New York City Subway (there are many such works of art scattered throughout the New York City subway system, though many IND stations are designed with bland mosaics). Another example of mosaics in ordinary surroundings is the use of locally themed mosaics in some restrooms in the rest areas along some Texas interstate highways. Some modern mosaics are the work of modernisme-style architects Antoni Gaudí and Josep Maria Jujol, for example the mosaics in the Park Güell in Barcelona. Today, among the leading figures of the mosaic world are Elaine M. Goodwin (UK), Felice Nittolo (Italy), Brit Hammer (Netherlands), Dugald MacInnes (Scotland), Heather Hancock (USA), Kelley Knickerbocker (USA), Aida Valencia (Mexico), Emma Biggs (UK), Helen Nock (UK), Marcelo de Melo (Brazil), Sonia King (USA) and Saimir Strati (Albania). As a popular craft Mosaics have developed into a popular craft and art, and are not limited to professionals. Today's artisans and crafters work with stone, ceramics, smalti, shells, art glass, mirror, beads, and even odd items like doll parts, pearls, or photographs. While ancient mosaics tended to be architectural, modern mosaics are found covering everything from park benches and flowerpots to guitars and bicycles. Items can be as small as an earring or as large as a house. Trencadís or pique assiette (a French term – "stolen from plate") is a mosaic made from pieces of broken pottery, china, glass, buttons, figurines, or jewelry which are cemented onto a base to create a new surface. Almost any form can be used as a base, and any combination of pieces can be applied, restricted only by the individual creator's imagination. In street art In styles that owe as much to videogame pixel art and pop culture as to traditional mosaic, street art has seen a novel reinvention and expansion of mosaic artwork. The most prominent artist working with mosaics in street art is the French Invader. He has done almost all his work in two very distinct mosaic styles, the first of which are small "traditional" tile mosaics of 8-bit video game characters, installed in cities across the globe, and the second of which is a style he refers to as "Rubikcubism", which uses a kind of dual-layer mosaic via grids of scrambled Rubik's Cubes. Although he is the most prominent, other street and urban artists work in mosaic styles as well. 
Calçada Portuguesa Portuguese pavement (in Portuguese, Calçada Portuguesa) is a kind of two-tone stone mosaic paving created in Portugal, and common throughout the Lusosphere. Most commonly taking the form of geometric patterns from the simple to the complex, it is also used to create complex pictorial mosaics in styles ranging from iconography to classicism and even modern design. In Portuguese-speaking countries, many cities have a large proportion of their sidewalks, and occasionally even their streets, done in this mosaic form. Lisbon in particular maintains almost all walkways in this style. Despite its prevalence and popularity throughout Portugal and its former colonies, and its relation to older art and architectural styles like azulejo (Portuguese and Spanish painted tilework), it is a relatively young mosaic art form; its first definitive appearance in a recognizably modern form was in the mid-1800s. Among the most commonly used stones in this style are basalt and limestone. Terminology Mosaic is an art form which uses small pieces of materials placed together to create a unified whole. The materials commonly used are marble or other stone, glass, pottery, mirror or foil-backed glass, or shells. The word mosaic is from the Italian mosaico, deriving from the Latin mosaicus and ultimately from the Greek mouseios, meaning belonging to the Muses, hence artistic. Each piece of material is a tessera (plural: tesserae). The space in between, where the grout goes, is an interstice. Andamento is the word used to describe the movement and flow of tesserae. The 'opus', the Latin for 'work', is the way in which the pieces are cut and placed. Common techniques include: Opus regulatum: A grid; all tesserae align both vertically and horizontally. Opus tessellatum: Tesserae form vertical or horizontal rows, but not both. Opus vermiculatum: One or more lines of tesserae follow the edge of a special shape (letters or a major central graphic). Opus musivum: Vermiculatum extends throughout the entire background. Opus palladianum: Instead of forming rows, tesserae are irregularly shaped. Also known as "crazy paving". Opus sectile: A major shape (e.g. heart, letter, cat) is formed by a single tessera, as later in pietra dura. Opus classicum: When vermiculatum is combined with tessellatum or regulatum. Opus circumactum: Tesserae are laid in overlapping semicircles or fan shapes. Micromosaic: using very small tesserae, in Byzantine icons and Italian panels for jewellery from the Renaissance on. Three techniques There are three main methods: the direct method, the indirect method and the double indirect method. Direct method The direct method of mosaic construction involves directly placing (gluing) the individual tesserae onto the supporting surface. This method is well suited to surfaces that have a three-dimensional quality, such as vases. This was used for the historic European wall and ceiling mosaics, following underdrawings of the main outlines on the wall below, which are often revealed again when the mosaic falls away. The direct method suits small projects that are transportable. Another advantage of the direct method is that the resulting mosaic is progressively visible, allowing for any adjustments to tile color or placement. The disadvantage of the direct method is that the artist must work directly at the chosen surface, which is often not practical for long periods of time, especially for large-scale projects. Also, it is difficult to control the evenness of the finished surface. 
This is of particular importance when creating a functional surface such as a floor or a table top. A modern version of the direct method, sometimes called "double direct," is to work directly onto fiberglass mesh. The mosaic can then be constructed with the design visible on the surface and transported to its final location. Large work can be done in this way, with the mosaic being cut up for shipping and then reassembled for installation. It enables the artist to work in comfort in a studio rather than at the site of installation. Indirect method The indirect method of applying tesserae is often used for very large projects, projects with repetitive elements or for areas needing site-specific shapes. Tesserae are applied face-down to a backing paper using a water-soluble adhesive. Once the mosaic is completed in the studio it is transferred in sections to the site and cemented, paper facing outwards. Once fixed, the paper is dampened and removed. This method is most useful for extremely large projects as it gives the maker time to rework areas, allows the cementing of the tesserae to the backing panel to be carried out quickly in one operation, and helps ensure that the front surfaces of the mosaic tiles and mosaic pieces are flat and in the same plane on the front, even when using tiles and pieces of differing thicknesses. Mosaic murals, benches and tabletops are some of the items usually made using the indirect method, as it results in a smoother and more even surface. Mathematics The question of how best to arrange variously shaped tiles on a surface leads to the mathematical field of tessellation. The artist M. C. Escher was influenced by Moorish mosaics to begin his investigations into tessellation. Digital imaging A photomosaic, pioneered by Joseph Francis, is a picture made up of various other pictures, in which each "pixel", when examined closely, is another picture. This form has been adopted in many modern media and digital image searches. A tile mosaic is a digital image made up of individual tiles, arranged in a non-overlapping fashion, e.g. to make a static image on a shower room or bathing pool floor, by breaking the image down into square pixels formed from ceramic tiles (a typical size is , as for example, on the floor of the University of Toronto pool, though sometimes larger tiles such as are used). These digital images are coarse in resolution and often simply express text, such as the depth of the pool in various places, but some such digital images are used to show a sunset or other beach theme. Recent developments in digital image processing have led to the ability to design physical tile mosaics using computer aided design (CAD) software. The software typically takes as inputs a source bitmap and a palette of colored tiles. The software makes a best-fit match of the tiles to the source image. In order to place tiles in the manner of opus vermiculatum, the first step is to find the edges of visually important objects in the image. A Python implementation of a complete algorithm for converting a pixel image to a mosaic vector image is available. Robotic manufacturing With the high cost of labor in developed countries, production automation has become increasingly popular. Rather than being assembled by hand, mosaics designed using computer aided design (CAD) software can be assembled by a robot. Production can be more than 10 times faster, with higher accuracy. But these "computer" mosaics have a different look from hand-made "artisanal" mosaics. 
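The best-fit matching step mentioned above, which ultimately drives both the on-screen design and the placement instructions given to a robot, can be illustrated with a short sketch. The following Python fragment is only a simplified illustration, not the implementation referred to in the text; the palette and the tiny source "bitmap" are invented for the example, and each cell is simply assigned the palette tile whose colour is closest in squared RGB distance.

# Minimal sketch of best-fit tile matching for a digital tile mosaic.
# The palette and image data below are hypothetical examples, not real inputs.

def squared_distance(c1, c2):
    # Squared Euclidean distance between two RGB colours.
    return sum((a - b) ** 2 for a, b in zip(c1, c2))

def best_fit(cell, palette):
    # Return the name of the palette tile whose colour is closest to the cell.
    return min(palette, key=lambda name: squared_distance(cell, palette[name]))

def tile_mosaic(image, palette):
    # Map every cell of the source grid to its best-fitting tile.
    return [[best_fit(cell, palette) for cell in row] for row in image]

palette = {
    "white": (240, 240, 240),
    "black": (20, 20, 20),
    "red": (200, 30, 30),
    "blue": (30, 60, 200),
}
image = [
    [(250, 250, 250), (10, 10, 10), (190, 40, 35)],
    [(40, 70, 210), (230, 235, 240), (15, 25, 20)],
]
print(tile_mosaic(image, palette))
# [['white', 'black', 'red'], ['blue', 'white', 'black']]

A production system would add the edge detection described above to orient tiles along important contours in the manner of opus vermiculatum, and would emit the result as a cutting or placement command file; the sketch shows only the colour-matching stage.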
With robotic production, colored tiles are loaded into buffers, and then the robot picks and places tiles individually according to a command file from the design software. See also Pixel art Terrazzo Tessellation Church of the priest Félix and baptistry of Kélibia
Mosaic
[ "Technology", "Engineering" ]
16,149
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
61,325
https://en.wikipedia.org/wiki/Wilhelm%20R%C3%B6ntgen
Wilhelm Conrad Röntgen (anglicized as Roentgen; 27 March 1845 – 10 February 1923) was a German physicist, who, on 8 November 1895, produced and detected electromagnetic radiation in a wavelength range known as X-rays or Röntgen rays, an achievement that earned him the inaugural Nobel Prize in Physics in 1901. In honour of Röntgen's accomplishments, in 2004, the International Union of Pure and Applied Chemistry (IUPAC) named element 111, roentgenium, a radioactive element with multiple unstable isotopes, after him. The non-SI unit of radiation exposure, the roentgen (R), is also named after him. Biographical history Education He was born to Friedrich Conrad Röntgen, a German merchant and cloth manufacturer, and Charlotte Constanze Frowein. When he was aged three, his family moved to the Netherlands, where his mother's family lived. Röntgen attended high school at Utrecht Technical School in Utrecht, Netherlands. He followed courses at the Technical School for almost two years. In 1865, he was unfairly expelled from high school when one of his teachers intercepted a caricature of one of the teachers, which was drawn by someone else. Without a high school diploma, Röntgen could only attend university in the Netherlands as a visitor. In 1865, he tried to attend Utrecht University without having the necessary credentials required for a regular student. Upon hearing that he could enter the Federal Polytechnic Institute in Zürich (today known as the ETH Zurich), he passed the entrance examination and began his studies there as a student of mechanical engineering. In 1869, he graduated with a PhD from the University of Zurich; there he became a favourite student of Professor August Kundt, whom he followed to the newly founded German Kaiser-Wilhelms-Universität in Strasbourg. Career In 1874, Röntgen became a lecturer at the University of Strasbourg. In 1875, he became a professor at the Academy of Agriculture at Hohenheim, Württemberg. He returned to Strasbourg as a professor of physics in 1876, and in 1879, he was appointed to the chair of physics at the University of Giessen. In 1888, he obtained the physics chair at the University of Würzburg, and in 1900 at the University of Munich, by special request of the Bavarian government. Röntgen had family in Iowa in the United States and planned to emigrate. He accepted an appointment at Columbia University in New York City and bought transatlantic tickets, before the outbreak of World War I changed his plans. He remained in Munich for the rest of his career. Discovery of X-rays During 1895, at his laboratory in the Würzburg Physical Institute of the University of Würzburg, Röntgen was investigating the external effects of passing an electrical discharge through various types of vacuum tube equipment—apparatuses from Heinrich Hertz, Johann Hittorf, William Crookes, Nikola Tesla and Philipp von Lenard. In early November, he was repeating an experiment with one of Lenard's tubes in which a thin aluminium window had been added to permit the cathode rays to exit the tube but a cardboard covering was added to protect the aluminium from damage by the strong electrostatic field that produces the cathode rays. Röntgen knew that the cardboard covering prevented light from escaping, yet he observed that the invisible cathode rays caused a fluorescent effect on a small cardboard screen painted with barium platinocyanide when it was placed close to the aluminium window. 
It occurred to Röntgen that the Crookes–Hittorf tube, which had a much thicker glass wall than the Lenard tube, might also cause this fluorescent effect. In the late afternoon of 8 November 1895, Röntgen was determined to test his idea. He carefully constructed a black cardboard covering similar to the one he had used on the Lenard tube. He covered the Crookes–Hittorf tube with the cardboard and attached electrodes to a Ruhmkorff coil to generate an electrostatic charge. Before setting up the barium platinocyanide screen to test his idea, Röntgen darkened the room to test the opacity of his cardboard cover. As he passed the Ruhmkorff coil charge through the tube, he determined that the cover was light-tight and turned to prepare for the next step of the experiment. It was at this point that Röntgen noticed a faint shimmering from a bench a few feet away from the tube. To be sure, he tried several more discharges and saw the same shimmering each time. Striking a match, he discovered the shimmering had come from the location of the barium platinocyanide screen he had been intending to use next. Based on the formation of regular shadows, Röntgen termed the phenomenon "rays". As 8 November was a Friday, he took advantage of the weekend to repeat his experiments and made his first notes. In the following weeks, he ate and slept in his laboratory as he investigated many properties of the new rays he temporarily termed "X-rays", using the mathematical designation ("X") for something unknown. The new rays came to bear his name in many languages as "Röntgen rays" (and the associated X-ray radiograms as "Röntgenograms"). At one point, while he was investigating the ability of various materials to stop the rays, Röntgen brought a small piece of lead into position while a discharge was occurring. Röntgen thus saw the first radiographic image: his own flickering ghostly skeleton on the barium platinocyanide screen. About six weeks after his discovery, he took a picture—a radiograph—using X-rays of his wife Anna Bertha's hand. When she saw her skeleton she exclaimed "I have seen my death!" He later took a better picture of his friend Albert von Kölliker's hand at a public lecture. Röntgen's original paper, "On A New Kind of Rays" (Ueber eine neue Art von Strahlen), was published on 28 December 1895. On 5 January 1896, an Austrian newspaper reported Röntgen's discovery of a new type of radiation. Röntgen was awarded an honorary Doctor of Medicine degree from the University of Würzburg after his discovery. He also received the Rumford Medal of the British Royal Society in 1896, jointly with Philipp Lenard, who had already shown that a portion of the cathode rays could pass through a thin film of a metal such as aluminium. Röntgen published a total of three papers on X-rays between 1895 and 1897. Today, Röntgen is considered the father of diagnostic radiology, the medical speciality which uses imaging to diagnose disease. Personal life Röntgen was married to Anna Bertha Ludwig for 47 years until her death in 1919 at the age of 80. In 1866, they met in Zürich at Anna's father's café, Zum Grünen Glas. They became engaged in 1869 and wed in Apeldoorn, Netherlands on 7 July 1872; the delay was due to Anna being six years Wilhelm's senior and his father not approving of her age or humble background. Their marriage began with financial difficulties as family support from Röntgen had ceased. 
They raised one child, Josephine Bertha Ludwig, whom they adopted as a six-year-old after her father, Anna's only brother, died in 1887. For ethical reasons, Röntgen did not seek patents for his discoveries, holding the view that they should be publicly available without charge. After receiving his Nobel prize money, Röntgen donated the 50,000 Swedish krona to research at the University of Würzburg. Although he accepted the honorary degree of Doctor of Medicine, he rejected an offer of lower nobility, or Niederer Adelstitel, declining the preposition von (meaning "of") as a nobiliary particle (i.e., von Röntgen). With the inflation following World War I, Röntgen fell into bankruptcy, spending his final years at his country home at Weilheim, near Munich. Röntgen died on 10 February 1923 from carcinoma of the intestine, also known as colorectal cancer. In keeping with his will, his personal and scientific correspondence was, with few exceptions, destroyed upon his death. He was a member of the Dutch Reformed Church. Awards and honors 1896: Rumford Medal of the Royal Society 1896: Matteucci Medal of the Accademia nazionale delle scienze 1897: Elliott Cresson Medal of the Franklin Institute 1900: Barnard Medal for Meritorious Service to Science of Columbia University 1901: Nobel Prize in Physics for the discovery of X-rays In 1901, Röntgen was awarded the first Nobel Prize in Physics. The award was officially "in recognition of the extraordinary services he has rendered by the discovery of the remarkable rays subsequently named after him". Shy in public speaking, he declined to give a Nobel lecture. Röntgen donated the 50,000 Swedish krona reward from his Nobel Prize to research at his university, the University of Würzburg. Like Marie and Pierre Curie, Röntgen refused to take out patents related to his discovery of X-rays, as he wanted society as a whole to benefit from practical applications of the phenomenon. Röntgen was also awarded the Barnard Medal for Meritorious Service to Science in 1900. In November 2004, IUPAC named element number 111 roentgenium (Rg) in his honor. IUPAP adopted the name in November 2011. He was elected an International Member of the American Philosophical Society in 1897. In 1907, he became a foreign member of the Royal Netherlands Academy of Arts and Sciences. Legacy A collection of his papers is held at the National Library of Medicine in Bethesda, Maryland. Today, in Remscheid-Lennep, Röntgen's birthplace, 40 kilometres east of Düsseldorf, is the Deutsches Röntgen-Museum. In Würzburg, where he discovered X-rays, a non-profit organization maintains his laboratory and provides guided tours to the Röntgen Memorial Site. World Radiography Day: World Radiography Day is an annual event promoting the role of medical imaging in modern healthcare. It is celebrated on 8 November each year, coinciding with the anniversary of Röntgen's discovery. It was first introduced in 2012 as a joint initiative between the European Society of Radiology, the Radiological Society of North America, and the American College of Radiology. As of 2023, 55 stamps from 40 countries have been issued commemorating Röntgen as the discoverer of X-rays. Röntgen Peak in Antarctica is named after Wilhelm Röntgen. Minor planet 6401 Roentgen is named after him. 
See also German inventors and discoverers Röntgen Memorial Site Ivan Puluj External links Annotated bibliography for Wilhelm Röntgen from the Alsos Digital Library Wilhelm Conrad Röntgen Biography The Cathode Ray Tube site First X-ray Photogram The American Roentgen Ray Society Deutsches Röntgen-Museum (German Röntgen Museum, Remscheid-Lennep) Röntgen Rays: Memoirs by Röntgen, Stokes, and J.J. Thomson (circa 1899) The New Marvel in Photography, an article on and interview with Röntgen, in McClure's magazine, Vol. 6, No. 5, April 1896, from Project Gutenberg Röntgen's 1895 article, on line and analyzed on BibNum [click 'à télécharger' for English analysis]
Wilhelm Röntgen
[ "Physics" ]
2,561
[ "Particle physicists", "Experimental physics", "Experimental physicists", "Particle physics" ]
61,334
https://en.wikipedia.org/wiki/Pygmy%20hippopotamus
The pygmy hippopotamus or pygmy hippo (Choeropsis liberiensis) is a small hippopotamid which is native to the forests and swamps of West Africa, primarily in Liberia, with small populations in Sierra Leone, Guinea, and Ivory Coast. It has been extirpated from Nigeria. The pygmy hippo is reclusive and nocturnal. It is one of only two extant species in the family Hippopotamidae, the other being its much larger relative, the common hippopotamus (Hippopotamus amphibius) or Nile hippopotamus. The pygmy hippopotamus displays many terrestrial adaptations, but like the common hippo, it is semiaquatic and relies on water to keep its skin moist and its body temperature cool. Behaviors such as mating and giving birth may occur in water or on land. The pygmy hippo is herbivorous, feeding on ferns, broad-leaved plants, grasses, and fruits it finds in the forests. A rare nocturnal forest creature, the pygmy hippopotamus is a difficult animal to study in the wild. Pygmy hippos were unknown outside West Africa until the 19th century. Introduced to zoos in the early 20th century, they breed well in captivity and the vast majority of research is derived from zoo specimens. The survival of the species in captivity is more assured than in the wild; in a 2015 assessment, the International Union for Conservation of Nature estimated that fewer than 2,500 pygmy hippos remain in the wild. Pygmy hippos are primarily threatened by loss of habitat, as forests are logged and converted to farm land, and are also vulnerable to poaching, hunting for bushmeat, natural predators, and war. Pygmy hippos are among the species illegally hunted for food in Liberia. Taxonomy and origins Nomenclature of the pygmy hippopotamus reflects that of the hippopotamus; the plural form is pygmy hippopotamuses or pygmy hippopotami. A male pygmy hippopotamus is known as a bull, a female as a cow, and a baby as a calf. A group of hippopotami is known as a herd or a bloat. The pygmy hippopotamus is a member of the family Hippopotamidae where it is classified as a member of the genus Choeropsis ("resembling a hog"). Members of Hippopotamidae are sometimes known as hippopotamids. Sometimes the sub-family Hippopotaminae is used. Further, some taxonomists group hippopotami and anthracotheres in the superfamily Anthracotheroidea or Hippopotamoidea. The taxonomy of the genus of the pygmy hippopotamus has changed as understanding of the animal has developed. Samuel G. Morton initially classified the animal as Hippopotamus minor, but later determined it was distinct enough to warrant its own genus, and labeled it Choeropsis. In 1977, Shirley C. Coryndon proposed that the pygmy hippopotamus was closely related to Hexaprotodon, a genus that consisted of prehistoric hippos mostly native to Asia. This assertion was widely accepted, until Boisserie asserted in 2005 that the pygmy hippopotamus was not a member of Hexaprotodon, after a thorough examination of the phylogeny of Hippopotamidae. He suggested instead that the pygmy hippopotamus was a distinct genus, and returned the animal to Choeropsis. ITIS verifies Hexaprotodon liberiensis as the valid scientific name. All agree that the modern pygmy hippopotamus, be it H. liberiensis or C. liberiensis, is the only extant member of its genus. The American Society of Mammalogists moved it back to Choeropsis in 2021, a move supported by the IUCN. Nigerian subspecies A distinct subspecies of pygmy hippopotamus existed in Nigeria until at least the 20th century, though the validity of this has been questioned. 
The existence of the subspecies makes Choeropsis liberiensis liberiensis (or Hexaprotodon liberiensis liberiensis under the old classification) the full trinomial nomenclature for the Liberian pygmy hippopotamus. The Nigerian pygmy hippopotamus was never studied in the wild and never captured. All research and all zoo specimens are the Liberian subspecies. The Nigerian subspecies is classified as C. liberiensis heslopi. The Nigerian pygmy hippopotamus ranged in the Niger River Delta, especially near Port Harcourt, but no reliable reports exist after the collection of the museum specimens secured by Ian Heslop, a British colonial officer, in the early 1940s. It is probably extinct. The subspecies is separated by over and the Dahomey Gap, a region of savanna that divides the forest regions of West Africa. The subspecies is named after Heslop, who shot three members of it in 1935 and 1943. He estimated that perhaps no more than 30 pygmy hippos remained in the region. Heslop sent four pygmy hippopotamus skulls he collected to the British Museum of Natural History in London. These specimens were not subjected to taxonomic evaluation, however, until 1969, when the skulls were classified as belonging to a separate subspecies based on consistent variations in their proportions. The Nigerian pygmy hippos were seen or shot in Rivers State, Imo State and Bayelsa State, Nigeria. While some local people are aware that the species once existed, its history in the region is poorly documented. Evolution The evolution of the pygmy hippopotamus is most often studied in the context of its larger cousin. Both species were long believed to be most closely related to the family Suidae (pigs and hogs) or Tayassuidae (peccaries), but research within the last 10 years has determined that pygmy hippos and hippos are most closely related to cetaceans (whales and dolphins). Hippos and whales shared a common semi-aquatic ancestor that branched off from other artiodactyls around . This hypothesized ancestor likely split into two branches about six million years later. One branch would evolve into cetaceans; the other became the anthracotheres, a large family of four-legged beasts, whose earliest member, from the Late Eocene, would have resembled narrow hippopotami with comparatively small and thin heads. Hippopotamids are deeply nested within the family Anthracotheriidae. The oldest known hippopotamid is the genus Kenyapotamus, which lived in Africa from . Kenyapotamus is known only through fragmentary fossils, but was similar in size to C. liberiensis. The Hippopotamidae are believed to have evolved in Africa, and while at one point the species spread across Asia and Europe, no hippopotami have ever been discovered in the Americas. The Archaeopotamus, likely ancestors to the genera Hippopotamus and Hexaprotodon, lived in Africa and the Middle East. While the fossil record of hippos is still poorly understood, the lineages of the two modern genera, Hippopotamus and Choeropsis, may have diverged as far back as . The ancestral form of the pygmy hippopotamus may be the genus Saotherium. Saotherium and Choeropsis are significantly more basal than Hippopotamus and Hexaprotodon, and thus more closely resemble the ancestral species of hippos. Extinct pygmy and dwarf hippos Several species of small hippopotamids have also become extinct in the Mediterranean in the late Pleistocene or early Holocene. 
Though these species are sometimes known as "pygmy hippopotami" they are not believed to be closely related to C. liberiensis. These include the Cretan dwarf hippopotamus (Hippopotamus creutzburgi), the Sicilian hippopotamus (Hippopotamus pentlandi), the Maltese hippopotamus (Hippopotamus melitensis) and the Cyprus dwarf hippopotamus (Hippopotamus minor). These species, though comparable in size to the pygmy hippopotamus, are considered dwarf hippopotamuses, rather than pygmies. They are likely descended from a full-sized species of European hippopotamus, and reached their small size through the evolutionary process of insular dwarfism which is common on islands; the ancestors of pygmy hippopotami were also small and thus there was never a dwarfing process. There were also several species of pygmy hippo on the island of Madagascar (see Malagasy hippopotamus). Description Pygmy hippos share the same general form as a hippopotamus. They have a graviportal skeleton, with four stubby legs and four toes on each foot, supporting a portly frame. Yet, the pygmy is only half as tall as the hippopotamus and weighs less than 1/4 as much as its larger cousin. Adult pygmy hippos stand about high at the shoulder, are in length and weigh . Their lifespan in captivity ranges from 30 to 55 years, though it is unlikely that they live this long in the wild. The skin is greenish-black or brown, shading to a creamy gray on the lower body. Their skin is very similar to the common hippo's, with a thin epidermis over a dermis that is several centimeters thick. Pygmy hippos have the same unusual secretion as common hippos, that gives a pinkish tinge to their bodies, and is sometimes described as "blood sweat" though the secretion is neither sweat nor blood. This substance, hipposudoric acid, is believed to have antiseptic and sunscreening properties. The skin of hippos dries out quickly and cracks, which is why both species spend so much time in water. The skeleton of C. liberiensis is more gracile than that of the common hippopotamus, meaning their bones are proportionally thinner. The common hippo's spine is parallel with the ground; the pygmy hippo's back slopes forward, a likely adaptation to pass more easily through dense forest vegetation. Proportionally, the pygmy hippo's legs and neck are longer and its head smaller. The orbits and nostrils of a pygmy hippo are much less pronounced, an adaptation from spending less time in deep water (where pronounced orbits and nostrils help the common hippo breathe and see). The feet of pygmy hippos are narrower, but the toes are more spread out and have less webbing, to assist in walking on the forest floor. Despite adaptations to a more terrestrial life than the common hippopotamus, pygmy hippos are still more aquatic than all other terrestrial even-toed ungulates. The ears and nostrils of pygmy hippos have strong muscular valves to aid submerging underwater, and the skin physiology is dependent on the availability of water. Behavior The behavior of the pygmy hippo differs from the common hippo in many ways. Much of its behavior is more similar to that of a tapir, though this is an effect of convergent evolution. While the common hippopotamus is gregarious, pygmy hippos live either alone or in small groups, typically a mated pair or a mother and calf. Pygmy hippos tend to ignore each other rather than fight when they meet. 
Field studies have estimated that male pygmy hippos range over , while the range of a female is . Pygmy hippos spend most of the day hidden in rivers. They will rest in the same spot for several days in a row, before moving to a new spot. At least some pygmy hippos make use of dens or burrows that form in river banks. It is unknown if the pygmy hippos help create these dens, or how common it is to use them. Though a pygmy hippo has never been observed burrowing, other artiodactyls, such as warthogs, are burrowers. Diet Like the common hippopotamus, the pygmy hippo emerges from the water at dusk to feed. It relies on game trails to travel through dense forest vegetation. It marks trails by vigorously waving its tail while defecating to further spread its feces. The pygmy hippo spends about six hours a day foraging for food. Pygmy hippos are herbivorous. They do not eat aquatic vegetation to a significant extent and rarely eat grass because it is uncommon in the thick forests they inhabit. The bulk of a pygmy hippo's diet consists of herbs, ferns, broad-leaved plants, herbaceous shoots, forbs, sedges and fruits that have fallen to the forest floor. The wide variety of plants pygmy hippos have been observed eating suggests that they will eat any plants available. This diet is of higher quality than that of the common hippopotamus. Reproduction A study of breeding behavior in the wild has never been conducted; the artificial conditions of captivity may cause the observed behavior of pygmy hippos in zoos to differ from natural conditions. Sexual maturity for the pygmy hippopotamus occurs between three and five years of age. The youngest reported age for giving birth is that of a pygmy hippo at Zoo Basel, Switzerland, which bore a calf at three years and three months. The oestrus cycle of a female pygmy hippo lasts an average of 35.5 days, with the oestrus itself lasting between 24 and 48 hours. Pygmy hippos consort for mating, but the duration of the relationship is unknown. In zoos they breed as monogamous pairs. Copulation can take place on land or in the water, and a pair will mate one to four times during an oestrus period. In captivity, pygmy hippos have been conceived and born in all months of the year. The gestation period ranges from 190 to 210 days, and usually a single young is born, though twins are known to occur. The common hippopotamus gives birth and mates only in the water, but pygmy hippos mate and give birth on both land and water. Young pygmy hippos can swim almost immediately. At birth, pygmy hippos weigh 4.5–6.2 kg (9.9–13.7 lb) with males weighing about 0.25 kg (0.55 lb) more than females. Pygmy hippos are fully weaned between six and eight months of age; before weaning they do not accompany their mother when she leaves the water to forage, but instead hide in the water by themselves. The mother returns to the hiding spot about three times a day and calls out for the calf to suckle. Suckling occurs with the mother lying on her side. Temperament Although not considered dangerous to humans and generally docile, pygmy hippos can be highly aggressive at times. Although there have been no human deaths associated with pygmy hippos, there have been several attacks; while most of these were provoked by human behaviour, some had no apparent cause. Conservation The greatest threat to the remaining pygmy hippopotamus population in the wild is loss of habitat. 
The forests in which pygmy hippos live have been subject to logging, settling and conversion to agriculture, with little efforts made to make logging sustainable. As forests shrink, the populations become more fragmented, leading to less genetic diversity in the potential mating pool. Pygmy hippos are among the species illegally hunted for food in Liberia. Their meat is said to be of excellent quality, like that of a wild boar; unlike those of the common hippo, the pygmy hippo's teeth have no value. The effects of West Africa's civil strife on the pygmy hippopotamus are unknown, but unlikely to be positive. The pygmy hippopotamus can be killed by leopards, pythons and crocodiles. How often this occurs is unknown. C. liberiensis was identified as one of the top 10 "focal species" in 2007 by the Evolutionarily Distinct and Globally Endangered (EDGE) project. Some populations inhabit protected areas, such as the Gola Forest Reserve in Sierra Leone. Basel Zoo in Switzerland holds the international studbook and coordinates the entire captive pygmy hippo population that freely breeds in zoos around the world. Between 1970 and 1991 the population of pygmy hippos born in captivity more than doubled. The survival of the species in zoos is more certain than the survival of the species in the wild. In captivity, the pygmy hippo lives from 42 to 55 years, longer than in the wild. Since 1919, only 41 percent of pygmy hippos born in zoos have been male. History and folklore While the common hippopotamus has been known to Europeans since classical antiquity, the pygmy hippopotamus was unknown outside its range in West Africa until the 19th century. Due to their nocturnal, forested existence, they were poorly known within their range as well. In Liberia the animal was traditionally known as a water cow. Early field reports of the animal misidentified it as a wild hog. Several skulls of the species were sent to the American natural scientist Samuel G. Morton, during his residency in Monrovia, Liberia. Morton first described the species in 1843. The first complete specimens were collected as part of a comprehensive investigation of Liberian fauna in the 1870s and 1880s by Dr. Johann Büttikofer. The specimens were taken to the Natural History Museum in Leiden, The Netherlands. The first pygmy hippo was brought to Europe in 1873 after being captured in Sierra Leone by a member of the British Colonial Service but died shortly after arrival. Pygmy hippos were successfully established in European zoos in 1911. They were first shipped to Germany and then to the Bronx Zoo in New York City where they also thrived. In 1927, Harvey Firestone of Firestone Tires presented Billy the pygmy hippo to U.S. President Calvin Coolidge. Coolidge donated Billy to the National Zoo in Washington, D.C. According to the zoo, Billy is a common ancestor to most pygmy hippos in U.S. zoos today. Moo Deng is a pygmy hippo living in Khao Kheow Open Zoo, in Thailand, who gained notability in September 2024 as a popular Internet meme after images of her went viral online. Because of the popularity of the hippo, whose name translates to "bouncy pork", the zoo saw a boosted attendance. It has been reported that some visitors to the zoo threw water and other objects at the baby hippo to get her to react. Several folktales have been collected about the pygmy hippopotamus. 
One tale says that pygmy hippos carry a shining diamond in their mouths to help travel through thick forests at night; by day the pygmy hippo has a secret hiding place for the diamond, but if a hunter catches a pygmy hippo at night the diamond can be taken. Villagers sometimes believed that baby pygmy hippos do not nurse but rather lick secretions off the skin of the mother. External links Videos of Pygmy Hippos at Arkive.org Pygmy hippo caught on camera in Liberia (video), BBC News 2011-12-19 Rare pygmy hippos caught on film, BBC News 2008-03-10 Camera trap results, Sapo National Park, Liberia, Zoological Society of London (EDGE of Existence Programme). 10 March 2008. First reports showing Pygmy Hippos in wild, surviving Liberian Civil War. Pygmy hippos survive two civil wars, Zoological Society of London Press Release, 10 March 2008. EDGE of Existence "(Pygmy hippo)", Saving the World's most Evolutionarily Distinct and Globally Endangered (EDGE) species
Pygmy hippopotamus
[ "Biology" ]
4,419
[ "EDGE species", "Biodiversity" ]
61,338
https://en.wikipedia.org/wiki/Addition
Addition (usually signified by the plus symbol +) is one of the four basic operations of arithmetic, the other three being subtraction, multiplication and division. The addition of two whole numbers results in the total amount or sum of those values combined. The example in the adjacent image shows two columns, one of three apples and one of two apples, totaling five apples. This observation is equivalent to the mathematical expression 3 + 2 = 5 (that is, "3 plus 2 is equal to 5"). Besides counting items, addition can also be defined and executed without referring to concrete objects, using abstractions called numbers instead, such as integers, real numbers and complex numbers. Addition belongs to arithmetic, a branch of mathematics. In algebra, another area of mathematics, addition can also be performed on abstract objects such as vectors, matrices, subspaces and subgroups. Addition has several important properties. It is commutative, meaning that the order of the operands does not matter, and it is associative, meaning that when one adds more than two numbers, the order in which addition is performed does not matter. Repeated addition of 1 is the same as counting (see Successor function). Addition of 0 does not change a number. Addition also obeys rules concerning related operations such as subtraction and multiplication. Performing addition is one of the simplest numerical tasks to do. Addition of very small numbers is accessible to toddlers; the most basic task, 1 + 1, can be performed by infants as young as five months, and even some members of other animal species. In primary education, students are taught to add numbers in the decimal system, starting with single digits and progressively tackling more difficult problems. Mechanical aids range from the ancient abacus to the modern computer, where research on the most efficient implementations of addition continues to this day. Notation and terminology Addition is written using the plus sign "+" between the terms; that is, in infix notation. The result is expressed with an equals sign. For example, 1 + 2 = 3 ("one plus two equals three"), 5 + 4 + 2 = 11 (see "associativity" below), and 3 + 3 + 3 + 3 = 12 (see "multiplication" below). There are also situations where addition is "understood", even though no symbol appears: A whole number followed immediately by a fraction indicates the sum of the two, called a mixed number. For example, 3½ = 3 + ½ = 3.5. This notation can cause confusion, since in most other contexts, juxtaposition denotes multiplication instead. The sum of a series of related numbers can be expressed through capital sigma notation, which compactly denotes iteration. For example, the sum of the first five squares can be written as Σ k² for k = 1 to 5, which equals 1 + 4 + 9 + 16 + 25 = 55. Terms The numbers or the objects to be added in general addition are collectively referred to as the terms, the addends or the summands; this terminology carries over to the summation of multiple terms. This is to be distinguished from factors, which are multiplied. Some authors call the first addend the augend. In fact, during the Renaissance, many authors did not consider the first addend an "addend" at all. Today, due to the commutative property of addition, "augend" is rarely used, and both terms are generally called addends. All of the above terminology derives from Latin. "Addition" and "add" are English words derived from the Latin verb addere, which is in turn a compound of ad "to" and dare "to give", from the Proto-Indo-European root "to give"; thus to add is to give to. Using the gerundive suffix -nd results in "addend", "thing to be added". Likewise from augere "to increase", one gets "augend", "thing to be increased". 
"Sum" and "summand" derive from the Latin noun summa "the highest, the top" and associated verb summare. This is appropriate not only because the sum of two positive numbers is greater than either, but because it was common for the ancient Greeks and Romans to add upward, contrary to the modern practice of adding downward, so that a sum was literally at the top of the addends. Addere and summare date back at least to Boethius, if not to earlier Roman writers such as Vitruvius and Frontinus; Boethius also used several other terms for the addition operation. The later Middle English terms "adden" and "adding" were popularized by Chaucer. The plus sign "+" (Unicode:U+002B; ASCII: &#43;) is an abbreviation of the Latin word et, meaning "and". It appears in mathematical works dating back to at least 1489. Interpretations Addition is used to model many physical processes. Even for the simple case of adding natural numbers, there are many possible interpretations and even more visual representations. Combining sets Possibly the most basic interpretation of addition lies in combining sets: When two or more disjoint collections are combined into a single collection, the number of objects in the single collection is the sum of the numbers of objects in the original collections. This interpretation is easy to visualize, with little danger of ambiguity. It is also useful in higher mathematics (for the rigorous definition it inspires, see below). However, it is not obvious how one should extend this version of addition to include fractional numbers or negative numbers. One possible fix is to consider collections of objects that can be easily divided, such as pies or, still better, segmented rods. Rather than solely combining collections of segments, rods can be joined end-to-end, which illustrates another conception of addition: adding not the rods but the lengths of the rods. Extending a length A second interpretation of addition comes from extending an initial length by a given length: When an original length is extended by a given amount, the final length is the sum of the original length and the length of the extension. The sum a + b can be interpreted as a binary operation that combines a and b, in an algebraic sense, or it can be interpreted as the addition of b more units to a. Under the latter interpretation, the parts of a sum play asymmetric roles, and the operation is viewed as applying the unary operation +b to a. Instead of calling both a and b addends, it is more appropriate to call a the augend in this case, since a plays a passive role. The unary view is also useful when discussing subtraction, because each unary addition operation has an inverse unary subtraction operation, and vice versa. Properties Commutativity Addition is commutative, meaning that one can change the order of the terms in a sum, but still get the same result. Symbolically, if a and b are any two numbers, then a + b = b + a. The fact that addition is commutative is known as the "commutative law of addition" or "commutative property of addition". Some other binary operations are commutative, such as multiplication, but many others, such as subtraction and division, are not. Associativity Addition is associative, which means that when three or more numbers are added together, the order of operations does not change the result. As an example, should the expression a + b + c be defined to mean (a + b) + c or a + (b + c)? Given that addition is associative, the choice of definition is irrelevant. 
For any three numbers a, b, and c, it is true that (a + b) + c = a + (b + c). For example, (1 + 2) + 3 = 3 + 3 = 6 = 1 + 5 = 1 + (2 + 3). When addition is used together with other operations, the order of operations becomes important. In the standard order of operations, addition is a lower priority than exponentiation, nth roots, multiplication and division, but is given equal priority to subtraction. Identity element Adding zero to any number does not change the number; this means that zero is the identity element for addition, and is also known as the additive identity. In symbols, for every a, one has a + 0 = 0 + a = a. This law was first identified in Brahmagupta's Brahmasphutasiddhanta in 628 AD, although he wrote it as three separate laws, depending on whether a is negative, positive, or zero itself, and he used words rather than algebraic symbols. Later Indian mathematicians refined the concept; around the year 830, Mahavira wrote, "zero becomes the same as what is added to it", corresponding to the unary statement 0 + a = a. In the 12th century, Bhaskara wrote, "In the addition of cipher, or subtraction of it, the quantity, positive or negative, remains the same", corresponding to the unary statement a + 0 = a. Successor Within the context of integers, addition of one also plays a special role: for any integer a, the integer a + 1 is the least integer greater than a, also known as the successor of a. For instance, 3 is the successor of 2 and 7 is the successor of 6. Because of this succession, the value of a + b can also be seen as the bth successor of a, making addition iterated succession. For example, 6 + 2 is 8, because 8 is the successor of 7, which is the successor of 6, making 8 the 2nd successor of 6. Units To numerically add physical quantities with units, they must be expressed with common units. For example, adding 50 milliliters to 150 milliliters gives 200 milliliters. However, if a measure of 5 feet is extended by 2 inches, the sum is 62 inches, since 60 inches is synonymous with 5 feet. On the other hand, it is usually meaningless to try to add 3 meters and 4 square meters, since those units are incomparable; this sort of consideration is fundamental in dimensional analysis. Performing addition Innate ability Studies on mathematical development starting around the 1980s have exploited the phenomenon of habituation: infants look longer at situations that are unexpected. A seminal experiment by Karen Wynn in 1992 involving Mickey Mouse dolls manipulated behind a screen demonstrated that five-month-old infants expect 1 + 1 to be 2, and they are comparatively surprised when a physical situation seems to imply that 1 + 1 is either 1 or 3. This finding has since been affirmed by a variety of laboratories using different methodologies. Another 1992 experiment with older toddlers, between 18 and 35 months, exploited their development of motor control by allowing them to retrieve ping-pong balls from a box; the youngest responded well for small numbers, while older subjects were able to compute sums up to 5. Even some nonhuman animals show a limited ability to add, particularly primates. In a 1995 experiment imitating Wynn's 1992 result (but using eggplants instead of dolls), rhesus macaque and cottontop tamarin monkeys performed similarly to human infants. More dramatically, after being taught the meanings of the Arabic numerals 0 through 4, one chimpanzee was able to compute the sum of two numerals without further training. More recently, Asian elephants have demonstrated an ability to perform basic arithmetic. Childhood learning Typically, children first master counting. 
When given a problem that requires that two items and three items be combined, young children model the situation with physical objects, often fingers or a drawing, and then count the total. As they gain experience, they learn or discover the strategy of "counting-on": asked to find two plus three, children count three past two, saying "three, four, five" (usually ticking off fingers), and arriving at five. This strategy seems almost universal; children can easily pick it up from peers or teachers. Most discover it independently. With additional experience, children learn to add more quickly by exploiting the commutativity of addition and counting up from the larger number, in this case, starting with three and counting "four, five." Eventually children begin to recall certain addition facts ("number bonds"), either through experience or rote memorization. Once some facts are committed to memory, children begin to derive unknown facts from known ones. For example, a child asked to add six and seven may know that 6 + 6 = 12 and then reason that 6 + 7 is one more, or 13. Such derived facts can be found very quickly and most elementary school students eventually rely on a mixture of memorized and derived facts to add fluently. Different nations introduce whole numbers and arithmetic at different ages, with many countries teaching addition in pre-school. However, throughout the world, addition is taught by the end of the first year of elementary school. Table Children are often presented with the addition table of pairs of numbers from 0 to 9 to memorize. Decimal system The prerequisite to addition in the decimal system is the fluent recall or derivation of the 100 single-digit "addition facts". One could memorize all the facts by rote, but pattern-based strategies are more enlightening and, for most people, more efficient: Commutative property: Mentioned above, using the pattern a + b = b + a reduces the number of "addition facts" from 100 to 55. One or two more: Adding 1 or 2 is a basic task, and it can be accomplished through counting on or, ultimately, intuition. Zero: Since zero is the additive identity, adding zero is trivial. Nonetheless, in the teaching of arithmetic, some students are introduced to addition as a process that always increases the addends; word problems may help rationalize the "exception" of zero. Doubles: Adding a number to itself is related to counting by two and to multiplication. Doubles facts form a backbone for many related facts, and students find them relatively easy to grasp. Near-doubles: Sums such as 6 + 7 = 13 can be quickly derived from the doubles fact 6 + 6 = 12 by adding one more, or from 7 + 7 = 14 by subtracting one. Five and ten: Sums of the form 5 + x and 10 + x are usually memorized early and can be used for deriving other facts. For example, 6 + 5 = 11 can be derived from 5 + 5 = 10 by adding one more. Making ten: An advanced strategy uses 10 as an intermediate for sums involving 8 or 9; for example, 8 + 6 = 8 + 2 + 4 = 10 + 4 = 14. As students grow older, they commit more facts to memory, and learn to derive other facts rapidly and fluently. Many students never commit all the facts to memory, but can still find any basic fact quickly. Carry The standard algorithm for adding multidigit numbers is to align the addends vertically and add the columns, starting from the ones column on the right. If a column exceeds nine, the extra digit is "carried" into the next column. For example, in the addition
 ¹
 27
+59
———
 86
7 + 9 = 16, and the digit 1 is the carry. 
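The carrying procedure just illustrated is mechanical enough to write out as code. The following is a minimal sketch in C, not taken from any particular curriculum or source: it adds two numbers stored as arrays of decimal digits, least significant digit first, exactly as in the 27 + 59 example; the function name and array layout are illustrative choices rather than a standard interface.

#include <stdio.h>

/* Add two numbers stored as decimal digits, least significant digit first.
   a has na digits, b has nb digits; sum must have room for max(na, nb) + 1 digits.
   Returns the number of digits written to sum. */
int add_digits(const int *a, int na, const int *b, int nb, int *sum) {
    int n = (na > nb) ? na : nb;
    int carry = 0;
    for (int i = 0; i < n; i++) {
        int da = (i < na) ? a[i] : 0;      /* missing digits count as 0 */
        int db = (i < nb) ? b[i] : 0;
        int s = da + db + carry;
        sum[i] = s % 10;                   /* digit that stays in this column */
        carry = s / 10;                    /* digit carried into the next column */
    }
    if (carry) { sum[n] = carry; n++; }
    return n;
}

int main(void) {
    int a[] = {7, 2};        /* 27, ones digit first */
    int b[] = {9, 5};        /* 59 */
    int sum[3];
    int n = add_digits(a, 2, b, 2, sum);
    for (int i = n - 1; i >= 0; i--) printf("%d", sum[i]);   /* prints 86 */
    printf("\n");
    return 0;
}

Running it on the digits of 27 and 59 produces 86, with the carry from 7 + 9 = 16 handled in the first pass through the loop.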
An alternate strategy starts adding from the most significant digit on the left; this route makes carrying a little clumsier, but it is faster at getting a rough estimate of the sum. There are many alternative methods. Since the end of the 20th century, some US programs, including TERC, decided to remove the traditional transfer method from their curriculum. This decision was criticized, which is why some states and counties did not support this experiment. Decimal fractions Decimal fractions can be added by a simple modification of the above process. One aligns two decimal fractions above each other, with the decimal point in the same location. If necessary, one can add trailing zeros to a shorter decimal to make it the same length as the longer decimal. Finally, one performs the same addition process as above, except the decimal point is placed in the answer, exactly where it was placed in the summands. As an example, 45.1 + 4.34 can be solved as follows:
  4 5 . 1 0
+ 0 4 . 3 4
————————————
  4 9 . 4 4
Scientific notation In scientific notation, numbers are written in the form a × 10^b, where a is the significand and 10^b is the exponential part. Addition requires two numbers in scientific notation to be represented using the same exponential part, so that the two significands can simply be added. For example: 2.34 × 10^5 + 5.6 × 10^3 = 2.34 × 10^5 + 0.056 × 10^5 = 2.396 × 10^5. Non-decimal Addition in other bases is very similar to decimal addition. As an example, one can consider addition in binary. Adding two single-digit binary numbers is relatively simple, using a form of carrying:
0 + 0 → 0
0 + 1 → 1
1 + 0 → 1
1 + 1 → 0, carry 1 (since 1 + 1 = 2 = 0 + 1 × 2¹)
Adding two "1" digits produces a digit "0", while 1 must be added to the next column. This is similar to what happens in decimal when certain single-digit numbers are added together; if the result equals or exceeds the value of the radix (10), the digit to the left is incremented:
5 + 5 → 0, carry 1 (since 5 + 5 = 10 = 0 + 1 × 10¹)
7 + 9 → 6, carry 1 (since 7 + 9 = 16 = 6 + 1 × 10¹)
This is known as carrying. When the result of an addition exceeds the value of a digit, the procedure is to "carry" the excess amount divided by the radix (that is, 10/10) to the left, adding it to the next positional value. This is correct since the next position has a weight that is higher by a factor equal to the radix. Carrying works the same way in binary:
    1 1 1 1 1    (carried digits)
      0 1 1 0 1
  +   1 0 1 1 1
  —————————————
    1 0 0 1 0 0  =  36
In this example, two numerals are being added together: 01101₂ (13₁₀) and 10111₂ (23₁₀). The top row shows the carry bits used. Starting in the rightmost column, 1 + 1 = 10₂. The 1 is carried to the left, and the 0 is written at the bottom of the rightmost column. The second column from the right is added: 1 + 0 + 1 = 10₂ again; the 1 is carried, and 0 is written at the bottom. The third column: 1 + 1 + 1 = 11₂. This time, a 1 is carried, and a 1 is written in the bottom row. Proceeding like this gives the final answer 100100₂ (36₁₀). Computers Analog computers work directly with physical quantities, so their addition mechanisms depend on the form of the addends. A mechanical adder might represent two addends as the positions of sliding blocks, in which case they can be added with an averaging lever. If the addends are the rotation speeds of two shafts, they can be added with a differential. A hydraulic adder can add the pressures in two chambers by exploiting Newton's second law to balance forces on an assembly of pistons. 
The most common situation for a general-purpose analog computer is to add two voltages (referenced to ground); this can be accomplished roughly with a resistor network, but a better design exploits an operational amplifier. Addition is also fundamental to the operation of digital computers, where the efficiency of addition, in particular the carry mechanism, is an important limitation to overall performance. The abacus, also called a counting frame, is a calculating tool that was in use centuries before the adoption of the written modern numeral system and is still widely used by merchants, traders and clerks in Asia, Africa, and elsewhere; it dates back to at least 2700–2300 BC, when it was used in Sumer. Blaise Pascal invented the mechanical calculator in 1642; it was the first operational adding machine. It made use of a gravity-assisted carry mechanism. It was the only operational mechanical calculator in the 17th century and the earliest automatic, digital computer. Pascal's calculator was limited by its carry mechanism, which forced its wheels to only turn one way so it could add. To subtract, the operator had to use Pascal's calculator's complement, which required as many steps as an addition. Giovanni Poleni followed Pascal, building the second functional mechanical calculator in 1709, a calculating clock made of wood that, once set up, could multiply two numbers automatically. Adders execute integer addition in electronic digital computers, usually using binary arithmetic. The simplest architecture is the ripple carry adder, which follows the standard multi-digit algorithm. One slight improvement is the carry skip design, again following human intuition; one does not perform all the carries in computing 999 + 1, but one bypasses the group of 9s and skips to the answer. In practice, computational addition may be achieved via XOR and AND bitwise logical operations in conjunction with bitshift operations, as shown in the pseudocode below. Both XOR and AND gates are straightforward to realize in digital logic, allowing the realization of full adder circuits, which in turn may be combined into more complex logical operations. In modern digital computers, integer addition is typically the fastest arithmetic instruction, yet it has the largest impact on performance, since it underlies all floating-point operations as well as such basic tasks as address generation during memory access and fetching instructions during branching. To increase speed, modern designs calculate digits in parallel; these schemes go by such names as carry select, carry lookahead, and the Ling pseudocarry. Many implementations are, in fact, hybrids of these last three designs. Unlike addition on paper, addition on a computer often changes the addends. On the ancient abacus and adding board, both addends are destroyed, leaving only the sum. The influence of the abacus on mathematical thinking was strong enough that early Latin texts often claimed that in the process of adding "a number to a number", both numbers vanish. In modern times, the ADD instruction of a microprocessor often replaces the augend with the sum but preserves the addend. In a high-level programming language, evaluating a + b does not change either a or b; if the goal is to replace a with the sum, this must be explicitly requested, typically with the statement a = a + b. Some languages such as C or C++ allow this to be abbreviated as a += b. 
// Iterative algorithm
int add(int x, int y) {
    int carry = 0;
    while (y != 0) {
        carry = x & y;   // bitwise AND collects the carry bits
        x = x ^ y;       // bitwise XOR adds the digits without carrying
        y = carry << 1;  // shift the carries left by one place
    }
    return x;
}

// Recursive algorithm (renamed so both versions can coexist in one file)
int add_recursive(int x, int y) {
    return (y == 0) ? x : add_recursive(x ^ y, (x & y) << 1);
}

On a computer, if the result of an addition is too large to store, an arithmetic overflow occurs, resulting in an incorrect answer. Unanticipated arithmetic overflow is a fairly common cause of program errors. Such overflow bugs may be hard to discover and diagnose because they may manifest themselves only for very large input data sets, which are less likely to be used in validation tests. The Year 2000 problem was a series of bugs where overflow errors occurred due to use of a 2-digit format for years. Addition of numbers To prove the usual properties of addition, one must first define addition for the context in question. Addition is first defined on the natural numbers. In set theory, addition is then extended to progressively larger sets that include the natural numbers: the integers, the rational numbers, and the real numbers. (In mathematics education, positive fractions are added before negative numbers are even considered; this is also the historical route.) Natural numbers There are two popular ways to define the sum of two natural numbers a and b. If one defines natural numbers to be the cardinalities of finite sets (the cardinality of a set is the number of elements in the set), then it is appropriate to define their sum as follows: Let N(S) be the cardinality of a set S. Take two disjoint sets A and B, with N(A) = a and N(B) = b. Then a + b is defined as N(A ∪ B). Here, A ∪ B is the union of A and B. An alternate version of this definition allows A and B to possibly overlap and then takes their disjoint union, a mechanism that allows common elements to be separated out and therefore counted twice. The other popular definition is recursive: Let n+ be the successor of n, that is the number following n in the natural numbers, so 0+ = 1, 1+ = 2. Define a + 0 = a. Define the general sum recursively by a + (b+) = (a + b)+. Hence 1 + 1 = 1 + (0+) = (1 + 0)+ = 1+ = 2. Again, there are minor variations upon this definition in the literature. Taken literally, the above definition is an application of the recursion theorem on the partially ordered set N². On the other hand, some sources prefer to use a restricted recursion theorem that applies only to the set of natural numbers. One then considers a to be temporarily "fixed", applies recursion on b to define a function "a +", and pastes these unary operations for all a together to form the full binary operation. This recursive formulation of addition was developed by Dedekind as early as 1854, and he would expand upon it in the following decades. He proved the associative and commutative properties, among others, through mathematical induction. Integers The simplest conception of an integer is that it consists of an absolute value (which is a natural number) and a sign (generally either positive or negative). The integer zero is a special third case, being neither positive nor negative. The corresponding definition of addition must proceed by cases: For an integer n, let |n| be its absolute value. Let a and b be integers. If either a or b is zero, treat it as an identity. If a and b are both positive, define a + b = |a| + |b|. If a and b are both negative, define a + b = −(|a| + |b|). If a and b have different signs, define a + b to be the difference between |a| and |b|, with the sign of the term whose absolute value is larger. 
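The case-by-case definition above is already essentially an algorithm. A minimal sketch in C, assuming a sign-and-magnitude representation; the struct layout and the function name are illustrative assumptions, not taken from the source:

/* Sign-and-magnitude integer: sign is -1, 0, or +1; mag is a natural number. */
struct sm_int { int sign; unsigned mag; };

struct sm_int sm_add(struct sm_int a, struct sm_int b) {
    struct sm_int r;
    if (a.sign == 0) return b;                 /* zero acts as the identity */
    if (b.sign == 0) return a;
    if (a.sign == b.sign) {                    /* same sign: add the magnitudes */
        r.sign = a.sign;
        r.mag = a.mag + b.mag;
    } else {                                   /* different signs: subtract the smaller
                                                  magnitude, keep the larger one's sign */
        if (a.mag == b.mag)      { r.sign = 0;      r.mag = 0; }
        else if (a.mag > b.mag)  { r.sign = a.sign; r.mag = a.mag - b.mag; }
        else                     { r.sign = b.sign; r.mag = b.mag - a.mag; }
    }
    return r;
}

Applied to −6 (sign −1, magnitude 6) and 4 (sign +1, magnitude 4), the different-signs branch subtracts the magnitudes and keeps the negative sign, which matches the worked example that follows.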
As an example, (−6) + 4 = −2; because −6 and 4 have different signs, their absolute values are subtracted, and since the absolute value of the negative term is larger, the answer is negative. Although this definition can be useful for concrete problems, the number of cases to consider complicates proofs unnecessarily. So the following method is commonly used for defining integers. It is based on the remark that every integer is the difference of two natural numbers and that two such differences, a − b and c − d, are equal if and only if a + d = c + b. So, one can formally define the integers as the equivalence classes of ordered pairs of natural numbers under the equivalence relation (a, b) ~ (c, d) if and only if a + d = c + b. The equivalence class of (a, b) contains either (a − b, 0) if a ≥ b, or (0, b − a) otherwise. If n is a natural number, one can denote by +n the equivalence class of (n, 0), and by −n the equivalence class of (0, n). This allows identifying the natural number n with the equivalence class +n. Addition of ordered pairs is done component-wise: (a, b) + (c, d) = (a + c, b + d). A straightforward computation shows that the equivalence class of the result depends only on the equivalence classes of the summands, and thus that this defines an addition of equivalence classes, that is, integers. Another straightforward computation shows that this addition is the same as the above case definition. This way of defining integers as equivalence classes of pairs of natural numbers can be used to embed into a group any commutative semigroup with cancellation property. Here, the semigroup is formed by the natural numbers and the group is the additive group of integers. The rational numbers are constructed similarly, by taking as semigroup the nonzero integers with multiplication. This construction has also been generalized under the name of Grothendieck group to the case of any commutative semigroup. Without the cancellation property the semigroup homomorphism from the semigroup into the group may be non-injective. Originally, the Grothendieck group was, more specifically, the result of this construction applied to the equivalence classes under isomorphisms of the objects of an abelian category, with the direct sum as semigroup operation. Rational numbers (fractions) Addition of rational numbers can be computed using the least common denominator, but a conceptually simpler definition involves only integer addition and multiplication: Define a/b + c/d = (ad + bc)/(bd). As an example, the sum 3/4 + 1/8 = (3 × 8 + 1 × 4)/(4 × 8) = 28/32 = 7/8. Addition of fractions is much simpler when the denominators are the same; in this case, one can simply add the numerators while leaving the denominator the same: a/c + b/c = (a + b)/c, so 1/4 + 2/4 = 3/4. The commutativity and associativity of rational addition is an easy consequence of the laws of integer arithmetic. For a more rigorous and general discussion, see field of fractions. Real numbers A common construction of the set of real numbers is the Dedekind completion of the set of rational numbers. A real number is defined to be a Dedekind cut of rationals: a non-empty set of rationals that is closed downward and has no greatest element. The sum of real numbers a and b is defined element by element: Define a + b = {q + r : q ∈ a, r ∈ b}. This definition was first published, in a slightly modified form, by Richard Dedekind in 1872. The commutativity and associativity of real addition are immediate; defining the real number 0 to be the set of negative rationals, it is easily seen to be the additive identity. Probably the trickiest part of this construction pertaining to addition is the definition of additive inverses. 
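Stepping back to the rational case for a moment, the rule a/b + c/d = (ad + bc)/(bd) is also directly computable. A minimal sketch in C; the struct, the Euclidean gcd helper, and the reduction to lowest terms are illustrative additions rather than part of the definition itself:

struct fraction { long num; long den; };    /* den assumed nonzero */

static long gcd(long a, long b) {           /* Euclid's algorithm on nonnegative values */
    if (a < 0) a = -a;
    if (b < 0) b = -b;
    while (b != 0) { long t = a % b; a = b; b = t; }
    return a;
}

/* a/b + c/d = (ad + cb) / (bd), then reduce to lowest terms. */
struct fraction frac_add(struct fraction x, struct fraction y) {
    struct fraction r;
    r.num = x.num * y.den + y.num * x.den;
    r.den = x.den * y.den;
    long g = gcd(r.num, r.den);             /* g >= 1 because the denominators are nonzero */
    r.num /= g;
    r.den /= g;
    return r;
}

For 3/4 and 1/8 this yields 28/32, which reduces to 7/8, as in the example above. The Dedekind-cut definition of real addition, by contrast, operates on infinite sets of rationals and does not reduce to such a finite computation.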
Unfortunately, dealing with multiplication of Dedekind cuts is a time-consuming case-by-case process similar to the addition of signed integers. Another approach is the metric completion of the rational numbers. A real number is essentially defined to be the limit of a Cauchy sequence of rationals, lim an. Addition is defined term by term: Define This definition was first published by Georg Cantor, also in 1872, although his formalism was slightly different. One must prove that this operation is well-defined, dealing with co-Cauchy sequences. Once that task is done, all the properties of real addition follow immediately from the properties of rational numbers. Furthermore, the other arithmetic operations, including multiplication, have straightforward, analogous definitions. Complex numbers Complex numbers are added by adding the real and imaginary parts of the summands. That is to say: Using the visualization of complex numbers in the complex plane, the addition has the following geometric interpretation: the sum of two complex numbers A and B, interpreted as points of the complex plane, is the point X obtained by building a parallelogram three of whose vertices are O, A and B. Equivalently, X is the point such that the triangles with vertices O, A, B, and X, B, A, are congruent. Generalizations There are many binary operations that can be viewed as generalizations of the addition operation on the real numbers. The field of abstract algebra is centrally concerned with such generalized operations, and they also appear in set theory and category theory. Abstract algebra Vectors In linear algebra, a vector space is an algebraic structure that allows for adding any two vectors and for scaling vectors. A familiar vector space is the set of all ordered pairs of real numbers; the ordered pair (a,b) is interpreted as a vector from the origin in the Euclidean plane to the point (a,b) in the plane. The sum of two vectors is obtained by adding their individual coordinates: This addition operation is central to classical mechanics, in which velocities, accelerations and forces are all represented by vectors. Matrices Matrix addition is defined for two matrices of the same dimensions. The sum of two m × n (pronounced "m by n") matrices A and B, denoted by , is again an matrix computed by adding corresponding elements: For example: Modular arithmetic In modular arithmetic, the set of available numbers is restricted to a finite subset of the integers, and addition "wraps around" when reaching a certain value, called the modulus. For example, the set of integers modulo 12 has twelve elements; it inherits an addition operation from the integers that is central to musical set theory. The set of integers modulo 2 has just two elements; the addition operation it inherits is known in Boolean logic as the "exclusive or" function. A similar "wrap around" operation arises in geometry, where the sum of two angle measures is often taken to be their sum as real numbers modulo 2π. This amounts to an addition operation on the circle, which in turn generalizes to addition operations on many-dimensional tori. General theory The general theory of abstract algebra allows an "addition" operation to be any associative and commutative operation on a set. Basic algebraic structures with such an addition operation include commutative monoids and abelian groups. Set theory and category theory A far-reaching generalization of addition of natural numbers is the addition of ordinal numbers and cardinal numbers in set theory. 
These give two different generalizations of addition of natural numbers to the transfinite. Unlike most addition operations, addition of ordinal numbers is not commutative. Addition of cardinal numbers, however, is a commutative operation closely related to the disjoint union operation. In category theory, disjoint union is seen as a particular case of the coproduct operation, and general coproducts are perhaps the most abstract of all the generalizations of addition. Some coproducts, such as direct sum and wedge sum, are named to evoke their connection with addition. Related operations Addition, along with subtraction, multiplication and division, is considered one of the basic operations and is used in elementary arithmetic. Arithmetic Subtraction can be thought of as a kind of addition—that is, the addition of an additive inverse. Subtraction is itself a sort of inverse to addition, in that adding and subtracting are inverse functions. Given a set with an addition operation, one cannot always define a corresponding subtraction operation on that set; the set of natural numbers is a simple example. On the other hand, a subtraction operation uniquely determines an addition operation, an additive inverse operation, and an additive identity; for this reason, an additive group can be described as a set that is closed under subtraction. Multiplication can be thought of as repeated addition. If a single term x appears in a sum n times, then the sum is the product of n and x. If n is not a natural number, the product may still make sense; for example, multiplication by −1 yields the additive inverse of a number. In the real and complex numbers, addition and multiplication can be interchanged by the exponential function: e^(a + b) = e^a e^b. This identity allows multiplication to be carried out by consulting a table of logarithms and computing addition by hand; it also enables multiplication on a slide rule. The formula is still a good first-order approximation in the broad context of Lie groups, where it relates multiplication of infinitesimal group elements with addition of vectors in the associated Lie algebra. There are even more generalizations of multiplication than addition. In general, multiplication operations always distribute over addition; this requirement is formalized in the definition of a ring. In some contexts, such as the integers, distributivity over addition and the existence of a multiplicative identity is enough to uniquely determine the multiplication operation. The distributive property also provides information about addition; by expanding the product (1 + 1)(a + b) in both ways, one concludes that addition is forced to be commutative. For this reason, ring addition is commutative in general. Division is an arithmetic operation remotely related to addition. Since a/b = a(1/b), division is right distributive over addition: (a + b)/c = a/c + b/c. However, division is not left distributive over addition; 1/(2 + 3) is not the same as 1/2 + 1/3. Ordering The maximum operation max(a, b) is a binary operation similar to addition. In fact, if two nonnegative numbers a and b are of different orders of magnitude, then their sum is approximately equal to their maximum. This approximation is extremely useful in the applications of mathematics, for example in truncating Taylor series. However, it presents a perpetual difficulty in numerical analysis, essentially since "max" is not invertible. If b is much greater than a, then a straightforward calculation of (a + b) − b can accumulate an unacceptable round-off error, perhaps even returning zero. See also Loss of significance. 
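The round-off problem just mentioned is easy to reproduce. A minimal sketch in C, using single-precision floats so the effect appears at modest magnitudes; the particular values are illustrative:

#include <stdio.h>

int main(void) {
    float a = 1.0f;
    float b = 1.0e20f;            /* b is vastly larger than a */
    float s = (a + b) - b;        /* mathematically this is exactly a */
    printf("%g\n", s);            /* prints 0, not 1: a was absorbed when added to b */
    return 0;
}

Mathematically (a + b) − b is exactly a, but in float arithmetic the small addend is lost when added to the large one, so the program prints 0; this is the numerical face of the "sum is approximately the maximum" behaviour described above. In double precision the same cancellation appears once b is roughly 10^16 times larger than a.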
The approximation becomes exact in a kind of infinite limit; if either a or b is an infinite cardinal number, their cardinal sum is exactly equal to the greater of the two. Accordingly, there is no subtraction operation for infinite cardinals. Maximization is commutative and associative, like addition. Furthermore, since addition preserves the ordering of real numbers, addition distributes over "max" in the same way that multiplication distributes over addition: a + max(b, c) = max(a + b, a + c). For these reasons, in tropical geometry one replaces multiplication with addition and addition with maximization. In this context, addition is called "tropical multiplication", maximization is called "tropical addition", and the tropical "additive identity" is negative infinity. Some authors prefer to replace addition with minimization; then the additive identity is positive infinity. Tying these observations together, tropical addition is approximately related to regular addition through the logarithm: log(a + b) ≈ max(log a, log b), which becomes more accurate as the base of the logarithm increases. The approximation can be made exact by extracting a constant h, named by analogy with the Planck constant from quantum mechanics, and taking the "classical limit" as h tends to zero: max(a, b) is the limit, as h tends to zero, of h log(e^(a/h) + e^(b/h)). In this sense, the maximum operation is a dequantized version of addition. Other ways to add Incrementation, also known as the successor operation, is the addition of 1 to a number. Summation describes the addition of arbitrarily many numbers, usually more than just two. It includes the idea of the sum of a single number, which is itself, and the empty sum, which is zero. An infinite summation is a delicate procedure known as a series. Counting a finite set is equivalent to summing 1 over the set. Integration is a kind of "summation" over a continuum, or more precisely and generally, over a differentiable manifold. Integration over a zero-dimensional manifold reduces to summation. Linear combinations combine multiplication and summation; they are sums in which each term has a multiplier, usually a real or complex number. Linear combinations are especially useful in contexts where straightforward addition would violate some normalization rule, such as mixing of strategies in game theory or superposition of states in quantum mechanics. Convolution is used to add two independent random variables defined by distribution functions. Its usual definition combines integration, subtraction, and multiplication. In general, convolution is useful as a kind of domain-side addition; by contrast, vector addition is a kind of range-side addition.
See also
Lunar arithmetic
Mental arithmetic
Parallel addition (mathematics)
Verbal arithmetic (also known as cryptarithms), puzzles involving addition
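As a closing illustration, the "classical limit" described above, in which h log(e^(a/h) + e^(b/h)) tends to max(a, b) as h tends to zero, can be checked numerically. A minimal sketch in C; the shift by max(a, b) inside the logarithm is a standard stabilizing trick (not part of the formula itself), and the chosen values of a, b and h are arbitrary:

#include <stdio.h>
#include <math.h>

/* Smoothed maximum: h * log(exp(a/h) + exp(b/h)), computed in a numerically stable way. */
double smooth_max(double a, double b, double h) {
    double m = (a > b) ? a : b;
    /* factoring out exp(m/h) keeps both remaining exponentials at or below 1 */
    return m + h * log(exp((a - m) / h) + exp((b - m) / h));
}

int main(void) {
    double a = 2.0, b = 3.0;
    double hs[] = {1.0, 0.1, 0.01, 0.001};
    for (int i = 0; i < 4; i++) {
        printf("h = %5.3f   h*log(e^(a/h) + e^(b/h)) = %.7f\n", hs[i], smooth_max(a, b, hs[i]));
    }
    return 0;   /* the printed values approach max(2, 3) = 3 as h shrinks */
}

For a = 2 and b = 3 the printed values fall from about 3.3132617 at h = 1 to about 3.0000045 at h = 0.1, and are indistinguishable from 3 for smaller h.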
Addition
[ "Mathematics" ]
8,024
[ "Elementary mathematics", "Arithmetic", "Elementary arithmetic", "nan" ]
61,344
https://en.wikipedia.org/wiki/Lightning
Lightning is a natural phenomenon, more specifically an atmospheric electrical phenomenon. It consists of electrostatic discharges occurring through the atmosphere between two electrically charged regions, either both existing within the atmosphere or one within the atmosphere and one on the ground, with these regions then becoming partially or wholly electrically neutralized. Lightning involves a near-instantaneous release of energy on a scale averaging between 200 megajoules and 7 gigajoules. This discharge may produce a wide range of electromagnetic radiation, from heat created by the rapid movement of electrons, to brilliant flashes of visible light in the form of black-body radiation. Lightning also causes thunder, a sound from the shock wave which develops as gases in the vicinity of the discharge experience a sudden increase in pressure. Lightning occurs most commonly during thunderstorms, though it can also occur in other types of energetic weather systems. Lightning influences the global atmospheric electrical circuit and atmospheric chemistry, and is a natural ignition source of wildfires. The scientific study of lightning is called fulminology. Forms Three primary forms of lightning are distinguished by where they occur:
Intra-cloud (IC) or in-cloud — within a single thundercloud
Cloud-to-cloud (CC) or inter-cloud — between two clouds
Cloud-to-ground (CG) — between a cloud and the ground, in which case it is referred to as a lightning strike.
Many other observational variants are recognized, including: volcanic lightning, which can occur during volcanic eruptions; "heat lightning", which can be seen from a great distance but not heard; dry lightning, which can cause forest fires; and ball lightning, which is rarely observed scientifically. The most direct effects of lightning on humans occur as a result of cloud-to-ground lightning, even though intra-cloud and cloud-to-cloud are more common. Intra-cloud and cloud-to-cloud lightning indirectly affect humans through their influence on atmospheric chemistry. There are variations of each type, such as "positive" versus "negative" CG flashes, that have different physical characteristics common to each which can be measured. Cloud to ground (CG) Cloud-to-ground (CG) lightning is a lightning discharge between a thundercloud and the ground. It is initiated by a stepped leader moving down from the cloud, which is met by a streamer moving up from the ground. CG is the least common, but best understood of all types of lightning. It is easier to study scientifically because it terminates on a physical object, namely the ground, and lends itself to being measured by instruments on the ground. Of the three primary types of lightning, it poses the greatest threat to life and property, since it terminates on the ground or "strikes". The overall discharge, termed a flash, is composed of a number of processes such as preliminary breakdown, stepped leaders, connecting leaders, return strokes, dart leaders, and subsequent return strokes. The conductivity of the electrical ground, be it soil, fresh water, or salt water, may affect the lightning discharge rate and thus visible characteristics. Positive and negative lightning Cloud-to-ground (CG) lightning is either positive or negative, as defined by the direction of the conventional electric current between cloud and ground. Most CG lightning is negative, meaning that a negative charge is transferred (electrons flow) downwards to ground along the lightning channel (in terms of conventional current, the current flows from the ground up to the cloud). 
The reverse happens in a positive CG flash, where electrons travel upward along the lightning channel, while also a positive charge is transferred downward to the ground (conventionally speaking this would be the opposite). Positive lightning is less common than negative lightning and on average makes up less than 5% of all lightning strikes. There are a number of mechanisms theorized to result in the formation of positive lightning. These are mainly based on movement or intensification of charge centres in the cloud. Such changes in cloud charging may come about as a result of variations in vertical wind shear or precipitation, or dissipation of the storm. Positive flashes may also result from certain behaviour of in-cloud discharges, e.g. breaking off or branching from existing flashes. Positive lightning strikes tend to be much more intense than their negative counterparts. An average bolt of negative lightning creates an electric current of 30,000 amperes (30 kA), transferring a total 15 C (coulombs) of electric charge and 1 gigajoule of energy. Large bolts of positive lightning can create up to 120 kA and transfer 350 C. The average positive ground flash has roughly double the peak current of a typical negative flash, and can produce peak currents up to 400 kA and charges of several hundred coulombs. Furthermore, positive ground flashes with high peak currents are commonly followed by long continuing currents, a correlation not seen in negative ground flashes. As a result of their greater power, positive lightning strikes are considerably more dangerous than negative strikes. Positive lightning produces both higher peak currents and longer continuing currents, making them capable of heating surfaces to much higher levels which increases the likelihood of a fire being ignited. The long distances positive lightning can propagate through clear air explains why they are known as "bolts from the blue", giving no warning to observers. Positive lightning has also been shown to trigger the occurrence of upward lightning flashes from the tops of tall structures and is largely responsible for the initiation of sprites several tens of kilometers above ground level. Positive lightning tends to occur more frequently in winter storms, as with thundersnow, during intense tornadoes and in the dissipation stage of a thunderstorm. Huge quantities of extremely low frequency (ELF) and very low frequency (VLF) radio waves are also generated. Contrary to popular belief, positive lightning flashes do not necessarily originate from the anvil or the upper positive charge region and strike a rain-free area outside of the thunderstorm. This belief is based on the outdated idea that lightning leaders are unipolar and originate from their respective charge region. Despite the popular misconception that flashes originating from the anvil are positive, due to them seemingly originating from the positive charge region, observations have shown that these are in fact negative flashes. They begin as IC flashes within the cloud, the negative leader then exits the cloud from the positive charge region before propagating through clear air and striking the ground some distance away. Cloud to cloud (CC) and intra-cloud (IC) Lightning discharges may occur between areas of cloud without contacting the ground. When it occurs between two separate clouds, it is known as (CC) or lightning; when it occurs between areas of differing electric potential within a single cloud, it is known as (IC) lightning. 
IC lightning is the most frequently occurring type. IC lightning most commonly occurs between the upper anvil portion and lower reaches of a given thunderstorm. This lightning can sometimes be observed at great distances at night as so-called "sheet lightning". In such instances, the observer may see only a flash of light without hearing any thunder. Another term used for cloud–cloud or cloud–cloud–ground lightning is "Anvil Crawler", due to the habit of charge, typically originating beneath or within the anvil and scrambling through the upper cloud layers of a thunderstorm, often generating dramatic multiple branch strokes. These are usually seen as a thunderstorm passes over the observer or begins to decay. The most vivid crawler behavior occurs in well developed thunderstorms that feature extensive rear anvil shearing. Formation The processes involved in lightning formation fall into the following categories: Large-scale atmospheric phenomena in which charge separation can occur (e.g. storm) Microscopic physical processes that result in charge separation Large-scale separation of charge and establishment of an electric field Discharge through a lightning channel Atmospheric phenomena in which lightning occurs Lightning primarily occurs when warm air is mixed with colder air masses, resulting in atmospheric disturbances necessary for polarizing the atmosphere. The disturbances result in storms, and when those storms also result in lightning and thunder, they are called a thunderstorm. Lightning can also occur during dust storms, forest fires, tornadoes, volcanic eruptions, and even in the cold of winter, where the lightning is known as thundersnow. Hurricanes typically generate some lightning, mainly in the rainbands as much as from the center. Intense forest fires, such as those seen in the 2019–20 Australian bushfire season, can create their own weather systems that can produce lightning (also called Fire Lightning) and other weather phenomena. Intense heat from a fire causes air to rapidly rise within the smoke plume, causing the formation of pyrocumulonimbus clouds. Cooler air is drawn in by this turbulent, rising air, helping to cool the plume. The rising plume is further cooled by the lower atmospheric pressure at high altitude, allowing the moisture in it to condense into cloud. Pyrocumulonimbus clouds form in an unstable atmosphere. These weather systems can produce dry lightning, fire tornadoes, intense winds, and dirty hail. Airplane contrails have also been observed to influence lightning to a small degree. The water vapor-dense contrails of airplanes may provide a lower resistance pathway through the atmosphere having some influence upon the establishment of an ionic pathway for a lightning flash to follow. Rocket exhaust plumes provided a pathway for lightning when it was witnessed striking the Apollo 12 rocket shortly after takeoff. Thermonuclear explosions, by providing extra material for electrical conduction and a very turbulent localized atmosphere, have been seen triggering lightning flashes within the mushroom cloud. In addition, intense gamma radiation from large nuclear explosions may develop intensely charged regions in the surrounding air through Compton scattering. The intensely charged space charge regions create multiple clear-air lightning discharges shortly after the device detonates. 
Some high energy cosmic rays produced by supernovas as well as solar particles from the solar wind, enter the atmosphere and electrify the air, which may create pathways for lightning channels. Charge separation Charge separation in thunderstorms The details of the charging process are still being studied by scientists, but there is general agreement on some of the basic concepts of thunderstorm charge separation, also known as electrification. Electrification can be by the triboelectric effect leading to electron or ion transfer between colliding bodies. Uncharged, colliding water-drops can become charged because of charge transfer between them (as aqueous ions) in an electric field as would exist in a thunderstorm. The main charging area in a thunderstorm occurs in the central part of the storm where air is moving upward rapidly (updraft) and temperatures range from ; see Figure 1. In that area, the combination of temperature and rapid upward air movement produces a mixture of super-cooled cloud droplets (small water droplets below freezing), small ice crystals, and graupel (soft hail). The updraft carries the super-cooled cloud droplets and very small ice crystals upward. At the same time, the graupel, which is considerably larger and denser, tends to fall or be suspended in the rising air. The differences in the movement of the precipitation cause collisions to occur. When the rising ice crystals collide with graupel, the ice crystals become positively charged and the graupel becomes negatively charged; see Figure 2. The updraft carries the positively charged ice crystals upward toward the top of the storm cloud. The larger and denser graupel is either suspended in the middle of the thunderstorm cloud or falls toward the lower part of the storm. The result is that the upper part of the thunderstorm cloud becomes positively charged while the middle to lower part of the thunderstorm cloud becomes negatively charged. The upward motions within the storm and winds at higher levels in the atmosphere tend to cause the small ice crystals (and positive charge) in the upper part of the thunderstorm cloud to spread out horizontally some distance from the thunderstorm cloud base. This part of the thunderstorm cloud is called the anvil. While this is the main charging process for the thunderstorm cloud, some of these charges can be redistributed by air movements within the storm (updrafts and downdrafts). In addition, there is a small but important positive charge buildup near the bottom of the thunderstorm cloud due to the precipitation and warmer temperatures. Charge separation in different phases of water The induced separation of charge in pure liquid water has been known since the 1840s as has the electrification of pure liquid water by the triboelectric effect. William Thomson (Lord Kelvin) demonstrated that charge separation in water occurs in the usual electric fields at the Earth's surface and developed a continuous electric field measuring device using that knowledge. The physical separation of charge into different regions using liquid water was demonstrated by Kelvin with the Kelvin water dropper. The most likely charge-carrying species were considered to be the aqueous hydrogen ion and the aqueous hydroxide ion. An electron is not stable in liquid water concerning a hydroxide ion plus dissolved hydrogen for the time scales involved in thunderstorms. The electrical charging of solid water ice has also been considered. 
The charged species were again considered to be the hydrogen ion and the hydroxide ion. The charge carrier in lightning is mainly electrons in a plasma. The process of going from charge as ions (positive hydrogen ion and negative hydroxide ion) associated with liquid water or solid water to charge as electrons associated with lightning must involve some form of electro-chemistry, that is, the oxidation and/or the reduction of chemical species. As hydroxide functions as a base and carbon dioxide is an acidic gas, it is possible that charged water clouds in which the negative charge is in the form of the aqueous hydroxide ion, interact with atmospheric carbon dioxide to form aqueous carbonate ions and aqueous hydrogen carbonate ions. Establishing an electric field In order for an electrostatic discharge to occur, two preconditions are necessary: first, a sufficiently high potential difference between two regions of space must exist, and second, a high-resistance medium must obstruct the free, unimpeded equalization of the opposite charges. The atmosphere provides the electrical insulation, or barrier, that prevents free equalization between charged regions of opposite polarity. Meanwhile, a thunderstorm can provide the charge separation and aggregation in certain regions of the cloud. When the local electric field exceeds the dielectric strength of damp air (about 3 MV/m), electrical discharge results in a strike, often followed by commensurate discharges branching from the same path. Mechanisms that cause the charges to build up to lightning are still a matter of scientific investigation. A 2016 study confirmed dielectric breakdown is involved. Lightning may be caused by the circulation of warm moisture-filled air through electric fields. Ice or water particles then accumulate charge as in a Van de Graaff generator. As a thundercloud moves over the surface of the Earth, an equal electric charge, but of opposite polarity, is induced on the Earth's surface underneath the cloud. The induced positive surface charge, when measured against a fixed point, will be small as the thundercloud approaches, increasing as the center of the storm arrives and dropping as the thundercloud passes. The referential value of the induced surface charge could be roughly represented as a bell curve. The oppositely charged regions create an electric field within the air between them. This electric field varies in relation to the strength of the surface charge on the base of the thundercloud – the greater the accumulated charge, the higher the electrical field. Electrical discharge as flashes and strikes The best-studied and understood form of lightning is cloud to ground (CG) lightning. Although more common, intra-cloud (IC) and cloud-to-cloud (CC) flashes are very difficult to study given there are no "physical" points to monitor inside the clouds. Also, given the very low probability of lightning striking the same point repeatedly and consistently, scientific inquiry is difficult even in areas of high CG frequency. Lightning leaders In a process not well understood, a bidirectional channel of ionized air, called a "leader", is initiated between oppositely-charged regions in a thundercloud. Leaders are electrically conductive channels of ionized gas that propagate through, or are otherwise attracted to, regions with a charge opposite of that of the leader tip. 
The negative end of the bidirectional leader fills a positive charge region, also called a well, inside the cloud while the positive end fills a negative charge well. Leaders often split, forming branches in a tree-like pattern. In addition, negative and some positive leaders travel in a discontinuous fashion, in a process called "stepping". The resulting jerky movement of the leaders can be readily observed in slow-motion videos of lightning flashes. It is possible for one end of the leader to fill the oppositely-charged well entirely while the other end is still active. When this happens, the leader end which filled the well may propagate outside of the thundercloud and result in either a cloud-to-air flash or a cloud-to-ground flash. In a typical cloud-to-ground flash, a bidirectional leader initiates between the main negative and lower positive charge regions in a thundercloud. The weaker positive charge region is filled quickly by the negative leader which then propagates toward the inductively-charged ground. The positively and negatively charged leaders proceed in opposite directions, positive upwards within the cloud and negative towards the earth. Both ionic channels proceed, in their respective directions, in a number of successive spurts. Each leader "pools" ions at the leading tips, shooting out one or more new leaders, momentarily pooling again to concentrate charged ions, then shooting out another leader. The negative leader continues to propagate and split as it heads downward, often speeding up as it gets closer to the Earth's surface. About 90% of ionic channel lengths between "pools" are approximately in length. The establishment of the ionic channel takes a comparatively long amount of time (hundreds of milliseconds) in comparison to the resulting discharge, which occurs within a few dozen microseconds. The electric current needed to establish the channel, measured in the tens or hundreds of amperes, is dwarfed by subsequent currents during the actual discharge. Initiation of the lightning leader is not well understood. The electric field strength within the thundercloud is not typically large enough to initiate this process by itself. Many hypotheses have been proposed. One hypothesis postulates that showers of relativistic electrons are created by cosmic rays and are then accelerated to higher velocities via a process called runaway breakdown. As these relativistic electrons collide and ionize neutral air molecules, they initiate leader formation. Another hypothesis involves locally enhanced electric fields being formed near elongated water droplets or ice crystals. Percolation theory, especially for the case of biased percolation, describes random connectivity phenomena, which produce an evolution of connected structures similar to that of lightning strikes. A streamer avalanche model has recently been favored by observational data taken by LOFAR during storms. Upward streamers When a stepped leader approaches the ground, the presence of opposite charges on the ground enhances the strength of the electric field. The electric field is strongest on grounded objects whose tops are closest to the base of the thundercloud, such as trees and tall buildings. If the electric field is strong enough, a positively charged ionic channel, called a positive or upward streamer, can develop from these points. This was first theorized by Heinz Kasemir. 
As negatively charged leaders approach, increasing the localized electric field strength, grounded objects already experiencing corona discharge will exceed a threshold and form upward streamers. Attachment Once a downward leader connects to an available upward leader, a process referred to as attachment, a low-resistance path is formed and discharge may occur. Photographs have been taken in which unattached streamers are clearly visible. The unattached downward leaders are also visible in branched lightning, none of which are connected to the earth, although it may appear they are. High-speed videos can show the attachment process in progress. Discharge – Return stroke Once a conductive channel bridges the air gap between the negative charge excess in the cloud and the positive surface charge excess below, there is a large drop in resistance across the lightning channel. Electrons accelerate rapidly as a result in a zone beginning at the point of attachment, which expands across the entire leader network at up to one third of the speed of light. This is the "return stroke" and it is the most luminous and noticeable part of the lightning discharge. A large electric charge flows along the plasma channel, from the cloud to the ground, neutralising the positive ground charge as electrons flow away from the strike point to the surrounding area. This huge surge of current creates large radial voltage differences along the surface of the ground. Called step potentials, they are responsible for more injuries and deaths in groups of people or of other animals than the strike itself. Electricity takes every path available to it. Such step potentials will often cause current to flow through one leg and out another, electrocuting an unlucky human or animal standing near the point where the lightning strikes. The electric current of the return stroke averages 30 kiloamperes for a typical negative CG flash, often referred to as "negative CG" lightning. In some cases, a ground-to-cloud (GC) lightning flash may originate from a positively charged region on the ground below a storm. These discharges normally originate from the tops of very tall structures, such as communications antennas. The rate at which the return stroke current travels has been found to be around 100,000 km/s (one-third of the speed of light). A typical cloud-to-ground lightning flash culminates in the formation of an electrically conducting plasma channel through the air in excess of tall, from within the cloud to the ground's surface. The massive flow of electric current occurring during the return stroke combined with the rate at which it occurs (measured in microseconds) rapidly superheats the completed leader channel, forming a highly electrically conductive plasma channel. The core temperature of the plasma during the return stroke may exceed , causing it to radiate with a brilliant, blue-white color. Once the electric current stops flowing, the channel cools and dissipates over tens or hundreds of milliseconds, often disappearing as fragmented patches of glowing gas. The nearly instantaneous heating during the return stroke causes the air to expand explosively, producing a powerful shock wave which is heard as thunder. Discharge – Re-strike High-speed videos (examined frame-by-frame) show that most negative CG lightning flashes are made up of 3 or 4 individual strokes, though there may be as many as 30. 
Each re-strike is separated by a relatively large amount of time, typically 40 to 50 milliseconds, as other charged regions in the cloud are discharged in subsequent strokes. Re-strikes often cause a noticeable "strobe light" effect. To understand why multiple return strokes utilize the same lightning channel, one needs to understand the behavior of positive leaders, which is what the channel of a typical ground flash effectively becomes once the negative leader has connected with the ground. Positive leaders decay more rapidly than negative leaders do. For reasons not well understood, bidirectional leaders tend to initiate on the tips of the decayed positive leaders, with the negative end attempting to re-ionize the leader network. These leaders, also called recoil leaders, usually decay shortly after their formation. When they do manage to make contact with a conductive portion of the main leader network, a return stroke-like process occurs and a dart leader travels across all or a portion of the length of the original leader. The dart leaders that make connections with the ground are what cause a majority of subsequent return strokes. Each successive stroke is preceded by intermediate dart leader strokes that have a faster rise time but lower amplitude than the initial return stroke. Each subsequent stroke usually re-uses the discharge channel taken by the previous one, but the channel may be offset from its previous position as wind displaces the hot channel. Since recoil and dart leader processes do not occur on negative leaders, subsequent return strokes very seldom utilize the same channel on positive ground flashes, which are explained later in the article. Discharge – Transient currents during flash The electric current within a typical negative CG lightning discharge rises very quickly to its peak value in 1–10 microseconds, then decays more slowly over 50–200 microseconds. The transient nature of the current within a lightning flash results in several phenomena that need to be addressed in the effective protection of ground-based structures. Rapidly changing (alternating) currents tend to travel on the surface of a conductor, in what is called the skin effect, unlike direct currents, which flow through the entire conductor like water through a hose. Hence, conductors used in the protection of facilities tend to be multi-stranded, with small wires woven together. This increases the total bundle surface area in inverse proportion to the individual strand radius, for a fixed total cross-sectional area. The rapidly changing currents also create electromagnetic pulses (EMPs) that radiate outward from the ionic channel. This is a characteristic of all electrical discharges. The radiated pulses rapidly weaken as their distance from the origin increases. However, if they pass over conductive elements such as power lines, communication lines, or metallic pipes, they may induce a current which travels outward to its termination. The surge current is inversely related to the surge impedance: the higher the impedance, the lower the current. This is the surge that, more often than not, results in the destruction of delicate electronics, electrical appliances, or electric motors. Devices known as surge protectors (SPDs) or transient voltage surge suppressors (TVSSs), attached in parallel with these lines, can detect the lightning flash's transient irregular current and, through alteration of its physical properties, route the spike to an attached earthing ground, thereby protecting the equipment from damage. 
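To put a rough number on the skin effect mentioned above, the sketch below evaluates the standard skin-depth formula δ = sqrt(2ρ / (ωμ)) for copper at 1 MHz, a frequency chosen only as representative of the microsecond-scale rise times quoted earlier; the material constants and the frequency are illustrative assumptions, not figures from this article.

```python
import math

# Minimal sketch: skin depth of copper at a frequency representative of a
# fast lightning transient (the values below are illustrative assumptions).
resistivity = 1.68e-8      # ohm-metre, copper (assumed)
mu = 4 * math.pi * 1e-7    # vacuum permeability; copper's relative permeability is ~1
frequency = 1e6            # Hz, order-of-magnitude content of a microsecond-scale rise time

omega = 2 * math.pi * frequency
skin_depth = math.sqrt(2 * resistivity / (omega * mu))
print(f"skin depth ≈ {skin_depth * 1e6:.0f} µm")   # ≈ 65 µm

# A strand much thicker than the skin depth carries current only in a thin outer
# shell, so many thin strands expose more useful surface area than a single
# thick conductor of the same total cross-section.
```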
Distribution, frequency and properties Global monitoring indicates that lightning on Earth occurs at an average frequency of approximately 44 (± 5) times per second, equating to nearly 1.4 billion flashes per year. The median flash duration is 0.52 seconds, made up of a number of much shorter flashes (strokes) of around 60 to 70 microseconds. Occurrences are distributed unevenly across the planet, with about 70% occurring over land in the tropics, where atmospheric convection is the greatest. Many factors affect the frequency, distribution, strength and physical properties of a typical lightning flash in a particular region of the world. These factors include ground elevation, latitude, prevailing wind currents, relative humidity, and proximity to warm and cold bodies of water. To a certain degree, the proportions of intra-cloud, cloud-to-cloud, and cloud-to-ground lightning may also vary by season in middle latitudes. This results both from the mixing of warmer and colder air masses and from differences in moisture concentrations, and it generally happens at the boundaries between them. The flow of warm ocean currents past drier land masses, such as the Gulf Stream, partially explains the elevated frequency of lightning in the Southeast United States. Because large bodies of water lack the topographic variation that would result in atmospheric mixing, lightning is notably less frequent over the world's oceans than over land. The North and South Poles see few thunderstorms and are therefore the areas with the least lightning. In general, CG lightning flashes account for only 25% of all lightning flashes worldwide. Since the base of a thunderstorm is usually negatively charged, this is where most CG lightning originates. This region is typically at the elevation where freezing occurs within the cloud. Freezing, combined with collisions between ice and water, appears to be a critical part of the initial charge development and separation process. During wind-driven collisions, ice crystals tend to develop a positive charge, while a heavier, slushy mixture of ice and water (called graupel) develops a negative charge. Updrafts within a storm cloud separate the lighter ice crystals from the heavier graupel, causing the top region of the cloud to accumulate a positive space charge while the lower level accumulates a negative space charge. Because the concentrated charge within the cloud must exceed the insulating properties of air, and this increases proportionally to the distance between the cloud and the ground, the proportion of CG strikes (versus CC or IC discharges) becomes greater when the cloud is closer to the ground. In the tropics, where the freezing level is generally higher in the atmosphere, only 10% of lightning flashes are CG. At the latitude of Norway (around 60° North latitude), where the freezing elevation is lower, 50% of lightning is CG. Lightning is usually produced by cumulonimbus clouds, which have bases that are typically above the ground and tops up to in height. The place on Earth where lightning occurs most often is over Lake Maracaibo, where the Catatumbo lightning phenomenon produces 250 bolts of lightning a day. This activity occurs, on average, 297 days a year. The second-highest lightning density occurs near the village of Kifuka in the mountains of the eastern Democratic Republic of the Congo, where the elevation is around . On average, this region receives . Other lightning hotspots include Singapore and Lightning Alley in Central Florida. 
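The two global figures quoted at the start of this section (roughly 44 flashes per second, and about 1.4 billion flashes per year) are consistent with each other; the short check below is nothing more than a multiplication by the number of seconds in a year.

```python
flashes_per_second = 44            # global average quoted above (± 5)
seconds_per_year = 365.25 * 24 * 3600

flashes_per_year = flashes_per_second * seconds_per_year
print(f"{flashes_per_year:.3g} flashes per year")   # ≈ 1.39e9, i.e. about 1.4 billion
```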
According to the World Meteorological Organization, on April 29, 2020, a bolt 768 km (477.2 mi) long was observed in the southern U.S.—sixty km (37 mi) longer than the previous distance record (southern Brazil, October 31, 2018). A single flash in Uruguay and northern Argentina on June 18, 2020, lasted for 17.1 seconds—0.37 seconds longer than the previous record (March 4, 2019, also in northern Argentina). Researchers at the University of Florida found that the final one-dimensional speeds of 10 flashes observed were between 1.0 and 1.4 m/s, with an average of 4.4 m/s. Effects A lightning strike can unleash a variety of effects, some temporary, including very brief emission of light, sound and electromagnetic radiation, and some long-lasting, such as death, damage, and atmospheric and environmental changes. Injury, damage and destruction The immense amount of energy transferred in a lightning strike can have potentially devastating effect in a multitude of areas. To nature Objects struck by lightning experience heat and magnetic forces of great magnitude. Consequently: The heat created by lightning currents travelling through a tree may vaporize its sap, causing a steam explosion that rips off bark or even bursts the trunk. Similarly water in a fractured rock may be rapidly heated such that it splits further apart. A struck tree may catch fire, or a forest fire may be started. See also fire lightning below. As lightning travels through sandy soil, the soil surrounding the plasma channel may melt, forming tubular structures called fulgurites. To man-made structures and their contents Buildings or tall structures hit by lightning may be damaged as the lightning seeks unimpeded paths to the ground. By safely conducting a lightning strike to the ground, a lightning protection system, usually incorporating at least one lightning rod, can greatly reduce the probability of severe property damage. Surge protection devices (SPDs) can additionally or alternatively be used to help protect electrical installations from lightning induced electrical surges that risk damaging or destroying electrical equipment or starting a fire. Electrical fires obviously threaten not only structures but all assets, personal possessions, and living beings (people, pets and livestock) within. What, if any, protection system a building or structure requires is determined through a risk assessment. Threats to structures come not only from direct strikes to the structure itself, but also from direct or indirect strikes to connected electrically conductive services (electrical power lines; communication lines; water/gas pipes), or even to the surrounding area from which a surge may reach a service connection as it spreads out into the ground. To aircraft Aircraft are highly susceptible to being struck due to their metallic fuselages, but lightning strikes are generally not dangerous to them. Due to the conductive properties of aluminium alloy, the fuselage acts as a Faraday cage. Present day aircraft are built to be safe from a lightning strike and passengers will generally not even know that it has happened. However, there have been suspicions that lightning strikes can ignite fuel vapor and cause explosion, and nearby lightning can momentarily blind the pilot and cause permanent errors in magnetic compasses. To living beings Although 90 percent of people struck by lightning survive, humans and other animals struck by lightning may suffer severe injury due to internal organ and nervous system damage. 
Noise (Thunder) Because the electrostatic discharge of terrestrial lightning superheats the air to plasma temperatures along the length of the discharge channel in a short duration, kinetic theory dictates that the gaseous molecules undergo a rapid increase in pressure and thus expand outward from the lightning, creating a shock wave audible as thunder. Since the sound waves propagate not from a single point source but along the length of the lightning's path, the sound origin's varying distances from the observer can generate a rolling or rumbling effect. Perception of the sonic characteristics is further complicated by factors such as the irregular and possibly branching geometry of the lightning channel, by acoustic echoing from terrain, and by the usually multiple-stroke characteristic of the lightning strike. Thunder is heard as a rolling, gradually dissipating rumble because the sound from different portions of a long stroke arrives at slightly different times. Lightning at a sufficient distance may be seen and not heard; there is data that a lightning storm can be seen at over , whereas the thunder travels about . Anecdotally, there are many examples of people describing a 'storm directly overhead' or 'all-around' and yet 'no thunder'. Since thunderclouds can be up to high, lightning occurring high up in the cloud may appear close but is actually too far away to produce noticeable thunder. The distance approximation trick Light travels at about 300,000 km/s, while sound travels through air at only about 340 m/s. An observer can approximate the distance to the strike by timing the interval between the visible lightning and the audible thunder it generates. A lightning flash preceding its thunder by one second would be approximately 340 m (about 1,100 ft) away; a delay of three seconds would indicate a distance of about 1 km (0.6 mi); while a flash preceding thunder by five seconds would indicate a distance of roughly 1.7 km (just over 1 mi). Consequently, a lightning strike observed at a very close distance will be accompanied by a sudden clap of thunder, with almost no perceptible time lapse, possibly accompanied by the smell of ozone (O3). 
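The distance estimates above amount to multiplying the flash-to-thunder delay by the speed of sound. The sketch below works through that arithmetic; the 343 m/s figure (dry air near 20 °C) and the sample delays are assumed round values used only for illustration.

```python
SPEED_OF_SOUND = 343.0   # m/s in air near 20 °C (assumed round value)

def strike_distance_km(delay_seconds: float) -> float:
    """Estimate the distance to a lightning strike from the flash-to-thunder delay.

    The light arrives essentially instantly, so the delay is dominated by the
    travel time of the sound.
    """
    return SPEED_OF_SOUND * delay_seconds / 1000.0

for delay in (1, 3, 5):
    print(f"{delay} s delay -> about {strike_distance_km(delay):.1f} km")
# 1 s -> ~0.3 km, 3 s -> ~1.0 km, 5 s -> ~1.7 km
```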
Electromagnetic radiation and interference Electromagnetic waves are emitted over a wide range of wavelengths, most obviously as visible light, the bright flash itself. Radio frequency radiation Lightning discharges generate radio-frequency electromagnetic waves which can be received thousands of kilometers from their source. The discharge by itself is a relatively simple, short-lived dipole source that creates a single electromagnetic pulse with a duration of about 1 ms and a wide spectral density. In the absence of nearby materials with magnetic or electrical interaction properties, the electromagnetic wave observed at large distances, in the far-field zone, is proportional to the second derivative of the discharge current. This is what happens with high-altitude discharges or discharges over areas of dry land. In other cases, the surrounding environment will change the shape of the source signal by absorbing some of its spectrum and converting it into heat, or by re-transmitting it as modified electromagnetic waves. High-energy radiation The production of X-rays by a bolt of lightning was predicted as early as 1925 by C.T.R. Wilson, but no evidence was found until 2001/2002, when researchers at the New Mexico Institute of Mining and Technology detected X-ray emissions from an induced lightning strike along a grounded wire trailed behind a rocket shot into a storm cloud. In the same year, University of Florida and Florida Tech researchers used an array of electric field and X-ray detectors at a lightning research facility in North Florida to confirm that natural lightning makes X-rays in large quantities during the propagation of stepped leaders. The cause of the X-ray emissions is still a matter for research, as the temperature of lightning is too low to account for the X-rays observed. A number of observations by space-based telescopes have revealed even higher energy gamma ray emissions, the so-called terrestrial gamma-ray flashes (TGFs). These observations pose a challenge to current theories of lightning, especially with the recent discovery of the clear signatures of antimatter produced in lightning. Recent research has shown that secondary species produced by these TGFs, such as electrons, positrons, neutrons or protons, can gain energies of up to several tens of MeV. Environmental changes More permanent or longer-lasting environmental changes include the following. Ozone and nitrogen oxides (atmospheric) The very high temperatures generated by lightning lead to significant local increases in ozone and oxides of nitrogen. Each lightning flash in temperate and sub-tropical areas produces about 7 kg of nitrogen oxides (NOx) on average. In the troposphere the effect of lightning can increase NOx by 90% and ozone by 30%. Ground fertilisation Lightning serves an important role in the nitrogen cycle by oxidizing diatomic nitrogen in the air into nitrates, which are deposited by rain and can fertilize the growth of plants and other organisms. Induced permanent magnetism The movement of electrical charges produces a magnetic field (see electromagnetism). The intense currents of a lightning discharge create a fleeting but very strong magnetic field. Where the lightning current path passes through rock, soil, or metal, these materials can become permanently magnetized. This effect is known as lightning-induced remanent magnetism, or LIRM. These currents follow the least resistive path, often horizontally near the surface but sometimes vertically, where faults, ore bodies, or ground water offer a less resistive path. One theory suggests that lodestones, natural magnets encountered in ancient times, were created in this manner. Lightning-induced magnetic anomalies can be mapped in the ground, and analysis of magnetized materials can confirm lightning was the source of the magnetization and provide an estimate of the peak current of the lightning discharge. Magnetic hallucinations Research at the University of Innsbruck has calculated that magnetic fields generated by plasma may induce hallucinations in subjects located within of a severe lightning storm, similar to what occurs in transcranial magnetic stimulation (TMS). Extraterrestrial Lightning has been observed within the atmospheres of planets other than Earth, such as Jupiter, Saturn, and probably Uranus and Neptune. Lightning on Jupiter is far more energetic than on Earth, despite seeming to be generated via the same mechanism. Recently, a new type of lightning was detected on Jupiter, thought to originate from "mushballs" including ammonia. On Saturn, lightning, initially referred to as "Saturn Electrostatic Discharge", was discovered by the Voyager 1 mission. Lightning on Venus has remained a controversial subject after decades of study. During the Soviet Venera and U.S. Pioneer missions of the 1970s and 1980s, signals suggesting lightning may be present in the upper atmosphere were detected. 
The short Cassini–Huygens mission fly-by of Venus in 1999 detected no signs of lightning, but radio pulses recorded by the spacecraft Venus Express (which began orbiting Venus in April 2006) may originate from lightning on Venus. Detection and monitoring The earliest detector invented to warn of the approach of a thunderstorm was the lightning bell. Benjamin Franklin installed one such device in his house. The detector was based on an electrostatic device called the 'electric chimes' invented by Andrew Gordon in 1742. Lightning discharges generate a wide range of electromagnetic radiations, including radio-frequency pulses. The times at which a pulse from a given lightning discharge arrives at several receivers can be used to locate the source of the discharge with a precision on the order of metres. The United States federal government has constructed a nationwide grid of such lightning detectors, allowing lightning discharges to be tracked in real time throughout the continental U.S. In addition, Blitzortung (a private global detection system that consists of over 500 detection stations owned and operated by hobbyists/volunteers) provides near real-time lightning maps at . The Earth-ionosphere waveguide traps electromagnetic VLF- and ELF waves. Electromagnetic pulses transmitted by lightning strikes propagate within that waveguide. The waveguide is dispersive, which means that their group velocity depends on frequency. The difference of the group time delay of a lightning pulse at adjacent frequencies is proportional to the distance between transmitter and receiver. Together with direction-finding methods, this allows locating lightning strikes up to distances of 10,000 km from their origin. Moreover, the eigenfrequencies of the Earth-ionospheric waveguide, the Schumann resonances at about 7.5 Hz, are used to determine the global thunderstorm activity. In addition to ground-based lightning detection, several instruments aboard satellites have been constructed to observe lightning distribution. These include the Optical Transient Detector (OTD), aboard the OrbView-1 satellite launched on April 3, 1995, and the subsequent Lightning Imaging Sensor (LIS) aboard TRMM launched on November 28, 1997. Starting in 2016, the National Oceanic and Atmospheric Administration launched Geostationary Operational Environmental Satellite–R Series (GOES-R) weather satellites outfitted with Geostationary Lightning Mapper (GLM) instruments which are near-infrared optical transient detectors that can detect the momentary changes in an optical scene, indicating the presence of lightning. The lightning detection data can be converted into a real-time map of lightning activity across the Western Hemisphere; this mapping technique has been implemented by the United States National Weather Service. In 2022 EUMETSAT plan to launch the Lightning Imager (MTG-I LI) on board Meteosat Third Generation. This will complement NOAA's GLM. MTG-I LI will cover Europe and Africa and will include products on events, groups and flashes. Artificial triggering Rocket-triggered lightning can be "triggered" by launching specially designed rockets trailing spools of wire into thunderstorms. The wire unwinds as the rocket ascends, creating an elevated ground that can attract descending leaders. If a leader attaches, the wire provides a low-resistance pathway for a lightning flash to occur. The wire is vaporized by the return current flow, creating a straight lightning plasma channel in its place. 
This method allows for scientific research of lightning to occur under a more controlled and predictable manner. The International Center for Lightning Research and Testing (ICLRT) at Camp Blanding, Florida typically uses rocket triggered lightning in their research studies. Laser-triggered Since the 1970s, researchers have attempted to trigger lightning strikes by means of infrared or ultraviolet lasers, which create a channel of ionized gas through which the lightning would be conducted to ground. Such triggering of lightning is intended to protect rocket launching pads, electric power facilities, and other sensitive targets. In New Mexico, U.S., scientists tested a new terawatt laser which provoked lightning. Scientists fired ultra-fast pulses from an extremely powerful laser thus sending several terawatts into the clouds to call down electrical discharges in storm clouds over the region. The laser beams sent from the laser make channels of ionized molecules known as filaments. Before the lightning strikes earth, the filaments lead electricity through the clouds, playing the role of lightning rods. Researchers generated filaments that lived a period too short to trigger a real lightning strike. Nevertheless, a boost in electrical activity within the clouds was registered. According to the French and German scientists who ran the experiment, the fast pulses sent from the laser will be able to provoke lightning strikes on demand. Statistical analysis showed that their laser pulses indeed enhanced the electrical activity in the thundercloud where it was aimed—in effect they generated small local discharges located at the position of the plasma channels. Impact of climate change and air pollution Due to the low resolution of global climate models, accurately representing lightning in these climate models is difficult, largely due to their inability to simulate the convection and cloud ice fundamental to lightning formation. Research from the Future Climate for Africa programme demonstrates that using a convection-permitting model over Africa can more accurately capture convective thunderstorms and the distribution of ice particles. This research indicates climate change may increase the total amount of lightning only slightly: the total number of lightning days per year decreases, while more cloud ice and stronger convection leads to more lightning strikes occurring on days when lightning does occur. A study from the University of Washington looked at lightning activity in the Arctic from 2010 to 2020. The ratio of Arctic summertime strokes was compared to total global strokes and was observed to be increasing with time, indicating that the region is becoming more influenced by lightning. The fraction of strokes above 65 degrees north was found to be increasing linearly with the NOAA global temperature anomaly and grew by a factor of 3 as the anomaly increased from 0.65 to 0.95 °C There is growing evidence that lightning activity is increased by particulate emissions (a form of air pollution). However, lightning may also improve air quality and clean greenhouse gases such as methane from the atmosphere, while creating nitrogen oxide and ozone at the same time. Lightning is also the major cause of wildfire, and wildfire can contribute to climate change as well. More studies are warranted to clarify their relationship. In culture and religion Humans have deified lightning for millennia. 
Idiomatic expressions derived from lightning, such as the English expression "bolt from the blue", are common across languages. At all times people have been fascinated by the sight and difference of lightning. The fear of lightning is called astraphobia. The first known photograph of lightning is from 1847, by Thomas Martin Easterly. The first surviving photograph is from 1882, by William Nicholson Jennings, a photographer who spent half his life capturing pictures of lightning and proving its diversity. Religion and mythology In many cultures, lightning has been viewed as a sign or part of a deity or a deity in and of itself. These include the Greek god Zeus, the Aztec god Tlaloc, the Mayan God K, Slavic mythology's Perun, the Baltic Pērkons/Perkūnas, Thor in Norse mythology, Ukko in Finnish mythology, the Hindu god Indra, the Yoruba god Sango, Illapa in Inca mythology and the Shinto god Raijin. The ancient Etruscans produced guides to brontoscopic and fulgural divination of the future based on the omens supposedly displayed by thunder or lightning occurring on particular days of the year or in particular places. Such use of thunder and lightning in divination is also known as ceraunoscopy, a kind of aeromancy. In the traditional religion of the African Bantu tribes, lightning is a sign of the ire of the gods. Scriptures in Judaism, Islam and Christianity also ascribe supernatural importance to lightning. In Christianity, the Second Coming of Jesus is compared to lightning. In popular culture Although sometimes used figuratively, the idea that lightning never strikes the same place twice is a common myth. In fact, lightning can, and often does, strike the same place more than once. Lightning in a thunderstorm is more likely to strike objects and spots that are more prominent or conductive. For instance, lightning strikes the Empire State Building in New York City on average 23 times per year. In French and Italian, the expression for "Love at first sight" is coup de foudre and colpo di fulmine, respectively, which literally translated means "lightning strike". Some European languages have a separate word for lightning which strikes the ground (as opposed to lightning in general); often it is a cognate of the English word "rays". The name of Australia's most celebrated thoroughbred horse, Phar Lap, derives from the shared Zhuang and Thai word for lightning. Political and military culture The bolt of lightning in heraldry is called a thunderbolt and is shown as a zigzag with non-pointed ends. This symbol usually represents power and speed. Some political parties use lightning flashes as a symbol of power, such as the People's Action Party in Singapore, the British Union of Fascists during the 1930s, and the National States' Rights Party in the United States during the 1950s. The Schutzstaffel, the paramilitary wing of the Nazi Party, used the Sig rune in their logo which symbolizes lightning. The German word Blitzkrieg, which means "lightning war", was a major offensive strategy of the German army during World War II. The lightning bolt is a common insignia for military communications units throughout the world. A lightning bolt is also the NATO symbol for a signal asset. See also Lightning strike Volcanic lightning Paleolightning Apollo 12 – A Saturn V rocket that was struck by lightning shortly after liftoff. 
Harvesting lightning energy Keraunography Keraunomedicine – medical study of lightning casualties Lichtenberg figure Lightning injury Lightning-prediction system Roy Sullivan - Sullivan is recognized by Guinness World Records as the person struck by lightning more recorded times than any other human St. Elmo's fire Upper-atmospheric lightning Vela satellites – satellites which could record lightning superbolts References Citations Sources Further reading This is also available at Sample, in .PDF form, consisting of the book through page 20. Early lightning research. External links World Wide Lightning Location Network Feynman's lecture on lightning Articles containing video clips Atmospheric electricity Electric arcs Electrical breakdown Electrical phenomena Terrestrial plasmas Space plasmas Storm Weather hazards Hazards of outdoor recreation
Lightning
[ "Physics" ]
10,150
[ "Space plasmas", "Electric arcs", "Physical phenomena", "Weather hazards", "Weather", "Plasma phenomena", "Atmospheric electricity", "Astrophysics", "Electrical phenomena", "Electrical breakdown", "Lightning" ]
61,346
https://en.wikipedia.org/wiki/Commutative%20ring
In mathematics, a commutative ring is a ring in which the multiplication operation is commutative. The study of commutative rings is called commutative algebra. Complementarily, noncommutative algebra is the study of ring properties that are not specific to commutative rings. This distinction results from the high number of fundamental properties of commutative rings that do not extend to noncommutative rings. Definition and first examples Definition A ring is a set equipped with two binary operations, i.e. operations combining any two elements of the ring to a third. They are called addition and multiplication and commonly denoted by "" and ""; e.g. and . To form a ring these two operations have to satisfy a number of properties: the ring has to be an abelian group under addition as well as a monoid under multiplication, where multiplication distributes over addition; i.e., . The identity elements for addition and multiplication are denoted and , respectively. If the multiplication is commutative, i.e. then the ring is called commutative. In the remainder of this article, all rings will be commutative, unless explicitly stated otherwise. First examples An important example, and in some sense crucial, is the ring of integers with the two operations of addition and multiplication. As the multiplication of integers is a commutative operation, this is a commutative ring. It is usually denoted as an abbreviation of the German word Zahlen (numbers). A field is a commutative ring where and every non-zero element is invertible; i.e., has a multiplicative inverse such that . Therefore, by definition, any field is a commutative ring. The rational, real and complex numbers form fields. If is a given commutative ring, then the set of all polynomials in the variable whose coefficients are in forms the polynomial ring, denoted . The same holds true for several variables. If is some topological space, for example a subset of some , real- or complex-valued continuous functions on form a commutative ring. The same is true for differentiable or holomorphic functions, when the two concepts are defined, such as for a complex manifold. Divisibility In contrast to fields, where every nonzero element is multiplicatively invertible, the concept of divisibility for rings is richer. An element of ring is called a unit if it possesses a multiplicative inverse. Another particular type of element is the zero divisors, i.e. an element such that there exists a non-zero element of the ring such that . If possesses no non-zero zero divisors, it is called an integral domain (or domain). An element satisfying for some positive integer is called nilpotent. Localizations The localization of a ring is a process in which some elements are rendered invertible, i.e. multiplicative inverses are added to the ring. Concretely, if is a multiplicatively closed subset of (i.e. whenever then so is ) then the localization of at , or ring of fractions with denominators in , usually denoted consists of symbols subject to certain rules that mimic the cancellation familiar from rational numbers. Indeed, in this language is the localization of at all nonzero integers. This construction works for any integral domain instead of . The localization is a field, called the quotient field of . Ideals and modules Many of the following notions also exist for not necessarily commutative rings, but the definitions and properties are usually more complicated. 
For example, all ideals in a commutative ring are automatically two-sided, which simplifies the situation considerably. Modules For a ring , an -module is like what a vector space is to a field. That is, elements in a module can be added; they can be multiplied by elements of subject to the same axioms as for a vector space. The study of modules is significantly more involved than the one of vector spaces, since there are modules that do not have any basis, that is, do not contain a spanning set whose elements are linearly independents. A module that has a basis is called a free module, and a submodule of a free module needs not to be free. A module of finite type is a module that has a finite spanning set. Modules of finite type play a fundamental role in the theory of commutative rings, similar to the role of the finite-dimensional vector spaces in linear algebra. In particular, Noetherian rings (see also , below) can be defined as the rings such that every submodule of a module of finite type is also of finite type. Ideals Ideals of a ring are the submodules of , i.e., the modules contained in . In more detail, an ideal is a non-empty subset of such that for all in , and in , both and are in . For various applications, understanding the ideals of a ring is of particular importance, but often one proceeds by studying modules in general. Any ring has two ideals, namely the zero ideal and , the whole ring. These two ideals are the only ones precisely if is a field. Given any subset of (where is some index set), the ideal generated by is the smallest ideal that contains . Equivalently, it is given by finite linear combinations Principal ideal domains If consists of a single element , the ideal generated by consists of the multiples of , i.e., the elements of the form for arbitrary elements . Such an ideal is called a principal ideal. If every ideal is a principal ideal, is called a principal ideal ring; two important cases are and , the polynomial ring over a field . These two are in addition domains, so they are called principal ideal domains. Unlike for general rings, for a principal ideal domain, the properties of individual elements are strongly tied to the properties of the ring as a whole. For example, any principal ideal domain is a unique factorization domain (UFD) which means that any element is a product of irreducible elements, in a (up to reordering of factors) unique way. Here, an element in a domain is called irreducible if the only way of expressing it as a product is by either or being a unit. An example, important in field theory, are irreducible polynomials, i.e., irreducible elements in , for a field . The fact that is a UFD can be stated more elementarily by saying that any natural number can be uniquely decomposed as product of powers of prime numbers. It is also known as the fundamental theorem of arithmetic. An element is a prime element if whenever divides a product , divides or . In a domain, being prime implies being irreducible. The converse is true in a unique factorization domain, but false in general. Factor ring The definition of ideals is such that "dividing" "out" gives another ring, the factor ring : it is the set of cosets of together with the operations and . For example, the ring (also denoted ), where is an integer, is the ring of integers modulo . It is the basis of modular arithmetic. An ideal is proper if it is strictly smaller than the whole ring. An ideal that is not strictly contained in any proper ideal is called maximal. 
An ideal m is maximal if and only if R / m is a field. Except for the zero ring, any ring (with identity) possesses at least one maximal ideal; this follows from Zorn's lemma. Noetherian rings A ring is called Noetherian (in honor of Emmy Noether, who developed this concept) if every ascending chain of ideals I0 ⊆ I1 ⊆ I2 ⊆ ... becomes stationary, i.e. becomes constant beyond some index n. Equivalently, any ideal is generated by finitely many elements, or, yet equivalent, submodules of finitely generated modules are finitely generated. Being Noetherian is a highly important finiteness condition, and the condition is preserved under many operations that occur frequently in geometry. For example, if R is Noetherian, then so is the polynomial ring R[X] (by Hilbert's basis theorem), any localization of R, and also any factor ring R / I. Any non-Noetherian ring is the union of its Noetherian subrings. This fact, known as Noetherian approximation, allows the extension of certain theorems to non-Noetherian rings. Artinian rings A ring is called Artinian (after Emil Artin), if every descending chain of ideals becomes stationary eventually. Despite the two conditions appearing symmetric, Noetherian rings are much more general than Artinian rings. For example, Z is Noetherian, since every ideal can be generated by one element, but is not Artinian, as the chain (2) ⊋ (4) ⊋ (8) ⊋ ... shows. In fact, by the Hopkins–Levitzki theorem, every Artinian ring is Noetherian. More precisely, Artinian rings can be characterized as the Noetherian rings whose Krull dimension is zero. Spectrum of a commutative ring Prime ideals As was mentioned above, Z is a unique factorization domain. This is not true for more general rings, as algebraists realized in the 19th century. For example, in Z[√−5] there are two genuinely distinct ways of writing 6 as a product: 6 = 2 · 3 = (1 + √−5)(1 − √−5). Prime ideals, as opposed to prime elements, provide a way to circumvent this problem. A prime ideal is a proper (i.e., strictly contained in R) ideal p such that, whenever the product ab of any two ring elements a and b is in p, at least one of the two elements is already in p. (The opposite conclusion holds for any ideal, by definition.) Thus, if a prime ideal is principal, it is equivalently generated by a prime element. However, in rings such as Z[√−5], prime ideals need not be principal. This limits the usage of prime elements in ring theory. A cornerstone of algebraic number theory is, however, the fact that in any Dedekind ring (which includes Z[√−5] and more generally the ring of integers in a number field) any ideal (such as the one generated by 6) decomposes uniquely as a product of prime ideals. Any maximal ideal is a prime ideal or, more briefly, is prime. Moreover, an ideal I is prime if and only if the factor ring R / I is an integral domain. Proving that an ideal is prime, or equivalently that a ring has no zero-divisors, can be very difficult. Yet another way of expressing the same is to say that the complement R ∖ p is multiplicatively closed. The localisation at p is important enough to have its own notation: Rp. This ring has only one maximal ideal, namely pRp. Such rings are called local. Spectrum The spectrum of a ring R, denoted by Spec R, is the set of all prime ideals of R. It is equipped with a topology, the Zariski topology, which reflects the algebraic properties of R: a basis of open subsets is given by the sets D(f) = {p ∈ Spec R : f ∉ p}, where f is any ring element. Interpreting f as a function that takes the value f mod p (i.e., the image of f in the residue field R/p), this subset is the locus where f is non-zero. 
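As a first concrete example (a standard one, stated here only for orientation), the spectrum of the integers can be described completely: the prime ideals of Z are the zero ideal and the ideals generated by prime numbers, so

```latex
\operatorname{Spec}\mathbb{Z} \;=\; \{(0)\} \,\cup\, \{\,(p) : p \text{ a prime number}\,\}.
```

The ideals (p) are the closed points of Spec Z, while (0) is a generic point whose closure is the whole space; the basic open set D(n), for a non-zero integer n, consists of (0) together with the (p) for the primes p not dividing n.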
The spectrum also makes precise the intuition that localisation and factor rings are complementary: the natural maps and correspond, after endowing the spectra of the rings in question with their Zariski topology, to complementary open and closed immersions respectively. Even for basic rings, such as illustrated for at the right, the Zariski topology is quite different from the one on the set of real numbers. The spectrum contains the set of maximal ideals, which is occasionally denoted mSpec (R). For an algebraically closed field k, mSpec (k[T1, ..., Tn] / (f1, ..., fm)) is in bijection with the set Thus, maximal ideals reflect the geometric properties of solution sets of polynomials, which is an initial motivation for the study of commutative rings. However, the consideration of non-maximal ideals as part of the geometric properties of a ring is useful for several reasons. For example, the minimal prime ideals (i.e., the ones not strictly containing smaller ones) correspond to the irreducible components of Spec R. For a Noetherian ring R, Spec R has only finitely many irreducible components. This is a geometric restatement of primary decomposition, according to which any ideal can be decomposed as a product of finitely many primary ideals. This fact is the ultimate generalization of the decomposition into prime ideals in Dedekind rings. Affine schemes The notion of a spectrum is the common basis of commutative algebra and algebraic geometry. Algebraic geometry proceeds by endowing Spec R with a sheaf (an entity that collects functions defined locally, i.e. on varying open subsets). The datum of the space and the sheaf is called an affine scheme. Given an affine scheme, the underlying ring R can be recovered as the global sections of . Moreover, this one-to-one correspondence between rings and affine schemes is also compatible with ring homomorphisms: any f : R → S gives rise to a continuous map in the opposite direction The resulting equivalence of the two said categories aptly reflects algebraic properties of rings in a geometrical manner. Similar to the fact that manifolds are locally given by open subsets of Rn, affine schemes are local models for schemes, which are the object of study in algebraic geometry. Therefore, several notions concerning commutative rings stem from geometric intuition. Dimension The Krull dimension (or dimension) dim R of a ring R measures the "size" of a ring by, roughly speaking, counting independent elements in R. The dimension of algebras over a field k can be axiomatized by four properties: The dimension is a local property: . The dimension is independent of nilpotent elements: if is nilpotent then . The dimension remains constant under a finite extension: if S is an R-algebra which is finitely generated as an R-module, then dim S = dim R. The dimension is calibrated by dim . This axiom is motivated by regarding the polynomial ring in n variables as an algebraic analogue of n-dimensional space. The dimension is defined, for any ring R, as the supremum of lengths n of chains of prime ideals For example, a field is zero-dimensional, since the only prime ideal is the zero ideal. The integers are one-dimensional, since chains are of the form (0) ⊊ (p), where p is a prime number. For non-Noetherian rings, and also non-local rings, the dimension may be infinite, but Noetherian local rings have finite dimension. 
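To see the chain definition at work in the case singled out by the fourth axiom, a chain of prime ideals of length n in the polynomial ring can be written down explicitly (again a standard example rather than anything specific to this article):

```latex
(0) \;\subsetneq\; (T_1) \;\subsetneq\; (T_1, T_2) \;\subsetneq\; \cdots \;\subsetneq\; (T_1, \dots, T_n)
\qquad \text{in } k[T_1, \dots, T_n].
```

Each quotient k[T1, ..., Tn]/(T1, ..., Ti) is a polynomial ring in the remaining variables, hence an integral domain, so every ideal in the chain is prime. This exhibits a chain of length n, showing dim k[T1, ..., Tn] ≥ n; the calibration axiom asserts that this is in fact the exact dimension.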
Among the four axioms above, the first two are elementary consequences of the definition, whereas the remaining two hinge on important facts in commutative algebra, the going-up theorem and Krull's principal ideal theorem. Ring homomorphisms A ring homomorphism or, more colloquially, simply a map, is a map such that These conditions ensure . Similarly as for other algebraic structures, a ring homomorphism is thus a map that is compatible with the structure of the algebraic objects in question. In such a situation S is also called an R-algebra, by understanding that s in S may be multiplied by some r of R, by setting The kernel and image of f are defined by and . The kernel is an ideal of R, and the image is a subring of S. A ring homomorphism is called an isomorphism if it is bijective. An example of a ring isomorphism, known as the Chinese remainder theorem, is where is a product of pairwise distinct prime numbers. Commutative rings, together with ring homomorphisms, form a category. The ring Z is the initial object in this category, which means that for any commutative ring R, there is a unique ring homomorphism Z → R. By means of this map, an integer n can be regarded as an element of R. For example, the binomial formula which is valid for any two elements a and b in any commutative ring R is understood in this sense by interpreting the binomial coefficients as elements of R using this map. Given two R-algebras S and T, their tensor product is again a commutative R-algebra. In some cases, the tensor product can serve to find a T-algebra which relates to Z as S relates to R. For example, Finite generation An R-algebra S is called finitely generated (as an algebra) if there are finitely many elements s1, ..., sn such that any element of s is expressible as a polynomial in the si. Equivalently, S is isomorphic to A much stronger condition is that S is finitely generated as an R-module, which means that any s can be expressed as a R-linear combination of some finite set s1, ..., sn. Local rings A ring is called local if it has only a single maximal ideal, denoted by m. For any (not necessarily local) ring R, the localization at a prime ideal p is local. This localization reflects the geometric properties of Spec R "around p". Several notions and problems in commutative algebra can be reduced to the case when R is local, making local rings a particularly deeply studied class of rings. The residue field of R is defined as Any R-module M yields a k-vector space given by . Nakayama's lemma shows this passage is preserving important information: a finitely generated module M is zero if and only if is zero. Regular local rings The k-vector space m/m2 is an algebraic incarnation of the cotangent space. Informally, the elements of m can be thought of as functions which vanish at the point p, whereas m2 contains the ones which vanish with order at least 2. For any Noetherian local ring R, the inequality holds true, reflecting the idea that the cotangent (or equivalently the tangent) space has at least the dimension of the space Spec R. If equality holds true in this estimate, R is called a regular local ring. A Noetherian local ring is regular if and only if the ring (which is the ring of functions on the tangent cone) is isomorphic to a polynomial ring over k. Broadly speaking, regular local rings are somewhat similar to polynomial rings. Regular local rings are UFD's. Discrete valuation rings are equipped with a function which assign an integer to any element r. 
This number, called the valuation of r can be informally thought of as a zero or pole order of r. Discrete valuation rings are precisely the one-dimensional regular local rings. For example, the ring of germs of holomorphic functions on a Riemann surface is a discrete valuation ring. Complete intersections By Krull's principal ideal theorem, a foundational result in the dimension theory of rings, the dimension of is at least r − n. A ring R is called a complete intersection ring if it can be presented in a way that attains this minimal bound. This notion is also mostly studied for local rings. Any regular local ring is a complete intersection ring, but not conversely. A ring R is a set-theoretic complete intersection if the reduced ring associated to R, i.e., the one obtained by dividing out all nilpotent elements, is a complete intersection. As of 2017, it is in general unknown, whether curves in three-dimensional space are set-theoretic complete intersections. Cohen–Macaulay rings The depth of a local ring R is the number of elements in some (or, as can be shown, any) maximal regular sequence, i.e., a sequence a1, ..., an ∈ m such that all ai are non-zero divisors in For any local Noetherian ring, the inequality holds. A local ring in which equality takes place is called a Cohen–Macaulay ring. Local complete intersection rings, and a fortiori, regular local rings are Cohen–Macaulay, but not conversely. Cohen–Macaulay combine desirable properties of regular rings (such as the property of being universally catenary rings, which means that the (co)dimension of primes is well-behaved), but are also more robust under taking quotients than regular local rings. Constructing commutative rings There are several ways to construct new rings out of given ones. The aim of such constructions is often to improve certain properties of the ring so as to make it more readily understandable. For example, an integral domain that is integrally closed in its field of fractions is called normal. This is a desirable property, for example any normal one-dimensional ring is necessarily regular. Rendering a ring normal is known as normalization. Completions If I is an ideal in a commutative ring R, the powers of I form topological neighborhoods of 0 which allow R to be viewed as a topological ring. This topology is called the I-adic topology. R can then be completed with respect to this topology. Formally, the I-adic completion is the inverse limit of the rings R/In. For example, if k is a field, k[[X]], the formal power series ring in one variable over k, is the I-adic completion of k[X] where I is the principal ideal generated by X. This ring serves as an algebraic analogue of the disk. Analogously, the ring of p-adic integers is the completion of Z with respect to the principal ideal (p). Any ring that is isomorphic to its own completion, is called complete. Complete local rings satisfy Hensel's lemma, which roughly speaking allows extending solutions (of various problems) over the residue field k to R. Homological notions Several deeper aspects of commutative rings have been studied using methods from homological algebra. lists some open questions in this area of active research. Projective modules and Ext functors Projective modules can be defined to be the direct summands of free modules. If R is local, any finitely generated projective module is actually free, which gives content to an analogy between projective modules and vector bundles. 
The Quillen–Suslin theorem asserts that any finitely generated projective module over k[T1, ..., Tn] (k a field) is free, but in general these two concepts differ. A local Noetherian ring is regular if and only if its global dimension is finite, say n, which means that any finitely generated R-module has a resolution by projective modules of length at most n. The proof of this and other related statements relies on the usage of homological methods, such as the Ext functor. This functor is the derived functor of the functor The latter functor is exact if M is projective, but not otherwise: for a surjective map of R-modules, a map need not extend to a map . The higher Ext functors measure the non-exactness of the Hom-functor. The importance of this standard construction in homological algebra stems can be seen from the fact that a local Noetherian ring R with residue field k is regular if and only if vanishes for all large enough n. Moreover, the dimensions of these Ext-groups, known as Betti numbers, grow polynomially in n if and only if R is a local complete intersection ring. A key argument in such considerations is the Koszul complex, which provides an explicit free resolution of the residue field k of a local ring R in terms of a regular sequence. Flatness The tensor product is another non-exact functor relevant in the context of commutative rings: for a general R-module M, the functor is only right exact. If it is exact, M is called flat. If R is local, any finitely presented flat module is free of finite rank, thus projective. Despite being defined in terms of homological algebra, flatness has profound geometric implications. For example, if an R-algebra S is flat, the dimensions of the fibers (for prime ideals p in R) have the "expected" dimension, namely . Properties By Wedderburn's theorem, every finite division ring is commutative, and therefore a finite field. Another condition ensuring commutativity of a ring, due to Jacobson, is the following: for every element r of R there exists an integer such that . If, for every r, the ring is called Boolean ring. More general conditions which guarantee commutativity of a ring are also known. Generalizations Graded-commutative rings A graded ring is called graded-commutative if, for all homogeneous elements a and b, If the Ri are connected by differentials ∂ such that an abstract form of the product rule holds, i.e., R is called a commutative differential graded algebra (cdga). An example is the complex of differential forms on a manifold, with the multiplication given by the exterior product, is a cdga. The cohomology of a cdga is a graded-commutative ring, sometimes referred to as the cohomology ring. A broad range examples of graded rings arises in this way. For example, the Lazard ring is the ring of cobordism classes of complex manifolds. A graded-commutative ring with respect to a grading by Z/2 (as opposed to Z) is called a superalgebra. A related notion is an almost commutative ring, which means that R is filtered in such a way that the associated graded ring is commutative. An example is the Weyl algebra and more general rings of differential operators. Simplicial commutative rings A simplicial commutative ring is a simplicial object in the category of commutative rings. They are building blocks for (connective) derived algebraic geometry. A closely related but more general notion is that of E∞-ring. 
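For reference, the graded-commutativity condition and the graded form of the product rule discussed above can be written out symbolically; here |a| denotes the degree of a homogeneous element a, and ∂ is the differential of the cdga.

```latex
a \cdot b = (-1)^{|a|\,|b|}\; b \cdot a ,
\qquad
\partial(a \cdot b) = \partial(a)\cdot b + (-1)^{|a|}\, a \cdot \partial(b).
```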
Applications of the commutative rings Holomorphic functions Algebraic K-theory Topological K-theory Divided power structures Witt vectors Hecke algebra (used in Wiles's proof of Fermat's Last Theorem) Fontaine's period rings Cluster algebra Convolution algebra (of a commutative group) Fréchet algebra See also Almost ring, a certain generalization of a commutative ring Divisibility (ring theory): nilpotent element, (ex. dual numbers) Ideals and modules: Radical of an ideal, Morita equivalence Ring homomorphisms: integral element: Cayley–Hamilton theorem, Integrally closed domain, Krull ring, Krull–Akizuki theorem, Mori–Nagata theorem Primes: Prime avoidance lemma, Jacobson radical, Nilradical of a ring, Spectrum: Compact space, Connected ring, Differential calculus over commutative algebras, Banach–Stone theorem Local rings: Gorenstein local ring (also used in Wiles's proof of Fermat's Last Theorem): Duality (mathematics), Eben Matlis; Dualizing module, Popescu's theorem, Artin approximation theorem. Notes Citations References Further reading (Reprinted 1975–76 by Springer as volumes 28–29 of Graduate Texts in Mathematics.) Commutative algebra Ring theory Algebraic structures
Commutative ring
[ "Mathematics" ]
5,660
[ "Mathematical structures", "Mathematical objects", "Ring theory", "Fields of abstract algebra", "Algebraic structures", "Commutative algebra" ]
61,351
https://en.wikipedia.org/wiki/Laurent%20polynomial
In mathematics, a Laurent polynomial (named after Pierre Alphonse Laurent) in one variable over a field K is a linear combination of positive and negative powers of the variable with coefficients in K. Laurent polynomials in X form a ring denoted K[X, X^-1]. They differ from ordinary polynomials in that they may have terms of negative degree. The construction of Laurent polynomials may be iterated, leading to the ring of Laurent polynomials in several variables. Laurent polynomials are of particular importance in the study of complex variables. Definition A Laurent polynomial with coefficients in a field K is an expression of the form p = Σ_k p_k X^k, where X is a formal variable, the summation index k is an integer (not necessarily positive) and only finitely many coefficients p_k are non-zero. Two Laurent polynomials are equal if their coefficients are equal. Such expressions can be added, multiplied, and brought back to the same form by reducing similar terms. Formulas for addition and multiplication are exactly the same as for the ordinary polynomials, with the only difference that both positive and negative powers of X can be present: Σ_i a_i X^i + Σ_i b_i X^i = Σ_i (a_i + b_i) X^i and (Σ_i a_i X^i) · (Σ_j b_j X^j) = Σ_k (Σ_{i+j=k} a_i b_j) X^k. Since only finitely many coefficients a_i and b_j are non-zero, all sums in effect have only finitely many terms, and hence represent Laurent polynomials. Properties A Laurent polynomial over K may be viewed as a Laurent series in which only finitely many coefficients are non-zero. The ring of Laurent polynomials K[X, X^-1] is an extension of the polynomial ring K[X] obtained by "inverting X". More rigorously, it is the localization of the polynomial ring in the multiplicative set consisting of the non-negative powers of X. Many properties of the Laurent polynomial ring follow from the general properties of localization. The ring of Laurent polynomials is a subring of the rational functions. The ring of Laurent polynomials over a field is Noetherian (but not Artinian). If R is an integral domain, the units of the Laurent polynomial ring R[X, X^-1] have the form uX^k, where u is a unit of R and k is an integer. In particular, if K is a field then the units of K[X, X^-1] have the form aX^k, where a is a non-zero element of K. The Laurent polynomial ring K[X, X^-1] is isomorphic to the group ring of the group Z of integers over K. More generally, the Laurent polynomial ring in n variables is isomorphic to the group ring of the free abelian group of rank n. It follows that the Laurent polynomial ring can be endowed with a structure of a commutative, cocommutative Hopf algebra. See also Jones polynomial References Commutative algebra Polynomials Ring theory
Laurent polynomial
[ "Mathematics" ]
485
[ "Polynomials", "Ring theory", "Fields of abstract algebra", "Commutative algebra", "Algebra" ]
61,361
https://en.wikipedia.org/wiki/Boy%27s%20surface
In geometry, Boy's surface is an immersion of the real projective plane in three-dimensional space. It was discovered in 1901 by the German mathematician Werner Boy, who had been tasked by his doctoral thesis advisor David Hilbert to prove that the projective plane could not be immersed in three-dimensional space. Boy's surface was first parametrized explicitly by Bernard Morin in 1978. Another parametrization was discovered by Rob Kusner and Robert Bryant. Boy's surface is one of the two possible immersions of the real projective plane which have only a single triple point. Unlike the Roman surface and the cross-cap, it has no other singularities than self-intersections (that is, it has no pinch-points). Parametrization Boy's surface can be parametrized in several ways. One parametrization, discovered by Rob Kusner and Robert Bryant, is the following: given a complex number w whose magnitude is less than or equal to one (), let and then set we then obtain the Cartesian coordinates x, y, and z of a point on the Boy's surface. If one performs an inversion of this parametrization centered on the triple point, one obtains a complete minimal surface with three ends (that's how this parametrization was discovered naturally). This implies that the Bryant–Kusner parametrization of Boy's surfaces is "optimal" in the sense that it is the "least bent" immersion of a projective plane into three-space. Property of Bryant–Kusner parametrization If w is replaced by the negative reciprocal of its complex conjugate, then the functions g1, g2, and g3 of w are left unchanged. By replacing in terms of its real and imaginary parts , and expanding resulting parameterization, one may obtain a parameterization of Boy's surface in terms of rational functions of and . This shows that Boy's surface is not only an algebraic surface, but even a rational surface. The remark of the preceding paragraph shows that the generic fiber of this parameterization consists of two points (that is that almost every point of Boy's surface may be obtained by two parameters values). Relation to the real projective plane Let be the Bryant–Kusner parametrization of Boy's surface. Then This explains the condition on the parameter: if then However, things are slightly more complicated for In this case, one has This means that, if the point of the Boy's surface is obtained from two parameter values: In other words, the Boy's surface has been parametrized by a disk such that pairs of diametrically opposite points on the perimeter of the disk are equivalent. This shows that the Boy's surface is the image of the real projective plane, RP2 by a smooth map. That is, the parametrization of the Boy's surface is an immersion of the real projective plane into the Euclidean space. Symmetries Boy's surface has 3-fold symmetry. This means that it has an axis of discrete rotational symmetry: any 120° turn about this axis will leave the surface looking exactly the same. The Boy's surface can be cut into three mutually congruent pieces. Applications Boy's surface can be used in sphere eversion as a half-way model. A half-way model is an immersion of the sphere with the property that a rotation interchanges inside and outside, and so can be employed to evert (turn inside-out) a sphere. Boy's (the case p = 3) and Morin's (the case p = 2) surfaces begin a sequence of half-way models with higher symmetry first proposed by George Francis, indexed by the even integers 2p (for p odd, these immersions can be factored through a projective plane). 
Kusner's parametrization yields all these. Models Model at Oberwolfach The Oberwolfach Research Institute for Mathematics has a large model of a Boy's surface outside the entrance, constructed and donated by Mercedes-Benz in January 1991. This model has 3-fold rotational symmetry and minimizes the Willmore energy of the surface. It consists of steel strips representing the image of a polar coordinate grid under a parameterization given by Robert Bryant and Rob Kusner. The meridians (rays) become ordinary Möbius strips, i.e. twisted by 180 degrees. All but one of the strips corresponding to circles of latitude (radial circles around the origin) are untwisted, while the one corresponding to the boundary of the unit circle is a Möbius strip twisted by three times 180 degrees — as is the emblem of the institute . Model made for Clifford Stoll A model was made in glass by glassblower Lucas Clarke, with the cooperation of Adam Savage, for presentation to Clifford Stoll. It was featured on Adam Savage's YouTube channel, Tested. All three appeared in the video discussing it. References Citations Sources This describes a piecewise linear model of Boy's surface. Article on the cover illustration that accompanies the Rob Kirby article. . Sanderson, B. Boy's will be Boy's, (undated, 2006 or earlier). External links Boy's surface at MathCurve; contains various visualizations, various equations, useful links and references A planar unfolding of the Boy's surface – applet from Plus Magazine. Boy's surface resources, including the original article, and an embedding of a topologist in the Oberwolfach Boy's surface. A LEGO Boy's surface A paper model of Boy's surface – pattern and instructions A model of Boy's surface in Constructive Solid Geometry together with assembling instructions Boy's surface visualization video from the Mathematical Institute of the Serbian Academy of the Arts and Sciences This Object Should've Been Impossible to Make Adam Savage making a museum stand for a glass model of the surface Surfaces Geometric topology Eponyms in geometry
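For readers who want to experiment with the Bryant–Kusner parametrization discussed in the Parametrization section, the following Python sketch evaluates it on a polar grid of the unit disk. The explicit formulas for g1, g2 and g3 are not reproduced in the text above; the expressions used below are the ones commonly quoted for this parametrization and should be treated as an assumption rather than as a transcription of the article:

```python
import numpy as np

SQRT5 = np.sqrt(5.0)

def bryant_kusner(w):
    """Map complex w with |w| <= 1 to a point (x, y, z) of Boy's surface.
    The formulas below are the commonly quoted Bryant-Kusner expressions
    (an assumption here, since the article's formulas are not shown above)."""
    d = w**6 + SQRT5 * w**3 - 1            # common denominator
    g1 = -1.5 * (w * (1 - w**4) / d).imag
    g2 = -1.5 * (w * (1 + w**4) / d).real
    g3 = ((1 + w**6) / d).imag - 0.5
    g = g1**2 + g2**2 + g3**2
    return g1 / g, g2 / g, g3 / g

# Sample the closed unit disk on a polar grid and evaluate the immersion.
r = np.linspace(0.0, 1.0, 50)
theta = np.linspace(0.0, 2 * np.pi, 100)
R, T = np.meshgrid(r, theta)
W = R * np.exp(1j * T)
X, Y, Z = bryant_kusner(W)                 # evaluates componentwise on the grid
print(X.shape, float(X.max()), float(Z.min()))
```

The resulting X, Y, Z arrays can be fed to any 3-D surface plotter; identifying diametrically opposite boundary points of the disk, as explained above, recovers the immersed projective plane.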
Boy's surface
[ "Mathematics" ]
1,224
[ "Eponyms in geometry", "Topology", "Geometry", "Geometric topology" ]
61,373
https://en.wikipedia.org/wiki/Golden%20ratio%20base
Golden ratio base is a non-integer positional numeral system that uses the golden ratio (the irrational number (1 + √5)/2 ≈ 1.61803399, symbolized by the Greek letter φ) as its base. It is sometimes referred to as base-φ, golden mean base, phi-base, or, colloquially, phinary. Any non-negative real number can be represented as a base-φ numeral using only the digits 0 and 1, and avoiding the digit sequence "11" – this is called a standard form. A base-φ numeral that includes the digit sequence "11" can always be rewritten in standard form, using the algebraic properties of the base φ — most notably that φ² = φ + 1, so that φ^(n+1) = φ^n + φ^(n−1). For instance, 11φ = 100φ. Despite using an irrational number base, when using standard form, all non-negative integers have a unique representation as a terminating (finite) base-φ expansion. The set of numbers which possess a finite base-φ representation is the ring Z[1/φ]; it plays the same role in this numeral system as the dyadic rationals play in binary numbers, making multiplication within the system possible. Other numbers have standard representations in base-φ, with rational numbers having recurring representations. These representations are unique, except that numbers with a terminating expansion also have a non-terminating expansion. For example, 1 = 0.1010101… in base-φ just as 1 = 0.99999… in decimal. Examples Writing golden ratio base numbers in standard form In the following example of conversion from non-standard to standard form, the notation 1̄ is used to represent the signed digit −1. 211.01̄φ is not a standard base-φ numeral, since it contains a "11" and additionally a "2" and a "1̄" = −1, which are not "0" or "1". To put a numeral in standard form, we may use substitutions derived from the identities φ^(n+1) = φ^n + φ^(n−1) and 2φ^n = φ^(n+1) + φ^(n−2), for example 011φ = 100φ and 0200φ = 1001φ, together with the analogous rules for eliminating the signed digit 1̄. The substitutions may be applied in any order we like, as the result will be the same. Any positive number with a non-standard terminating base-φ representation can be uniquely standardized in this manner. If we get to a point where all digits are "0" or "1", except for the first digit being negative, then the number is negative. (The exception to this is when the first digit is negative one and the next two digits are one, like 1̄111.001 = 1.001.) This can be converted to the negative of a base-φ representation by negating every digit, standardizing the result, and then marking it as negative. For example, one can use a minus sign or some other notation to denote negative numbers. Representing integers as golden ratio base numbers We can either consider our integer to be the (only) digit of a nonstandard base-φ numeral, and standardize it, or do the following: 1 × 1 = 1, φ × φ = 1 + φ and 1/φ = −1 + φ. Therefore, we can compute (a + bφ) + (c + dφ) = ((a + c) + (b + d)φ), (a + bφ) − (c + dφ) = ((a − c) + (b − d)φ) and (a + bφ) × (c + dφ) = ((ac + bd) + (ad + bc + bd)φ). So, using integer values only, we can add, subtract and multiply numbers of the form (a + bφ), and even represent positive and negative integer powers of φ. (a + bφ) > (c + dφ) if and only if 2(a − c) − (d − b) > (d − b) × √5. If one side is negative, the other positive, the comparison is trivial. Otherwise, square both sides, to get an integer comparison, reversing the comparison direction if both sides were negative. On squaring both sides, the √5 is replaced with the integer 5. So, using integer values only, we can also compare numbers of the form (a + bφ). To convert an integer x to a base-φ number, note that x = (x + 0φ). 
Subtract the highest power of φ, which is still smaller than the number we have, to get our new number, and record a "1" in the appropriate place in the resulting base-φ number. Unless our number is 0, go to step 2. Finished. The above procedure will never result in the sequence "11", since 11φ = 100φ, so getting a "11" would mean we missed a "1" prior to the sequence "11". Start, e.g., with integer = 5, with the result so far being ...00000.00000...φ Highest power of φ ≤ 5 is φ3 = 1 + 2φ ≈ 4.236067977 Subtracting this from 5, we have 5 − (1 + 2φ) = 4 − 2φ ≈ 0.763932023..., the result so far being 1000.00000...φ Highest power of φ ≤ 4 − 2φ ≈ 0.763932023... is φ−1 = −1 + 1φ ≈ 0.618033989... Subtracting this from 4 − 2φ ≈ 0.763932023..., we have 4 − 2φ − (−1 + 1φ) = 5 − 3φ ≈ 0.145898034..., the result so far being 1000.10000...φ Highest power of φ ≤ 5 − 3φ ≈ 0.145898034... is φ−4 = 5 − 3φ ≈ 0.145898034... Subtracting this from 5 − 3φ ≈ 0.145898034..., we have 5 − 3φ − (5 − 3φ) = 0 + 0φ = 0, with the final result being 1000.1001φ. Non-uniqueness Just as with any base-n system, numbers with a terminating representation have an alternative recurring representation. In base-10, this relies on the observation that 0.999...=1. In base-φ, the numeral 0.1010101... can be seen to be equal to 1 in several ways: Conversion to nonstandard form: 1 = 0.11φ = 0.1011φ = 0.101011φ = ... = 0.10101010...φ Geometric series: 1.0101010...φ is equal to Difference between "shifts": φ2 x − x = 10.101010...φ − 0.101010...φ = 10φ = φ so that x = = 1 This non-uniqueness is a feature of the numeration system, since both 1.0000 and 0.101010... are in standard form. In general, the final 1 of any number in base-φ can be replaced with a recurring 01 without changing the value of that number. Representing rational numbers as golden ratio base numbers Every non-negative rational number can be represented as a recurring base-φ expansion, as can any non-negative element of the field Q[] = Q + Q, the field generated by the rational numbers and . Conversely any recurring (or terminating) base-φ expansion is a non-negative element of Q[]. For recurring decimals, the recurring part has been overlined: = 0.010φ = 0.00101000φ = 0.001000φ = 0.001001010100100100φ = 0.000010000100010100001010001010101000100101000001001000100000φ The justification that a rational gives a recurring expansion is analogous to the equivalent proof for a base-n numeration system (n = 2,3,4,...). Essentially in base-φ long division there are only a finite number of possible remainders, and so once there must be a recurring pattern. For example, with = = long division looks like this (note that base-φ subtraction may be hard to follow at first): .0 1 0 0 1 1 0 0 1 ) 1 0 0.0 0 0 0 0 0 0 0 1 0 0 1 trade: 10000 = 1100 = 1011 ------- so 10000 − 1001 = 1011 − 1001 = 10 1 0 0 0 0 1 0 0 1 ------- etc. The converse is also true, in that a number with a recurring base-φ; representation is an element of the field Q[]. This follows from the observation that a recurring representation with period k involves a geometric series with ratio φ−k, which will sum to an element of Q[]. 
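The greedy procedure just described is easy to automate. The short Python sketch below uses floating-point arithmetic for the comparisons (the all-integer comparison of numbers a + bφ described above would be the exact alternative) and reproduces the worked example for 5:

```python
from math import sqrt

PHI = (1 + sqrt(5)) / 2              # the base

def to_base_phi(n, frac_digits=8):
    """Greedy conversion of a non-negative integer n to a base-phi digit string."""
    if n == 0:
        return "0"
    k = 0                            # largest exponent with PHI**k <= n
    while PHI ** (k + 1) <= n:
        k += 1
    x, out = float(n), []
    for p in range(k, -frac_digits - 1, -1):
        if p == -1:
            out.append(".")
        if PHI ** p <= x + 1e-9:     # tolerance for floating-point rounding
            out.append("1")
            x -= PHI ** p
        else:
            out.append("0")
    return "".join(out).rstrip("0").rstrip(".")

print(to_base_phi(5))                # -> 1000.1001, matching the example above
print(to_base_phi(10))               # -> 10100.0101
```

Because each step removes the largest power of φ that fits, the output can never contain "11", in line with the argument given above.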
Representing irrational numbers of note as golden ratio base numbers The base-φ representations of some interesting numbers: ≈ 100.0100 1010 1001 0001 0101 0100 0001 0100 ...φ ≈ 100.0000 1000 0100 1000 0000 0100 ...φ ≈ 1.0100 0001 0100 1010 0100 0000 0101 0000 0000 0101 ...φ = 10.1φ Addition, subtraction, and multiplication It is possible to adapt all the standard algorithms of base-10 arithmetic to base-φ arithmetic. There are two approaches to this: Calculate, then convert to standard form For addition of two base-φ numbers, add each pair of digits, without carry, and then convert the numeral to standard form. For subtraction, subtract each pair of digits without borrow (borrow is a negative amount of carry), and then convert the numeral to standard form. For multiplication, multiply in the typical base-10 manner, without carry, then convert the numeral to standard form. For example, 2 + 3 = 10.01 + 100.01 = 110.02 = 110.1001 = 1000.1001 2 × 3 = 10.01 × 100.01 = 1000.1 + 1.0001 = 1001.1001 = 1010.0001 7 − 2 = 10000.0001 − 10.01 = 10010.0101 = 1110.0101 = 1001.0101 = 1000.1001 Avoid digits other than 0 and 1 A more "native" approach is to avoid having to add digits 1+1 or to subtract 0 – 1. This is done by reorganising the operands into nonstandard form so that these combinations do not occur. For example, 2 + 3 = 10.01 + 100.01 = 10.01 + 100.0011 = 110.0111 = 1000.1001 7 − 2 = 10000.0001 − 10.01 = 1100.0001 − 10.01 = 1011.0001 − 10.01 = 1010.1101 − 10.01 = 1000.1001 The subtraction seen here uses a modified form of the standard "trading" algorithm for subtraction. Division No non-integer rational number can be represented as a finite base-φ number. In other words, all finitely representable base-φ numbers are either integers or (more likely) an irrational in a quadratic field Q[]. Due to long division having only a finite number of possible remainders, a division of two integers (or other numbers with finite base-φ representation) will have a recurring expansion, as demonstrated above. Relationship with Fibonacci coding Fibonacci coding is a closely related numeration system used for integers. In this system, only digits 0 and 1 are used and the place values of the digits are the Fibonacci numbers. As with base-φ, the digit sequence "11" is avoided by rearranging to a standard form, using the Fibonacci recurrence relation Fk+1 = Fk + Fk−1. For example, 30 = 1×21 + 0×13 + 1×8 + 0×5 + 0×3 + 0×2 + 1×1 + 0×1 = 10100010fib. Practical usage It is possible to mix base-φ arithmetic with Fibonacci integer sequences. The sum of numbers in a General Fibonacci integer sequence that correspond with the nonzero digits in the base-φ number, is the multiplication of the base-φ number and the element at the zero-position in the sequence. For example: product 10 (10100.0101 base-φ) and 25 (zero position) = 5 + 10 + 65 + 170 = 250 base-φ: 1 0 1 0 0. 0 1 0 1 partial sequence: ... 5 5 10 15 25 40 65 105 170 275 445 720 1165 ... product 10 (10100.0101 base-φ) and 65 (zero position) = 10 + 25 + 170 + 445 = 650 base-φ: 1 0 1 0 0. 0 1 0 1 partial sequence: ... 5 5 10 15 25 40 65 105 170 275 445 720 1165 ... See also Beta encoder – Originally used golden ratio base Ostrowski numeration References External links Using Powers of Phi to represent Integers (Base Phi) Non-standard positional numeral systems Golden ratio
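A companion sketch for the Fibonacci coding mentioned above, written here in the plain Zeckendorf convention whose place values are 1, 2, 3, 5, 8, …; the article's string 10100010fib carries one extra trailing digit for the second unit place, so it corresponds to the output below followed by a 0:

```python
def zeckendorf(n):
    """Fibonacci (Zeckendorf) coding of a positive integer: greedily take the
    largest Fibonacci number that fits, which automatically avoids '11'."""
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    digits = []
    for f in reversed(fibs[:-1]):    # place values, most significant first
        if f <= n:
            digits.append("1")
            n -= f
        else:
            digits.append("0")
    return "".join(digits).lstrip("0") or "0"

print(zeckendorf(30))                # -> 1010001  (21 + 8 + 1 = 30)
```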
Golden ratio base
[ "Mathematics" ]
2,878
[ "Golden ratio" ]
61,388
https://en.wikipedia.org/wiki/Deoxyribose
Deoxyribose, or more precisely 2-deoxyribose, is a monosaccharide with idealized formula H−(C=O)−(CH2)−(CHOH)3−H. Its name indicates that it is a deoxy sugar, meaning that it is derived from the sugar ribose by loss of a hydroxy group. Discovered in 1929 by Phoebus Levene, deoxyribose is most notable for its presence in DNA. Since the pentose sugars arabinose and ribose only differ by the stereochemistry at C2′, 2-deoxyribose and 2-deoxyarabinose are equivalent, although the latter term is rarely used because ribose, not arabinose, is the precursor to deoxyribose. Structure Several isomers exist with the formula H−(C=O)−(CH2)−(CHOH)3−H, but in deoxyribose all the hydroxyl groups are on the same side in the Fischer projection. The term "2-deoxyribose" may refer to either of two enantiomers: the biologically important D-2-deoxyribose and the rarely encountered mirror image L-2-deoxyribose. D-2-deoxyribose is a precursor to the nucleic acid DNA. 2-deoxyribose is an aldopentose, that is, a monosaccharide with five carbon atoms and having an aldehyde functional group. In aqueous solution, deoxyribose primarily exists as a mixture of three structures: the linear form H−(C=O)−(CH2)−(CHOH)3−H and two ring forms, deoxyribofuranose ("C3′-endo"), with a five-membered ring, and deoxyribopyranose ("C2′-endo"), with a six-membered ring. The latter form is predominant (whereas the C3′-endo form is favored for ribose). Biological importance As a component of DNA, 2-deoxyribose derivatives have an important role in biology. The DNA (deoxyribonucleic acid) molecule, which is the main repository of genetic information in life, consists of a long chain of deoxyribose-containing units called nucleotides, linked via phosphate groups. In the standard nucleic acid nomenclature, a DNA nucleotide consists of a deoxyribose molecule with an organic base (usually adenine, thymine, guanine or cytosine) attached to the 1′ ribose carbon. The 5′ hydroxyl of each deoxyribose unit is replaced by a phosphate (forming a nucleotide) that is attached to the 3′ carbon of the deoxyribose in the preceding unit. The absence of the 2′ hydroxyl group in deoxyribose is apparently responsible for the increased mechanical flexibility of DNA compared to RNA, which allows it to assume the double-helix conformation, and also (in the eukaryotes) to be compactly coiled within the small cell nucleus. The double-stranded DNA molecules are also typically much longer than RNA molecules. The backbone of RNA and DNA are structurally similar, but RNA is single stranded, and made from ribose as opposed to deoxyribose. Other biologically important derivatives of deoxyribose include mono-, di-, and triphosphates, as well as 3′-5′ cyclic monophosphates. Biosynthesis Deoxyribose is generated from ribose 5-phosphate by enzymes called ribonucleotide reductases. These enzymes catalyse the deoxygenation process. Angiogenesis In one study, deoxyribose was shown to have pro-angiogenic properties when applied topically in a gel to wounds in rats. In addition, this topical gel also increased Vascular Endothelial Growth Factor (VEGF), which has been implicated in hair growth. This could potentially lead to future products to treat hair loss in humans. References Aldopentoses Deoxy sugars 1929 in biology Furanoses Pyranoses
Deoxyribose
[ "Chemistry" ]
908
[ "Deoxy sugars", "Carbohydrates" ]
61,419
https://en.wikipedia.org/wiki/Tokenization%20%28data%20security%29
Tokenization, when applied to data security, is the process of substituting a sensitive data element with a non-sensitive equivalent, referred to as a token, that has no intrinsic or exploitable meaning or value. The token is a reference (i.e. identifier) that maps back to the sensitive data through a tokenization system. The mapping from original data to a token uses methods that render tokens infeasible to reverse in the absence of the tokenization system, for example using tokens created from random numbers. A one-way cryptographic function is used to convert the original data into tokens, making it difficult to recreate the original data without obtaining entry to the tokenization system's resources. To deliver such services, the system maintains a vault database of tokens that are connected to the corresponding sensitive data. Protecting the system vault is vital to the system, and improved processes must be put in place to offer database integrity and physical security. The tokenization system must be secured and validated using security best practices applicable to sensitive data protection, secure storage, audit, authentication and authorization. The tokenization system provides data processing applications with the authority and interfaces to request tokens, or detokenize back to sensitive data. The security and risk reduction benefits of tokenization require that the tokenization system is logically isolated and segmented from data processing systems and applications that previously processed or stored sensitive data replaced by tokens. Only the tokenization system can tokenize data to create tokens, or detokenize back to redeem sensitive data under strict security controls. The token generation method must be proven to have the property that there is no feasible means through direct attack, cryptanalysis, side channel analysis, token mapping table exposure or brute force techniques to reverse tokens back to live data. Replacing live data with tokens in systems is intended to minimize exposure of sensitive data to those applications, stores, people and processes, reducing risk of compromise or accidental exposure and unauthorized access to sensitive data. Applications can operate using tokens instead of live data, with the exception of a small number of trusted applications explicitly permitted to detokenize when strictly necessary for an approved business purpose. Tokenization systems may be operated in-house within a secure isolated segment of the data center, or as a service from a secure service provider. Tokenization may be used to safeguard sensitive data involving, for example, bank accounts, financial statements, medical records, criminal records, driver's licenses, loan applications, stock trades, voter registrations, and other types of personally identifiable information (PII). Tokenization is often used in credit card processing. The PCI Council defines tokenization as "a process by which the primary account number (PAN) is replaced with a surrogate value called a token. A PAN may be linked to a reference number through the tokenization process. In this case, the merchant simply has to retain the token and a reliable third party controls the relationship and holds the PAN. The token may be created independently of the PAN, or the PAN can be used as part of the data input to the tokenization technique. The communication between the merchant and the third-party supplier must be secure to prevent an attacker from intercepting to gain the PAN and the token. 
De-tokenization is the reverse process of redeeming a token for its associated PAN value. The security of an individual token relies predominantly on the infeasibility of determining the original PAN knowing only the surrogate value". The choice of tokenization as an alternative to other techniques such as encryption will depend on varying regulatory requirements, interpretation, and acceptance by respective auditing or assessment entities. This is in addition to any technical, architectural or operational constraint that tokenization imposes in practical use. Concepts and origins The concept of tokenization, as adopted by the industry today, has existed since the first currency systems emerged centuries ago as a means to reduce risk in handling high value financial instruments by replacing them with surrogate equivalents. In the physical world, coin tokens have a long history of use replacing the financial instrument of minted coins and banknotes. In more recent history, subway tokens and casino chips found adoption for their respective systems to replace physical currency and cash handling risks such as theft. Exonumia and scrip are terms synonymous with such tokens. In the digital world, similar substitution techniques have been used since the 1970s as a means to isolate real data elements from exposure to other data systems. In databases for example, surrogate key values have been used since 1976 to isolate data associated with the internal mechanisms of databases and their external equivalents for a variety of uses in data processing. More recently, these concepts have been extended to consider this isolation tactic to provide a security mechanism for the purposes of data protection. In the payment card industry, tokenization is one means of protecting sensitive cardholder data in order to comply with industry standards and government regulations. Tokenization was applied to payment card data by Shift4 Corporation and released to the public during an industry Security Summit in Las Vegas, Nevada in 2005. The technology is meant to prevent the theft of the credit card information in storage. Shift4 defines tokenization as: “The concept of using a non-decryptable piece of data to represent, by reference, sensitive or secret data. In payment card industry (PCI) context, tokens are used to reference cardholder data that is managed in a tokenization system, application or off-site secure facility.” To protect data over its full lifecycle, tokenization is often combined with end-to-end encryption to secure data in transit to the tokenization system or service, with a token replacing the original data on return. For example, to avoid the risks of malware stealing data from low-trust systems such as point of sale (POS) systems, as in the Target breach of 2013, cardholder data encryption must take place prior to card data entering the POS and not after. Encryption takes place within the confines of a security hardened and validated card reading device and data remains encrypted until received by the processing host, an approach pioneered by Heartland Payment Systems as a means to secure payment data from advanced threats, now widely adopted by industry payment processing companies and technology companies. The PCI Council has also specified end-to-end encryption (certified point-to-point encryption—P2PE) for various service implementations in various PCI Council Point-to-point Encryption documents. 
The tokenization process The process of tokenization consists of the following steps: The application sends the tokenization data and authentication information to the tokenization system. It is stopped if authentication fails and the data is delivered to an event management system. As a result, administrators can discover problems and effectively manage the system. The system moves on to the next phase if authentication is successful. Using one-way cryptographic techniques, a token is generated and kept in a highly secure data vault. The new token is provided to the application for further use. Tokenization systems share several components according to established standards. Token Generation is the process of producing a token using any means, such as mathematically reversible cryptographic functions based on strong encryption algorithms and key management mechanisms, one-way nonreversible cryptographic functions (e.g., a hash function with strong, secret salt), or assignment via a randomly generated number. Random Number Generator (RNG) techniques are often the best choice for generating token values. Token Mapping – this is the process of assigning the created token value to its original value. To enable permitted look-ups of the original value using the token as the index, a secure cross-reference database must be constructed. Token Data Store – this is a central repository for the Token Mapping process that holds the original values as well as the related token values after the Token Generation process. On data servers, sensitive data and token values must be securely kept in encrypted format. Encrypted Data Storage – this is the encryption of sensitive data while it is in transit. Management of Cryptographic Keys. Strong key management procedures are required for sensitive data encryption on Token Data Stores. Difference from encryption Tokenization and “classic” encryption effectively protect data if implemented properly, and a computer security system may use both. While similar in certain regards, tokenization and classic encryption differ in a few key aspects. Both are cryptographic data security methods and they essentially have the same function, however they do so with differing processes and have different effects on the data they are protecting. Tokenization is a non-mathematical approach that replaces sensitive data with non-sensitive substitutes without altering the type or length of data. This is an important distinction from encryption because changes in data length and type can render information unreadable in intermediate systems such as databases. Tokenized data can still be processed by legacy systems which makes tokenization more flexible than classic encryption. In many situations, the encryption process is a constant consumer of processing power, hence such a system needs significant expenditures in specialized hardware and software. Another difference is that tokens require significantly less computational resources to process. With tokenization, specific data is kept fully or partially visible for processing and analytics while sensitive information is kept hidden. This allows tokenized data to be processed more quickly and reduces the strain on system resources. This can be a key advantage in systems that rely on high performance. In comparison to encryption, tokenization technologies reduce time, expense, and administrative effort while enabling teamwork and communication. 
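To make the components listed above concrete (token generation, token mapping, token data store), here is a deliberately simplified, in-memory Python sketch of a vault-based tokenization flow. It is an illustration of the concept only, not any vendor's implementation; a real system would add authentication, audit logging, encryption at rest, access controls on detokenization, and high availability:

```python
import secrets

class ToyTokenVault:
    """Illustrative sketch of a vault-based tokenization system (concept only)."""

    def __init__(self):
        self._token_to_pan = {}   # token data store (would be encrypted at rest)
        self._pan_to_token = {}   # reverse index so a PAN always maps to one token

    def tokenize(self, pan: str) -> str:
        if pan in self._pan_to_token:
            return self._pan_to_token[pan]
        # Token generation via a cryptographically secure random source;
        # the token carries no mathematical relationship to the PAN.
        token = secrets.token_hex(8)
        self._token_to_pan[token] = pan
        self._pan_to_token[pan] = token
        return token

    def detokenize(self, token: str) -> str:
        # In a real deployment this call is restricted to trusted applications.
        return self._token_to_pan[token]

vault = ToyTokenVault()
t = vault.tokenize("4111111111111111")
print(t, vault.detokenize(t))
```

Because the token is drawn at random, reversing it without access to the vault's mapping table is infeasible by construction, which is the property the process description above requires of token generation.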
Types of tokens There are many ways that tokens can be classified however there is currently no unified classification. Tokens can be: single or multi-use, cryptographic or non-cryptographic, reversible or irreversible, authenticable or non-authenticable, and various combinations thereof. In the context of payments, the difference between high and low value tokens plays a significant role. High-value tokens (HVTs) HVTs serve as surrogates for actual PANs in payment transactions and are used as an instrument for completing a payment transaction. In order to function, they must look like actual PANs. Multiple HVTs can map back to a single PAN and a single physical credit card without the owner being aware of it. Additionally, HVTs can be limited to certain networks and/or merchants whereas PANs cannot. HVTs can also be bound to specific devices so that anomalies between token use, physical devices, and geographic locations can be flagged as potentially fraudulent. HVT blocking enhances efficiency by reducing computational costs while maintaining accuracy and reducing record linkage as it reduces the number of records that are compared. Low-value tokens (LVTs) or security tokens LVTs also act as surrogates for actual PANs in payment transactions, however they serve a different purpose. LVTs cannot be used by themselves to complete a payment transaction. In order for an LVT to function, it must be possible to match it back to the actual PAN it represents, albeit only in a tightly controlled fashion. Using tokens to protect PANs becomes ineffectual if a tokenization system is breached, therefore securing the tokenization system itself is extremely important. System operations, limitations and evolution First generation tokenization systems use a database to map from live data to surrogate substitute tokens and back. This requires the storage, management, and continuous backup for every new transaction added to the token database to avoid data loss. Another problem is ensuring consistency across data centers, requiring continuous synchronization of token databases. Significant consistency, availability and performance trade-offs, per the CAP theorem, are unavoidable with this approach. This overhead adds complexity to real-time transaction processing to avoid data loss and to assure data integrity across data centers, and also limits scale. Storing all sensitive data in one service creates an attractive target for attack and compromise, and introduces privacy and legal risk in the aggregation of data Internet privacy, particularly in the EU. Another limitation of tokenization technologies is measuring the level of security for a given solution through independent validation. With the lack of standards, the latter is critical to establish the strength of tokenization offered when tokens are used for regulatory compliance. The PCI Council recommends independent vetting and validation of any claims of security and compliance: "Merchants considering the use of tokenization should perform a thorough evaluation and risk analysis to identify and document the unique characteristics of their particular implementation, including all interactions with payment card data and the particular tokenization systems and processes" The method of generating tokens may also have limitations from a security perspective. 
With concerns about security and attacks to random number generators, which are a common choice for the generation of tokens and token mapping tables, scrutiny must be applied to ensure proven and validated methods are used versus arbitrary design. Random-number generators have limitations in terms of speed, entropy, seeding and bias, and security properties must be carefully analysed and measured to avoid predictability and compromise. With tokenization's increasing adoption, new tokenization technology approaches have emerged to remove such operational risks and complexities and to enable increased scale suited to emerging big data use cases and high performance transaction processing, especially in financial services and banking. In addition to conventional tokenization methods, Protegrity provides additional security through its so-called "obfuscation layer." This creates a barrier that prevents not only regular users from accessing information they wouldn't see but also privileged users who has access, such as database administrators. Stateless tokenization allows live data elements to be mapped to surrogate values randomly, without relying on a database, while maintaining the isolation properties of tokenization. November 2014, American Express released its token service which meets the EMV tokenization standard. Other notable examples of Tokenization-based payment systems, according to the EMVCo standard, include Google Wallet, Apple Pay, Samsung Pay, Microsoft Wallet, Fitbit Pay and Garmin Pay. Visa uses tokenization techniques to provide a secure online and mobile shopping. Using blockchain, as opposed to relying on trusted third parties, it is possible to run highly accessible, tamper-resistant databases for transactions. With help of blockchain, tokenization is the process of converting the value of a tangible or intangible asset into a token that can be exchanged on the network. This enables the tokenization of conventional financial assets, for instance, by transforming rights into a digital token backed by the asset itself using blockchain technology. Besides that, tokenization enables the simple and efficient compartmentalization and management of data across multiple users. Individual tokens created through tokenization can be used to split ownership and partially resell an asset. Consequently, only entities with the appropriate token can access the data. Numerous blockchain companies support asset tokenization. In 2019, eToro acquired Firmo and renamed as eToroX. Through its Token Management Suite, which is backed by USD-pegged stablecoins, eToroX enables asset tokenization. The tokenization of equity is facilitated by STOKR, a platform that links investors with small and medium-sized businesses. Tokens issued through the STOKR platform are legally recognized as transferable securities under European Union capital market regulations. Breakers enable tokenization of intellectual property, allowing content creators to issue their own digital tokens. Tokens can be distributed to a variety of project participants. Without intermediaries or governing body, content creators can integrate reward-sharing features into the token. Application to alternative payment systems Building an alternate payments system requires a number of entities working together in order to deliver near field-communication (NFC) or other technology based payment services to the end users. 
One of the issues is the interoperability between the players and to resolve this issue the role of trusted service manager (TSM) is proposed to establish a technical link between mobile network operators (MNO) and providers of services, so that these entities can work together. Tokenization can play a role in mediating such services. Tokenization as a security strategy lies in the ability to replace a real card number with a surrogate (target removal) and the subsequent limitations placed on the surrogate card number (risk reduction). If the surrogate value can be used in an unlimited fashion or even in a broadly applicable manner, the token value gains as much value as the real credit card number. In these cases, the token may be secured by a second dynamic token that is unique for each transaction and also associated to a specific payment card. Example of dynamic, transaction-specific tokens include cryptograms used in the EMV specification. Application to PCI DSS standards The Payment Card Industry Data Security Standard, an industry-wide set of guidelines that must be met by any organization that stores, processes, or transmits cardholder data, mandates that credit card data must be protected when stored. Tokenization, as applied to payment card data, is often implemented to meet this mandate, replacing credit card and ACH numbers in some systems with a random value or string of characters. Tokens can be formatted in a variety of ways. Some token service providers or tokenization products generate the surrogate values in such a way as to match the format of the original sensitive data. In the case of payment card data, a token might be the same length as a Primary Account Number (bank card number) and contain elements of the original data such as the last four digits of the card number. When a payment card authorization request is made to verify the legitimacy of a transaction, a token might be returned to the merchant instead of the card number, along with the authorization code for the transaction. The token is stored in the receiving system while the actual cardholder data is mapped to the token in a secure tokenization system. Storage of tokens and payment card data must comply with current PCI standards, including the use of strong cryptography. Standards (ANSI, the PCI Council, Visa, and EMV) Tokenization is currently in standards definition in ANSI X9 as X9.119 Part 2. X9 is responsible for the industry standards for financial cryptography and data protection including payment card PIN management, credit and debit card encryption and related technologies and processes. The PCI Council has also stated support for tokenization in reducing risk in data breaches, when combined with other technologies such as Point-to-Point Encryption (P2PE) and assessments of compliance to PCI DSS guidelines. Visa Inc. released Visa Tokenization Best Practices for tokenization uses in credit and debit card handling applications and services. In March 2014, EMVCo LLC released its first payment tokenization specification for EMV. PCI DSS is the most frequently utilized standard for Tokenization systems used by payment industry players. Risk reduction Tokenization can render it more difficult for attackers to gain access to sensitive data outside of the tokenization system or service. Implementation of tokenization may simplify the requirements of the PCI DSS, as systems that no longer store or process sensitive data may have a reduction of applicable controls required by the PCI DSS guidelines. 
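As an illustration of the format-preserving tokens described earlier in this article (a surrogate the same length as the Primary Account Number, retaining the last four digits), the following hypothetical Python helper generates such a value. The layout chosen here, random digits followed by the original last four, is an assumption for illustration, not a prescribed PCI format:

```python
import secrets

def format_preserving_token(pan: str) -> str:
    """Return a surrogate of the same length as the PAN, keeping the last four
    digits visible, as some token service providers choose to do.
    Real implementations also ensure the surrogate cannot collide with a real
    PAN (e.g. reserved BIN ranges or a forced Luhn failure); this sketch does not."""
    if not pan.isdigit() or len(pan) < 8:
        raise ValueError("expected a numeric PAN-like string")
    random_part = "".join(secrets.choice("0123456789") for _ in range(len(pan) - 4))
    return random_part + pan[-4:]

print(format_preserving_token("4111111111111111"))  # e.g. '735028194605' + '1111'
```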
As a security best practice, independent assessment and validation of any technologies used for data protection, including tokenization, must be in place to establish the security and strength of the method and implementation before any claims of privacy compliance, regulatory compliance, and data security can be made. This validation is particularly important in tokenization, as the tokens are shared externally in general use and thus exposed in high risk, low trust environments. The infeasibility of reversing a token or set of tokens to a live sensitive data must be established using industry accepted measurements and proofs by appropriate experts independent of the service or solution provider. Restrictions on token use Not all organizational data can be tokenized, and needs to be examined and filtered. When databases are utilized on a large scale, they expand exponentially, causing the search process to take longer, restricting system performance, and increasing backup processes. A database that links sensitive information to tokens is called a vault. With the addition of new data, the vault's maintenance workload increases significantly. For ensuring database consistency, token databases need to be continuously synchronized. Apart from that, secure communication channels must be built between sensitive data and the vault so that data is not compromised on the way to or from storage. See also Adaptive Redaction PAN truncation Format preserving encryption References External links Cloud vs Payment - Cloud vs Payment - Introduction to tokenization via cloud payments. Cryptography
Tokenization (data security)
[ "Mathematics", "Engineering" ]
4,192
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
61,476
https://en.wikipedia.org/wiki/Radius%20of%20convergence
In mathematics, the radius of convergence of a power series is the radius of the largest disk at the center of the series in which the series converges. It is either a non-negative real number or . When it is positive, the power series converges absolutely and uniformly on compact sets inside the open disk of radius equal to the radius of convergence, and it is the Taylor series of the analytic function to which it converges. In case of multiple singularities of a function (singularities are those values of the argument for which the function is not defined), the radius of convergence is the shortest or minimum of all the respective distances (which are all non-negative numbers) calculated from the center of the disk of convergence to the respective singularities of the function. Definition For a power series f defined as: where a is a complex constant, the center of the disk of convergence, cn is the n-th complex coefficient, and z is a complex variable. The radius of convergence r is a nonnegative real number or such that the series converges if and diverges if Some may prefer an alternative definition, as existence is obvious: On the boundary, that is, where |z − a| = r, the behavior of the power series may be complicated, and the series may converge for some values of z and diverge for others. The radius of convergence is infinite if the series converges for all complex numbers z. Finding the radius of convergence Two cases arise: The first case is theoretical: when you know all the coefficients then you take certain limits and find the precise radius of convergence. The second case is practical: when you construct a power series solution of a difficult problem you typically will only know a finite number of terms in a power series, anywhere from a couple of terms to a hundred terms. In this second case, extrapolating a plot estimates the radius of convergence. Theoretical radius The radius of convergence can be found by applying the root test to the terms of the series. The root test uses the number "lim sup" denotes the limit superior. The root test states that the series converges if C < 1 and diverges if C > 1. It follows that the power series converges if the distance from z to the center a is less than and diverges if the distance exceeds that number; this statement is the Cauchy–Hadamard theorem. Note that r = 1/0 is interpreted as an infinite radius, meaning that f is an entire function. The limit involved in the ratio test is usually easier to compute, and when that limit exists, it shows that the radius of convergence is finite. This is shown as follows. The ratio test says the series converges if That is equivalent to Practical estimation of radius in the case of real coefficients Usually, in scientific applications, only a finite number of coefficients are known. Typically, as increases, these coefficients settle into a regular behavior determined by the nearest radius-limiting singularity. In this case, two main techniques have been developed, based on the fact that the coefficients of a Taylor series are roughly exponential with ratio where r is the radius of convergence. The basic case is when the coefficients ultimately share a common sign or alternate in sign. As pointed out earlier in the article, in many cases the limit exists, and in this case . Negative means the convergence-limiting singularity is on the negative axis. Estimate this limit, by plotting the versus , and graphically extrapolate to (effectively ) via a linear fit. 
The intercept with estimates the reciprocal of the radius of convergence, . This plot is called a Domb–Sykes plot. The more complicated case is when the signs of the coefficients have a more complex pattern. Mercer and Roberts proposed the following procedure. Define the associated sequence Plot the finitely many known versus , and graphically extrapolate to via a linear fit. The intercept with estimates the reciprocal of the radius of convergence, . This procedure also estimates two other characteristics of the convergence limiting singularity. Suppose the nearest singularity is of degree and has angle to the real axis. Then the slope of the linear fit given above is . Further, plot versus , then a linear fit extrapolated to has intercept at . Radius of convergence in complex analysis A power series with a positive radius of convergence can be made into a holomorphic function by taking its argument to be a complex variable. The radius of convergence can be characterized by the following theorem: The radius of convergence of a power series f centered on a point a is equal to the distance from a to the nearest point where f cannot be defined in a way that makes it holomorphic. The set of all points whose distance to a is strictly less than the radius of convergence is called the disk of convergence. The nearest point means the nearest point in the complex plane, not necessarily on the real line, even if the center and all coefficients are real. For example, the function has no singularities on the real line, since has no real roots. Its Taylor series about 0 is given by The root test shows that its radius of convergence is 1. In accordance with this, the function f(z) has singularities at ±i, which are at a distance 1 from 0. For a proof of this theorem, see analyticity of holomorphic functions. A simple example The arctangent function of trigonometry can be expanded in a power series: It is easy to apply the root test in this case to find that the radius of convergence is 1. A more complicated example Consider this power series: where the rational numbers Bn are the Bernoulli numbers. It may be cumbersome to try to apply the ratio test to find the radius of convergence of this series. But the theorem of complex analysis stated above quickly solves the problem. At z = 0, there is in effect no singularity since the singularity is removable. The only non-removable singularities are therefore located at the other points where the denominator is zero. We solve by recalling that if and then and then take x and y to be real. Since y is real, the absolute value of is necessarily 1. Therefore, the absolute value of e can be 1 only if e is 1; since x is real, that happens only if x = 0. Therefore z is purely imaginary and . Since y is real, that happens only if cos(y) = 1 and sin(y) = 0, so that y is an integer multiple of 2. Consequently the singular points of this function occur at z = a nonzero integer multiple of 2i. The singularities nearest 0, which is the center of the power series expansion, are at ±2i. The distance from the center to either of those points is 2, so the radius of convergence is 2. Convergence on the boundary If the power series is expanded around the point a and the radius of convergence is , then the set of all points such that is a circle called the boundary of the disk of convergence. A power series may diverge at every point on the boundary, or diverge on some points and converge at other points, or converge at all the points on the boundary. 
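The Domb–Sykes extrapolation described above is straightforward to carry out numerically. The sketch below (using numpy, an assumption about available tooling rather than anything prescribed by the text) applies it to the Taylor coefficients of (1 + x)^(1/2), whose nearest singularity is the branch point at x = −1, so the true radius of convergence is 1:

```python
import numpy as np

# Taylor coefficients of (1 + x)**0.5 about 0: c_0 = 1, c_n = c_{n-1} * (1.5 - n) / n
N = 40
c = [1.0]
for n in range(1, N):
    c.append(c[-1] * (1.5 - n) / n)

# Domb-Sykes plot: ratios c_n / c_{n-1} against 1/n, extrapolated linearly to 1/n -> 0
ns = np.arange(5, N)                      # skip the first few, pre-asymptotic terms
ratios = np.array([c[n] / c[n - 1] for n in ns])
slope, intercept = np.polyfit(1.0 / ns, ratios, 1)

radius = 1.0 / abs(intercept)
print(f"intercept ~ {intercept:.4f} -> estimated radius ~ {radius:.4f}")
# The intercept comes out close to -1: its magnitude is 1/r, and its negative sign
# signals a convergence-limiting singularity on the negative real axis, as noted above.
```

As stated earlier, the slope of the fitted line carries further information about the nature of the limiting singularity; here only the radius estimate is extracted.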
Furthermore, even if the series converges everywhere on the boundary (even uniformly), it does not necessarily converge absolutely. Example 1: The power series for the function , expanded around , which is simply has radius of convergence 1 and diverges at every point on the boundary. Example 2: The power series for , expanded around , which is has radius of convergence 1, and diverges for but converges for all other points on the boundary. The function of Example 1 is the derivative of . Example 3: The power series has radius of convergence 1 and converges everywhere on the boundary absolutely. If is the function represented by this series on the unit disk, then the derivative of h(z) is equal to g(z)/z with g of Example 2. It turns out that is the dilogarithm function. Example 4: The power series has radius of convergence 1 and converges uniformly on the entire boundary , but does not converge absolutely on the boundary. Rate of convergence If we expand the function around the point x = 0, we find out that the radius of convergence of this series is meaning that this series converges for all complex numbers. However, in applications, one is often interested in the precision of a numerical answer. Both the number of terms and the value at which the series is to be evaluated affect the accuracy of the answer. For example, if we want to calculate accurate up to five decimal places, we only need the first two terms of the series. However, if we want the same precision for we must evaluate and sum the first five terms of the series. For , one requires the first 18 terms of the series, and for we need to evaluate the first 141 terms. So for these particular values the fastest convergence of a power series expansion is at the center, and as one moves away from the center of convergence, the rate of convergence slows down until you reach the boundary (if it exists) and cross over, in which case the series will diverge. Abscissa of convergence of a Dirichlet series An analogous concept is the abscissa of convergence of a Dirichlet series Such a series converges if the real part of s is greater than a particular number depending on the coefficients an: the abscissa of convergence. Notes References See also Abel's theorem Convergence tests Root test External links What is radius of convergence? Analytic functions Convergence (mathematics) Mathematical physics Radii
Radius of convergence
[ "Physics", "Mathematics" ]
1,958
[ "Sequences and series", "Functions and mappings", "Convergence (mathematics)", "Mathematical structures", "Applied mathematics", "Theoretical physics", "Mathematical objects", "Mathematical relations", "Mathematical physics" ]
61,532
https://en.wikipedia.org/wiki/Absolute%20convergence
In mathematics, an infinite series of numbers is said to converge absolutely (or to be absolutely convergent) if the sum of the absolute values of the summands is finite. More precisely, a real or complex series is said to converge absolutely if for some real number Similarly, an improper integral of a function, is said to converge absolutely if the integral of the absolute value of the integrand is finite—that is, if A convergent series that is not absolutely convergent is called conditionally convergent. Absolute convergence is important for the study of infinite series, because its definition guarantees that a series will have some "nice" behaviors of finite sums that not all convergent series possess. For instance, rearrangements do not change the value of the sum, which is not necessarily true for conditionally convergent series. Background When adding a finite number of terms, addition is both associative and commutative, meaning that grouping and rearrangment do not alter the final sum. For instance, is equal to both and . However, associativity and commutativity do not necessarily hold for infinite sums. One example is the alternating harmonic series whose terms are fractions that alternate in sign. This series is convergent and can be evaluated using the Maclaurin series for the function , which converges for all satisfying : Substituting reveals that the original sum is equal to . The sum can also be rearranged as follows: In this rearrangement, the reciprocal of each odd number is grouped with the reciprocal of twice its value, while the reciprocals of every multiple of 4 are evaluated separately. However, evaluating the terms inside the parentheses yields or half the original series. The violation of the associativity and commutativity of addition reveals that the alternating harmonic series is conditionally convergent. Indeed, the sum of the absolute values of each term is , or the divergent harmonic series. According to the Riemann series theorem, any conditionally convergent series can be permuted so that its sum is any finite real number or so that it diverges. When an absolutely convergent series is rearranged, its sum is always preserved. Definition for real and complex numbers A sum of real numbers or complex numbers is absolutely convergent if the sum of the absolute values of the terms converges. Sums of more general elements The same definition can be used for series whose terms are not numbers but rather elements of an arbitrary abelian topological group. In that case, instead of using the absolute value, the definition requires the group to have a norm, which is a positive real-valued function on an abelian group (written additively, with identity element 0) such that: The norm of the identity element of is zero: For every implies For every For every In this case, the function induces the structure of a metric space (a type of topology) on Then, a -valued series is absolutely convergent if In particular, these statements apply using the norm (absolute value) in the space of real numbers or complex numbers. 
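A quick numerical illustration of the rearrangement phenomenon described in the Background section: both partial sums in the Python sketch below use exactly the same terms of the alternating harmonic series, yet they approach ln 2 and ln(2)/2 respectively, which is only possible because the series is conditionally, not absolutely, convergent.

```python
from math import log

def alt_harmonic(n_terms):
    """Partial sum of the alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ..."""
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

def rearranged(n_blocks):
    """Partial sum of the rearrangement (1 - 1/2) - 1/4 + (1/3 - 1/6) - 1/8 + ...:
    each odd reciprocal is paired with the reciprocal of twice its value, and the
    reciprocals of multiples of 4 are taken separately."""
    s = 0.0
    for k in range(1, n_blocks + 1):
        odd = 2 * k - 1
        s += 1.0 / odd - 1.0 / (2 * odd) - 1.0 / (4 * k)
    return s

print(alt_harmonic(200_000))   # -> about 0.69314, i.e. ln 2
print(rearranged(200_000))     # -> about 0.34657, i.e. ln(2) / 2
print(log(2), log(2) / 2)      # reference values
```

By contrast, running the same experiment on an absolutely convergent series such as the sum of (−1)^(k+1)/k² gives the same limit for every rearrangement, in line with the statement above that rearranging an absolutely convergent series preserves its sum.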
In topological vector spaces If is a topological vector space (TVS) and is a (possibly uncountable) family in then this family is absolutely summable if is summable in (that is, if the limit of the net converges in where is the directed set of all finite subsets of directed by inclusion and ), and for every continuous seminorm on the family is summable in If is a normable space and if is an absolutely summable family in then necessarily all but a countable collection of 's are 0. Absolutely summable families play an important role in the theory of nuclear spaces. Relation to convergence If is complete with respect to the metric then every absolutely convergent series is convergent. The proof is the same as for complex-valued series: use the completeness to derive the Cauchy criterion for convergence—a series is convergent if and only if its tails can be made arbitrarily small in norm—and apply the triangle inequality. In particular, for series with values in any Banach space, absolute convergence implies convergence. The converse is also true: if absolute convergence implies convergence in a normed space, then the space is a Banach space. If a series is convergent but not absolutely convergent, it is called conditionally convergent. An example of a conditionally convergent series is the alternating harmonic series. Many standard tests for divergence and convergence, most notably including the ratio test and the root test, demonstrate absolute convergence. This is because a power series is absolutely convergent on the interior of its disk of convergence. Proof that any absolutely convergent series of complex numbers is convergent Suppose that is convergent. Then equivalently, is convergent, which implies that and converge by termwise comparison of non-negative terms. It suffices to show that the convergence of these series implies the convergence of and for then, the convergence of would follow, by the definition of the convergence of complex-valued series. The preceding discussion shows that we need only prove that convergence of implies the convergence of Let be convergent. Since we have Since is convergent, is a bounded monotonic sequence of partial sums, and must also converge. Noting that is the difference of convergent series, we conclude that it too is a convergent series, as desired. Alternative proof using the Cauchy criterion and triangle inequality By applying the Cauchy criterion for the convergence of a complex series, we can also prove this fact as a simple implication of the triangle inequality. By the Cauchy criterion, converges if and only if for any there exists such that for any But the triangle inequality implies that so that for any which is exactly the Cauchy criterion for Proof that any absolutely convergent series in a Banach space is convergent The above result can be easily generalized to every Banach space Let be an absolutely convergent series in As is a Cauchy sequence of real numbers, for any and large enough natural numbers it holds: By the triangle inequality for the norm , one immediately gets: which means that is a Cauchy sequence in hence the series is convergent in Rearrangements and unconditional convergence Real and complex numbers When a series of real or complex numbers is absolutely convergent, any rearrangement or reordering of that series' terms will still converge to the same value. 
This fact is one reason absolutely convergent series are useful: showing a series is absolutely convergent allows terms to be paired or rearranged in convenient ways without changing the sum's value. The Riemann rearrangement theorem shows that the converse is also true: every real or complex-valued series whose terms cannot be reordered to give a different value is absolutely convergent. Series with coefficients in more general space The term unconditional convergence is used to refer to a series where any rearrangement of its terms still converges to the same value. For any series with values in a normed abelian group , as long as is complete, every series which converges absolutely also converges unconditionally. Stated more formally: For series with more general coefficients, the converse is more complicated. As stated in the previous section, for real-valued and complex-valued series, unconditional convergence always implies absolute convergence. However, in the more general case of a series with values in any normed abelian group , the converse does not always hold: there can exist series which are not absolutely convergent, yet unconditionally convergent. For example, in the Banach space ℓ∞, one series which is unconditionally convergent but not absolutely convergent is: where is an orthonormal basis. A theorem of A. Dvoretzky and C. A. Rogers asserts that every infinite-dimensional Banach space has an unconditionally convergent series that is not absolutely convergent. Proof of the theorem For any we can choose some such that: Let where so that is the smallest natural number such that the list includes all of the terms (and possibly others). Finally for any integer let so that and thus This shows that that is: Q.E.D. Products of series The Cauchy product of two series converges to the product of the sums if at least one of the series converges absolutely. That is, suppose that The Cauchy product is defined as the sum of terms where: If the or sum converges absolutely then Absolute convergence over sets A generalization of the absolute convergence of a series, is the absolute convergence of a sum of a function over a set. We can first consider a countable set and a function We will give a definition below of the sum of over written as First note that because no particular enumeration (or "indexing") of has yet been specified, the series cannot be understood by the more basic definition of a series. In fact, for certain examples of and the sum of over may not be defined at all, since some indexing may produce a conditionally convergent series. Therefore we define only in the case where there exists some bijection such that is absolutely convergent. Note that here, "absolutely convergent" uses the more basic definition, applied to an indexed series. In this case, the value of the sum of over is defined by Note that because the series is absolutely convergent, then every rearrangement is identical to a different choice of bijection Since all of these sums have the same value, then the sum of over is well-defined. Even more generally we may define the sum of over when is uncountable. But first we define what it means for the sum to be convergent. Let be any set, countable or uncountable, and a function. We say that the sum of over converges absolutely if There is a theorem which states that, if the sum of over is absolutely convergent, then takes non-zero values on a set that is at most countable. 
Therefore, the following is a consistent definition of the sum of over when the sum is absolutely convergent. Note that the final series uses the definition of a series over a countable set. Some authors define an iterated sum to be absolutely convergent if the iterated series This is in fact equivalent to the absolute convergence of That is to say, if the sum of over converges absolutely, as defined above, then the iterated sum converges absolutely, and vice versa. Absolute convergence of integrals The integral of a real or complex-valued function is said to converge absolutely if One also says that is absolutely integrable. The issue of absolute integrability is intricate and depends on whether the Riemann, Lebesgue, or Kurzweil-Henstock (gauge) integral is considered; for the Riemann integral, it also depends on whether we only consider integrability in its proper sense ( and both bounded), or permit the more general case of improper integrals. As a standard property of the Riemann integral, when is a bounded interval, every continuous function is bounded and (Riemann) integrable, and since continuous implies continuous, every continuous function is absolutely integrable. In fact, since is Riemann integrable on if is (properly) integrable and is continuous, it follows that is properly Riemann integrable if is. However, this implication does not hold in the case of improper integrals. For instance, the function is improperly Riemann integrable on its unbounded domain, but it is not absolutely integrable: Indeed, more generally, given any series one can consider the associated step function defined by Then converges absolutely, converges conditionally or diverges according to the corresponding behavior of The situation is different for the Lebesgue integral, which does not handle bounded and unbounded domains of integration separately (see below). The fact that the integral of is unbounded in the examples above implies that is also not integrable in the Lebesgue sense. In fact, in the Lebesgue theory of integration, given that is measurable, is (Lebesgue) integrable if and only if is (Lebesgue) integrable. However, the hypothesis that is measurable is crucial; it is not generally true that absolutely integrable functions on are integrable (simply because they may fail to be measurable): let be a nonmeasurable subset and consider where is the characteristic function of Then is not Lebesgue measurable and thus not integrable, but is a constant function and clearly integrable. On the other hand, a function may be Kurzweil-Henstock integrable (gauge integrable) while is not. This includes the case of improperly Riemann integrable functions. In a general sense, on any measure space the Lebesgue integral of a real-valued function is defined in terms of its positive and negative parts, so the facts: integrable implies integrable measurable, integrable implies integrable are essentially built into the definition of the Lebesgue integral. In particular, applying the theory to the counting measure on a set one recovers the notion of unordered summation of series developed by Moore–Smith using (what are now called) nets. When is the set of natural numbers, Lebesgue integrability, unordered summability and absolute convergence all coincide. Finally, all of the above holds for integrals with values in a Banach space. The definition of a Banach-valued Riemann integral is an evident modification of the usual one. 
For the Lebesgue integral one needs to circumvent the decomposition into positive and negative parts with Daniell's more functional analytic approach, obtaining the Bochner integral. See also Notes References Works cited General references Walter Rudin, Principles of Mathematical Analysis (McGraw-Hill: New York, 1964). Mathematical series Integral calculus Summability theory Convergence (mathematics)
Absolute convergence
[ "Mathematics" ]
2,888
[ "Sequences and series", "Functions and mappings", "Convergence (mathematics)", "Mathematical structures", "Series (mathematics)", "Calculus", "Mathematical objects", "Mathematical relations", "Integral calculus" ]
61,553
https://en.wikipedia.org/wiki/Tree%20fern
Tree ferns are arborescent (tree-like) ferns that grow with a trunk elevating the fronds above ground level, making them trees. Many extant tree ferns are members of the order Cyatheales, to which belong the families Cyatheaceae (scaly tree ferns), Dicksoniaceae, Metaxyaceae, and Cibotiaceae. It is estimated that Cyatheales originated in the early Jurassic, and is the third group of ferns known to have given rise to tree-like forms. The others are the extinct Tempskya of uncertain position, and Osmundales where the extinct Guaireaceae and some members of Osmundaceae also grew into trees. In addition there were the Psaroniaceae including Tietea in the Marattiales, which is the sister group to most living ferns including Cyatheales. Other ferns which are also tree ferns, are Leptopteris and Todea in the family Osmundaceae, which can achieve short trunks under a metre tall. Fern species with short trunks in the genera Blechnum, Cystodium and Sadleria from the order Polypodiales, and smaller members of Cyatheales like Calochlaena, Cnemedaria, Culcita (mountains only tree fern), Lophosoria and Thyrsopteris are also considered tree ferns. Range Tree ferns are found growing in tropical and subtropical areas worldwide, as well as cool to temperate rainforests in Australia, New Zealand and neighbouring regions (e.g. Lord Howe Island, etc.). Like all ferns, tree ferns reproduce by means of spores formed on the undersides of the fronds. Description The fronds of tree ferns are usually very large and multiple-pinnate. Their trunk is actually a vertical and modified rhizome, and woody tissue is absent. To add strength, there are deposits of lignin in the cell walls and the lower part of the stem is reinforced with thick, interlocking mats of tiny roots. If the crown of Dicksonia antarctica (the most common species in gardens) is damaged, it will inevitably die because that is where all the new growth occurs. But other clump-forming tree fern species, such as D. squarrosa and D. youngiae, can regenerate from basal offsets or from "pups" emerging along the surviving trunk length. Tree ferns often fall over in the wild, yet manage to re-root from this new prostrate position and begin new vertical growth. Uses Tree-ferns have been cultivated for their beauty alone; a few, however, were of some economic application, chiefly as sources of starch. These include the Sphaeropteris excelsa of Norfolk Island that was threatened with extinction for the sake of its sago-like pith, which was eaten by pigs. It is now widely cultivated as an ornamental tree, although there is only one small wild population on Norfolk Island. Sphaeropteris medullaris (mamaku, black tree fern) also furnished a kind of sago to people living in New Zealand, Queensland and the Pacific islands. A Javanese species of Dicksonia (D. chrysotricha) furnishes silky hairs, which were once imported as a styptic, and the long silky or wooly hairs, abundant on the stem and frond-leaves in the various species of Cibotium have not only been put to a similar use, but in the Hawaiian Islands furnished wool for stuffing mattresses and cushions, which was formerly an article of export. Species It is not certain the exact number of species of tree ferns there are, but it may be close to 600–700 species. Many species have become extinct in the last century as forest habitats have come under pressure from human intervention. 
Lophosoria (tropical America, 1 species) Metaxya (tropical America, 1 species) Sphaeropteris (tropical America, India, Southeast Asia to New Zealand, the Marquesas, and Pitcairn Island, about 120 species) Alsophila (pantropic area, about 230 species) Nephelea (tropical America, about 30 species) Trichipteris (tropical America, about 90 species) Cyathea (tropical America, Australasia, about 110 species) Cnemidaria (tropical America, about 40 species) Dicksonia (tropics and southern subtropics in Island Southeast Asia, Australasia, America, Hawaii, St. Helena, about 25 species) Cystodium (Island Southeast Asia, 1 species) Thyrsopteris (Juan Fernández Islands, 1 species) Culcita (tropical America, Macaronesia, Iberian Peninsula, 2 species) Cibotium (Southeast Asia, Hawaii, Central America, about 12 species) References External links Flora Technical Note No. 5: Identification and management of tree ferns from Tasmania Forest Practices Authority Tree Fern from the San Diego Zoo website Ferns Plant common names Plants by habit
Tree fern
[ "Biology" ]
1,026
[ "Ferns", "Plant common names", "Common names of organisms", "Plants" ]
61,559
https://en.wikipedia.org/wiki/Archimedean%20spiral
The Archimedean spiral (also known as Archimedes' spiral, the arithmetic spiral) is a spiral named after the 3rd-century BC Greek mathematician Archimedes. The term Archimedean spiral is sometimes used to refer to the more general class of spirals of this type (see below), in contrast to Archimedes' spiral (the specific arithmetic spiral of Archimedes). It is the locus corresponding to the locations over time of a point moving away from a fixed point with a constant speed along a line that rotates with constant angular velocity. Equivalently, in polar coordinates it can be described by the equation with real number . Changing the parameter controls the distance between loops. From the above equation, it can thus be stated: position of the particle from point of start is proportional to angle as time elapses. Archimedes described such a spiral in his book On Spirals. Conon of Samos was a friend of his and Pappus states that this spiral was discovered by Conon. Derivation of general equation of spiral A physical approach is used below to understand the notion of Archimedean spirals. Suppose a point object moves in the Cartesian system with a constant velocity directed parallel to the -axis, with respect to the -plane. Let at time , the object was at an arbitrary point . If the plane rotates with a constant angular velocity about the -axis, then the velocity of the point with respect to -axis may be written as: As shown in the figure alongside, we have representing the modulus of the position vector of the particle at any time , with and as the velocity components along the x and y axes, respectively. The above equations can be integrated by applying integration by parts, leading to the following parametric equations: Squaring the two equations and then adding (and some small alterations) results in the Cartesian equation (using the fact that and ) or Its polar form is Arc length and curvature Given the parametrization in cartesian coordinates the arc length from to is or, equivalently: The total length from to is therefore The curvature is given by Characteristics The Archimedean spiral has the property that any ray from the origin intersects successive turnings of the spiral in points with a constant separation distance (equal to if is measured in radians), hence the name "arithmetic spiral". In contrast to this, in a logarithmic spiral these distances, as well as the distances of the intersection points measured from the origin, form a geometric progression. The Archimedean spiral has two arms, one for and one for . The two arms are smoothly connected at the origin. Only one arm is shown on the accompanying graph. Taking the mirror image of this arm across the -axis will yield the other arm. For large a point moves with well-approximated uniform acceleration along the Archimedean spiral while the spiral corresponds to the locations over time of a point moving away from a fixed point with a constant speed along a line which rotates with constant angular velocity (see contribution from Mikhail Gaichenkov). As the Archimedean spiral grows, its evolute asymptotically approaches a circle with radius . General Archimedean spiral Sometimes the term Archimedean spiral is used for the more general group of spirals The normal Archimedean spiral occurs when . Other spirals falling into this group include the hyperbolic spiral (), Fermat's spiral (), and the lituus (). Applications One method of squaring the circle, due to Archimedes, makes use of an Archimedean spiral. 
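As a numerical aside to the properties above, the short Python sketch below samples the spiral r = a + bθ, prints the constant separation 2πb between successive turnings along a fixed ray, and estimates the arc length from the integrand √(r² + b²) dθ. The parameter values and the number of turns are arbitrary choices made for this illustration, not values taken from the text.

```python
import numpy as np

a, b = 0.0, 0.5                                 # r = a + b*theta; illustrative values
theta = np.linspace(0.0, 6 * np.pi, 100_000)    # three full turns
r = a + b * theta

# Successive turnings along any fixed ray are separated by a constant 2*pi*b.
print("separation between successive turnings:", 2 * np.pi * b)

# Numerical arc length, using ds = sqrt(r**2 + (dr/dtheta)**2) dtheta with dr/dtheta = b.
ds = np.sqrt(r**2 + b**2)
arc_length = np.sum(0.5 * (ds[1:] + ds[:-1]) * np.diff(theta))
print("arc length over three turns:", arc_length)
```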
Archimedes also showed how the spiral can be used to trisect an angle. Both approaches relax the traditional limitations on the use of straightedge and compass in ancient Greek geometric proofs. The Archimedean spiral has a variety of real-world applications. Scroll compressors, used for compressing gases, have rotors that can be made from two interleaved Archimedean spirals, involutes of a circle of the same size that almost resemble Archimedean spirals, or hybrid curves. Archimedean spirals can be found in spiral antenna, which can be operated over a wide range of frequencies. The coils of watch balance springs and the grooves of very early gramophone records form Archimedean spirals, making the grooves evenly spaced (although variable track spacing was later introduced to maximize the amount of music that could be cut onto a record). Asking for a patient to draw an Archimedean spiral is a way of quantifying human tremor; this information helps in diagnosing neurological diseases. Archimedean spirals are also used in digital light processing (DLP) projection systems to minimize the "rainbow effect", making it look as if multiple colors are displayed at the same time, when in reality red, green, and blue are being cycled extremely quickly. Additionally, Archimedean spirals are used in food microbiology to quantify bacterial concentration through a spiral platter. They are also used to model the pattern that occurs in a roll of paper or tape of constant thickness wrapped around a cylinder. Many dynamic spirals (such as the Parker spiral of the solar wind, or the pattern made by a Catherine's wheel) are Archimedean. For instance, the star LL Pegasi shows an approximate Archimedean spiral in the dust clouds surrounding it, thought to be ejected matter from the star that has been shepherded into a spiral by another companion star as part of a double star system. Construction methods The Archimedean Spiral cannot be constructed precisely by traditional compass and straightedge methods, since the arithmetic spiral requires the radius of the curve to be incremented constantly as the angle at the origin is incremented. But an arithmetic spiral can be constructed approximately, to varying degrees of precision, by various manual drawing methods. One such method uses compass and straightedge; another method uses a modified string compass. The common traditional construction uses compass and straightedge to approximate the arithmetic spiral. First, a large circle is constructed and its circumference is subdivided by 12 diameters into 12 arcs (of 30 degrees each; see regular dodecagon). Next, the radius of this circle is itself subdivided into 12 unit segments (radial units), and a series of concentric circles is constructed, each with radius incremented by one radial unit. Starting with the horizontal diameter and the innermost concentric circle, the point is marked where its radius intersects its circumference; one then moves to the next concentric circle and to the next diameter (moving up to construct a counterclockwise spiral, or down for clockwise) to mark the next point. After all points have been marked, successive points are connected by a line approximating the arithmetic spiral (or by a smooth curve of some sort; see French Curve). Depending on the desired degree of precision, this method can be improved by increasing the size of the large outer circle, making more subdivisions of both its circumference and radius, increasing the number of concentric circles (see Polygonal Spiral). 
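The marked points produced by the construction just described can also be generated programmatically. The minimal sketch below takes twelve 30-degree angular steps and twelve radial units per turn (values chosen to mirror the description above; the outer radius is an arbitrary assumption), and checks that every marked point lies exactly on an arithmetic spiral r = bθ, so the only approximation error comes from joining the points with straight segments or a drawn curve.

```python
import math

R = 12.0          # radius of the outer circle, arbitrary units (assumption)
steps = 12        # twelve 30-degree sectors and twelve radial units per turn

# The k-th marked point sits on the k-th concentric circle (radius k*R/steps)
# in the k-th angular direction (angle k*30 degrees).
points = []
for k in range(steps + 1):
    r = k * R / steps
    angle = math.radians(k * 360 / steps)
    points.append((r * math.cos(angle), r * math.sin(angle)))

# Every marked point lies exactly on the arithmetic spiral r = b*theta with b = R/(2*pi).
b = R / (2 * math.pi)
for k, (x, y) in enumerate(points):
    theta = math.radians(k * 360 / steps)
    print(f"k={k:2d}  marked r={math.hypot(x, y):6.3f}  spiral r={b * theta:6.3f}")
```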
Approximating the Archimedean Spiral by this method is of course reminiscent of Archimedes’ famous method of approximating π by doubling the sides of successive polygons (see Polygon approximation of π). Compass and straightedge construction of the Spiral of Theodorus is another simple method to approximate the Archimedean Spiral. A mechanical method for constructing the arithmetic spiral uses a modified string compass, where the string wraps and winds (or unwraps/unwinds) about a fixed central pin (that does not pivot), thereby incrementing (or decrementing) the length of the radius (string) as the angle changes (the string winds around the fixed pin which does not pivot). Such a method is a simple way to create an arithmetic spiral, arising naturally from use of a string compass with winding pin (not the loose pivot of a common string compass). The string compass drawing tool has various modifications and designs, and this construction method is reminiscent of string-based methods for creating ellipses (with two fixed pins). Yet another mechanical method is a variant of the previous string compass method, providing greater precision and more flexibility. Instead of the central pin and string of the string compass, this device uses a non-rotating shaft (column) with helical threads (screw; see Archimedes’ screw) to which are attached two slotted arms: one horizontal arm is affixed to (travels up) the screw threads of the vertical shaft at one end, and holds a drawing tool at the other end; another sloped arm is affixed at one end to the top of the screw shaft, and is joined by a pin loosely fitted in its slot to the slot of the horizontal arm. The two arms rotate together and work in consort to produce the arithmetic spiral: as the horizontal arm gradually climbs the screw, that arm’s slotted attachment to the sloped arm gradually shortens the drawing radius. The angle of the sloped arm remains constant throughout (traces a cone), and setting a different angle varies the pitch of the spiral. This device provides a high degree of precision, depending on the precision with which the device is machined (machining a precise helical screw thread is a related challenge). And of course the use of a screw shaft in this mechanism is reminiscent of Archimedes’ screw. See also References External links Jonathan Matt making the Archimedean spiral interesting - Video : The surprising beauty of Mathematics - TedX Talks, Green Farms Page with Java application to interactively explore the Archimedean spiral and its related curves Online exploration using JSXGraph (JavaScript) Archimedean spiral at "mathcurve" Squaring the circle Spirals Spiral Articles with example R code Plane curves
Archimedean spiral
[ "Mathematics" ]
2,051
[ "Geometry problems", "Squaring the circle", "Plane curves", "Euclidean plane geometry", "Planes (geometry)", "Mathematical problems", "Pi" ]
61,577
https://en.wikipedia.org/wiki/Electrical%20resistance%20and%20conductance
The electrical resistance of an object is a measure of its opposition to the flow of electric current. Its reciprocal quantity is electrical conductance, measuring the ease with which an electric current passes. Electrical resistance shares some conceptual parallels with mechanical friction. The SI unit of electrical resistance is the ohm (Ω), while electrical conductance is measured in siemens (S) (formerly called the 'mho' and then represented by ℧). The resistance of an object depends in large part on the material it is made of. Objects made of electrical insulators like rubber tend to have very high resistance and low conductance, while objects made of electrical conductors like metals tend to have very low resistance and high conductance. This relationship is quantified by resistivity or conductivity. The nature of a material is not the only factor in resistance and conductance, however; it also depends on the size and shape of an object because these properties are extensive rather than intensive. For example, a wire's resistance is higher if it is long and thin, and lower if it is short and thick. All objects resist electrical current, except for superconductors, which have a resistance of zero. The resistance of an object is defined as the ratio of voltage across it to current through it, while the conductance is the reciprocal: For a wide variety of materials and conditions, and are directly proportional to each other, and therefore and are constants (although they will depend on the size and shape of the object, the material it is made of, and other factors like temperature or strain). This proportionality is called Ohm's law, and materials that satisfy it are called ohmic materials. In other cases, such as a transformer, diode or battery, and are not directly proportional. The ratio is sometimes still useful, and is referred to as a chordal resistance or static resistance, since it corresponds to the inverse slope of a chord between the origin and an – curve. In other situations, the derivative may be most useful; this is called the differential resistance. Introduction In the hydraulic analogy, current flowing through a wire (or resistor) is like water flowing through a pipe, and the voltage drop across the wire is like the pressure drop that pushes water through the pipe. Conductance is proportional to how much flow occurs for a given pressure, and resistance is proportional to how much pressure is required to achieve a given flow. The voltage drop (i.e., difference between voltages on one side of the resistor and the other), not the voltage itself, provides the driving force pushing current through a resistor. In hydraulics, it is similar: the pressure difference between two sides of a pipe, not the pressure itself, determines the flow through it. For example, there may be a large water pressure above the pipe, which tries to push water down through the pipe. But there may be an equally large water pressure below the pipe, which tries to push water back up through the pipe. If these pressures are equal, no water flows. (In the image at right, the water pressure below the pipe is zero.) The resistance and conductance of a wire, resistor, or other element is mostly determined by two properties: geometry (shape), and material. Geometry is important because it is more difficult to push water through a long, narrow pipe than a wide, short pipe. In the same way, a long, thin copper wire has higher resistance (lower conductance) than a short, thick copper wire. Materials are important as well. 
A pipe filled with hair restricts the flow of water more than a clean pipe of the same shape and size. Similarly, electrons can flow freely and easily through a copper wire, but cannot flow as easily through a steel wire of the same shape and size, and they essentially cannot flow at all through an insulator like rubber, regardless of its shape. The difference between copper, steel, and rubber is related to their microscopic structure and electron configuration, and is quantified by a property called resistivity. In addition to geometry and material, there are various other factors that influence resistance and conductance, such as temperature; see below. Conductors and resistors Substances in which electricity can flow are called conductors. A piece of conducting material of a particular resistance meant for use in a circuit is called a resistor. Conductors are made of high-conductivity materials such as metals, in particular copper and aluminium. Resistors, on the other hand, are made of a wide variety of materials depending on factors such as the desired resistance, amount of energy that it needs to dissipate, precision, and costs. Ohm's law For many materials, the current through the material is proportional to the voltage applied across it: over a wide range of voltages and currents. Therefore, the resistance and conductance of objects or electronic components made of these materials is constant. This relationship is called Ohm's law, and materials which obey it are called ohmic materials. Examples of ohmic components are wires and resistors. The current–voltage graph of an ohmic device consists of a straight line through the origin with positive slope. Other components and materials used in electronics do not obey Ohm's law; the current is not proportional to the voltage, so the resistance varies with the voltage and current through them. These are called nonlinear or non-ohmic. Examples include diodes and fluorescent lamps. Relation to resistivity and conductivity The resistance of a given object depends primarily on two factors: what material it is made of, and its shape. For a given material, the resistance is inversely proportional to the cross-sectional area; for example, a thick copper wire has lower resistance than an otherwise-identical thin copper wire. Also, for a given material, the resistance is proportional to the length; for example, a long copper wire has higher resistance than an otherwise-identical short copper wire. The resistance and conductance of a conductor of uniform cross section, therefore, can be computed as where is the length of the conductor, measured in metres (m), is the cross-sectional area of the conductor measured in square metres (m2), (sigma) is the electrical conductivity measured in siemens per meter (S·m−1), and (rho) is the electrical resistivity (also called specific electrical resistance) of the material, measured in ohm-metres (Ω·m). The resistivity and conductivity are proportionality constants, and therefore depend only on the material the wire is made of, not the geometry of the wire. Resistivity and conductivity are reciprocals: . Resistivity is a measure of the material's ability to oppose electric current. This formula is not exact, as it assumes the current density is totally uniform in the conductor, which is not always true in practical situations. However, this formula still provides a good approximation for long thin conductors such as wires. 
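As a quick numerical illustration of the uniform-conductor relation described above, the following Python snippet computes the resistance and conductance of a 10 m length of 1 mm-diameter wire. The resistivity figure for copper is a typical room-temperature handbook value assumed for this sketch, not one stated in the text.

```python
import math

# rho for copper is taken here as ~1.68e-8 ohm-metre (assumed handbook value).
rho_copper = 1.68e-8       # ohm * m
length = 10.0              # m
diameter = 1.0e-3          # m (a 1 mm wire)

area = math.pi * (diameter / 2) ** 2       # cross-sectional area, m^2
resistance = rho_copper * length / area    # R = rho * length / area, in ohms
conductance = 1.0 / resistance             # siemens

print(f"R = {resistance:.3f} ohm, G = {conductance:.1f} S")
# Doubling the length doubles R; doubling the diameter quarters it,
# matching the geometric dependence described above.
```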
Another situation for which this formula is not exact is with alternating current (AC), because the skin effect inhibits current flow near the center of the conductor. For this reason, the geometrical cross-section is different from the effective cross-section in which current actually flows, so resistance is higher than expected. Similarly, if two conductors near each other carry AC current, their resistances increase due to the proximity effect. At commercial power frequency, these effects are significant for large conductors carrying large currents, such as busbars in an electrical substation, or large power cables carrying more than a few hundred amperes. The resistivity of different materials varies by an enormous amount: For example, the conductivity of teflon is about 10^30 times lower than the conductivity of copper. Loosely speaking, this is because metals have large numbers of "delocalized" electrons that are not stuck in any one place, so they are free to move across large distances. In an insulator, such as Teflon, each electron is tightly bound to a single molecule so a great force is required to pull it away. Semiconductors lie between these two extremes. More details can be found in the article: Electrical resistivity and conductivity. For the case of electrolyte solutions, see the article: Conductivity (electrolytic). Resistivity varies with temperature. In semiconductors, resistivity also changes when exposed to light. See below. Measurement An instrument for measuring resistance is called an ohmmeter. Simple ohmmeters cannot measure low resistances accurately because the resistance of their measuring leads causes a voltage drop that interferes with the measurement, so more accurate devices use four-terminal sensing. Typical values Static and differential resistance Many electrical elements, such as diodes and batteries, do not satisfy Ohm's law. These are called non-ohmic or non-linear, and their current–voltage curves are not straight lines through the origin. Resistance and conductance can still be defined for non-ohmic elements. However, unlike ohmic resistance, non-linear resistance is not constant but varies with the voltage or current through the device; i.e., its operating point. There are two types of resistance: AC circuits Impedance and admittance When an alternating current flows through a circuit, the relation between current and voltage across a circuit element is characterized not only by the ratio of their magnitudes, but also the difference in their phases. For example, in an ideal resistor, the moment when the voltage reaches its maximum, the current also reaches its maximum (current and voltage are oscillating in phase). But for a capacitor or inductor, the maximum current flow occurs as the voltage passes through zero and vice versa (current and voltage are oscillating 90° out of phase, see image below). Complex numbers are used to keep track of both the phase and magnitude of current and voltage: where: is time; and are the voltage and current as a function of time, respectively; and indicate the amplitude of the voltage and current, respectively; is the angular frequency of the AC current; is the displacement angle; and are the complex-valued voltage and current, respectively; and are the complex impedance and admittance, respectively; indicates the real part of a complex number; and is the imaginary unit. 
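The phase relationships just described can be made concrete with Python's built-in complex arithmetic. The component values and the frequency below are arbitrary assumptions chosen for illustration; the point is only that an ideal resistor gives a 0° phase angle while an ideal inductor and capacitor give +90° and −90° respectively.

```python
import cmath
import math

# Illustrative component values (assumptions for this sketch only).
R = 100.0        # ohm
L = 50e-3        # henry
C = 10e-6        # farad
f = 50.0         # hertz
w = 2 * math.pi * f

Z_R = complex(R, 0)          # resistor: voltage and current in phase
Z_L = 1j * w * L             # inductor: +90 degree impedance angle
Z_C = 1 / (1j * w * C)       # capacitor: -90 degree impedance angle

for name, Z in [("R", Z_R), ("L", Z_L), ("C", Z_C)]:
    print(f"{name}: |Z| = {abs(Z):8.2f} ohm, phase = {math.degrees(cmath.phase(Z)):6.1f} deg")

# In a series combination the impedances add, and the overall phase
# lies between -90 and +90 degrees depending on which reactance dominates.
Z_total = Z_R + Z_L + Z_C
print(f"series RLC: |Z| = {abs(Z_total):.2f} ohm, phase = {math.degrees(cmath.phase(Z_total)):.1f} deg")
```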
The impedance and admittance may be expressed as complex numbers that can be broken into real and imaginary parts: where is resistance, is conductance, is reactance, and is susceptance. These lead to the complex number identities which are true in all cases, whereas is only true in the special cases of either DC or reactance-free current. The complex angle is the phase difference between the voltage and current passing through a component with impedance . For capacitors and inductors, this angle is exactly -90° or +90°, respectively, and and are nonzero. Ideal resistors have an angle of 0°, since is zero (and hence also), and and reduce to and respectively. In general, AC systems are designed to keep the phase angle close to 0° as much as possible, since it reduces the reactive power, which does no useful work at a load. In a simple case with an inductive load (causing the phase to increase), a capacitor may be added for compensation at one frequency, since the capacitor's phase shift is negative, bringing the total impedance phase closer to 0° again. is the reciprocal of () for all circuits, just as for DC circuits containing only resistors, or AC circuits for which either the reactance or susceptance happens to be zero ( or , respectively) (if one is zero, then for realistic systems both must be zero). Frequency dependence A key feature of AC circuits is that the resistance and conductance can be frequency-dependent, a phenomenon known as the universal dielectric response. One reason, mentioned above is the skin effect (and the related proximity effect). Another reason is that the resistivity itself may depend on frequency (see Drude model, deep-level traps, resonant frequency, Kramers–Kronig relations, etc.) Energy dissipation and Joule heating Resistors (and other elements with resistance) oppose the flow of electric current; therefore, electrical energy is required to push current through the resistance. This electrical energy is dissipated, heating the resistor in the process. This is called Joule heating (after James Prescott Joule), also called ohmic heating or resistive heating. The dissipation of electrical energy is often undesired, particularly in the case of transmission losses in power lines. High voltage transmission helps reduce the losses by reducing the current for a given power. On the other hand, Joule heating is sometimes useful, for example in electric stoves and other electric heaters (also called resistive heaters). As another example, incandescent lamps rely on Joule heating: the filament is heated to such a high temperature that it glows "white hot" with thermal radiation (also called incandescence). The formula for Joule heating is: where is the power (energy per unit time) converted from electrical energy to thermal energy, is the resistance, and is the current through the resistor. Dependence on other conditions Temperature dependence Near room temperature, the resistivity of metals typically increases as temperature is increased, while the resistivity of semiconductors typically decreases as temperature is increased. The resistivity of insulators and electrolytes may increase or decrease depending on the system. For the detailed behavior and explanation, see Electrical resistivity and conductivity. As a consequence, the resistance of wires, resistors, and other components often change with temperature. This effect may be undesired, causing an electronic circuit to malfunction at extreme temperatures. In some cases, however, the effect is put to good use. 
When temperature-dependent resistance of a component is used purposefully, the component is called a resistance thermometer or thermistor. (A resistance thermometer is made of metal, usually platinum, while a thermistor is made of ceramic or polymer.) Resistance thermometers and thermistors are generally used in two ways. First, they can be used as thermometers: by measuring the resistance, the temperature of the environment can be inferred. Second, they can be used in conjunction with Joule heating (also called self-heating): if a large current is running through the resistor, the resistor's temperature rises and therefore its resistance changes. Therefore, these components can be used in a circuit-protection role similar to fuses, or for feedback in circuits, or for many other purposes. In general, self-heating can turn a resistor into a nonlinear and hysteretic circuit element. For more details see Thermistor#Self-heating effects. If the temperature does not vary too much, a linear approximation is typically used: where is called the temperature coefficient of resistance, is a fixed reference temperature (usually room temperature), and is the resistance at temperature . The parameter is an empirical parameter fitted from measurement data. Because the linear approximation is only an approximation, is different for different reference temperatures. For this reason it is usual to specify the temperature that was measured at with a suffix, such as , and the relationship only holds in a range of temperatures around the reference. The temperature coefficient is typically to for metals near room temperature. It is usually negative for semiconductors and insulators, with highly variable magnitude. Strain dependence Just as the resistance of a conductor depends upon temperature, the resistance of a conductor depends upon strain. By placing a conductor under tension (a form of stress that leads to strain in the form of stretching of the conductor), the length of the section of conductor under tension increases and its cross-sectional area decreases. Both these effects contribute to increasing the resistance of the strained section of conductor. Under compression (strain in the opposite direction), the resistance of the strained section of conductor decreases. See the discussion on strain gauges for details about devices constructed to take advantage of this effect. Light illumination dependence Some resistors, particularly those made from semiconductors, exhibit photoconductivity, meaning that their resistance changes when light is shining on them. Therefore, they are called photoresistors (or light dependent resistors). These are a common type of light detector. Superconductivity Superconductors are materials that have exactly zero resistance and infinite conductance, because they can have and . This also means there is no joule heating, or in other words no dissipation of electrical energy. Therefore, if superconductive wire is made into a closed loop, current flows around the loop forever. Superconductors require cooling to temperatures near with liquid helium for most metallic superconductors like niobium–tin alloys, or cooling to temperatures near with liquid nitrogen for the expensive, brittle and delicate ceramic high temperature superconductors. Nevertheless, there are many technological applications of superconductivity, including superconducting magnets. 
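Returning to the linear temperature approximation described earlier in this section, the following minimal sketch applies it to a small copper resistance. The temperature coefficient used is a commonly quoted approximate value for copper near room temperature, assumed here for illustration rather than taken from the article, and the approximation is only meaningful close to the reference temperature.

```python
def resistance_at(T, R_ref, alpha, T_ref=20.0):
    """Linear approximation R(T) = R_ref * (1 + alpha * (T - T_ref)),
    valid only for temperatures reasonably close to T_ref."""
    return R_ref * (1 + alpha * (T - T_ref))

R20 = 0.214          # ohm at 20 C, e.g. the wire from the earlier sketch
alpha_cu = 3.9e-3    # per degree C, an assumed typical value for copper

for T in (0.0, 20.0, 60.0, 100.0):
    print(f"T = {T:5.1f} C  ->  R = {resistance_at(T, R20, alpha_cu):.4f} ohm")
```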
See also Conductance quantum Von Klitzing constant (its reciprocal) Electrical measurements Contact resistance Electrical resistivity and conductivity for more information about the physical mechanisms for conduction in materials. Johnson–Nyquist noise Quantum Hall effect, a standard for high-accuracy resistance measurements. Resistor RKM code Series and parallel circuits Sheet resistance SI electromagnetism units Thermal resistance Voltage divider Voltage drop Footnotes References External links Electricity Electromagnetic quantities
Electrical resistance and conductance
[ "Physics", "Mathematics" ]
3,588
[ "Electromagnetic quantities", "Physical quantities", "Quantity", "Wikipedia categories named after physical quantities", "Electrical resistance and conductance" ]
61,580
https://en.wikipedia.org/wiki/Electrical%20resistivity%20and%20conductivity
Electrical resistivity (also called volume resistivity or specific electrical resistance) is a fundamental specific property of a material that measures its electrical resistance or how strongly it resists electric current. A low resistivity indicates a material that readily allows electric current. Resistivity is commonly represented by the Greek letter  (rho). The SI unit of electrical resistivity is the ohm-metre (Ω⋅m). For example, if a solid cube of material has sheet contacts on two opposite faces, and the resistance between these contacts is , then the resistivity of the material is . Electrical conductivity (or specific conductance) is the reciprocal of electrical resistivity. It represents a material's ability to conduct electric current. It is commonly signified by the Greek letter  (sigma), but  (kappa) (especially in electrical engineering) and  (gamma) are sometimes used. The SI unit of electrical conductivity is siemens per metre (S/m). Resistivity and conductivity are intensive properties of materials, giving the opposition of a standard cube of material to current. Electrical resistance and conductance are corresponding extensive properties that give the opposition of a specific object to electric current. Definition Ideal case In an ideal case, cross-section and physical composition of the examined material are uniform across the sample, and the electric field and current density are both parallel and constant everywhere. Many resistors and conductors do in fact have a uniform cross section with a uniform flow of electric current, and are made of a single material, so that this is a good model. (See the adjacent diagram.) When this is the case, the resistance of the conductor is directly proportional to its length and inversely proportional to its cross-sectional area, where the electrical resistivity  (Greek: rho) is the constant of proportionality. This is written as: where The resistivity can be expressed using the SI unit ohm metre (Ω⋅m) — i.e. ohms multiplied by square metres (for the cross-sectional area) then divided by metres (for the length). Both resistance and resistivity describe how difficult it is to make electrical current flow through a material, but unlike resistance, resistivity is an intrinsic property and does not depend on geometric properties of a material. This means that all pure copper (Cu) wires (which have not been subjected to distortion of their crystalline structure etc.), irrespective of their shape and size, have the same , but a long, thin copper wire has a much larger than a thick, short copper wire. Every material has its own characteristic resistivity. For example, rubber has a far larger resistivity than copper. In a hydraulic analogy, passing current through a high-resistivity material is like pushing water through a pipe full of sand - while passing current through a low-resistivity material is like pushing water through an empty pipe. If the pipes are the same size and shape, the pipe full of sand has higher resistance to flow. Resistance, however, is not determined by the presence or absence of sand. It also depends on the length and width of the pipe: short or wide pipes have lower resistance than narrow or long pipes. The above equation can be transposed to get Pouillet's law (named after Claude Pouillet): The resistance of a given element is proportional to the length, but inversely proportional to the cross-sectional area. 
For example, if  = ,  = (forming a cube with perfectly conductive contacts on opposite faces), then the resistance of this element in ohms is numerically equal to the resistivity of the material it is made of in Ω⋅m. Conductivity, , is the inverse of resistivity: Conductivity has SI units of siemens per metre (S/m). General scalar quantities If the geometry is more complicated, or if the resistivity varies from point to point within the material, the current and electric field will be functions of position. Then it is necessary to use a more general expression in which the resistivity at a particular point is defined as the ratio of the electric field to the density of the current it creates at that point: where The current density is parallel to the electric field by necessity. Conductivity is the inverse (reciprocal) of resistivity. Here, it is given by: For example, rubber is a material with large and small  — because even a very large electric field in rubber makes almost no current flow through it. On the other hand, copper is a material with small and large  — because even a small electric field pulls a lot of current through it. This expression simplifies to the formula given above under "ideal case" when the resistivity is constant in the material and the geometry has a uniform cross-section. In this case, the electric field and current density are constant and parallel. {| class="toccolours collapsible collapsed" width="80%" style="text-align:left;" ! Derivation of the constant case from the general case |- |We will combine three equations. Assume the geometry has a uniform cross-section and the resistivity is constant in the material. Then the electric field and current density are constant and parallel, and by the general definition of resistivity, we obtain Since the electric field is constant, it is given by the total voltage across the conductor divided by the length of the conductor: Since the current density is constant, it is equal to the total current divided by the cross sectional area: Plugging in the values of and into the first expression, we obtain: Finally, we apply Ohm's law, : |} Tensor resistivity When the resistivity of a material has a directional component, the most general definition of resistivity must be used. It starts from the tensor-vector form of Ohm's law, which relates the electric field inside a material to the electric current flow. This equation is completely general, meaning it is valid in all cases, including those mentioned above. However, this definition is the most complicated, so it is only directly used in anisotropic cases, where the more simple definitions cannot be applied. If the material is not anisotropic, it is safe to ignore the tensor-vector definition, and use a simpler expression instead. Here, anisotropic means that the material has different properties in different directions. For example, a crystal of graphite consists microscopically of a stack of sheets, and current flows very easily through each sheet, but much less easily from one sheet to the adjacent one. In such cases, the current does not flow in exactly the same direction as the electric field. Thus, the appropriate equations are generalized to the three-dimensional tensor form: where the conductivity and resistivity are rank-2 tensors, and electric field and current density are vectors. These tensors can be represented by 3×3 matrices, the vectors with 3×1 matrices, with matrix multiplication used on the right side of these equations. 
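Before the matrix form is written out below, a small numerical sketch may help. The conductivity tensor used here is made up purely for illustration (a layered material that conducts well in-plane and poorly out-of-plane); the point is that the resistivity tensor is the matrix inverse of the conductivity tensor, and that the current density need not be parallel to the applied field.

```python
import numpy as np

# Made-up anisotropic conductivity tensor (S/m), chosen only to show the algebra.
sigma = np.array([[5.0e4, 0.0,   0.0],
                  [0.0,   5.0e4, 0.0],
                  [0.0,   0.0,   1.0e2]])

rho = np.linalg.inv(sigma)        # resistivity tensor = matrix inverse of conductivity

E = np.array([1.0, 0.0, 1.0])     # applied field with x and z components (V/m)
J = sigma @ E                     # current density (A/m^2)

print("J =", J)                   # J is not parallel to E: the z component is much smaller
print("E recovered from rho @ J:", rho @ J)
```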
In matrix form, the resistivity relation is given by: where Equivalently, resistivity can be given in the more compact Einstein notation: In either case, the resulting expression for each electric field component is: Since the choice of the coordinate system is free, the usual convention is to simplify the expression by choosing an -axis parallel to the current direction, so . This leaves: Conductivity is defined similarly: or both resulting in: Looking at the two expressions, and are the matrix inverse of each other. However, in the most general case, the individual matrix elements are not necessarily reciprocals of one another; for example, may not be equal to . This can be seen in the Hall effect, where is nonzero. In the Hall effect, due to rotational invariance about the -axis, and , so the relation between resistivity and conductivity simplifies to: If the electric field is parallel to the applied current, and are zero. When they are zero, one number, , is enough to describe the electrical resistivity. It is then written as simply , and this reduces to the simpler expression. Conductivity and current carriers Relation between current density and electric current velocity Electric current is the ordered movement of electric charges. Causes of conductivity Band theory simplified According to elementary quantum mechanics, an electron in an atom or crystal can only have certain precise energy levels; energies between these levels are impossible. When a large number of such allowed levels have close-spaced energy values – i.e. have energies that differ only minutely – those close energy levels in combination are called an "energy band". There can be many such energy bands in a material, depending on the atomic number of the constituent atoms and their distribution within the crystal. The material's electrons seek to minimize the total energy in the material by settling into low energy states; however, the Pauli exclusion principle means that only one can exist in each such state. So the electrons "fill up" the band structure starting from the bottom. The characteristic energy level up to which the electrons have filled is called the Fermi level. The position of the Fermi level with respect to the band structure is very important for electrical conduction: Only electrons in energy levels near or above the Fermi level are free to move within the broader material structure, since the electrons can easily jump among the partially occupied states in that region. In contrast, the low energy states are completely filled with a fixed limit on the number of electrons at all times, and the high energy states are empty of electrons at all times. Electric current consists of a flow of electrons. In metals there are many electron energy levels near the Fermi level, so there are many electrons available to move. This is what causes the high electronic conductivity of metals. An important part of band theory is that there may be forbidden bands of energy: energy intervals that contain no energy levels. In insulators and semiconductors, the number of electrons is just the right amount to fill a certain integer number of low energy bands, exactly to the boundary. In this case, the Fermi level falls within a band gap. Since there are no available states near the Fermi level, and the electrons are not freely movable, the electronic conductivity is very low. 
In metals A metal consists of a lattice of atoms, each with an outer shell of electrons that freely dissociate from their parent atoms and travel through the lattice. This is also known as a positive ionic lattice. This 'sea' of dissociable electrons allows the metal to conduct electric current. When an electrical potential difference (a voltage) is applied across the metal, the resulting electric field causes electrons to drift towards the positive terminal. The actual drift velocity of electrons is typically small, on the order of magnitude of metres per hour. However, due to the sheer number of moving electrons, even a slow drift velocity results in a large current density. The mechanism is similar to transfer of momentum of balls in a Newton's cradle but the rapid propagation of an electric energy along a wire is not due to the mechanical forces, but the propagation of an energy-carrying electromagnetic field guided by the wire. Most metals have electrical resistance. In simpler models (non quantum mechanical models) this can be explained by replacing electrons and the crystal lattice by a wave-like structure. When the electron wave travels through the lattice, the waves interfere, which causes resistance. The more regular the lattice is, the less disturbance happens and thus the less resistance. The amount of resistance is thus mainly caused by two factors. First, it is caused by the temperature and thus amount of vibration of the crystal lattice. Higher temperatures cause bigger vibrations, which act as irregularities in the lattice. Second, the purity of the metal is relevant as a mixture of different ions is also an irregularity. The small decrease in conductivity on melting of pure metals is due to the loss of long range crystalline order. The short range order remains and strong correlation between positions of ions results in coherence between waves diffracted by adjacent ions. In semiconductors and insulators In metals, the Fermi level lies in the conduction band (see Band Theory, above) giving rise to free conduction electrons. However, in semiconductors the position of the Fermi level is within the band gap, about halfway between the conduction band minimum (the bottom of the first band of unfilled electron energy levels) and the valence band maximum (the top of the band below the conduction band, of filled electron energy levels). That applies for intrinsic (undoped) semiconductors. This means that at absolute zero temperature, there would be no free conduction electrons, and the resistance is infinite. However, the resistance decreases as the charge carrier density (i.e., without introducing further complications, the density of electrons) in the conduction band increases. In extrinsic (doped) semiconductors, dopant atoms increase the majority charge carrier concentration by donating electrons to the conduction band or producing holes in the valence band. (A "hole" is a position where an electron is missing; such holes can behave in a similar way to electrons.) For both types of donor or acceptor atoms, increasing dopant density reduces resistance. Hence, highly doped semiconductors behave metallically. At very high temperatures, the contribution of thermally generated carriers dominates over the contribution from dopant atoms, and the resistance decreases exponentially with temperature. In ionic liquids/electrolytes In electrolytes, electrical conduction happens not by band electrons or holes, but by full atomic species (ions) traveling, each carrying an electrical charge. 
The resistivity of ionic solutions (electrolytes) varies tremendously with concentration – while distilled water is almost an insulator, salt water is a reasonable electrical conductor. Conduction in ionic liquids is also controlled by the movement of ions, but here we are talking about molten salts rather than solvated ions. In biological membranes, currents are carried by ionic salts. Small holes in cell membranes, called ion channels, are selective to specific ions and determine the membrane resistance. The concentration of ions in a liquid (e.g., in an aqueous solution) depends on the degree of dissociation of the dissolved substance, characterized by a dissociation coefficient , which is the ratio of the concentration of ions to the concentration of molecules of the dissolved substance : The specific electrical conductivity () of a solution is equal to: where : module of the ion charge, and : mobility of positively and negatively charged ions, : concentration of molecules of the dissolved substance, : the coefficient of dissociation. Superconductivity The electrical resistivity of a metallic conductor decreases gradually as temperature is lowered. In normal (that is, non-superconducting) conductors, such as copper or silver, this decrease is limited by impurities and other defects. Even near absolute zero, a real sample of a normal conductor shows some resistance. In a superconductor, the resistance drops abruptly to zero when the material is cooled below its critical temperature. In a normal conductor, the current is driven by a voltage gradient, whereas in a superconductor, there is no voltage gradient and the current is instead related to the phase gradient of the superconducting order parameter. A consequence of this is that an electric current flowing in a loop of superconducting wire can persist indefinitely with no power source. In a class of superconductors known as type II superconductors, including all known high-temperature superconductors, an extremely low but nonzero resistivity appears at temperatures not too far below the nominal superconducting transition when an electric current is applied in conjunction with a strong magnetic field, which may be caused by the electric current. This is due to the motion of magnetic vortices in the electronic superfluid, which dissipates some of the energy carried by the current. The resistance due to this effect is tiny compared with that of non-superconducting materials, but must be taken into account in sensitive experiments. However, as the temperature decreases far enough below the nominal superconducting transition, these vortices can become frozen so that the resistance of the material becomes truly zero. Plasma Plasmas are very good conductors and electric potentials play an important role. The potential as it exists on average in the space between charged particles, independent of the question of how it can be measured, is called the plasma potential, or space potential. If an electrode is inserted into a plasma, its potential generally lies considerably below the plasma potential, due to what is termed a Debye sheath. The good electrical conductivity of plasmas makes their electric fields very small. This results in the important concept of quasineutrality, which says the density of negative charges is approximately equal to the density of positive charges over large volumes of the plasma (), but on the scale of the Debye length there can be charge imbalance. 
In the special case that double layers are formed, the charge separation can extend some tens of Debye lengths. The magnitude of the potentials and electric fields must be determined by means other than simply finding the net charge density. A common example is to assume that the electrons satisfy the Boltzmann relation: Differentiating this relation provides a means to calculate the electric field from the density: (∇ is the vector gradient operator; see nabla symbol and gradient for more information.) It is possible to produce a plasma that is not quasineutral. An electron beam, for example, has only negative charges. The density of a non-neutral plasma must generally be very low, or it must be very small. Otherwise, the repulsive electrostatic force dissipates it. In astrophysical plasmas, Debye screening prevents electric fields from directly affecting the plasma over large distances, i.e., greater than the Debye length. However, the existence of charged particles causes the plasma to generate, and be affected by, magnetic fields. This can and does cause extremely complex behavior, such as the generation of plasma double layers, an object that separates charge over a few tens of Debye lengths. The dynamics of plasmas interacting with external and self-generated magnetic fields are studied in the academic discipline of magnetohydrodynamics. Plasma is often called the fourth state of matter after solid, liquids and gases. It is distinct from these and other lower-energy states of matter. Although it is closely related to the gas phase in that it also has no definite form or volume, it differs in a number of ways, including the following: Resistivity and conductivity of various materials A conductor such as a metal has high conductivity and a low resistivity. An insulator such as glass has low conductivity and a high resistivity. The conductivity of a semiconductor is generally intermediate, but varies widely under different conditions, such as exposure of the material to electric fields or specific frequencies of light, and, most important, with temperature and composition of the semiconductor material. The degree of semiconductors doping makes a large difference in conductivity. To a point, more doping leads to higher conductivity. The conductivity of a water/aqueous solution is highly dependent on its concentration of dissolved salts, and other chemical species that ionize in the solution. Electrical conductivity of water samples is used as an indicator of how salt-free, ion-free, or impurity-free the sample is; the purer the water, the lower the conductivity (the higher the resistivity). Conductivity measurements in water are often reported as specific conductance, relative to the conductivity of pure water at . An EC meter is normally used to measure conductivity in a solution. A rough summary is as follows: This table shows the resistivity (), conductivity and temperature coefficient of various materials at . The effective temperature coefficient varies with temperature and purity level of the material. The 20 °C value is only an approximation when used at other temperatures. For example, the coefficient becomes lower at higher temperatures for copper, and the value 0.00427 is commonly specified at . The extremely low resistivity (high conductivity) of silver is characteristic of metals. 
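The plasma discussion above refers to the electron Boltzmann relation and to the electric field obtained by differentiating it, but the expressions themselves did not survive extraction. Their standard form is shown below, with the usual symbol choices taken as an assumption.

```latex
% Standard form of the electron Boltzmann relation and the field derived from it
% (symbol choices assumed):
n_{e} = n_{0}\,\exp\!\left(\frac{e\,\Phi}{k_{B} T_{e}}\right),
\qquad
\mathbf{E} = -\nabla\Phi = -\frac{k_{B} T_{e}}{e}\,\frac{\nabla n_{e}}{n_{e}}
```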
George Gamow tidily summed up the nature of the metals' dealings with electrons in his popular science book One, Two, Three...Infinity (1947): More technically, the free electron model gives a basic description of electron flow in metals. Wood is widely regarded as an extremely good insulator, but its resistivity is sensitively dependent on moisture content, with damp wood being a factor of at least worse insulator than oven-dry. In any case, a sufficiently high voltage – such as that in lightning strikes or some high-tension power lines – can lead to insulation breakdown and electrocution risk even with apparently dry wood. Temperature dependence Linear approximation The electrical resistivity of most materials changes with temperature. If the temperature does not vary too much, a linear approximation is typically used: where is called the temperature coefficient of resistivity, is a fixed reference temperature (usually room temperature), and is the resistivity at temperature . The parameter is an empirical parameter fitted from measurement data. Because the linear approximation is only an approximation, is different for different reference temperatures. For this reason it is usual to specify the temperature that was measured at with a suffix, such as , and the relationship only holds in a range of temperatures around the reference. When the temperature varies over a large temperature range, the linear approximation is inadequate and a more detailed analysis and understanding should be used. Metals In general, electrical resistivity of metals increases with temperature. Electron–phonon interactions can play a key role. At high temperatures, the resistance of a metal increases linearly with temperature. As the temperature of a metal is reduced, the temperature dependence of resistivity follows a power law function of temperature. Mathematically the temperature dependence of the resistivity of a metal can be approximated through the Bloch–Grüneisen formula: where is the residual resistivity due to defect scattering, A is a constant that depends on the velocity of electrons at the Fermi surface, the Debye radius and the number density of electrons in the metal. is the Debye temperature as obtained from resistivity measurements and matches very closely with the values of Debye temperature obtained from specific heat measurements. n is an integer that depends upon the nature of interaction:  = 5 implies that the resistance is due to scattering of electrons by phonons (as it is for simple metals)  = 3 implies that the resistance is due to s-d electron scattering (as is the case for transition metals)  = 2 implies that the resistance is due to electron–electron interaction. The Bloch–Grüneisen formula is an approximation obtained assuming that the studied metal has spherical Fermi surface inscribed within the first Brillouin zone and a Debye phonon spectrum. If more than one source of scattering is simultaneously present, Matthiessen's rule (first formulated by Augustus Matthiessen in the 1860s) states that the total resistance can be approximated by adding up several different terms, each with the appropriate value of . As the temperature of the metal is sufficiently reduced (so as to 'freeze' all the phonons), the resistivity usually reaches a constant value, known as the residual resistivity. This value depends not only on the type of metal, but on its purity and thermal history. The value of the residual resistivity of a metal is decided by its impurity concentration. 
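The two temperature-dependence expressions referenced above, the linear approximation and the Bloch–Grüneisen formula, are missing their formulas. The standard forms are reproduced below for reference; treat the notation as an assumption rather than the article's original choice.

```latex
% Linear approximation about a reference temperature T_0:
\rho(T) = \rho_{0}\left[1 + \alpha\,(T - T_{0})\right]

% Bloch–Grüneisen form, with Debye temperature \Theta_R and exponent n = 2, 3 or 5:
\rho(T) = \rho(0) + A\left(\frac{T}{\Theta_R}\right)^{n}
  \int_{0}^{\Theta_R/T} \frac{x^{n}}{(e^{x}-1)(1-e^{-x})}\,\mathrm{d}x
```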
Some materials lose all electrical resistivity at sufficiently low temperatures, due to an effect known as superconductivity. An investigation of the low-temperature resistivity of metals was the motivation to Heike Kamerlingh Onnes's experiments that led in 1911 to discovery of superconductivity. For details see History of superconductivity. Wiedemann–Franz law The Wiedemann–Franz law states that for materials where heat and charge transport is dominated by electrons, the ratio of thermal to electrical conductivity is proportional to the temperature: where is the thermal conductivity, is the Boltzmann constant, is the electron charge, is temperature, and is the electric conductivity. The ratio on the rhs is called the Lorenz number. Semiconductors In general, intrinsic semiconductor resistivity decreases with increasing temperature. The electrons are bumped to the conduction energy band by thermal energy, where they flow freely, and in doing so leave behind holes in the valence band, which also flow freely. The electric resistance of a typical intrinsic (non doped) semiconductor decreases exponentially with temperature following an Arrhenius model: An even better approximation of the temperature dependence of the resistivity of a semiconductor is given by the Steinhart–Hart equation: where , and are the so-called Steinhart–Hart coefficients. This equation is used to calibrate thermistors. Extrinsic (doped) semiconductors have a far more complicated temperature profile. As temperature increases starting from absolute zero they first decrease steeply in resistance as the carriers leave the donors or acceptors. After most of the donors or acceptors have lost their carriers, the resistance starts to increase again slightly due to the reducing mobility of carriers (much as in a metal). At higher temperatures, they behave like intrinsic semiconductors as the carriers from the donors/acceptors become insignificant compared to the thermally generated carriers. In non-crystalline semiconductors, conduction can occur by charges quantum tunnelling from one localised site to another. This is known as variable range hopping and has the characteristic form of where = 2, 3, 4, depending on the dimensionality of the system. Kondo insulators Kondo insulators are materials where the resistivity follows the formula where , , and are constant parameters, the residual resistivity, the Fermi liquid contribution, a lattice vibrations term and the Kondo effect. Complex resistivity and conductivity When analyzing the response of materials to alternating electric fields (dielectric spectroscopy), in applications such as electrical impedance tomography, it is convenient to replace resistivity with a complex quantity called impedivity (in analogy to electrical impedance). Impedivity is the sum of a real component, the resistivity, and an imaginary component, the reactivity (in analogy to reactance). The magnitude of impedivity is the square root of sum of squares of magnitudes of resistivity and reactivity. Conversely, in such cases the conductivity must be expressed as a complex number (or even as a matrix of complex numbers, in the case of anisotropic materials) called the admittivity. Admittivity is the sum of a real component called the conductivity and an imaginary component called the susceptivity. An alternative description of the response to alternating currents uses a real (but frequency-dependent) conductivity, along with a real permittivity. 
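Several expressions referenced in this stretch, the Wiedemann–Franz ratio, the Arrhenius-type law for an intrinsic semiconductor, and the Steinhart–Hart equation, are missing their formulas. Their conventional forms are given below; the symbols follow common usage and are an assumption here, not the article's original notation.

```latex
% Wiedemann–Franz law and the Lorenz number:
\frac{\kappa}{\sigma} = L\,T,
\qquad
L = \frac{\pi^{2}}{3}\left(\frac{k_{B}}{e}\right)^{2}

% Arrhenius-type behaviour of an intrinsic semiconductor (activation energy E_A assumed):
\rho(T) = \rho_{0}\, e^{\,E_A / (k_{B} T)}

% Steinhart–Hart equation (R in ohms, T in kelvin):
\frac{1}{T} = A + B\,\ln R + C\,(\ln R)^{3}
```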
The larger the conductivity is, the more quickly the alternating-current signal is absorbed by the material (i.e., the more opaque the material is). For details, see Mathematical descriptions of opacity. Resistance versus resistivity in complicated geometries Even if the material's resistivity is known, calculating the resistance of something made from it may, in some cases, be much more complicated than the formula above. One example is spreading resistance profiling, where the material is inhomogeneous (different resistivity in different places), and the exact paths of current flow are not obvious. In cases like this, the formulas must be replaced with where and are now vector fields. This equation, along with the continuity equation for and the Poisson's equation for , form a set of partial differential equations. In special cases, an exact or approximate solution to these equations can be worked out by hand, but for very accurate answers in complex cases, computer methods like finite element analysis may be required. Resistivity-density product In some applications where the weight of an item is very important, the product of resistivity and density is more important than absolute low resistivity – it is often possible to make the conductor thicker to make up for a higher resistivity; and then a low-resistivity-density-product material (or equivalently a high conductivity-to-density ratio) is desirable. For example, for long-distance overhead power lines, aluminium is frequently used rather than copper (Cu) because it is lighter for the same conductance. Silver, although it is the least resistive metal known, has a high density and performs similarly to copper by this measure, but is much more expensive. Calcium and the alkali metals have the best resistivity-density products, but are rarely used for conductors due to their high reactivity with water and oxygen (and lack of physical strength). Aluminium is far more stable. Toxicity excludes the choice of beryllium. (Pure beryllium is also brittle.) Thus, aluminium is usually the metal of choice when the weight or cost of a conductor is the driving consideration. History John Walsh and the conductivity of a vacuum In a 1774 letter to Dutch-born British scientist Jan Ingenhousz, Benjamin Franklin relates an experiment by another British scientist, John Walsh, that purportedly showed this astonishing fact: Although rarified air conducts electricity better than common air, a vacuum does not conduct electricity at all. However, to this statement a note (based on modern knowledge) was added by the editors—at the American Philosophical Society and Yale University—of the webpage hosting the letter: See also Charge transport mechanisms Chemiresistor Classification of materials based on permittivity Conductivity near the percolation threshold Contact resistance Electrical resistivities of the elements (data page) Electrical resistivity tomography Sheet resistance SI electromagnetism units Skin effect Spitzer resistivity Dielectric strength Notes References Further reading Measuring Electrical Resistivity and Conductivity External links Comparison of the electrical conductivity of various elements in WolframAlpha https://edu-physics.com/2021/01/07/resistivity-of-the-material-of-a-wire-physics-practical/ Physical quantities Materials science
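Referring back to the resistivity–density product discussed earlier in this article, the sketch below compares the mass of aluminium, copper and silver conductors sized for the same length and the same total resistance. The resistivity and density figures are commonly quoted room-temperature values and, like the chosen length and resistance, are assumptions of the example.

```python
# Illustration of the resistivity-density trade-off discussed above.
# For a conductor of fixed length L and target resistance R:
#   cross-section A = rho * L / R, so mass = A * L * density = (rho * density) * L^2 / R.
# At equal conductance, mass therefore scales with the resistivity-density product.

MATERIALS = {   # (resistivity [ohm*m], density [kg/m^3]) -- typical quoted values (assumed)
    "copper":    (1.68e-8, 8960.0),
    "aluminium": (2.65e-8, 2700.0),
    "silver":    (1.59e-8, 10490.0),
}

def conductor_mass(material: str, length_m: float, resistance_ohm: float) -> float:
    rho, density = MATERIALS[material]
    return rho * density * length_m**2 / resistance_ohm

if __name__ == "__main__":
    L, R = 1000.0, 1.0   # 1 km of conductor at 1 ohm total, assumed
    for name in MATERIALS:
        print(f"{name:9s}: {conductor_mass(name, L, R):8.1f} kg")
    # Aluminium comes out at roughly half the mass of copper for the same conductance,
    # which is why it is favoured for overhead lines despite its higher resistivity.
```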
Electrical resistivity and conductivity
[ "Physics", "Materials_science", "Mathematics", "Engineering" ]
6,199
[ "Physical phenomena", "Applied and interdisciplinary physics", "Physical quantities", "Quantity", "Materials science", "nan", "Wikipedia categories named after physical quantities", "Physical properties", "Electrical resistance and conductance" ]
61,589
https://en.wikipedia.org/wiki/Access-control%20list
In computer security, an access-control list (ACL) is a list of permissions associated with a system resource (object or facility). An ACL specifies which users or system processes are granted access to resources, as well as what operations are allowed on given resources. Each entry in a typical ACL specifies a subject and an operation. For instance, If a file object has an ACL that contains , this would give Alice permission to read and write the file and give Bob permission only to read it. If the RACF profile CONSOLE CLASS(TSOAUTH) has an ACL that contains , this would give ALICE permission to use the TSO CONSOLE command. Implementations Many kinds of operating systems implement ACLs or have a historical implementation; the first implementation of ACLs was in the filesystem of Multics in 1965. Filesystem ACLs A filesystem ACL is a data structure (usually a table) containing entries that specify individual user or group rights to specific system objects such as programs, processes, or files. These entries are known as access-control entries (ACEs) in the Microsoft Windows NT, OpenVMS, and Unix-like operating systems such as Linux, macOS, and Solaris. Each accessible object contains an identifier to its ACL. The privileges or permissions determine specific access rights, such as whether a user can read from, write to, or execute an object. In some implementations, an ACE can control whether or not a user, or group of users, may alter the ACL on an object. One of the first operating systems to provide filesystem ACLs was Multics. PRIMOS featured ACLs at least as early as 1984. In the 1990s the ACL and RBAC models were extensively tested and used to administer file permissions. POSIX ACL POSIX 1003.1e/1003.2c working group made an effort to standardize ACLs, resulting in what is now known as "POSIX.1e ACL" or simply "POSIX ACL". The POSIX.1e/POSIX.2c drafts were withdrawn in 1997 due to participants losing interest for funding the project and turning to more powerful alternatives such as NFSv4 ACL. , no live sources of the draft could be found on the Internet, but it can still be found in the Internet Archive. Most of the Unix and Unix-like operating systems (e.g. Linux since 2.5.46 or November 2002, FreeBSD, or Solaris) support POSIX.1e ACLs (not necessarily draft 17). ACLs are usually stored in the extended attributes of a file on these systems. NFSv4 ACL NFSv4 ACLs are much more powerful than POSIX draft ACLs. Unlike draft POSIX ACLs, NFSv4 ACLs are defined by an actually published standard, as part of the Network File System. NFSv4 ACLs are supported by many Unix and Unix-like operating systems. Examples include AIX, FreeBSD, Mac OS X beginning with version 10.4 ("Tiger"), or Solaris with ZFS filesystem, support NFSv4 ACLs, which are part of the NFSv4 standard. There are two experimental implementations of NFSv4 ACLs for Linux: NFSv4 ACLs support for Ext3 filesystem and the more recent Richacls, which brings NFSv4 ACLs support for Ext4 filesystem. As with POSIX ACLs, NFSv4 ACLs are usually stored as extended attributes on Unix-like systems. NFSv4 ACLs are organized nearly identically to the Windows NT ACLs used in NTFS. NFSv4.1 ACLs are a superset of both NT ACLs and POSIX draft ACLs. Samba supports saving the NT ACLs of SMB-shared files in many ways, one of which is as NFSv4-encoded ACLs. Active Directory ACLs Microsoft's Active Directory service implements an LDAP server that store and disseminate configuration information about users and computers in a domain. 
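The file-ACL example near the top of this article (Alice may read and write, Bob may only read) can be made concrete with a small toy model. This is a minimal sketch of the general idea of access-control entries, not the API of any particular operating system; the class and function names are invented for illustration.

```python
# Minimal toy model of a filesystem-style ACL (illustrative only; not any OS's real API).
from dataclasses import dataclass, field

@dataclass
class ACE:
    """Access-control entry: one subject and the operations granted to it."""
    subject: str
    operations: frozenset

@dataclass
class FileObject:
    name: str
    acl: list = field(default_factory=list)

def is_allowed(obj: FileObject, subject: str, operation: str) -> bool:
    """Grant access only if some ACE for this subject lists the operation."""
    return any(ace.subject == subject and operation in ace.operations
               for ace in obj.acl)

if __name__ == "__main__":
    report = FileObject("report.txt", acl=[
        ACE("Alice", frozenset({"read", "write"})),
        ACE("Bob",   frozenset({"read"})),
    ])
    print(is_allowed(report, "Alice", "write"))  # True
    print(is_allowed(report, "Bob", "write"))    # False -- Bob may only read
    print(is_allowed(report, "Carol", "read"))   # False -- no entry, default deny
```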
Active Directory extends the LDAP specification by adding the same type of access-control list mechanism as Windows NT uses for the NTFS filesystem. Windows 2000 then extended the syntax for access-control entries such that they could not only grant or deny access to entire LDAP objects, but also to individual attributes within these objects. Networking ACLs On some types of proprietary computer hardware (in particular, routers and switches), an access-control list provides rules that are applied to port numbers or IP addresses that are available on a host or other layer 3, each with a list of hosts and/or networks permitted to use the service. Although it is additionally possible to configure access-control lists based on network domain names, this is a questionable idea because individual TCP, UDP, and ICMP headers do not contain domain names. Consequently, the device enforcing the access-control list must separately resolve names to numeric addresses. This presents an additional attack surface for an attacker who is seeking to compromise security of the system which the access-control list is protecting. Both individual servers and routers can have network ACLs. Access-control lists can generally be configured to control both inbound and outbound traffic, and in this context they are similar to firewalls. Like firewalls, ACLs could be subject to security regulations and standards such as PCI DSS. SQL implementations ACL algorithms have been ported to SQL and to relational database systems. Many "modern" (2000s and 2010s) SQL-based systems, like enterprise resource planning and content management systems, have used ACL models in their administration modules. Comparing with RBAC The main alternative to the ACL model is the role-based access-control (RBAC) model. A "minimal RBAC model", RBACm, can be compared with an ACL mechanism, ACLg, where only groups are permitted as entries in the ACL. Barkley (1997) showed that RBACm and ACLg are equivalent. In modern SQL implementations, ACLs also manage groups and inheritance in a hierarchy of groups. So "modern ACLs" can express all that RBAC express and are notably powerful (compared to "old ACLs") in their ability to express access-control policy in terms of the way in which administrators view organizations. For data interchange, and for "high-level comparisons", ACL data can be translated to XACML. See also Access token manager Cacls Capability-based security C-list Confused deputy problem DACL Extended file attributes File-system permissions Privilege (computing) Role-based access control (RBAC) Notes References Further reading Computer access control
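The networking ACLs described above can be illustrated with a simplified first-match-wins rule list. The sketch below is an illustration of the general mechanism, not any vendor's configuration syntax, and the example addresses and ports are assumptions.

```python
# Simplified first-match-wins network ACL, in the spirit of the router/switch ACLs
# described above (illustrative only; not any vendor's configuration syntax).
import ipaddress
from typing import NamedTuple, Optional

class Rule(NamedTuple):
    action: str           # "permit" or "deny"
    network: str          # CIDR the source address must fall in
    port: Optional[int]   # destination port, or None for any

def evaluate(rules, src_ip: str, dst_port: int, default: str = "deny") -> str:
    """Return the action of the first rule matching the packet, else the default."""
    addr = ipaddress.ip_address(src_ip)
    for rule in rules:
        in_net = addr in ipaddress.ip_network(rule.network)
        port_ok = rule.port is None or rule.port == dst_port
        if in_net and port_ok:
            return rule.action
    return default   # implicit deny, as on most real devices

if __name__ == "__main__":
    acl = [
        Rule("permit", "10.0.0.0/8", 22),     # SSH from the internal network
        Rule("deny",   "10.0.0.0/8", None),   # everything else from inside
        Rule("permit", "0.0.0.0/0", 443),     # HTTPS from anywhere
    ]
    print(evaluate(acl, "10.1.2.3", 22))      # permit
    print(evaluate(acl, "10.1.2.3", 80))      # deny
    print(evaluate(acl, "203.0.113.9", 443))  # permit
```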
Access-control list
[ "Engineering" ]
1,423
[ "Cybersecurity engineering", "Computer access control" ]
61,622
https://en.wikipedia.org/wiki/List%20of%20architectural%20styles
An architectural style is characterized by the features that make a building or other structure notable and historically identifiable. A style may include such elements as form, method of construction, building materials, and regional character. Most architecture can be classified as a chronology of styles which change over time reflecting changing fashions, beliefs and religions, or the emergence of new ideas, technology, or materials which make new styles possible. Styles therefore emerge from the history of a society and are documented in the subject of architectural history. At any time several styles may be fashionable, and when a style changes it usually does so gradually, as architects learn and adapt to new ideas. Styles often spread to other places, so that the style at its source continues to develop in new ways while other countries follow with their own twist. A style may also spread through colonialism, either by foreign colonies learning from their home country, or by settlers moving to a new land. After a style has gone out of fashion, there are often revivals and re-interpretations. For instance, classicism has been revived many times and found new life as neoclassicism. Each time it is revived, it is different. Vernacular architecture works slightly differently and is listed separately. It is the native method of construction used by local people, usually using labour-intensive methods and local materials, and usually for small structures such as rural cottages. It varies from region to region even within a country, and takes little account of national styles or technology. As western society has developed, vernacular styles have mostly become outmoded by new technology and national building standards. Chronology of styles Prehistoric Early civilizations developed, often independently, in scattered locations around the globe. The architecture was often a mixture of styles in timber cut from local forests and stone hewn from local rocks. Most of the timber has gone, although the earthworks remain. Impressively, massive stone structures have survived for years. Neolithic 10,000–3000 BC Ancient Americas Mesoamerican Mezcala Talud-tablero Western Native Americans Mediterranean and Middle-East civilizations Phoenician 3000–500 BC Ancient Egyptian 3000 BC–373 BC Minoan 3000?+ BC (Crete) Knossos (Crete) Mycenaean 1600–1100 BC (Greece) Ancient Near East and Mesopotamia Sumerian 5300–2000 BC Elam Iranian/Persian Ancient Persian Achaemenid Sassanid Iranian, c. 8th century+ (Iran) Persian Garden Style (Iran) Classical Style – Hayat Formal Style – Meidān (public) or Charbagh (private) Casual Style – Park (public) or Bāgh (private) Paradise garden Ancient Asian Classical Era in South Asia Karnataka Kerala Tamil Nadu Dravidian architecture (South Indian temple style) Buddhist Temple East Asian Ancient Chinese Japanese Korean Ancient South Asian Achitecture Harappan (7000–1900 BCE) Dravidian architecture Tamil Nadu (Early Tamil Sangam Era) Classical Antiquity The architecture of Ancient Greece and Ancient Rome, derived from the ancient Mediterranean civilizations such as at Knossos on Crete. They developed highly refined systems for proportions and style, using mathematics and geometry. Ancient Greek 776–265 BC Roman 753 BC–663 AD Etruscan 700–200 BC Classical 600 BC–323 AD Herodian 37–4 BC (Judea) Early Christian 100–500 Byzantine 527–1520 Middle Ages The European Early Middle Ages are generally taken to run from the end of the Roman Empire, around 400 AD, to around 1000 AD. 
During this period, Christianity made a significant impact on European culture. Early Medieval Europe Latin Armenian 4th–16th centuries Anglo-Saxon 450s–1066 (England) Bulgarian from 681 First Bulgarian Empire 681–1018 Pre-Romanesque c. 700–1000 (Merovingian and Carolingian empires) Iberian pre-Romanesque Merovingian 5th–8th centuries (France, Germany, Italy and neighbouring locations) Visigothic 5th–8th centuries (Spain and Portugal) Asturian 711–910 (North Spain, North Portugal) Carolingian 780s–9th century (mostly France, Germany) Ottonian 950s–1050s (mostly Germany, also considered Early Romanesque) Repoblación 880s–11th century (Spain) Medieval Europe The dominance of the Church over everyday life was expressed in grand spiritual designs which emphasized piety and sobriety. The Romanesque style was simple and austere. The Gothic style heightened the effect with heavenly spires, pointed arches and religious carvings. Medieval Byzantine Late Byzantine architecture before 1520 (see above) Kievan Rus' architecture 988–1237 Tarnovo Artistic School 12th–14th century (Bulgaria) Rashka School 12th–15th centuries (Serbian principalities) Morava School (Serbian principalities/Bulgaria) Romanesque Pre-Romanesque (see above) First Romanesque 1000–? (France, Italy, Spain) (including "Lombard Romanesque" in Italy) Romanesque 1000–1300 Norman 1074–1250 (Normandy, UK, Ireland, South Italy and Sicily) Norman–Arab–Byzantine 1071–1200 (Sicily, Malta, South Italy) Cistercian Romanesque style c. 1120–c. 1240 (Europe) Timber styles Stave churches, oldest 845(d) in England, in Norway one 11th century, several 12th century, some with Romanesque elements Timber frame styles, mostly Gothic or later (UK, France, Germany, the Netherlands) Gothic 1135/40–1520 Gothic Cistercian Gothic 1138–15th century (various European countries) Angevin Gothic or Plantagenet Style since 1148 (western France) Early English Period c. 1190–c. 1250 Gotico Angioiano since 1266 (southern Italy) Decorated Period c. 1290–c. 1350 Perpendicular Period c. 1350–c. 1550 Rayonnant Gothic 1240–c. 1350 (France, Germany, Central Europe) Venetian Gothic 14th–15th centuries (Venice in Italy) Spanish Gothic Mudéjar Style c. 1200–1700 (Spain, Portugal, Latin America) Aragonese Mudéjar c. 
1200–1700 (Aragon in Spain) Isabelline Gothic 1474–1505 (reign) (Spain) Plateresque 1490–1560 (Spain & colonies, bridging Gothic and Renaissance styles) Brick Gothic mid 13th to 16th century (Germany, Netherlands, Flanders, Poland, northern Europe) Brabantine Gothic (Belgium and Netherlands) 14th century Flamboyant Gothic 1400–1500 (Spain, France, Portugal) Manueline 1495–1521 (Portugal and colonies) Asian architecture During its Late classical and Medieval ages Japanese Shinden-zukuri (Heian Period Japan) Chinese Songnic architecture Korean Hanok South Asia Bengalese Karnataka Kerala Tamil Nadu Pakistani Khmer Indonesian Myanmar architecture Late Dravidian temple styles Badami Chalukya or "Deccan architecture" (450–700CE) Rashtrakuta 750–983 (Central and South India) Western Chalukya or Gadag (1050-1200CE) Hoysala (900–1300CE) Vijayanagara 1336–1565 (South India) ( Dravidian influenced) South Asian Architecture styles Mauryan (321–185 BC) Kalinga Architecture ( present day Orissa and Andhra Pradesh) Rekha Deula Pidha Deula Khakhara Deula Hemadpanthi (1200–1270 CE) (Maharashtra) Sikh architecture Bengal temple architecture: 1400 to present Nagara Style Māru-Gurjara architecture 900 to present (Rajasthan and Gujarat) Vesara Style (Dravidian fusion styles) Badami Chalukya architecture Islamic Architecture 620–1918 Central Styles (Multi-Regional) Prophetic Era – based in Medina (c. 620–630) Rashidi Period – based in Medina (c. 630–660) Umayyad architecture – based in Damascus (c. 660–750) Abbasid architecture – based in Baghdad (c. 750–1256) Mamluk architecture – based in Cairo (c. 1256–1517) Ottoman architecture – based in Istanbul (c. 1517–1918) Regional Styles Egypt Early Islamic architecture (Rashidi + Umayyad) (641–750) Abbasid architecture (750–954) Fatimid architecture (954–1170) Ayyubid architecture (1174–1250) Mamluk architecture (1254–1517) Ottoman architecture (1517–1820) North Africa (Maghrib) The Umayyads (705–750) The Abbasid Era (750–909) The Fatimids (909–1048) The Amazigh Dynasties (1048–1550) Zirids 1048–1148 (Middle Maghreb) Almoravids 1040–1147 (Far Maghreb) Almohads 1121–1269 (Far Maghreb) Hafsids 1229–1574 (Near and Middle Maghreb) Marinids 1244–1465 (Middle and Far Maghreb) Zayyanids 1235–1550 (Middle Maghreb) Ottoman Rule 1550–1830 (Near and Middle Maghreb) Local Dynasties 1549–present (Far Maghreb) Islamic Spain Umayyad architecture (756–1031) Taifa Kingdoms-1 (1031–1090) Almoravid architecture (1090–1147) Taifa Kingdoms-2 (1140–1203) Almohad architecture (1147–1238), Taifa Kingdoms-3 (1232–1492) Granada architecture (1287–1492) Persia and Central Asia Khurasani architecture (Late 7th–10th century) Razi Style (10th–13th century) Samanid Period (10th c.) Ghaznawid Period (11th c.) Saljuk Period (11th–12th c.) Mongol Period (13th c.) Timurid Style (14th–16th c.) Isfahani Style (17th–19th c.) Islamic (influenced) architecture in South Asia Indo-Islamic architecture (1204–1857) Mughal architecture (1526–1707) Turkey Anatolian Seljuk architecture (1071–1299) Ottoman architecture (1299–1922) First national architectural movement (1908–1940) Pre-Columbian Indigenous American Styles Aztec (ca. 14th century – 1521) Maya Pueblo Puuc Early Modern Period and European Colonialism 1425–1660. The Renaissance began in Italy and spread through Europe, rebelling against the all-powerful Church, by placing Man at the centre of his world instead of God. 
The Gothic spires and pointed arches were replaced by classical domes and rounded arches, with comfortable spaces and entertaining details, in a celebration of humanity. The Baroque style was a florid development of this 200 years later, largely by the Catholic Church to restate its religious values. Renaissance c. 1425–1600 (Europe, American colonies) Renaissance Central European Renaissance Polish Renaissance French Renaissance Eastern European Renaissance Palladian 1516–1580 (Venezia, Italy; revived in UK) Mannerism 1520–1600 Polish Mannerism 1550–1650 Brâncovenesc style late 17th and early 18th centuries Eastern Orthodox Church 1400?+ (Southeast and Eastern Europe) France Henry II 1530–1590 Louis XIII 1601–1643 United Kingdom Tudor 1485–1603 Elizabethan 1480–1620? Jacobean 1580–1660 Spain and Portugal Asturian pre-Romanesque 711 - 910 (Kingdom of Asturias) Mudéjar Art 13th and 16th centuries Spanish Renaissance 15th and 16th centuries Plateresque continued from Spanish Gothic – 1560 (Spain and colonies, Low Countries) Herrerian 1550–1650 (Spain and colonies, primarily in Castille and the surroundings of Madrid) Barroque Churrigueresque 17th – 1750 (Hispanic countries, primarily in Spain and Mexico) Modernisme 1880s - 1910s (Primarily Catalonia, but also in Valencian Community, Majorca Island and Melilla) Portuguese Renaissance Portuguese Plain style 1580–1640 (Portugal and colonies) Colonial Portuguese Colonial c. 1480–1820 (Brazil, India, Macao, Africa, East Timor) Spanish Colonial 1520s – c. 1820s (New World, East Indies, other colonies) Cape Dutch 1652–1802 (Cape Colony, South Africa) Netherlands Indies 1609–1949 Old Indies 18th century-19th century Indies Empire mid-18th century–late 19th century New Indies late 19th century–20th century (mixed architecture) Dutch Colonial 1615–1674 (Treaty of Westminster) (New England) Chilotan 1600+ (Chiloé and southern Chile) First Period 1625–1725 pre-American vernacular Architecture of the California missions 1769–1823, (California, US) French Colonial Colonial Georgian architecture Baroque 1600–1800, up to 1900 Andean Baroque, 1680–1780 (Viceroyalty of Peru) Baroque c. 1600–1750 (Europe, the Americas) English Baroque 1666 (Great Fire) – 1713 (Treaty of Utrecht) Spanish Baroque c. 1600–1760 Churrigueresque, 1660s–1750s (Spain & New World), revival 1915+ (southwest US, Hawaii) Earthquake Baroque, 17th–18th centuries (Philippines) Maltese Baroque c. 1635–1798 New Spanish Baroque, mid-17th-early-18th centuries (New Spain) French Baroque c. 1650–1789 Dutch Baroque c. 1650–1700 Sicilian Baroque 1693 earthquake – c. 1745 Portuguese Joanine baroque c. 1700–1750 Russian Baroque (c. 1680–1750) Naryshkin Baroque c. 1690–1720 (Moscow, Russian Empire) Petrine Baroque c. 1700–1745 (Saint Petersburg, Russian Empire) Elizabethan Baroque 1736–1762 (Russian Empire) Ukrainian Baroque late 17th–18th centuries (Ukrainian lands) Rococo c. 1720–1789 (France, Germany, Austria, Italy, Russia, Spain) Asian architecture contemporary with Renaissance and post-Renaissance Europe Japanese Shoin-zukuri (1560s–1860s) Sukiya-zukuri (1530s–present) Minka (Japanese commoner or folk architecture) Gassho-zukuri (Edo period and later) Honmune-zukuri (Edo period and later) Imperial Crown Style (1919–1945) Giyōfū architecture (1800s) Indian Indo-Islamic Mughal 1540- 1860 CE (Present day India, Pakistan, Bangladesh) Akbari Mughal Garden Style Sharqi aka Janpur Style Late Modern Period and the Industrial Revolution Neoclassicism 1720–1837 and onward. 
A time often depicted as a rural idyll by the great painters, but in fact was a hive of early industrial activity, with small kilns and workshops springing up wherever materials could be mined or manufactured. After the Renaissance, neoclassical forms were developed and refined into new styles for public buildings and the gentry. New Cooperism Neoclassical Neoclassical c. 1715–1820 Beaux-Arts 1670+ (France) and 1880 (US) Georgian 1720–1840s (UK, US) Jamaican Georgian architecture c. 1750 – c. 1850 (Jamaica) American Colonial 1720–1780s (US) Pombaline style 1755 – c. 1860 (Lisbon in Portugal) Josephinischer Stil 1760–1780/90 (Austria) Adam style 1760–1795 (England, Scotland, Russia, US) Federal 1780–1830 (US) Empire 1804–1830, revival 1870 (Europe, US) Regency 1811–1830 (UK) Antebellum 1812–1861 (Southern United States) Palazzo Style 1814–1930? (Europe, Australia, US) Neo-Palladian Jeffersonian 1790s–1830s (Virginia in US) American Empire 1810 Greek Revival architecture Rundbogenstil 1835–1900 (Germany) Neo-Grec 1845–65 (UK, US, France) Nordic Classicism 1910–30 (Norway, Sweden, Denmark & Finland) Polish Neoclassicism (Poland) New Classical architecture 20th/21st century (global) Temple 1832+ (global) Revivalism and Orientalism Late 19th and early 20th centuries. The Victorian Era was a time of giant leaps forward in technology and society, such as iron bridges, aqueducts, sewer systems, roads, canals, trains, and factories. As engineers, inventors, and businessmen they reshaped much of the British Empire, including the UK, India, Australia, South Africa, and Canada, and influenced Europe and the United States. Architecturally, they were revivalists who modified old styles to suit new purposes. Revivalism Resort architecture (Germany) Victorian 1837–1901 (UK) See also San Francisco architecture Edwardian 1901–1910 (UK) Revivals started before the Victorian Era Gothic Revival 1740s+ (UK, US, Europe) Scots Baronial (UK) Italianate 1802–1890 (UK, Europe, US) Egyptian Revival 1809–1820s, 1840s, 1920s (Europe, US) Biedermeier 1815–1848 (Central Europe) Russian Revival 1826–1917 (Russian Empire, Germany, Middle Asia) Russo-Byzantine style 1861–1917 (Russian Empire, Balkans) Russian neoclassical revival 1900–1920 (Russian Empire) Victorian revivals Renaissance Revival 1840–1890 (UK) Timber frame revivals in various styles (Europe) Black-and-white Revival 1811+ (UK especially Chester) Jacobethan 1830–1870 (UK) Tudorbethan aka Mock Tudor 1835–1885+ (UK) Baroque Revival aka Neo-Baroque 1840?- Bristol Byzantine 1850–1880 Edwardian Baroque 1901–1922 (UK & British Empire) Second Empire 1855–1880 (France, UK, US, Canada, Australia) Napoleon III style 1852–1870 (Paris, France) Queen Anne Style 1870–1910s (UK, US) Romanian Revival 1884-1940s (Romania) Orientalism Orientalism Neo-Mudéjar 1880s–1920s (Spain, Portugal, Bosnia, California) Moorish Revival (US, Europe) Egyptian Revival 1920s (Europe, US; see above) Mayan Revival 1920–1930s (US) Indo-Saracenic Revival or Indo-Gothic, Mughal-Gothic, Neo-Mughal late 19th century ( also influenced by British India, British Raj) Revivals in North America Romanesque Revival 1840–1930s (US) Gothic Revival (see above) Carpenter Gothic 1870+ (US) High Victorian Gothic (English-speaking world) Collegiate Gothic, 1910–1960 (US) Stick Style 1860–1890+ (US) Queen Anne Style architecture (United States) 1880–1910s (US) Eastlake Style 1879–1905 (US) Richardsonian Romanesque 1880s–1905 (US) Shingle Style 1879–1905 Neo-Byzantine 1882–1920s (US) Renaissance Revival American 
Renaissance Châteauesque 1887–1930s (Canada, US, Hungary) Canadian Chateau 1880s–1920s (Canada) Mediterranean Revival 1890s+ (US, Latin America, Europe) Mission Revival 1894–1936; (California, southwest US) Pueblo Revival 1898–1930+ (southwest US) Colonial Revival 1890s+ Dutch Colonial Revival c. 1900 (New England) Spanish Colonial Revival 1915+ (Mexico, California, Hawaii, Florida, southwest US) Beaux-Arts Revival 1880+ (US, Canada), 1920+ (Australia) City Beautiful 1890–20th century (US) Territorial Revival architecture 1930+ Other late 19th century styles Australian styles Queenslander 1840s–1960s (Australian) Federation 1890–1920 (Australian) Heimatstil 1870–1900 (Austria, Germany, Switzerland Neoclásico Isabelino 1843–1897 (Ponce, Puerto Rico) Neo-Manueline 1840s–1910s (Portugal, Brazil, Portuguese colonies) Dragestil 1880s–1910s (Norway) Palazzo style architecture Neo-Plateresque and Monterrey Style 19th-early 20th centuries (Spain, Mexico) Rural styles Swiss chalet style 1840s–1920s+ (Scandinavia, Austria, Germany, later global) Adirondack 1850s (New York, US) National Park Service rustic aka Parkitecture 1903+ (US) Western false front (Western United States) Reactions to the Industrial Revolution Industrial Industrial, 1760–present (worldwide) Arts and Crafts in Europe Arts and Crafts 1880–1910 (UK) Art Nouveau aka Jugendstil 1885–1910 Modernisme 1888–1911 (Catalan Art Nouveau) Glasgow Style 1890–1910 (Glasgow, Scotland) Vienna Secession 1897–1905 (Austrian Art Nouveau) National Romantic style 1900–1923? (Norway, Sweden, Denmark and Finland) Arts and Crafts in the US American Craftsman, aka American Arts and Crafts 1890s–1930 (US) Prairie Style 1900–1917 (US) American Foursquare mid-1890s – late 1930s (US) California Bungalow 1910–1939 (US, Australia, then global) Modernism and other styles contemporary with modernism 1880 onwards. The Industrial Revolution had brought steel, plate glass, and mass-produced components. These enabled a brave new world of bold structural frames, with clean lines and plain or shiny surfaces. In the early stages, a popular motto was "decoration is a crime". In the Eastern Bloc the Communists rejected the Western Bloc's 'decadent' ways, and modernism developed in a markedly more bureaucratic, sombre, and monumental fashion. Avant-garde Parametricism 2008+ Russian avant-garde 1890–1930 (Russian Empire/Soviet Union) Chicago School 1880–1920, 1940s–1960s (US) Functionalism c. 1900 – 1930s (Europe, US) Futurism 1909 (Europe) Expressionism 1910 – c. 1924 Amsterdam School 1912–1924 (Netherlands) Organic architecture New Objectivity 1920–1939 (Italy, Germany, Holland, Budapest) Rationalism 1920s–1930s (Italy) Bauhaus 1919–1930+ (Germany, Northern Europe) De Stijl 1920s (Holland, Europe) Moderne 1925+ (global) Art Deco 1925–1940s (global) List of Art Deco architecture Streamline Moderne 1930–1937 Modernism 1927–1960s International Style 1930+ (Europe, US) Usonian 1936–1940s (US) Modernism under communism Constructivism 1925–1932 (USSR) Postconstructivism 1932–1941 (USSR) Stalinist 1933–1955 (USSR) Fascist/Nazi Fascist architecture Nazi 1933–1944 (Germany) Post-Second World War 1945– Modernism (continued) International Style (continued) New towns 1946–1968+ (UK, global) Mid-century modern 1950s (California, etc.) 
Googie 1950s (US) Brutalism 1950s–1970s Structuralism 1950s–1970s Megastructures 1960s Metabolist 1959 (Japan) Danish Functionalism 1960s (Denmark) Structural Expressionism aka Hi-Tech 1980s+ Other 20th century styles Heimatschutz Architecture 1900–1940 (Austria, Germany) Ponce Creole 1895–1920 (Ponce in Puerto Rico) Heliopolis style 1905 – c. 1935 (Egypt) Mar del Plata style 1935–1950 (Mar del Plata, Argentina) Minimal Traditional 1930s–1940s (US) Soft Portuguese 1940–1955 (Portugal & colonies) Ranch-style 1940s–1970s (US) Jengki style (Indonesia) Postmodernism and early 21st century styles Postmodernism 1945+ (US, UK) Bowellism Shed Style Arcology 1970s+ (Europe) Deconstructivism 1982+ (Europe, US, Far East) Critical regionalism 1983+ Blobitecture 2003+ High-tech 1970s+ Interactive architecture 2000+ Sustainable architecture 2000+ Earthship 1980+ (Started in US, now global) Green building 2000+ Natural building 2000+ Neo-Andean 2005+ Neo-futurism late 1960s-early 21st century New Classical Architecture 1980+ New London Vernacular 2009+ Berlin Style 1990s+ Mass timber 2010s+ Fortified styles Fortification 6800 BC+ Ringfort 800 BC – 400 AD Dzong 17th century+ Star fort 1530–1800? Polygonal fort 1850?- Vernacular styles Vernacular architecture Generic methods Natural building Ice – Igloo, quinzhee Earth – Cob house, sod house, adobe, mudbrick house, rammed earth Timber – Log cabin, log house, Carpenter Gothic, roundhouse, stilt house Nomadic structures – Yaranga, bender tent Temporary structures – Quonset hut, Nissen hut, prefabricated home Underground – Underground living, rock-cut architecture, monolithic church, pit-house Modern low-energy systems – Straw-bale construction, earthbag construction, rice-hull bagwall construction, earthship, earth house Various styles – Longhouse European European Arctic (North Norway and Sweden, Finland, North Russia) – Sami lavvu, Sami goahti Northwest Europe (Norway, Sweden, Fresia, Jutland, Denmark, North Poland, UK, Iceland) – Norse architecture, heathen hofs, Viking ring fortress, fogou, souterrain, Grubenhaus (also known as Grubhouse or Grubhut) Central and Eastern Europe – Burdei, zemlyanka Bulgaria – Rock-hewn Churches of Ivanovo Estonia Germany – Black Forest house, Swiss chalet style, Gulf house (aka East Frisian house), Geestharden house (aka Cimbrian house, Schleswig house), Haubarg, Low German house (aka Low Saxon house), Middle German house, Reed house, Seaside resort house, Ständerhaus, Uthland-Frisian house Netherlands – Frisian farmhouse, Old Frisian longhouse, Bildts farmhouse Iceland – Turf houses Ireland – Clochán, Crannog Italy – Trullo Lithuania – Kaunas modernism, Lithuanian folk architecture, Polish-Lithuanian wooden synagogues Norway – Architecture of Norway: Post church, Palisade church, Stave church, Norwegian Turf house, Vernacular architecture in Norway, Rorbu, Dragestil, also National Romantic style, Swiss chalet style and Nordic Classicism buildings Poland – Zakopane, Polish-Lithuanian wooden synagogues, wooden churches of Southern Lesser Poland, Upper Lusatian house Romania – Carpathian vernacular, wooden churches of Maramureș Russia – Dacha Scotland – Medieval turf building in Cronberry, blackhouses Slovakia – Wooden churches of the Slovak Carpathians Spain – Asturian teito, Asturian hórreo, Gallician palloza Ukraine – Wooden churches United Kingdom – Dartmoor longhouse, Neolithic long house, palisade church, mid-20th-century system-built houses Scotland – Broch, Atlantic roundhouse, crannog, dun African Central and South 
African countries – Rondavel, Xhosa and Zulu Architecture, Zimbabwean Architecture, Sotho-Tswana Architecture, Zulu and Nguni Architecture, and Madagascan Architecture Dutch Colonial, Cape Dutch Asian China Yaodong Siheyuan Tulou Shanxi Hokkien Cantonese Hui Hakka Jiangxi Sichuan Pang uk (Architecture of Hong Kong) India – Rock-cut, Toda hut Indonesia – Rumah adat Iran, Turkey – Caravanserai Iran – Yakhchal Israel – Rock-cut tombs Japan – Minka Mongolia – Yurt Papua New Guinea – Papua New Guinea stilt house Philippines – Bahay kubo, Jin-jin, Torogan, Bale Russia – Siberian chum Thailand – Thai stilt house Myanmar – Shwenandaw Monastery Australasian Australia, New Zealand – slab hut Australia – Aborigine humpy Alphabetical listing Examples of styles See also National Register of Historic Places architectural style categories Architectural design values Feminism and modern architecture List of house styles Sacred architecture Architecture of cathedrals and great churches Synagogue architecture Timeline of architecture Timeline of architectural styles Parametricism References Lewis, Philippa; Gillian Darley (1986). Dictionary of Ornament, NY: Pantheon Baker, John Milnes, AIA (1994) American House Styles, NY: Norton Further reading Hamlin Alfred Dwight Foster, History of Architectural Styles, BiblioBazaar, 2009 Carson Dunlop, Architectural Styles, Dearborn Real Estate, 2003 Herbert Pothorn, A guide to architectural styles, Phaidon, 1983 External links Victoria & Albert Museum Microsite on Introduction to Architectural Styles Architectural design Architectural history
List of architectural styles
[ "Engineering" ]
5,733
[ "Architectural history", "Architecture", "Architectural design", "Design", "Architecture lists" ]
61,626
https://en.wikipedia.org/wiki/Gersonides
Levi ben Gershon (1288 – 20 April 1344), better known by his Graecized name as Gersonides, or by his Latinized name Magister Leo Hebraeus, or in Hebrew by the abbreviation of first letters as RaLBaG, was a medieval French Jewish philosopher, Talmudist, mathematician, physician and astronomer/astrologer. He was born at Bagnols in Languedoc, France. According to Abraham Zacuto and others, he was the son of Gerson ben Solomon Catalan. Biography As in the case of the other medieval Jewish philosophers, little is known of his life. His family had been distinguished for piety and exegetical skill in Talmud, but though he was known in the Jewish community by commentaries on certain books of the Bible, he never seems to have accepted any rabbinical post. It has been suggested that the uniqueness of his opinions may have put obstacles in the way of his advancement to a higher position or office. He is known to have been at Avignon and Orange during his life, and is believed to have died in 1344, though Zacuto asserts that he died at Perpignan in 1370. Gersonides is known for his unorthodox views and rigid Aristotelianism, which eventually led him to rationalize many of the miracles in the Bible. His commentary on the Bible was sharply criticized by the most prominent scholars, such as Abarbanel, Chisdai Crescas, and Rivash, the latter accusing him of heresy and almost banning his works. Philosophical and religious works Part of his writings consist of commentaries on the portions of Aristotle then known, or rather of commentaries on the commentaries of Averroes. Some of these are printed in the early Latin editions of Aristotle's works. His most important treatise, that by which he has a place in the history of philosophy, is entitled Sefer Milhamot Ha-Shem, ("The Wars of the Lord"), and occupied twelve years in composition (1317–1329). A portion of it, containing an elaborate survey of astronomy as known to the Arabs, was translated into Latin in 1342 at the request of Pope Clement VI. The Wars of the Lord is modeled after the plan of the great work of Jewish philosophy, the Guide for the Perplexed of Maimonides. It may be regarded as a criticism of some elements of Maimonides' syncretism of Aristotelianism and rabbinic Jewish thought. Ralbag's treatise strictly adhered to Aristotelian thought. The Wars of the Lord review: 1. the doctrine of the soul, in which Gersonides defends the theory of impersonal reason as mediating between God and man, and explains the formation of the higher reason (or acquired intellect, as it was called) in humanity—his view being thoroughly realist and resembling that of Avicebron; 2. prophecy; 3. and 4. God's knowledge of facts and providence, in which is advanced the theory that God does not decide individual facts. While there is general providence for all, special providence only extends to those whose reason has been enlightened; 5. celestial substances, treating of the strange spiritual hierarchy which the Jewish philosophers of the middle ages accepted from the Neoplatonists and the pseudo-Dionysius, and also giving, along with astronomical details, much of astrological theory; and 6. creation and miracles, in respect to which Gersonides deviates widely from the position of Maimonides. Gersonides was also the author of commentaries on the Pentateuch, Joshua, Judges, I & II Samuel, I & II Kings, Proverbs, Job, Ecclesiastes, Song of Songs, Ruth, Esther, Daniel, Ezra-Nehemiah, and Chronicles. He makes reference to a commentary on Isaiah, but it is not extant. 
Views on God and omniscience In contrast to the theology held by other Jewish thinkers, Jewish theologian Louis Jacobs argues, Gersonides held that God does not have complete foreknowledge of human acts. "Gersonides, bothered by the old question of how God's foreknowledge is compatible with human freedom, holds that what God knows beforehand is all the choices open to each individual. God does not know, however, which choice the individual, in his freedom, will make." Another neoclassical Jewish proponent of self-limited omniscience was Abraham ibn Daud. "Whereas the earlier Jewish philosophers extended the omniscience of God to include the free acts of man, and had argued that human freedom of decision was not affected by God's foreknowledge of its results, Ibn Daud, evidently following Alexander of Aphrodisias, excludes human action from divine foreknowledge. God, he holds, limited his omniscience even as He limited His omnipotence in regard to human acts". The view that God does not have foreknowledge of moral decisions which was advanced by ibn Daud and Gersonides (Levi ben Gershom) is not quite as isolated as Rabbi Bleich indicates, and it enjoys the support of two highly respected Acharonim, Rabbi Yeshayahu Horowitz (Shelah haKadosh) and Rabbi Chaim ibn Attar (Or haHayim haKadosh). The former takes the views that God cannot know which moral choices people will make, but this does not impair His perfection. The latter considers that God could know the future if He wished, but deliberately refrains from using this ability in order to avoid the conflict with free will. Rabbi Yeshayahu Horowitz explained the apparent paradox of his position by citing the old question, "Can God create a rock so heavy that He cannot pick it up?" He said that we cannot accept free choice as a creation of God's, and simultaneously question its logical compatibility with omnipotence. See further discussion in Free will in Jewish thought. Views of the afterlife Gersonides posits that people's souls are composed of two parts: a material, or human, intellect; and an acquired, or agent, intellect. The material intellect is inherent in every person, and gives people the capacity to understand and learn. This material intellect is mortal, and dies with the body. However, he also posits that the soul also has an acquired intellect. This survives death, and can contain the accumulated knowledge that the person acquired during his lifetime. For Gersonides, Seymour Feldman points out, Man is immortal insofar as he attains the intellectual perfection that is open to him. This means that man becomes immortal only if and to the extent that he acquires knowledge of what he can in principle know, e.g. mathematics and the natural sciences. This knowledge survives his bodily death and constitutes his immortality. Talmudic works Gersonides was the author of the following Talmudic and halakhic works: Shaarei Tsedek (published at Leghorn, 1800): a commentary on the thirteen halachic rules of Rabbi Yishmael; Mechokek Safun, an interpretation of the aggadic material in the fifth chapter of Tractate Bava Basra; A commentary to tractate Berachos; two responsa. Only the first work is extant. Works in mathematics and astronomy/astrology Gersonides was the first to make a number of major mathematical and scientific advances, though since he wrote only in Hebrew and few of his writings were translated to other languages, his influence on non-Jewish thought was limited. 
Gersonides wrote Maaseh Hoshev in 1321 dealing with arithmetical operations including extraction of square and cube roots, various algebraic identities, certain sums including sums of consecutive integers, squares, and cubes, binomial coefficients, and simple combinatorial identities. The work is notable for its early use of proof by mathematical induction, and pioneering work in combinatorics. The title Maaseh Hoshev literally means the Work of the thinker, but it is also a pun on a biblical phrase meaning "clever work". Maaseh Hoshev is sometimes mistakenly referred to as Sefer Hamispar (The Book of Number), which is an earlier and less sophisticated work by Rabbi Abraham ben Meir ibn Ezra (1090–1167). In 1342, Gersonides wrote On Sines, Chords and Arcs, which examined trigonometry, in particular proving the sine law for plane triangles and giving five-figure sine tables. One year later, at the request of the bishop of Meaux, he wrote The Harmony of Numbers in which he considers a problem of Philippe de Vitry involving so-called harmonic numbers, which have the form 2m·3n. The problem was to characterize all pairs of harmonic numbers differing by 1. Gersonides proved that there are only four such pairs: (1,2), (2,3), (3,4) and (8,9). He is also credited to have invented the Jacob's staff, an instrument to measure the angular distance between celestial objects. It is described as consisting Gersonides observed a solar eclipse on March 3, 1337. After he had observed this event he proposed a new theory of the sun which he proceeded to test by further observations. Another eclipse observed by Gersonides was the eclipse of the Moon on 3 October 1335. He described a geometrical model for the motion of the Moon and made other astronomical observations of the Moon, Sun and planets using a camera obscura. Some of his beliefs were well wide of the truth, such as his belief that the Milky Way was on the sphere of the fixed stars and shines by the reflected light of the Sun. Gersonides was also the earliest known mathematician to have used the technique of mathematical induction in a systematic and self-conscious fashion and anticipated Galileo's error theory. The lunar crater Rabbi Levi is named after him. Gersonides believed that astrology was real, and developed a naturalistic, non-supernatural explanation of how it works. Julius Guttman explained that for Gersonides, astrology was: Estimation of stellar distances and refutation of Ptolemy's model Gersonides appears to be the only astronomer before modern times to have surmised that the fixed stars are much further away than the planets. Whereas all other astronomers put the fixed stars on a rotating sphere just beyond the outer planets, Gersonides estimated the distance to the fixed stars to be no less than 159,651,513,380,944 earth radii, or about 100,000 lightyears in modern units. Using data he collected from his own observations, Gersonides refuted Ptolemy's model in what the notable physicist Yuval Ne'eman has considered as "one of the most important insights in the history of science, generally missed in telling the story of the transition from epicyclic corrections to the geocentric model to Copernicus' heliocentric model". Ne'eman argued that after Gersonides reviewed Ptolemy's model with its epicycles he realized that it could be checked, by measuring the changes in the apparent brightnesses of Mars and looking for cyclical changes along the conjectured epicycles. 
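Two of the quantitative claims above can be checked by brute force: that (1,2), (2,3), (3,4) and (8,9) are the only pairs of harmonic numbers (numbers of the form 2^m·3^n) differing by 1, and that the quoted stellar-distance estimate corresponds to roughly 100,000 light-years. The sketch below checks both; note that the search bound and the physical constants are assumptions of the example, and that a finite search only confirms the absence of further small pairs, whereas Gersonides proved the result for all harmonic numbers.

```python
# Brute-force illustration of two claims above (finite search only -- Gersonides'
# theorem covers *all* harmonic numbers, which no finite search can establish).

def harmonic_numbers(limit: int):
    """All numbers of the form 2**m * 3**n up to `limit`."""
    out = set()
    p = 1
    while p <= limit:
        q = p
        while q <= limit:
            out.add(q)
            q *= 3
        p *= 2
    return sorted(out)

def consecutive_pairs(limit: int):
    h = set(harmonic_numbers(limit))
    return sorted((x, x + 1) for x in h if x + 1 in h)

print(consecutive_pairs(10**9))   # [(1, 2), (2, 3), (3, 4), (8, 9)]

# Rough check of the stellar-distance figure quoted above (assumed constants):
EARTH_RADIUS_M = 6.371e6
LIGHT_YEAR_M = 9.461e15
distance_ly = 159_651_513_380_944 * EARTH_RADIUS_M / LIGHT_YEAR_M
print(f"{distance_ly:,.0f} light-years")   # on the order of 100,000 light-years
```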
These thus ceased being dogma, they were a theory that had to be experimentally verified, "à la Popper". Gersonides developed tools for these measurements, essentially pinholes and the camera obscura. The results of his observations did not fit Ptolemy's model at all. Concluding that the model was inadequate, Gersonides tried (unsuccessfully) to improve on it. That challenge was finally answered, of course, by Copernicus and Kepler three centuries later, but Gersonides was the first to falsify the Alexandrian dogma - the first known instance of modern falsification philosophy. Gersonides also showed that Ptolemy's model for the lunar orbit, though reproducing correctly the evolution of the Moon's position, fails completely in predicting the apparent sizes of the Moon in its motion. Unfortunately, there is no evidence that the findings influenced later generations of astronomers, even though Gersonides' writings were translated and available. In modern fiction Gersonides is an important character in the novel The Dream of Scipio by Iain Pears, where he is depicted as the mentor of the protagonist Olivier de Noyen, a non-Jewish poet and intellectual. A (fictional) encounter between Gersonides and Pope Clement VI at Avignon during the Black Death is a major element in the book's plot. Awards 1985: National Jewish Book Award Scholarship for The Wars of the Lord References Further reading "Gersonides". The Encyclopaedia Judaica. Keter Publishing. Feldman, Seymour. The Wars of the Lord (3 volumes). Jewish Publication Society. Gerson Lange (ed. & transl.), Sefer Maassei Choscheb: Die Praxis des Rechners – Ein hebräisch-arithmetisches Werk des Levi ben Gerschom aus dem Jahre 1321 (Frankfurt am Main: Buchdruckerei Louis Golde, 1909) online link. Guttman, Julius (1964). Philosophies of Judaism, pp. 214–215. JPS. Lévi ben Gershom ( Gersonide ), Les Guerres du Seigneur, livres III et IV, introduction, traduction [française] et notes par Charles Touati. Paris-La Haye, Mouton & Co., 1968. Charles Touati, La pensée philosophique et théologique de Gersonide, Paris, 1973. Bernard R. Goldstein (ed. & transl.), The Astronomy of Levi ben Gerson (1288-1344) - A Critical Edition of Chapters 1-20 with Translation and Commentary (New York: Springer-Verlag, 1985 [= Studies in the History of Mathematics and Physical Sciences, nr. 11]). Eisen, Robert (1995). Gersonides on Providence, Covenant, and the Chosen People: A Study in Medieval Jewish Philosophy and Biblical Commentary. State University of New York. . C. Sirat, S. Klein-Braslavy, Olga Weijers, Ph. Bobichon, G. Dahan, M. Darmon, G. Freudenthal R. Glasner, M. Kellner, J.-L. Mancha,Les méthodes de travail de Gersonide et le maniement du savoir chez les scolastiques, Librairie philosophique Vrin, Paris, 2003. External links Stanford Encyclopedia of Philosophy (PDF version) Milhamot HaShem First Edition (PDF) This is the text excluding the astronomical text (Book V, Part I). The quality varies. Detailed bibliography of works on and by Gersonides Milchamot Hashem 1288 births 1344 deaths 14th-century French mathematicians 14th-century French philosophers 14th-century French rabbis 14th-century Jewish theologians Bible commentators French astrologers 14th-century astrologers Medieval French astronomers Jewish astronomers Medieval Jewish philosophers Philosophers of Judaism 14th-century Jewish biblical scholars
Gersonides
[ "Astronomy" ]
3,176
[ "Astronomers", "Jewish astronomers" ]
61,632
https://en.wikipedia.org/wiki/John%20Milnor
John Willard Milnor (born February 20, 1931) is an American mathematician known for his work in differential topology, algebraic K-theory and low-dimensional holomorphic dynamical systems. Milnor is a distinguished professor at Stony Brook University and the only mathematician to have won the Fields Medal, the Wolf Prize, the Abel Prize and all three Steele prizes. Early life and career Milnor was born on February 20, 1931, in Orange, New Jersey. His father was J. Willard Milnor, an engineer, and his mother was Emily Cox Milnor. As an undergraduate at Princeton University he was named a Putnam Fellow in 1949 and 1950 and also proved the Fáry–Milnor theorem when he was only 19 years old. Milnor graduated with an A.B. in mathematics in 1951 after completing a senior thesis, titled "Link groups", under the supervision of Ralph Fox. He remained at Princeton to pursue graduate studies and received his Ph.D. in mathematics in 1954 after completing a doctoral dissertation, titled "Isotopy of links", also under the supervision of Fox. His dissertation concerned link groups (a generalization of the classical knot group) and their associated link structure, classifying Brunnian links up to link-homotopy and introducing new invariants, now called Milnor invariants. Upon completing his doctorate, he went on to work at Princeton. He was a professor at the Institute for Advanced Study from 1970 to 1990. He was an editor of the Annals of Mathematics for a number of years after 1962. He has written a number of books that are famous for their clarity and presentation and that have remained an inspiration for research in their areas even decades after publication. He served as Vice President of the AMS in 1976–77. His students have included Tadatoshi Akiba, Jon Folkman, John Mather, Laurent C. Siebenmann, Michael Spivak, and Jonathan Sondow. His wife, Dusa McDuff, is a professor of mathematics at Barnard College and is known for her work in symplectic topology. Research One of Milnor's best-known works is his proof in 1956 of the existence of 7-dimensional spheres with nonstandard differentiable structure, which marked the beginning of a new field – differential topology. He coined the term exotic sphere, referring to any n-sphere with nonstandard differential structure. Kervaire and Milnor initiated the systematic study of exotic spheres, showing in particular that the 7-sphere has 15 distinct differentiable structures (28 if one considers orientation). Egbert Brieskorn found simple algebraic equations for 28 complex hypersurfaces in complex 5-space such that their intersection with a small sphere of dimension 9 around a singular point is diffeomorphic to these exotic spheres. Subsequently, Milnor worked on the topology of isolated singular points of complex hypersurfaces in general, developing the theory of the Milnor fibration, whose fiber has the homotopy type of a bouquet of μ spheres, where μ is known as the Milnor number. Milnor's 1968 book on his theory, Singular Points of Complex Hypersurfaces, inspired the growth of a huge and rich research area that continues to mature to this day. In 1961 Milnor disproved the Hauptvermutung by exhibiting two simplicial complexes that are homeomorphic but combinatorially distinct, using the concept of Reidemeister torsion. In 1984 Milnor introduced a definition of attractor. These objects generalize standard attractors, include so-called unstable attractors, and are now known as Milnor attractors. 
Milnor's current interest is dynamics, especially holomorphic dynamics. His work in dynamics is summarized by Peter Makienko in his review of Topological Methods in Modern Mathematics: It is evident now that low-dimensional dynamics, to a large extent initiated by Milnor's work, is a fundamental part of general dynamical systems theory. Milnor cast his eye on dynamical systems theory in the mid-1970s. By that time the Smale program in dynamics had been completed. Milnor's approach was to start over from the very beginning, looking at the simplest nontrivial families of maps. The first choice, one-dimensional dynamics, became the subject of his joint paper with Thurston. Even the case of a unimodal map, that is, one with a single critical point, turns out to be extremely rich. This work may be compared with Poincaré's work on circle diffeomorphisms, which 100 years before had inaugurated the qualitative theory of dynamical systems. Milnor's work has opened several new directions in this field, and has given us many basic concepts, challenging problems and nice theorems. His other significant contributions include microbundles, influencing the usage of Hopf algebras, theory of quadratic forms and the related area of symmetric bilinear forms, higher algebraic K-theory, game theory, and three-dimensional Lie groups. Awards and honors Milnor was elected as a member of the American Academy of Arts and Sciences in 1961. In 1962 Milnor was awarded the Fields Medal for his work in differential topology. He was elected to the United States National Academy of Sciences in 1963 and the American Philosophical Society in 1965. He later went on to win the National Medal of Science (1967), the Lester R. Ford Award in 1970 and again in 1984, the Leroy P. Steele Prize for "Seminal Contribution to Research" (1982), the Wolf Prize in Mathematics (1989), the Leroy P. Steele Prize for Mathematical Exposition (2004), and the Leroy P. Steele Prize for Lifetime Achievement (2011). In 1991 a symposium was held at Stony Brook University in celebration of his 60th birthday. Milnor was awarded the 2011 Abel Prize for his "pioneering discoveries in topology, geometry and algebra." Reacting to the award, Milnor told the New Scientist "It feels very good," adding that "[o]ne is always surprised by a call at 6 o'clock in the morning." In 2013 he became a fellow of the American Mathematical Society, for "contributions to differential topology, geometric topology, algebraic topology, algebra, and dynamical systems". In 2020 he received the Lomonosov Gold Medal of the Russian Academy of Sciences. 
Publications Books Journal articles Lecture notes See also List of things named after John Milnor Orbit portrait Microbundle References External links Home page at SUNYSB Photo Exotic spheres home page The Abel Prize 2011 – video (40 links from 1965 to May 2021, with 9 videos from Milnor's seminars) 1931 births 20th-century American mathematicians 21st-century American mathematicians Abel Prize laureates Fields Medalists Institute for Advanced Study faculty Living people Members of the United States National Academy of Sciences Foreign members of the Russian Academy of Sciences National Medal of Science laureates People from Orange, New Jersey Princeton University alumni Princeton University faculty Putnam Fellows Stony Brook University faculty American topologists Wolf Prize in Mathematics laureates Fellows of the American Mathematical Society Dynamical systems theorists American geometers Sloan Research Fellows Members of the American Philosophical Society Reeves family
John Milnor
[ "Mathematics" ]
1,468
[ "Dynamical systems theorists", "Dynamical systems" ]
61,633
https://en.wikipedia.org/wiki/Ren%C3%A9%20Thom
René Frédéric Thom (2 September 1923 – 25 October 2002) was a French mathematician who received the Fields Medal in 1958. He made his reputation as a topologist, moving on to aspects of what would be called singularity theory; he became world-famous among the wider academic community and the educated general public for one aspect of this latter interest, his work as founder of catastrophe theory (later developed by Christopher Zeeman). Life and career René Thom grew up in a modest family in Montbéliard, Doubs and obtained a Baccalauréat in 1940. After the German invasion of France, his family took refuge in Switzerland and then in Lyon. In 1941 he moved to Paris to attend Lycée Saint-Louis and in 1943 he began studying mathematics at École Normale Supérieure, becoming agrégé in 1946. He received his PhD in 1951 from the University of Paris. His thesis, titled Espaces fibrés en sphères et carrés de Steenrod (Sphere bundles and Steenrod squares), was written under the direction of Henri Cartan. After a fellowship at Princeton University Graduate College (1951–1952), he became Maître de conférences at the Universities of Grenoble (1953–1954) and Strasbourg (1954–1963), where he was appointed Professor in 1957. In 1964 he moved to the Institut des Hautes Études Scientifiques, in Bures-sur-Yvette, where he worked until 1990. In 1958 Thom received the Fields Medal at the International Congress of Mathematicians in Edinburgh for the foundations of cobordism theory, which were already present in his thesis. He was an invited speaker at the International Congress of Mathematicians two more times: in 1970 in Nice and in 1983 in Warsaw (which he did not attend). He was awarded the Brouwer Medal in 1970, the Grand Prix Scientifique de la Ville de Paris in 1974, and the John von Neumann Lecture Prize in 1976. He became the first president, together with Louis Néel, of the newly established Fondation Louis-de-Broglie in 1973 and was elected a member of the Académie des Sciences of Paris in 1976. Salvador Dalí paid homage to René Thom with the paintings The Swallow's Tail and Topological Abduction of Europe. Research While René Thom is best known to the public for his development of catastrophe theory between 1968 and 1972, his academic achievements concern mostly his mathematical work on topology. In the early 1950s it concerned what are now called Thom spaces, characteristic classes, cobordism theory, and the Thom transversality theorem. Another example of this line of work is the Thom conjecture, versions of which have been investigated using gauge theory. From the mid-1950s he moved into singularity theory, of which catastrophe theory is just one aspect, and in a series of deep (and at the time obscure) papers between 1960 and 1969 developed the theory of stratified sets and stratified maps, proving a basic stratified isotopy theorem describing the local conical structure of Whitney stratified sets, now known as the Thom–Mather isotopy theorem. Much of his work on stratified sets was developed so as to understand the notion of topologically stable maps, and to eventually prove the result that the set of topologically stable mappings between two smooth manifolds is a dense set. Thom's lectures on the stability of differentiable mappings, given at the University of Bonn in 1960, were written up by Harold Levine and published in the proceedings of a year-long symposium on singularities at Liverpool University during 1969–70, edited by C. T. C. Wall. 
The proof of the density of topologically stable mappings was completed by John Mather in 1970, based on the ideas developed by Thom in the previous ten years. A coherent detailed account was published in 1976 by Christopher Gibson, Klaus Wirthmüller, Andrew du Plessis, and Eduard Looijenga. During the last twenty years of his life Thom's published work was mainly in philosophy and epistemology, and he undertook a reevaluation of Aristotle's writings on science. In 1992, he was one of eighteen academics who sent a letter to Cambridge University protesting against plans to award Jacques Derrida an honorary doctorate. Beyond Thom's contributions to algebraic topology, he studied differentiable mappings, through the study of generic properties. In his final years, he turned his attention to an effort to apply his ideas about structural topography to the questions of thought, language, and meaning in the form of a "semiophysics". Bibliography "Ensembles et morphismes stratifiés", Bulletin of the American Mathematical Society 75 (1969), 240–284. "Semio Physics: A Sketch", Addison Wesley (1990). Structural Stability and Morphogenesis, W. A. Benjamin (1972). See also "Quelques propriétés globales des variétés differentiables" Reeb graph References External links Washington Post Online edition (free registration) Meeting René THOM 1923 births 2002 deaths Scientists from Montbéliard 20th-century French mathematicians École Normale Supérieure alumni Recipients of the National Order of Scientific Merit (Brazil) Fields Medalists Brouwer Medalists French semioticians Members of the French Academy of Sciences Institute for Advanced Study visiting scholars Theoretical biologists Topologists Lycée Saint-Louis alumni Academic staff of the University of Strasbourg University of Paris alumni French lecturers
René Thom
[ "Mathematics", "Biology" ]
1,120
[ "Bioinformatics", "Topologists", "Topology", "Theoretical biologists" ]
61,636
https://en.wikipedia.org/wiki/Lars%20Ahlfors
Lars Valerian Ahlfors (18 April 1907 – 11 October 1996) was a Finnish mathematician, remembered for his work in the field of Riemann surfaces and his textbook on complex analysis. Background Ahlfors was born in Helsinki, Finland. His mother, Sievä Helander, died at his birth. His father, Axel Ahlfors, was a professor of engineering at the Helsinki University of Technology. The Ahlfors family was Swedish-speaking, so he first attended the private school Nya svenska samskolan where all classes were taught in Swedish. Ahlfors studied at University of Helsinki from 1924, graduating in 1928 having studied under Ernst Lindelöf and Rolf Nevanlinna. He assisted Nevanlinna in 1929 with his work on Denjoy's conjecture on the number of asymptotic values of an entire function. In 1929 Ahlfors published the first proof of this conjecture, now known as the Denjoy–Carleman–Ahlfors theorem. It states that the number of asymptotic values approached by an entire function of order ρ along curves in the complex plane going toward infinity is less than or equal to 2ρ. He completed his doctorate from the University of Helsinki in 1930. Career Ahlfors worked as an associate professor at the University of Helsinki from 1933 to 1936. In 1936 he was one of the first two people to be awarded the Fields Medal (the other was Jesse Douglas). In 1935 Ahlfors visited Harvard University. He returned to Finland in 1938 to take up a professorship at the University of Helsinki. The outbreak of war in 1939 led to problems although Ahlfors was unfit for military service. He was offered a position at the Swiss Federal Institute of Technology at Zurich in 1944 and finally managed to travel there in March 1945. He did not enjoy his time in Switzerland, so in 1946 he jumped at a chance to leave, returning to work at Harvard, where he remained until his retirement in 1977; he was William Caspar Graustein Professor of Mathematics from 1964. Ahlfors was a visiting scholar at the Institute for Advanced Study in 1962 and again in 1966. He was awarded the Wihuri Prize in 1968 and the Wolf Prize in Mathematics in 1981. He served as the Honorary President of the International Congress of Mathematicians in 1986 at Berkeley, California, in celebration of his 50th year of the award of his Fields Medal. His book Complex Analysis (1953) is the classic text on the subject and is almost certainly referenced in any more recent text which makes heavy use of complex analysis. Ahlfors wrote several other significant books, including Riemann surfaces (1960) and Conformal invariants (1973). He made decisive contributions to meromorphic curves, value distribution theory, Riemann surfaces, conformal geometry, quasiconformal mappings and other areas during his career. Personal life In 1933, he married Erna Lehnert, an Austrian who with her parents had first settled in Sweden and then in Finland. The couple had three daughters. Ahlfors died of pneumonia at the Willowwood nursing home in Pittsfield, Massachusetts in 1996. See also Ahlfors finiteness theorem Ahlfors function Ahlfors measure conjecture Beurling–Ahlfors transform Schwarz–Ahlfors–Pick theorem Measurable Riemann mapping theorem Bibliography Articles Ahlfors, Lars V. An extension of Schwarz's lemma. Trans. Amer. Math. Soc. 43 (1938), no. 3, 359–364. doi:10.2307/1990065 Ahlfors, Lars; Beurling, Arne. Conformal invariants and function-theoretic null-sets. Acta Math. 83 (1950), 101–129. doi:10.1007/BF02392634 Beurling, A.; Ahlfors, L. The boundary correspondence under quasiconformal mappings. 
Acta Math. 96 (1956), 125–142. doi:10.1007/BF02392360 Ahlfors, Lars; Bers, Lipman. Riemann's mapping theorem for variable metrics. Ann. of Math. (2) 72 (1960), 385–404. doi:10.2307/1970141 Ahlfors, Lars Valerian. Collected papers. Vol. 1. 1929–1955. Edited with the assistance of Rae Michael Shortt. Contemporary Mathematicians. Birkhäuser, Boston, Mass., 1982. xix+520 pp. Ahlfors, Lars Valerian. Collected papers. Vol. 2. 1954–1979. Edited with the assistance of Rae Michael Shortt. Contemporary Mathematicians. Birkhäuser, Boston, Mass., 1982. xix+515 pp. Books Ahlfors, Lars V. Complex analysis. An introduction to the theory of analytic functions of one complex variable. Third edition. International Series in Pure and Applied Mathematics. McGraw-Hill Book Co., New York, 1978. xi+331 pp. Ahlfors, Lars V. Conformal invariants. Topics in geometric function theory. Reprint of the 1973 original. With a foreword by Peter Duren, F. W. Gehring and Brad Osgood. AMS Chelsea Publishing, Providence, RI, 2010. xii+162 pp. Ahlfors, Lars V. Lectures on quasiconformal mappings. Second edition. With supplemental chapters by C. J. Earle, I. Kra, M. Shishikura and J. H. Hubbard. University Lecture Series, 38. American Mathematical Society, Providence, RI, 2006. viii+162 pp. Ahlfors, Lars V. Möbius transformations in several dimensions. Ordway Professorship Lectures in Mathematics. University of Minnesota, School of Mathematics, Minneapolis, Minn., 1981. ii+150 pp. Ahlfors, Lars V.; Sario, Leo. Riemann surfaces. Princeton Mathematical Series, No. 26 Princeton University Press, Princeton, N.J. 1960 xi+382 pp. References External links Ahlfors entry on Harvard University Mathematics department web site. Papers of Lars Valerian Ahlfors : an inventory (Harvard University Archives) Lars Valerian Ahlfors The MacTutor History of Mathematics page about Ahlfors The Mathematics of Lars Valerian Ahlfors, Notices of the American Mathematical Society; vol. 45, no. 2 (February 1998). Lars Valerian Ahlfors (1907–1996), Notices of the American Mathematical Society; vol. 45, no. 2 (February 1998). National Academy of Sciences Biographical Memoir Author profile in the database zbMATH 1907 births 1996 deaths 20th-century Finnish mathematicians Academic staff of the Helsinki University of Technology Finnish emigrants to the United States Complex analysts Fields Medalists Foreign members of the Russian Academy of Sciences Foreign members of the USSR Academy of Sciences Harvard University Department of Mathematics faculty Institute for Advanced Study visiting scholars Mathematical analysts Members of the United States National Academy of Sciences People from Uusimaa Province (Grand Duchy of Finland) People from Winchester, Massachusetts Swedish-speaking Finns Wolf Prize in Mathematics laureates Members of the Royal Swedish Academy of Sciences
Lars Ahlfors
[ "Mathematics" ]
1,469
[ "Mathematical analysis", "Mathematical analysts" ]
61,648
https://en.wikipedia.org/wiki/Natural%20abundance
In physics, natural abundance (NA) refers to the abundance of isotopes of a chemical element as naturally found on a planet. The relative atomic mass (a weighted average, weighted by mole-fraction abundance figures) of these isotopes is the atomic weight listed for the element in the periodic table. The abundance of an isotope varies from planet to planet, and even from place to place on the Earth, but remains relatively constant in time (on a short-term scale). As an example, uranium has three naturally occurring isotopes: 238U, 235U, and 234U. Their respective natural mole-fraction abundances are 99.2739–99.2752%, 0.7198–0.7202%, and 0.0050–0.0059%. For example, if 100,000 uranium atoms were analyzed, one would expect to find approximately 99,274 238U atoms, approximately 720 235U atoms, and very few (most likely 5 or 6) 234U atoms. This is because 238U is much more stable than 235U or 234U, as the half-life of each isotope reveals: 4.468 × 10⁹ years for 238U compared with 7.038 × 10⁸ years for 235U and 245,500 years for 234U. Exactly because the different uranium isotopes have different half-lives, when the Earth was younger, the isotopic composition of uranium was different. As an example, 1.7×10⁹ years ago the NA of 235U was 3.1% compared with today's 0.7%, and that allowed a natural nuclear fission reactor to form, something that cannot happen today. However, the natural abundance of a given isotope is also affected by the probability of its creation in nucleosynthesis (as in the case of samarium; radioactive 147Sm and 148Sm are much more abundant than stable 144Sm) and by production of a given isotope as a daughter of natural radioactive isotopes (as in the case of radiogenic isotopes of lead). Deviations from natural abundance It is now known from study of the Sun and primitive meteorites that the solar system was initially almost homogeneous in isotopic composition. Deviations from the (evolving) galactic average, locally sampled around the time that the Sun's nuclear burning began, can generally be accounted for by mass fractionation (see the article on mass-independent fractionation) plus a limited number of nuclear decay and transmutation processes. There is also evidence for injection of short-lived (now-extinct) isotopes from a nearby supernova explosion that may have triggered solar nebula collapse. Hence deviations from natural abundance on Earth are often measured in parts per thousand (per mille or ‰) because they are less than one percent (%). An exception to this lies with the presolar grains found in primitive meteorites. These small grains condensed in the outflows of evolved ("dying") stars and escaped the mixing and homogenization processes in the interstellar medium and the solar accretion disk (also known as the solar nebula or protoplanetary disk). As stellar condensates ("stardust"), these grains carry the isotopic signatures of specific nucleosynthesis processes in which their elements were made. In these materials, deviations from "natural abundance" are sometimes measured in factors of 100. Natural isotope abundance of some elements The next table gives the terrestrial isotope distributions for some elements. Some elements, such as phosphorus and fluorine, only exist as a single isotope, with a natural abundance of 100%. 
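As a worked illustration of two calculations described above (the abundance-weighted atomic weight, and the change in isotopic composition over time caused by differing half-lives), the following short Python sketch uses the uranium abundances and half-lives quoted in this article. The isotopic masses are approximate reference values added only for the example, and the variable and function names are illustrative rather than standard.

# Illustrative sketch: atomic weight as a mole-fraction-weighted average,
# and a rough back-extrapolation of the 235U abundance from the half-lives above.

isotopes = {
    # isotope: (approximate mass in u, present-day mole-fraction abundance)
    "U-238": (238.0508, 0.992745),
    "U-235": (235.0439, 0.007200),
    "U-234": (234.0410, 0.000055),
}

# Atomic weight = sum of (isotopic mass x mole fraction) over all isotopes.
atomic_weight = sum(mass * frac for mass, frac in isotopes.values())
print(f"Atomic weight of natural uranium: {atomic_weight:.4f} u")

# Half-lives in years, as given in the text.
half_life_years = {"U-238": 4.468e9, "U-235": 7.038e8}

def u235_abundance(years_ago):
    """Approximate 235U mole fraction in the past (234U neglected)."""
    # Each isotope was more plentiful in the past by a factor of 2**(t / half-life).
    n238 = isotopes["U-238"][1] * 2 ** (years_ago / half_life_years["U-238"])
    n235 = isotopes["U-235"][1] * 2 ** (years_ago / half_life_years["U-235"])
    return n235 / (n235 + n238)

print(f"235U abundance 1.7 billion years ago: {u235_abundance(1.7e9):.1%}")

Run as written, this prints an atomic weight of about 238.03 u and a past 235U abundance of roughly 3%, consistent with the figures quoted above; the exact value of the latter depends on the abundance and half-life figures used.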
See also Abundance of the chemical elements Decay product Isotope Presolar grains Radionuclide References External links Berkeley Isotopes Project Interactive Table (archived 2015) Exact Masses of the Elements and Isotopic Abundances, Scientific Instrument Services Tools to compute low- and high-precision isotopic distribution (archived 2011) Chemical properties Isotopes
Natural abundance
[ "Physics", "Chemistry" ]
808
[ "nan", "Isotopes", "Nuclear physics" ]
61,692
https://en.wikipedia.org/wiki/PC%20Card
PC Card is a parallel peripheral interface for laptop computers and PDAs. The PCMCIA originally introduced the 16-bit ISA-based PCMCIA Card in 1990, but renamed it to PC Card in March 1995 to avoid confusion with the name of the organization. The CardBus PC Card was introduced as a 32-bit version of the original PC Card, based on the PCI specification. CardBus slots are backwards compatible, but older slots are not forward compatible with CardBus cards. Although originally designed as a standard for memory-expansion cards for computer storage, the existence of a usable general standard for notebook peripherals led to the development of many kinds of devices including network cards, modems, and hard disks. The PC Card port has been superseded by the ExpressCard interface since 2003, which was also initially developed by the PCMCIA. The organization dissolved in 2009, with its assets merged into the USB Implementers Forum. Applications Many notebooks in the 1990s had two adjacent type-II slots, which allowed installation of two type-II cards or one, double-thickness, type-III card. The cards were also used in early digital SLR cameras, such as the Kodak DCS 300 series. However, their original use as storage expansion is no longer common. Some manufacturers such as Dell continued to offer them into 2012 on their ruggedized XFR notebooks. Mercedes-Benz used a PCMCIA card reader in the W221 S-Class for model years 2006-2009. It was used for reading media files such as MP3 audio files to play through the COMAND infotainment system. After 2009, it was replaced with a standard SD Card reader. , some vehicles from Honda equipped with a navigation system still included a PC Card reader integrated into the audio system. Some Japanese brand consumer entertainment devices such as TV sets include a PC Card slot for playback of media. Adapters for PC Cards to Personal Computer ISA slots were available when these technologies were current. Cardbus adapters for PCI slots have been made. These adapters were sometimes used to fit Wireless (802.11) PCMCIA cards into desktop computers with PCI slots. History Before the introduction of the PCMCIA card, the parallel port was commonly used for portable peripherals. The PCMCIA 1.0 card standard was published by the Personal Computer Memory Card International Association in November 1990 and was soon adopted by more than eighty vendors. It corresponds with the Japanese JEIDA memory card 4.0 standard. It was originally developed to support Memory cards. Intel authored the Exchangable Card Architecture (ExCA) specification, but later merged this into the PCMCIA. SanDisk (operating at the time as "SunDisk") launched its PCMCIA card in October 1992. The company was the first to introduce a writeable Flash RAM card for the HP 95LX (an early MS-DOS pocket computer). These cards conformed to a supplemental PCMCIA-ATA standard that allowed them to appear as more conventional IDE hard drives to the 95LX or a PC. This had the advantage of raising the upper limit on capacity to the full 32 MB available under DOS 3.22 on the 95LX. New Media Corporation was one of the first companies established for the express purpose of manufacturing PC Cards; they became a major OEM for laptop manufacturers such as Toshiba and Compaq for PC Card products. It soon became clear that the PCMCIA card standard needed expansion to support "smart" I/O cards to address the emerging need for fax, modem, LAN, harddisk and floppy disk cards. 
It also needed interrupt facilities and hot plugging, which required the definition of new BIOS and operating system interfaces. This led to the introduction of release 2.0 of the PCMCIA standard and JEIDA 4.1 in September 1991, which saw corrections and expansion with Card Services (CS) in the PCMCIA 2.1 standard in November 1992. To recognize increased scope beyond memory, and to aid in marketing, the association acquired the rights to the simpler term "PC Card" from IBM. This was the name of the standard from version 2 of the specification onwards. These cards were used for wireless networks, modems, and other functions in notebook PCs. After the release of PCIe-based ExpressCard in 2003, laptop manufacturers started to fit ExpressCard slots to new laptops instead of PC Card slots. Form factors All PC Card devices use a similar sized package which is long and wide, the same size as a credit card. Type I Cards designed to the original specification (PCMCIA 1.0) are type I and have a 16-bit interface. They are thick and have a dual row of 34 holes (68 in total) along a short edge as a connecting interface. Type-I PC Card devices are typically used for memory devices such as RAM, flash memory, OTP (One-Time Programmable), and SRAM cards. Type II introduced with version 2.0 of the standard. Type-II and above PC Card devices use two rows of 34 sockets, and have a 16- or 32-bit interface. They are thick. Type-II cards introduced I/O support, allowing devices to attach an array of peripherals or to provide connectors/slots to interfaces for which the host computer had no built-in support. For example, many modem, network, and TV cards accept this configuration. Due to their thinness, most Type II interface cards have miniature interface connectors on the card connecting to a dongle, a short cable that adapts from the card's miniature connector to an external full-size connector. Some cards instead have a lump on the end with the connectors. This is more robust and convenient than a separate adapter but can block the other slot where slots are present in a pair. Some Type II cards, most notably network interface and modem cards, have a retractable jack, which can be pushed into the card and will pop out when needed, allowing insertion of a cable from above. When use of the card is no longer needed, the jack can be pushed back into the card and locked in place, protecting it from damage. Most network cards have their jack on one side, while most modems have their jack on the other side, allowing the use of both at the same time as they do not interfere with each other. Wireless Type II cards often had a plastic shroud that jutted out from the end of the card to house the antenna. In the mid-90s, PC Card Type II hard disk drive cards became available; previously, PC Card hard disk drives were only available in Type III. Type III introduced with version 2.01 of the standard in 1992. Type-III PC Card devices are 16-bit or 32-bit. These cards are thick, allowing them to accommodate devices with components that would not fit type I or type II height. Examples are hard disk drive cards, and interface cards with full-size connectors that do not require dongles (as is commonly required with type II interface cards). Type IV Type-IV cards, introduced by Toshiba, were not officially standardized or sanctioned by the PCMCIA. These cards are thick. 
Bus Original The original standard was defined for both 5 V and 3.3 volt cards, with 3.3 V cards having a key on the side to prevent them from being inserted fully into a 5 V-only slot. Some cards and some slots operate at both voltages as needed. The original standard was built around an 'enhanced' 16-bit ISA bus platform. A newer version of the PCMCIA standard is CardBus (see below), a 32-bit version of the original standard. In addition to supporting a wider bus of 32 bits (instead of the original 16), CardBus also supports bus mastering and operation speeds up to 33 MHz. CardBus CardBus are PCMCIA 5.0 or later (JEIDA 4.2 or later) 32-bit PCMCIA devices, introduced in 1995 and present in laptops from late 1997 onward. CardBus is effectively a 32-bit, 33 MHz PCI bus in the PC Card design. CardBus supports bus mastering, which allows a controller on the bus to talk to other devices or memory without going through the CPU. Many chipsets, such as those that support Wi-Fi, are available for both PCI and CardBus. The notch on the left hand front of the device is slightly shallower on a CardBus device so, by design, a 32-bit device cannot be plugged into earlier equipment supporting only 16-bit devices. Most new slots accept both CardBus and the original 16-bit PC Card devices. CardBus cards can be distinguished from older cards by the presence of a gold band with eight small studs on the top of the card next to the pin sockets. The speed of CardBus interfaces in 32-bit burst mode depends on the transfer type: in byte mode, transfer is 33 MB/s; in word mode it is 66 MB/s; and in dword (double-word) mode 132 MB/s. CardBay CardBay is a variant added to the PCMCIA specification introduced in 2001. It was intended to add some forward compatibility with USB and IEEE 1394, but was not universally adopted and only some notebooks have PC Card controllers with CardBay features. This is an implementation of Microsoft and Intel's joint Drive Bay initiative. Design The card information structure (CIS) is metadata stored on a PC card that contains information about the formatting and organization of the data on the card. The CIS also contains information such as: Type of card Supported power supply options Supported power saving capabilities Manufacturer Model number When a card is unrecognized it is frequently because the CIS information is either lost or damaged. Descendants and variants ExpressCard ExpressCard is a later specification from the PCMCIA, intended as a replacement for PC Card, built around the PCI Express and USB 2.0 standards. The PC Card standard is closed to further development and PCMCIA strongly encourages future product designs to utilize the ExpressCard interface. From about 2006, ExpressCard slots replaced PCMCIA slots in laptop computers, with a few laptops having both in the transition period. ExpressCard and CardBus sockets are physically and electrically incompatible. ExpressCard-to-CardBus and Cardbus-to-ExpressCard adapters are available that connect a Cardbus card to an Expresscard slot, or vice versa, and carry out the required electrical interfacing. These adapters do not handle older non-Cardbus PCMCIA cards. PC Card devices can be plugged into an ExpressCard adaptor, which provides a PCI-to-PCIe Bridge. Despite being much faster in speed/bandwidth, ExpressCard was not as popular as PC Card, due in part to the ubiquity of USB ports on modern computers. Most functionality provided by PC Card or ExpressCard devices is now available as an external USB device. 
These USB devices have the advantage of being compatible with desktop computers as well as portable devices. (Desktop computers were rarely fitted with a PC Card or ExpressCard slot.) This reduced the requirement for internal expansion slots; by 2011, many laptops had none. Some IBM ThinkPad laptops took their onboard RAM (in sizes ranging from 4 to 16 MB) in the factor of an IC-DRAM Card. While very similar in form-factor, these cards did not go into a standard PC Card Slot, often being installed under the keyboard, for example. They also were not pin-compatible, as they had 88 pins but in two staggered rows, as opposed to even rows like PC Cards. These correspond to versions 1 and 2 of the JEIDA memory card standard. Others The shape is also used by the Common Interface form of conditional-access modules for DVB, and by Panasonic for their professional "P2" video acquisition memory cards. A CableCARD conditional-access module is a type II PC Card intended to be plugged into a cable set-top box or digital cable-ready television. The interface has spawned a generation of flash memory cards that set out to improve on the size and features of Type I cards: CompactFlash, MiniCard, P2 Card and SmartMedia. For example, the PC Card electrical specification is also used for CompactFlash, so a PC Card CompactFlash adapter can be a passive physical adapter rather than requiring additional circuitry. CompactFlash is a smaller dimensioned 50 pin subset of the 68 pin PC Card interface. It requires a setting for the interface mode of either "memory" or "ATA storage". The EOMA68 open-source hardware standard uses the same 68-pin PC Card connectors and corresponds to the PC Card form factor in many other ways. See also Further reading References External links Understanding PC Card, PCMCIA, Cardbus, 16-bit, 32-bit. Solid-state computer storage media Motherboard PCMCIA Computer standards Computer-related introductions in 1990
PC Card
[ "Technology" ]
2,684
[ "Computer standards" ]
61,697
https://en.wikipedia.org/wiki/Aye-aye
The aye-aye (Daubentonia madagascariensis) is a long-fingered lemur, a strepsirrhine primate native to Madagascar with rodent-like teeth that perpetually grow and a special thin middle finger that they can use to catch grubs and larvae out of tree trunks. It is the world's largest nocturnal primate. It is characterized by its unusual method of finding food: it taps on trees to find grubs, then gnaws holes in the wood using its forward-slanting incisors to create a small hole into which it inserts its narrow middle finger to pull the grubs out. This foraging method is called percussive foraging, and takes up 5–41% of foraging time. The only other living mammal species known to find food in this way are the striped possum and trioks (genus Dactylopsila) of northern Australia and New Guinea, which are marsupials. From an ecological point of view, the aye-aye fills the niche of a woodpecker, as it is capable of penetrating wood to extract the invertebrates within. The aye-aye is the only extant member of the genus Daubentonia and family Daubentoniidae. It is currently classified as Endangered by the IUCN. A second species, Daubentonia robusta, appears to have become extinct at some point within the last 1000 years, and is known from subfossil finds. Etymology The genus Daubentonia was named after the French naturalist Louis-Jean-Marie Daubenton by his student, Étienne Geoffroy Saint-Hilaire, in 1795. Initially, Geoffroy considered using the Greek name Scolecophagus ("worm-eater") in reference to its eating habits, but he decided against it because he was uncertain about the aye-aye's habits and whether other related species might eventually be discovered. In 1863, British zoologist John Edward Gray coined the family name Daubentoniidae. The French naturalist Pierre Sonnerat was the first to use the vernacular name "aye-aye" in 1782 when he described and illustrated the lemur, though it was also called the "long-fingered lemur" by English zoologist George Shaw in 1800—a name that did not stick. According to Sonnerat, the name "aye-aye" was a "" (cry of exclamation and astonishment). However, American paleoanthropologist Ian Tattersall noted in 1982 that the name resembles the Malagasy name "hai hai" or "hay hay", (also ahay, , haihay) which refers to the animal and is used around the island. According to Dunkel et al. (2012), the widespread use of the Malagasy name indicates that the name could not have come from Sonnerat. Another hypothesis proposed by Simons and Meyers (2001) is that it derives from "heh heh", which is Malagasy for "I don't know". If correct, then the name might have originated from Malagasy people saying "heh heh" to avoid saying the name of a feared, magical animal. Evolutionary history and taxonomy Due to its derived morphological features, the classification of the aye-aye was debated following its discovery. The possession of continually growing incisors (front teeth) parallels those of rodents, leading early naturalists to mistakenly classify the aye-aye within the mammalian order Rodentia and as a squirrel, due to its toes, hair coloring, and tail. However, the aye-aye is also similar to felines in its head shape, eyes, ears and nostrils. The aye-aye's classification with the order Primates has been just as uncertain. It has been considered a highly derived member of the family Indridae, a basally diverging branch of the strepsirrhine suborder, and of indeterminate relation to all living primates. 
In 1931, Anthony and Coupin classified the aye-aye under infraorder Chiromyiformes, a sister group to the other strepsirrhines. Colin Groves upheld this classification in 2005 because he was not entirely convinced the aye-aye formed a clade with the rest of the Malagasy lemurs. However, molecular results have consistently placed Daubentonia as the most basally diverging of lemurs. The most parsimonious explanation for this is that all lemurs are derived from a single ancestor that rafted from Africa to Madagascar during the Paleogene. Similarities in dentition between aye-ayes and several African primate fossils (Plesiopithecus and Propotto) have led to the alternate theory that the ancestors of aye-ayes colonized Madagascar separately from other lemurs. In 2008, Russell Mittermeier, Colin Groves, and others ignored addressing higher-level taxonomy by defining lemurs as monophyletic and containing five living families, including Daubentoniidae. Further evidence indicating that the aye-aye belongs in the superfamily Lemuroidea can be inferred from the presence of petrosal bullae encasing the ossicles of the ear. The aye-ayes are also similar to lemurs in their shorter back legs. Anatomy and morphology A full-grown aye-aye is typically about long with a tail longer than its body. The species has an average head and body length of plus a tail of , and weighs around . Young aye-ayes typically are silver colored on their front and have a stripe down their back. However, as the aye-ayes begin to reach maturity, their bodies will be completely covered in thick fur and are typically not one solid color. On the head and back, the ends of the hair are typically tipped with white while the rest of the body will ordinarily be a yellow and/or brown color. Among the aye-aye's signature traits are its fingers. The third finger, which is much thinner than the others, is used for extracting grubs and insects out of trees, using the hooked nail. The finger is unique in the animal kingdom in that it possesses a ball-and-socket metacarpophalangeal joint, can reach the throat through a nostril and is used for picking one's nose and eating mucus (mucophagy) so harvested from inside the nose. The aye-aye has also evolved a sixth digit, a pseudothumb, to aid in gripping. The complex geometry of ridges on the inner surface of aye-aye ears helps to sharply focus not only echolocation signals from the tapping of its finger, but also to passively listen for any other sound produced by the prey. These ridges can be regarded as the acoustic equivalent of a Fresnel lens, and may be seen in a large variety of unrelated animals, such as lesser galago, bat-eared fox, mouse lemur, and others. Females have two nipples located in the region of the groin. The male's genitalia are similar to those of canids, with a large prostate and long baculum. Behaviour and lifestyle The aye-aye is a nocturnal and arboreal animal meaning that it spends most of its life high in the trees. Although they are known to come down to the ground on occasion, aye-ayes sleep, eat, travel and mate in the trees and are most commonly found close to the canopy where there is plenty of cover from the dense foliage. During the day, aye-ayes sleep in spherical nests in the forks of tree branches that are constructed out of leaves, branches and vines before emerging after dark to begin their hunt for food. Aye-aye are solitary animals that mark their large home range with scent. 
The smaller territories of females often overlap those of at least a couple of males. Male aye-ayes tend to share their territories with other males and are even known to share the same nests (although not at the same time), and can seemingly tolerate each other until they hear the call of a female that is looking for a mate. Mating season extends throughout the year, with females typically starting to breed at the age of three or four. They give birth to one offspring every two to three years. During the period of parenting, a female becomes the dominant figure over males, likely to secure better access to food while caring for her young. The infant remains in a nest for up to two months before venturing out, but it takes another seven months before the young aye-aye can maneuver the canopy as skillfully as an adult. Diet and foraging The aye-aye is an omnivore and commonly eats seeds, nuts, fruits, nectar, plant exudates and fungi, but also xylophagous, or wood boring, insect larvae (especially cerambycid beetle larvae) and honey. Aye-ayes tap on the trunks and branches of trees at a rate of up to eight times per second, and listen to the echo produced to find hollow chambers. Studies have suggested that the acoustic properties associated with the foraging cavity have no effect on excavation behavior. Once a chamber is found, they chew a hole into the wood and get grubs out of that hole with their highly adapted narrow and bony middle fingers. The aye-aye begins foraging between 30 minutes before and three hours after sunset. Up to 80% of the night is spent foraging in the canopy, separated by occasional rest periods. It climbs trees by making successive vertical leaps, much like a squirrel. Horizontal movement is more difficult, but the aye-aye rarely descends to jump to another tree, and can often travel up to a night. Though foraging is usually solitary, they occasionally forage in groups. Individual movements within the group are coordinated using both vocalisations and scent signals. Social systems The aye-aye is classically considered 'solitary' as they have not been observed to groom each other. However, recent research suggests that it is more social than once thought. It usually sticks to foraging in its own personal home range, or territory. The home ranges of males often overlap, and the males can be very social with each other. Female home ranges never overlap, though a male's home range often overlaps that of several females. The male aye-ayes live in large areas up to , while females have smaller living spaces that go up to . It is difficult for the males to defend a singular female because of the large home range. They are seen exhibiting polygyny because of this. Regular scent marking with their cheeks and neck is how aye-ayes let others know of their presence and repel intruders from their territory. Like many other prosimians, the female aye-aye is dominant to the male. They are not typically monogamous, and will often challenge each other for mates. Male aye-ayes are very assertive in this way, and sometimes even pull other males away from a female during mating. Males are normally locked to females during mating in sessions that may last up to an hour. Outside of mating, males and females interact only occasionally, usually while foraging. The aye-aye is thought to be the only primate which uses echolocation to find its prey. Distribution and habitat The aye-aye lives primarily on the east coast of Madagascar. 
Its natural habitat is rainforest or dry deciduous forest, but many live in cultivated areas due to deforestation. Rainforest aye-ayes, the most common, dwell in canopy areas, and are usually sighted above 70 meters altitude. They sleep during the day in nests built from interwoven twigs and dead leaves up in the canopy among the vines and branches. Conservation The aye-aye was thought to be extinct in 1933, but was rediscovered in 1957. In 1966, nine individuals were transported to Nosy Mangabe, an island near Maroantsetra off eastern Madagascar. Recent research shows the aye-aye is more widespread than was previously thought, but its conservation status was changed to endangered in 2014. This is for four main reasons: the aye-aye is considered evil by local cultures, and is killed as such. The forests of Madagascar are declining in range due to deforestation. Local farmers will kill aye-ayes to protect their crops; aye-aye poaching is another major issue. However, there is no direct evidence to suggest aye-ayes pose any legitimate threat to crops and therefore are killed based on superstition. As many as 50 aye-ayes can be found in zoological facilities worldwide. Folk belief The aye-aye is often viewed as a harbinger of evil and death and killed on sight. Others believe, if one points its narrowest finger at someone, they are marked for death. Some say that the appearance of an aye-aye in a village predicts the death of a villager, and the only way to prevent this is to kill it. The Sakalava people go so far as to claim aye-ayes sneak into houses through the thatched roofs and murder the sleeping occupants by using their middle fingers to puncture their victims' aorta. Captive breeding The conservation of this species has been aided by captive breeding, primarily at the Duke Lemur Center in Durham, North Carolina. This center has been influential in keeping, researching and breeding aye-ayes and other lemurs. They have sent multiple teams to capture lemurs in Madagascar and have since created captive breeding groups for their lemurs. Specifically, they were responsible for the first aye-aye born into captivity and studied how he and the other aye-aye infants born at the center develop through infancy. They have also revolutionized the understanding of the aye-aye diet. References Literature cited External links ARKive – images and movies of the aye-aye (Daubentonia madagascariensis) EDGE species Fauna of the Madagascar lowland forests Lemurs Madagascar dry deciduous forests Mammals described in 1788 Mammals of Madagascar Taxa named by Johann Friedrich Gmelin
Aye-aye
[ "Biology" ]
2,867
[ "EDGE species", "Biodiversity" ]
61,699
https://en.wikipedia.org/wiki/Genera%20%28operating%20system%29
Genera is a commercial operating system and integrated development environment for Lisp machines created by Symbolics. It is essentially a fork of an earlier operating system originating on the Massachusetts Institute of Technology (MIT) AI Lab's Lisp machines which Symbolics had used in common with Lisp Machines, Inc. (LMI), and Texas Instruments (TI). Genera was also sold by Symbolics as Open Genera, which runs Genera on computers based on a Digital Equipment Corporation (DEC) Alpha processor using Tru64 UNIX. In 2021 a new version was released as Portable Genera which runs on Tru64 UNIX on Alpha, Linux on x86-64 and Arm64 Linux, and macOS on x86-64 and Arm64 (Apple Silicon M Series). It is released and licensed as proprietary software. Genera is an example of an object-oriented operating system based on the programming language Lisp. Genera supports incremental and interactive development of complex software using a mix of programming styles with extensive support for object-oriented programming. MIT's Lisp machine operating system The Lisp Machine operating system was written in Lisp Machine Lisp. It was a one-user workstation initially targeted at software developers for artificial intelligence (AI) projects. The system had a large bitmap screen, a mouse, a keyboard, a network interface, a disk drive, and slots for expansion. The operating system was supporting this hardware and it provided (among others): code for a frontend processor means to boot the operating system virtual memory management garbage collection interface to various hardware: mouse, keyboard, bitmap frame buffer, disk, printer, network interface an interpreter and a native code compiler for Lisp Machine Lisp an object system: Flavors a graphical user interface (GUI) window system and window manager a local file system support for the Chaosnet (CHAOS) network an Emacs-like Editor named Zmacs a mail program named Zmail a Lisp listener a debugger This was already a complete one-user Lisp-based operating system and development environment. The MIT Lisp machine operating system was developed from the middle 1970s to the early 1980s. In 2006, the source code for this Lisp machine operating system from MIT was released as free and open-source software. Genera operating system Symbolics developed new Lisp machines and published the operating system under the name Genera. The latest version is 8.5. Symbolics Genera was developed in the early 1980s and early 1990s. In the final years, development entailed mostly patches, with very little new function. Symbolics developed Genera based on this foundation of the MIT Lisp machine operating system. It sells the operating system and layered software. Some of the layered software has been integrated into Genera in later releases. Symbolics improved the operating system software from the original MIT Lisp machine and expanded it. The Genera operating system was only available for Symbolics Lisp machines and the Open Genera virtual machine. Symbolics Genera has many features and supports all the versions of various hardware that Symbolics built over its life. Its source code is more than a million lines; the number depends on the release and what amount of software is installed. Symbolics Genera was published on magnetic tape and CD-ROM. The release of the operating system also provided most of the source code of the operating system and its applications. The user has free access to all parts of the running operating system and can write changes and extensions. 
The source code of the operating system is divided into systems. These systems bundle sources, binaries and other files. The system construction toolkit (SCT) maintains the dependencies, the components and the versions of all the systems. A system has two numbers: a major and a minor version number. The major version number counts the number of full constructions of a system. The minor version counts the number of patches to that system. A patch is a file that can be loaded to fix problems or provide extensions to a particular version of a system. Symbolics developed a version named Open Genera, that included a virtual machine that enabled executing Genera on DEC Alpha based workstations, plus several Genera extensions and applications that were sold separately (like the Symbolics S-Graphics suite). Also, they made a new operating system named Minima for embedded uses, in Common Lisp. The latest version is Portable Genera, which has the virtual machine ported to x86-64, Arm64 and Apple M1 processors - additionally to the DEC Alpha processor. The virtual machine then runs under the Linux and macOS, additionally to Tru64 UNIX. The original Lisp machine operating system was developed in Lisp Machine Lisp, using the Flavors object-oriented extension to that Lisp. Symbolics provided a successor to Flavors named New Flavors. Later Symbolics also supported Common Lisp and the Common Lisp Object System (CLOS). Then Symbolics Common Lisp became the default Lisp dialect for writing software with Genera. The software of the operating system was written mostly in Lisp Machine Lisp (named ZetaLisp) and Symbolics Common Lisp. These Lisp dialects are both provided by Genera. Also parts of the software was using either Flavors, New Flavors, and Common Lisp Object System. Some of the older parts of the Genera operating system have been rewritten in Symbolics Common Lisp and the Common Lisp Object system. Many parts of the operating systems remained written in ZetaLisp and Flavors (or New Flavors). User interface The early versions of Symbolics Genera were built with the original graphical user interface (GUI) windowing system of the Lisp machine operating system. Symbolics then developed a radically new windowing system named Dynamic Windows with a presentation-based user interface. This window system was introduced with Genera 7 in 1986. Many of the applications of Genera have then been using Dynamic Windows for their user interface. Eventually there was a move to port parts of the window system to run on other Common Lisp implementations by other vendors as the Common Lisp Interface Manager (CLIM). Versions of CLIM have been available (among others) for Allegro Common Lisp, LispWorks, and Macintosh Common Lisp. An open source version is available (McCLIM). Dynamic Windows uses typed objects for all output to the screen. All displayed information keeps its connection to the objects displayed (output recording). This works for both textual and graphical output. At runtime the applicable operations to these objects are computed based on the class hierarchy and the available operations (commands). Commands are organized in hierarchical command tables with typed parameters. Commands can be entered with the mouse (making extensive use of mouse chording), keystrokes, and with a command line interface. All applications share one command line interpreter implementation, which adapts to various types of usage. The graphical abilities of the window system are based on the PostScript graphics model. 
The user interface is mostly in monochrome (black-and-white) since that was what the hardware console typically provided. But extensive support exists for color, using color frame buffers or X Window System (X11) servers with color support. The activities (applications) use the whole screen with several panes, though windows can also be smaller. The layout of these activity windows adapts to different screen sizes. Activities can also switch between different pane layouts. Genera provides a system menu to control windows, switch applications, and operate the window system. Many features of the user interface (switching between activities, creating activities, stopping and starting processes, and much more) can also be controlled with keyboard commands. The Dynamic Lisp Listener is an example of a command line interface with full graphics abilities and support for mouse-based interaction. It accepts Lisp expressions and commands as input. The output is mouse sensitive. The Lisp listener can display forms to input data for the various built-in commands. The user interface provides extensive online help and context sensitive help, completion of choices in various contexts. Documentation Genera supports fully hyperlinked online documentation. The documentation is read with the Document Examiner, an early hypertext browser. The documentation is based on small reusable documentation records that can also be displayed in various contexts with the Editor and the Lisp Listener. The documentation is organized in books and sections. The books were also provided in printed versions with the same contents as the online documentation. The documentation database information is delivered with Genera and can be modified with incremental patches. The documentation was created with a separate application that was not shipped with Genera: Symbolics Concordia. Concordia provides an extension to the Zmacs editor for editing documentation records, a graphics editor and a page previewer. The documentation provides user guides, installation guidelines and references of the various Lisp constructs and libraries. The markup language is based on the Scribe markup language and also usable by the developer. Genera supports printing to postscript printers, provides a printing queue and also a PostScript interpreter (written in Lisp). Features Genera also has support for various network protocols and applications using those. It has extensive support for TCP/IP. Genera supports one-processor machines with several threads (called processes). Genera supports several different types of garbage collection (GC): full GC, in-place GC, incremental GC, and ephemeral GC. The ephemeral collector uses only physical memory and uses the memory management unit to get information about changed pages in physical memory. The collector uses generations and the virtual memory is divided into areas. Areas can contain objects of certain types (strings, bitmaps, pathnames, ...), and each area can use different memory management mechanisms. Genera implements two file systems: the FEP file system for large files and the Lisp Machine File System (LMFS) optimized for many small files. These systems also maintain different versions of files. If a file is modified, Genera still keeps the old versions. Genera also provides access to, can read from and write to, other, local and remote, file systems including: NFS, FTP, HFS, CD-ROMs, tape drives. Genera supports netbooting. Genera provides a client for the Statice object database from Symbolics. 
Genera makes extensive use of the condition system (exception handling) to handle all kinds of runtime errors, and it is able to recover from many of these errors. For example, it allows retrying network operations if a network connection has a failure; the application code will keep running. When errors occur, users are presented with a menu of restarts (abort, retry, continue options) that are specific to the error signalled. Genera has extensive debugging tools. Genera can save versions of the running system to worlds. These worlds can be booted and then will contain all the saved data and code. Programming languages Symbolics provided several programming languages for use with Genera:
ZetaLisp, the Symbolics version of Lisp Machine Lisp
Common Lisp in several versions: Symbolics Common Lisp, Future Common Lisp (ANSI Common Lisp), CLtL1
Symbolics Pascal, a version of Pascal written in Lisp (Lisp source is included in the Genera distribution)
Symbolics C, a version of C written in Lisp (Lisp source is included in the Genera distribution)
Symbolics Fortran, a version of Fortran written in Lisp (Lisp source is included in the Genera distribution)
Symbolics Common Lisp provides most of the Common Lisp standard with very many extensions, many of them coming from ZetaLisp. Other languages from Symbolics:
Symbolics Prolog, a version of Prolog written and integrated in Lisp
Symbolics Ada, a version of Ada written in Lisp
It is remarkable that these programming language implementations inherited some of the dynamic features of the Lisp system (like garbage collection and checked access to data) and supported incremental software development. Third-party developers provided more programming languages, such as OPS5, and development tools, such as the Knowledge Engineering Environment (KEE) from IntelliCorp. Applications Symbolics Genera comes with several applications. Applications are called activities. Some of the activities:
Zmacs, an Emacs-like text editor
Zmail, a mail reader also providing a calendar
File system browser with tools for file system maintenance
Lisp Listener with command-line interface
Document Examiner for browsing documentation
Restore Distribution, to install software
Distribute Systems, to create software distributions
Peek, to examine system information (processes, windows, network connections, ...)
Debugger
Namespace Editor, to access information about objects in the network (users, computers, file systems, ...)
Converse, a chat client
Terminal
Inspector, for browsing Lisp data structures
Notifications
Frame-Up, for designing user interfaces
Flavor Examiner, to examine the classes and methods of the Flavor object-oriented extension to Lisp
Other applications from Symbolics Symbolics sold several applications that run on Symbolics Genera:
Symbolics Concordia, a document production suite
Symbolics Joshua, an expert system shell
Symbolics Macsyma, a computer algebra system
Symbolics NS, a chip design tool
Symbolics Plexi, a neural network development tool
Symbolics S-Graphics, a suite of tools: S-Paint, S-Geometry, S-Dynamics, S-Render
Symbolics S-Utilities: S-Record, S-Compositor, S-Colorize, S-Convert
Symbolics Scope, digital image processing with a Pixar Image Computer
Symbolics Statice, an object database
Third-party applications Several companies developed and sold applications for Symbolics Genera.
Some examples:
Ascent Technology Gatekeeper, a rule-based resource manager for airports and airlines
Automated Reasoning Tool (ART), an expert system shell from Inference Corporation
ICAD, a 3D parametric CAD system
Illustrate, a graphics editor
Knowledge Engineering Environment (KEE), an expert system shell, from IntelliCorp
Knowledge Craft, an expert system shell, from Carnegie Group
Metal, a machine translation system from Siemens
Highlights Genera is written fully in Lisp, using ZetaLisp and Symbolics Common Lisp, including all low-level system code, such as device drivers, garbage collection, the process scheduler, network stacks, etc. The source code is more than a million lines of Lisp, yet relatively compact compared to the functionality provided, due to extensive reuse. It is also available for users to inspect and change.
The operating system is mostly written in an object-oriented style using Flavors, New Flavors, and CLOS
It has extensive online documentation readable with the Document Examiner
Dynamic Windows provides a presentation-based user interface
The user interface can be used locally (on Lisp Machines and MacIvories) and remotely (using X11)
Groups of developers can work together in a networked environment
A central namespace server provides a directory of machines, users, services, networks, file systems, databases, and more
There is little protection against changing the operating system. The whole system is fully accessible and changeable.
Limits Genera's limits include:
Only runs on Symbolics Lisp Machines or the Open Genera emulator.
Only one user can be logged in at once.
Only one Lisp system can run at once. Data and code are shared by applications and the operating system. However, multiple instances of Open Genera can run on one DEC Alpha.
Development effectively stopped in the mid-1990s.
Releases
1982 – Release 78
1982 – Release 210
1983 – Release 4.0
1984 – Release 5.0
1985 – Release 6.0, introduces Symbolics Common Lisp, the Ephemeral Object Garbage Collector, and Document Examiner
1986 – Genera 7.0, introduces Dynamic Windows
1990 – Genera 8.0, introduces CLOS
1991 – Genera 8.1, introduces CLIM
1992 – Genera 8.2
1993 – Genera 8.3
1993 – Open Genera 1.0, introduces the Virtual Lisp Machine
1998 – Open Genera 2.0
2021 – Portable Genera 2.0, the Virtual Lisp Machine ported to additional platforms
A stable version of Open Genera that can run on x86-64 or arm64 Linux, and on Apple M1 macOS, has been released and has been renamed Portable Genera. A hacked version of Open Genera that can run on x86-64 Linux exists.
References
External links
Symbolics Genera Integrated Development Environment
"Symbolics Technical Summary"
"Genera Concepts" web copy of Symbolics' introduction to Genera
Symbolics software documents at bitsavers.org
A page of screenshots of Genera
Screenshots of the award-winning Symbolics Document Examiner
"The Symbolics Virtual Lisp Machine, Or, Using The Dec Alpha As A Programmable Micro-engine"
"2013 Video Demonstration by Symbolics programmer Kalman Reti"
Common Lisp implementations Common Lisp (programming language) software Computing platforms Integrated development environments Lisp (programming language)-based operating systems Object-oriented operating systems
Genera (operating system)
[ "Technology" ]
3,443
[ "Computing platforms" ]
61,701
https://en.wikipedia.org/wiki/Venn%20diagram
A Venn diagram is a widely used diagram style that shows the logical relation between sets, popularized by John Venn (1834–1923) in the 1880s. The diagrams are used to teach elementary set theory, and to illustrate simple set relationships in probability, logic, statistics, linguistics and computer science. A Venn diagram uses simple closed curves drawn on a plane to represent sets. Very often, these curves are circles or ellipses. Similar ideas had been proposed before Venn, such as by Christian Weise in 1712 (Nucleus Logicae Weisianae) and Leonhard Euler (Letters to a German Princess) in 1768. The idea was popularised by Venn in Symbolic Logic, Chapter V "Diagrammatic Representation", published in 1881. Details A Venn diagram, also called a set diagram or logic diagram, shows all possible logical relations between a finite collection of different sets. These diagrams depict elements as points in the plane, and sets as regions inside closed curves. A Venn diagram consists of multiple overlapping closed curves, usually circles, each representing a set. The points inside a curve labelled S represent elements of the set S, while points outside the boundary represent elements not in the set S. This lends itself to intuitive visualizations; for example, the set of all elements that are members of both sets S and T, denoted S ∩ T and read "the intersection of S and T", is represented visually by the area of overlap of the regions S and T. In Venn diagrams, the curves are overlapped in every possible way, showing all possible relations between the sets. They are thus a special case of Euler diagrams, which do not necessarily show all relations. Venn diagrams were conceived around 1880 by John Venn. They are used to teach elementary set theory, as well as illustrate simple set relationships in probability, logic, statistics, linguistics, and computer science. A Venn diagram in which the area of each shape is proportional to the number of elements it contains is called an area-proportional (or scaled) Venn diagram. Example This example involves two sets of creatures, represented here as colored circles. The orange circle represents all types of creatures that have two legs. The blue circle represents creatures that can fly. Each separate type of creature can be imagined as a point somewhere in the diagram. Living creatures that have two legs and can fly—for example, parrots—are then in both sets, so they correspond to points in the region where the blue and orange circles overlap. This overlapping region would only contain those elements (in this example, creatures) that are members of both the orange set (two-legged creatures) and the blue set (flying creatures). Humans and penguins are bipedal, and so are in the orange circle, but since they cannot fly, they appear in the left part of the orange circle, where it does not overlap with the blue circle. Mosquitoes can fly, but have six, not two, legs, so the point for mosquitoes is in the part of the blue circle that does not overlap with the orange one. Creatures that are neither two-legged nor able to fly (for example, whales and spiders) would all be represented by points outside both circles. The combined region of the two sets is called their union, denoted by A ∪ B, where A is the orange circle and B the blue. The union in this case contains all living creatures that either are two-legged or can fly (or both). The region included in both A and B, where the two sets overlap, is called the intersection of A and B, denoted by A ∩ B.
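The creature example can be restated directly in terms of set operations. The following Python lines are only an illustration; the set names two_legged and can_fly are chosen for this sketch and the creatures are taken from the example above.

# The two sets from the example: two-legged creatures and flying creatures.
two_legged = {"human", "penguin", "parrot"}
can_fly = {"parrot", "mosquito"}

print(two_legged & can_fly)   # intersection: {'parrot'}, the overlapping region
print(two_legged | can_fly)   # union: every creature in at least one circle
print(two_legged - can_fly)   # orange-only region: {'human', 'penguin'}
print(can_fly - two_legged)   # blue-only region: {'mosquito'}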
History Venn diagrams were introduced in 1880 by John Venn in a paper entitled "On the Diagrammatic and Mechanical Representation of Propositions and Reasonings" in the Philosophical Magazine and Journal of Science, about the different ways to represent propositions by diagrams. The use of these types of diagrams in formal logic, according to Frank Ruskey and Mark Weston, predates Venn but are "rightly associated" with him as he "comprehensively surveyed and formalized their usage, and was the first to generalize them". Diagrams of overlapping circles representing unions and intersections were introduced by Catalan philosopher Ramon Llull (c. 1232–1315/1316) in the 13th century, who used them to illustrate combinations of basic principles. Gottfried Wilhelm Leibniz (1646–1716) produced similar diagrams in the 17th century (though much of this work was unpublished), as did Johann Christian Lange in a work from 1712 describing Christian Weise's contributions to logic. Euler diagrams, which are similar to Venn diagrams but don't necessarily contain all possible unions and intersections, were first made prominent by mathematician Leonhard Euler in the 18th century. Venn did not use the term "Venn diagram" and referred to the concept as "Eulerian Circles". He became acquainted with Euler diagrams in 1862 and wrote that Venn diagrams did not occur to him "till much later", while attempting to adapt Euler diagrams to Boolean logic. In the opening sentence of his 1880 article Venn wrote that Euler diagrams were the only diagrammatic representation of logic to gain "any general acceptance". Venn viewed his diagrams as a pedagogical tool, analogous to verification of physical concepts through experiment. As an example of their applications, he noted that a three-set diagram could show the syllogism: 'All A is some B. No B is any C. Hence, no A is any C.' Charles L. Dodgson (Lewis Carroll) includes "Venn's Method of Diagrams" as well as "Euler's Method of Diagrams" in an "Appendix, Addressed to Teachers" of his book Symbolic Logic (4th edition published in 1896). The term "Venn diagram" was later used by Clarence Irving Lewis in 1918, in his book A Survey of Symbolic Logic. In the 20th century, Venn diagrams were further developed. David Wilson Henderson showed, in 1963, that the existence of an n-Venn diagram with n-fold rotational symmetry implied that n was a prime number. He also showed that such symmetric Venn diagrams exist when n is five or seven. In 2002, Peter Hamburger found symmetric Venn diagrams for n = 11 and in 2003, Griggs, Killian, and Savage showed that symmetric Venn diagrams exist for all other primes. These combined results show that rotationally symmetric Venn diagrams exist, if and only if n is a prime number. Venn diagrams and Euler diagrams were incorporated as part of instruction in set theory, as part of the new math movement in the 1960s. Since then, they have also been adopted in the curriculum of other fields such as reading. Popular culture Venn diagrams have been commonly used in memes. At least one politician has been mocked for misusing Venn diagrams. Overview A Venn diagram is constructed with a collection of simple closed curves drawn in a plane. According to Lewis, the "principle of these diagrams is that classes [or sets] be represented by regions in such relation to one another that all the possible logical relations of these classes can be indicated in the same diagram. 
That is, the diagram initially leaves room for any possible relation of the classes, and the actual or given relation, can then be specified by indicating that some particular region is null or is not-null". Venn diagrams normally comprise overlapping circles. The interior of the circle symbolically represents the elements of the set, while the exterior represents elements that are not members of the set. For instance, in a two-set Venn diagram, one circle may represent the group of all wooden objects, while the other circle may represent the set of all tables. The overlapping region, or intersection, would then represent the set of all wooden tables. Shapes other than circles can be employed as shown below by Venn's own higher set diagrams. Venn diagrams do not generally contain information on the relative or absolute sizes (cardinality) of sets. That is, they are schematic diagrams generally not drawn to scale. Venn diagrams are similar to Euler diagrams. However, a Venn diagram for n component sets must contain all 2^n hypothetically possible zones, which correspond to some combination of inclusion or exclusion in each of the component sets. Euler diagrams contain only the actually possible zones in a given context. In Venn diagrams, a shaded zone may represent an empty zone, whereas in an Euler diagram, the corresponding zone is missing from the diagram. For example, if one set represents dairy products and another cheeses, the Venn diagram contains a zone for cheeses that are not dairy products. Assuming that in the context cheese means some type of dairy product, the Euler diagram has the cheese zone entirely contained within the dairy-product zone—there is no zone for (non-existent) non-dairy cheese. This means that as the number of contours increases, Euler diagrams are typically less visually complex than the equivalent Venn diagram, particularly if the number of non-empty intersections is small. The difference between Euler and Venn diagrams can be seen in the following example. Take the three sets: The Euler and the Venn diagram of those sets are: Extensions to higher numbers of sets Venn diagrams typically represent two or three sets, but there are forms that allow for higher numbers. Shown below, four intersecting spheres form the highest order Venn diagram that has the symmetry of a simplex and can be visually represented. The 16 intersections correspond to the vertices of a tesseract (or the cells of a 16-cell, respectively). For higher numbers of sets, some loss of symmetry in the diagrams is unavoidable. Venn was keen to find "symmetrical figures ... elegant in themselves," that represented higher numbers of sets, and he devised an elegant four-set diagram using ellipses (see below). He also gave a construction for Venn diagrams for any number of sets, where each successive curve that delimits a set interleaves with previous curves, starting with the three-circle diagram. Edwards–Venn diagrams Anthony William Fairbank Edwards constructed a series of Venn diagrams for higher numbers of sets by segmenting the surface of a sphere, which became known as Edwards–Venn diagrams. For example, three sets can be easily represented by taking three hemispheres of the sphere at right angles (x = 0, y = 0 and z = 0). A fourth set can be added to the representation, by taking a curve similar to the seam on a tennis ball, which winds up and down around the equator, and so on.
The resulting sets can then be projected back to a plane, to give cogwheel diagrams with increasing numbers of teeth—as shown here. These diagrams were devised while designing a stained-glass window in memory of Venn. Other diagrams Edwards–Venn diagrams are topologically equivalent to diagrams devised by Branko Grünbaum, which were based around intersecting polygons with increasing numbers of sides. They are also two-dimensional representations of hypercubes. Henry John Stephen Smith devised similar n-set diagrams using sine curves with the series of equations Charles Lutwidge Dodgson (also known as Lewis Carroll) devised a five-set diagram known as Carroll's square. Joaquin and Boyles, on the other hand, proposed supplemental rules for the standard Venn diagram, in order to account for certain problem cases. For instance, regarding the issue of representing singular statements, they suggest to consider the Venn diagram circle as a representation of a set of things, and use first-order logic and set theory to treat categorical statements as statements about sets. Additionally, they propose to treat singular statements as statements about set membership. So, for example, to represent the statement "a is F" in this retooled Venn diagram, a small letter "a" may be placed inside the circle that represents the set F. Related concepts Venn diagrams correspond to truth tables for the propositions , , etc., in the sense that each region of Venn diagram corresponds to one row of the truth table. This type is also known as Johnston diagram. Another way of representing sets is with John F. Randolph's R-diagrams. See also Existential graph (by Charles Sanders Peirce) Logical connective Information diagram Marquand diagram (and as further derivation Veitch chart and Karnaugh map) Spherical octahedron – A stereographic projection of a regular octahedron makes a three-set Venn diagram, as three orthogonal great circles, each dividing space into two halves. Stanhope Demonstrator Three circles model Triquetra Vesica piscis UpSet plot Notes References Further reading (NB. The book comes with a 3-page foldout of a seven-bit cylindrical Venn diagram.) External links Lewis Carroll's Logic Game – Venn vs. Euler at Cut-the-knot Six sets Venn diagrams made from triangles Interactive seven sets Venn diagram VBVenn, a free open source program for calculating and graphing quantitative two-circle Venn diagrams InteractiVenn, a web-based tool for visualizing Venn diagrams DeepVenn, a tool for creating area-proportional Venn Diagrams Graphical concepts in set theory Diagrams Statistical charts and diagrams Logical diagrams
Venn diagram
[ "Mathematics" ]
2,736
[ "Basic concepts in set theory", "Graphical concepts in set theory" ]
61,708
https://en.wikipedia.org/wiki/Shrub
A shrub or bush is a small-to-medium-sized perennial woody plant. Unlike herbaceous plants, shrubs have persistent woody stems above the ground. Shrubs can be either deciduous or evergreen. They are distinguished from trees by their multiple stems and shorter height, less than tall. Small shrubs, less than 2 m (6.6 ft) tall are sometimes termed as subshrubs. Many botanical groups have species that are shrubs, and others that are trees and herbaceous plants instead. Some define a shrub as less than and a tree as over 6 m. Others use as the cutoff point for classification. Many trees do not reach this mature height because of hostile, less than ideal growing conditions, and resemble shrub-sized plants. Others in such species have the potential to grow taller in ideal conditions. For longevity, most shrubs are classified between perennials and trees. Some only last about five years in good conditions. Others, usually larger and more woody, live beyond 70. On average, they die after eight years. Shrubland is the natural landscape dominated by various shrubs; there are many distinct types around the world, including fynbos, maquis, shrub-steppe, shrub swamp and moorland. In gardens and parks, an area largely dedicated to shrubs (now somewhat less fashionable than a century ago) is called a shrubbery, shrub border or shrub garden. There are many garden cultivars of shrubs, bred for flowering, for example rhododendrons, and sometimes even leaf colour or shape. Compared to trees and herbaceous plants, a small number of shrubs have culinary usage. Apart from the several berry-bearing species (using the culinary rather than botanical definition), few are eaten directly, and they are generally too small for much timber use unlike trees. Those that are used include several perfumed species such as lavender and rose, and a wide range of plants with medicinal uses. Tea and coffee are on the tree-shrub boundary; they are normally harvested from shrub-sized plants, but these would be large enough to become small trees if left to grow instead. Definition Shrubs are perennial woody plants, and therefore have persistent woody stems above ground (compare with succulent stems of herbaceous plants). Usually, shrubs are distinguished from trees by their height and multiple stems. Some shrubs are deciduous (e.g. hawthorn) and others evergreen (e.g. holly). Ancient Greek philosopher Theophrastus divided the plant world into trees, shrubs and herbs. Small, low shrubs, generally less than tall, such as lavender, periwinkle and most small garden varieties of rose, are often termed as subshrubs. Most definitions characterize shrubs as possessing multiple stems with no main trunk below. This is because the stems have branched below ground level. There are exceptions to this, with some shrubs having main trunks, but these tend to be very short and divide into multiple stems close to ground level without a reasonable length beforehand. Many trees can grow in multiple stemmed forms also while being tall enough to be trees, such as oak or ash. Use in gardens and parks An area of cultivated shrubs in a park or a garden is known as a shrubbery. When clipped as topiary, suitable species or varieties of shrubs develop dense foliage and many small leafy branches growing close together. Many shrubs respond well to renewal pruning, in which hard cutting back to a "stool", removes everything but vital parts of the plant, resulting in long new stems known as "canes". 
Other shrubs respond better to selective pruning to dead or unhealthy, or otherwise unattractive parts to reveal their structure and character. Shrubs in common garden practice are generally considered broad-leaved plants, though some smaller conifers such as mountain pine and common juniper are also shrubby in structure. Species that grow into a shrubby habit may be either deciduous or evergreen. Botanical structure In botany and ecology, a shrub is more specifically used to describe the particular physical canopy structure or plant life-form of woody plants which are less than high and usually multiple stems arising at or near the surface of the ground. For example, a descriptive system widely adopted in Australia is based on structural characteristics based on life-form, plus the height and amount of foliage cover of the tallest layer or dominant species. For shrubs that are high, the following structural forms are categorized: dense foliage cover (70–100%) — closed-shrubs mid-dense foliage cover (30–70%) — open-shrubs sparse foliage cover (10–30%) — tall shrubland very sparse foliage cover (<10%) — tall open shrubland For shrubs less than high, the following structural forms are categorized: dense foliage cover (70–100%) — closed-heath or closed low shrubland—(North America) mid-dense foliage cover (30–70%) — open-heath or mid-dense low shrubland—(North America) sparse foliage cover (10–30%) — low shrubland very sparse foliage cover (<10%) — low open shrubland List Those marked with * can also develop into tree form if in ideal conditions. A Abelia (Abelia) Acer (Maple) * Actinidia (Actinidia) Aloe (Aloe) Aralia (Angelica Tree, Hercules' Club) * Arctostaphylos (Bearberry, Manzanita) * Aronia (Chokeberry) Artemisia (Sagebrush) Aucuba (Aucuba) B Berberis (Barberry) Bougainvillea (Bougainvillea) Brugmansia (Angel's trumpet) Buddleja (Butterfly bush) Buxus (Box) * C Calia (Mescalbean) Callicarpa (Beautyberry) * Callistemon (Bottlebrush) * Calluna (Heather) Calycanthus (Sweetshrub) Camellia (Camellia, Tea) * Caragana (Pea-tree) * Carpenteria (Carpenteria) Caryopteris (Blue Spiraea) Cassiope (Moss-heather) Ceanothus (Ceanothus) * Celastrus (Staff vine) * Ceratostigma (Hardy Plumbago) Cercocarpus (Mountain-mahogany) * Chaenomeles (Japanese Quince) Chamaebatiaria (Fernbush) Chamaedaphne (Leatherleaf) Chimonanthus (Wintersweet) Chionanthus (Fringe-tree) * Choisya (Mexican-orange Blossom) * Cistus (Rockrose) Clerodendrum (Clerodendrum) Clethra (Summersweet, Pepperbush) * Clianthus (Glory Pea) Colletia (Colletia) Colutea (Bladder Senna) Comptonia (Sweetfern) Cornus (Dogwood) * Corylopsis (Winter-hazel) * Cotinus (Smoketree) * Cotoneaster (Cotoneaster) * Cowania (Cliffrose) Crataegus (Hawthorn) * Crinodendron (Crinodendron) * Cytisus and allied genera (Broom) * D Daboecia (Heath) Danae (Alexandrian laurel) Daphne (Daphne) Decaisnea (Decaisnea) Dasiphora (Shrubby Cinquefoil) Dendromecon (Tree poppy) Desfontainea (Desfontainea) Deutzia (Deutzia) Diervilla (Bush honeysuckle) Dipelta (Dipelta) Dirca (Leatherwood) Dracaena (Dragon tree) * Drimys (Winter's Bark) * Dryas (Mountain Avens) E Edgeworthia (Paper Bush) * Elaeagnus (Elaeagnus) * Embothrium (Chilean Firebush) * Empetrum (Crowberry) Enkianthus (Pagoda Bush) Ephedra (Ephedra) Epigaea (Trailing Arbutus) Erica (Heath) Eriobotrya (Loquat) * Escallonia (Escallonia) Eucryphia (Eucryphia) * Euonymus (Spindle) * Exochorda (Pearl Bush) F Fabiana (Fabiana) Fallugia (Apache Plume) Fatsia (Fatsia) Forsythia (Forsythia) Fothergilla (Fothergilla) Franklinia 
(Franklinia) * Fremontodendron (Flannelbush) Fuchsia (Fuchsia) * G Garrya (Silk-tassel) * Gaultheria (Salal) Gaylussacia (Huckleberry) Genista (Broom) * Gordonia (Loblolly-bay) * Grevillea (Grevillea) Griselinia (Griselinia) * H Hakea (Hakea) * Halesia (Silverbell) * Halimium (Rockrose) Hamamelis (Witch-hazel) * Hebe (Hebe) Hedera (Ivy) Helianthemum (Rockrose) Hibiscus (Hibiscus) * Hippophae (Sea-buckthorn) * Hoheria (Lacebark) * Holodiscus (Creambush) Hudsonia (Hudsonia) Hydrangea (Hydrangea) Hypericum (Rose of Sharon) Hyssopus (Hyssop) I Ilex (Holly) * Illicium (Star Anise) * Indigofera (Indigo) Itea (Sweetspire) J Jamesia (Cliffbush) Jasminum (Jasmine) Juniperus (Juniper) * K Kalmia (Mountain-laurel) Kerria (Kerria) Kolkwitzia (Beauty-bush) L Lagerstroemia (Crape-myrtle) * Lapageria (Copihue) Lantana (Lantana) Lavandula (Lavender) Lavatera (Tree Mallow) Ledum (Ledum) Leitneria (Corkwood) * Lespedeza (Bush Clover) * Leptospermum (Manuka) * Leucothoe (Doghobble) Leycesteria (Leycesteria) Ligustrum (Privet) * Lindera (Spicebush) * Linnaea (Twinflower) Lonicera (Honeysuckle) Lupinus (Tree Lupin) Lycium (Boxthorn) M Magnolia (Magnolia) Mahonia (Mahonia) Malpighia (Acerola) Menispermum (Moonseed) Menziesia (Menziesia) Mespilus (Medlar) * Microcachrys (Microcachrys) Myrica (Bayberry) * Myricaria (Myricaria) Myrtus and allied genera (Myrtle) * N Neillia (Neillia) Nerium (Oleander) O Olearia (Daisy bush) * Osmanthus (Osmanthus) P Pachysandra (Pachysandra) Paeonia (Tree-peony) Persoonia (Geebungs) Philadelphus (Mock orange) * Phlomis (Jerusalem Sage) Photinia (Photinia) * Physocarpus (Ninebark) * Pieris (Pieris) Pistacia (Pistachio, Mastic) * Pittosporum (Pittosporum) * Plumbago (Leadwort) Polygala (Milkwort) Poncirus * Prunus (Cherry) * Purshia (Antelope Bush) Pyracantha (Firethorn) Q Quassia (Quassia) * Quercus (Oak) * Quillaja (Quillay) Quintinia (Tawheowheo) * R Rhamnus (Buckthorn) * Rhododendron (Rhododendron, Azalea) * Rhus (Sumac) * Ribes (Currant, Gooseberry) Romneya (Tree poppy) Rosa (Rose) Rosmarinus (Rosemary) Rubus (Bramble, Raspberry, Salmonberry, Wineberry) Ruta (Rue) S Sabia * Salix (Willow) * Salvia (Sage) Sambucus (Elder) * Santolina (Lavender Cotton) Sapindus (Soapberry) * Senecio (Senecio) Simmondsia (Jojoba) Skimmia (Skimmia) Smilax (Smilax) Sophora (Kōwhai) * Sorbaria (Sorbaria) Spartium (Spanish Broom) Spiraea (Spiraea) * Staphylea (Bladdernut) * Stephanandra (Stephanandra) Styrax * Symphoricarpos (Snowberry) Syringa (Lilac) * T Tamarix (Tamarix) * Taxus (Yew) * Telopea (Waratah) * Thuja cvs. (Arborvitae) * Thymelaea Thymus (Thyme) Trochodendron * U Ulex (Gorse) Ulmus pumila celer (Turkestan elm – Wonder Hedge) Ungnadia (Mexican Buckeye) V Vaccinium (Bilberry, Blueberry, Cranberry) Verbesina centroboyacana Verbena (Vervain) Viburnum (Viburnum) * Vinca (Periwinkle) Viscum (Mistletoe) W Weigela (Weigela) X Xanthoceras Xanthorhiza (Yellowroot) Xylosma Y Yucca (Yucca, Joshua tree) * Z Zanthoxylum * Zauschneria Zenobia Ziziphus * References Plants Plant morphology Lists of plants Plant life-forms Plants by habit
Shrub
[ "Biology" ]
2,867
[ "Lists of plants", "Plants", "Lists of biota", "Plant morphology", "Plant life-forms" ]
61,750
https://en.wikipedia.org/wiki/Gumma%20%28pathology%29
A gumma (plural gummata or gummas) is a soft, non-cancerous growth resulting from the tertiary stage of syphilis (and yaws). It is a form of granuloma. Gummas are most commonly found in the liver (gumma hepatis), but can also be found in brain, heart, skin, bone, testis, and other tissues, leading to a variety of potential problems including neurological disorders or heart valve disease. Presentation Gummas have a firm, necrotic center surrounded by inflamed tissue, which forms an amorphous proteinaceous mass. The center may become partly hyalinized. These central regions begin to die through coagulative necrosis, though they also retain some of the structural characteristics of previously normal tissues, enabling a distinction from the granulomas of tuberculosis where caseous necrosis obliterates preexisting structures. Other histological features of gummas include an intervening zone containing epithelioid cells with indistinct borders and multinucleated giant cells, and a peripheral zone of fibroblasts and capillaries. Infiltration of lymphocytes and plasma cells can be seen in the peripheral zone as well. With time, gummas eventually undergo fibrous degeneration, leaving behind an irregular scar or a round fibrous nodule. It is restricted to necrosis involving spirochaetal infections that cause syphilis. Growths that have the appearance of gummas are described as gummatous. Pathology In syphilis, the gumma is caused by a reaction to spirochaete bacteria in the tissue. It appears to be the human body's way to slow down the action of this bacteria; it is a unique immune response that develops in humans after the immune system fails to kill off syphilis. Epidemiology The formation of gummata is rare in developed countries, but common in areas that lack adequate medical treatment. Syphilitic gummas are found in most but not all cases of tertiary syphilis, and can occur either singly or in groups. Gummatous lesions are usually associated with long-term syphilitic infection; however, such lesions can also be a symptom of benign late syphilis. References External links Histopathology Sexually transmitted diseases and infections Necrosis Syphilis
Gumma (pathology)
[ "Chemistry", "Biology" ]
490
[ "Necrosis", "Cellular processes", "Histopathology", "Microscopy" ]
61,762
https://en.wikipedia.org/wiki/Class%20%28biology%29
In biological classification, class (Latin: classis) is a taxonomic rank, as well as a taxonomic unit, a taxon, in that rank. It is a group of related taxonomic orders. Other well-known ranks in descending order of size are life, domain, kingdom, phylum, order, family, genus, and species, with class ranking between phylum and order. History The class as a distinct rank of biological classification having its own distinctive name – and not just called a top-level genus (genus summum) – was first introduced by French botanist Joseph Pitton de Tournefort in the classification of plants that appeared in his Eléments de botanique of 1694. Insofar as a general definition of a class is available, it has historically been conceived as embracing taxa that combine a distinct grade of organization—i.e. a 'level of complexity', measured in terms of how differentiated their organ systems are into distinct regions or sub-organs—with a distinct type of construction, which is to say a particular layout of organ systems. This said, the composition of each class is ultimately determined by the subjective judgment of taxonomists. In the first edition of his Systema Naturae (1735), Carl Linnaeus divided all three of his kingdoms of nature (minerals, plants, and animals) into classes. Only in the animal kingdom are Linnaeus's classes similar to the classes used today; his classes and orders of plants were never intended to represent natural groups, but rather to provide a convenient "artificial key" according to his Systema Sexuale, largely based on the arrangement of flowers. In botany, classes are now rarely discussed. Since the first publication of the APG system in 1998, which proposed a taxonomy of the flowering plants up to the level of orders, many sources have preferred to treat ranks higher than orders as informal clades. Where formal ranks have been assigned, the ranks have been reduced to a very much lower level, e.g. class Equisetopsida for the land plants, with the major divisions within the class assigned to subclasses and superorders. The class was considered the highest level of the taxonomic hierarchy until George Cuvier's embranchements, first called Phyla by Ernst Haeckel, were introduced in the early nineteenth century. See also Cladistics List of animal classes Phylogenetics Systematics Taxonomy Explanatory notes References Bacterial nomenclature Zoological nomenclature Class Plant taxonomy
Class (biology)
[ "Biology" ]
497
[ "Zoological nomenclature", "Botanical nomenclature", "Plants", "Bacterial nomenclature", "Botanical terminology", "Biological nomenclature", "Plant taxonomy", "Bacteria" ]
61,763
https://en.wikipedia.org/wiki/Order%20%28biology%29
Order (Latin: ordo) is one of the eight major hierarchical taxonomic ranks in Linnaean taxonomy. It is classified between family and class. In biological classification, the order is a taxonomic rank used in the classification of organisms and recognized by the nomenclature codes. An immediately higher rank, superorder, is sometimes added directly above order, with suborder directly beneath order. An order can also be defined as a group of related families. What does and does not belong to each order is determined by a taxonomist, as is whether a particular order should be recognized at all. Often there is no exact agreement, with different taxonomists each taking a different position. There are no hard rules that a taxonomist needs to follow in describing or recognizing an order. Some taxa are accepted almost universally, while others are recognized only rarely. The name of an order is usually written with a capital letter. For some groups of organisms, their orders may follow consistent naming schemes. Orders of plants, fungi, and algae use the suffix -ales (e.g. Dictyotales). Orders of birds and fishes use the Latin suffix -iformes, meaning 'having the form of' (e.g. Passeriformes), but orders of mammals and invertebrates are not so consistent (e.g. Artiodactyla, Actiniaria, Primates). Hierarchy of ranks Zoology For some clades covered by the International Code of Zoological Nomenclature, several additional classifications are sometimes used, although not all of these are officially recognized. In their 1997 classification of mammals, McKenna and Bell used two extra levels between superorder and order: grandorder and mirorder. Michael Novacek (1986) inserted them at the same position. Michael Benton (2005) inserted them between superorder and magnorder instead. This position was adopted by Systema Naturae 2000 and others. Botany In botany, the ranks of subclass and suborder are secondary ranks pre-defined as respectively above and below the rank of order. Any number of further ranks can be used as long as they are clearly defined. The superorder rank is commonly used, with the ending -anae that was initiated by Armen Takhtajan's publications from 1966 onwards. History The order as a distinct rank of biological classification having its own distinctive name (and not just called a higher genus (genus summum)) was first introduced by the German botanist Augustus Quirinus Rivinus in his classification of plants that appeared in a series of treatises in the 1690s. Carl Linnaeus was the first to apply it consistently to the division of all three kingdoms of nature (then minerals, plants, and animals) in his Systema Naturae (1735, 1st. Ed.). Botany For plants, Linnaeus' orders in the Systema Naturae and the Species Plantarum were strictly artificial, introduced to subdivide the artificial classes into more comprehensible smaller groups. When the word ordo was first consistently used for natural units of plants, in 19th-century works such as the Prodromus Systematis Naturalis Regni Vegetabilis of Augustin Pyramus de Candolle and the Genera Plantarum of Bentham & Hooker, it indicated taxa that are now given the rank of family (see ordo naturalis, 'natural order'). In French botanical publications, from Michel Adanson's Familles des plantes (1763) and until the end of the 19th century, the word famille (plural: familles) was used as a French equivalent for this Latin ordo. This equivalence was explicitly stated in Alphonse de Candolle's Lois de la nomenclature botanique (1868), the precursor of the currently used International Code of Nomenclature for algae, fungi, and plants.
In the first international Rules of botanical nomenclature from the International Botanical Congress of 1905, the word family (familia) was assigned to the rank indicated by the French famille, while order (ordo) was reserved for a higher rank, for what in the 19th century had often been named a "cohort" (cohors, plural cohortes). Some of the plant families still retain the names of Linnaean "natural orders" or even the names of pre-Linnaean natural groups recognized by Linnaeus as orders in his natural classification (e.g. Palmae or Labiatae). Such names are known as descriptive family names. Zoology In the field of zoology, the Linnaean orders were used more consistently. That is, the orders in the zoology part of the Systema Naturae refer to natural groups. Some of his ordinal names are still in use, e.g. Lepidoptera (moths and butterflies) and Diptera (flies, mosquitoes, midges, and gnats). Virology In virology, the International Committee on Taxonomy of Viruses's virus classification includes fifteen taxonomic ranks to be applied for viruses, viroids and satellite nucleic acids: realm, subrealm, kingdom, subkingdom, phylum, subphylum, class, subclass, order, suborder, family, subfamily, genus, subgenus, and species. There are currently fourteen viral orders, each ending in the suffix -virales. See also Biological classification Cladistics Phylogenetics Systematics Taxonomic rank Taxonomy Virus classification References Works cited Botanical nomenclature Plant taxonomy Bacterial nomenclature
Order (biology)
[ "Biology" ]
1,071
[ "Zoological nomenclature", "Botanical nomenclature", "Plants", "Bacterial nomenclature", "Botanical terminology", "Biological nomenclature", "Plant taxonomy", "Bacteria" ]
61,839
https://en.wikipedia.org/wiki/Inner%20automorphism
In abstract algebra, an inner automorphism is an automorphism of a group, ring, or algebra given by the conjugation action of a fixed element, called the conjugating element. They can be realized via operations from within the group itself, hence the adjective "inner". These inner automorphisms form a subgroup of the automorphism group, and the quotient of the automorphism group by this subgroup is defined as the outer automorphism group. Definition If G is a group and g is an element of G (alternatively, if G is a ring, and g is a unit), then the function φ_g : G → G defined by φ_g(x) = g⁻¹xg is called (right) conjugation by g (see also conjugacy class). This function is an endomorphism of G: φ_g(xy) = g⁻¹xyg = (g⁻¹xg)(g⁻¹yg) = φ_g(x)φ_g(y) for all x, y in G, where the second equality is given by the insertion of the identity gg⁻¹ between x and y. Furthermore, it has a left and right inverse, namely conjugation by g⁻¹. Thus, φ_g is both a monomorphism and an epimorphism, and so an isomorphism of G with itself, i.e. an automorphism. An inner automorphism is any automorphism that arises from conjugation. When discussing right conjugation, the expression g⁻¹xg is often denoted exponentially by x^g. This notation is used because composition of conjugations satisfies the identity: (x^g)^h = x^(gh) for all g, h in G. This shows that right conjugation gives a right action of G on itself. A common example is as follows: Describe a homomorphism Φ for which the image, Φ(G), is a normal subgroup of inner automorphisms of a group G; alternatively, describe a natural homomorphism of which the kernel of Φ is the center of G (all g in G for which conjugating by them returns the trivial automorphism), in other words, ker Φ = Z(G). There is always a natural homomorphism Φ : G → Aut(G), which associates to every g in G an (inner) automorphism φ_g in Aut(G). Put identically, Φ(g) = φ_g. Let φ_g(x) = g⁻¹xg as defined above. This requires demonstrating that (1) φ_g is a homomorphism, (2) φ_g is also a bijection, (3) Φ is a homomorphism. The condition for bijectivity may be verified by simply presenting an inverse such that we can return to x from φ_g(x). In this case it is conjugation by g⁻¹: φ_g⁻¹(φ_g(x)) = g(g⁻¹xg)g⁻¹ = x and φ_g(φ_g⁻¹(x)) = g⁻¹(gxg⁻¹)g = x. Inner and outer automorphism groups The composition of two inner automorphisms is again an inner automorphism, and with this operation, the collection of all inner automorphisms of G is a group, the inner automorphism group of G denoted Inn(G). Inn(G) is a normal subgroup of the full automorphism group Aut(G) of G. The outer automorphism group, Out(G), is the quotient group Out(G) = Aut(G)/Inn(G). The outer automorphism group measures, in a sense, how many automorphisms of G are not inner. Every non-inner automorphism yields a non-trivial element of Out(G), but different non-inner automorphisms may yield the same element of Out(G). Saying that conjugation of x by g leaves x unchanged is equivalent to saying that g and x commute: g⁻¹xg = x if and only if xg = gx. Therefore the existence and number of inner automorphisms that are not the identity mapping is a kind of measure of the failure of the commutative law in the group (or ring). An automorphism of a group G is inner if and only if it extends to every group containing G. By associating the element g with the inner automorphism φ_g in Inn(G) as above, one obtains an isomorphism between the quotient group G/Z(G) (where Z(G) is the center of G) and the inner automorphism group: G/Z(G) ≅ Inn(G). This is a consequence of the first isomorphism theorem, because Z(G) is precisely the set of those elements of G that give the identity mapping as corresponding inner automorphism (conjugation changes nothing). Non-inner automorphisms of finite p-groups A result of Wolfgang Gaschütz says that if G is a finite non-abelian p-group, then G has an automorphism of p-power order which is not inner. It is an open problem whether every non-abelian p-group has an automorphism of order p.
The latter question has a positive answer whenever G satisfies one of the following conditions:
G is nilpotent of class 2
G is a regular p-group
G is a powerful p-group
The centralizer in G of the center of the Frattini subgroup, C_G(Z(Φ(G))), is not equal to Φ(G)
Types of groups The inner automorphism group of a group G, Inn(G), is trivial (i.e., consists only of the identity element) if and only if G is abelian. The group Inn(G) is cyclic only when it is trivial. At the opposite end of the spectrum, the inner automorphisms may exhaust the entire automorphism group; a group whose automorphisms are all inner and whose center is trivial is called complete. This is the case for all of the symmetric groups on n elements when n is not 2 or 6. When n = 6, the symmetric group has a unique non-trivial class of non-inner automorphisms, and when n = 2, the symmetric group, despite having no non-inner automorphisms, is abelian, giving a non-trivial center, disqualifying it from being complete. If the inner automorphism group of a perfect group G is simple, then G is called quasisimple. Lie algebra case An automorphism of a Lie algebra 𝔤 is called an inner automorphism if it is of the form Ad_g, where Ad is the adjoint map and g is an element of a Lie group whose Lie algebra is 𝔤. The notion of inner automorphism for Lie algebras is compatible with the notion for groups in the sense that an inner automorphism of a Lie group induces a unique inner automorphism of the corresponding Lie algebra. Extension If G is the group of units of a ring A, then an inner automorphism on G can be extended to a mapping on the projective line over A by the group of units of the matrix ring M_2(A). In particular, the inner automorphisms of the classical groups can be extended in that way. References Further reading Group theory Group automorphisms
Inner automorphism
[ "Mathematics" ]
1,177
[ "Functions and mappings", "Mathematical objects", "Group theory", "Fields of abstract algebra", "Mathematical relations", "Group automorphisms" ]
61,866
https://en.wikipedia.org/wiki/Max%20Born
Max Born (; 11 December 1882 – 5 January 1970) was a German-British theoretical physicist who was instrumental in the development of quantum mechanics. He also made contributions to solid-state physics and optics and supervised the work of a number of notable physicists in the 1920s and 1930s. Born was awarded the 1954 Nobel Prize in Physics for his "fundamental research in quantum mechanics, especially in the statistical interpretation of the wave function". Born entered the University of Göttingen in 1904, where he met the three renowned mathematicians Felix Klein, David Hilbert, and Hermann Minkowski. He wrote his PhD thesis on the subject of the stability of elastic wires and tapes, winning the university's Philosophy Faculty Prize. In 1905, he began researching special relativity with Minkowski, and subsequently wrote his habilitation thesis on the Thomson model of the atom. A chance meeting with Fritz Haber in Berlin in 1918 led to discussion of how an ionic compound is formed when a metal reacts with a halogen, which is today known as the Born–Haber cycle. In World War I he was originally placed as a radio operator, but his specialist knowledge led to his being moved to research duties on sound ranging. In 1921 Born returned to Göttingen, where he arranged another chair for his long-time friend and colleague James Franck. Under Born, Göttingen became one of the world's foremost centres for physics. In 1925 Born and Werner Heisenberg formulated the matrix mechanics representation of quantum mechanics. The following year, he formulated the now-standard interpretation of the probability density function for ψ*ψ in the Schrödinger equation, for which he was awarded the Nobel Prize in 1954. His influence extended far beyond his own research. Max Delbrück, Siegfried Flügge, Friedrich Hund, Pascual Jordan, Maria Goeppert-Mayer, Lothar Wolfgang Nordheim, Robert Oppenheimer, and Victor Weisskopf all received their PhD degrees under Born at Göttingen, and his assistants included Enrico Fermi, Werner Heisenberg, Gerhard Herzberg, Friedrich Hund, Wolfgang Pauli, Léon Rosenfeld, Edward Teller, and Eugene Wigner. In January 1933, the Nazi Party came to power in Germany, and Born, who was Jewish, was suspended from his professorship at the University of Göttingen. He emigrated to the United Kingdom, where he took a job at St John's College, Cambridge, and wrote a popular science book, The Restless Universe, as well as Atomic Physics, which soon became a standard textbook. In October 1936, he became the Tait Professor of Natural Philosophy at the University of Edinburgh, where, working with German-born assistants E. Walter Kellermann and Klaus Fuchs, he continued his research into physics. Born became a naturalised British subject on 31 August 1939, one day before World War II broke out in Europe. He remained in Edinburgh until 1952. He retired to Bad Pyrmont, in West Germany, and died in a hospital in Göttingen on 5 January 1970. Early life Max Born was born on 11 December 1882 in Breslau (now Wrocław, Poland), which at the time of Born's birth was part of the Prussian Province of Silesia in the German Empire, to a family of Jewish descent. He was one of two children born to Gustav Born, an anatomist and embryologist, who was a professor of embryology at the University of Breslau, and his wife Margarethe (Gretchen) née Kauffmann, from a Silesian family of industrialists. She died when Max was four years old, on 29 August 1886. 
Max had a sister, Käthe, who was born in 1884, and a half-brother, Wolfgang, from his father's second marriage, to Bertha Lipstein. Wolfgang later became Professor of Art History at the City College of New York. Initially educated at the König-Wilhelm-Gymnasium in Breslau, Born entered the University of Breslau in 1901. The German university system allowed students to move easily from one university to another, so he spent summer semesters at Heidelberg University in 1902 and the University of Zurich in 1903. Fellow students at Breslau, Otto Toeplitz and Ernst Hellinger, told Born about the University of Göttingen, and Born went there in April 1904. At Göttingen he found three renowned mathematicians: Felix Klein, David Hilbert and Hermann Minkowski. Very soon after his arrival, Born formed close ties to the latter two men. From the first class he took with Hilbert, Hilbert identified Born as having exceptional abilities and selected him as the lecture scribe, whose function was to write up the class notes for the students' mathematics reading room at the University of Göttingen. Being class scribe put Born into regular, invaluable contact with Hilbert. Hilbert became Born's mentor after selecting him to be the first to hold the unpaid, semi-official position of assistant. Born's introduction to Minkowski came through Born's stepmother, Bertha, as she knew Minkowski from dancing classes in Königsberg. The introduction netted Born invitations to the Minkowski household for Sunday dinners. In addition, while performing his duties as scribe and assistant, Born often saw Minkowski at Hilbert's house. Born's relationship with Klein was more problematic. Born attended a seminar conducted by Klein and professors of applied mathematics, Carl Runge and Ludwig Prandtl, on the subject of elasticity. Although not particularly interested in the subject, Born was obliged to present a paper. He presented one in which, taking the simple case of a curved wire with both ends fixed, he used Hilbert's calculus of variations to determine the configuration that would minimise potential energy and therefore be the most stable. Klein was impressed, and invited Born to submit a thesis on the subject of "Stability of Elastica in a Plane and Space" – a subject near and dear to Klein – which Klein had arranged to be the subject for the prestigious annual Philosophy Faculty Prize offered by the university. Entries could also qualify as doctoral dissertations. Born responded by turning down the offer, as applied mathematics was not his preferred area of study. Klein was greatly offended. Klein had the power to make or break academic careers, so Born felt compelled to atone by submitting an entry for the prize. Because Klein refused to supervise him, Born arranged for Carl Runge to be his supervisor. Woldemar Voigt and Karl Schwarzschild became his other examiners. Starting from his paper, Born developed the equations for the stability conditions. As he became more interested in the topic, he had an apparatus constructed that could test his predictions experimentally. On 13 June 1906, the rector announced that Born had won the prize. A month later, he passed his oral examination and was awarded his PhD in mathematics magna cum laude. On graduation, Born was obliged to perform his military service, which he had deferred while a student. He found himself drafted into the German army, and posted to the 2nd Guards Dragoons "Empress Alexandra of Russia", which was stationed in Berlin. 
His service was brief, as he was discharged early after an asthma attack in January 1907. He then travelled to England, where he was admitted to Gonville and Caius College, Cambridge, and studied physics for six months at the Cavendish Laboratory under J. J. Thomson, George Searle and Joseph Larmor. After Born returned to Germany, the Army re-inducted him, and he served with the elite 1st (Silesian) Life Cuirassiers "Great Elector" until he was again medically discharged after just six weeks' service. He then returned to Breslau, where he worked under the supervision of Otto Lummer and Ernst Pringsheim, hoping to do his habilitation in physics. A minor accident involving Born's black body experiment, a ruptured cooling water hose, and a flooded laboratory, led to Lummer telling him that he would never become a physicist. In 1905, Albert Einstein published his paper On the Electrodynamics of Moving Bodies about special relativity. Born was intrigued, and began researching the subject. He was devastated to discover that Minkowski was also researching special relativity along the same lines, but when he wrote to Minkowski about his results, Minkowski asked him to return to Göttingen and do his habilitation there. Born accepted. Toeplitz helped Born brush up on his matrix algebra so he could work with the four-dimensional Minkowski space matrices used in the latter's project to reconcile relativity with electrodynamics. Born and Minkowski got along well, and their work made good progress, but Minkowski died suddenly of appendicitis on 12 January 1909. The mathematics students had Born speak on their behalf at the funeral. A few weeks later, Born attempted to present their results at a meeting of the Göttingen Mathematics Society. He did not get far before he was publicly challenged by Klein and Max Abraham, who rejected relativity, forcing him to terminate the lecture. However, Hilbert and Runge were interested in Born's work, and, after some discussion with Born, they became convinced of the veracity of his results and persuaded him to give the lecture again. This time he was not interrupted, and Voigt offered to sponsor Born's habilitation thesis. Born subsequently published his talk as an article on "The Theory of the Rigid Electron in the Kinematics of the Principle of Relativity" (), which introduced the concept of Born rigidity. On 23 October Born presented his habilitation lecture on the Thomson model of the atom. Career Berlin and Frankfurt Born settled in as a young academic at Göttingen as a . In Göttingen, Born stayed at a boarding house run by Sister Annie at Dahlmannstraße 17, known as El BoKaReBo. The name was derived from the first letters of the last names of its boarders: "El" for Ella Philipson (a medical student), "Bo" for Born and Hans Bolza (a physics student), "Ka" for Theodore von Kármán (a ), and "Re" for Albrecht Renner (another medical student). A frequent visitor to the boarding house was Paul Peter Ewald, a doctoral student of Arnold Sommerfeld on loan to Hilbert at Göttingen as a special assistant for physics. Richard Courant, a mathematician and , called these people the "in group". In 1912, Born met Hedwig (Hedi) Ehrenberg, the daughter of a Leipzig University law professor, and a friend of Carl Runge's daughter Iris. She was of Jewish background on her father's side, although he had become a practising Lutheran when he got married, as did Max's sister Käthe. 
Despite never practising his religion, Born refused to convert, and his wedding on 2 August 1913 was a garden ceremony. However, he was baptised as a Lutheran in March 1914 by the same pastor who had performed his wedding ceremony. Born regarded "religious professions and churches as a matter of no importance". His decision to be baptised was made partly in deference to his wife, and partly due to his desire to assimilate into German society. The marriage produced three children: two daughters, Irene, born in 1914, and Margarethe (Gritli), born in 1915, and a son, Gustav, born in 1921. Through marriage, Born is related to jurists Victor Ehrenberg, his father-in-law, and Rudolf von Jhering, his wife's maternal grandfather, as well as to philosopher and theologian Hans Ehrenberg, and is a great uncle of British comedian Ben Elton. By the end of 1913, Born had published 27 papers, including important work on relativity and the dynamics of crystal lattices (3 with Theodore von Karman), which became a book. In 1914, he received a letter from Max Planck explaining that a new professor extraordinarius chair of theoretical physics had been created at the University of Berlin. The chair had been offered to Max von Laue, but he had turned it down. Born accepted. The First World War was now raging. Soon after arriving in Berlin in 1915, he enlisted in an Army signals unit. In October, he joined the Artillerie Prüfungskommission, the Army's Berlin-based artillery research and development organisation, under Rudolf Ladenburg, who had established a special unit dedicated to the new technology of sound ranging. In Berlin, Born formed a lifelong friendship with Einstein, who became a frequent visitor to Born's home. Within days of the armistice in November 1918, Planck had the Army release Born. A chance meeting with Fritz Haber that month led to discussion of the manner in which an ionic compound is formed when a metal reacts with a halogen, which is today known as the Born–Haber cycle. Even before Born had taken up the chair in Berlin, von Laue had changed his mind, and decided that he wanted it after all. He arranged with Born and the faculties concerned for them to exchange jobs. In April 1919, Born became professor ordinarius and Director of the Institute of Theoretical Physics on the science faculty at the University of Frankfurt am Main. While there, he was approached by the University of Göttingen, which was looking for a replacement for Peter Debye as Director of the Physical Institute. "Theoretical physics," Einstein advised him, "will flourish wherever you happen to be; there is no other Born to be found in Germany today." In negotiating for the position with the education ministry, Born arranged for another chair, of experimental physics, at Göttingen for his long-time friend and colleague James Franck. In 1919 Elisabeth Bormann joined the Institut für Theoretische Physik as his assistant. She developed the first atomic beams. Working with Born, Bormann was the first to measure the free path of atoms in gases and the size of molecules. Göttingen For the 12 years Born and Franck were at the University of Göttingen (1921 to 1933), Born had a collaborator with shared views on basic scientific concepts—a benefit for teaching and research. 
Born's collaborative approach with experimental physicists was similar to that of Arnold Sommerfeld at the University of Munich, who was ordinarius professor of theoretical physics and Director of the Institute of Theoretical Physics—also a prime mover in the development of quantum theory. Born and Sommerfeld collaborated with experimental physicists to test and advance their theories. In 1922, when lecturing in the United States at the University of Wisconsin–Madison, Sommerfeld sent his student Werner Heisenberg to be Born's assistant. Heisenberg returned to Göttingen in 1923, where he completed his habilitation under Born in 1924, and became a Privatdozent at Göttingen. In 1919 and 1920, Max Born became displeased by the large number of objections to Einstein's relativity, and gave speeches in the winter of 1919 in support of Einstein. Born received pay for his relativity speeches, which helped with expenses through the year of rapid inflation. The speeches, given in German, became a book published in 1920, of which Einstein received the proofs before publication. A third edition was published in 1922 and an English translation was published in 1924. Born represented light speed as a function of curvature: "the velocity of light is much greater for some directions of the light ray than its ordinary value c, and other bodies can also attain much greater velocities." In 1925, Born and Heisenberg formulated the matrix mechanics representation of quantum mechanics. On 9 July, Heisenberg gave Born a paper entitled Über quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen ("Quantum-Theoretical Re-interpretation of Kinematic and Mechanical Relations") to review and submit for publication. In the paper, Heisenberg formulated quantum theory, avoiding the concrete, but unobservable, representations of electron orbits by using parameters such as transition probabilities for quantum jumps, which necessitated using two indexes corresponding to the initial and final states. When Born read the paper, he recognized the formulation as one which could be transcribed and extended to the systematic language of matrices, which he had learned from his study under Jakob Rosanes at Breslau University. Up until this time, matrices were seldom used by physicists; they were considered to belong to the realm of pure mathematics. Gustav Mie had used them in a paper on electrodynamics in 1912, and Born had used them in his work on the lattice theory of crystals in 1921. While matrices were used in these cases, the algebra of matrices with their multiplication did not enter the picture as they did in the matrix formulation of quantum mechanics. With the help of his assistant and former student Pascual Jordan, Born began immediately to make a transcription and extension, and they submitted their results for publication; the paper was received for publication just 60 days after Heisenberg's paper. A follow-on paper was submitted for publication before the end of the year by all three authors. The result was a surprising formulation: pq − qp = (h/2πi) I, where p and q were matrices for location and momentum, and I is the identity matrix. The left hand side of the equation is not zero because matrix multiplication is not commutative. This formulation was entirely attributable to Born, who also established that all the elements not on the diagonal of the matrix were zero. Born considered that his paper with Jordan contained "the most important principles of quantum mechanics including its extension to electrodynamics." 
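As a minimal numerical illustration of this relation (a sketch, not Born's own calculation), the non-commutativity can be checked directly with truncated harmonic-oscillator matrices in Python with NumPy; ħ is set to 1, and the deviation in the final diagonal entry is purely an artifact of cutting the infinite matrices off at a finite size.

```python
# Illustrative sketch only: finite truncations of the harmonic-oscillator
# position and momentum matrices (hbar = 1). In infinite dimensions
# pq - qp = (h / 2 pi i) I = -i * I; truncation spoils only the last
# diagonal entry.
import numpy as np

N = 6                                     # truncation size (arbitrary choice)
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator matrix
q = (a + a.conj().T) / np.sqrt(2)         # "location" matrix
p = 1j * (a.conj().T - a) / np.sqrt(2)    # momentum matrix

print(np.round(p @ q - q @ p, 10))        # ~ -i on the diagonal, zero elsewhere,
                                          # apart from the truncation artifact
                                          # in the bottom-right corner
```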
The paper put Heisenberg's approach on a solid mathematical basis. Born was surprised to discover that Paul Dirac had been thinking along the same lines as Heisenberg. Soon, Wolfgang Pauli used the matrix method to calculate the energy values of the hydrogen atom and found that they agreed with the Bohr model. Another important contribution was made by Erwin Schrödinger, who looked at the problem using wave mechanics. This had a great deal of appeal to many at the time, as it offered the possibility of returning to deterministic classical physics. Born would have none of this, as it ran counter to facts determined by experiment. He formulated the now-standard interpretation of the probability density function for ψ*ψ in the Schrödinger equation, which he published in July 1926. In a letter to Born on 4 December 1926, Einstein made his famous remark regarding quantum mechanics, often paraphrased as 'God does not play dice'. In 1928, Einstein nominated Heisenberg, Born, and Jordan for the Nobel Prize in Physics, but Heisenberg alone won the 1932 Prize "for the creation of quantum mechanics, the application of which has led to the discovery of the allotropic forms of hydrogen", while Schrödinger and Dirac shared the 1933 Prize "for the discovery of new productive forms of atomic theory". On 25 November 1933, Born received a letter from Heisenberg in which he said he had been delayed in writing due to a "bad conscience" that he alone had received the Prize "for work done in Göttingen in collaboration—you, Jordan and I." Heisenberg went on to say that Born and Jordan's contribution to quantum mechanics cannot be changed by "a wrong decision from the outside." In 1954, Heisenberg wrote an article honouring Planck for his insight in 1900, in which he credited Born and Jordan for the final mathematical formulation of matrix mechanics and went on to stress how great their contributions were to quantum mechanics, which were not "adequately acknowledged in the public eye." Those who received their PhD degrees under Born at Göttingen included Max Delbrück, Siegfried Flügge, Friedrich Hund, Pascual Jordan, Maria Goeppert-Mayer, Lothar Wolfgang Nordheim, Robert Oppenheimer, and Victor Weisskopf. Born's assistants at the University of Göttingen's Institute for Theoretical Physics included Enrico Fermi, Werner Heisenberg, Gerhard Herzberg, Friedrich Hund, Pascual Jordan, Wolfgang Pauli, Léon Rosenfeld, Edward Teller, and Eugene Wigner. Walter Heitler became an assistant to Born in 1928, and completed his habilitation under him in 1929. Born not only recognised talent to work with him, but he "let his superstars stretch past him; to those less gifted, he patiently handed out respectable but doable assignments." Delbrück and Goeppert-Mayer went on to be awarded Nobel Prizes. Later life In January 1933, the Nazi Party came to power in Germany. In May, Born became one of six Jewish professors at Göttingen who were suspended with pay; Franck had already resigned. In twelve years they had built Göttingen into one of the world's foremost centres for physics. Born began looking for a new job, writing to Maria Goeppert-Mayer at Johns Hopkins University and Rudi Ladenburg at Princeton University. He accepted an offer from St John's College, Cambridge. At Cambridge, he wrote a popular science book, The Restless Universe, and a textbook, Atomic Physics, that soon became a standard text, going through seven editions. 
His family soon settled into life in England, with his daughters Irene and Gritli becoming engaged to Welshman Brinley (Bryn) Newton-John and Englishman Maurice Pryce respectively. Born's granddaughter Olivia Newton-John was the daughter of Irene. Born's position at Cambridge was only a temporary one, and his tenure at Göttingen was terminated in May 1935. He therefore accepted an offer from C. V. Raman to go to Bangalore in 1935. Born considered taking a permanent position there, but the Indian Institute of Science did not create an additional chair for him. In November 1935, the Born family had their German citizenship revoked, rendering them stateless. A few weeks later Göttingen cancelled Born's doctorate. Born considered an offer from Pyotr Kapitsa in Moscow, and started taking Russian lessons from Rudolf Peierls's Russian-born wife Genia. But then Charles Galton Darwin asked Born if he would consider becoming his successor as Tait Professor of Natural Philosophy at the University of Edinburgh, an offer that Born promptly accepted, assuming the chair in October 1936. In Edinburgh, Born promoted the teaching of mathematical physics. He had two German assistants, E. Walter Kellermann and Klaus Fuchs, and one Scottish assistant, Robert Schlapp, and together they continued to investigate the mysterious behaviour of electrons. Born became a Fellow of the Royal Society of Edinburgh in 1937, and of the Royal Society of London in March 1939. During 1939, he got as many of his remaining friends and relatives still in Germany as he could out of the country, including his sister Käthe, in-laws Kurt and Marga, and the daughters of his friend Heinrich Rausch von Traubenberg. Hedi ran a domestic bureau, placing young Jewish women in jobs. Born received his certificate of naturalisation as a British subject on 31 August 1939, one day before the Second World War broke out in Europe. Born remained at Edinburgh until he reached the retirement age of 70 in 1952. He retired to Bad Pyrmont, in West Germany, in 1954. In October, he received word that he was being awarded the Nobel Prize. His fellow physicists had never stopped nominating him. Franck and Fermi had nominated him in 1947 and 1948 for his work on crystal lattices, and over the years, he had also been nominated for his work on solid state physics, quantum mechanics and other topics. In 1954, he received the prize for "fundamental research in Quantum Mechanics, especially in the statistical interpretation of the wave function"—something that he had worked on alone. In his Nobel lecture he reflected on the philosophical implications of his work: In retirement, he continued scientific work, and produced new editions of his books. In 1955 he became one of signatories to the Russell-Einstein Manifesto. He died at age 87 in hospital in Göttingen on 5 January 1970, and is buried in the Stadtfriedhof there, in the same cemetery as Walther Nernst, Wilhelm Weber, Max von Laue, Otto Hahn, Max Planck, and David Hilbert. Global policy He was one of the signatories of the agreement to convene a convention for drafting a world constitution. As a result, for the first time in human history, a World Constituent Assembly convened to draft and adopt a Constitution for the Federation of Earth. Personal life Born's wife Hedwig (Hedi) Martha Ehrenberg (1891–1972) was a daughter of the jurist Victor Ehrenberg and Elise von Jhering (a daughter of the jurist Rudolf von Jhering). Born was survived by his wife Hedi and their children Irene, Gritli and Gustav. 
Singer and actress Olivia Newton-John was a daughter of Irene (1914–2003), while Gustav is the father of musician and academic Georgina Born and actor Max Born (Fellini Satyricon), who are thus also Max's grandchildren. His great-grandchildren include songwriter Brett Goldsmith, singer Tottie Goldsmith, racing car driver Emerson Newton-John, and singer Chloe Rose Lattanzi. Born helped his nephew, the architect Otto Königsberger (1908–1999), obtain a commission in Mysore State. Awards and honors 1934 – Stokes Medal of Cambridge 1939 – Fellow of the Royal Society 1945 – Makdougall–Brisbane Prize of the Royal Society of Edinburgh 1945 – Gunning Victoria Jubilee Prize of the Royal Society of Edinburgh 1948 – Max Planck Medaille der Deutschen Physikalischen Gesellschaft 1950 – Hughes Medal of the Royal Society of London 1953 – Honorary citizen of the town of Göttingen 1954 – Nobel Prize in Physics The award was for Born's fundamental research in quantum mechanics, especially for his statistical interpretation of the wavefunction. 1954 – Nobel Prize Banquet Speech 1954 – Born Nobel Prize Lecture 1956 – Hugo Grotius Medal for International Law, Munich 1959 – Grand Cross of Merit with Star of the Order of Merit of the German Federal Republic 1972 – Max Born Medal and Prize was created by the German Physical Society and the British Institute of Physics. It is awarded annually. 1982 – Ceremony at the University of Göttingen in the 100th Birth Year of Max Born and James Franck, Institute Directors 1921–1933. 1991 – Institute named in his honor. 2017 – On 11 December 2017, Google showed a Google doodle, designed by Kati Szilagyi, honouring the 135th anniversary of Born's birth. Bibliography During his life, Born wrote several semi-popular and technical books. His volumes on topics like atomic physics and optics were very well received. They are considered classics in their fields, and are still in print. The following is a chronological listing of his major works: Über das Thomson'sche Atommodell Habilitations-Vortrag (FAM, 1909) – The Habilitation was done at the University of Göttingen, on 23 October 1909. – Based on Born's lectures at the University of Frankfurt am Main. Available in English translation. Dynamik der Kristallgitter (Teubner, 1915) – After its publication, the physicist Arnold Sommerfeld asked Born to write an article based on it for the 5th volume of the Mathematical Encyclopedia. The First World War delayed the start of work on this article, but it was taken up in 1919 and finished in 1922. It was published as a revised edition under the title Atomic Theory of Solid States. Vorlesungen über Atommechanik (Springer, 1925) Problems of Atomic Dynamics (MIT Press, 1926) – A first account of matrix mechanics being developed in Germany, based on two series of lectures given at MIT, over three months, in late 1925 and early 1926. Mechanics of the Atom (George Bell & Sons, 1927) – Translated by J. W. Fisher and revised by D. R. Hartree. Elementare Quantenmechanik (Zweiter Band der Vorlesungen über Atommechanik), with Pascual Jordan. (Springer, 1930) – This was the first volume of what was intended as a two-volume work. This volume was limited to the work Born did with Jordan on matrix mechanics. The second volume was to deal with Erwin Schrödinger's wave mechanics. However, the second volume was not even started by Born, as he believed his friend and colleague Hermann Weyl had written it before he could do so. 
Optik: Ein Lehrbuch der elektromagnetischen Lichttheorie (Springer, 1933) – The book was released just as the Borns were emigrating to England. Moderne Physik (1933) – Based on seven lectures given at the Technische Hochschule Berlin. Atomic Physics (Blackie, London, 1935) – Authorized translation of Moderne Physik by John Dougall, with updates. The Restless Universe (Blackie and Son Limited, 1935) – A popularised rendition of the workshop of nature, translated by Winifred Margaret Deans. Born's nephew, Otto Königsberger, whose successful career as an architect in Berlin was brought to an end when the Nazis took over, was temporarily brought to England to illustrate the book. Experiment and Theory in Physics (Cambridge University Press, 1943) – The address given at King's College, Newcastle upon Tyne, at the request of the Durham Philosophical Society and the Pure Science Society. An expanded version of the lecture appeared in a 1956 Dover Publications edition. Natural Philosophy of Cause and Chance (Oxford University Press, 1949) – Based on Born's 1948 Waynflete lectures, given at the College of St. Mary Magdalen, Oxford University. A later edition (Dover, 1964) included two appendices: "Symbol and Reality" and Born's lecture given at the Nobel laureates' 1964 meeting in Lindau, Germany. A General Kinetic Theory of Liquids with H. S. Green (Cambridge University Press, 1949) – The six papers in this book were reproduced with permission from the Proceedings of the Royal Society. Dynamical Theory of Crystal Lattices, with Kun Huang. (Oxford, Clarendon Press, 1954) Max Born The statistical interpretation of quantum mechanics. Nobel Lecture – 11 December 1954. Physics in My Generation: A Selection of Papers (Pergamon, 1956) Physik im Wandel meiner Zeit (Vieweg, 1957) Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, with Emil Wolf. (Pergamon, 1959) – This book is not an English translation of Optik, but rather a substantially new book. Shortly after World War II, a number of scientists suggested that Born update and translate his work into English. Since there had been many advances in optics in the intervening years, updating was warranted. In 1951, Wolf began as Born's private assistant on the book; it was eventually published in 1959 by Robert Maxwell's Pergamon Press – the delay being due to the lengthy time needed "to resolve all the financial and publishing tricks created by Maxwell." Physik und Politik (VandenHoeck und Ruprecht, 1960) Zur Begründung der Matrizenmechanik, with Werner Heisenberg and Pascual Jordan (Battenberg, 1962) – Published in honor of Max Born's 80th birthday. This edition reprinted the authors' articles on matrix mechanics published in Zeitschrift für Physik, Volumes 26 and 33–35, 1924–1926. My Life and My Views: A Nobel Prize Winner in Physics Writes Provocatively on a Wide Range of Subjects (Scribner, 1968) – Part II (pp. 63–206) is a translation of Von der Verantwortung des Naturwissenschaftlers. Briefwechsel 1916–1955, kommentiert von Max Born with Hedwig Born and Albert Einstein (Nymphenburger, 1969) The Born–Einstein Letters: Correspondence between Albert Einstein and Max and Hedwig Born from 1916–1955, with commentaries by Max Born (Macmillan, 1971). Mein Leben: Die Erinnerungen des Nobelpreisträgers (Munich: Nymphenburger, 1975). Born's published memoirs. My Life: Recollections of a Nobel Laureate (Scribner, 1978). Translation of Mein Leben. 
For a full list of his published papers, see HistCite . For his published works, see Published Works – Berlin-Brandenburgische Akademie der Wissenschaften Akademiebibliothek. See also List of things named after Max Born List of refugees List of Jewish Nobel laureates Citations General references Reprinted as chapter 7 in Bernstein, Jeremy (2014). A Chorus of Bells and Other Scientific Inquiries. Also published in Germany: Max Born – Baumeister der Quantenwelt. Eine Biographie Spektrum Akademischer Verlag, 2005, . External links American Institute of Physics History Search: Max Born Encyclopædia Britannica, Max Born – full article Annotated bibliography for Max Born from the Alsos Digital Library for Nuclear Issues Freeview video of Gustav Born (son of Max) with conversation and film on Gustav's memories of his father by the Vega Science Trust Max Born information from Nobel Winners site including his Nobel Lecture, 11 December 1954 The Statistical Interpretations of Quantum Mechanics Papers of Professor Max Born (1882–1970) Held at the Edinburgh University Library, Special Collections Division The Papers of Professor Max Born held at Churchill Archives Centre, Cambridge Kuhn, Thomas S., John L. Heilbron, Paul Forman, and Lini Allen Sources for History of Quantum Physics (American Philosophical Society, 1967) Oral history interview transcript for Max Born on 1 June 1960, American Institute of Physics, Niels Bohr Library & Archives - Session I Oral history interview transcript for Max Born on 1 June 1960, American Institute of Physics, Niels Bohr Library & Archives - Session II Oral history interview transcript for Max Born on 17 October 1962, American Institute of Physics, Niels Bohr Library & Archives - Session III Oral history interview transcript for Max Born on 18 October 1962, American Institute of Physics, Niels Bohr Library & Archives - Session IV 1882 births 1970 deaths Scientists from Göttingen 20th-century German physicists Academics of the University of Cambridge Academics of the University of Edinburgh Alumni of Gonville and Caius College, Cambridge 20th-century British physicists British theoretical physicists Fellows of the Royal Society of Edinburgh Fellows of the Royal Society Foreign associates of the National Academy of Sciences Foreign members of the USSR Academy of Sciences German emigrants to Scotland German Nobel laureates Academic staff of Goethe University Frankfurt Grand Crosses with Star and Sash of the Order of Merit of the Federal Republic of Germany Heidelberg University alumni Honorary members of the USSR Academy of Sciences Academic staff of the Humboldt University of Berlin Jewish emigrants from Nazi Germany to the United Kingdom Jewish German physicists Members of the German Academy of Sciences at Berlin Members of the Prussian Academy of Sciences Nobel laureates in Physics Optical physicists People associated with the University of Zurich People from the Province of Silesia Scientists from Wrocław Quantum physicists Scientists from Frankfurt Silesian Jews Theoretical physicists German theoretical physicists University of Breslau alumni University of Göttingen alumni Academic staff of the University of Göttingen Winners of the Max Planck Medal Max Members of the Göttingen Academy of Sciences and Humanities Members of the Royal Swedish Academy of Sciences Ehrenberg family World Constitutional Convention call signatories Jewish British physicists
Max Born
[ "Physics" ]
7,279
[ "Theoretical physics", "Quantum physicists", "Theoretical physicists", "Quantum mechanics" ]
61,867
https://en.wikipedia.org/wiki/Trombe%20wall
A Trombe wall is a massive equator-facing wall that is painted a dark color in order to absorb thermal energy from incident sunlight and covered with glass on the outside, with an insulating air gap between the wall and the glazing. A Trombe wall is a passive solar building design strategy that adopts the concept of indirect gain, where sunlight first strikes a solar energy collection surface in contact with a thermal mass of air. The sunlight absorbed by the mass is converted to thermal energy (heat) and then transferred into the living space. Trombe walls may also be referred to as a mass wall, solar wall, or thermal storage wall. However, due to the extensive work of professor and architect Félix Trombe in the design of passively heated and cooled solar structures, they are often called Trombe walls. This system is similar to the air heater (a simple glazed box on the south wall with a dark absorber, air space, and two sets of vents at top and bottom) created by Professor Edward S. Morse about a hundred years earlier. History of passive solar systems and evolution of Trombe walls In the 1920s, the idea of solar heating began in Europe. In Germany, housing projects were designed to take advantage of the sun. The research and accumulated solar design experience were then spread across the Atlantic by architects such as Walter Gropius and Marcel Breuer. Apart from these early examples, heating homes with the sun made slow progress until the 1930s, when several different American architects started to explore the potential of solar heating. The pioneering work of these American architects, the influence of immigrant Europeans, and the memory of wartime fuel shortages made solar heating very popular during the initial housing boom at the end of World War II. Later in the 1970s, before and after the international oil crisis of 1973, some European architectural periodicals were critical of standard construction methods and architecture of the time. They described how architects and engineers reacted to the crisis, proposing new techniques and projects in order to intervene innovatively in the built environment, using energy and natural resources more efficiently. Moreover, the depletion of natural resources generated interest in renewable energy sources, such as solar energy. In parallel with global population growth, energy consumption and environmental issues have become a global concern – especially since the building sector consumes more energy than any other sector in the world, and most of that energy is used for heating, ventilation and air conditioning systems. For these reasons, today's buildings are expected to achieve both energy efficiency and environmentally friendly design through the partial or complete use of renewable energy instead of fossil energy for heating and cooling. In this direction, the integration of passive solar systems in buildings is one strategy for sustainable development and is increasingly encouraged by international regulations. Today's low-energy buildings with Trombe walls often improve on an ancient technique that incorporates a thermal storage and delivery system people have already used: thick walls of adobe or stone to trap the sun's heat during the day and release it slowly and evenly at night to heat their building. Today, the Trombe wall continues to serve as an effective strategy of passive solar design. The first well-known example of a Trombe wall system was used in the Trombe house of Odeillo, France in 1967. 
The black painted wall is constructed of concrete approximately 2 feet thick, with an air space and double glazing on its exterior side. The house is primarily heated by radiation and convection from the inner surface of the concrete wall, and the results from studies show that 70% of this building's yearly heating needs are supplied by solar energy. Therefore, the efficiency of the system is comparable to a good active solar heating system. By comparison, photovoltaic (PV) panels for electricity production convert only 15%–20% of incident solar radiation into energy, meaning that roughly 80%–85% of the sun's radiation is lost, whereas a Trombe wall, acting as a solar thermal collector, is able to convert 70%–80% of the sun's radiation into heat, making it far more energy efficient as a producer of heat. Another passive collector-distributor Trombe wall system was built in 1970, in Montmedy, France. The house, with 280 m³ of living space, required 7,000 kWh annually for space heating. At Montmedy – between 49° and 50° North latitude – 5,400 kWh were supplied by solar heating and the remainder from an auxiliary electrical system. The annual heating cost for electricity was approximately $225, compared to an estimated $750 for a home entirely heated by electricity in the same area. This amounts to a 77% reduction in heating load and a 70% reduction in the cost of winter heating. In 1974, the first example of a Trombe wall system in the United States was used in the Kelbaugh House in Princeton, New Jersey. The house is located along the northern boundary of the site to maximize the unshaded access to available sunlight. The two-story building has 600 ft² of thermal storage wall, which is constructed of concrete and painted with a selective black paint over a masonry sealer. Although the main heating is accomplished by radiation and convection from the inner face of the wall, two vents in the wall also allow daytime heating by a natural convection loop. According to data collected in the winters of 1975–1976 and 1976–1977, the Trombe wall system reduced heating costs by 76% and 84% respectively. How Trombe walls work Unlike an active solar system that employs hardware and mechanical equipment to collect or transport heat, a Trombe wall is a passive solar-heating system in which thermal energy flows through the system by natural means such as radiation, conduction, and natural convection. As a consequence, the wall works by absorbing sunlight on its outer face and then transferring this heat through the wall by conduction. Heat conducted through the wall is then distributed to the living space by radiation, and to some degree by convection, from the wall's inner surface. The greenhouse effect helps this system by trapping the solar radiation between the glazing and the thermal mass. Heat from the sun, in the form of shorter-wavelength radiation, passes through the glazing largely unimpeded. When this radiation strikes the dark colored surface of the thermal mass facing the sun, the energy is absorbed and then re-emitted in the form of longer-wavelength radiation that cannot pass through the glazing as readily. Hence heat becomes trapped and builds up in the air space between the high heat capacity thermal mass and the glazing that faces the sun. 
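The conduction step described above can be put into rough numbers with Fourier's law, q = k·ΔT/L. The sketch below is only a back-of-the-envelope estimate; the conductivity, wall thickness and surface temperatures are assumed illustrative values, not measurements from the houses discussed above.

```python
# Rough steady-state estimate of conduction through a Trombe wall using
# Fourier's law q = k * dT / L. All inputs are illustrative assumptions.
k = 1.4           # W/(m*K), assumed conductivity of dense concrete
thickness = 0.3   # m, assumed wall thickness (roughly 1 ft)
t_outer = 45.0    # deg C, assumed sun-warmed outer surface temperature
t_inner = 20.0    # deg C, assumed room-side surface temperature

flux = k * (t_outer - t_inner) / thickness               # W per m^2 of wall
print(f"Heat flux through the wall: {flux:.0f} W/m^2")   # about 117 W/m^2
```

For a wall of, say, 15 m² this would correspond to roughly 1.8 kW of heat delivery for as long as that surface temperature difference is maintained; the actual figures depend strongly on climate, glazing and wall construction.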
Another phenomenon that plays a role in the Trombe wall's operation is the time lag caused by the heat capacity of the materials. Since Trombe walls are quite thick and made of high heat capacity materials, the heat flow from the warmer outer surface to the cooler inner surface is slower than it would be through materials with less heat capacity. This delayed heat-flow phenomenon is known as time lag, and it causes the heat gained during the day to reach the interior surface of the thermal mass later. This property of the mass helps to heat the living space in the evenings as well. So, if there is enough mass, the wall can act as a radiant heater all night long. On the other hand, if the mass is too thick, it takes too long to transmit the thermal energy it collects; thus, the living space does not receive enough heat during the evening hours when it is needed the most. Likewise, if the thermal mass is too thin, it transmits the heat too quickly, resulting in overheating of the living space during the day and little energy left for the evening. Also, Trombe walls using water as a thermal mass collect and distribute heat to a space in the same way, but they transfer the heat through the wall components (tubes, bottles, barrels, drums, etc.) by convection rather than by conduction, and the convection performance of water walls differs according to their heat capacities. Larger storage volumes provide a greater and longer-term heat storage capacity, while smaller contained volumes provide greater heat exchange surfaces and thus faster distribution. Design and construction Trombe walls are often designed to serve a load-bearing function as well as to collect and store the sun's energy and to help enclose the building's interior spaces. The requirements of a Trombe wall are glazing areas facing toward the equator for maximum winter solar gain, and a thermal mass, located 4 inches or more directly behind the glass, which serves for heat storage and distribution. Also, there are many factors, such as color, thickness, or additional thermal control devices, that have an impact on the design and effectiveness of Trombe walls. An equatorial orientation – southward in the Northern Hemisphere and northward in the Southern Hemisphere – is the best orientation for passive solar strategies because equator-facing surfaces collect much more sun during the day than they lose during the night, and collect much more sun in the winter than in the summer. The first design strategy to increase the effectiveness of Trombe walls is painting the outside surface of the wall black (or a dark color) for the best possible absorption of sunlight. Moreover, applying a selective coating to a Trombe wall improves its performance by reducing the amount of infrared energy radiated back through the glass. The selective surface consists of a sheet of metal foil glued to the outside surface of the wall; it absorbs almost all the radiation in the visible portion of the solar spectrum and emits very little in the infrared range. High absorptivity turns the sunlight into heat at the wall's surface, and low emittance prevents the heat from radiating back towards the glass. Although Trombe walls are usually made of solid materials, such as concrete, brick, stone, or adobe, they can also be made of water. The advantage of using water as a thermal mass is that water stores considerably more heat per unit volume (has a greater volumetric heat capacity) than masonry. The developer of this water wall, Steve Baer, named the system the "Drum Wall". He painted steel containers similar to oil drums and filled them almost full of water, leaving some room for thermal expansion. 
He then stacked the containers horizontally behind equator-facing double glazing, with the blackened bottoms facing outside. This water wall involves the same principles as the Trombe wall but employs a different storage material and different methods of containing that material. Like the dark colored thermal mass of the Trombe walls, the containers that store the water are also frequently painted with dark colors to increase their absorptivity, but it is also common to leave them transparent or translucent to allow some daylight to pass through. Another critical part of Trombe wall design is choosing the proper thermal mass material and thickness. The optimum thickness of the thermal mass is dependent on the heat capacity and the thermal conductivity of the material used. There are some rules to follow while sizing the thermal mass. The optimum thickness of a masonry wall increases as the thermal conductivity of the wall material increases. For instance, to compensate for rapid heat transfer through a highly conductive material, the wall needs to be thicker. Accordingly, since a thicker wall absorbs and stores more heat to use at night, the efficiency of the wall increases as the conductivity and thickness of the wall increase. There is an optimum thickness range for masonry materials. The efficiency of a water wall increases as the thickness of the wall increases. However, it is hard to notice a considerable performance increase as the walls get thicker than 6 inches. Likewise, a water wall thinner than 6 inches is not thick enough to act as a proper thermal mass that stores the heat during the day. In early Trombe wall designs, there are vents in the wall to distribute heat by natural convection (thermocirculation) from the exterior face of the wall, but only during the daytime and early evening. Solar radiation passing through the glass is absorbed by the wall, heating its surface to temperatures as high as 150 °F (about 65 °C). This heat is transferred to the air in the air space between the wall and the glass. Through openings or vents located at the top of the wall, warm air rising in the air space enters the room while simultaneously drawing cool room air through the low vents in the wall. In this way additional heat can be supplied to the living space during periods of sunny weather. However, it is now clear that the vents do not work well in either summer or winter. It has become more common to design a half Trombe wall and then combine it with a direct-gain system. The direct-gain part delivers heat early in the day, while the Trombe wall stores heat for nighttime use. Moreover, unlike a full Trombe wall, the direct-gain part allows views and the delight of winter sunshine. To minimize the possible drawbacks of the Trombe wall system, there are additional thermal control strategies that can be employed in the wall design. For instance, the minimum 4-inch distance between the glass and the mass allows cleaning the glazing and the insertion of a roll-down radiant barrier as needed. Adding a radiant barrier or night insulation between the glazing and the thermal mass reduces nighttime heat losses and summer daytime heat gains. However, to prevent overheating in summer, it is best to combine this strategy with an outdoor shading device such as a shutter or a roof overhang, or with interior shading, to block excessive solar radiation from heating the Trombe wall. 
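The time lag discussed above can also be estimated. A common rough approximation treats the daily solar input as a sinusoid with a 24-hour period and uses the thermal diffusivity α = k/(ρc) of the wall material, giving a lag of roughly (L/2)·√(P/(πα)) through a slab of thickness L. The material properties in the sketch below are assumed values for dense concrete, so the result is only indicative.

```python
# Rough estimate of the thermal time lag of a concrete Trombe wall,
# treating the daily solar input as a sinusoid with period P = 24 h.
# Material properties are illustrative assumptions for dense concrete.
import math

k = 1.4         # W/(m*K)   thermal conductivity (assumed)
rho = 2300.0    # kg/m^3    density (assumed)
c = 900.0       # J/(kg*K)  specific heat capacity (assumed)
L = 0.3         # m         wall thickness (roughly 1 ft)
P = 24 * 3600   # s         period of the daily cycle

alpha = k / (rho * c)                                  # thermal diffusivity
lag_hours = (L / 2) * math.sqrt(P / (math.pi * alpha)) / 3600
print(f"Estimated time lag: {lag_hours:.1f} hours")    # about 8 hours
```

A lag of roughly eight hours is consistent with the behaviour described above, where heat absorbed around midday reaches the room side of the wall in the evening; a thicker or less conductive wall shifts the delivery later and spreads it out further.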
Another strategy that helps to benefit from solar collection without some of the drawbacks of the Trombe wall is to use exterior mirror-like reflectors. The additional reflected radiation helps Trombe walls benefit more from the sunlight, with the flexibility of removing or rotating the reflector device if solar collection is undesired. When three different Trombe wall facades with single glass, double glass, and an integrated semi-transparent PV module are compared in a hot and humid climate, the single glass provides the highest solar radiation gain due to its higher solar heat gain efficiency. However, it is recommended to use the single glass with a shutter during the evening and at night, to offset its heat losses. High-transmission glazing maximizes the solar gains of the Trombe wall while still allowing the dark brick, natural stones, water containers, or other attractive thermal mass system behind the glazing to be seen. However, from an aesthetic perspective, it is sometimes not desirable for the black thermal mass to be visible. As an architectural detail, patterned glass can be used to limit the exterior visibility of the dark wall without sacrificing transmissivity. The largest Trombe wall in the Northeastern United States is located in NJIT's Mechanical Engineering Building, at 200 Central Avenue, Newark, NJ. Advantages and disadvantages Advantages Indoor temperature swings are 10 °F to 15 °F less with indirect-gain systems than with direct-gain systems. Trombe walls perform better at maintaining a steady indoor temperature than other indirect-gain heating systems. Among the passive solar heating strategies, Trombe walls can harmonize the relationship between humans and the natural environment and are widely used because of advantages such as simple configuration, high efficiency, and zero running cost. While passive solar techniques in general can reduce annual heating demand by up to 25%, using a Trombe wall can reduce a building's energy consumption by up to 30%, in addition to being environmentally friendly. Similarly, heating energy savings of 16.36% can be achieved if a Trombe wall is added to the building envelope. Glare, ultraviolet degradation, or reduction of nighttime privacy are not problems with a full-height Trombe wall system. As seen in the Trombe wall design and construction section, the performance of Trombe walls is well characterized for a variety of design and climate parameters. Other possible modifications include adding a rigid insulation board to the foundation area and insulating curtains between the glass and the thermal mass – to avoid heat transfer into the building during undesired periods, or heat loss from the Trombe wall to the foundation – or adding a ventilation system to the wall (if the wall has upper and lower vents) to provide additional heat transfer by air convection, which is desirable to circulate the air evenly. Energy delivery to a living space is more controllable than for a direct-gain system. It can be immediate through convection to satisfy daytime loads or delayed through conduction and re-radiation from the thermal mass' inside surface to meet the nighttime loads. Multiple uses of solar energy components help greatly to reduce the overall labor and material cost of constructing a passively heated building. Roof ponds, as another passive solar heating strategy, do not work well with multistory buildings since only the top floor is in direct thermal contact with the roof. 
However, the Trombe walls can be the load-bearing structure of the buildings, so each floor's equator-facing facade can take the advantage of the Trombe wall system. Compared to other passive solar systems, using the Trombe walls in commercial buildings with significant internal loads (people and electronic equipment) is useful because of the time lag involved in the transfer of energy through the wall into the space. Since the thermal mass reaches its capacity and becomes able to conduct heat in the evening hours, the space will benefit most by not causing potential overheating problems during occupied hours yet have little effect on heating costs if the building is not occupied after sundown. Disadvantages Since the Trombe wall is consolidated in one building element - only the equator-facing facade - its impact on the overall building design is limited when compared to roof ponds or direct-gain systems. Natural daylight is lost in the full-height Trombe walls unless the system is combined with a direct-gain system or windows are introduced. Wall hangings or other type of coverings are not allowed on Trombe walls as they block the radiation emitted from the interior surface of the wall at night. The living spaces behind the Trombe walls need alternative access to natural daylight to prevent these spaces from being claustrophobic. If a Trombe wall is constructed with upper and lower vents, the upper vent on the thermal mass can suck the heated air from the warmer indoor spaces to the cooler air space between the mass and the glazing (reverse-siphon) at night. To avoid this, it is necessary to use back-draft dampers. In regions closer to the equator, although summer ventilation can help to ameliorate overheating, insulating and shading the Trombe wall can minimize this overheating during the hot season. It is a very climate-dependent system and external temperature and incident solar radiation levels have a significant role in the energy savings and emission reductions of Trombe walls. Even though Trombe walls built in hot-summer and warm-winter zones provide more energy savings per unit wall area compared to a conventional wall, they display a poorer economic performance if solar radiation is low during the heating season. The system requires user action to operate movable insulation or shutters, often on a daily basis. In regions where the local users are not familiar with the system, to get the maximum performance from the Trombe wall system, users can be given guidance either by modeling a prototype or providing a user-friendly operation manual for the wall during different seasons or days. This participation can lead to post-project acceptance of the Trombe wall idea and make it easier for locals to reproduce it locally. Mitigating design variations The Kachadorian floor overcomes the disadvantages of the Trombe wall by orienting it horizontally instead of vertically. The Barra system combines actual Trombe walls with a ventilated slab like the Kachadorian floor. See also Passive solar building design Kachadorian floor Barra system List of pioneering solar buildings References External links Druk White Lotus School website including Trombe wall example. Sketchup model at 3D Warehouse Air heater with the same working principle as Trombe wall, patented by E.S. Morse in 1881. Solar architecture Solar design Sustainable building Building engineering Types of wall Architectural elements
Trombe wall
[ "Technology", "Engineering" ]
4,098
[ "Structural engineering", "Sustainable building", "Building engineering", "Solar design", "Energy engineering", "Types of wall", "Construction", "Architectural elements", "Civil engineering", "Components", "Architecture" ]
61,889
https://en.wikipedia.org/wiki/Division%20%28taxonomy%29
Division is a taxonomic rank in biological classification that is used differently in zoology and in botany. In botany and mycology, division is the traditional name for a rank now considered equivalent to phylum. The use of either term is allowed under the International Code of Botanical Nomenclature. The main Divisions of land plants are the Marchantiophyta (liverworts), Anthocerotophyta (hornworts), Bryophyta (mosses), Filicophyta (ferns), Sphenophyta (horsetails), Cycadophyta (cycads), Ginkgophyta (ginkgos), Pinophyta (conifers), Gnetophyta (gnetophytes), and the Magnoliophyta (angiosperms, the flowering plants). The Magnoliophyta now dominate terrestrial ecosystems, comprising 80% of vascular plant species. In zoology, the term division is applied to an optional rank subordinate to the infraclass and superordinate to the legion and cohort. A widely used classification (e.g. Carroll 1988) recognises teleost fishes as a Division Teleostei within Class Actinopterygii (the ray-finned fishes). Less commonly (as in Milner 1988), living tetrapods are ranked as Divisions Amphibia and Amniota within the clade of vertebrates with fleshy limbs (Sarcopterygii). Proposals for standardisation In 1978, a group of botanists including Harold Charles Bold, Arthur Cronquist and Lynn Margulis proposed replacing the term "division" with "phylum" in botanical nomenclature, arguing that maintaining different terms for the same taxonomic rank across biological kingdoms created unnecessary confusion. This was particularly problematic for unicellular eukaryotes, where heterotrophic organisms were classified under zoological nomenclature (using "phylum") while autotrophic organisms fell under botanical nomenclature (using "division"). They proposed updating the International Code of Botanical Nomenclature to use "phylum" and "subphylum" throughout, while maintaining that names originally published as divisions would be treated as if they had been published as phyla. Molecular phylogenetic classification The use of molecular methods, particularly 16S ribosomal RNA analysis, helped establish major bacterial divisions in the 1980s. In 1985, Carl Woese and colleagues identified ten major groups of eubacteria through oligonucleotide signature analysis, noting that these groupings were "appropriately termed eubacterial Phyla or Divisions." This work provided early molecular evidence for the equivalence of bacterial divisions with phyla and helped establish a phylogenetic basis for high-level bacterial classification. Viruses and prokaryotes In 2020, the International Committee on Taxonomy of Viruses (ICTV) formalised a 15-rank hierarchical classification system, ranging from the highest rank "realm" (rather than domain) down through the lower ranks, notably using "phylum" rather than "division". Under this system, the first viral realm established was Riboviria, encompassing all RNA viruses that encode an RNA-directed RNA polymerase. In 2021, the International Code of Nomenclature of Prokaryotes (ICNP) formally included the rank of phylum for the first time, adopting the suffix "-ota" for phylum names. This led to the publication of names for 46 prokaryotic phyla with cultured representatives, replacing some established names with neologisms – for example, "Proteobacteria" became "Pseudomonadota" and "Firmicutes" became "Bacillota". References Works cited Scientific classification Botanical nomenclature
Division (taxonomy)
[ "Biology" ]
778
[ "Botanical nomenclature", "Botanical terminology", "Biological nomenclature" ]
61,891
https://en.wikipedia.org/wiki/Genus%20%28mathematics%29
In mathematics, genus (plural: genera) has a few different, but closely related, meanings. Intuitively, the genus is the number of "holes" of a surface. A sphere has genus 0, while a torus has genus 1. Topology Orientable surfaces The genus of a connected, orientable surface is an integer representing the maximum number of cuttings along non-intersecting closed simple curves without rendering the resultant manifold disconnected. It is equal to the number of handles on it. Alternatively, it can be defined in terms of the Euler characteristic χ, via the relationship χ = 2 − 2g for closed surfaces, where g is the genus. For surfaces with b boundary components, the equation reads χ = 2 − 2g − b. In layman's terms, the genus is the number of "holes" an object has ("holes" interpreted in the sense of doughnut holes; a hollow sphere would be considered as having zero holes in this sense). A torus has 1 such hole, while a sphere has 0. For instance: The sphere and a disc both have genus zero. A torus has genus one, as does the surface of a coffee mug with a handle. This is the source of the joke "topologists are people who can't tell their donut from their coffee mug." Explicit construction of surfaces of genus g is given in the article on the fundamental polygon. Non-orientable surfaces The non-orientable genus, demigenus, or Euler genus of a connected, non-orientable closed surface is a positive integer representing the number of cross-caps attached to a sphere. Alternatively, it can be defined for a closed surface in terms of the Euler characteristic χ, via the relationship χ = 2 − k, where k is the non-orientable genus. For instance: A real projective plane has a non-orientable genus 1. A Klein bottle has non-orientable genus 2. Knot The genus of a knot K is defined as the minimal genus of all Seifert surfaces for K. A Seifert surface of a knot is however a manifold with boundary, the boundary being the knot, i.e. homeomorphic to the unit circle. The genus of such a surface is defined to be the genus of the two-manifold, which is obtained by gluing the unit disk along the boundary. Handlebody The genus of a 3-dimensional handlebody is an integer representing the maximum number of cuttings along embedded disks without rendering the resultant manifold disconnected. It is equal to the number of handles on it. For instance: A ball has genus 0. A solid torus D² × S¹ has genus 1. Graph theory The genus of a graph is the minimal integer n such that the graph can be drawn without crossing itself on a sphere with n handles (i.e. an oriented surface of genus n). Thus, a planar graph has genus 0, because it can be drawn on a sphere without self-crossing. The non-orientable genus of a graph is the minimal integer n such that the graph can be drawn without crossing itself on a sphere with n cross-caps (i.e. a non-orientable surface of (non-orientable) genus n). (This number is also called the demigenus.) The Euler genus is the minimal integer n such that the graph can be drawn without crossing itself on a sphere with n cross-caps or on a sphere with n/2 handles. In topological graph theory there are several definitions of the genus of a group. Arthur T. White introduced the following concept. The genus of a group G is the minimum genus of a (connected, undirected) Cayley graph for G. The graph genus problem is NP-complete. 
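As a small worked example of the Euler characteristic relationship above, the sketch below computes χ = V − E + F for two standard triangulations and recovers the genus from g = (2 − χ)/2; the vertex, edge and face counts used are the usual ones for the tetrahedron and for the 7-vertex triangulation of the torus.

```python
# Genus of a closed orientable surface from a triangulation,
# using chi = V - E + F and g = (2 - chi) / 2.
def genus(vertices, edges, faces):
    chi = vertices - edges + faces
    return (2 - chi) // 2

print(genus(4, 6, 4))    # tetrahedron (a sphere): genus 0
print(genus(7, 21, 14))  # 7-vertex triangulation of the torus: genus 1
```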
Algebraic geometry There are two related definitions of genus of any projective algebraic scheme: the arithmetic genus and the geometric genus. When X is an algebraic curve with field of definition the complex numbers, and if X has no singular points, then these definitions agree and coincide with the topological definition applied to the Riemann surface of X (its manifold of complex points). For example, the definition of an elliptic curve from algebraic geometry is a connected non-singular projective curve of genus 1 with a given rational point on it. By the Riemann–Roch theorem, an irreducible plane curve of degree d given by the vanishing locus of a section of the line bundle O(d) has geometric genus g = (d − 1)(d − 2)/2 − s, where s is the number of singularities when properly counted. Differential geometry In differential geometry, a genus of an oriented manifold M may be defined as a complex number Φ(M) subject to the condition that Φ(M₁) = Φ(M₂) if M₁ and M₂ are cobordant. In other words, Φ is a ring homomorphism from Thom's oriented cobordism ring to the complex numbers. The genus Φ is multiplicative for all bundles on spinor manifolds with a connected compact structure group if its logarithm is an elliptic integral such as ∫₀ˣ dt/√(1 − 2δt² + εt⁴) for some constants δ and ε; such a genus is called an elliptic genus. The Euler characteristic is not a genus in this sense since it is not invariant under cobordisms. Biology Genus can also be calculated for the graph spanned by the network of chemical interactions in nucleic acids or proteins. In particular, one may study the growth of the genus along the chain. Such a function (called the genus trace) shows the topological complexity and domain structure of biomolecules. See also Group (mathematics) Arithmetic genus Geometric genus Genus of a multiplicative sequence Genus of a quadratic form Spinor genus Citations References Topology Geometric topology Surfaces Algebraic topology Algebraic curves Graph invariants Topological graph theory Geometry processing
Genus (mathematics)
[ "Physics", "Mathematics" ]
1,125
[ "Graph theory", "Algebraic topology", "Geometric topology", "Graph invariants", "Topology", "Mathematical relations", "Space", "Geometry", "Fields of abstract algebra", "Spacetime", "Topological graph theory" ]
61,899
https://en.wikipedia.org/wiki/Phloem
Phloem (, ) is the living tissue in vascular plants that transports the soluble organic compounds made during photosynthesis and known as photosynthates, in particular the sugar sucrose, to the rest of the plant. This transport process is called translocation. In trees, the phloem is the innermost layer of the bark, hence the name, derived from the Ancient Greek word (phloiós), meaning "bark". The term was introduced by Carl Nägeli in 1858. Different types of phloem can be distinguished. The early phloem formed in the growth apices is called protophloem. Protophloem eventually becomes obliterated once it connects to the durable phloem in mature organs, the metaphloem. Further, secondary phloem is formed during the thickening of stem structures. Structure Phloem tissue consists of conducting cells, generally called sieve elements, parenchyma cells, including both specialized companion cells or albuminous cells and unspecialized cells and supportive cells, such as fibres and sclereids. Conducting cells (sieve elements) Sieve tube elements are the type of cell that are responsible for transporting sugars throughout the plant. At maturity they lack a nucleus and have very few organelles, so they rely on companion cells or albuminous cells for most of their metabolic needs. Sieve tube cells do contain vacuoles and other organelles, such as ribosomes, before they mature, but these generally migrate to the cell wall and dissolve at maturity; this ensures there is little to impede the movement of fluids. One of the few organelles they do contain at maturity is the rough endoplasmic reticulum, which can be found at the plasma membrane, often nearby the plasmodesmata that connect them to their companion or albuminous cells. All sieve cells have groups of pores at their ends that grow from modified and enlarged plasmodesmata, called sieve areas. The pores are reinforced by platelets of a polysaccharide called callose. Parenchyma cells Other parenchyma cells within the phloem are generally undifferentiated and used for food storage. Companion cells The metabolic functioning of sieve-tube members depends on a close association with the companion cells, a specialized form of parenchyma cell. All of the cellular functions of a sieve-tube element are carried out by the (much smaller) companion cell, a typical nucleate plant cell except the companion cell usually has a larger number of ribosomes and mitochondria. The dense cytoplasm of a companion cell is connected to the sieve-tube element by plasmodesmata. The common sidewall shared by a sieve tube element and a companion cell has large numbers of plasmodesmata. There are three types of companion cells. Ordinary companion cells, which have smooth walls and few or no plasmodesmatal connections to cells other than the sieve tube. Transfer cells, which have much-folded walls that are adjacent to non-sieve cells, allowing for larger areas of transfer. They are specialized in scavenging solutes from those in the cell walls that are actively pumped requiring energy. Intermediary cells, which possess many vacuoles and plasmodesmata and synthesize raffinose family oligosaccharides. Albuminous cells Albuminous cells have a similar role to companion cells, but are associated with sieve cells only and are hence found only in seedless vascular plants and gymnosperms. Supportive cells Although its primary function is transport of sugars, phloem may also contain cells that have a mechanical support function. 
These are sclerenchyma cells which generally fall into two categories: fibres and sclereids. Both cell types have a secondary cell wall and are dead at maturity. The secondary cell wall increases their rigidity and tensile strength, especially because they contain lignin. Fibres Bast fibres are the long, narrow supportive cells that provide tension strength without limiting flexibility. They are also found in xylem, and are the main component of many textiles such as paper, linen, and cotton. Sclereids Sclereids are irregularly shaped cells that add compression strength but may reduce flexibility to some extent. They also serve as anti-herbivory structures, as their irregular shape and hardness will increase wear on teeth as the herbivores chew. For example, they are responsible for the gritty texture in pears, and in winter pears. Function Unlike xylem (which is composed primarily of dead cells), the phloem is composed of still-living cells that transport sap. The sap is a water-based solution, but rich in sugars made by photosynthesis. These sugars are transported to non-photosynthetic parts of the plant, such as the roots, or into storage structures, such as tubers or bulbs. During the plant's growth period, usually during the spring, storage organs such as the roots are sugar sources, and the plant's many growing areas are sugar sinks. The movement in phloem is multidirectional, whereas, in xylem cells, it is unidirectional (upward). After the growth period, when the meristems are dormant, the leaves are sources, and storage organs are sinks. Developing seed-bearing organs (such as fruit) are always sinks. Because of this multi-directional flow, coupled with the fact that sap cannot move with ease between adjacent sieve-tubes, it is not unusual for sap in adjacent sieve-tubes to be flowing in opposite directions. While movement of water and minerals through the xylem is driven by negative pressures (tension) most of the time, movement through the phloem is driven by positive hydrostatic pressures. This process is termed translocation, and is accomplished by a process called phloem loading and unloading. Phloem sap is also thought to play a role in sending informational signals throughout vascular plants. "Loading and unloading patterns are largely determined by the conductivity and number of plasmodesmata and the position-dependent function of solute-specific, plasma membrane transport proteins. Recent evidence indicates that mobile proteins and RNA are part of the plant's long-distance communication signaling system. Evidence also exists for the directed transport and sorting of macromolecules as they pass through plasmodesmata." Organic molecules such as sugars, amino acids, certain phytohormones, and even messenger RNAs are transported in the phloem through sieve tube elements. Phloem is also used as a popular site for oviposition and breeding of insects belonging to the order Diptera, including the fruit fly Drosophila montana. Girdling Because phloem tubes are located outside the xylem in most plants, a tree or other plant can be killed by stripping away the bark in a ring on the trunk or stem. With the phloem destroyed, nutrients cannot reach the roots, and the tree/plant will die. Trees located in areas with animals such as beavers are vulnerable since beavers chew off the bark at a fairly precise height. This process is known as girdling, or ring-barking, and can be used for agricultural purposes. 
For example, enormous fruits and vegetables seen at fairs and carnivals are produced via girdling. A farmer would place a girdle at the base of a large branch, and remove all but one fruit/vegetable from that branch. Thus, all the sugars manufactured by leaves on that branch have no sinks to go to but the one fruit/vegetable, which thus expands to many times its normal size. Origin When the plant is an embryo, vascular tissue emerges from procambium tissue, which is at the center of the embryo. Protophloem itself appears in the mid-vein extending into the cotyledonary node, which constitutes the first appearance of a leaf in angiosperms, where it forms continuous strands. The hormone auxin, transported by the protein PIN1 is responsible for the growth of those protophloem strands, signaling the final identity of those tissues. SHORTROOT (SHR), and microRNA165/166 also participate in that process, while Callose Synthase 3 inhibits the locations where SHR, and microRNA165 can go. Additionally, the expression of NAC45/86 genes during phloem differentiation functions to enucleate specific cells in the plants to produce the sieve elements. In the embryo, root phloem develops independently in the upper hypocotyl, which lies between the embryonic root, and the cotyledon. In an adult, the phloem originates, and grows outwards from, meristematic cells in the vascular cambium. Phloem is produced in phases. Primary phloem is laid down by the apical meristem and develops from the procambium. Secondary phloem is laid down by the vascular cambium to the inside of the established layer(s) of phloem. The molecular control of phloem development from stem cell to mature sieve element is best understood for the primary root of the model plant Arabidopsis thaliana. In some eudicot families (Apocynaceae, Convolvulaceae, Cucurbitaceae, Solanaceae, Myrtaceae, Asteraceae, Thymelaeaceae), phloem also develops on the inner side of the vascular cambium; in this case, a distinction between external and internal or intraxylary phloem is made. Internal phloem is mostly primary, and begins differentiation later than the external phloem and protoxylem, though it is not without exceptions. In some other families (Amaranthaceae, Nyctaginaceae, Salvadoraceae), the cambium also periodically forms inward strands or layers of phloem, embedded in the xylem: Such phloem strands are called included or interxylary phloem. Nutritional use Phloem of pine trees has been used in Finland and Scandinavia as a substitute food in times of famine and even in good years in the northeast. Supplies of phloem from previous years helped stave off starvation in the great famine of the 1860s which hit both Finland and Sweden. Phloem is dried and milled to flour (pettu in Finnish) and mixed with rye to form a hard dark bread, bark bread. The least appreciated was silkko, a bread made only from buttermilk and pettu without any real rye or cereal flour. Recently, pettu has again become available as a curiosity, and some have made claims of health benefits. Phloem from silver birch has been also used to make flour in the past. See also Apical dominance References External links Plant anatomy Plant physiology Tissues (biology)
Phloem
[ "Biology" ]
2,288
[ "Plant physiology", "Plants" ]
61,967
https://en.wikipedia.org/wiki/Detonator
A detonator is a device used to make an explosive or explosive device explode. Detonators come in a variety of types, depending on how they are initiated (chemically, mechanically, or electrically) and details of their inner working, which often involve several stages. Types of detonators include non-electric and electric. Non-electric detonators are typically stab or pyrotechnic while electric are typically "hot wire" (low voltage), exploding bridge wire (high voltage) or explosive foil (very high voltage). The original electric detonators invented in 1875 independently by Julius Smith and Perry Gardiner used mercury fulminate as the primary explosive. Around the turn of the century performance was enhanced in the Smith-Gardiner blasting cap by the addition of 10-20% potassium chlorate. This compound was superseded by others: lead azide, lead styphnate, some aluminium, or other materials such as DDNP (diazo dinitro phenol) to reduce the amount of lead emitted into the atmosphere by mining and quarrying operations. They also often use a small amount of TNT or tetryl in military detonators and PETN in commercial detonators. History The first blasting cap or detonator was demonstrated in 1745 when British physician and apothecary William Watson showed that the electric spark of a friction machine could ignite black powder, by way of igniting a flammable substance mixed in with the black powder. In 1750, Benjamin Franklin in Philadelphia made a commercial blasting cap consisting of a paper tube full of black powder, with wires leading in both sides and wadding sealing up the ends. The two wires came close but did not touch, so a large electric spark discharge between the two wires would fire the cap. In 1832, a hot wire detonator was produced by American chemist Robert Hare, although attempts along similar lines had earlier been attempted by the Italians Volta and Cavallo. Hare constructed his blasting cap by passing a multistrand wire through a charge of gunpowder inside a tin tube; he had cut all but one fine strand of the multistrand wire so that the fine strand would serve as the hot bridgewire. When a strong current from a large battery (which he called a "deflagrator" or "calorimotor") was passed through the fine strand, it became incandescent and ignited the charge of gunpowder. In 1863, Alfred Nobel realized that although nitroglycerin could not be detonated by a fuse, it could be detonated by the explosion of a small charge of gunpowder, which in turn was ignited by a fuse. Within a year, he was adding mercury fulminate to the gunpowder charges of his detonators, and by 1867 he was using small copper capsules of mercury fulminate, triggered by a fuse, to detonate nitroglycerin. In 1868, Henry Julius Smith of Boston introduced a cap that combined a spark gap ignitor and mercury fulminate, the first electric cap able to detonate dynamite. In 1875, Smith—and then in 1887, Perry G. Gardner of North Adams, Massachusetts—developed electric detonators that combined a hot wire detonator with mercury fulminate explosive. These were the first generally modern type blasting caps. Modern caps use different explosives and separate primary and secondary explosive charges, but are generally very similar to the Gardner and Smith caps. Smith also invented the first satisfactory portable power supply for igniting blasting caps: a high-voltage magneto that was driven by a rack and pinion, which in turn was driven by a T-handle that was pushed downwards. 
Electric match caps were developed in the early 1900s in Germany, and spread to the US in the 1950s when ICI International purchased Atlas Powder Co. These match caps have become the predominant world standard cap type. Purpose The need for detonators such as blasting caps came from the development of safer secondary and tertiary explosives. Secondary and tertiary explosives are typically initiated by an explosive train starting with the detonator. For safety, detonators and the main explosive device are typically only joined just before use. Design A detonator is usually a multi-stage device with three parts: at the first stage, the initiating means (fire, electricity, etc.) provides enough energy (as heat or mechanical shock) to activate an easy-to-ignite primary explosive, which in turn detonates a small amount of a more powerful secondary explosive directly in contact with the primary, called the "base" or "output" explosive, which carries the detonation through the casing of the detonator to the main explosive device and activates it. Explosives commonly used as the primary in detonators include lead azide, lead styphnate, tetryl, and DDNP. Early blasting caps also used silver fulminate, but it has been replaced with cheaper and safer primary explosives. Silver azide is still used sometimes, but very rarely due to its high price. It is possible to construct a Non-Primary Explosive Detonator (NPED) in which the primary explosive is replaced by a flammable but non-explosive mixture that propagates a shock wave along a tube into the secondary explosive. NPEDs are harder to trigger accidentally by shock and can avoid the use of lead. As the secondary "base" or "output" explosive, TNT or tetryl is typically found in military detonators and PETN in commercial detonators. While detonators make explosive handling safer, they are hazardous to handle since, despite their small size, they contain enough explosive to injure people; untrained personnel might not recognize them as explosives or wrongly deem them not dangerous because of their appearance and handle them without the required care. Types Ordinary detonators usually take the form of ignition-based explosives. While they are mainly used in commercial operations, ordinary detonators are still used in military operations. This form of detonator is most commonly initiated using a safety fuse, and is used in non-time-critical detonations, e.g. conventional munitions disposal. Well-known primary explosives used in such detonators are lead azide [Pb(N3)2], silver azide [AgN3] and mercury fulminate [Hg(ONC)2]. There are three categories of electrical detonators: instantaneous electrical detonators (IED), short period delay detonators (SPD) and long period delay detonators (LPD). SPDs are measured in milliseconds and LPDs are measured in seconds. In situations where nanosecond accuracy is required, specifically in the implosion charges in nuclear weapons, exploding-bridgewire detonators are employed. The initial shock wave is created by vaporizing a length of thin wire with an electric discharge. A newer development is the slapper detonator, which uses thin plates accelerated by an electrically exploded wire or foil to deliver the initial shock. It is in use in some modern weapons systems. A variant of this concept is used in mining operations, where the foil is exploded by a laser pulse delivered to the foil by optical fiber. 
A non-electric detonator is a shock tube detonator designed to initiate explosions, generally for the purpose of demolition of buildings and for use in the blasting of rock in mines and quarries. Instead of electric wires, a hollow plastic tube delivers the firing impulse to the detonator, making it immune to most of the hazards associated with stray electric current. It consists of a small-diameter, three-layer plastic tube coated on the innermost wall with a reactive explosive compound which, when ignited, propagates a low-energy signal, similar to a dust explosion. The reaction travels at approximately 6,500 ft/s (2,000 m/s) along the length of the tubing with minimal disturbance outside of the tube. Non-electric detonators were invented by the Swedish company Nitro Nobel in the 1960s and 1970s, and launched on the demolition market in 1973. In civil mining, electronic detonators offer better precision for delays. Electronic detonators are designed to provide the precise control necessary to produce accurate and consistent blasting results in a variety of blasting applications in the mining, quarrying, and construction industries. Electronic detonators may be programmed in millisecond or sub-millisecond increments using a dedicated programming device. Wireless electronic detonators are beginning to be available in the civil mining market. Encrypted radio signals are used to communicate the blast signal to each detonator at the correct time. While currently expensive, wireless detonators can enable new mining techniques, as multiple blasts can be loaded at once and fired in sequence without putting humans in harm's way. A number 8 test blasting cap is one containing 2 grams of a mixture of 80 percent mercury fulminate and 20 percent potassium chlorate, or a blasting cap of equivalent strength. An equivalent strength cap comprises 0.40-0.45 grams of PETN base charge pressed in an aluminum shell, with a bottom thickness not to exceed 0.03 of an inch, to a specific gravity of not less than 1.4 g/cc, and primed with standard weights of primer depending on the manufacturer. Blasting caps The oldest and simplest type of cap, fuse caps are metal cylinders closed at one end. From the open end inwards, there is first an empty space into which a pyrotechnic fuse is inserted and crimped, then a pyrotechnic ignition mix, a primary explosive, and then the main detonating explosive charge. The primary hazard of pyrotechnic blasting caps is that, for proper usage, the fuse must be inserted and then crimped into place by crushing the base of the cap around the fuse. If the tool used to crimp the cap is used too close to the explosives, the primary explosive compound can detonate during crimping. A common hazardous practice is crimping caps with one's teeth; an accidental detonation can cause serious injury to the mouth. Fuse type blasting caps are still in active use today. They are the safest type to use around certain types of electromagnetic interference, and they have a built-in time delay as the fuse burns down. Solid pack electric blasting caps use a thin bridgewire in direct contact (hence solid pack) with a primary explosive, which is heated by electric current and causes the detonation of the primary explosive. That primary explosive then detonates a larger charge of secondary explosive. Some solid pack caps incorporate a small pyrotechnic delay element, up to a few hundred milliseconds, before the cap fires. 
Match type blasting caps use an electric match (an insulating sheet with electrodes on both sides and a thin bridgewire soldered across the sides, all dipped in ignition and output mixes) to initiate the primary explosive, rather than direct contact between the bridgewire and the primary explosive. The match can be manufactured separately from the rest of the cap and only assembled at the end of the process. Match type caps are now the most common type found worldwide. The exploding-bridgewire detonator was invented in the 1940s as part of the Manhattan Project to develop nuclear weapons. The design goal was to produce a detonator which functioned very rapidly and predictably. Both match and solid pack type electric caps take a few milliseconds to fire, as the bridgewire heats up and heats the explosive to the point of detonation. Exploding bridgewire or EBW detonators use a higher-voltage electric charge and a very thin bridgewire, 0.04 inch long and 0.0016 inch in diameter (1 mm long, 0.04 mm diameter). Instead of heating up the explosive, the EBW detonator wire is heated so quickly by the high firing current that the wire actually vaporizes and explodes due to electric resistance heating. That electrically driven explosion causes the low-density initiating explosive (usually PETN) to detonate, which in turn detonates a higher-density secondary explosive (typically RDX or HMX) in many EBW designs. In addition to firing very quickly when properly initiated, EBW detonators are much safer than blasting caps with respect to stray static electricity and other electric currents. Enough current will melt the bridgewire, but it cannot detonate the initiator explosive without the full high-voltage, high-current charge passing through the bridgewire. EBW detonators are used in many civilian applications where radio signals, static electricity, or other electrical hazards might cause accidents with conventional electric detonators. Exploding foil initiators (EFI), also known as slapper detonators, are an improvement on EBW detonators. Slappers, instead of directly using the exploding foil to detonate the initiator explosive, use the electrical vaporization of the foil to drive a small circle of insulating material such as PET film or Kapton down a circular hole in an additional disc of insulating material. At the far end of that hole is a pellet of high-density secondary explosive. Slapper detonators omit the low-density initiating explosive used in EBW designs, and they require much greater energy density than EBW detonators to function, making them inherently safer. Laser initiation of explosives, propellants or pyrotechnics has been attempted in three different ways: (1) direct interaction with the HE, or Direct Optical Initiation (DOI); (2) rapid heating of a thin film in contact with an HE; and (3) ablating a thin metal foil to produce a high-velocity flyer plate that impacts the HE (laser flyer). See also References Further reading Cooper, Paul W. Explosives Engineering. New York: Wiley-VCH, 1996. External links 1956 safety film "Blasting Cap - Danger!" from Prelinger Archives Modelling and Simulation of Burst Phenomenon in Electrically Exploded Foils Bombs Explosives Pyrotechnic initiators
Detonator
[ "Chemistry" ]
2,887
[ "Explosives", "Explosions" ]
61,978
https://en.wikipedia.org/wiki/Nelson%2C%20New%20Zealand
Nelson () is a consolidated city and unitary authority on the eastern shores of Tasman Bay at the top of the South Island of New Zealand. It is the oldest city in the South Island and the second-oldest settled city in the country; it was established in 1841 and became a city by British royal charter in 1858. It is the only consolidated city-region in the nation. Nelson City is bordered to the west and south-west by the Tasman District and to the north-east, east and south-east by the Marlborough District. The Nelson urban area has a population of , making it New Zealand's 15th most populous urban area. Nelson is well known for its thriving local arts and crafts scene; each year, the city hosts events popular with locals and tourists alike, such as the Nelson Arts Festival. Etymology Nelson was named in honour of Admiral Horatio Nelson, who defeated both the French and Spanish fleets at the Battle of Trafalgar in 1805. Many roads and public areas around the city are named after people and ships associated with that battle. Inhabitants of the city are referred to as Nelsonians; Trafalgar Street is its main shopping axis. Nelson's Māori name, Whakatū, means 'construct', 'raise', or 'establish'. In an article in The Colonist newspaper on 16 July 1867, Francis Stevens described Nelson as "The Naples of the Southern Hemisphere". Today, Nelson has the nicknames "Sunny Nelson", owing to its high sunshine hours per year, and the "Top of the South", because of its geographic location. In New Zealand Sign Language, the name is signed by putting the index and middle fingers together, raising them to the nose until the fingertips touch it, and then moving the hand forward so that the fingers point slightly forward away from oneself. History Early settlement Settlement of Nelson by Māori began about 700 years ago. There is evidence that the earliest settlements in New Zealand were around the Nelson-Marlborough region. Some of the earliest recorded iwi in the Nelson district are Ngāti Hāwea, Ngāti Wairangi, Waitaha and Kāti Māmoe. Waitaha people, who developed the land around the Waimea Gardens, are believed to have been the first people to quarry argillite in the area around Nelson. They also developed much of the Waimea Gardens complex – more than 400 hectares on the Waimea Plains near Nelson. In the early 1600s, Ngāti Tūmatakōkiri displaced other te Tau Ihu Māori, becoming the dominant tribe in the area until the early 1800s. Raids from northern tribes in the 1820s, led by Te Rauparaha and his Ngāti Toa, soon decimated the local population and quickly displaced them. Today there are eight mutually recognised tribes of the north-western region: Ngāti Kuia, Ngāti Apa ki te Rā Tō, Rangitāne, Ngāti Toarangatira, Ngāti Koata, Ngāti Rārua, Ngāti Tama and Te Atiawa o Te Waka-a-Māui. Historic places There are three main historic places located in Nelson: Broadgreen Historic House, Isel House, and Founders Heritage Park. Broadgreen Historic House was originally built in 1855 for Mr and Mrs Edmund Buxton and their six daughters. The house was later sold to Fred Langbein in 1901, who lived there with his family until 1965. In 1965, the house was bought by the Nelson City Council and is now operated as a museum for the general public. Isel House is a local historical building in Nelson. It was home to one of Nelson's first families, the Marsdens. Many of the rooms have been transformed into displays for the public to view. 
The restoration of Isel House is managed by the Isel House Charitable Trust under the supervision of Sally Papps, but the house and the park grounds surrounding it are owned by the Nelson City Council. Founders Heritage Park is a local historical attraction in Nelson. This interactive park shows visitors the history of Nelson. The park is set up as a village of buildings from a historical period, including well-established gardens, and throughout the park there are stories to be learned about the history of the town. New Zealand Company Planning The New Zealand Company in London planned the settlement of Nelson. The company intended to buy an area of land from the Māori, which it planned to divide into one thousand lots and sell to intending settlers. The company earmarked profits to finance the free passage of artisans and labourers, with their families, and for the construction of public works. However, by September 1841 only about one third of the lots had been sold. Despite this, the colony pushed ahead, and land was surveyed by Frederick Tuckett. Three ships, the Arrow, Whitby, and Will Watch, sailed from London commanded by Captain Arthur Wakefield. Arriving in New Zealand, they discovered that the new Governor of the colony, William Hobson, would not give them a free hand to secure vast areas of land from the Māori or indeed to decide where to site the colony. However, after some delay, Hobson allowed the company to investigate the Tasman Bay area at the north end of the South Island. The Company selected the site now occupied by Nelson City because it had the best harbour in the area. But it had a major drawback: it lacked suitable arable land; Nelson City stands right on the edge of a mountain range, while the nearby Waimea Plains amount to only about , less than one third of the area required by the Company's plans. The Company secured land from the Māori for £800, though its extent was not clearly defined; it included Nelson, Waimea, Motueka, Riwaka and Whakapuaka. This allowed the settlement to begin, but the lack of definition would prove the source of much future conflict. The three colony ships sailed into Nelson Haven during the first week of November 1841. When the first four immigrant ships – Fifeshire, Mary-Ann, Lord Auckland and Lloyds – arrived three months later, they found the town already laid out with streets, some wooden houses, tents and rough sheds. The town was laid out on a grid plan. Within 18 months, the company had sent out 18 ships with 1,052 men, 872 women and 1,384 children. However, fewer than ninety of the settlers had the capital to start as landowners. Cultural and religious immigrants The early settlement of Nelson province included a proportion of German immigrants, who arrived on the ship Sankt Pauli and formed the nucleus of the villages of Sarau (Upper Moutere) and Neudorf. These were mostly Lutheran Protestants with a small number of Bavarian Catholics. In 1892, the New Zealand Church Mission Society (NZCMS) was formed in a Nelson church hall. Problems with land After a brief initial period of prosperity, the lack of land and of capital caught up with the settlement and it entered a prolonged period of relative depression. The labourers had to accept a cut in their wages. Organised immigration ceased (a state of affairs that continued until the 1850s). By the end of 1843, artisans and labourers began leaving Nelson; by 1846, some 25% of the immigrants had moved away. The pressure to find more arable land became intense. 
To the south-east of Nelson lay the wide and fertile plains of the Wairau Valley. The New Zealand Company tried to claim that they had purchased the land. The Māori owners stated adamantly that the Wairau Valley had not formed part of the original land sale, and made it clear they would resist any attempts by the settlers to occupy the area. The Nelson settlers led by Arthur Wakefield and Henry Thompson attempted to do just that. This resulted in the Wairau Affray, where 22 settlers and 4 Māori died. The subsequent Government inquiry exonerated the Māori and found that the Nelson settlers had no legitimate claim to any land outside Tasman Bay. Public fears of a Māori attack on Nelson led to the formation of the Nelson Battalion of Militia in 1845. City Nelson township was managed by the Nelson Provincial Council through a Board of Works constituted by the Provincial Government under the Nelson Improvement Act 1856 until 1874. It was proclaimed a Bishop's See and city under letters patent by Queen Victoria on 27 September 1858, the second New Zealand city proclaimed in this manner after Christchurch. Nelson only had some 5,000 residents at this time. Edmund Hobhouse was the first Bishop. The Municipal Corporations Act 1876 stated that Nelson was constituted a city on 30 March 1874. Coat of arms Nelson City has a coat of arms, obtained in 1958 from the College of Arms to mark the Centenary of Nelson as a City. The blazon of the arms is: "Barry wavy Argent and Azure a Cross Flory Sable on a Chief also Azure a Mitre proper And for the Crest on a Wreath of the Colours Issuant from a Mural Crown proper a Lion rampant Gules holding between the fore paws a Sun in splendour or. The supporters on the dexter side a Huia Bird and on the sinister side a Kotuku both proper." Motto "Palmam qui meruit ferat" (Let him, who has earned it, bear the palm). This motto is the same as that of Lord Nelson. Nelson Province From 1853 until 1876, when provincial governments were abolished, Nelson was the capital of Nelson Province. The province itself was much larger than present-day Nelson City and included all of the present-day Buller, Kaikōura, Marlborough, Nelson, and Tasman, as well as the Grey District north of the Grey River and the Hurunui District north of the Hurunui River. The Marlborough Province split from Nelson Province in October 1859. Nelson provincial anniversary Nelson Anniversary Day is a public holiday observed in the northern half of the South Island of New Zealand, being the area's provincial anniversary day. It is observed throughout the historic Nelson Province, even though the provinces of New Zealand were abolished in 1876. The modern area of observation includes all of Nelson City and includes all of the present-day Buller, Kaikōura, Marlborough, Tasman districts as well as the Grey District north of the Grey River / Māwheranui and the Hurunui District north of the Hurunui River. The holiday usually falls on the Monday closest to 1 February, the anniversary of the arrival of the first New Zealand Company boat, the Fifeshire on 1 February 1842. Anniversary celebrations in the early years featured a sailing regatta, horse racing, running races, shooting and ploughing matches. In 1892, the Nelson Jubilee Celebration featured an official week-long programme with church services, sports, concerts, a ball and a grand display of fireworks. Time gun In 1858, the Nelson Provincial Council erected a time gun at the spot on Brittania Heights where, in 1841, Captain Wakefield erected his flagpole. 
The gun was fired each Saturday at noon to give the correct time. The gun is now preserved as a historical relic, and the Songer Tree marks the site on Signal Hill of the original flagpole. Geography The Nelson-Tasman area comprises two unitary authorities – Nelson City, administered by the Nelson City Council, and Tasman District, administered by the Tasman District Council, based in Richmond to the southwest. It lies between Marlborough, another unitary authority, to the east, and the West Coast Regional Council to the west. For some time, there has been talk of amalgamating Nelson City and the Tasman District to streamline, and make more financially economical, the existing co-operation between the two councils, exemplified by the jointly owned Port Nelson and the creation of Nelson Tasman Tourism, a jointly owned tourism promotion organisation. However, an official poll conducted in April 2012 showed nearly three-quarters of those who voted in Richmond were opposed to the proposal, while a narrow majority of Nelson voters were in favour. Nelson has beaches and a sheltered harbour. The harbour entrance is protected by the Boulder Bank, a natural bank of rocks transported south from Mackay Bluff via longshore drift. The bank creates a perfect natural harbour which enticed the first settlers, although the entrance was narrow. The wreck of the Fifeshire on Arrow Rock (now called Fifeshire Rock in memory of this disaster) in 1842 proved the difficulty of the passage. A cut was later made in the bank in 1906, which allowed larger vessels access to the port. The creation of Rocks Road around the waterfront area after the Tāhunanui slump in 1892 increased the effects of the tide on Nelson city's beach, Tāhunanui, and removed sediment. This meant the popular beach, the adjoining car park and the sand dunes were being eroded, so a project to replace the sand was put in place; it has so far proved a success, with the sand rising a considerable amount and the dunes continuing to grow. Waterways The Nelson territorial authority area is small (just 445 km2) and has four main waterways: the Whangamoa, Wakapuaka, Maitai and Roding Rivers. The Roding River, the southernmost in Nelson, arises in the hills between Mount Meares and Dun Mountain. From there it flows westward before entering the Tasman District, where it eventually joins the Waimea River, which flows into Waimea Inlet near Rabbit Island. The Maitai River flows westward from the Dun Mountain area into the town centre of Nelson before entering the Nelson Haven and then Tasman Bay via 'The Cut'. Major tributaries of the Maitai River are York and Brook Streams plus Sharland, Packer, Groom, Glen, Neds, Sclanders, Beauchamp and Mill Creeks. The Wakapuaka River, which flows north from the Saddle Hill area to its mouth at Cable Bay in North Nelson, has two main tributaries, the Lud and Teal Rivers. Entering Tasman Bay near Kokorua in the north of Nelson, the Whangamoa River is the longest waterway in Nelson. Smaller waterways in the south of Nelson include Saxton Creek, Orchard Stream, Poorman Valley Stream, Arapiki Stream, Jenkins Creek and Maire Stream. Central city The central city of Nelson, also referred to as the central business district (CBD), is bounded by Halifax Street to the north, Rutherford Street to the west, Collingwood Street to the east, and Selwyn Place to the south. Other major streets within the CBD include Trafalgar Street, Bridge Street and Hardy Street. 
Suburbs Suburbs within Nelson City's territorial borders are grouped into four city districts: Nelson North: Glenduan Wakapuaka Todds Valley Marybank Atawhai Dodson Valley Brooklands City Centre: Nelson Central Port Nelson Beachville The Wood Hanby Park Maitai Nelson East Nelson South Toi Toi (Victory Village) Bishopdale The Brook Washington Valley Stepneyville Britannia Heights Tāhunanui-Port Hills: Tāhunanui Enner Glynn Moana Tasman Heights Annesbrook Wakatu Stoke: Stoke Greenmeadows Park Nayland Monaco Maitlands Saxton The Nelson commuter belt extends to Richmond, Brightwater, Hope, Māpua and Wakefield in the Tasman District. National parks Nelson is surrounded by mountains on three sides and Tasman Bay / Te Tai-o-Aorere on the fourth, with its region acting as the gateway to the Abel Tasman, Kahurangi, and Nelson Lakes National Parks. It is a centre for both ecotourism and adventure tourism and has a high reputation among caving enthusiasts due to several prominent cave systems around Takaka Hill and the Wharepapa / Arthur Range, including the Nettlebed Cave and some of the largest and deepest explored caverns in the Southern Hemisphere. Nelson is known for the lakes, hikes and walks surrounding the town, the most popular being the Abel Tasman Coast Track in Abel Tasman National Park and the Heaphy Track. The tracks are also used for a range of recreational activities, and there are many huts and camping grounds available to stay in along them. There are places to fish, hunt and observe nature within the national parks and around the lakes. Climate Nelson has a temperate oceanic climate (Cfb), with cool winters and warm summers. Nelson has rainfall evenly distributed throughout the year and has fewer frosts due to the highly marine geography of New Zealand. Winter is the stormiest time, when gales and storms are more common. Nelson has one of the sunniest climates of all major New Zealand centres, earning it the nickname 'Sunny Nelson', with an annual average total of over 2,400 hours of sunshine. The highest recorded temperature in Nelson is , the lowest . "Centre of New Zealand" monument Nelson has a monument on Botanical Hill, near the centre of the city. The walk to it is called the "Centre of New Zealand walk". Despite the name, this monument does not mark the actual geographic centre of New Zealand. Instead, the monument marks the "zero, zero" point to which the first geodetic surveys of New Zealand were referenced. These surveys were started in the 1870s by John Spence Browning, the Chief Surveyor for Nelson. From this 360-degree viewpoint, survey marks in neighbouring regions (including Wellington in the North Island) could be triangulated and the local surveys connected. In 1962, Dr Ian Reilly of the now defunct Department of Scientific and Industrial Research calculated the geographic centre of New Zealand (including Stewart Island and some smaller islands in addition to the North and South Islands, but excluding the Chathams) to be in a forest in the Spooners Range southwest of Nelson at . Owing to the coarse nature of the underlying data (rectangular areas of 7.5 minutes of arc on each side), the centre calculated by Dr Reilly has quite large error margins. Recalculating the result with more modern and accurate data shows the geographic centre of New Zealand to be approximately 60 km southwest of Nelson, in the Big Bush Conservation Area north of Saint Arnaud, New Zealand. 
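The grid-averaging idea behind Reilly's calculation can be illustrated with a short sketch. This is not his actual method or data: the cells, coordinates and land fractions below are invented purely for illustration, and the cosine weighting is only a rough way of accounting for the east-west shrinkage of equal-angle cells with latitude, not a rigorous spherical centroid.

```python
import math

# Hypothetical grid cells: (latitude_deg, longitude_deg, land_fraction).
# Each cell is assumed to span 7.5 arc-minutes (0.125 degrees) on a side.
cells = [
    (-41.250, 172.875, 0.90),
    (-41.375, 172.750, 0.65),
    (-41.500, 173.000, 0.40),
    (-41.625, 172.625, 1.00),
]
CELL_SIZE_DEG = 7.5 / 60.0


def grid_centroid(cells, cell_size_deg):
    """Area-weighted centroid of the land contained in equal-angle grid cells.

    Each cell's land area is approximated as
    land_fraction * cell_size**2 * cos(latitude), since the east-west width
    of a cell shrinks with latitude. Flat-earth approximation only.
    """
    total_weight = weighted_lat = weighted_lon = 0.0
    for lat, lon, land_fraction in cells:
        weight = land_fraction * cell_size_deg ** 2 * math.cos(math.radians(lat))
        total_weight += weight
        weighted_lat += weight * lat
        weighted_lon += weight * lon
    return weighted_lat / total_weight, weighted_lon / total_weight


if __name__ == "__main__":
    lat_c, lon_c = grid_centroid(cells, CELL_SIZE_DEG)
    print(f"Approximate centroid: {lat_c:.4f}, {lon_c:.4f}")
```

With finer cells and more accurate coastline data the weighted average shifts, which is why a modern recalculation differs from the 1962 estimate.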
Demographics Nelson covers and had an estimated population of as of with a population density of people per km2. Nelson City had a population of 52,584 in the 2023 New Zealand census, an increase of 1,704 people (3.3%) since the 2018 census, and an increase of 6,147 people (13.2%) since the 2013 census. There were 25,620 males, 26,712 females and 255 people of other genders in 20,967 dwellings. 3.6% of people identified as LGBTIQ+. The median age was 44.0 years (compared with 38.1 years nationally). There were 8,712 people (16.6%) aged under 15 years, 8,226 (15.6%) aged 15 to 29, 24,285 (46.2%) aged 30 to 64, and 11,361 (21.6%) aged 65 or older. People could identify as more than one ethnicity. The results were 84.7% European (Pākehā); 11.9% Māori; 2.8% Pasifika; 8.6% Asian; 1.4% Middle Eastern, Latin American and African New Zealanders (MELAA); and 2.7% other, which includes people giving their ethnicity as "New Zealander". English was spoken by 96.9%, Māori language by 2.9%, Samoan by 0.5% and other languages by 12.8%. No language could be spoken by 1.7% (e.g. too young to talk). New Zealand Sign Language was known by 0.6%. The percentage of people born overseas was 26.4, compared with 28.8% nationally. Religious affiliations were 28.2% Christian, 1.1% Hindu, 0.5% Islam, 0.3% Māori religious beliefs, 1.2% Buddhist, 0.7% New Age, 0.1% Jewish, and 1.5% other religions. People who answered that they had no religion were 59.1%, and 7.5% of people did not answer the census question. Of those at least 15 years old, 8,472 (19.3%) people had a bachelor's or higher degree, 22,197 (50.6%) had a post-high school certificate or diploma, and 10,218 (23.3%) people exclusively held high school qualifications. The median income was $38,800, compared with $41,500 nationally. 3,906 people (8.9%) earned over $100,000 compared to 12.1% nationally. The employment status of those at least 15 was that 20,679 (47.1%) people were employed full-time, 6,825 (15.6%) were part-time, and 969 (2.2%) were unemployed. Urban area Nelson's urban area covers and had an estimated population of as of with a population density of people per km2. The urban area had a population of 49,224 in the 2023 New Zealand census, an increase of 1,095 people (2.3%) since the 2018 census, and an increase of 4,953 people (11.2%) since the 2013 census. There were 23,997 males, 24,984 females and 243 people of other genders in 19,701 dwellings. 3.7% of people identified as LGBTIQ+. The median age was 43.5 years (compared with 38.1 years nationally). There were 8,181 people (16.6%) aged under 15 years, 7,830 (15.9%) aged 15 to 29, 22,782 (46.3%) aged 30 to 64, and 10,431 (21.2%) aged 65 or older. People could identify as more than one ethnicity. The results were 84.1% European (Pākehā); 12.2% Māori; 2.9% Pasifika; 9.0% Asian; 1.4% Middle Eastern, Latin American and African New Zealanders (MELAA); and 2.7% other, which includes people giving their ethnicity as "New Zealander". English was spoken by 96.8%, Māori language by 3.0%, Samoan by 0.6% and other languages by 13.0%. No language could be spoken by 1.7% (e.g. too young to talk). New Zealand Sign Language was known by 0.6%. The percentage of people born overseas was 26.5, compared with 28.8% nationally. Religious affiliations were 28.2% Christian, 1.1% Hindu, 0.5% Islam, 0.3% Māori religious beliefs, 1.2% Buddhist, 0.7% New Age, 0.1% Jewish, and 1.5% other religions. People who answered that they had no religion were 59.0%, and 7.5% of people did not answer the census question. 
Of those at least 15 years old, 7,899 (19.2%) people had a bachelor's or higher degree, 20,718 (50.5%) had a post-high school certificate or diploma, and 9,657 (23.5%) people exclusively held high school qualifications. The median income was $38,900, compared with $41,500 nationally. 3,555 people (8.7%) earned over $100,000 compared to 12.1% nationally. The employment status of those at least 15 was that 19,488 (47.5%) people were employed full-time, 6,303 (15.4%) were part-time, and 933 (2.3%) were unemployed. Economy The Nelson economy (and that of the neighbouring Tasman District) is based on the 'big five' industries; seafood, horticulture, forestry, farming and tourism. Port Nelson is the biggest fishing port in Australasia. There are also a range of growth industries, including art and craft, aviation, engineering technology, and information technology. The region is sixth in terms of GDP growth in the 2007–10 period. The combined sub-national GDP of Nelson and Tasman District was estimated at $3.4 billion in 2010, 1.8% of New Zealand's national GDP. Nelson is home to various business agencies that serve the city and its surrounds, including Nelson Tasman Tourism (NTT), which aims to promote the region and help advertisers reach visitors from New Zealand and overseas, and the Nelson Regional Economic Development Agency (EDA), which works to "coordinate, promote, facilitate, investigate, develop, implement, support and fund initiatives relating to economic development [and] employment growth ... within the Nelson region ..." Below is a list of some of the region's largest companies and employers: Former regional airline Air Nelson had its headquarters and maintenance base at Nelson Airport. Helicopters (NZ) has its headquarters and maintenance base at Nelson Airport. Japanese automobile manufacturer Honda has its New Zealand distribution centre in the Whakatu Industrial Estate in Stoke. Beverage company McCashins has a microbrewery in Stoke Sea Dragon Marine Oils has a fish oil refinery in Annesbrook. The Cawthron Institute has a research facility in The Wood. Food manufacturer, the Talley's Group has processing facilities at Port Nelson. The New Zealand King Salmon Company processes Chinook salmon at its factory in Annesbrook. Pic's Peanut Butter is made in its Stoke factory. In 2013, Nelson Mayor Aldo Miccio worked on a proposal that would see Australian call centres for companies such as Gen-i and Xero relocated to Nelson. The plan was in response to Australian companies moving call and contact centres out of Asia because their Australian customers preferred English-speaking centres. If the plan was successful, Mr Miccio expected 100 to 300 jobs paying NZ$50,000-plus in the first year to be created in Nelson. Government Local As a unitary authority, the Nelson City Council has the combined responsibilities and functions of both a territorial (local) and regional council. This is different from most other local authorities in New Zealand. More often, a regional council is a separate organisation with several territorial authorities (city or district councils) within its borders. Other unitary authorities are the Auckland Council, Gisborne District Council, Marlborough District Council, Tasman District Council and the Chatham Islands Council. The Nelson City Council currently holds its elections under the First Past the Post electoral system once every three years, with the most recent election held on 12 October 2019. 
Electors vote by indicating their choice for Mayor by placing a tick beside one of the names, and the person who receives the most votes becomes Mayor. Councillors are elected the same way, but voters could cast multiple votes, with the 12 candidates who each receive the most votes becoming Councillors. Voters in this system may vote for no more than 12 candidates. The elections are conducted by post over a three-week period to make it as convenient as possible for people to vote. The other option permitted under the Local Electoral Act 2001, but not currently used in Nelson, is the Single Transferable Vote system. Multiple-member districts are used. Electors vote by ranking candidates in order of preference by placing a number beside candidates' names. The elector can mark a preference for one or up to the total number of candidates on the paper. The number of votes required for a candidate to be elected, the quota, depends on the number of positions to be filled and the number of valid votes. (Election of mayor may be held using the Instant-runoff vote method.) Under the Local Electoral Act 2002, the Nelson City Council can resolve to change the electoral system to be used for the next two elections, and it must review this decision every six years. A referendum was held in 2003 to decide which electoral system would be used for the 2004 and 2007 Nelson City Council elections. The outcome was that the First Past the Post system was retained. The 2008 review retains that system for the 2010 and 2013 elections. On 12 October 2013, Rachel Reese was elected as Nelson's first woman mayor after receiving 1,500 votes more than incumbent mayor Aldo Miccio. As of 13 October 2022, the current council members for the 2022 to 2025 term are: National Nelson is covered by one general electorate: Nelson and one Māori electorate: Te Tai Tonga. As of the 2023 general election, Nelson is held by Rachel Boyack of the Labour Party. The Māori electorate Te Tai Tonga, which covers the entire South Island and part of Wellington in the North Island, is currently held by Te Pāti Māori and represented by Tākuta Ferris. Culture and the arts As the major regional centre, the city offers many lodgings, restaurants, and unique speciality shopping such as at the Jens Hansen Goldsmiths where "The One Ring" in The Lord of the Rings film trilogy was designed. Nelson has a vibrant local music and arts scene and is known nationwide for its culturally idiosyncratic craftsmen. These include potters, glass blowers (such as Flamedaisy Glass Design and Höglund Art Glass Studio & Gallery), and dozens of wood carvers using native New Zealand southern beech and exotic Cupressus macrocarpa. Nelson is a popular visitor destination and year-round attracts both New Zealanders and international tourists. The Nelson Saturday Market is a popular weekly market where one can buy direct from local artists. The Theatre Royal was restored in 2010 and is the oldest wooden functioning theatre in the Southern Hemisphere (built 1878) Art organisations include the Suter Art Gallery and Nelson Arts Festival. The Victory Village community received the 2010 New Zealander of the Year award for Community of the Year. The first rugby union match in New Zealand took place at the Botanic Reserve in Nelson on 14 May 1870, between the Nelson Suburbs FC and Nelson College, and an informative commemorative plaque was renovated at the western edge of the grassed area by Nelson City Council in 2006. 
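The quota used in single transferable vote counts, mentioned in the Government section above, can be made concrete with a small sketch. It uses the common Droop quota purely as an illustration; the statutory STV rules for New Zealand local elections differ in detail, so the formula, the rounding and the example numbers here should be read as assumptions rather than the prescribed procedure.

```python
def droop_quota(valid_votes: int, seats: int) -> int:
    """Smallest whole number of votes that at most `seats` candidates can reach.

    Illustrative only: real STV counts (including the variant used for
    New Zealand local elections) apply their own quota and transfer rules.
    """
    return valid_votes // (seats + 1) + 1


# Hypothetical example: 20,000 valid votes electing 12 councillors.
print(droop_quota(20_000, 12))  # 1539
```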
Marae Whakatū Marae, in the suburb of Atawhai, is the marae (meeting ground) of Ngāti Kuia, Ngāti Kōata, Ngāti Rārua, Ngāti Tama ki Te Tau Ihu, Ngāti Toa Rangatira and Te Atiawa o Te Waka-a-Māui. It includes the Kākāti wharenui (meeting house). In October 2020, the Government committed $240,739 from the Provincial Growth Fund to restore the marae, creating an estimated 9 jobs. Events and festivals Several major events take place: Nelson Jazz & Blues Festival – January Nelson Kite Festival – January Nelson Yacht Regatta – January Baydreams-Nelson – January Taste Tasman – January Evolve Festival – January Adam Chamber Music Festival – biennial – January / February International Kai Festival – February Weet-bix Kids TRYathlon – March Evolve Festival – February Marchfest – March Taste Nelson festival – March Te Ramaroa Light Festival – biennial in June/July Winter Music Festival – July Nelson Arts Festival – October NZ Cider Festival – November Nelson A&P Show – November World of Wearable Art Awards The annual World of Wearable Art Awards was founded in Nelson in 1987 by Suzie Moncrieff. The first show was held at the restored William Higgins cob cottage in Spring Grove, near Brightwater. The show moved to Wellington in 2005 when it became too big to hold in Nelson. A local museum showcased winning designs alongside their collection of classic cars until the venture was forced to close because of the COVID-19 pandemic. The classic car museum re-opened in 2020. Architecture The tallest building in Nelson is the tall Rutherford Hotel located on the west edge of Trafalgar Square. Unlike many towns and cities in New Zealand, Nelson has retained many Victorian buildings in its historic centre and the South Street area has been designated as having heritage value. Surviving historic buildings Nelson Cathedral Amber House Broadgreen House Cabragh House Chez Eelco Fairfield House Founders Park Windmill Isel House Melrose House Nelson Central School Renwick House Theatre Royal Victorian Rose Pub Redwood College (Founders Park) Nelson Centre of Musical Arts (formerly Nelson School of Music) Est. 1894 Museums The Nelson region houses several museums. The Founders Heritage Park houses a number of groups with historical themes, including transport. The Nelson Provincial Museum houses a collection of locally significant artefacts. The Nelson Classic Car Museum houses a collection of collectable cars. Parks and zoo Nelson has a large number and variety of public parks and reserves maintained at public expense by Nelson City Council. Major reserves include Grampians Reserve, close to the suburb of Braemar, and the botanical Reserve in the east of Nelson, close to The Wood. Natureland Zoological Park is a small zoological facility close to Tāhunanui Beach. The facility is popular with children, where they can closely approach wallabies, monkeys, meerkats, llamas and alpacas, Kune Kune pigs, otters, and peacocks. There are also turtles, tropical fish and a walk through aviary. Although the zoo nearly closed in 2008, the Orana Wildlife Trust took over its running instead. It looked like a bright future ahead for Natureland and its staff, but since the repeated earthquakes in Christchurch in 2011 and the damage to Orana Park, Orana Wildlife Trust are uncertain of the future of Natureland. Orana Wildlife trust have since pulled out of Natureland, which is now run independently. 
Sister cities Nelson has sister city relationships with: Miyazu, Japan (1976) Huangshi, China (1996) Yangjiang, China (2014) Sport Major sports teams Major venues Infrastructure and services Healthcare The main hospital in Nelson is the Nelson Hospital. It is the seat of the Nelson Marlborough District Health Board. The Manuka Street Hospital is a private institution. Law enforcement The Nelson Central Police Station, located in St John Street, is the headquarters for the Tasman Police District. The Tasman Police District has the lowest crime rate in New Zealand. Several gangs have established themselves in Nelson. They include the now-disbanded Lost Breed and the Red Devils, a support club for the Hells Angels. The Rebels Motorcycle Club also has a presence in the wider Nelson-Tasman area. Electricity The Nelson City Municipal Electricity Department (MED) established the city's public electricity supply in 1923, with electricity generated by a coal-fired power station at Wakefield Quay. The city was connected to the newly commissioned Cobb hydroelectric power station in 1944 and to the rest of the South Island grid in 1958. The grid connection saw the Wakefield Quay power station relegated to standby duty before it was decommissioned in 1964. Today, Nelson Electricity operates the local distribution network in the former MED area, which covers the CBD and inner suburbs, while Network Tasman operates the local distribution network in the outer suburbs (including Stoke, Tāhunanui and Atawhai) and rural areas. Transport Air transport Nelson Airport is located southwest of the city, at Annesbrook. The airport operates a single terminal and runway, and in 2018 was the fifth-busiest airport in New Zealand by passenger numbers. There are more than a million passenger movements through the airport terminal annually, and the airport averages 90 aircraft movements every day, with a plane taking off or landing every 4.5 minutes during scheduled hours. It is primarily used for domestic flights, with regular flights to and from Auckland, Christchurch, Hamilton, Kapiti Coast, Palmerston North and Wellington. Nelson Airport was home to the former airline Air Nelson, which operated and maintained New Zealand's largest domestic airline fleet, and was also the headquarters of Origin Pacific Airways until its collapse in 2006. In 2006, the airport received restricted international airport status to facilitate small private jets. In February 2018, the approach road to the airport was flooded when the adjoining Jenkins Creek burst its banks during a storm that brought king tides and strong winds. The airport was closed for about one hour. In 2022, the NZ SeaRise programme identified Nelson Airport as one area of particular vulnerability to sea level rise, with a projected subsidence of per year. The airport's Chief Executive said that the proposed runway extension would be planned around the latest sea level rise forecast, and that the airport was "here to stay", despite the concerns over the threats posed by sea level rise. Maritime transport Port Nelson is the maritime gateway for the Nelson, Tasman and Marlborough regions and a vital hub for economic activity. 
The following shipping companies call at Port Nelson: Australian National Line / CMA CGM Maersk Line Mediterranean Shipping Company Pacifica Shipping Toyofuji Shipping Swire Shipping In mid-1994, a group of local businessmen, fronted by local politician Owen Jennings, proposed building a deep-water port featuring a one-kilometre-long wharf extending from the Boulder Bank into Tasman Bay, where giant ships could berth and manoeuvre with ease. Known as Port Kakariki, the $97 million project was to become the hub for shipping West Coast coal to Asia, as well as handling logs, which would be barged across Tasman Bay from Mapua. In January 2010, the Western Blue Highway, a Nelson to New Plymouth ferry service, was proposed by Port Taranaki. However, to date, neither the Interislander nor Bluebridge has shown any interest in the route. Anchor Shipping and Foundry Company The 'Anchor Shipping and Foundry Company' was formed on 31 March 1901 from the earlier companies of Nathaniel Edwards & Co (1857–1880) and the Anchor Steam Shipping Company (1880–1901). The Anchor Company never departed from its original aim of providing services to the people of Nelson and the West Coast of the South Island and was never a large company; it only owned 37 ships during its history. At its peak around 1930, there were 16 vessels in the fleet. The company operated a ferry service of three nightly return trips per week between Nelson and Wellington, and a daily freight service was maintained between the two ports in conjunction with the Pearl Kasper Shipping Company, while another service carried general cargo on a Nelson-Onehunga route. In 1974, the Anchor Company was sold and merged into the Union Company. Public transport Nelson Motor Service Company ran the first motor bus in Nelson in 1906 and took over the Palace horse buses in 1907. Ebus Ebus provides public transport services between Nelson, Richmond, Motueka and Wakefield as well as on two local routes connecting Atawhai, Nelson Hospital, The Brook and the Airport. The Late Late Bus is a weekend night transport service between Nelson and Richmond. NBus Cards were replaced by Bee Cards on 3 August 2020. InterCity provides daily bus services connecting Nelson with towns and cities around the South Island. Taxis and shuttle vans Taxi companies in Nelson include the following: Nelson Bays Cabs Nelson City Taxis Sun City Taxis Rail transport Nelson is one of only five major urban areas in New Zealand without a rail connection – the others being Taupō, Rotorua, Gisborne and Queenstown. The Nelson Section was an isolated, gauge, government-owned railway line between Nelson and Glenhope. It operated for years between 1876 and 1955. In 1886, a route was proposed from Nelson to the junction with the Midland Railway Company at Buller via Richmond, Waimea West, Upper Moutere, Motueka, the Motueka Valley, Tadmor and Glenhope. The only sign of rail activity in Nelson today is a short heritage operation run by the Nelson Railway Society from Founders Heritage Park using their own line between Wakefield Grove and Grove. The society has proposed future extensions of their line, possibly into or near the city centre. There have been several proposals to connect Nelson to the South Island rail network, but none have come to fruition. Horse tramway The Dun Mountain Railway was a horse-drawn tramway serving a mine. Road transport The Nelson urban area is served by , which runs in a north to southwest direction. 
The highway travels through the city and nearby town of Richmond, continuing southwest across the plains of the Wairoa and Motueka Rivers. Plans to construct a motorway linking North Nelson to Brightwater in the south have so far been unsuccessful. A number of studies have been undertaken since 2007 including the 2007 North Nelson to Brightwater Study, the Southern Link Road Project and the Arterial Traffic Study. On 28 June 2013, the Nelson Mayor Aldo Miccio and Nelson MP Nick Smith jointly wrote to Transport Minister Gerry Brownlee seeking for the Southern Link to be given Road of National Significance (RoNS) status. Other significant road projects proposed over the years include a cross-city tunnel from Tāhunanui Drive to Haven Road; or from Annesbrook (or Tāhunanui) to Emano Street in Victory Square; or from Tāhunanui to Washington Valley. The passenger and freight company Newmans Coach Lines was formed in Nelson in 1879, and merged with Transport Nelson in 1972. Education Secondary schools Garin College Nayland College Nelson College Nelson College for Girls Tertiary institutions Nelson hosts two tertiary education institutions, the main one being Nelson Marlborough Institute of Technology. The institute has two main campuses, one in Nelson and the other in Blenheim, in the neighbouring Marlborough region. The institute has been providing tertiary education in the Nelson-Marlborough region for the last 100 years. Nelson also has a University of Canterbury College of Education campus, which currently has an intake two out of every three years for the primary sector. Media Broadcasting The city is served by all major national radio and television stations, with terrestrial television (Freeview) and FM radio. Local radio stations include The Hits (formerly Radio Nelson), More FM (formerly Fifeshire FM), The Breeze, ZM (formerly The Planet 97FM) and community station Fresh FM. The city has one local television station, Mainland Television. Print The Nelson Examiner was the first newspaper published in the South Island. It was established by Charles Elliott (1811–1876) in 1842, within a few weeks of New Zealand Company settlers arriving in Nelson. Other early newspapers were The Colonist and the Nelson Evening Mail. Today, the Nelson Mail publishes four days a week and is owned by Stuff Ltd. The Nelson Mail also publishes the weekly community papers The Nelson Leader and The Tasman Leader. The city's largest circulating newspaper is the locally owned Nelson Weekly, which is published every Wednesday. WildTomato was a glossy monthly lifestyle magazine, focused on the Nelson and Marlborough regions – the Top of the South Island of New Zealand. The regional magazine was launched by Murray Farquhar as a 16-page local magazine in Nelson in July 2006, but was put into liquidation in March 2021. 
Notable people Sophia Anstice – seamstress and businesswoman Harry Atmore – politician Francis Bell – politician George Bennett – cyclist Chester Borrows – politician Mark Bright – rugby union player Jeremy Brockie – footballer Cory Brown – footballer Paul Brydon – footballer Mel Courtney – politician Ryan Crotty – rugby union player Rod Dixon – athlete Frederick Richard Edmund Emmett – music dealer and colour therapist Dame Sister Pauline Engel – nun and educator Finn Fisher-Black – cyclist Rose Frank – photographer John Guy – cricket player Isaac Mason Hill – social reformer, servant, storekeeper and ironmonger Frederick Nelson Jones – inventor Nina Jones – painter Charles Littlejohn – rower Liam Malone – athlete Simon Mannering – rugby league player Aldo Miccio – politician Marjorie Naylor – artist Edgar Neale – politician Geoffrey Palmer – politician and former Prime Minister Nick Smith – politician Frank Howard Nelson Stapp – concert impresario Rhian Sheehan – composer and musician Riki van Steeden – footballer Mike Ward – politician George William Wallace Webber – postmaster, boarding-house keeper and farmer Nate Wilbourne – environmentalist Guy Williams – comedian Paul Williams – comedian Panoramas See also List of twin towns and sister cities in New Zealand References Bibliography A Complete Guide To Heraldry by A.C. Fox-Davies, 1909. External links Historic images of Nelson from the collection of the Museum of New Zealand Te Papa Tongarewa Nelson City Council Nelson Tasman Tourism 1858 establishments in New Zealand Former provincial capitals of New Zealand German-New Zealand culture Marinas in New Zealand Populated places established in 1858 Port cities in New Zealand South Island Wine regions of New Zealand Geographical centres Regions of New Zealand
Nelson, New Zealand
[ "Physics", "Mathematics" ]
9,058
[ "Point (geometry)", "Geometric centers", "Geographical centres", "Symmetry" ]
61,983
https://en.wikipedia.org/wiki/Tannin
Tannins (or tannoids) are a class of astringent, polyphenolic biomolecules that bind to and precipitate proteins and various other organic compounds including amino acids and alkaloids. The term tannin is widely applied to any large polyphenolic compound containing sufficient hydroxyls and other suitable groups (such as carboxyls) to form strong complexes with various macromolecules. The term tannin (from scientific French tannin, from French tan "crushed oak bark", tanner "to tan", cognate with English tanning, Medieval Latin tannare, from Proto-Celtic *tannos "oak") refers to the abundance of these compounds in oak bark, which was used in tanning animal hides into leather. The tannin compounds are widely distributed in many species of plants, where they play a role in protection from predation (acting as pesticides) and might help in regulating plant growth. The astringency from the tannins is what causes the dry and puckery feeling in the mouth following the consumption of unripened fruit, red wine or tea. Likewise, the destruction or modification of tannins with time plays an important role when determining harvesting times. Tannins have molecular weights ranging from 500 to over 3,000 (gallic acid esters) and up to 20,000 daltons (proanthocyanidins). Structure and classes of tannins There are three major classes of tannins – hydrolysable tannins, condensed tannins (proanthocyanidins) and phlorotannins – each built from its own base unit or monomer. Particularly in the flavone-derived (condensed) tannins, the base unit must be (additionally) heavily hydroxylated and polymerized in order to give the high molecular weight polyphenol motif that characterizes tannins. Typically, tannin molecules require at least 12 hydroxyl groups and at least five phenyl groups to function as protein binders. Oligostilbenoids (oligo- or polystilbenes) are oligomeric forms of stilbenoids and constitute a minor class of tannins. Pseudo-tannins Pseudo-tannins are low molecular weight compounds associated with other compounds. They do not change color during the Goldbeater's skin test, unlike hydrolysable and condensed tannins, and cannot be used as tanning compounds. Examples of pseudo-tannins and their sources are gallic acid (rhubarb), flavan-3-ols such as the catechins (tea), and chlorogenic acid (coffee). History Ellagic acid, gallic acid, and pyrogallic acid were first discovered by chemist Henri Braconnot in 1831. Julius Löwe was the first person to synthesize ellagic acid by heating gallic acid with arsenic acid or silver oxide. Maximilian Nierenstein studied natural phenols and tannins found in different plant species. Working with Arthur George Perkin, he prepared ellagic acid from algarobilla and certain other fruits in 1905. He suggested its formation from galloyl-glycine by Penicillium in 1915. Tannase is an enzyme that Nierenstein used to produce m-digallic acid from gallotannins. He proved the presence of catechin in cocoa beans in 1931. He showed in 1945 that luteic acid, a molecule present in the myrobalanitannin, a tannin found in the fruit of Terminalia chebula, is an intermediary compound in the synthesis of ellagic acid. At that time, molecular formulas were determined through combustion analysis. The discovery in 1943 by Martin and Synge of paper chromatography provided for the first time the means of surveying the phenolic constituents of plants and for their separation and identification. There was an explosion of activity in this field after 1945, including prominent work by Edgar Charles Bate-Smith and Tony Swain at Cambridge University. 
In 1966, Edwin Haslam proposed a first comprehensive definition of plant polyphenols based on the earlier proposals of Bate-Smith, Swain and Theodore White, which includes specific structural characteristics common to all phenolics having a tanning property. It is referred to as the White–Bate-Smith–Swain–Haslam (WBSSH) definition. Occurrence Tannins are distributed in species throughout the plant kingdom. They are commonly found in both gymnosperms and angiosperms. Mole (1993) studied the distribution of tannin in 180 families of dicotyledons and 44 families of monocotyledons (Cronquist). Most dicotyledon families contain tannin-free species (tested by their ability to precipitate proteins). The best-known families in which all species tested contain tannin are Aceraceae, Actinidiaceae, Anacardiaceae, Bixaceae, Burseraceae, Combretaceae, Dipterocarpaceae, Ericaceae, Grossulariaceae and Myricaceae among the dicotyledons, and Najadaceae and Typhaceae among the monocotyledons. In the oak family, Fagaceae, 73% of the species tested contain tannin. In the acacia family, Mimosaceae, only 39% of the species tested contain tannin; among the Solanaceae the rate drops to 6%, and to 4% for the Asteraceae. Some families, such as the Boraginaceae, Cucurbitaceae and Papaveraceae, contain no tannin-rich species. The most abundant polyphenols are the condensed tannins, found in virtually all families of plants, and comprising up to 50% of the dry weight of leaves. Cellular localization In all vascular plants studied, tannins are manufactured by a chloroplast-derived organelle, the tannosome. Tannins are mainly physically located in the vacuoles or surface wax of plants. These storage sites keep tannins active against plant predators, but also keep some tannins from affecting plant metabolism while the plant tissue is alive. Tannins are classified as ergastic substances, i.e., non-protoplasm materials found in cells. Tannins, by definition, precipitate proteins, so they must be stored in organelles able to withstand the protein precipitation process. Idioblasts are isolated plant cells which differ from neighboring tissues and contain non-living substances. They have various functions such as storage of reserves, excretory materials, pigments, and minerals. They may contain oil, latex, gum, resin or pigments, and they can also contain tannins. In Japanese persimmon (Diospyros kaki) fruits, tannin is accumulated in the vacuole of tannin cells, which are idioblasts of parenchyma cells in the flesh. Presence in soils The convergent evolution of tannin-rich plant communities has occurred on nutrient-poor acidic soils throughout the world. Tannins were once believed to function as anti-herbivore defenses, but more and more ecologists now recognize them as important controllers of decomposition and nitrogen cycling processes. As concern grows about global warming, there is great interest in better understanding the role of polyphenols as regulators of carbon cycling, in particular in northern boreal forests. Leaf litter and other decaying parts of kauri (Agathis australis), a tree species found in New Zealand, decompose much more slowly than those of most other species. Besides its acidity, the plant also bears substances such as waxes and phenols, most notably tannins, that are harmful to microorganisms. Presence in water and wood The leaching of highly water soluble tannins from decaying vegetation and leaves along a stream may produce what is known as a blackwater river. 
Water flowing out of bogs has a characteristic brown color from dissolved peat tannins. The presence of tannins (or humic acid) in well water can make it smell bad or taste bitter, but this does not make it unsafe to drink. Tannins leaching from an unprepared driftwood decoration in an aquarium can cause pH lowering and coloring of the water to a tea-like tinge. A way to avoid this is to boil the wood in water several times, discarding the water each time. Using peat as an aquarium substrate can have the same effect. Many hours of boiling the driftwood may need to be followed by many weeks or months of constant soaking and many water changes before the water will stay clear. Raising the water's pH level, e.g. by adding baking soda, will accelerate the process of leaching. Tannins in water can lead to feather staining on wild and domestic waterfowl which frequent the water; mute swans, which are typically white in colour, can often be observed with reddish-brown staining as a result of coming into contact with dissolved tannins, though dissolved iron compounds also play a role. Softwoods, while in general much lower in tannins than hardwoods, are usually not recommended for use in an aquarium, so using a hardwood with a very light color, indicating a low tannin content, can be an easy way to avoid tannins. Tannic acid is brown in color, so in general white woods have a low tannin content. Woods with a lot of yellow, red, or brown coloration to them (like cedar, redwood, red oak, etc.) tend to contain a lot of tannin. Extraction There is no single protocol for extracting tannins from all plant material. The procedures used for tannins vary widely. It may be that acetone in the extraction solvent increases the total yield by inhibiting interactions between tannins and proteins during extraction or even by breaking hydrogen bonds between tannin-protein complexes. Tests for tannins There are three groups of methods for the analysis of tannins: precipitation of proteins or alkaloids, reaction with phenolic rings, and depolymerization. Alkaloid precipitation Alkaloids such as caffeine, cinchonine, quinine or strychnine precipitate polyphenols and tannins. This property can be used in a quantitation method. Goldbeater's skin test When goldbeater's skin or ox skin is dipped in HCl, rinsed in water, soaked in the tannin solution for 5 minutes, washed in water, and then treated with 1% FeSO4 solution, it gives a blue-black color if tannin is present. Ferric chloride test The following describes the use of ferric chloride (FeCl3) tests for phenolics in general: Powdered plant leaves of the test plant (1.0 g) are weighed into a beaker and 10 ml of distilled water are added. The mixture is boiled for five minutes. Two drops of 5% FeCl3 are then added. Production of a greenish precipitate is an indication of the presence of tannins. Alternatively, a portion of the water extract is diluted with distilled water in a ratio of 1:4 and a few drops of 10% ferric chloride solution are added. A blue or green color indicates the presence of tannins (Evans, 1989). Other methods The hide-powder method is used in tannin analysis for leather tannin and the Stiasny method for wood adhesives. Statistical analysis reveals that there is no significant relationship between the results from the hide-powder and the Stiasny methods. Hide-powder method 400 mg of sample tannins are dissolved in 100 ml of distilled water. 
3 g of slightly chromated hide-powder previously dried in vacuum for 24h over CaCl2 are added and the mixture stirred for 1 h at ambient temperature. The suspension is filtered without vacuum through a sintered glass filter. The weight gain of the hide-powder expressed as a percentage of the weight of the starting material is equated to the percentage of tannin in the sample. Stiasny's method 100 mg of sample tannins are dissolved in 10 ml distilled water. 1 ml of 10M HCl and 2 ml of 37% formaldehyde are added and the mixture heated under reflux for 30 min. The reaction mixture is filtered while hot through a sintered glass filter. The precipitate is washed with hot water (5× 10 ml) and dried over CaCl2. The yield of tannin is expressed as a percentage of the weight of the starting material. Reaction with phenolic rings The bark tannins of Commiphora angolensis have been revealed by the usual color and precipitation reactions and by quantitative determination by the methods of Löwenthal-Procter and of Deijs (formalin-hydrochloric acid method). Colorimetric methods have existed such as the Neubauer-Löwenthal method which uses potassium permanganate as an oxidizing agent and indigo sulfate as an indicator, originally proposed by Löwenthal in 1877. The difficulty is that the establishing of a titer for tannin is not always convenient since it is extremely difficult to obtain the pure tannin. Neubauer proposed to remove this difficulty by establishing the titer not with regard to the tannin but with regard to crystallised oxalic acid, whereby he found that 83 g oxalic acid correspond to 41.20 g tannin. Löwenthal's method has been criticized. For instance, the amount of indigo used is not sufficient to retard noticeably the oxidation of the non-tannins substances. The results obtained by this method are therefore only comparative. A modified method, proposed in 1903 for the quantification of tannins in wine, Feldmann's method, is making use of calcium hypochlorite, instead of potassium permanganate, and indigo sulfate. Food items with tannins Pomegranates Accessory fruits Strawberries contain both hydrolyzable and condensed tannins. Berries Most berries, such as cranberries, and blueberries, contain both hydrolyzable and condensed tannins. Nuts Nuts vary in the amount of tannins they contain. Some species of acorns of oak contain large amounts. For example, acorns of Quercus robur and Quercus petraea in Poland were found to contain 2.4–5.2% and 2.6–4.8% tannins as a proportion of dry matter, but the tannins can be removed by leaching in water so that the acorns become edible. Other nuts – such as hazelnuts, walnuts, pecans, and almonds – contain lower amounts. Tannin concentration in the crude extract of these nuts did not directly translate to the same relationships for the condensed fraction. Herbs and spices Cloves, tarragon, cumin, thyme, vanilla, and cinnamon all contain tannins. Legumes Most legumes contain tannins. Red-colored beans contain the most tannins, and white-colored beans have the least. Peanuts without shells have a very low tannin content. Chickpeas (garbanzo beans) have a smaller amount of tannins. Chocolate Chocolate liquor contains about 6% tannins. Drinks with tannins Principal human dietary sources of tannins are tea and coffee. Most wines aged in charred oak barrels possess tannins absorbed from the wood. Soils high in clay also contribute to tannins in wine grapes. This concentration gives wine its signature astringency. 
Coffee pulp has been found to contain low to trace amounts of tannins. Fruit juices Although citrus fruits do not contain tannins, orange-colored juices often contain tannins from food colouring. Apple, grape and berry juices all contain high amounts of tannins. Sometimes tannins are even added to juices and ciders to create a more astringent feel to the taste. Beer In addition to the alpha acids extracted from hops to provide bitterness in beer, condensed tannins are also present. These originate both from malt and hops. Trained brewmasters, particularly those in Germany, consider the presence of tannins to be a flaw. However, in some styles, the presence of this astringency is acceptable or even desired, as, for example, in a Flanders red ale. In lager type beers, the tannins can form a precipitate with specific haze-forming proteins in the beer resulting in turbidity at low temperature. This chill haze can be prevented by removing part of the tannins or part of the haze-forming proteins. Tannins are removed using PVPP, haze-forming proteins by using silica or tannic acid. Properties for animal nutrition Tannins have traditionally been considered antinutritional, depending upon their chemical structure and dosage. Many studies suggest that chestnut tannins have positive effects on silage quality in the round bale silages, in particular reducing NPNs (non-protein nitrogen) in the lowest wilting level. Improved fermentability of soya meal nitrogen in the rumen may occur. Condensed tannins inhibit herbivore digestion by binding to consumed plant proteins and making them more difficult for animals to digest, and by interfering with protein absorption and digestive enzymes (for more on that topic, see plant defense against herbivory). Histatins, another type of salivary proteins, also precipitate tannins from solution, thus preventing alimentary adsorption. Legume fodders containing condensed tannins are a possible option for integrated sustainable control of gastrointestinal nematodes in ruminants, which may help address the worldwide development of resistance to synthetic anthelmintics. These include nuts, temperate and tropical barks, carob, coffee and cocoa. Tannin uses and market Tannins have been used since antiquity in the processes of tanning hides for leather, and in helping preserve iron artifacts (as with Japanese iron teapots). Industrial tannin production began at the beginning of the 19th century with the industrial revolution, to produce tanning material for the need for more leather. Before that time, processes used plant material and were long (up to six months). There was a collapse in the vegetable tannin market in the 1950s–1960s, due to the appearance of synthetic tannins, which were invented in response to a scarcity of vegetable tannins during World War II. At that time, many small tannin industry sites closed. Vegetable tannins are estimated to be used for the production of 10–20% of the global leather production. The cost of the final product depends on the method used to extract the tannins, in particular the use of solvents, alkali and other chemicals used (for instance glycerin). For large quantities, the most cost-effective method is hot water extraction. Tannic acid is used worldwide as clarifying agent in alcoholic drinks and as aroma ingredient in both alcoholic and soft drinks or juices. Tannins from different botanical origins also find extensive uses in the wine industry. Uses Tannins are an important ingredient in the process of tanning leather. 
Tanbark from oak, mimosa, chestnut and quebracho trees has traditionally been the primary source of tannery tannin, though inorganic tanning agents are also in use today and account for 90% of the world's leather production. Tannins produce different colors with ferric chloride (either blue, blue-black, or green to greenish-black) according to the type of tannin. Iron gall ink is produced by treating a solution of tannins with iron(II) sulfate. Tannins can also be used as a mordant, and are especially useful in natural dyeing of cellulose fibers such as cotton. The type of tannin used may or may not have an impact on the final color of the fiber. Tannin is a component in a type of industrial particleboard adhesive developed jointly by the Tanzania Industrial Research and Development Organization and Forintek Labs Canada. Pinus radiata tannins have been investigated for the production of wood adhesives. Condensed tannins, e.g., quebracho tannin, and hydrolyzable tannins, e.g., chestnut tannin, appear to be able to substitute for a high proportion of the synthetic phenol in phenol-formaldehyde resins for wood particleboard. Tannins can be used for the production of anti-corrosive primers for treating rusted steel surfaces prior to painting, converting rust to iron tannate and consolidating and sealing the surface. The use of resins made of tannins has been investigated to remove mercury and methylmercury from solution. Immobilized tannins have been tested to recover uranium from seawater. References External links Tannins: fascinating but sometimes dangerous molecules Nutrition Oenology Organic polymers Wine terminology Astringent flavors Phenol antioxidants Wood products Food stabilizers Phytochemicals Wood extracts
Tannin
[ "Chemistry" ]
4,380
[ "Organic compounds", "Organic polymers" ]
62,047
https://en.wikipedia.org/wiki/Mrs.%20Miniver%27s%20problem
Mrs. Miniver's problem is a geometry problem about the area of circles. It asks how to place two circles A and B of given radii in such a way that the lens formed by intersecting their two interiors has equal area to the symmetric difference of A and B (the area contained in one but not both circles). It was named for an analogy between geometry and social dynamics enunciated by fictional character Mrs. Miniver, who "saw every relationship as a pair of intersecting circles". Its solution involves a transcendental equation. Origin The problem derives from "A Country House Visit", one of Jan Struther's newspaper articles appearing in the Times of London between 1937 and 1939 featuring her character Mrs. Miniver. According to the story: She saw every relationship as a pair of intersecting circles. It would seem at first glance that the more they overlapped the better the relationship; but this is not so. Beyond a certain point the law of diminishing returns sets in, and there are not enough private resources left on either side to enrich the life that is shared. Probably perfection is reached when the area of the two outer crescents, added together, is exactly equal to that of the leaf-shaped piece in the middle. On paper there must be some neat mathematical formula for arriving at this; in life, none. Louis A. Graham and Clifton Fadiman formalized the mathematics of the problem and popularized it among recreational mathematicians. Solution The problem can be solved by cutting the lens along the line segment between the two crossing points of the circles into two circular segments, and using the formula for the area of a circular segment to relate the distance between the crossing points to the total area that the problem requires the lens to have. This gives a transcendental equation for the distance between crossing points, but it can be solved numerically (a numerical sketch is given below). There are two boundary conditions whose distances between centers can be readily solved: the farthest apart the centers can be is when the circles have equal radii, and the closest they can be is when one circle is contained completely within the other, which happens when the ratio between the radii is the square root of two. If the ratio of radii falls beyond these limiting cases, the circles cannot satisfy the problem's area constraint. In the case of two circles of equal size, these equations can be simplified somewhat. The rhombus formed by the two circle centers and the two crossing points, with side lengths equal to the radius, has an angle of θ ≈ 2.605 radians at the circle centers, found by solving the transcendental equation θ − sin θ = 2π/3, from which it follows that the ratio of the distance between their centers to their radius is 2 cos(θ/2) ≈ 0.53. See also Goat problem#Interior grazing problem, another problem of equalizing the areas of circular lunes and lenses References Circles Area Mathematical problems
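For readers who want to reproduce the equal-radius numbers quoted above, the following is a minimal numerical sketch in Python. It is not taken from Graham's or Fadiman's treatments; the function and variable names are illustrative, and it simply applies the segment-area condition θ − sin θ = 2π/3 with a bisection search.

import math

def lens_angle(tol=1e-12):
    # Solve theta - sin(theta) = 2*pi/3 by bisection; theta is the angle
    # subtended at each circle center by the chord through the crossing points.
    target = 2.0 * math.pi / 3.0
    lo, hi = 0.0, math.pi  # theta - sin(theta) increases monotonically on [0, pi]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid - math.sin(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

theta = lens_angle()
lens = theta - math.sin(theta)            # lens area for two unit circles
sym_diff = 2.0 * math.pi - 2.0 * lens     # symmetric difference for two unit circles
print(f"theta = {theta:.6f} rad")                                        # about 2.605
print(f"center distance / radius = {2.0 * math.cos(theta / 2.0):.6f}")   # about 0.5299
print(f"lens = {lens:.6f}, symmetric difference = {sym_diff:.6f}")

The printed lens and symmetric-difference areas agree, confirming that the computed angle satisfies Mrs. Miniver's condition for circles of equal size.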
Mrs. Miniver's problem
[ "Physics", "Mathematics" ]
567
[ "Circles", "Scalar physical quantities", "Physical quantities", "Quantity", "Size", "Wikipedia categories named after physical quantities", "Mathematical problems", "Pi", "Area" ]
62,062
https://en.wikipedia.org/wiki/Sacred%20prostitution
Sacred prostitution, temple prostitution, cult prostitution, and religious prostitution are purported rites consisting of paid intercourse performed in the context of religious worship, possibly as a form of fertility rite or divine marriage (). Scholars prefer the terms "sacred sex" or "sacred sexual rites" in cases where payment for services is not involved. The historicity of literal sacred prostitution, particularly in some places and periods, is a controversial topic within the academic world. Historically mainstream historiography has considered it a probable reality, based on the abundance of ancient sources and chroniclers detailing its practices, although it has proved harder to differentiate between true prostitution and sacred sex without remuneration. Beginning in the late 20th century, a number of scholars have challenged the veracity of sacred prostitution as a concept, suggesting that the claims are based on mistranslations, misunderstandings or outright inventions of ancient authors. Authors have also interpreted evidence as secular prostitution administered in the temple under the patronage of fertility deities, not as an act of religious worship by itself. Definitions Sacred prostitution has many different characteristics depending on the region, class and the religious ideals of the period and the place, and consequently can have many different definitions. One definition that was developed was due to the common types of sacred prostitution that are recorded in Classical sources: sale of a woman's virginity or rinni in honor of a goddess or a once-in-a-lifetime prostitution, professional prostitutes or slaves owned by a temple or sanctuary, and temporary prostitution that occurs before a marriage or during certain rituals. Ancient Near East Ancient Near Eastern societies along the Tigris and Euphrates rivers featured many shrines and temples or houses of heaven dedicated to various deities. The 5th-century BC historian Herodotus's account and some other testimony from the Hellenistic Period and Late Antiquity suggest that ancient societies encouraged the practice of sacred sexual rites not only in Babylonia and Cyprus, but throughout the Near East. The work of gender researchers like Daniel Arnaud, Julia Assante and Stephanie Budin has cast the whole tradition of scholarship that defined the concept of sacred prostitution into doubt. Budin regards the concept of sacred prostitution as a myth, arguing that the practices described in the sources were misunderstandings of either non-remunerated ritual sex or non-sexual religious ceremonies, or possibly even invented as rhetorical devices. Sumer Through the twentieth century, scholars generally believed that a form of sacred marriage rite (hieros gamos) was staged between the kings in the ancient Near Eastern region of Sumer and the high priestesses of Inanna, the Sumerian goddess of sexual love, fertility, and warfare, later called Ishtar. The king would have sex with the priestess to represent the union of Dumuzid with Inanna. According to the noted Assyriologist Samuel Noah Kramer, the kings would further establish their legitimacy by taking part in a ritual sexual act in the temple of the fertility goddess Ishtar every year on the tenth day of the New Year festival Akitu. However, no certain evidence has survived to prove that sexual intercourse was included, despite many popular descriptions of the habit. 
It is possible that these unions never occurred but were embellishments to the image of the king; hymns which praise Ancient Near Eastern kings for coupling with the goddess Ishtar often speak of them as running , offering sacrifices, feasting with the sun-god Utu, and receiving a royal crown from An, all in a single day. Some modern historians argue in the same direction, though their posture has been disputed. Babylonia According to Herodotus, the rites performed at these temples included sexual intercourse, or what scholars later called sacred sexual rites: The British anthropologist James Frazer accumulated citations to prove this in a chapter of his magnum opus The Golden Bough (1890–1915), and this has served as a starting point for several generations of scholars. Frazer and Henriques distinguished two major forms of sacred sexual rites: temporary rite of unwed girls (with variants such as dowry-sexual rite, or as public defloration of a bride), and lifelong sexual rite. However, Frazer took his sources mostly from authors of Late Antiquity (i.e. 150–500 AD), not from the Classical or Hellenistic periods. This raises questions as to whether the phenomenon of temple sexual rites can be generalised to the whole of the ancient world, as earlier scholars typically did. In Hammurabi's code of laws, the rights and good name of female sacred sexual priestesses were protected. The same legislation that protected married women from slander applied to them and their children. They could inherit property from their fathers, collect income from land worked by their brothers, and dispose of property. These rights have been described as extraordinary, taking into account the role of women at the time. Terms associated with temple prostitution in Sumer and Babylonia All translations are sourced from the Pennsylvania Sumerian Dictionary. Akkadian terms were used in the Akkadian Empire, Assyria, and Babylonia. The terms themselves come from lexical profession lists on tablets dating back to the Early Dynastic period. Notes on the cuneiform: by convention Akkadian is italicised, spoken Sumerian is lowercase and cuneiform sign transliteration is uppercase. In addition, a determinative sign is written as a superscript. Determinatives are only written and never spoken. In spoken Sumerian homophones are distinguished by a numerical subscript. Hittites The Hittites practiced sacred prostitution as part of a cult of deities, including the worship of a mated pair of deities, a bull god and a lion goddess, while in later days it was the mother-goddess who became prominent, representing fertility, and (in Phoenicia) the goddess who presided over human birth. Phoenicia It has been argued that sacred prostitution, worked by both males and females, was a custom of ancient Phoenicians. It would be dedicated to the deities Astarte and Adonis, and sometimes performed as a festival or social rite in the cities of Byblos, Afqa and Baalbek (later named Heliopolis) as well as the nearby Syrian city of Palmyra. At the Etruscan site of Pyrgi, a center of worship of the eastern goddess Astarte, archaeologists identified a temple consecrated to her and built with at least 17 small rooms that may have served as quarters for temple prostitutes. Similarly, a temple dedicated to her equated goddess Atargatis in Dura-Europos, was found with nearly a dozen small rooms with low benches, which might have used either for sacred meals or sacred services of women jailed in the temple for adultery. 
Pyrgi's sacred prostitutes were famous enough to be apparently mentioned in a lost fragment of Lucilius's works. In northern Africa, the area of influence of the Phoenician colony of Carthage, this service was associated to the city of Sicca, a nearby city that received the name of Sicca Veneria for its temple of Astarte or Tanit (called Venus by Roman authors). Valerius Maximus describes how their women gained gifts by engaging in prostitution with visitors. Phoenicio-Punic settlements in Hispania, like Cancho Roano, Gadir, Castulo and La Quéjola, have suggested this practice through their archaeology and iconography. In particular, Cancho Roano features a sanctuary built with multiple cells or rooms, which has been identified as a possible place of sacred prostitution in honor to Astarte. A similar institution might have been found in Gadir. Its posterior, renowned erotic dancers called puellae gaditanae in Roman sources (or cinaedi in the case of male dancers) might have been desecrated heirs of this practice, considering the role occupied by sex and dance on Phoenician culture. Another center of cult to Astarte was Cyprus, whose main temples were located in Paphos, Amathus and Kition. The epigraphy of the Kition temple describes personal economic activity on the temple, as sacred prostitution would have been taxed as any other occupation, and names possible practitioners as grm (male) and lmt (female). Ancient Palestine The Hebrew Bible uses two different words for prostitute, zonah (זונה)‎ and kedeshah (or qedesha) (קדשה)‎. The word zonah simply meant an ordinary prostitute or loose woman. But the word kedeshah literally means set apart (in feminine form), from the Semitic root Q-D-Sh (קדש)‎ meaning holy, consecrated or set apart. Nevertheless, zonah and qedeshah are not interchangeable terms: the former occurs 93 times in the Bible, whereas the latter is only used in three places, conveying different connotations. This double meaning has led to the belief that kedeshah were not ordinary prostitutes, but sacred harlots who worked out of fertility temples. However, the lack of solid evidence has indicated that the word might refer to prostitutes who offered their services in the vicinity of temples, where they could attract a larger number of clients. The term might have originated as consecrated maidens employed in Canaanite and Phoenician temples, which became synonymous with harlotry for Biblical writers. In any case, the translation of sacred prostitute has continued, however, because it explains how the word can mean such disparate concepts as sacred and prostitute. As put by DeGrado, "neither the interpretation of the קדשה as a 'priestess-not-prostitute' (according to Westenholz) nor as a 'prostitute-not-priestess' (according to Gruber) adequately represents the semantic range of Hebrew word in biblical and post-biblical Hebrew." Male prostitutes were called kadesh or qadesh (literally: male who is set apart). The Hebrew word keleb (dog) may also signify a male dancer or prostitute. The Mosaic Law as outlined in the Book of Deuteronomy was not universally observed in Israelite culture under the Davidic line in the Kingdom of Israel, as recorded in the Books of Kings. According to 2 Kings 22, the Kingdom of Judah had lost "the Book of the Law". During the reign of King Josiah, Hilkiah, the High Priest of Israel, discovered it in "the House of the Lord" and realized that the people have disobeyed, particularly regarding prostitution. 
Ancient Greece and Hellenistic world Ancient Greece The Greek term hierodoulos or hierodule has sometimes been taken to mean sacred holy woman, but it is more likely to refer to a former slave freed from slavery in order to be dedicated to a god. There were different levels of prostitutes within Ancient Greek society, but two categories are specifically related to sacred or temple prostitution. The first category are hetaires, also known as courtesans, typically more educated women that served within temples. The second category are known as hierodoules, slave women or female priests who worked within temples and served the sexual requests of visitors to the temple. While there may not be a direct connection between temples and prostitution, many prostitutes and courtesans worshipped Aphrodite, the goddess of love. Prostitutes would use their earnings to pay for dedications and ritualistic celebrations in honour of Aphrodite. Some prostitutes also viewed the action of sexual service and sexual pleasure as an act of devotion to the goddess of love, worshipping Aphrodite through an act rather than a physical dedication. In the temple of Apollo at Bulla Regia, a woman was found buried with an inscription reading: "Adulteress. Prostitute. Seize [me], because I fled from Bulla Regia." It has been speculated she might have been a woman forced into sacred prostitution as a punishment for adultery. Temple(s) of Aphrodite The act of sacred prostitution within the Temples of Aphrodite in the city of Corinth was well-known and well-spread. Greek writer-philosopher Strabo comments, "the Temple of Aphrodite was so rich that it owned a thousand temple-slaves, courtesans, whom both men and women had dedicated to the goddess". Within the same work, Strabo compares Corinth to the city of Comana, confirming the belief that temple prostitution was a notable characteristic of Corinth. Prostitutes performed sacred functions within the temple of Aphrodite. They would often burn incense in honor of Aphrodite. Chameleon of Heracleia recorded in his book, On Pindar, that whenever the city of Corinth prayed to Aphrodite in manners of great importance, many prostitutes were invited to participate in the prayers and petitions. The girls involved in temple prostitution were typically slaves owned by the temple. However, some of the girls were gifted to the temple from other members of society in return for success in particular endeavors. One example that shows the gifting of girls to the temple is the poem of Athenaeus, which explores an athlete Xenophon’s actions of gifting a group of courtesans to Aphrodite as a thanks-offering for his victory in a competition. Specifically in 464 BC, Xenophon was victorious in the Olympic Games and donated 100 slaves to Aphrodite’s temple. Pindar, a famous Greek poet, was commissioned to write a poem that was to be performed at Xenophon’s victory celebration in Corinth. The poet acknowledged that the slaves would serve Aphrodite as sacred prostitutes within her temple at Corinth. Another temple of Aphrodite was named Aphrodite Melainis, located near the city gates in an area known as “Craneion”. It is the resting place of Lais, who was a famous prostitute in Greek history. This suggests that there was a connection with ritual prostitution within temples of Aphrodite. 
An epigram attributed to Simonides is reported to commemorate the prayer of the prostitutes of Corinth on behalf of the salvation of the Greeks from the invading Achaemenid Empire in the Greco-Persian Wars of the early fifth century BCE. Both temple prostitutes and priestesses prayed to Aphrodite for help, and were honoured for their potent prayers, which Greek citizens believed contributed to the repelling of the Persians. Athenaeus also alludes to the idea that many of Aphrodite’s temples and sanctuaries were occupied by temple prostitutes. These prostitutes were known to practise sexual rituals in different cities, including Corinth, Magnesia, and Samos. Signs of sacred prostitution within Minoan Crete There is some evidence of sacred prostitution in Minoan Crete. The building in question is known as the "East Building", but was also referred to as "the House of the Ladies" by the excavator of the building. Some believe that the architecture of this building reflected the grooming needs of women, but that it could also have been a brothel for high-status individuals. The structure of the interior of the building seemed to suggest that the building was used for prostitution. Large clay vats typically used for bathing were found within the building, along with successive doors within the corridors. The successive doors suggested privacy, which within the time period was associated with two functions: storage of valuable goods and protection of the private moments of its residents. Because the ground floors were found practically empty, the possibility that the building was used for prostitution increases. There were also religious embellishments found within the "East Building", such as vases and other vessels that seemed to be connected to religious rituals. The vessels were covered in motifs related to sacred rituals, such as the sacral knot and the image of birds flying freely. The functions of the vessels would have been offering food or liquid in relation to the rituals. Combining these two factors, it is a possibility that sacred prostitution existed within this building. Hellenistic world In the Greek-influenced and colonised world, "sacred prostitution" was known in Cyprus (Greek-settled since 1100 BC), Sicily (Hellenised since 750 BC), in the Kingdom of Pontus (8th century BC) and in Cappadocia (c. 330 BC hellenised). 2 Maccabees describes sacred prostitution in the Second Temple under the reign of the Hellenistic ruler Antiochus IV Epiphanes. Cyprus A passage in Herodotus explains a Babylonian custom whereby, before marriage, girls had to offer themselves for sex, presumably within a temple, as required by rites of a goddess equivalent to Aphrodite in their culture. Herodotus records that a similar practice or custom took place within Cyprus, with girls offering themselves up for sex as required by the rites of Aphrodite. Ennius and Ovid corroborate each other on the idea that Aphrodite established the act of prostitution within the city of Cyprus. A temple of Kition also shows evidence of sacred prostitution. On a marble plaque, it lists sacred prostitutes among other professions (bakers, scribes, barbers) that were part of the ritual personnel at some Cypriot temples. Temple of Aphaca The temple of Aphaca may be another source of evidence for temple prostitution. The process is similar to regular prostitution, where male customers paid two or three obols, in the form of or in addition to dedications to Aphrodite, in exchange for sex with a temple prostitute. 
In the temple of Aphaca specifically, the men would dedicate their payment to "Cyprian Aphrodite" before engaging in sex with a temple prostitute. Ancient Rome and late antiquity Ancient Rome Late antiquity The Roman emperor Constantine closed down a number of temples to Venus or similar deities in the 4th century AD, as the Christian church historian Eusebius proudly noted. Eusebius also writes that the Phoenician cities of Aphaca and Heliopolis (Baalbek) continued to practise temple prostitution until the emperor Constantine put an end to the rite in the 4th century AD. Asia India People in some Indian states practice hierodulic prostitution, with similar customary forms such as basavi, and involves dedicating pre-pubescent and young adolescent girls from villages in a ritual marriage to a Hindu deity or a Hindu temple, who then work in the temple and function as spiritual guides, dancers, and prostitutes servicing male devotees in the temple. The Devadasis were originally seen as intercessors who allowed upper-caste men to have contact with the gods. Though they did develop sexual relations with other men, they were not looked upon with lust. Before Muslim rule in the 14th century, they could live an existence apart from the men, with inheritance rights, wealth and influence, as well as living outside of the dangers of marriage. The system was criticised by British colonial government while defended by Brahmins, leading to a decline in support for the system and the devadasis soon turned to prostitution. Many scholars have stated that the Hindu scriptures do not mention the system. Human Rights Watch also reports claims that devadasis are forced into this service and, at least in some cases, to practise prostitution for upper-caste members. Various state governments in India enacted laws to ban this practice both prior to India's independence and more recently. They include Bombay Devdasi Act, 1934, Devdasi (Prevention of dedication) Madras Act, 1947, Karnataka Devdasi (Prohibition of dedication) Act, 1982, and Andhra Pradesh Devdasi (Prohibition of dedication) Act, 1988. However, the tradition continues in certain regions of India, particularly the states of Karnataka and Andhra Pradesh. Indonesia Japan During the Kamakura period, many shrines and temples, which provided for , fell into bankruptcy. Some miko started travelling in search of livelihood and came to be known as (歩き巫女 lit. walking shrine-maiden). While aruki miko primarily provided religious services, they were also widely associated with prostitution. However, no religious reasons for miko prostitution are known, and hence the act might be unrelated to sacred prostitution. Nepal The Deukis are temple prostitutes in Nepal. The practice is banned but still exists. Wealthier families from the Kanari, Thakuri and Bista buy girls from poorer families to be dedicated to a temple and are not allowed to marry. Mesoamerica and South America Maya The Maya maintained several phallic religious cults, possibly involving homosexual temple prostitution. Aztec Much evidence for the religious practices of the Aztec culture was destroyed during the Spanish conquest, and almost the only evidence for the practices of their religion is from Spanish accounts. The Franciscan Spanish Friar Bernardino de Sahagún learned their language and spent more than 50 years studying the culture. He wrote that they participated in religious festivals and rituals, as well as performing sexual acts as part of religious practice. 
This may be evidence for the existence of sacred prostitution in Mesoamerica, or it may be either confusion, or accusational polemic. He also speaks of kind of prostitutes named ahuianime ("pleasure girls"), whom he described as "an evil woman who finds pleasure in her body... [A] dissolute woman of debauched life." It is agreed that the Aztec god Xochipili (taken from both Toltec and Maya cultures) was both the patron of homosexuals and homosexual prostitutes. Xochiquetzal was worshiped as goddess of sexual power, patroness of prostitutes and artisans involved in the manufacture of luxury items. Inca The Inca sometimes dedicated young boys as temple prostitutes. The boys were dressed in girl's clothing, and chiefs and head men would have ritual sexual intercourse with them during religious ceremonies and on holy days. Recent Western occurrences In the 1970s and early 1980s, some religious cults practised sacred prostitution as an instrument to recruit new converts. Among them was the cult Children of God, also known as The Family, who called this practice "Flirty Fishing". They later abolished the practice due to the growing AIDS epidemic. In Ventura County, California, Wilbur and Mary Ellen Tracy established their own temple, the Church Of The Most High Goddess, in the wake of what they described as a divine revelation. Sexual acts played a fundamental role in the church's sacred rites, which were performed by Mary Ellen Tracy herself in her assumed role of High Priestess. Local newspaper articles about the Neopagan church quickly got the attention of local law enforcement officials, and in April 1989, the Tracys' house was searched and the couple arrested on charges of pimping, pandering and prostitution. They were subsequently convicted in a trial in state court and sentenced to jail terms: Wilbur Tracy for 180 days plus a $1,000.00 fine; Mary Ellen Tracy for 90 days plus mandatory screening for STDs. Some modern sacred prostitutes act as sexual surrogates as a form of therapy. In places where prostitution is illegal, sacred prostitutes may be paid as therapists, escorts, or performers. Modern views According to Avaren Ipsen, from University of California, Berkeley's Commission on the Status of Women, the myth of sacred prostitution works as "an enormous source of self-esteem and as a model of sex positivity" to many sex workers. She compared this situation to the figure of Mary Magdalene, whose status as a prostitute, though short-lived according to Christian texts and disputed among academics, has been celebrated by sex working collectives (among them Sex Workers Outreach Project USA) in an effort to de-stigmatize their job. Ipsen speculated that academic currents trying to deny sacred prostitution are ideologically motivated, attributing them to the "desires of feminists, including myself, to be 'decent.'" In her book The Sacred Prostitute: Eternal Aspect of the Feminine, psychoanalyst Nancy Qualls-Corbett praised sacred prostitution as an expression of female sexuality and a bridge between the latter and the divine, as well as a rupture from mundane sexual degradation. "[The sacred prostitute] did not make love in order to obtain admiration or devotion from the man who came to her... She did not require a man to give her a sense of her own identity; rather this was rooted in her own womanliness." Qualls also equated censuring sacred prostitution to demonize female sexuality and vitality. 
"In her temple, men and women came to find life and all that it had to offer in sensual pleasure and delight. But with the change in cultural values and the institutionalization of monotheism and patriarchy, the individual came to the House of God to prepare for death." This opinion is shared by several schools of modern Paganism, among them Wicca, for whom sacred prostitution, independently from its historical backing, embodies the sacralization of sex and a celebration of the communion between female and male sexuality. This practice is associated to spiritual healing and sex magic. Within secular thinking, philosopher Antonio Escohotado is a popular adept of this current, favoring particularly the role of ancient sacred prostitutes and priestesses of Ishtar. In his seminal work Rameras y esposas, he extols them and their cult as symbols of female empowerment and sexual freedom. Actress Susie Lamb approached sacred prostitution in her 2014 performance Horae: Fragments of a Sacred History of Prostitution, in which she points out its value to challenge gender roles. "The idea of sacred prostitution is almost entirely incomprehensible to the modern imagination. It involved women having sex as an act of worship... The relationship between men and women in this ancient tradition is based on respect for the woman. She was seen as a powerful person." See also Deuki Devadasi Hetaera Shamhat Sex worker Sex magic Primitive promiscuity Hijra (South Asia) Sexuality in ancient Rome List of fertility deities List of love and lust deities Padre Putas References Bibliography Published in 12 volumes External links Stuckey, H. Johanna. "Sacred Prostitutes". MatriFocus. 2005 vol 5–1. , and a discussion Jenin Younes (2008), Sacred Prostitution in Ancient Israel Prostitution Religious sex rituals Prostitution Religious occupations Ancient priestesses Sexual controversies Holiness History of prostitution Inanna Tanit Books of the Maccabees Second Temple Antiochus IV Epiphanes Constantine the Great Eusebius
Sacred prostitution
[ "Biology" ]
5,432
[ "Behavior", "Religious practices", "Human behavior" ]
62,070
https://en.wikipedia.org/wiki/Ir%C3%A8ne%20Joliot-Curie
Irène Joliot-Curie (; ; 12 September 1897 – 17 March 1956) was a French chemist and physicist who received the 1935 Nobel Prize in Chemistry with her husband, Frédéric Joliot-Curie, for their discovery of induced radioactivity. They were the second married couple, after her parents, to win the Nobel Prize, adding to the Curie family legacy of five Nobel Prizes. This made the Curies the family with the most Nobel laureates to date. Her mother Marie Skłodowska-Curie and herself also form the only mother–daughter pair to have won Nobel Prizes whilst Pierre and Irène Curie form the only father-daughter pair to have won Nobel Prizes by the same occasion, whilst there are six father-son pairs who have won Nobel Prizes by comparison. She was also one of the first three women to be a member of a French government, becoming undersecretary for Scientific Research under the Popular Front in 1936. Both children of the Joliot-Curies, Hélène and Pierre, are also scientists. In 1945, she was one of the six commissioners of the new French Alternative Energies and Atomic Energy Commission (CEA) created by de Gaulle and the Provisional Government of the French Republic. She died in Paris on 17 March 1956 from an acute leukemia linked to her exposure to polonium and X-rays. Biography Early life and education Irène was born in Paris, France, on 12 September 1897 and was the first of Marie and Pierre's two daughters. Her sister was Ève, born in 1904. They lost their father early on in 1906 due to a horse-drawn wagon incident and Marie was left to raise them. Education was important to Marie and Irène's education began at a school near the Paris Observatory. This school was chosen because it had a more challenging curriculum than the school nearby the Curie's home. In 1906, it was obvious Irène was talented in mathematics and her mother chose to focus on that instead of public school. Marie joined forces with a number of eminent French scholars, including the prominent French physicist Paul Langevin, to form "The Cooperative", which included a private gathering of nine students that were children of the most distinguished academics in France. Each contributed to educating these children in their respective homes. The curriculum of The Cooperative was varied and included not only the principles of science and scientific research but such diverse subjects as Chinese and sculpture and with great emphasis placed on self-expression and play. Irène studied in this environment for about two years. Irène and her sister Ève were sent to Poland to spend the summer with their Aunt Bronia (Marie's sister) when Irène was thirteen. Irène's education was so rigorous that she still had a German and trigonometry lesson every day of that break. Irène re-entered a more orthodox learning environment by going back to high school at the Collège Sévigné in central Paris until 1914. She then went onto the Faculty of Science at the Sorbonne to complete her baccalaureate, until 1916 when her studies were interrupted by World War I. World War I Irène took a nursing course during college to assist her mother, Marie Curie, in the field as her assistant. She began her work as a nurse radiographer on the battlefield alongside her mother, but after a few months she was left to work alone at a radiological facility in Belgium. She taught doctors how to locate shrapnel in bodies using radiology and taught herself how to repair the equipment. She moved throughout facilities and battlegrounds including two bombsites, Furnes and Ypres, and Amiens. 
She received a military medal for her assistance in X-ray facilities in France and Belgium. After the war, Irène returned to the Sorbonne in Paris to complete her second baccalaureate degree in mathematics and physics in 1918. Irène then went on to work as her mother's assistant, teaching radiology at the Radium Institute, which had been built by her parents. Her doctoral thesis was concerned with the alpha decay of polonium, the element discovered by her parents (along with radium) and named after Marie's country of birth, Poland. Irène became a Doctor of Science in 1925. Research As she neared the end of her doctorate in 1924, Irène Curie was asked to teach the precision laboratory techniques required for radiochemical research to the young chemical engineer Frédéric Joliot, whom she would later wed. From 1928 Joliot-Curie and her husband Frédéric combined their research efforts on the study of atomic nuclei. In 1932, Joliot-Curie and her husband Frédéric had full access to Marie's polonium. Experiments were done using gamma rays to identify the positron. Though their experiments identified both the positron and the neutron, they failed to interpret the significance of the results and the discoveries were later claimed by Carl David Anderson and James Chadwick respectively. These discoveries would have secured greatness indeed, as together with J. J. Thomson's discovery of the electron in 1897, they finally replaced John Dalton's model of atoms as solid spherical particles. However, in 1933, Joliot-Curie and her husband were the first to calculate the accurate mass of the neutron. The Joliot-Curies continued trying to establish their name in the scientific community; in doing so they developed a new theory from an interesting experiment they conducted. During an experiment bombarding aluminium with alpha rays, they discovered that only protons were detected. Based on the undetectable electron and positron pair, they proposed that the protons changed into neutrons and positrons. In October 1933, this new theory was presented at the Seventh Solvay Conference. The Solvay Conferences consisted of prominent scientists in the physics and chemistry community. Irène and her husband presented their theory and results to their fellow scientists, but they received criticism of their finding from most of the 46 scientists attending. However, they were able to build on the controversial theory later on. In 1934, the Joliot-Curies finally made the discovery that sealed their place in scientific history. Building on the work of Marie and Pierre Curie, who had isolated naturally occurring radioactive elements, the Joliot-Curies realised the alchemist's dream of turning one element into another: creating radioactive nitrogen from boron, radioactive isotopes of phosphorus from aluminium, and silicon from magnesium. Irradiating the natural stable isotope of aluminium with alpha particles (i.e. helium nuclei) resulted in an unstable isotope of phosphorus: 27Al + 4He → 30P + 1n. This phosphorus isotope is not found in nature and decays by emitting a positron. This decay mode is formally known as positron emission or beta-plus decay, in which a proton in the radioactive nucleus changes to a neutron and releases a positron and an electron neutrino. By then, the application of radioactive materials for use in medicine was growing and this discovery allowed radioactive materials to be created quickly, cheaply, and plentifully. 
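Written out with explicit mass and atomic numbers (a standard bookkeeping of the reaction described above, not a quotation from the Joliot-Curies' papers), the transmutation and the subsequent decay of the artificial isotope are:

{}^{27}_{13}\mathrm{Al} + {}^{4}_{2}\mathrm{He} \rightarrow {}^{30}_{15}\mathrm{P} + {}^{1}_{0}\mathrm{n}, \qquad {}^{30}_{15}\mathrm{P} \rightarrow {}^{30}_{14}\mathrm{Si} + e^{+} + \nu_{e}

In each step the mass numbers balance (27 + 4 = 30 + 1, and 30 = 30) and so do the charges (13 + 2 = 15, and 15 = 14 + 1), which is why the bombardment must yield phosphorus-30 and why its positron emission leaves stable silicon-30.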
The Nobel Prize for chemistry in 1935 brought with it fame and recognition from the scientific community and Joliot-Curie was awarded a professorship at the Faculty of Science. The work that Irène's laboratory pioneered, research into radium nuclei, would also help another group of physicists within Germany. Otto Hahn and Fritz Strassman on 19 December 1938 bombarded uranium with neutrons, but misinterpreted their findings. Lise Meitner and Otto Frisch would theoretically correct Hahn and Strassmann's findings, and after replicating their experiment based on Hungarian physicist Leo Szilard's theory that he had confided to Meitner back in 1933, confirmed on 13 January 1939 that Hahn and Strassmann had indeed observed nuclear fission: the splitting of the nucleus itself, emitting vast amounts of energy. Lise Meitner's now-famous calculations actually disproved Irène's results and proved that nuclear fission was possible and replicable. In 1948, using work on nuclear fission, the Joliot-Curies along with other scientists created the first French nuclear reactor. The Joliot-Curies were a part of the organization in charge of the project, the Atomic Energy Commission, Commissariat à l'énergie atomique (CEA). Irène was the commissioner of the CEA and Irène's husband, Frédéric, was the director of the CEA. The reactor, Zoé (Zéro énergie Oxyde et Eau lourde) used nuclear fission to generate five kilowatts of power. This was the beginning of nuclear energy as a source of power for France. Years of working so closely with radioactive materials finally caught up with Joliot-Curie and she was diagnosed with leukemia. She had been accidentally exposed to polonium when a sealed capsule of the element exploded on her laboratory bench in 1946. Treatment with antibiotics and a series of operations relieved her suffering temporarily but her condition continued to deteriorate. Despite this, Joliot-Curie continued to work and in 1955 drew up plans for new physics laboratories at the Orsay Faculty of Sciences, which is now a part of the Paris-Saclay University, south of Paris. Political views The Joliot-Curies had become increasingly aware of the growth of the fascist movement. They opposed its ideals and joined the Socialist Party in 1934, the Comité de vigilance des intellectuels antifascistes a year later, and in 1936 they actively supported the Republican faction in the Spanish Civil War. In the same year, Joliot-Curie was appointed Undersecretary of State for Scientific Research by the French government, in which capacity she helped in founding the Centre National de la Recherche Scientifique. Frédéric and Irène visited Moscow for the two hundred and twentieth anniversary of the Russian Academy of Science and returned sympathizing with Russian colleagues. Frédéric's close connection with the Communist Party caused Irène to later be detained on Ellis Island during her third trip to the US, coming to speak in support of Spanish refugees, at the Joint Antifascist Refugee Committee's invitation. The Joliot-Curies had continued Pierre and Marie's policy of publishing all of their work for the benefit of the global scientific community, but afraid of the danger that might result should it be developed for military use, they stopped: on 30 October 1939, they placed all of their documentation on nuclear fission in the vaults of the French Academy of Sciences, where it remained until 1949. 
Joliot-Curie's political career continued after the war and she became a commissioner in the Commissariat à l'énergie atomique. However, she still found time for scientific work and in 1946 became director of her mother's Institut Curie. Joliot-Curie became actively involved in promoting women's education, serving on the National Committee of the Union of French Women (Comité National de l'Union des Femmes Françaises) and the World Peace Council. The Joliot-Curies were given memberships to the French Légion d'honneur; Irène as an officer and Frédéric as a commander, recognising his earlier work for the resistance. Personal life Irène and Frédéric hyphenated their surnames to Joliot-Curie after they married in 1926. The Joliot-Curies had two children, Hélène, born eleven months after they were married, and Pierre, born in 1932. Between 1941 and 1943 during World War II, Joliot-Curie contracted tuberculosis and was forced to spend time convalescing in Switzerland. Concern for her own health together with the anguish of her husband's being in the resistance against the German troops and her children in occupied France was hard to bear. She did make several dangerous visits back to France, enduring detention by German troops at the Swiss border on more than one occasion. Finally, in 1944, Joliot-Curie judged it too dangerous for her family to remain in France and she took her children back to Switzerland. Later in September 1944, after not hearing from Frédéric for months, Irene and her children were finally able to rejoin him. Irène fought through these struggles to advocate for her own personal views. She was a passionate member of the feminist movement, especially regarding the sciences, and also advocated for peace. She continually applied to the French Academy of Sciences, an elite scientific organization, knowing that she would be denied. She did so to draw attention to the fact they did not accept women in the organization. Irène was also involved in many speaking functions such as the International Women's Day conference. She also played a big role for the French contingent at the World Congress of Intellectuals for Peace, which promoted the World Peace movement. In 1948, during a strike involving coal miners, Joliot-Curie reached out to Paris Newsletters to convince families to temporarily adopt the children of the coal miners during the strike. The Joliot-Curies adopted two girls during that time. Death In 1956, after a final convalescent period in the French Alps, Joliot-Curie was admitted to the Curie Hospital in Paris, where she died on 17 March at the age of 58 from leukemia, possibly due to radiation from polonium-210. Frédéric's health was also declining, and he died in 1958 from liver disease, which too was said to be the result of overexposure to radiation. Joliot-Curie was an atheist and anti-war. When the French government held a national funeral in her honor, Irène's family asked to have the religious and military portions of the funeral omitted. Frédéric was also given a national funeral by the French government. Joliot-Curie's daughter, Hélène Langevin-Joliot, went on to become a nuclear physicist and professor at the University of Paris. Joliot-Curie's son, Pierre Joliot, went on to become a biochemist at the Centre National de la Recherche Scientifique. Notable honours Nobel Prize in Chemistry in 1935 for the discovery of artificial radioactivity with Frédéric Joliot-Curie. Barnard Gold Medal for Meritorious Service to Science in 1940 with Frédéric Joliot-Curie. 
Officer of the Legion of Honor. Her name was added to the Monument to the X-ray and Radium Martyrs of All Nations erected in Hamburg, Germany. See also List of female Nobel laureates Stefania Maracineanu Radioactive (film) Timeline of women in science Women in chemistry References Further reading Conference (Dec. 1935) for the Nobel prize of F. & I. Joliot-Curie, online and analysed on BibNum [click 'à télécharger' for English version]. External links including the Nobel Lecture on 12 December 1935 Artificial Production of Radioactive Elements 1897 births 1956 deaths 20th-century French chemists 20th-century French physicists 20th-century French women scientists Curie family Deaths from leukemia in France French women activists French atheists French Nobel laureates French people of Polish descent French socialist feminists French socialists French women chemists French women physicists Nobel laureates in Chemistry Members of the German Academy of Sciences at Berlin Nuclear chemists Paris-Saclay University people Recipients of the Order of the Cross of Grunwald, 3rd class Scientists from Paris University of Paris alumni Women Nobel laureates
Irène Joliot-Curie
[ "Chemistry", "Technology" ]
3,141
[ "Nuclear chemists", "Women Nobel laureates", "Women in science and technology" ]
62,108
https://en.wikipedia.org/wiki/Elliot%20See
Elliot McKay See Jr. (July 23, 1927 – February 28, 1966) was an American engineer, naval aviator, test pilot and NASA astronaut. See received an appointment to the United States Merchant Marine Academy in 1945. He graduated in 1949 with a Bachelor of Science degree in marine engineering and a United States Naval Reserve commission, and joined the Aircraft Gas Turbine Division of General Electric as an engineer. He was called to active duty as a naval aviator during the Korean War, and flew Grumman F9F Panther fighters with Fighter Squadron 144 (VF-144) from the aircraft carrier in the Mediterranean, and in the Western Pacific. He married Marilyn Denahy in 1954, and they had three children. See rejoined General Electric (GE) in 1956 as a flight test engineer after his tour of duty, and became a group leader and experimental test pilot at Edwards Air Force Base, where he flew the latest jet aircraft with GE engines. He also obtained a Master of Science degree in aeronautical engineering from UCLA. Selected in NASA's second group of astronauts in 1962, See was the prime command pilot for what would have been his first space flight, Gemini 9. He was killed along with Charles Bassett, his Gemini 9 crewmate, in a NASA jet crash at the St. Louis McDonnell Aircraft plant, where they were to undergo two weeks of space rendezvous simulator training. Early life and education Elliot McKay See Jr. was born on July 23, 1927, in Dallas, Texas, to Elliot McKay See Sr. (1888–1968) and Mamie Norton See ( Drummond; 1900–1988). He was the first of two children; his sister Sally Drummond See rounded out the family in 1930. His father was an electrical engineer who worked for General Electric, and his mother worked in jobs ranging from advertising to real estate. See was active in the Boy Scouts of America for five years, and earned the rank of Eagle Scout. He attended Highland Park High School and was on the varsity team in several sports, including boxing. He was also on the Reserve Officer Training Corps (ROTC) Rifle Team. He graduated from high school in 1945. The United States entered World War II in December 1941. See had to choose between going to war or going to college, as he would otherwise be drafted at age 18. He decided to apply for aviation cadet training. He failed a physical, and, according to See, "going to college became the most important thing". He enrolled at the University of Texas, and after a few months pledged to Phi Kappa Psi. While at the University of Texas, he signed up for flying lessons and received his private pilot's license. See applied for military officer training and received an appointment to the United States Merchant Marine Academy (USMMA) in 1945. As the end of the war drew near, the USMMA changed its curriculum to a four-year college-level program, which was the minimum requirement to be a merchant marine in peacetime. He spent his plebe year at Pass Christian, Mississippi, where the USMMA had a satellite campus, and then transferred to the main campus at Kings Point, New York. He commanded the Third Company as a cadet officer. He was a member of the Propeller Club and head cheerleader. He was on the mile relay running team, played intramural softball, and was a varsity boxer. As co-captain of the rifle team, he won the Captain Tomb Trophy for individual rifle and pistol marksmanship in December 1948. In 1949, Congress authorized the USMMA to award Bachelor of Science degrees to its graduates, so on graduation that year Elliot received his B.S. 
degree, his marine engineer's licenses, and a commission as an officer in the United States Naval Reserve. Navy service and General Electric After graduation, See took a summer job with Lykes Brothers Steamship Company. On September 1, 1949, he joined the Aircraft Gas Turbine Division of General Electric, the firm his father had worked for, in Boston. He moved to Cincinnati, Ohio, when the division was relocated. There he met Marilyn Jane Denahy from Georgetown, Ohio, who worked at General Electric as a secretary. He and his friend Tay Haney pooled their funds to buy a Luscombe Silvaire Sprayer aircraft, which they flew on cross-country trips. In November 1952, while taking Marilyn on a joyride, the Luscombe's engine began to fail. See attempted to land the aircraft on a short, unimproved field, but the tail wheel snagged a power line and forced the aircraft into the ground. See suffered deep cuts to his face which required plastic surgery. Marilyn escaped the crash with only minor injuries. By 1953, See was working as a flight test engineer at General Electric's plant in Evendale, Ohio. Like many naval reservists, he was called to active duty due to the Korean War. He was initially stationed at Miramar Naval Air Station near San Diego, California. He married Marilyn on September 30, 1954, before shipping out for a sixteen-month operational tour as a naval aviator, flying the Grumman F9F Panther with Fighter Squadron 144 (VF-144), part of Carrier Air Group 14. He was deployed to the Mediterranean on the aircraft carrier , which returned to the United States in June 1955. In October, after further training at El Centro Naval Air Station, California, he embarked with VF-144 on an operational cruise on the aircraft carrier , which formed part of Task Force 77. The task force traveled to Hawaii, Japan, the Philippine Islands, and Hong Kong. See primarily focused on line maintenance, but also became proficient at carrier landings. By the end of the tour, he had reached the rank of lieutenant commander. He returned home in February 1956, in time for the birth of his first child, Sally. The couple later had two more children: Carolyn in 1957, and David in 1962. See rejoined General Electric in 1956 as a flight test engineer after his tour of duty. He became a group leader and experimental test pilot at Edwards Air Force Base, California, where the United States Air Force conducted flight tests. He served as a project pilot for the development of the General Electric J79-8 engine used in the F4H aircraft. He also conducted powerplant flight tests on the J-47, J-73, J-79, CJ805 and CJ805 aft-fan engines, which involved flying in F-86, XF4D, F-104, F11F-1F, RB-66, F4H, and T-38 aircraft. He worked towards his master's degree one night a week, starting in 1960, eventually obtaining a Master of Science degree in aeronautical engineering from UCLA in 1962, and continued flying with the Naval Reserve. He was eventually promoted to commander. NASA In 1962, See applied to become a NASA astronaut. After undergoing preliminary evaluations, medical tests, and interviews during the selection process, See was selected to be in NASA's second group of astronauts, known as The New Nine. He was 35 at the time of his selection; the oldest in the group. On his selection, he said "Overwhelmed isn't the right word. I was amazed and certainly pleased. It's a very great honor." At the time of his selection, See had logged more than 3,900 hours of flying time, including more than 3,300 in jet aircraft. 
He drove from Edwards with fellow civilian pilot Neil Armstrong to start his new career in Houston, Texas, where the new Manned Spacecraft Center (MSC) was under construction. Every astronaut was assigned a core competency, a special area in which they had to develop expertise, by the NASA Astronaut Office. The knowledge they gathered could then be shared with the others, and the astronaut-expert was expected to provide astronaut input to the spacecraft designers and engineers. See's special area of expertise was the spacecraft electrical and sequential systems, and the coordination of mission planning. See was tasked with determining if the crewed lunar landing should occur in direct sunlight or using light reflected from the Earth. To help make the decision, he flew helicopters and airplanes wearing special welding goggles to simulate different lighting conditions. See also landed helicopters with Jim Lovell on lava flows that simulated the terrain on the Moon. See was announced as the backup pilot for Gemini 5 on February 8, 1965, with Armstrong serving as the backup command pilot. They were the first civilians selected for a spaceflight. Gemini 5 was launched on August 21, 1965. Early in the flight, a problem was discovered with the fuel cells, and the flight controllers considered ending the mission early. See, who had worked with General Electric in developing the fuel cells, was confident that they could find a solution. Flight Director Chris Kraft gave them 24 hours to fix the problem. After working through the night, they diagnosed the problem and developed procedures that allowed the astronauts to fix the fuel cells, which allowed the mission to continue. See was a capsule communicator (CAPCOM) at MSC in Houston during the Gemini 7/Gemini 6A rendezvous mission in December 1965. Under the crew rotation system devised by chief astronaut Deke Slayton, as the backup for Gemini 5, Armstrong and See were in line for prime crew of Gemini 8. From the spring to the fall of 1965, Armstrong and See trained for the Gemini 5 mission. They spent a significant amount of time training in the spacecraft simulators. They flew back and forth to Kennedy Space Center, from which their spacecraft would be launched; to North Carolina to develop experiments to be conducted during the flight; and to McDonnell Aircraft in St. Louis, where the Gemini spacecraft was made. Contrary to Slayton's typical crew rotation, David Scott took See's place as the pilot of Gemini 8. According to his autobiography, Slayton did not assign See to Gemini 8 because he considered him as too out-of-shape to perform an extravehicular activity. Life photographer Ralph Morse asked Armstrong why See was no longer assigned with him on the Gemini 8 mission, and Armstrong replied, "Elliot's too good a pilot not to have a command of his own." In October 1965 See was promoted to command pilot (first seat) of Gemini 9, with Charles Bassett as his pilot. The Gemini 9 mission was similar to the previous mission. An extravehicular activity (EVA) that used the Astronaut Maneuvering Unit (AMU) was scheduled, and they would rendezvous with an Agena target vehicle. Bassett was scheduled for the EVA and See would stay in the capsule. Death On February 28, 1966, See and Charles Bassett were flying with their backup crew, Gene Cernan and Thomas Stafford, from Ellington Air Force Base to Lambert Field in St. Louis, Missouri, for two weeks of space rendezvous simulator training. The prime crew flew in one jet and the backup crew in another. 
See was the pilot of their T-38 trainer jet, with Bassett in the rear seat. The weather at Lambert Field that Monday morning was poor and required an instrument approach. Both jets overshot the initial landing attempt; See continued with a visual circling approach and Stafford elected to follow the standard procedure for a missed approach. On his second attempt, See undershot the runway, hit the afterburners and turned to the right. The jet crashed into McDonnell Aircraft Building 101, where the Gemini spacecraft was built. See was found in a parking lot still strapped to his ejection seat. Both astronauts died instantly from trauma sustained in the accident, within of their spacecraft. See and Bassett were buried near each other in Arlington National Cemetery, and the graves are about from Theodore Freeman, another astronaut who died in a T-38 crash sixteen months prior. After a reporter had disclosed to Freeman's wife that he had died, NASA enacted new policies to avoid a similar embarrassing situation in the future. In compliance with these policies, astronaut John Young asked Marilyn Lovell and Jane Conrad to go to Marilyn See's house and ensure she did not find out about her husband's death from a non-NASA source. They rushed over and made excuses for their early surprise visit. After Young arrived to break the news, the three hugged her for comfort. Marilyn Lovell then went to the school to pick up Marilyn See's children, to make sure they did not find out from the press. A NASA investigative panel later concluded that pilot error, caused by bad weather, was the principal cause of the accident. The panel concluded that See was flying too low on his second approach, probably due to poor visibility. At the time, See was known as one of the better pilots in the astronaut corps. Slayton later expressed doubts about See's flying abilities, claiming that he flew too slowly, and "wasn't aggressive enough... he flew too slow–a fatal problem in a plane like the T-38, which will stall easily if you get below ." Jim Lovell and Buzz Aldrin were promoted to the backup crew as a result of the accident. Stafford and Cernan, the original backup crew, were launched three months later on June 3 as Gemini 9A. The shuffling of the Gemini crews caused by the deaths of See and Bassett affected crew assignments for subsequent Gemini and Project Apollo missions. In particular, Aldrin flew as the pilot of Gemini 12, and later Apollo 11. Both men were buried in Arlington National Cemetery on Friday, March 4. During funeral services in Texas two days earlier, Aldrin, Bill Anders, and Walter Cunningham flew the missing man formation in See's honor, while Lovell, Jim McDivitt, and civilian pilot Jere Cobb did the same to honor Bassett. Legacy See was survived by his wife Marilyn and three children. After his death she continued to live in Houston, where she worked as a court reporter. See's name is inscribed on the Fallen Astronaut plaque placed on the Moon by Apollo 15 in 1971. He is also listed on the Space Mirror Memorial at the John F. Kennedy Space Center Visitor Complex, dedicated in 1991. He was honored by Highland Park High School in 2010 as one of the recipients of its Distinguished Alumni Award. A room at the USMMA is also dedicated to his memory. See was a member of the Society of Experimental Test Pilots (SETP) and an associate fellow of the American Institute of Aeronautics and Astronautics (AIAA). 
In media See was played by Steve Zahn in the 1998 HBO miniseries From the Earth to the Moon, and by Patrick Fugit in the 2018 film First Man. See also Fallen Astronaut List of Eagle Scouts List of spaceflight-related accidents and incidents Notes References 1927 births 1966 deaths Accidental deaths in Missouri American test pilots Aviators from Texas Aviators killed in aviation accidents or incidents in the United States Burials at Arlington National Cemetery General Electric people American aerospace engineers Engineers from Texas Military personnel from Dallas Space program fatalities Highland Park High School (University Park, Texas) alumni United States Merchant Marine Academy alumni University of California, Los Angeles alumni United States Naval Aviators University of Texas at Austin alumni Victims of aviation accidents or incidents in 1966 NASA Astronaut Group 2 NASA civilian astronauts
Elliot See
[ "Engineering" ]
3,087
[ "Space program fatalities", "Space programs" ]
62,119
https://en.wikipedia.org/wiki/Bradford%27s%20law
Bradford's law is a pattern first described by Samuel C. Bradford in 1934 that estimates the exponentially diminishing returns of searching for references in science journals. One formulation is that if journals in a field are sorted by number of articles into three groups, each with about one-third of all articles, then the number of journals in each group will be proportional to 1 : n : n². There are a number of related formulations of the principle. In many disciplines, this pattern is called a Pareto distribution. As a practical example, suppose that a researcher has five core scientific journals for his or her subject. Suppose that in a month there are 12 articles of interest in those journals. Suppose further that in order to find another dozen articles of interest, the researcher would have to go to an additional 10 journals. Then that researcher's Bradford multiplier bm is 2 (i.e. 10/5). For each new dozen articles, that researcher will need to look in bm times as many journals. After looking in 5, 10, 20, 40, etc. journals, most researchers quickly realize that there is little point in looking further (a short numerical sketch of this progression follows the article text below). Different researchers have different numbers of core journals, and different Bradford multipliers. But the pattern holds quite well across many subjects, and may well be a general pattern for human interactions in social systems. As with Zipf's law, to which it is related, there is no good explanation for why Bradford's law works, but knowing that it does is very useful for librarians. What it means is that for each specialty, it is sufficient to identify the "core publications" for that field and only stock those; very rarely will researchers need to go outside that set. However, its impact has been far greater than that. Armed with this idea and inspired by Vannevar Bush's famous article As We May Think, Eugene Garfield at the Institute for Scientific Information in the 1960s developed a comprehensive index of how scientific thinking propagates. His Science Citation Index (SCI) had the effect of making it easy to identify exactly which scientists did science that had an impact, and which journals that science appeared in. It also caused the discovery, which some did not expect, that a few journals, such as Nature and Science, were core for all of hard science. The same pattern does not happen with the humanities or the social sciences. The result of this is pressure on scientists to publish in the best journals, and pressure on universities to ensure access to that core set of journals. On the other hand, the set of "core journals" may vary more or less strongly with the individual researchers, and even more strongly along schools-of-thought divides. There is also a danger of over-representing majority views if journals are selected in this fashion. Scattering Bradford's law is also known as Bradford's law of scattering or the Bradford distribution, as it describes how the articles on a particular subject are scattered throughout the mass of periodicals. Another more general term that has come into use since 2006 is information scattering, an often observed phenomenon related to information collections where there are a few sources that have many items of relevant information about a topic, while most sources have only a few. This law of distribution in bibliometrics can be applied to the World Wide Web as well. Hjørland and Nicolaisen identified three kinds of scattering: Lexical scattering. The scattering of words in texts and in collections of texts. Semantic scattering. 
The scattering of concepts in texts and in collections of texts. Subject scattering. The scattering of items useful to a given task or problem. They found that the literature of Bradford's law (including Bradford's own papers) is unclear in relation to which kind of scattering is actually being measured. Law's interpretations The interpretation of Bradford's law in terms of a geometric progression was suggested by V. Yatsko, who introduced an additional constant and demonstrated that Bradford distribution can be applied to a variety of objects, not only to distribution of articles or citations across journals. V. Yatsko's interpretation (Y-interpretation) can be effectively used to compute threshold values in case it is necessary to distinguish subsets within a set of objects (successful/unsuccessful applicants, developed/underdeveloped regions, etc.). Related laws and distributions Benford's law, originally used to explain apparently non-uniform sampling Lotka's law, describes the frequency of publication by authors in any given field. Power law, a general mathematical form for "heavy-tailed" distributions, with a polynomial density function. In this form, these laws may all be expressed and estimates derived. Zeta distribution Zipf's law, originally used for word frequencies Zipf–Mandelbrot law See also PageRank The Long Tail Notes References Bradford, Samuel C., Sources of Information on Specific Subjects, Engineering: An Illustrated Weekly Journal (London), 137, 1934 (26 January), pp. 85–86. Reprinted as: Bradford, Samuel C. Sources of information on specific subjects, Journal of Information Science, 10:4, 1985 (October), pp. 173–180 Nicolaisen, Jeppe; and Hjørland, Birger (2007), Practical potentials of Bradford's law: A critical examination of the received view, Journal of Documentation, 63(3): 359–377. Available here and here Suresh K. Bhavnani, Concepcio´n S. Wilson, Information Scattering. Available Lancaster, F. W., & Pontigo J. (1986). Qualitative aspects of the Bradford distribution. Scientometrics, 9(1–2), 59–70. External links In Oldenburg's Long Shadow: Librarians, Research Scientists, Publishers, and the Control of Scientific Publishing Bibliometrics Computational linguistics Statistical laws
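The worked example in the article above (five core journals, a dozen relevant articles per month, and a Bradford multiplier of 2) reduces to a simple geometric progression. The Python sketch below merely restates that example; the function name and default parameters are assumptions of this note, not anything defined by Bradford.

# Minimal sketch of the Bradford multiplier example from the article:
# with 5 core journals and a multiplier of 2, each additional dozen
# relevant articles requires searching twice as many new journals.
def bradford_zones(core_journals=5, multiplier=2, zones=4):
    """Return (new journals searched, cumulative journals) for each zone,
    where each zone is assumed to yield roughly the same number of
    relevant articles (a dozen in the article's example)."""
    new, total, out = core_journals, 0, []
    for _ in range(zones):
        total += new
        out.append((new, total))
        new *= multiplier
    return out

for zone, (new, total) in enumerate(bradford_zones(), start=1):
    print(f"zone {zone}: search {new} new journals ({total} journals in total)")
# zone 1: search 5 new journals (5 journals in total)
# zone 2: search 10 new journals (15 journals in total)
# zone 3: search 20 new journals (35 journals in total)
# zone 4: search 40 new journals (75 journals in total)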
Bradford's law
[ "Mathematics", "Technology" ]
1,203
[ "Metrics", "Bibliometrics", "Quantity", "Science and technology studies", "Computational linguistics", "Natural language and computing" ]
62,149
https://en.wikipedia.org/wiki/Custard
Custard is a variety of culinary preparations based on sweetened milk, cheese, or cream cooked with egg or egg yolk to thicken it, and sometimes also flour, corn starch, or gelatin. Depending on the recipe, custard may vary in consistency from a thin pouring sauce () to the thick pastry cream () used to fill éclairs. The most common custards are used in custard desserts or dessert sauces and typically include sugar and vanilla; however, savory custards are also found, e.g., in quiche. Preparation Custard is usually cooked in a double boiler (bain-marie), or heated very gently in a saucepan on a stove, though custard can also be steamed, baked in the oven with or without a water bath, or even cooked in a pressure cooker. Custard preparation is a delicate operation because a temperature increase of leads to overcooking and curdling. Generally, a fully cooked custard should not exceed ; it begins setting at . A bain marie water bath slows heat transfer and makes it easier to remove the custard from the oven before it curdles. Adding a small amount of cornflour (U.S. corn starch) to the egg-sugar mixture stabilises the resulting custard, allowing it to be cooked in a single pan as well as in a double-boiler. A sous-vide water bath may be used to precisely control temperature. Variations While custard may refer to a wide variety of thickened dishes, technically (and in French cookery) the word custard (crème or more precisely crème moulée, ) refers only to an egg-thickened custard. When starch is added, the result is called 'pastry cream' (, ) or confectioners' custard, made with a combination of milk or cream, egg yolks, fine sugar, flour or some other starch, and usually a flavoring such as vanilla, chocolate, or lemon. Crème pâtissière is a key ingredient in many French desserts, including mille-feuille (or Napoleons) and filled tarts. It is also used in Italian pastry and sometimes in Boston cream pie. The thickening of the custard is caused by the combination of egg and starch. Corn flour or flour thickens at and as such many recipes instruct the pastry cream to be boiled. In a traditional custard such as a crème anglaise, where eggs are used alone as a thickener, boiling results in the over-cooking and subsequent curdling of the custard; however, in a pastry cream, starch prevents this. Once cooled, the amount of starch in pastry cream sets the cream and requires it to be beaten or whipped before use. When gelatin is added, it is known as crème anglaise collée (). When gelatin is added and whipped cream is folded in, and it sets in a mold, it is bavarois. When starch is used alone as a thickener (without eggs), the result is a blancmange. In the United Kingdom, custard has various traditional recipes some thickened principally with cornflour (cornstarch) rather than the egg component, others involving regular flour; see custard powder. After the custard has thickened, it may be mixed with other ingredients: mixed with stiffly beaten egg whites and gelatin, it is chiboust cream; mixed with whipped cream, it is crème légère, . Beating in softened butter produces German buttercream or crème mousseline. A quiche is a savoury custard tart. Some kinds of timbale or vegetable loaf are made of a custard base mixed with chopped savoury ingredients. Custard royale is a thick custard cut into decorative shapes and used to garnish soup, stew or broth. In German, it is known as Eierstich and is used as a garnish in German Wedding Soup (Hochzeitssuppe). 
Chawanmushi is a Japanese savoury custard, steamed and served in a small bowl or on a saucer. Chinese steamed egg is a similar but larger savoury egg dish. Bougatsa is a Greek breakfast pastry whose sweet version consists of semolina custard filling between layers of phyllo. Custard may also be used as a top layer in gratins, such as the South African bobotie and many Balkan versions of moussaka. In Peru, leche asada ("baked milk") is custard baked in individual molds. It is considered a restaurant dish. In French cuisine French cuisine has several named variations on custard: Crème anglaise is a light custard made with eggs, sugar, milk, and vanilla (with the possible addition of starch), with other flavoring agents as desired With cream instead of milk, and more sugar, it is the basis of crème brûlée With egg yolks and heavy cream, it is the basis of ice cream With egg yolks and whipped cream, and stabilised with gelatin, it is the basis of Bavarian cream Thickened with butter, chocolate, or gelatin, it is a popular basis for a crémeux Crème pâtissière (pastry cream) is similar to crème anglaise, but with a thickening agent such as cornstach or flour With added flavoring or fresh fruit, it is the basis of crème plombières Crème Saint-Honoré is crème pâtissière enriched with whipped egg whites Crème chiboust is similar to crème Saint-Honoré, but stabilised with gelatin Crème diplomate and crème légère are variations of crème pâtissière enriched with whipped cream Crème mousseline is a variation of crème pâtissière enriched with butter Frangipane is crème pâtissière mixed with powdered macarons or almond powder Uses Recipes involving sweet custard are listed in the custard dessert category, and include: Banana custard Bavarian cream Boston cream pie Bougatsa Chiboust cream Cream pie Crème brûlée Crème caramel Cremeschnitte Custard tart Danish pastry Egg tart Eggnog English trifle Flan Floating island Frangipane, with almonds Frozen custard Fruit Salad Galaktoboureko Manchester tart Muhallebi Natillas Pastel de nata Pudding Taiyaki Vanilla slice Vla Zabaione History Custards baked in pastry (custard tarts) were very popular in the Middle Ages, and are the origin of the English word 'custard': the French term croustade originally referred to the crust of a tart, and is derived from the Italian word crostata, and ultimately the Latin . Examples include Crustardes of flessh and Crustade, in the 14th century English collection The Forme of Cury. These recipes include solid ingredients such as meat, fish, and fruit bound by the custard. Stirred custards cooked in pots are also found under the names Creme Boylede and Creme boiled. Some custards especially in the Elizabethan era used marigold (calendula) to give the custard color. In modern times, the name 'custard' is sometimes applied to starch-thickened preparations like blancmange and Bird's Custard powder. Chemistry Stirred custard is thickened by coagulation of egg protein, while the same gives baked custard its gel structure. The type of milk used also impacts the result. Most important to a successfully stirred custard is to avoid excessive heat that will cause over-coagulation and syneresis that will result in a curdled custard. Eggs contain the proteins necessary for the gel structure to form, and emulsifiers to maintain the structure. Egg yolk also contains enzymes like amylase, which can break down added starch. This enzyme activity contributes to the overall thinning of custard in the mouth. 
Egg yolk lecithin also helps to maintain the milk-egg interface. The proteins in egg whites are set at . Starch is sometimes added to custard to prevent premature curdling. The starch acts as a heat buffer in the mixture: as they hydrate, they absorb heat and help maintain a constant rate of heat transfer. Starches also make for a smoother texture and thicker mouth feel. If the mixture pH is 9 or higher, the gel is too hard; if it is below 5, the gel structure has difficulty forming because protonation prevents the formation of covalent bonds. Physical-chemical properties Cooked (set) custard is a weak gel, viscous, and thixotropic; while it does become easier to stir the more it is manipulated, it does not, unlike many other thixotropic liquids, recover its lost viscosity over time. On the other hand, a suspension of uncooked imitation custard powder (starch) in water, with the proper proportions, has the opposite rheological property: it is negative thixotropic, or dilatant, allowing the demonstration of "walking on custard". See also List of desserts List of custard desserts Custard cream Bird's Custard – brand of imitation custard Eggnog – sweetened dairy-based beverage Pudding – dessert or savory dish References External links British desserts Dairy products English cuisine Food ingredients Steamed foods American desserts Types of food Creamy dishes
Custard
[ "Technology" ]
2,043
[ "Food ingredients", "Components" ]
62,198
https://en.wikipedia.org/wiki/Livermorium
Livermorium is a synthetic chemical element; it has symbol Lv and atomic number 116. It is an extremely radioactive element that has only been created in a laboratory setting and has not been observed in nature. The element is named after the Lawrence Livermore National Laboratory in the United States, which collaborated with the Joint Institute for Nuclear Research (JINR) in Dubna, Russia, to discover livermorium during experiments conducted between 2000 and 2006. The name of the laboratory refers to the city of Livermore, California, where it is located, which in turn was named after the rancher and landowner Robert Livermore. The name was adopted by IUPAC on May 30, 2012. Six isotopes of livermorium are known, with mass numbers of 288–293 inclusive; the longest-lived among them is livermorium-293 with a half-life of about 80 milliseconds. A seventh possible isotope with mass number 294 has been reported but not yet confirmed. In the periodic table, it is a p-block transactinide element. It is a member of the 7th period and is placed in group 16 as the heaviest chalcogen, but it has not been confirmed to behave as the heavier homologue to the chalcogen polonium. Livermorium is calculated to have some similar properties to its lighter homologues (oxygen, sulfur, selenium, tellurium, and polonium), and be a post-transition metal, though it should also show several major differences from them. Introduction History Unsuccessful synthesis attempts The first search for element 116, using the reaction between 248Cm and 48Ca, was performed in 1977 by Ken Hulet and his team at the Lawrence Livermore National Laboratory (LLNL). They were unable to detect any atoms of livermorium. Yuri Oganessian and his team at the Flerov Laboratory of Nuclear Reactions (FLNR) in the Joint Institute for Nuclear Research (JINR) subsequently attempted the reaction in 1978 and met failure. In 1985, in a joint experiment between Berkeley and Peter Armbruster's team at GSI, the result was again negative, with a calculated cross section limit of 10–100 pb. Work on reactions with 48Ca, which had proved very useful in the synthesis of nobelium from the natPb+48Ca reaction, nevertheless continued at Dubna, with a superheavy element separator being developed in 1989, a search for target materials and starting of collaborations with LLNL being started in 1990, production of more intense 48Ca beams being started in 1996, and preparations for long-term experiments with 3 orders of magnitude higher sensitivity being performed in the early 1990s. This work led directly to the production of new isotopes of elements 112 to 118 in the reactions of 48Ca with actinide targets and the discovery of the 5 heaviest elements on the periodic table: flerovium, moscovium, livermorium, tennessine, and oganesson. In 1995, an international team led by Sigurd Hofmann at the Gesellschaft für Schwerionenforschung (GSI) in Darmstadt, Germany attempted to synthesise element 116 in a radiative capture reaction (in which the compound nucleus de-excites through pure gamma emission without evaporating neutrons) between a lead-208 target and selenium-82 projectiles. No atoms of element 116 were identified. Unconfirmed discovery claims In late 1998, Polish physicist Robert Smolańczuk published calculations on the fusion of atomic nuclei towards the synthesis of superheavy atoms, including elements 118 and 116. His calculations suggested that it might be possible to make these two elements by fusing lead with krypton under carefully controlled conditions. 
In 1999, researchers at Lawrence Berkeley National Laboratory made use of these predictions and announced the discovery of elements 118 and 116, in a paper published in Physical Review Letters, and very soon after the results were reported in Science. The researchers reported that they had performed the reaction 208Pb + 86Kr → 293Og + 1n, with the element 118 product then decaying by alpha emission to element 116 (289Lv). The following year, they published a retraction after researchers at other laboratories were unable to duplicate the results and the Berkeley lab itself was unable to duplicate them as well. In June 2002, the director of the lab announced that the original claim of the discovery of these two elements had been based on data fabricated by principal author Victor Ninov. The isotope 289Lv was finally discovered in 2024 at the JINR. Discovery Livermorium was first synthesized on July 19, 2000, when scientists at Dubna (JINR) bombarded a curium-248 target with accelerated calcium-48 ions. A single atom was detected, decaying by alpha emission with decay energy 10.54 MeV to an isotope of flerovium. The results were published in December 2000. The reaction was 248Cm + 48Ca → 296Lv* → 293Lv + 3n, with the livermorium atom then alpha decaying to 289Fl. The daughter flerovium isotope had properties matching those of a flerovium isotope first synthesized in June 1999, which was originally assigned to 288Fl, implying an assignment of the parent livermorium isotope to 292Lv. Later work in December 2002 indicated that the synthesized flerovium isotope was actually 289Fl, and hence the assignment of the synthesized livermorium atom was correspondingly altered to 293Lv. Road to confirmation Two further atoms were reported by the institute during their second experiment in April–May 2001. In the same experiment they also detected a decay chain which corresponded to the first observed decay of flerovium in December 1998, which had been assigned to 289Fl. No flerovium isotope with the same properties as the one found in December 1998 has ever been observed again, even in repeats of the same reaction. Later it was found that 289Fl has different decay properties and that the first observed flerovium atom may have been its nuclear isomer 289mFl. The observation of 289mFl in this series of experiments may indicate the formation of a parent isomer of livermorium, namely 293mLv, or a rare and previously unobserved decay branch of the already-discovered state 293Lv to 289mFl. Neither possibility is certain, and research is required to positively assign this activity. Another possibility suggested is the assignment of the original December 1998 atom to 290Fl, as the low beam energy used in that original experiment makes the 2n channel plausible; its parent could then conceivably be 294Lv, but this assignment would still need confirmation in the 248Cm(48Ca,2n)294Lv reaction. The team repeated the experiment in April–May 2005 and detected 8 atoms of livermorium. The measured decay data confirmed the assignment of the first-discovered isotope as 293Lv. In this run, the team also observed the isotope 292Lv for the first time. In further experiments from 2004 to 2006, the team replaced the curium-248 target with the lighter curium isotope curium-245. Here evidence was found for the two isotopes 290Lv and 291Lv. In May 2009, the IUPAC/IUPAP Joint Working Party reported on the discovery of copernicium and acknowledged the discovery of the isotope 283Cn. This implied the de facto discovery of the isotope 291Lv, from the acknowledgment of the data relating to its granddaughter 283Cn, although the livermorium data was not absolutely critical for the demonstration of copernicium's discovery. 
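The synthesis and decay steps reported above follow straightforward nucleon bookkeeping: fusion adds the mass and atomic numbers of target and projectile, each evaporated neutron removes one mass unit, and each alpha decay removes four mass units and two units of charge. The Python sketch below illustrates that arithmetic for the 248Cm + 48Ca reaction; the helper names are assumptions of this note, not from the source.

# Minimal sketch: nucleon bookkeeping for the 248Cm + 48Ca reaction above.
# Nuclides are (mass number A, atomic number Z) pairs.
def fuse(target, projectile, evaporated_neutrons):
    """Compound-nucleus product after neutron evaporation."""
    return (target[0] + projectile[0] - evaporated_neutrons,
            target[1] + projectile[1])

def alpha_chain(nuclide, steps):
    """Successive alpha decays: A -> A - 4, Z -> Z - 2 at each step."""
    chain = [nuclide]
    for _ in range(steps):
        a, z = chain[-1]
        chain.append((a - 4, z - 2))
    return chain

CM248 = (248, 96)
CA48 = (48, 20)

lv = fuse(CM248, CA48, evaporated_neutrons=3)
assert lv == (293, 116)  # 293Lv, the assignment eventually adopted

# Two alpha decays reach flerovium and then copernicium:
print(alpha_chain(lv, steps=2))  # [(293, 116), (289, 114), (285, 112)]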
Also in 2009, confirmation from Berkeley and the Gesellschaft für Schwerionenforschung (GSI) in Germany came for the flerovium isotopes 286 to 289, immediate daughters of the four known livermorium isotopes. In 2011, IUPAC evaluated the Dubna team experiments of 2000–2006. Whereas they found the earliest data (not involving 291Lv and 283Cn) inconclusive, the results of 2004–2006 were accepted as identification of livermorium, and the element was officially recognized as having been discovered. The synthesis of livermorium has been separately confirmed at the GSI (2012) and RIKEN (2014 and 2016). In the 2012 GSI experiment, one chain tentatively assigned to 293Lv was shown to be inconsistent with previous data; it is believed that this chain may instead originate from an isomeric state, 293mLv. In the 2016 RIKEN experiment, one atom that may be assigned to 294Lv was seemingly detected, alpha decaying to 290Fl and 286Cn, which underwent spontaneous fission; however, the first alpha from the livermorium nuclide produced was missed, and the assignment to 294Lv is still uncertain though plausible. Naming Using Mendeleev's nomenclature for unnamed and undiscovered elements, livermorium is sometimes called eka-polonium. In 1979 IUPAC recommended that the placeholder systematic element name ununhexium (Uuh) be used until the discovery of the element was confirmed and a name was decided. Although widely used in the chemical community on all levels, from chemistry classrooms to advanced textbooks, the recommendations were mostly ignored among scientists in the field, who called it "element 116", with the symbol of E116, (116), or even simply 116. According to IUPAC recommendations, the discoverer or discoverers of a new element have the right to suggest a name. The discovery of livermorium was recognized by the Joint Working Party (JWP) of IUPAC on 1 June 2011, along with that of flerovium. According to the vice-director of JINR, the Dubna team originally wanted to name element 116 moscovium, after the Moscow Oblast in which Dubna is located, but it was later decided to use this name for element 115 instead. The name livermorium and the symbol Lv were adopted on May 23, 2012. The name recognises the Lawrence Livermore National Laboratory, within the city of Livermore, California, US, which collaborated with JINR on the discovery. The city in turn is named after the American rancher Robert Livermore, a naturalized Mexican citizen of English birth. The naming ceremony for flerovium and livermorium was held in Moscow on October 24, 2012. Other routes of synthesis The synthesis of livermorium in fusion reactions using projectiles heavier than 48Ca has been explored in preparation for synthesis attempts of the yet-undiscovered element 120, as such reactions would necessarily utilize heavier projectiles. In 2023, the reaction between 238U and 54Cr was studied at the JINR's Superheavy Element Factory in Dubna; one atom of the new isotope 288Lv was reported, though more detailed analysis has not yet been published. Similarly, in 2024, a team at the Lawrence Berkeley National Laboratory reported the synthesis of two atoms of 290Lv in the reaction between 244Pu and 50Ti. This result was described as "truly groundbreaking" by RIKEN director Hiromitsu Haba, whose team plans to search for element 119. The team at JINR studied the reaction between 242Pu and 50Ti in 2024 as a follow-up to the 238U+54Cr, obtaining additional decay data for 288Lv and its decay products and discovering the new isotope 289Lv. 
Predicted properties Other than nuclear properties, no properties of livermorium or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that it decays very quickly. Properties of livermorium remain unknown and only predictions are available. Nuclear stability and isotopes Livermorium is expected to be near an island of stability centered on copernicium (element 112) and flerovium (element 114). Due to the expected high fission barriers, any nucleus within this island of stability exclusively decays by alpha decay and perhaps some electron capture and beta decay. While the known isotopes of livermorium do not actually have enough neutrons to be on the island of stability, they can be seen to approach the island, as the heavier isotopes are generally the longer-lived ones. Superheavy elements are produced by nuclear fusion. These fusion reactions can be divided into "hot" and "cold" fusion, depending on the excitation energy of the compound nucleus produced. In hot fusion reactions, very light, high-energy projectiles are accelerated toward very heavy targets (actinides), giving rise to compound nuclei at high excitation energy (~40–50 MeV) that may either fission or evaporate several (3 to 5) neutrons. In cold fusion reactions (which use heavier projectiles, typically from the fourth period, and lighter targets, usually lead and bismuth), the produced fused nuclei have a relatively low excitation energy (~10–20 MeV), which decreases the probability that these products will undergo fission reactions. As the fused nuclei cool to the ground state, they require emission of only one or two neutrons. Hot fusion reactions tend to produce more neutron-rich products because the actinides have the highest neutron-to-proton ratios of any elements that can presently be made in macroscopic quantities. Important information could be gained regarding the properties of superheavy nuclei by the synthesis of more livermorium isotopes, specifically those with a few neutrons more or less than the known ones – 286Lv, 287Lv, 294Lv, and 295Lv. This is possible because there are many reasonably long-lived isotopes of curium that can be used to make a target. The light isotopes can be made by fusing curium-243 with calcium-48. They would undergo a chain of alpha decays, ending at transactinide isotopes that are too light to achieve by hot fusion and too heavy to be produced by cold fusion. The same neutron-deficient isotopes are also reachable in reactions with projectiles heavier than 48Ca, which will be necessary to reach elements beyond atomic number 118 (or possibly 119); this is how 288Lv and 289Lv were discovered. The synthesis of the heavy isotopes 294Lv and 295Lv could be accomplished by fusing the heavy curium isotope curium-250 with calcium-48. The cross section of this nuclear reaction would be about 1 picobarn, though it is not yet possible to produce 250Cm in the quantities needed for target manufacture. Alternatively, 294Lv could be produced via charged-particle evaporation in the 251Cf(48Ca,pn) reaction. After a few alpha decays, these livermorium isotopes would reach nuclides at the line of beta stability. Additionally, electron capture may also become an important decay mode in this region, allowing affected nuclei to reach the middle of the island. 
For example, it is predicted that 295Lv would alpha decay to 291Fl, which would undergo successive electron capture to 291Nh and then 291Cn, which is expected to be in the middle of the island of stability and have a half-life of about 1200 years, affording the most likely hope of reaching the middle of the island using current technology. A drawback is that the decay properties of superheavy nuclei this close to the line of beta stability are largely unexplored. Other possibilities to synthesize nuclei on the island of stability include quasifission (partial fusion followed by fission) of a massive nucleus. Such nuclei tend to fission, expelling doubly magic or nearly doubly magic fragments such as calcium-40, tin-132, lead-208, or bismuth-209. Recently it has been shown that the multi-nucleon transfer reactions in collisions of actinide nuclei (such as uranium and curium) might be used to synthesize the neutron-rich superheavy nuclei located at the island of stability, although formation of the lighter elements nobelium or seaborgium is more favored. One last possibility to synthesize isotopes near the island is to use controlled nuclear explosions to create a neutron flux high enough to bypass the gaps of instability at 258–260Fm and at mass number 275 (atomic numbers 104 to 108), mimicking the r-process in which the actinides were first produced in nature and the gap of instability around radon bypassed. Some such isotopes (especially 291Cn and 293Cn) may even have been synthesized in nature, but would have decayed away far too quickly (with half-lives of only thousands of years) and be produced in far too small quantities (about 10⁻¹² the abundance of lead) to be detectable as primordial nuclides today outside cosmic rays. Physical and atomic In the periodic table, livermorium is a member of group 16, the chalcogens. It appears below oxygen, sulfur, selenium, tellurium, and polonium. Every previous chalcogen has six electrons in its valence shell, forming a valence electron configuration of ns²np⁴. In livermorium's case, the trend should be continued and the valence electron configuration is predicted to be 7s²7p⁴; therefore, livermorium will have some similarities to its lighter congeners. Differences are likely to arise; a large contributing effect is the spin–orbit (SO) interaction, the mutual interaction between the electrons' motion and spin. It is especially strong for the superheavy elements, because their electrons move much faster than in lighter atoms, at velocities comparable to the speed of light. In relation to livermorium atoms, it lowers the 7s and the 7p electron energy levels (stabilizing the corresponding electrons), but two of the 7p electron energy levels are stabilized more than the other four. The stabilization of the 7s electrons is called the inert pair effect, and the effect "tearing" the 7p subshell into the more stabilized and the less stabilized parts is called subshell splitting. Computational chemists see the split as a change of the second (azimuthal) quantum number l from 1 to 1/2 and 3/2 for the more stabilized and less stabilized parts of the 7p subshell, respectively: the 7p1/2 subshell acts as a second inert pair, though not as inert as the 7s electrons, while the 7p3/2 subshell can easily participate in chemistry. For many theoretical purposes, the valence electron configuration may be represented to reflect the 7p subshell split as 7s²(7p1/2)²(7p3/2)². 
Inert pair effects in livermorium should be even stronger than in polonium and hence the +2 oxidation state becomes more stable than the +4 state, which would be stabilized only by the most electronegative ligands; this is reflected in the expected ionization energies of livermorium, where there are large gaps between the second and third ionization energies (corresponding to the breaching of the unreactive 7p1/2 shell) and fourth and fifth ionization energies. Indeed, the 7s electrons are expected to be so inert that the +6 state will not be attainable. The melting and boiling points of livermorium are expected to continue the trends down the chalcogens; thus livermorium should melt at a higher temperature than polonium, but boil at a lower temperature. It should also be denser than polonium (α-Lv: 12.9 g/cm3; α-Po: 9.2 g/cm3); like polonium it should also form an α and a β allotrope. The electron of a hydrogen-like livermorium atom (oxidized so that it only has one electron, Lv115+) is expected to move so fast that it has a mass 1.86 times that of a stationary electron, due to relativistic effects. For comparison, the figures for hydrogen-like polonium and tellurium are expected to be 1.26 and 1.080 respectively. Chemical Livermorium is projected to be the fourth member of the 7p series of chemical elements and the heaviest member of group 16 in the periodic table, below polonium. While it is the least theoretically studied of the 7p elements, its chemistry is expected to be quite similar to that of polonium. The group oxidation state of +6 is known for all the chalcogens apart from oxygen which cannot expand its octet and is one of the strongest oxidizing agents among the chemical elements. Oxygen is thus limited to a maximum +2 state, exhibited in the fluoride OF2. The +4 state is known for sulfur, selenium, tellurium, and polonium, undergoing a shift in stability from reducing for sulfur(IV) and selenium(IV) through being the most stable state for tellurium(IV) to being oxidizing in polonium(IV). This suggests a decreasing stability for the higher oxidation states as the group is descended due to the increasing importance of relativistic effects, especially the inert pair effect. The most stable oxidation state of livermorium should thus be +2, with a rather unstable +4 state. The +2 state should be about as easy to form as it is for beryllium and magnesium, and the +4 state should only be achieved with strongly electronegative ligands, such as in livermorium(IV) fluoride (LvF4). The +6 state should not exist at all due to the very strong stabilization of the 7s electrons, making the valence core of livermorium only four electrons. The lighter chalcogens are also known to form a −2 state as oxide, sulfide, selenide, telluride, and polonide; due to the destabilization of livermorium's 7p3/2 subshell, the −2 state should be very unstable for livermorium, whose chemistry should be essentially purely cationic, though the larger subshell and spinor energy splittings of livermorium as compared to polonium should make Lv2− slightly less unstable than expected. Livermorium hydride (LvH2) would be the heaviest chalcogen hydride and the heaviest homolog of water (the lighter ones are H2S, H2Se, H2Te, and PoH2). 
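The relativistic mass ratios quoted above for hydrogen-like livermorium, polonium, and tellurium can be roughly reproduced with a textbook estimate: in the Bohr picture the 1s electron of a one-electron ion moves at v = Zαc, giving a Lorentz factor of 1/√(1 − (Zα)²). The Python sketch below uses that estimate; it is a back-of-the-envelope check, not the full relativistic calculation behind the published figures.

# Rough check of the mass ratios quoted above for hydrogen-like ions
# (Lv115+, Po83+, Te51+), using the Bohr-model estimate
# gamma = 1 / sqrt(1 - (Z * alpha)**2).
from math import sqrt

ALPHA = 1 / 137.035999  # fine-structure constant

def bohr_gamma(z):
    return 1 / sqrt(1 - (z * ALPHA) ** 2)

for name, z in [("Lv", 116), ("Po", 84), ("Te", 52)]:
    print(f"{name} (Z={z}): gamma ~ {bohr_gamma(z):.3f}")
# Prints roughly 1.878 for Lv, 1.266 for Po and 1.081 for Te,
# close to the 1.86, 1.26 and 1.080 quoted in the text.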
Polane (polonium hydride) is a more covalent compound than most metal hydrides because polonium straddles the border between metal and metalloid and has some nonmetallic properties: it is intermediate between a hydrogen halide like hydrogen chloride (HCl) and a metal hydride like stannane (SnH4). Livermorane should continue this trend: it should be a hydride rather than a livermoride, but still a covalent molecular compound. Spin-orbit interactions are expected to make the Lv–H bond longer than expected from periodic trends alone, and make the H–Lv–H bond angle larger than expected: this is theorized to be because the unoccupied 8s orbitals are relatively low in energy and can hybridize with the valence 7p orbitals of livermorium. This phenomenon, dubbed "supervalent hybridization", has some analogues in non-relativistic regions in the periodic table; for example, molecular calcium difluoride has 4s and 3d involvement from the calcium atom. The heavier livermorium dihalides are predicted to be linear, but the lighter ones are predicted to be bent. Experimental chemistry Unambiguous determination of the chemical characteristics of livermorium has not yet been established. In 2011, experiments were conducted to create nihonium, flerovium, and moscovium isotopes in the reactions between calcium-48 projectiles and targets of americium-243 and plutonium-244. The targets included lead and bismuth impurities and hence some isotopes of bismuth and polonium were generated in nucleon transfer reactions. This, while an unforeseen complication, could give information that would help in the future chemical investigation of the heavier homologs of bismuth and polonium, which are respectively moscovium and livermorium. The produced nuclides bismuth-213 and polonium-212m were transported as the hydrides 213BiH3 and 212mPoH2 at 850 °C through a quartz wool filter unit held with tantalum, showing that these hydrides were surprisingly thermally stable, although their heavier congeners McH3 and LvH2 would be expected to be less thermally stable from simple extrapolation of periodic trends in the p-block. Further calculations on the stability and electronic structure of BiH3, McH3, PoH2, and LvH2 are needed before chemical investigations take place. Moscovium and livermorium are expected to be volatile enough as pure elements for them to be chemically investigated in the near future, a property livermorium would then share with its lighter congener polonium, though the short half-lives of all presently known livermorium isotopes means that the element is still inaccessible to experimental chemistry. Notes References Bibliography External links Livermorium at The Periodic Table of Videos (University of Nottingham) CERN Courier – Second postcard from the island of stability Livermorium at WebElements.com Chalcogens Chemical elements Ernest Lawrence Synthetic elements
Livermorium
[ "Physics", "Chemistry" ]
5,267
[ "Matter", "Chemical elements", "Synthetic materials", "Synthetic elements", "Atoms", "Radioactivity" ]
62,200
https://en.wikipedia.org/wiki/Oganesson
Oganesson is a synthetic chemical element; it has symbol Og and atomic number 118. It was first synthesized in 2002 at the Joint Institute for Nuclear Research (JINR) in Dubna, near Moscow, Russia, by a joint team of Russian and American scientists. In December 2015, it was recognized as one of four new elements by the Joint Working Party of the international scientific bodies IUPAC and IUPAP. It was formally named on 28 November 2016. The name honors the nuclear physicist Yuri Oganessian, who played a leading role in the discovery of the heaviest elements in the periodic table. Oganesson has the highest atomic number and highest atomic mass of all known elements. On the periodic table of the elements it is a p-block element, a member of group 18 and the last member of period 7. Its only known isotope, oganesson-294, is highly radioactive, with a half-life of 0.7 ms, and only five atoms have been successfully produced. This has so far prevented any experimental studies of its chemistry. Because of relativistic effects, theoretical studies predict that it would be a solid at room temperature, and significantly reactive, unlike the other members of group 18 (the noble gases). Introduction History Early speculation The possibility of a seventh noble gas, after helium, neon, argon, krypton, xenon, and radon, was considered almost as soon as the noble gas group was discovered. Danish chemist Hans Peter Jørgen Julius Thomsen predicted in April 1895, the year after the discovery of argon, that there was a whole series of chemically inert gases similar to argon that would bridge the halogen and alkali metal groups: he expected that the seventh of this series would end a 32-element period which contained thorium and uranium and have an atomic weight of 292, close to the 294 now known for the first and only confirmed isotope of oganesson. Danish physicist Niels Bohr noted in 1922 that this seventh noble gas should have atomic number 118 and predicted its electronic structure as 2, 8, 18, 32, 32, 18, 8, matching modern predictions. Following this, German chemist Aristid von Grosse wrote an article in 1965 predicting the likely properties of element 118. It was 107 years from Thomsen's prediction before oganesson was successfully synthesized, although its chemical properties have not been investigated to determine if it behaves as the heavier congener of radon. In a 1975 article, American chemist Kenneth Pitzer suggested that element 118 should be a gas or volatile liquid due to relativistic effects. Unconfirmed discovery claims In late 1998, Polish physicist Robert Smolańczuk published calculations on the fusion of atomic nuclei towards the synthesis of superheavy atoms, including oganesson. His calculations suggested that it might be possible to make element 118 by fusing lead with krypton under carefully controlled conditions, and that the fusion probability (cross section) of that reaction would be close to the lead–chromium reaction that had produced element 106, seaborgium. This contradicted predictions that the cross sections for reactions with lead or bismuth targets would go down exponentially as the atomic number of the resulting elements increased. In 1999, researchers at Lawrence Berkeley National Laboratory made use of these predictions and announced the discovery of elements 118 and 116, in a paper published in Physical Review Letters, and very soon after the results were reported in Science. The researchers reported that they had performed the reaction 208Pb + 86Kr → 293Og + n. 
In 2001, they published a retraction after researchers at other laboratories were unable to duplicate the results and the Berkeley lab could not duplicate them either. In June 2002, the director of the lab announced that the original claim of the discovery of these two elements had been based on data fabricated by principal author Victor Ninov. Newer experimental results and theoretical predictions have confirmed the exponential decrease in cross sections with lead and bismuth targets as the atomic number of the resulting nuclide increases. Discovery reports The first genuine decay of atoms of oganesson was observed in 2002 at the Joint Institute for Nuclear Research (JINR) in Dubna, Russia, by a joint team of Russian and American scientists. Headed by Yuri Oganessian, a Russian nuclear physicist of Armenian ethnicity, the team included American scientists from the Lawrence Livermore National Laboratory in California. The discovery was not announced immediately, because the decay energy of 294Og matched that of 212mPo, a common impurity produced in fusion reactions aimed at producing superheavy elements, and thus announcement was delayed until after a 2005 confirmatory experiment aimed at producing more oganesson atoms. The 2005 experiment used a different beam energy (251 MeV instead of 245 MeV) and target thickness (0.34 mg/cm2 instead of 0.23 mg/cm2). On 9 October 2006, the researchers announced that they had indirectly detected a total of three (possibly four) nuclei of oganesson-294 (one or two in 2002 and two more in 2005) produced via collisions of californium-249 atoms and calcium-48 ions: 249Cf + 48Ca → 294Og + 3n. In 2011, IUPAC evaluated the 2006 results of the Dubna–Livermore collaboration and concluded: "The three events reported for the Z = 118 isotope have very good internal redundancy but with no anchor to known nuclei do not satisfy the criteria for discovery". Because of the very small fusion reaction probability (the fusion cross section is well under a picobarn) the experiment took four months and involved a very large beam dose of calcium ions that had to be shot at the californium target to produce the first recorded event believed to be the synthesis of oganesson. Nevertheless, researchers were highly confident that the results were not a false positive, since the chance that the detections were random events was estimated to be vanishingly small. In the experiments, the alpha-decay of three atoms of oganesson was observed. A fourth decay by direct spontaneous fission was also proposed. A half-life of 0.89 ms was calculated: 294Og decays into 290Lv by alpha decay. Since there were only three nuclei, the half-life derived from observed lifetimes has a large uncertainty. The observed decay is 294Og → 290Lv + 4He. The identification of the 294Og nuclei was verified by separately creating the putative daughter nucleus 290Lv directly by means of a bombardment of curium-245 with calcium-48 ions, 245Cm + 48Ca → 290Lv + 3n, and checking that the 290Lv decay matched the decay chain of the 294Og nuclei. The daughter nucleus 290Lv is very unstable, decaying with a lifetime of 14 milliseconds into 286Fl, which may experience either spontaneous fission or alpha decay into 282Cn, which will undergo spontaneous fission. Confirmation In December 2015, the Joint Working Party of international scientific bodies International Union of Pure and Applied Chemistry (IUPAC) and International Union of Pure and Applied Physics (IUPAP) recognized the element's discovery and assigned the priority of the discovery to the Dubna–Livermore collaboration. 
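As context for the half-life figure quoted above (0.89 ms, derived from only three observed decays of 294Og), the standard estimate works as follows: for exponential decay the maximum-likelihood mean lifetime is the arithmetic mean of the individual observed lifetimes, the half-life is that mean multiplied by ln 2, and with only N events the fractional statistical uncertainty is roughly 1/sqrt(N), which is why a value based on three atoms carries such a large uncertainty. The short Python sketch below illustrates the arithmetic; the three lifetime values in it are placeholders chosen for illustration, not measured data from the experiments described here.

import math

def half_life_estimate(lifetimes_ms):
    # Maximum-likelihood estimate for exponential decay:
    # mean lifetime tau = average of observed lifetimes, t1/2 = tau * ln 2.
    n = len(lifetimes_ms)
    tau = sum(lifetimes_ms) / n
    t_half = tau * math.log(2)
    rel_uncertainty = 1.0 / math.sqrt(n)   # crude fractional (statistical) uncertainty
    return t_half, t_half * rel_uncertainty

observed = [0.8, 1.5, 1.6]   # placeholder lifetimes in ms for three hypothetical events
t_half, unc = half_life_estimate(observed)
print(f"t1/2 = {t_half:.2f} ms +/- {unc:.2f} ms (statistical only)")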
This was on account of two 2009 and 2010 confirmations of the properties of the granddaughter of 294Og, 286Fl, at the Lawrence Berkeley National Laboratory, as well as the observation of another consistent decay chain of 294Og by the Dubna group in 2012. The goal of that experiment had been the synthesis of 294Ts via the reaction 249Bk(48Ca,3n), but the short half-life of 249Bk resulted in a significant quantity of the target having decayed to 249Cf, resulting in the synthesis of oganesson instead of tennessine. From 1 October 2015 to 6 April 2016, the Dubna team performed a similar experiment with 48Ca projectiles aimed at a mixed-isotope californium target containing 249Cf, 250Cf, and 251Cf, with the aim of producing the heavier oganesson isotopes 295Og and 296Og. Two beam energies at 252 MeV and 258 MeV were used. Only one atom was seen at the lower beam energy, whose decay chain fitted the previously known one of 294Og (terminating with spontaneous fission of 286Fl), and none were seen at the higher beam energy. The experiment was then halted, as the glue from the sector frames covered the target and blocked evaporation residues from escaping to the detectors. The production of 293Og and its daughter 289Lv, as well as the even heavier isotope 297Og, is also possible using this reaction. The isotopes 295Og and 296Og may also be produced in the fusion of 248Cm with 50Ti projectiles. A search beginning in summer 2016 at RIKEN for 295Og in the 3n channel of this reaction was unsuccessful, though the study is planned to resume; a detailed analysis and cross section limit were not provided. These heavier and likely more stable isotopes may be useful in probing the chemistry of oganesson. Naming Using Mendeleev's nomenclature for unnamed and undiscovered elements, oganesson is sometimes known as eka-radon (until the 1960s as eka-emanation, emanation being the old name for radon). In 1979, IUPAC assigned the systematic placeholder name ununoctium to the undiscovered element, with the corresponding symbol of Uuo, and recommended that it be used until after confirmed discovery of the element. Although widely used in the chemical community on all levels, from chemistry classrooms to advanced textbooks, the recommendations were mostly ignored among scientists in the field, who called it "element 118", with the symbol of E118, (118), or simply 118. Before the retraction in 2001, the researchers from Berkeley had intended to name the element ghiorsium (Gh), after Albert Ghiorso (a leading member of the research team). The Russian discoverers reported their synthesis in 2006. According to IUPAC recommendations, the discoverers of a new element have the right to suggest a name. In 2007, the head of the Russian institute stated the team were considering two names for the new element: flyorium, in honor of Georgy Flyorov, the founder of the research laboratory in Dubna; and moskovium, in recognition of the Moscow Oblast where Dubna is located. He also stated that although the element was discovered as an American collaboration, who provided the californium target, the element should rightly be named in honor of Russia since the Flyorov Laboratory of Nuclear Reactions at JINR was the only facility in the world which could achieve this result. These names were later suggested for element 114 (flerovium) and element 116 (moscovium). 
Flerovium became the name of element 114; the final name proposed for element 116 was instead livermorium, with moscovium later being proposed and accepted for element 115 instead. Traditionally, the names of all noble gases end in "-on", with the exception of helium, which was not known to be a noble gas when discovered. The IUPAC guidelines valid at the moment of the discovery approval however required all new elements be named with the ending "-ium", even if they turned out to be halogens (traditionally ending in "-ine") or noble gases (traditionally ending in "-on"). While the provisional name ununoctium followed this convention, a new IUPAC recommendation published in 2016 recommended using the "-on" ending for new group 18 elements, regardless of whether they turn out to have the chemical properties of a noble gas. The scientists involved in the discovery of element 118, as well as those of 117 and 115, held a conference call on 23 March 2016 to decide their names. Element 118 was the last to be decided upon; after Oganessian was asked to leave the call, the remaining scientists unanimously decided to have the element "oganesson" after him. Oganessian was a pioneer in superheavy element research for sixty years reaching back to the field's foundation: his team and his proposed techniques had led directly to the synthesis of elements 107 through 118. Mark Stoyer, a nuclear chemist at the LLNL, later recalled, "We had intended to propose that name from Livermore, and things kind of got proposed at the same time from multiple places. I don't know if we can claim that we actually proposed the name, but we had intended it." In internal discussions, IUPAC asked the JINR if they wanted the element to be spelled "oganeson" to match the Russian spelling more closely. Oganessian and the JINR refused this offer, citing the Soviet-era practice of transliterating names into the Latin alphabet under the rules of the French language ("Oganessian" is such a transliteration) and arguing that "oganesson" would be easier to link to the person. In June 2016, IUPAC announced that the discoverers planned to give the element the name oganesson (symbol: Og). The name became official on 28 November 2016. In 2017, Oganessian commented on the naming: The naming ceremony for moscovium, tennessine, and oganesson was held on 2 March 2017 at the Russian Academy of Sciences in Moscow. In a 2019 interview, when asked what it was like to see his name in the periodic table next to Einstein, Mendeleev, the Curies, and Rutherford, Oganessian responded: Characteristics Other than nuclear properties, no properties of oganesson or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that it decays very quickly. Thus only predictions are available. Nuclear stability and isotopes The stability of nuclei quickly decreases with the increase in atomic number after curium, element 96, whose most stable isotope, 247Cm, has a half-life four orders of magnitude longer than that of any subsequent element. All nuclides with an atomic number above 101 undergo radioactive decay with half-lives shorter than 30 hours. No elements with atomic numbers above 82 (after lead) have stable isotopes. This is because of the ever-increasing Coulomb repulsion of protons, so that the strong nuclear force cannot hold the nucleus together against spontaneous fission for long. 
Calculations suggest that in the absence of other stabilizing factors, elements with more than 104 protons should not exist. However, researchers in the 1960s suggested that the closed nuclear shells around 114 protons and 184 neutrons should counteract this instability, creating an island of stability in which nuclides could have half-lives reaching thousands or millions of years. While scientists have still not reached the island, the mere existence of the superheavy elements (including oganesson) confirms that this stabilizing effect is real, and in general the known superheavy nuclides become exponentially longer-lived as they approach the predicted location of the island. Oganesson is radioactive, decaying via alpha decay and spontaneous fission, with a half-life that appears to be less than a millisecond. Nonetheless, this is still longer than some predicted values. Calculations using a quantum-tunneling model predict the existence of several heavier isotopes of oganesson with alpha-decay half-lives close to 1 ms. Theoretical calculations done on the synthetic pathways for, and the half-life of, other isotopes have shown that some could be slightly more stable than the synthesized isotope 294Og, most likely 293Og, 295Og, 296Og, 297Og, 298Og, 300Og and 302Og (the last reaching the N = 184 shell closure). Of these, 297Og might provide the best chances for obtaining longer-lived nuclei, and thus might become the focus of future work with this element. Some isotopes with many more neutrons, such as some located around 313Og, could also provide longer-lived nuclei. The isotopes from 291Og to 295Og might be produced as daughters of element 120 isotopes that can be reached in the reactions 249–251Cf+50Ti, 245Cm+48Ca, and 248Cm+48Ca. In a quantum-tunneling model, the alpha decay half-life of was predicted to be with the experimental Q-value published in 2004. Calculation with theoretical Q-values from the macroscopic-microscopic model of Muntian–Hofman–Patyk–Sobiczewski gives somewhat lower but comparable results. Calculated atomic and physical properties Oganesson is a member of group 18, the zero-valence elements. The members of this group are usually inert to most common chemical reactions (for example, combustion) because the outer valence shell is completely filled with eight electrons. This produces a stable, minimum energy configuration in which the outer electrons are tightly bound. It is thought that similarly, oganesson has a closed outer valence shell in which its valence electrons are arranged in a 7s27p6 configuration. Consequently, some expect oganesson to have similar physical and chemical properties to other members of its group, most closely resembling the noble gas above it in the periodic table, radon. Following the periodic trend, oganesson would be expected to be slightly more reactive than radon. However, theoretical calculations have shown that it could be significantly more reactive. In addition to being far more reactive than radon, oganesson may be even more reactive than the elements flerovium and copernicium, which are heavier homologs of the more chemically active elements lead and mercury, respectively. The reason for the possible enhancement of the chemical activity of oganesson relative to radon is an energetic destabilization and a radial expansion of the last occupied 7p-subshell. 
More precisely, considerable spin–orbit interactions between the 7p electrons and the inert 7s electrons effectively lead to a second valence shell closing at flerovium, and a significant decrease in stabilization of the closed shell of oganesson. It has also been calculated that oganesson, unlike the other noble gases, binds an electron with release of energy, or in other words, it exhibits positive electron affinity, due to the relativistically stabilized 8s energy level and the destabilized 7p3/2 level, whereas copernicium and flerovium are predicted to have no electron affinity. Nevertheless, quantum electrodynamic corrections have been shown to be quite significant in reducing this affinity by decreasing the binding in the anion Og− by 9%, thus confirming the importance of these corrections in superheavy elements. 2022 calculations expect the electron affinity of oganesson to be 0.080(6) eV. Monte Carlo simulations of oganesson's molecular dynamics predict that relativistic effects substantially raise its melting and boiling points (if these effects are ignored, oganesson would melt at a markedly lower temperature). Thus oganesson would probably be a solid rather than a gas under standard conditions, though still with a rather low melting point. Oganesson is expected to have an extremely large polarizability, almost double that of radon. Because of its tremendous polarizability, oganesson is expected to have an anomalously low first ionization energy of about 860 kJ/mol, similar to that of cadmium and less than those of iridium, platinum, and gold. This is significantly smaller than the values predicted for darmstadtium, roentgenium, and copernicium, although it is greater than that predicted for flerovium. Its second ionization energy should be around 1560 kJ/mol. Even the shell structure in the nucleus and electron cloud of oganesson is strongly impacted by relativistic effects: the valence and core electron subshells in oganesson are expected to be "smeared out" in a homogeneous Fermi gas of electrons, unlike those of the "less relativistic" radon and xenon (although there is some incipient delocalisation in radon), due to the very strong spin–orbit splitting of the 7p orbital in oganesson. A similar effect for nucleons, particularly neutrons, is incipient in the closed-neutron-shell nucleus 302Og and is strongly in force at the hypothetical superheavy closed-shell nucleus 472164, with 164 protons and 308 neutrons. Studies have also predicted that due to increasing electrostatic forces, oganesson may have a semibubble structure in proton density, having few protons at the center of its nucleus. Moreover, spin–orbit effects may cause bulk oganesson to be a semiconductor, with a small band gap predicted. All the lighter noble gases are insulators instead: for example, the band gap of bulk radon is expected to be considerably larger. Predicted compounds The only confirmed isotope of oganesson, 294Og, has much too short a half-life to be chemically investigated experimentally. Therefore, no compounds of oganesson have been synthesized yet. Nevertheless, calculations on theoretical compounds have been performed since 1964. It is expected that if the ionization energy of the element is high enough, it will be difficult to oxidize and therefore, the most common oxidation state would be 0 (as for the noble gases); nevertheless, this appears not to be the case. 
Calculations on the diatomic molecule Og2 showed a bonding interaction roughly equivalent to that calculated for Hg2, and a dissociation energy of 6 kJ/mol, roughly 4 times that of Rn2. Most strikingly, it was calculated to have a bond length shorter than in Rn2 by 0.16 Å, which would be indicative of a significant bonding interaction. On the other hand, the compound OgH+ exhibits a dissociation energy (in other words proton affinity of oganesson) that is smaller than that of RnH+. The bonding between oganesson and hydrogen in OgH is predicted to be very weak and can be regarded as a pure van der Waals interaction rather than a true chemical bond. On the other hand, with highly electronegative elements, oganesson seems to form more stable compounds than for example copernicium or flerovium. The stable oxidation states +2 and +4 have been predicted to exist in the fluorides OgF2 and OgF4. The +6 state would be less stable due to the strong binding of the 7p1/2 subshell. This is a result of the same spin–orbit interactions that make oganesson unusually reactive. For example, it was shown that the reaction of oganesson with F2 to form the compound OgF2 would release an energy of 106 kcal/mol of which about 46 kcal/mol come from these interactions. For comparison, the spin–orbit interaction for the similar molecule RnF2 is about 10 kcal/mol out of a formation energy of 49 kcal/mol. The same interaction stabilizes the tetrahedral Td configuration for OgF4, as distinct from the square planar D4h one of XeF4, which RnF4 is also expected to have; this is because OgF4 is expected to have two inert electron pairs (7s and 7p1/2). As such, OgF6 is expected to be unbound, continuing an expected trend in the destabilisation of the +6 oxidation state (RnF6 is likewise expected to be much less stable than XeF6). The Og–F bond will most probably be ionic rather than covalent, rendering the oganesson fluorides non-volatile. OgF2 is predicted to be partially ionic due to oganesson's high electropositivity. Oganesson is predicted to be sufficiently electropositive to form an Og–Cl bond with chlorine. A compound of oganesson and tennessine, OgTs4, has been predicted to be potentially stable chemically. See also Island of stability Superheavy element Transuranium element Extended periodic table Notes References Bibliography Further reading External links 5 ways the heaviest element on the periodic table is really bizarre, ScienceNews.org Element 118: Experiments on discovery, archive of discoverers' official web page Element 118, Heaviest Ever, Reported for 1,000th of a Second, The New York Times. It's Elemental: Oganesson Oganesson at The Periodic Table of Videos (University of Nottingham) On the Claims for Discovery of Elements 110, 111, 112, 114, 116, and 118 (IUPAC Technical Report) WebElements: Oganesson 2002 introductions Chemical elements Chemical elements with face-centered cubic structure Noble gases Synthetic elements
Oganesson
[ "Physics", "Chemistry", "Materials_science" ]
5,049
[ "Matter", "Noble gases", "Chemical elements", "Synthetic materials", "Nonmetals", "Synthetic elements", "Atoms", "Radioactivity" ]
62,214
https://en.wikipedia.org/wiki/Lawrence%20Berkeley%20National%20Laboratory
Lawrence Berkeley National Laboratory (LBNL, Berkeley Lab) is a federally funded research and development center in the hills of Berkeley, California, United States. Established in 1931 by the University of California (UC), the laboratory is sponsored by the United States Department of Energy and administered by the UC system. Ernest Lawrence, who won the Nobel prize for inventing the cyclotron, founded the lab and served as its director until his death in 1958. Located in the Berkeley Hills, the lab overlooks the campus of the University of California, Berkeley. Scientific research The mission of Berkeley Lab is to bring science solutions to the world. The research at Berkeley Lab has four main themes: discovery science, clean energy, healthy earth and ecological systems, and the future of science. The Laboratory's 22 scientific divisions are organized within six areas of research: Computing Sciences, Physical Sciences, Earth and Environmental Sciences, Biosciences, Energy Sciences, and Energy Technologies. Lab founder Ernest Lawrence believed that scientific research is best done through teams of individuals with different fields of expertise, working together, and his laboratory still considers that a guiding principle today. Research impact Berkeley Lab scientists have won fifteen Nobel prizes in physics and chemistry, and each one has a street named after them on the Lab campus. 23 Berkeley Lab employees were contributors to reports by the United Nations' Intergovernmental Panel on Climate Change, which shared the Nobel Peace Prize. Fifteen Lab scientists have also won the National Medal of Science, and two have won the National Medal of Technology and Innovation. 82 Berkeley Lab researchers have been elected to membership in the National Academy of Sciences or the National Academy of Engineering. In 2022, Berkeley Lab had the greatest research publication impact of any single government laboratory in the world in physical sciences and chemistry, as measured by Nature Index. The only institutions with higher ranking were national government research agencies for China, France, and Italy which are network of research laboratories or smaller research units. Using the same metric, the Lab is the second-ranking laboratory in the area of earth and environmental sciences. Scientific user facilities Much of Berkeley Lab's research impact is built on the capabilities of its unique research facilities. The laboratory manages five national scientific user facilities, which are part of the network of 28 such facilities operated by the DOE Office of Science. These facilities and the expertise of the scientists and engineers who operate them are made available to 14,000 researchers from universities, industry, and government laboratories. Berkeley Lab operates five major National User Facilities for the DOE Office of Science: The Advanced Light Source (ALS) is a synchrotron light source with 41 beamlines providing ultraviolet, soft x-ray, and hard x-ray light to scientific experiments in a wide variety of fields, including materials science, biology, chemistry, physics, and the environmental sciences. The ALS is supported by the DOE Office of Basic Energy Sciences. The Joint Genome Institute (JGI) is a scientific user facility for integrative genomic science, with particular emphasis on the DOE missions of energy and the environment. The JGI provides over 2,000 scientific users with access to the latest generation of genome sequencing and analysis capabilities. 
The Molecular Foundry is a multidisciplinary nanoscience research facility. Its seven research facilities focus on Imaging and Manipulation of Nanostructures, Nanofabrication, Theory of Nanostructured Materials, Inorganic Nanostructures, Biological Nanostructures, Organic and Macromolecular Synthesis, and Electron Microscopy. The National Energy Research Scientific Computing Center (NERSC) is the mission scientific computing facility for the DOE Office of Science, providing high performance computing for over 11,000 scientists working on DOE research programs. NERSC celebrated its 50th anniversary in 2024 by making a video that describes significant events over that 50-year timeline. The Perlmutter system at NERSC was the 5th-ranked supercomputer system in the Top500 (HPL) rankings when it came online in 2021. As of November 2024, it ranks 7th in the world for performance on the alternate HPCG benchmark, which has a lower ratio of computing to data movement. The Energy Sciences Network (ESnet) is a high-speed research network serving DOE scientists with their experimental facilities and collaborators worldwide. The upgraded network infrastructure launched in 2022 is optimized for very large scientific data flows, and the network transports roughly 35 petabytes of traffic each month. Team science Much of the research at Berkeley Lab is done by researchers from several disciplines and multiple institutions working together as a large team focused on shared scientific goals. Berkeley is either the lead partner or one of the leads in several research institutes and hubs, including the following: The Joint BioEnergy Institute (JBEI). JBEI's mission is to establish the scientific knowledge and new technologies needed to transform the maximum amount of carbon available in bioenergy crops into biofuels and bioproducts. JBEI is one of four U.S. Department of Energy (DOE) Bioenergy Research Centers (BRCs). In 2023, the DOE announced the commitment of $590M to support the BRCs for the next five years. The National Alliance for Water Innovation (NAWI). NAWI aims to secure an affordable, energy-efficient, and resilient water supply for the US economy through decentralized, fit-for-purpose processing. NAWI is supported primarily by the DOE Office of Energy Efficiency and Renewable Energy, partnering with the California Department of Water Resources, the California State Water Resources Control Board. Berkeley Lab is the lead partner, with founding partners Oak Ridge National Laboratory (ORNL) and the National Renewable Energy Laboratory (NREL). The Liquid Sunlight Alliance (LiSA). LiSA's Mission is to establish the science principles by which durable coupled microenvironments can be co-designed to efficiently and selectively generate liquid fuels from sunlight, water, carbon dioxide, and nitrogen. The lead institution for LiSA is the California Institute of Technology and Berkeley Lab is a major partner. The Energy Storage Research Alliance (ESRA). The mission of the Energy Storage Research Alliance is to apply cutting-edge scientific tools and automation to accelerate materials discovery for next-generation energy storage technologies. Argonne National Laboratory leads the ESRA collaboration with Berkeley Lab and Pacific Northwest National Laboratory as co-leads. Cyclotron Road Cyclotron Road is a fellowship program for technology innovators, supporting entrepreneurial scientists as they advance their own technology projects. 
The core support for the program comes from the Department of Energy's Office of Energy Efficiency and Renewable Energy, through the Lab-Embedded Entrepreneurship Program. Berkeley Lab manages the program in close partnership with Activate, a nonprofit organization established to scale the Cyclotron Road fellowship model to a greater number of innovators around the U.S. and the world. Cyclotron Road fellows receive two years of stipend, over $100,000 of research support, intensive mentorship and a startup curriculum, and access to the expertise and facilities of Berkeley Lab. Since members of the first cohort completed their fellowships in 2017, the 84 start-up companies founded by Cyclotron Road Fellows have raised over $2.5 billion in follow-on funding. Notable scientists Nobel laureates Fifteen Berkeley Lab scientists have received the Nobel Prize in physics or chemistry. National Medals Fifteen Berkeley Lab scientists have received the National Medal of Science and two have been awarded the [National Medal of Technology and Innovation]]. The National Medal of Technology and Innovation was awarded to Arthur Rosenfeld in 2011, to Ashok Gadgil in 2023, and to Jennifer Doudna in 2025. History From 1931 to 1945: cyclotrons and team science The laboratory was founded on August 26, 1931, by Ernest Lawrence, as the Radiation Laboratory of the University of California, Berkeley, associated with the Physics Department. It centered physics research around his new instrument, the cyclotron, a type of particle accelerator for which he was awarded the Nobel Prize in Physics in 1939. Throughout the 1930s, Lawrence pushed to create larger and larger machines for physics research, courting private philanthropists for funding. He was the first to develop a large team to build big projects to make discoveries in basic research. Eventually these machines grew too large to be held on the university grounds, and in 1940 the lab moved to its current site atop the hill above campus. Part of the team put together during this period includes two other young scientists who went on to direct large laboratories: J. Robert Oppenheimer, who directed Los Alamos Laboratory, and Robert Wilson, who directed Fermilab. Leslie Groves visited Lawrence's Radiation Laboratory in late 1942 as he was organizing the Manhattan Project, meeting J. Robert Oppenheimer for the first time. Oppenheimer was tasked with organizing the nuclear bomb development effort and founded today's Los Alamos National Laboratory to help keep the work secret. At the RadLab, Lawrence and his colleagues developed the technique of electromagnetic enrichment of uranium using their experience with cyclotrons. The calutrons (named after the University) became the basic unit of the massive Y-12 facility in Oak Ridge, Tennessee. Lawrence's lab helped contribute to what have been judged to be the three most valuable technology developments of the war (the atomic bomb, proximity fuze, and radar). The cyclotron, whose construction was stalled during the war, was finished in November 1946. The Manhattan Project shut down two months later. From 1946 to 1972: discovering the antiproton and new elements After the war, the Radiation Laboratory became one of the first laboratories to be incorporated into the Atomic Energy Commission (AEC) (now Department of Energy, DOE). In 1952, the Laboratory established a branch in Livermore focused on nuclear security work, which developed into Lawrence Livermore National Laboratory. 
Some classified research continued at Berkeley Lab until the 1970s, when it became a laboratory dedicated only to unclassified scientific research. Much of the Laboratory's scientific leadership during this period were also faculty members in the Physics and Chemistry Departments at the University of California, Berkeley. The scientists and engineers at Berkeley Lab continued to build ambitious large projects to accelerate the advance of science. Lawrence's original cyclotron design did not work for particles near the speed of light, so a new approach was needed. Edwin McMillan co-invented the synchrotron with Vladimir Veksler to address the problem. McMillan built an electron synchrotron capable of accelerating electrons to 300 million electron volts (300 MeV), which was operated from 1948 to 1960. The Berkeley accelerator team built the Bevatron, a proton synchrotron capable of accelerating protons to an energy of 6.5 gigaelectronvolts (GeV), an energy chosen to be just above the threshold for producing antiprotons. In 1955, during the Bevatron's first full year of operation, Physicists Emilio Segrè and Owen Chamberlain won the competition to observe the antiprotons for the first time. They won the Nobel Prize for Physics in 1959 for this discovery. The Bevatron remained the highest energy accelerator until the CERN Proton Synchrotron started accelerating protons to 25 GeV in 1959. Luis Alvarez led the design and construction of several liquid hydrogen bubble chambers, which were used to discover a large number of new elementary particles using Bevatron beams. His group also developed measuring systems to record the millions of photographs of particle tracks in the bubble chamber and computer systems to analyze the data. Alvarez won the Nobel Prize for Physics in 1968 for the discovery of many elementary particles using this technique. The Alvarez Physics Memos are a set of informal working papers of the large group of physicists, engineers, computer programmers, and technicians led by Luis W. Alvarez from the early 1950s until his death in 1988. Over 1700 memos are available on-line, hosted by the Laboratory. Berkeley Lab is credited with the discovery of 16 elements on the periodic table, more than any other institution, over the period 1940 to 1974. The American Chemical Society has established a National Historical Chemical Landmark at the Lab to memorialize this accomplishment. Glenn Seaborg was personally involved in discovering nine of these new elements, and he won the Nobel Prize for Chemistry in 1951 with McMillan. Founding Laboratory Director Lawrence died in 1958 at the age of 57. McMillan became the second Director, serving in that role until 1972. From 1973 to 1989: new capabilities in energy and environmental research The University of California appointed Andrew Sessler as the Laboratory Director in 1973, during the 1973 oil crisis. He established the Energy and Environment Division at the Lab, expanding for the first time into applied research that addressed the energy and environmental challenges the country faced. Sessler also joined with other Berkeley physicists to form an organization called Scientists for Sakharov, Orlov, Sharansky (SOS), which led an international protest movement calling attention to the plight of three Soviet scientists who were being persecuted by the U.S.S.R. government. Arthur Rosenfeld led the campaign to build up applied energy research at Berkeley Lab. 
He became widely known as the father of energy efficiency and the person who convinced the nation to adopt energy standards for appliances and buildings. Inspired by the 1973 oil crisis, he started up large team efforts that developed several technologies that radically improved energy efficiency. These included compact fluorescent lamps, low-energy refrigerators, and windows that trap heat. He developed the first energy-efficiency standards for buildings and appliances in California, which helped the state to sustain constant electricity use per capita from 1973 to 2006, while it rose by 50% in the rest of the country. This phenomenon is called the Rosenfeld Effect. By 1980, George Smoot had built up a strong experimental group in Berkeley, building instruments to measure the cosmic microwave background (CMB) in order to study the early universe. He became the principal investigator for the Differential Microwave Radiometer (DMR) instrument that was launched in 1989 as part of the Cosmic Background Explorer (COBE) mission. The full sky maps taken by the DMR made it possible for COBE scientists to discover the anisotropy of the CMB, and Smoot shared the Nobel Prize for Physics in 2006 with John Mather. From 1990 to 2004: new facilities for chemistry and materials, nanotechnology, scientific computing, and genomics Charles V. Shank left Bell Labs to become Director of Berkeley Lab in 1989, a position he held for 15 years. During his tenure, four of the five national scientific user facilities started operations at Berkeley, and the fifth started construction. On October 5, 1993, the new Advanced Light Source produced its first beams of x-ray light. David Shirley had proposed in the early 1990s building this new synchrotron source specializing in imaging materials using extreme ultraviolet to soft x-rays. In fall 2001, a major upgrade added "superbends" to produce harder x-rays for beamlines devoted to protein crystallography. In 1996, both the National Energy Research Scientific Computing Center (NERSC) and the Energy Sciences Network (ESnet) were moved from Lawrence Livermore National Laboratory to their new home at Berkeley Lab. To reestablish NERSC at Berkeley required moving a Cray C90, a first-generation vector processor supercomputer of 1991 vintage, and installing a new Cray T3E, the second-generation (1995) model. The NERSC computing capacity was 350 GFlop/s, representing 1/200,000 of the Perlmutter's speed in 2022. Horst Simon was brought to Berkeley as the first Director of NERSC, and he soon became one of the co-editors who managed the Top500 list of supercomputers, a position he has held ever since. The Joint Genome Institute (JGI) was created in 1997 to unite the expertise and resources in genome mapping, DNA sequencing, technology development, and information sciences that had developed at the DOE genome centers at Berkeley Lab, Lawrence Livermore National Laboratory (LLNL) and Los Alamos National Laboratory (LANL). The JGI was originally established to work on the Human Genome Project (HGP), and generated the complete sequences of Chromosomes 5, 16 and 19. In 2004, the JGI established itself as a national user facility managed by Berkeley Lab, focusing on the broad genomic needs of biology and biotechnology, especially those related to the environment and carbon management. Laboratory Director Shank brought Daniel Chemla from Bell Labs to Berkeley Lab in 1991 to lead the newly formed Division of Materials Science and Engineering. 
In 1998 Chemla was appointed director of the Advanced Light Source to build it into a world-class scientific user facility. In 2001, Chemla proposed the establishment of the Molecular Foundry, to make cutting-edge instruments and expertise for nanotechnology accessible to a broad research community. Paul Alivisatos as founding director, and the founding directors of the facilities were Carolyn Bertozzi, Jean Frechet, Steven Gwon Sheng Louie, Jeffrey Bokor, and Miquel Salmeron. The Molecular Foundry building was dedicated in 2006, with Bertozzi as Foundry Director and Steven Chu as Laboratory Director. In the 1990s, Saul Perlmutter led the Supernova Cosmology Project (SCP), which used a certain type of supernovas as standard candles to study the expansion of the universe. The SCP team co-discovered the accelerating expansion of the universe, leading to the concept of dark energy, an unknown form of energy that drives this acceleration. Perlmutter shared the Nobel Prize in Physics in 2011 for this discovery. From 2005 to 2015: addressing climate change and the future of energy On August 1, 2004, Nobel-winning physicist Steven Chu was named the sixth Director of Berkeley Lab. The DOE was preparing to compete the management and operations (M&O) contract for Berkeley Lab for the first time, and Chu's first task was to lead the University of California's team that successfully bid for that contract. The initial term of the contract was from June 1, 2005, to May 31, 2010, with possible phased extensions for superior management performance up to a total contract term of 20 years. In 2007, Berkeley Lab launched the Joint BioEnergy Institute, one of three Bioenergy Research Centers to receive funding from the Genomic Science Program of DOE's Office for Biological and Environmental Research (BER). JBEI's Chief Executive Officer is Jay Keasling, who was elected a member of the National Academy of Engineering for developing synthetic biology tools needed to engineer the antimalarial drug artemisinin. The DOE Office of Science named Keasling a Distinguished Scientist Fellow in 2021 for advancing the DOE's strategy in renewable energy. On December 15, 2008, newly elected President Barack Obama nominated Steven Chu to be the Secretary of Energy. The University of California chose the Lab's Deputy Director, Paul Alivisatos, as the new director. Alivisatos is a materials chemist who won the National Medal of Science for his pioneering work in developing nanomaterials. He continued the Lab's focus on renewable energy and climate change. The DOE established the Joint Center for Artificial Photosynthesis (JCAP) as an Energy Innovation Hub in 2010, with California Institute of Technology as the lead institution and Berkeley Lab as the lead partner. The Lab built a new facility to house the JCAP laboratories and collaborative research space, and it was dedicated as Chu Hall in 2015. After JCAP operated for ten years, in 2020 the Berkeley team became a major partner in a new Energy Innovation Hub, the Liquid Sunlight Alliance (LiSA), with the vision of establishing the science needed to generate liquid fuels economically from sunlight, water, carbon dioxide and nitrogen. The Lab also is a major partner on a second Energy Innovation Hub, the Joint Center for Energy Storage Research (JCESR) which was started in 2013, with Argonne National Laboratory as the lead institution. 
The Lab built a new facility, the General Purpose Laboratory, to house energy storage laboratories and associated research space, which Secretary of Energy Ernest Moniz inaugurated in 2014. The mission of JCESR is to deliver transformational new concepts and materials that will enable a diversity of high performance next-generation batteries for transportation and the grid. On November 12, 2015, Laboratory Director Paul Alivisatos and Deputy Director Horst Simon were joined by University of California President Janet Napolitano, UC Berkeley Chancellor Nicholas Dirks, and the head of DOE's ASCR program Barb Helland to dedicate a Shyh Wang Hall, a facility designed to host the NERSC supercomputers and staff, the ESnet staff, and the research divisions in the Computing Sciences area. The building was designed with a novel seismic floor for the 20,000 square foot machine room in addition to features that take advantage of the coastal climate to provide energy-efficient air conditioning for the computing systems. From 2016 to the present: building new facilities and accelerating decarbonization In 2015 Paul Alivisatos announced that he was stepping down from his role as Laboratory Director. He took two leadership positions at the University of California, Berkeley, before becoming President of the University of Chicago in 2021. The University of California selected Michael Witherell, formerly the Director of Fermilab and Vice Chancellor for Research at the University of California, Santa Barbara as the eighth director of Berkeley Lab starting on March 1, 2016. In 2016, the Laboratory entered a period of intensive modernization: an unprecedented number of major projects to upgrade existing scientific facilities and to build new ones. Berkeley Lab physicists led the construction of the Dark Energy Spectroscopic Instrument, which is designed to create three-dimensional maps of the distribution of matter covering an unprecedented volume of the universe with unparalleled detail. The new instrument was installed on the retrofitted Nicholas U. Mayall 4-meter Telescope at Kitt Peak National Observatory in 2019. The five-year mission started in 2021, and the map assembled with data taken in the first seven months already included more galaxies than any previous survey. On September 27, 2016, The DOE gave approval of the mission need for ALS-U, a major project to upgrade the Advanced Light Source that includes constructing a new storage ring and an accumulator ring. The horizontal size of the electron beam in ALS will shrink from 100 micrometers to a few micrometers, which will improve the ability to image novel materials needed for next-generation batteries and electronics. With a total project cost of $590 million, this is the largest construction project at the Lab since the ALS was built in 1993. How the Lab's name evolved Shortly after the death of Lawrence in August 1958, the UC Radiation Laboratory (UCRL), including both the Berkeley and Livermore sites, was renamed Lawrence Radiation Laboratory. The Berkeley location became Lawrence Berkeley Laboratory in 1971, although many continued to call it the RadLab. Gradually, another shortened form came into common usage, LBL. Its formal name was amended to Ernest Orlando Lawrence Berkeley National Laboratory in 1995, when "National" was added to the names of all DOE labs. "Ernest Orlando" was later dropped to shorten the name. Today, the lab is commonly referred to as Berkeley Lab. 
Laboratory directors (1931–1958): Ernest Lawrence (1958–1972): Edwin McMillan (1973–1980): Andrew Sessler (1980–1989): David Shirley (1989–2004): Charles V. Shank (2004–2008): Steven Chu (2009–2016): Paul Alivisatos (2016–present): Michael Witherell Operations and governance The University of California operates Lawrence Berkeley National Laboratory under a contract with the Department of Energy. The site consists of 76 buildings (owned by the U.S. Department of Energy) located on land owned by the university in the Berkeley Hills. Altogether, the Lab has 3,663 UC employees, of whom about 800 are students or postdocs, and each year it hosts more than 3,000 participating guest scientists. There are approximately two dozen DOE employees stationed at the laboratory to provide federal oversight of Berkeley Lab's work for the DOE. The laboratory director, Michael Witherell, is appointed by the university regents and reports to the university president. Although Berkeley Lab is governed by UC independently of the Berkeley campus, the two entities are closely interconnected: more than 200 Berkeley Lab researchers hold joint appointments as UC Berkeley faculty. The laboratory budget was $1.495 billion in fiscal year 2023, while the total obligations were $1.395 billion. See also Lawrence Livermore National Laboratory References External links 1931 establishments in California Berkeley Hills Ernest Lawrence Federally Funded Research and Development Centers Historic American Engineering Record in California Laboratories in California Manhattan Project sites Nuclear research institutes Research institutes established in 1931 Research institutes in the San Francisco Bay Area Science and technology in the San Francisco Bay Area United States Department of Energy national laboratories University and college laboratories in the United States University of California, Berkeley University of California, Berkeley buildings
Lawrence Berkeley National Laboratory
[ "Engineering" ]
5,123
[ "Nuclear research institutes", "Nuclear organizations" ]
62,247
https://en.wikipedia.org/wiki/Backus%E2%80%93Naur%20form
In computer science, Backus–Naur form (BNF; Backus normal form) is a notation used to describe the syntax of programming languages or other formal languages. It was developed by John Backus and Peter Naur. BNF can be described as a metasyntax notation for context-free grammars. Backus–Naur form is applied wherever exact descriptions of languages are needed, such as in official language specifications, in manuals, and in textbooks on programming language theory. BNF can be used to describe document formats, instruction sets, and communication protocols. Over time, many extensions and variants of the original Backus–Naur notation have been created; some are exactly defined, including extended Backus–Naur form (EBNF) and augmented Backus–Naur form (ABNF). Overview BNFs describe how to combine different symbols to produce a syntactically correct sequence. BNFs consist of three components: a set of non-terminal symbols, a set of terminal symbols, and rules for replacing non-terminal symbols with a sequence of symbols. These so-called "derivation rules" are written as <symbol> ::= __expression__ where: <symbol> is a nonterminal variable that is always enclosed between the pair <>. ::= means that the symbol on the left must be replaced with the expression on the right. __expression__ consists of one or more sequences of either terminal or nonterminal symbols where each sequence is separated by a vertical bar "|" indicating a choice, the whole being a possible substitution for the symbol on the left. All syntactically correct sequences must be generated in the following manner: Initialize the sequence so that it just contains one start symbol. Apply derivation rules to this start symbol and the ensuing sequences of symbols. Applying rules in this manner can produce longer and longer sequences, so many BNF definitions allow for a special "delete" symbol to be included in the specification. We can specify a rule that allows us to replace some symbols with this "delete" symbol, which is meant to indicate that we can remove the symbols from our sequence and still have a syntactically correct sequence. Example As an example, consider this possible BNF for a U.S. postal address:
<postal-address> ::= <name-part> <street-address> <zip-part>
<name-part> ::= <personal-part> <last-name> <opt-suffix-part> <EOL> | <personal-part> <name-part>
<personal-part> ::= <first-name> | <initial> "."
<street-address> ::= <house-num> <street-name> <opt-apt-num> <EOL>
<zip-part> ::= <town-name> "," <state-code> <ZIP-code> <EOL>
<opt-suffix-part> ::= "Sr." | "Jr." | <roman-numeral> | ""
<opt-apt-num> ::= "Apt" <apt-num> | ""
This translates into English as: A postal address consists of a name-part, followed by a street-address part, followed by a zip-code part. A name-part consists of either: a personal-part followed by a last name followed by an optional suffix (Jr., Sr., or dynastic number) and end-of-line, or a personal part followed by a name part (this rule illustrates the use of recursion in BNFs, covering the case of people who use multiple first and middle names and initials). A personal-part consists of either a first name or an initial followed by a dot. A street address consists of a house number, followed by a street name, followed by an optional apartment specifier, followed by an end-of-line. A zip-part consists of a town-name, followed by a comma, followed by a state code, followed by a ZIP-code followed by an end-of-line. An opt-suffix-part consists of a suffix, such as "Sr.", "Jr."
or a roman-numeral, or an empty string (i.e. nothing). An opt-apt-num consists of a prefix "Apt" followed by an apartment number, or an empty string (i.e. nothing). Note that many things (such as the format of a first-name, apartment number, ZIP-code, and Roman numeral) are left unspecified here. If necessary, they may be described using additional BNF rules. History The idea of describing the structure of language using rewriting rules can be traced back to at least the work of Pāṇini, an ancient Indian Sanskrit grammarian and a revered scholar in Hinduism who lived sometime between the 6th and 4th century BC. His notation to describe Sanskrit word structure is equivalent in power to that of Backus and has many similar properties. In Western society, grammar was long regarded as a subject for teaching, rather than scientific study; descriptions were informal and targeted at practical usage. In the first half of the 20th century, linguists such as Leonard Bloomfield and Zellig Harris started attempts to formalize the description of language, including phrase structure. Meanwhile, string rewriting rules as formal logical systems were introduced and studied by mathematicians such as Axel Thue (in 1914), Emil Post (1920s–40s) and Alan Turing (1936). Noam Chomsky, teaching linguistics to students of information theory at MIT, combined linguistics and mathematics by taking what is essentially Thue's formalism as the basis for the description of the syntax of natural language. He also introduced a clear distinction between generative rules (those of context-free grammars) and transformation rules (1956). John Backus, a programming language designer at IBM, proposed a metalanguage of "metalinguistic formulas" to describe the syntax of the new programming language IAL, known today as ALGOL 58 (1959). His notation was first used in the ALGOL 60 report. BNF is a notation for Chomsky's context-free grammars. Backus may have been familiar with Chomsky's work, but there are some doubts about this. As proposed by Backus, the formula defined "classes" whose names are enclosed in angle brackets. For example, <ab>. Each of these names denotes a class of basic symbols. Further development of ALGOL led to ALGOL 60. In the committee's 1963 report, Peter Naur called Backus's notation Backus normal form. Donald Knuth argued that BNF should rather be read as Backus–Naur form, as it is "not a normal form in the conventional sense", unlike, for instance, Chomsky normal form. The name Pāṇini Backus form was also once suggested in view of the fact that the expansion Backus normal form may not be accurate, and that Pāṇini had independently developed a similar notation earlier. BNF is described by Peter Naur in the ALGOL 60 report as metalinguistic formula: Another example from the ALGOL 60 report illustrates a major difference between the BNF metalanguage and a Chomsky context-free grammar. Metalinguistic variables do not require a rule defining their formation. Their formation may simply be described in natural language within the <> brackets. The following ALGOL 60 report section 2.3 comments specification, exemplifies how this works: For the purpose of including text among the symbols of a program the following "comment" conventions hold: Equivalence here means that any of the three structures shown in the left column may be replaced, in any occurrence outside of strings, by the symbol shown in the same line in the right column without any effect on the action of the program. 
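To make the derivation procedure described in the Overview concrete (start from a single start symbol and repeatedly replace nonterminals according to the rules until only terminals remain), here is a minimal Python sketch that expands a toy grammar in the spirit of the postal-address example above. The rule set is a simplified assumption made only for illustration, not the grammar given earlier, and alternatives are picked at random just to show that every sequence of choices yields a syntactically correct result.

import random

# Nonterminals map to lists of alternatives; each alternative is a list of
# symbols. Symbols in angle brackets are nonterminals, everything else is a
# terminal. This toy grammar is illustrative only.
GRAMMAR = {
    "<postal-address>": [["<name-part>", ", ", "<street>", ", ", "<town>"]],
    "<name-part>": [["<first-name>", " ", "<last-name>"]],
    "<first-name>": [["Alice"], ["Bob"]],
    "<last-name>": [["Smith"], ["Jones"]],
    "<street>": [["1 Main St"], ["42 Oak Ave"]],
    "<town>": [["Springfield"], ["Riverton"]],
}

def derive(symbol):
    # Terminals are emitted as-is; a nonterminal is replaced by one of its
    # alternatives, whose symbols are then expanded in turn (a derivation step).
    if symbol not in GRAMMAR:
        return symbol
    alternative = random.choice(GRAMMAR[symbol])
    return "".join(derive(s) for s in alternative)

print(derive("<postal-address>"))   # e.g. Bob Jones, 42 Oak Ave, Springfield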
Naur changed two of Backus's symbols to commonly available characters. The ::= symbol was originally a :≡. The | symbol was originally the word "or" (with a bar over it). BNF is very similar to canonical-form Boolean algebra equations that are, and were at the time, used in logic-circuit design. Backus was a mathematician and the designer of the FORTRAN programming language. The study of Boolean algebra is commonly part of a mathematics curriculum. Neither Backus nor Naur described the names enclosed in < > as non-terminals. Chomsky's terminology was not originally used in describing BNF. Naur later described them as classes in ALGOL course materials. In the ALGOL 60 report they were called metalinguistic variables. Anything other than the metasymbols ::= and |, and the class names enclosed in < >, is a symbol of the language being defined. The metasymbol ::= is to be interpreted as "is defined as". The | is used to separate alternative definitions and is interpreted as "or". The metasymbols < > are delimiters enclosing a class name. BNF is described as a metalanguage for talking about ALGOL by Peter Naur and Saul Rosen. In 1947 Saul Rosen became involved in the activities of the fledgling Association for Computing Machinery, first on the languages committee that became the IAL group and eventually led to ALGOL. He was the first managing editor of the Communications of the ACM. BNF was first used as a metalanguage to talk about the ALGOL language in the ALGOL 60 report. That is how it is explained in ALGOL programming course material developed by Peter Naur in 1962. Early ALGOL manuals by IBM, Honeywell, Burroughs and Digital Equipment Corporation followed the ALGOL 60 report using it as a metalanguage. Saul Rosen in his book describes BNF as a metalanguage for talking about ALGOL. An example of its use as a metalanguage is the definition of an arithmetic expression: <expr> ::= <term> | <expr> <addop> <term> The first symbol of an alternative may be the class being defined; the repetition, as explained by Naur, has the function of specifying that the alternative sequence can recursively begin with a previous alternative and can be repeated any number of times. For example, above <expr> is defined as a <term> followed by any number of <addop> <term>. In some later metalanguages, such as Schorre's META II, the BNF recursive repeat construct is replaced by a sequence operator, and target-language symbols are defined using quoted strings. The < and > brackets were removed. Parentheses () for mathematical grouping were added. The <expr> rule would appear in META II in this sequence-operator style; an illustrative rendering is sketched below. These changes enabled META II and its derivative programming languages to define and extend their own metalanguage, at the cost of the ability to use natural-language descriptions (metalinguistic variables) of language constructs. Many spin-off metalanguages were inspired by BNF. See META II, TREE-META, and Metacompiler. A BNF class describes a language construct formation, with formation defined as a pattern or the action of forming the pattern. The class name expr is described in natural language as a <term> followed by a sequence <addop> <term>. A class is an abstraction; we can talk about it independently of its formation. We can talk about term, independent of its definition, as being added or subtracted in expr. We can talk about a term being a specific data type and how an expr is to be evaluated having specific combinations of data types, or even reordering an expression to group data types and evaluation results of mixed types.
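The contrast between the two notations can be sketched as follows. The first line restates the BNF <expr> rule paraphrased above; the second is an illustrative rendering in the spirit of Schorre's META II notation (= introducing a rule, $ for repetition, quoted strings for target-language symbols, / for alternatives, .OUT for emitting code), written here as a sketch rather than as a quotation from the ALGOL 60 report or the META II paper, and omitting META II's exact rule punctuation: <expr> ::= <term> | <expr> <addop> <term> versus EXPR = TERM $( '+' TERM .OUT('ADD') / '-' TERM .OUT('SUB') ) In the META II version the left recursion disappears: the rule reads as a TERM followed by zero or more occurrences of a plus or minus sign and another TERM, with each repetition immediately emitting the corresponding assembly-style operation.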
The natural-language supplement provided specific details of the language class semantics to be used by a compiler implementation and a programmer writing an ALGOL program. Natural-language description further supplemented the syntax as well. The integer rule is a good example of natural and metalanguage used to describe syntax: There are no specifics on white space in the above. As far as the rule states, we could have space between the digits. In the natural language we complement the BNF metalanguage by explaining that the digit sequence can have no white space between the digits. English is only one of the possible natural languages. Translations of the ALGOL reports were available in many natural languages. The origin of BNF is not as important as its impact on programming language development. During the period immediately following the publication of the ALGOL 60 report BNF was the basis of many compiler-compiler systems. Some, like "A Syntax Directed Compiler for ALGOL 60" developed by Edgar T. Irons and "A Compiler Building System" Developed by Brooker and Morris, directly used BNF. Others, like the Schorre Metacompilers, made it into a programming language with only a few changes. <class name> became symbol identifiers, dropping the enclosing <, > and using quoted strings for symbols of the target language. Arithmetic-like grouping provided a simplification that removed using classes where grouping was its only value. The META II arithmetic expression rule shows grouping use. Output expressions placed in a META II rule are used to output code and labels in an assembly language. Rules in META II are equivalent to a class definitions in BNF. The Unix utility yacc is based on BNF with code production similar to META II. yacc is most commonly used as a parser generator, and its roots are obviously BNF. BNF today is one of the oldest computer-related languages still in use. Further examples BNF's syntax itself may be represented with a BNF like the following: <syntax> ::= <rule> | <rule> <syntax> <rule> ::= <opt-whitespace> "<" <rule-name> ">" <opt-whitespace> "::=" <opt-whitespace> <expression> <line-end> <opt-whitespace> ::= " " <opt-whitespace> | "" <expression> ::= <list> | <list> <opt-whitespace> "|" <opt-whitespace> <expression> <line-end> ::= <opt-whitespace> <EOL> | <line-end> <line-end> <list> ::= <term> | <term> <opt-whitespace> <list> <term> ::= <literal> | "<" <rule-name> ">" <literal> ::= '"' <text1> '"' | "'" <text2> "'" <text1> ::= "" | <character1> <text1> <text2> ::= "" | <character2> <text2> <character> ::= <letter> | <digit> | <symbol> <letter> ::= "A" | "B" | "C" | "D" | "E" | "F" | "G" | "H" | "I" | "J" | "K" | "L" | "M" | "N" | "O" | "P" | "Q" | "R" | "S" | "T" | "U" | "V" | "W" | "X" | "Y" | "Z" | "a" | "b" | "c" | "d" | "e" | "f" | "g" | "h" | "i" | "j" | "k" | "l" | "m" | "n" | "o" | "p" | "q" | "r" | "s" | "t" | "u" | "v" | "w" | "x" | "y" | "z" <digit> ::= "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9" <symbol> ::= "|" | " " | "!" | "#" | "$" | "%" | "&" | "(" | ")" | "*" | "+" | "," | "-" | "." | "/" | ":" | ";" | ">" | "=" | "<" | "?" | "@" | "[" | "\" | "]" | "^" | "_" | "`" | "{" | "}" | "~" <character1> ::= <character> | "'" <character2> ::= <character> | '"' <rule-name> ::= <letter> | <rule-name> <rule-char> <rule-char> ::= <letter> | <digit> | "-" Note that "" is the empty string. The original BNF did not use quotes as shown in <literal> rule. This assumes that no whitespace is necessary for proper interpretation of the rule. 
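One way to check the meta-grammar above is to parse a small rule against it. Take the illustrative rule <digit> ::= "0" | "1" (not one of the rules given earlier): read top-down, the whole line is a <syntax> made of a single <rule>; that <rule> decomposes into the rule name digit between "<" and ">", the "::=" metasymbol, an <expression>, and a <line-end>; the <expression> is two <list>s separated by "|"; and each <list> is a single <term> that is a <literal>, namely a one-character <text1> wrapped in double quotes. Longer rules differ only in how many terms each list carries and how many alternatives the expression joins.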
<EOL> represents the appropriate line-end specifier (in ASCII, carriage-return, line-feed or both depending on the operating system). <rule-name> and <text> are to be substituted with a declared rule's name/label or literal text, respectively. In the U.S. postal address example above, the entire block-quote is a <syntax>. Each line or unbroken grouping of lines is a rule; for example one rule begins with <name-part> ::=. The other part of that rule (aside from a line-end) is an expression, which consists of two lists separated by a vertical bar |. These two lists consists of some terms (three terms and two terms, respectively). Each term in this particular rule is a rule-name. Variants EBNF There are many variants and extensions of BNF, generally either for the sake of simplicity and succinctness, or to adapt it to a specific application. One common feature of many variants is the use of regular expression repetition operators such as * and +. The extended Backus–Naur form (EBNF) is a common one. Another common extension is the use of square brackets around optional items. Although not present in the original ALGOL 60 report (instead introduced a few years later in IBM's PL/I definition), the notation is now universally recognised. ABNF Augmented Backus–Naur form (ABNF) and Routing Backus–Naur form (RBNF) are extensions commonly used to describe Internet Engineering Task Force (IETF) protocols. Parsing expression grammars build on the BNF and regular expression notations to form an alternative class of formal grammar, which is essentially analytic rather than generative in character. Others Many BNF specifications found online today are intended to be human-readable and are non-formal. These often include many of the following syntax rules and extensions: Optional items enclosed in square brackets: [<item-x>]. Items existing 0 or more times are enclosed in curly brackets or suffixed with an asterisk (*) such as <word> ::= <letter> {<letter>} or <word> ::= <letter> <letter>* respectively. Items existing 1 or more times are suffixed with an addition (plus) symbol, +, such as <word> ::= <letter>+. Terminals may appear in bold rather than italics, and non-terminals in plain text rather than angle brackets. Where items are grouped, they are enclosed in simple parentheses. Software using BNF or variants Software that accepts BNF (or a superset) as input ANTLR, a parser generator written in Java Coco/R, compiler generator accepting an attributed grammar in EBNF DMS Software Reengineering Toolkit, program analysis and transformation system for arbitrary languages GOLD, a BNF parser generator RPA BNF parser. 
Online (PHP) demo parsing: JavaScript, XML XACT X4MR System, a rule-based expert system for programming language translation XPL Analyzer, a tool which accepts simplified BNF for a language and produces a parser for that language in XPL; it may be integrated into the supplied SKELETON program, with which the language may be debugged (a SHARE contributed program, which was preceded by A Compiler Generator) bnfparser2, a universal syntax verification utility bnf2xml, Markup input with XML tags using advanced BNF matching JavaCC, Java Compiler Compiler tm (JavaCC tm) - The Java Parser Generator Similar software GNU bison, GNU version of yacc Yacc, parser generator (most commonly used with the Lex preprocessor) Racket's parser tools, lex and yacc-style parsing (Beautiful Racket edition) Qlik Sense, a BI tool, uses a variant of BNF for scripting BNF Converter (BNFC), operating on a variant called "labeled Backus–Naur form" (LBNF). In this variant, each production for a given non-terminal is given a label, which can be used as a constructor of an algebraic data type representing that nonterminal. The converter is capable of producing types and parsers for abstract syntax in several languages, including Haskell and Java See also Augmented Backus–Naur form (ABNF) Compiler Description Language (CDL) Definite clause grammar – a more expressive alternative to BNF used in Prolog Extended Backus–Naur form (EBNF) Meta-II – an early compiler writing tool and notation Syntax diagram – railroad diagram Translational Backus–Naur form (TBNF) Van Wijngaarden grammar – used in preference to BNF to define Algol68 Wirth syntax notation – an alternative to BNF from 1977 References External links . — Augmented BNF for Syntax Specifications: ABNF. — Routing BNF: A Syntax Used in Various Protocol Specifications. ISO/IEC 14977:1996(E) Information technology – Syntactic metalanguage – Extended BNF, available from or from (the latter is missing the cover page, but is otherwise much cleaner) Language grammars , the original BNF. , freely available BNF grammars for SQL. , freely available BNF grammars for SQL, Ada, Java. , freely available BNF/EBNF grammars for C/C++, Pascal, COBOL, Ada 95, PL/I. . Includes parts 11, 14, and 21 of the ISO 10303 (STEP) standard. Formal languages Compiler construction Metalanguages
Backus–Naur form
[ "Mathematics" ]
4,789
[ "Formal languages", "Mathematical logic" ]
62,275
https://en.wikipedia.org/wiki/Mesotardigrada
Mesotardigrada is one of three classes of tardigrades, consisting of a single species, Thermozodium esakii. The animal reportedly has six claws of equal length at each foot. This species was described in 1937 by German zoologist Gilbert Rahm from a hot spring near Nagasaki, Japan. The inability of taxonomists to replicate Rahm's finding has cast doubt on the accuracy of the description, making T. esakii, and by extension the entire class Mesotardigrada, a taxon inquirendum. Taxonomic ambiguity The type specimen Rahm used as the basis of his description has either been lost or it was never preserved in the first place, which Grothman et al. (2017) suggest is consistent with the lax taxonomic standards of the 1930s. Thus, re-examination of the original specimen is not possible. Complicating matters further, the type locality from which Rahm collected his specimen may have been destroyed by an earthquake and subsequent searches for additional specimens matching the original description have been unsuccessful. Grothman et al. (2017) suggest that Rahm might have observed and misinterpreted a species in the class Heterotardigrada, possibly belonging to the genus Carphania or Oreella. See also Salinella – another high rank taxon whose sole member has not been independently verified to exist References External links Tardigrade phylogenetics Tardigrades Ecdysozoa classes Monotypic animal classes Nomina dubia
Mesotardigrada
[ "Biology" ]
312
[ "Space-flown life", "Controversial taxa", "Tardigrades", "Nomina dubia", "Biological hypotheses" ]
62,289
https://en.wikipedia.org/wiki/Monosodium%20glutamate
Monosodium glutamate (MSG), also known as sodium glutamate, is a sodium salt of glutamic acid. MSG is found naturally in some foods including tomatoes and cheese in this glutamic acid form. MSG is used in cooking as a flavor enhancer with a savory taste that intensifies the umami flavor of food, as naturally occurring glutamate does in foods such as stews and meat soups. MSG was first prepared in 1908 by Japanese biochemist Kikunae Ikeda, who tried to isolate and duplicate the savory taste of kombu, an edible seaweed used as a broth (dashi) for Japanese cuisine. MSG balances, blends, and rounds the perception of other tastes. MSG, along with disodium ribonucleotides, is commonly used and found in stock (bouillon) cubes, soups, ramen, gravy, stews, condiments, savory snacks, etc. The U.S. Food and Drug Administration has given MSG its generally recognized as safe (GRAS) designation. It is a popular misconception that MSG can cause headaches and other feelings of discomfort, known as "Chinese restaurant syndrome". Several blinded studies show no such effects when MSG is combined with food in normal concentrations, and are inconclusive when MSG is added to broth in large concentrations. The European Union classifies it as a food additive permitted in certain foods and subject to quantitative limits. MSG has the HS code 2922.42 and the E number E621. Use Pure MSG is reported not to have a highly pleasant taste until it is combined with a savory aroma. The basic sensory function of MSG is attributed to its ability to enhance savory taste-active compounds when added in the proper concentration. The optimal concentration varies by food; in clear soup, the "pleasure score" rapidly falls with the addition of more than one gram of MSG per 100mL. The sodium content (in mass percent) of MSG, 12.28%, is about one-third of that in sodium chloride (39.34%), due to the greater mass of the glutamate counterion. Although other salts of glutamate have been used in low-salt soups, they are less palatable than MSG. Food scientist Steve Witherly noted in 2017 that MSG may promote healthy eating by enhancing the flavor of food such as kale while reducing the use of salt. The ribonucleotide food additives disodium inosinate (E631) and disodium guanylate (E627), as well as conventional salt, are usually used with monosodium glutamate-containing ingredients as they seem to have a synergistic effect. "Super salt" is a mixture of 9 parts salt, to one part MSG and 0.1 parts disodium ribonucleotides (a mixture of disodium inosinate and disodium guanylate). Safety MSG is generally recognized as safe to eat. A popular belief is that MSG can cause headaches and other feelings of discomfort, but blinded tests have not provided strong evidence of this. International bodies governing food additives currently consider MSG safe for human consumption as a flavor enhancer. Under normal conditions, humans can metabolize relatively large quantities of glutamate, which is naturally produced in the gut in the course of protein hydrolysis. The median lethal dose (LD50) is between 15 and 18 g/kg body weight in rats and mice, respectively, five times the LD50 of table salt (3 g/kg in rats). The use of MSG as a food additive and the natural levels of glutamic acid in foods are not of toxic concern in humans. Specifically MSG in the diet does not increase glutamate in the brain or affect brain function. 
A 1995 report from the Federation of American Societies for Experimental Biology (FASEB) for the United States Food and Drug Administration (FDA) concluded that MSG is safe when "eaten at customary levels" and, although a subgroup of otherwise-healthy individuals develops an MSG symptom complex when exposed to 3 g of MSG in the absence of food, MSG as a cause has not been established because the symptom reports are anecdotal. According to the report, no data supports the role of glutamate in chronic disease. High-quality evidence has failed to demonstrate a relationship between the MSG symptom complex and actual MSG consumption. No association has been demonstrated, and the few responses were inconsistent. No symptoms were observed when MSG was used in food. Adequately controlling for experimental bias includes a blinded, placebo-controlled experimental design and administration by capsule, because of the unique aftertaste of glutamates. In a 1993 study, 71 fasting participants were given 5 g of MSG and then a standard breakfast. One reaction (to the placebo, in a self-identified MSG-sensitive individual) occurred. A study in 2000 tested the reaction of 130 subjects with a reported sensitivity to MSG. Multiple trials were performed, with subjects exhibiting at least two symptoms continuing. Two people out of the 130 responded to all four challenges. Because of the low prevalence, the researchers concluded that a response to MSG was not reproducible. Studies exploring MSG's role in obesity have yielded mixed results. Although several studies have investigated anecdotal links between MSG and asthma, current evidence does not support a causal association. The Food Standards Australia New Zealand (FSANZ) MSG technical report concludes, "There is no convincing evidence that MSG is a significant factor in causing systemic reactions resulting in severe illness or mortality. The studies conducted to date on Chinese restaurant syndrome (CRS) have largely failed to demonstrate a causal association with MSG. Symptoms resembling those of CRS may be provoked in a clinical setting in small numbers of individuals by the administration of large doses of MSG without food. However, such effects are neither persistent nor serious and are likely to be attenuated when MSG is consumed with food. In terms of more serious adverse effects such as the triggering of bronchospasm in asthmatic individuals, the evidence does not indicate that MSG is a significant trigger factor." However, the FSANZ MSG report says that although no data is available on average MSG consumption in Australia and New Zealand, "data from the United Kingdom indicates an average intake of 590 mg/day, with extreme users (97.5th percentile consumers) consuming 2,330 mg/day" (Rhodes et al. 1991). In a highly seasoned restaurant meal, intakes as high as 5,000 mg or more may be possible (Yang et al. 1997). When very large doses of MSG (more than 5 g in a bolus dose) are ingested, plasma glutamate concentration increases significantly. However, the concentration typically returns to normal within two hours. In general, foods providing metabolizable carbohydrates significantly attenuate peak plasma glutamate levels at doses up to 150 mg/kg body weight. Two earlier studies, the 1987 Joint FAO/WHO Expert Committee on Food Additives (JECFA) report and the 1995 FASEB report, concluded that "there may be a small number of unstable asthmatics who respond to doses of 1.5–2.5 g of MSG in the absence of food".
The FASEB evaluation concluded, "sufficient evidence exists to indicate some individuals may experience manifestations of CRS when exposed to a ≥3 g bolus dose of MSG in the absence of food". Production MSG has been produced by three methods: hydrolysis of vegetable proteins with hydrochloric acid to disrupt peptide bonds (1909–1962); direct chemical synthesis with acrylonitrile (1962–1973); and bacterial fermentation (the current method). Wheat gluten was originally used for hydrolysis because it contains more than 30 g of glutamate and glutamine per 100 g of protein. As demand for MSG increased, chemical synthesis and fermentation were studied. The polyacrylic fiber industry began in Japan during the mid-1950s, and acrylonitrile was adopted as a base material to synthesize MSG. As of 2016, most MSG worldwide is produced by bacterial fermentation in a process similar to making vinegar or yogurt. Sodium is added later, for neutralization. During fermentation, Corynebacterium species, cultured with ammonia and carbohydrates from sugar beets, sugarcane, tapioca or molasses, excrete amino acids into a culture broth from which L-glutamate is isolated. Kyowa Hakko Kogyo (currently Kyowa Kirin) developed industrial fermentation to produce L-glutamate. The conversion yield and production rate (from sugars to glutamate) continue to improve in the industrial production of MSG, keeping up with demand. The product, after filtration, concentration, acidification, and crystallization, is glutamate, sodium ions, and water. Chemical properties The compound is usually available as the monohydrate, a white, odorless, crystalline powder. The solid contains separate sodium cations and glutamate anions in zwitterionic form, −OOC−CH(NH3+)−(CH2)2−COO−. In solution it dissociates into glutamate and sodium ions. MSG is freely soluble in water, but it is not hygroscopic and is insoluble in common organic solvents (such as ether). It is generally stable under food-processing conditions. MSG does not break down during cooking and, like other amino acids, will exhibit a Maillard reaction (browning) in the presence of sugars at very high temperatures. History Glutamic acid was discovered and identified in 1866 by the German chemist Karl Heinrich Ritthausen, who treated wheat gluten (for which it was named) with sulfuric acid. Kikunae Ikeda of Tokyo Imperial University isolated glutamic acid as a taste substance in 1908 from the seaweed Laminaria japonica (kombu) by aqueous extraction and crystallization, calling its taste umami ("delicious taste"). Ikeda noticed that dashi, the Japanese broth of katsuobushi and kombu, had a unique taste not yet scientifically described (not sweet, salty, sour, or bitter). To determine which glutamate could result in the taste of umami, he studied the taste properties of numerous glutamate salts such as calcium, potassium, ammonium, and magnesium glutamate. Of these salts, monosodium glutamate was the most soluble and palatable, as well as the easiest to crystallize. Ikeda called his product "monosodium glutamate" and submitted a patent to produce MSG; the Suzuki brothers began commercial production of MSG in 1909 using the term Ajinomoto ("essence of taste"). Society and culture Regulations United States MSG is one of several forms of glutamic acid found in foods, in large part because glutamic acid (an amino acid) is pervasive in nature.
Glutamic acid and its salts may be present in a variety of other additives, including hydrolyzed vegetable protein, autolyzed yeast, hydrolyzed yeast, yeast extract, soy extracts, and protein isolate, which must be specifically labeled. Since 1998, MSG cannot be included in the term "spices and flavorings". However, the term "natural flavor/s" is used by the food industry for glutamic acid (chemically similar to MSG, lacking only the sodium ion). The Food and Drug Administration (FDA) does not require disclosure of components and amounts of "natural flavor/s." Australia and New Zealand Standard 1.2.4 of the Australia and New Zealand Food Standards Code requires MSG to be labeled in packaged foods. The label must have the food-additive class name (e.g. "flavour enhancer"), followed by the name of the additive ("MSG") or its International Numbering System (INS) number, 621. Pakistan The Punjab Food Authority banned Ajinomoto, commonly known as Chinese salt, which contains MSG, from being used in food products in the Punjab Province of Pakistan in January 2018. The prohibition against the import and manufacture of MSG was enforced on 28 February 2018, following an order by the Supreme Court on 10 February 2018. In 2024, the federal government lifted the ban on MSG, following objections from Japan and a review of scientific evidence by an expert committee. The committee comprising experts from various institutions—including the Pakistan Council of Scientific and Industrial Research, National Agricultural Research Centre, and Pakistan Standards and Quality Control Authority—confirmed MSG as a safe food additive. Names The following are alternative names for MSG: Chemical names and identifiers Monosodium glutamate or sodium glutamate Sodium 2-aminopentanedioate Glutamic acid, monosodium salt, monohydrate L-Glutamic acid, monosodium salt, monohydrate L-Monosodium glutamate monohydrate Monosodium L-glutamate monohydrate MSG monohydrate Sodium glutamate monohydrate UNII-W81N5U6R6U Flavour enhancer E621 Trade names Accent, produced by B&G Foods Inc., Parsippany, New Jersey, US Aji-No-Moto, produced by Ajinomoto, 26 countries, head office Japan Tasting Powder Ve-Tsin by Tien Chu Ve-Tsin Sazón, distributed by Goya Foods, Jersey City, NJ Stigma in cuisine Origin The controversy surrounding the safety of MSG started with the publication of Robert Ho Man Kwok's correspondence letter titled "Chinese-Restaurant Syndrome" in the New England Journal of Medicine on 4 April 1968. In his letter, Kwok suggested several possible causes before he nominated MSG for his symptoms. This letter was initially met with insider satirical responses, often using race as prop for humorous effect, within the medical community. During the discursive uptake in media, the conversations were recontextualized as legitimate while the race-based motivations of the humor were not parsed, which replicated historical racial prejudices. Despite the resulting public backlash, the Food and Drug Administration (FDA) did not remove MSG from their Generally Recognized as Safe list. In 1970, a National Research Council under the National Academy of Science, on behalf of the FDA, investigated MSG but concluded that MSG was safe for consumption. Reactions The controversy about MSG is tied to racial stereotypes against East Asian societies. Herein, specifically East Asian cuisine was targeted, whereas the widespread usage of MSG in Western processed food does not generate the same stigma. 
These kinds of perceptions, such as the rhetoric of the so-called Chinese restaurant syndrome, have been attributed to xenophobic or racist biases. Food historian Ian Mosby wrote that fear of MSG in Chinese food is part of the US's long history of viewing the "exotic" cuisine of Asia as dangerous and dirty. In 2016, Anthony Bourdain stated in Parts Unknown that "I think MSG is good stuff ... You know what causes Chinese restaurant syndrome? Racism." In 2020, Ajinomoto, the leading manufacturer of MSG, and others launched the #RedefineCRS campaign, in reference to the term "Chinese restaurant syndrome", to combat misconceptions about MSG, saying they intended to highlight both the xenophobic prejudice against East Asian cuisine and the scientific evidence on MSG's safety. Following the campaign, Merriam-Webster announced it would review the term. See also References External links The Facts on Monosodium Glutamate (EUFIC) E-number additives Edible salt Flavor enhancers Food additives Glutamates Japanese inventions Metal-amino acid complexes Organic sodium salts Umami enhancers
Monosodium glutamate
[ "Chemistry" ]
3,427
[ "Coordination chemistry", "Salts", "Organic sodium salts", "Edible salt", "Metal-amino acid complexes" ]
62,290
https://en.wikipedia.org/wiki/Dinoflagellate
The dinoflagellates () are a monophyletic group of single-celled eukaryotes constituting the phylum Dinoflagellata and are usually considered protists. Dinoflagellates are mostly marine plankton, but they are also common in freshwater habitats. Their populations vary with sea surface temperature, salinity, and depth. Many dinoflagellates are photosynthetic, but a large fraction of these are in fact mixotrophic, combining photosynthesis with ingestion of prey (phagotrophy and myzocytosis). In terms of number of species, dinoflagellates are one of the largest groups of marine eukaryotes, although substantially smaller than diatoms. Some species are endosymbionts of marine animals and play an important part in the biology of coral reefs. Other dinoflagellates are unpigmented predators on other protozoa, and a few forms are parasitic (for example, Oodinium and Pfiesteria). Some dinoflagellates produce resting stages, called dinoflagellate cysts or dinocysts, as part of their lifecycles; this occurs in 84 of the 350 described freshwater species and a little more than 10% of the known marine species. Dinoflagellates are alveolates possessing two flagella, the ancestral condition of bikonts. About 1,555 species of free-living marine dinoflagellates are currently described. Another estimate suggests about 2,000 living species, of which more than 1,700 are marine (free-living, as well as benthic) and about 220 are from fresh water. The latest estimates suggest a total of 2,294 living dinoflagellate species, which includes marine, freshwater, and parasitic dinoflagellates. A rapid accumulation of certain dinoflagellates can result in a visible coloration of the water, colloquially known as red tide (a harmful algal bloom), which can cause shellfish poisoning if humans eat contaminated shellfish. Some dinoflagellates also exhibit bioluminescence, primarily emitting blue-green light, which may be visible in oceanic areas under certain conditions. Etymology The term "dinoflagellate" is a combination of the Greek dinos and the Latin flagellum. Dinos means "whirling" and signifies the distinctive way in which dinoflagellates were observed to swim. Flagellum means "whip" and this refers to their flagella. History In 1753, the first modern dinoflagellates were described by Henry Baker as "Animalcules which cause the Sparkling Light in Sea Water", and named by Otto Friedrich Müller in 1773. The term derives from the Greek word δῖνος (dînos), meaning whirling, and Latin flagellum, a diminutive term for a whip or scourge. In the 1830s, the German microscopist Christian Gottfried Ehrenberg examined many water and plankton samples and proposed several dinoflagellate genera that are still used today including Peridinium, Prorocentrum, and Dinophysis. These same dinoflagellates were first defined by Otto Bütschli in 1885 as the flagellate order Dinoflagellida. Botanists treated them as a division of algae, named Pyrrophyta or Pyrrhophyta ("fire algae"; Greek pyrr(h)os, fire) after the bioluminescent forms, or Dinophyta. At various times, the cryptomonads, ebriids, and ellobiopsids have been included here, but only the last are now considered close relatives. Dinoflagellates have a known ability to transform from noncyst to cyst-forming strategies, which makes recreating their evolutionary history extremely difficult. Morphology Dinoflagellates are unicellular and possess two dissimilar flagella arising from the ventral cell side (dinokont flagellation). 
They have a ribbon-like transverse flagellum with multiple waves that beats to the cell's left, and a more conventional one, the longitudinal flagellum, that beats posteriorly. The transverse flagellum is a wavy ribbon in which only the outer edge undulates from base to tip, due to the action of the axoneme which runs along it. The axonemal edge has simple hairs that can be of varying lengths. The flagellar movement produces forward propulsion and also a turning force. The longitudinal flagellum is relatively conventional in appearance, with few or no hairs. It beats with only one or two periods to its wave. The flagella lie in surface grooves: the transverse one in the cingulum and the longitudinal one in the sulcus, although its distal portion projects freely behind the cell. In dinoflagellate species with desmokont flagellation (e.g., Prorocentrum), the two flagella are differentiated as in dinokonts, but they are not associated with grooves. Dinoflagellates have a complex cell covering called an amphiesma or cortex, composed of a series of membranes, flattened vesicles called alveoli (= amphiesmal vesicles) and related structures. In thecate ("armoured") dinoflagellates, these support overlapping cellulose plates to create a sort of armor called the theca or lorica, as opposed to athecate ("nude") dinoflagellates. These occur in various shapes and arrangements, depending on the species and sometimes on the stage of the dinoflagellate. Conventionally, the term tabulation has been used to refer to this arrangement of thecal plates. The plate configuration can be denoted with the plate formula or tabulation formula. Fibrous extrusomes are also found in many forms. A transverse groove, the so-called cingulum (or cigulum) runs around the cell, thus dividing it into an anterior (episoma) and posterior (hyposoma). If and only if a theca is present, the parts are called epitheca and hypotheca, respectively. Posteriorly, starting from the transverse groove, there is a longitudinal furrow called the sulcus. The transverse flagellum strikes in the cingulum, the longitudinal flagellum in the sulcus. Together with various other structural and genetic details, this organization indicates a close relationship between the dinoflagellates, the Apicomplexa, and ciliates, collectively referred to as the alveolates. Dinoflagellate tabulations can be grouped into six "tabulation types": gymnodinoid, suessoid, gonyaulacoid–peridinioid, nannoceratopsioid, dinophysioid, and prorocentroid. Most Dinoflagellates have a plastid derived from secondary endosymbiosis of red algae, however dinoflagellates with plastids derived from green algae and tertiary endosymbiosis of diatoms have also been discovered. Similar to other photosynthetic organisms, dinoflagellates contain chlorophylls a and c2 and the carotenoid beta-carotene. Dinoflagellates also produce the xanthophylls including peridinin, dinoxanthin, and diadinoxanthin. These pigments give many dinoflagellates their typical golden brown color. However, the dinoflagellates Karenia brevis, Karenia mikimotoi, and Karlodinium micrum have acquired other pigments through endosymbiosis, including fucoxanthin. This suggests their chloroplasts were incorporated by several endosymbiotic events involving already colored or secondarily colorless forms. The discovery of plastids in the Apicomplexa has led some to suggest they were inherited from an ancestor common to the two groups, but none of the more basal lines has them. 
All the same, the dinoflagellate cell consists of the more common organelles such as rough and smooth endoplasmic reticulum, Golgi apparatus, mitochondria, lipid and starch grains, and food vacuoles. Some have even been found with a light-sensitive organelle, the eyespot or stigma, or a larger nucleus containing a prominent nucleolus. The dinoflagellate Erythropsidinium has the smallest known eye. Some athecate species have an internal skeleton consisting of two star-like siliceous elements that has an unknown function, and can be found as microfossils. Tappan gave a survey of dinoflagellates with internal skeletons. This included the first detailed description of the pentasters in Actiniscus pentasterias, based on scanning electron microscopy. They are placed within the order Gymnodiniales, suborder Actiniscineae. Theca structure and formation The formation of thecal plates has been studied in detail through ultrastructural studies. The dinoflagellate nucleus: dinokaryon 'Core dinoflagellates' (dinokaryotes) have a peculiar form of nucleus, called a dinokaryon, in which the chromosomes are attached to the nuclear membrane. These carry reduced number of histones. In place of histones, dinoflagellate nuclei contain a novel, dominant family of nuclear proteins that appear to be of viral origin, thus are called Dinoflagellate viral nucleoproteins (DVNPs) which are highly basic, bind DNA with similar affinity to histones, and occur in multiple posttranslationally modified forms. Dinoflagellate nuclei remain condensed throughout interphase rather than just during mitosis, which is closed and involves a uniquely extranuclear mitotic spindle. This sort of nucleus was once considered to be an intermediate between the nucleoid region of prokaryotes and the true nuclei of eukaryotes, so were termed "mesokaryotic", but now are considered derived rather than primitive traits (i. e. ancestors of dinoflagellates had typical eukaryotic nuclei). In addition to dinokaryotes, DVNPs can be found in a group of basal dinoflagellates (known as Marine Alveolates, "MALVs") that branch as sister to dinokaryotes (Syndiniales). Classification Generality Dinoflagellates are protists and have been classified using both the International Code of Botanical Nomenclature (ICBN, now renamed as ICN) and the International Code of Zoological Nomenclature (ICZN). About half of living dinoflagellate species are autotrophs possessing chloroplasts and half are nonphotosynthesising heterotrophs. The peridinin dinoflagellates, named after their peridinin plastids, appear to be ancestral for the dinoflagellate lineage. Almost half of all known species have chloroplasts, which are either the original peridinin plastids or new plastids acquired from other lineages of unicellular algae through endosymbiosis. The remaining species have lost their photosynthetic abilities and have adapted to a heterotrophic, parasitic or kleptoplastic lifestyle. Most (but not all) dinoflagellates have a dinokaryon, described below (see: Life cycle, below). Dinoflagellates with a dinokaryon are classified under Dinokaryota, while dinoflagellates without a dinokaryon are classified under Syndiniales. Although classified as eukaryotes, the dinoflagellate nuclei are not characteristically eukaryotic, as some of them lack histones and nucleosomes, and maintain continually condensed chromosomes during mitosis. 
The dinoflagellate nucleus was termed 'mesokaryotic' by Dodge (1966), due to its possession of intermediate characteristics between the coiled DNA areas of prokaryotic bacteria and the well-defined eukaryotic nucleus. This group, however, does contain typically eukaryotic organelles, such as Golgi bodies, mitochondria, and chloroplasts. Jakob Schiller (1931–1937) provided a description of all the species, both marine and freshwater, known at that time. Later, Alain Sournia (1973, 1978, 1982, 1990, 1993) listed the new taxonomic entries published after Schiller (1931–1937). Sournia (1986) gave descriptions and illustrations of the marine genera of dinoflagellates, excluding information at the species level. The latest index is written by Gómez. Identification English-language taxonomic monographs covering large numbers of species are published for the Gulf of Mexico, the Indian Ocean, the British Isles, the Mediterranean and the North Sea. The main source for identification of freshwater dinoflagellates is the Süsswasser Flora. Calcofluor-white can be used to stain thecal plates in armoured dinoflagellates. Ecology and physiology Habitats Dinoflagellates are found in all aquatic environments: marine, brackish, and fresh water, including in snow or ice. They are also common in benthic environments and sea ice. Endosymbionts All Zooxanthellae are dinoflagellates and most of them are members within Symbiodiniaceae (e.g. the genus Symbiodinium). The association between Symbiodinium and reef-building corals is widely known. However, endosymbiontic Zooxanthellae inhabit a great number of other invertebrates and protists, for example many sea anemones, jellyfish, nudibranchs, the giant clam Tridacna, and several species of radiolarians and foraminiferans. Many extant dinoflagellates are parasites (here defined as organisms that eat their prey from the inside, i.e. endoparasites, or that remain attached to their prey for longer periods of time, i.e. ectoparasites). They can parasitize animal or protist hosts. Protoodinium, Crepidoodinium, Piscinoodinium, and Blastodinium retain their plastids while feeding on their zooplanktonic or fish hosts. In most parasitic dinoflagellates, the infective stage resembles a typical motile dinoflagellate cell. Nutritional strategies Three nutritional strategies are seen in dinoflagellates: phototrophy, mixotrophy, and heterotrophy. Phototrophs can be photoautotrophs or auxotrophs. Mixotrophic dinoflagellates are photosynthetically active, but are also heterotrophic. Facultative mixotrophs, in which autotrophy or heterotrophy is sufficient for nutrition, are classified as amphitrophic. If both forms are required, the organisms are mixotrophic sensu stricto. Some free-living dinoflagellates do not have chloroplasts, but host a phototrophic endosymbiont. A few dinoflagellates may use alien chloroplasts (cleptochloroplasts), obtained from food (kleptoplasty). Some dinoflagellates may feed on other organisms as predators or parasites. Food inclusions contain bacteria, bluegreen algae, diatoms, ciliates, and other dinoflagellates. Mechanisms of capture and ingestion in dinoflagellates are quite diverse. Several dinoflagellates, both thecate (e.g. Ceratium hirundinella, Peridinium globulus) and nonthecate (e.g. Oxyrrhis marina, Gymnodinium sp. and Kofoidinium spp.), draw prey to the sulcal region of the cell (either via water currents set up by the flagella or via pseudopodial extensions) and ingest the prey through the sulcus. In several Protoperidinium spp., e.g. P. 
conicum, a large feeding veil—a pseudopod called the pallium—is extruded to capture prey which is subsequently digested extracellularly (= pallium-feeding). Oblea, Zygabikodinium, and Diplopsalis are the only other dinoflagellate genera known to use this particular feeding mechanism. Gymnodinium fungiforme, commonly found as a contaminant in algal or ciliate cultures, feeds by attaching to its prey and ingesting prey cytoplasm through an extensible peduncle. Two related genera, Polykrikos and Neatodinium, shoot out a harpoon-like organelle to capture prey. Some mixotrophic dinoflagellates are able to produce neurotoxins that have anti-grazing effects on larger copepods and enhance the ability of the dinoflagellate to prey upon larger copepods. Toxic strains of Karlodinium veneficum produce karlotoxin that kills predators who ingest them, thus reducing predatory populations and allowing blooms of both toxic and non-toxic strains of K. veneficum. Further, the production of karlotoxin enhances the predatory ability of K. veneficum by immobilizing its larger prey. K. armiger are more inclined to prey upon copepods by releasing a potent neurotoxin that immobilizes its prey upon contact. When K. armiger are present in large enough quantities, they are able to cull whole populations of their copepod prey. The feeding mechanisms of the oceanic dinoflagellates remain unknown, although pseudopodial extensions were observed in Podolampas bipes. Blooms Introduction Dinoflagellate blooms are generally unpredictable, short, with low species diversity, and with little species succession. The low species diversity can be due to multiple factors. One way a lack of diversity may occur in a bloom is through a reduction in predation and a decreased competition. The first may be achieved by having predators reject the dinoflagellate, by, for example, decreasing the amount of food it can eat. This additionally helps prevent a future increase in predation pressure by causing predators that reject it to lack the energy to breed. A species can then inhibit the growth of its competitors, thus achieving dominance. Harmful algal blooms Dinoflagellates sometimes bloom in concentrations of more than a million cells per millilitre. Under such circumstances, they can produce toxins (generally called dinotoxins) in quantities capable of killing fish and accumulating in filter feeders such as shellfish, which in turn may be passed on to people who eat them. This phenomenon is called a red tide, from the color the bloom imparts to the water. Some colorless dinoflagellates may also form toxic blooms, such as Pfiesteria. Some dinoflagellate blooms are not dangerous. Bluish flickers visible in ocean water at night often come from blooms of bioluminescent dinoflagellates, which emit short flashes of light when disturbed. A red tide occurs because dinoflagellates are able to reproduce rapidly and copiously as a result of the abundant nutrients in the water. Although the resulting red waves are an interesting visual phenomenon, they contain toxins that not only affect all marine life in the ocean, but the people who consume them as well. A specific carrier is shellfish. This can introduce both nonfatal and fatal illnesses. One such poison is saxitoxin, a powerful paralytic neurotoxin. Human inputs of phosphate further encourage these red tides, so strong interest exists in learning more about dinoflagellates, from both medical and economic perspectives. 
Dinoflagellates are known to be particularly capable of scavenging dissolved organic phosphorus (DOP) as a phosphorus nutrient; several harmful algal bloom (HAB) species have been found to be highly versatile and mechanistically diversified in utilizing different types of DOP. The ecology of harmful algal blooms is extensively studied. Bioluminescence At night, water can have an appearance of sparkling light due to the bioluminescence of dinoflagellates. More than 18 genera of dinoflagellates are bioluminescent, and the majority of them emit a blue-green light. These species contain scintillons, individual cytoplasmic bodies (about 0.5 μm in diameter) distributed mainly in the cortical region of the cell, outpockets of the main cell vacuole. They contain dinoflagellate luciferase, the main enzyme involved in dinoflagellate bioluminescence, and luciferin, a chlorophyll-derived tetrapyrrole ring that acts as the substrate for the light-producing reaction. The luminescence occurs as a brief (0.1 sec) blue flash (max 476 nm) when stimulated, usually by mechanical disturbance. Therefore, when mechanically stimulated—by boat, swimming, or waves, for example—a blue sparkling light can be seen emanating from the sea surface. Dinoflagellate bioluminescence is controlled by a circadian clock and only occurs at night. Luminescent and nonluminescent strains can occur in the same species. The number of scintillons is higher during night than during day; they break down at the end of the night, at the time of maximal bioluminescence. The luciferin-luciferase reaction responsible for the bioluminescence is pH sensitive. When the pH drops, luciferase changes its shape, allowing luciferin, more specifically tetrapyrrole, to bind. Dinoflagellates can use bioluminescence as a defense mechanism. They can startle their predators by their flashing light or they can ward off potential predators by an indirect effect such as the "burglar alarm". The bioluminescence attracts attention to the dinoflagellate and its attacker, making the predator more vulnerable to predation from higher trophic levels. Bioluminescent dinoflagellate ecosystem bays are among the rarest and most fragile, with the most famous ones being the Bioluminescent Bay in La Parguera, Lajas, Puerto Rico; Mosquito Bay in Vieques, Puerto Rico; and Las Cabezas de San Juan Reserva Natural, Fajardo, Puerto Rico. Also, a bioluminescent lagoon is near Montego Bay, Jamaica, and bioluminescent harbors surround Castine, Maine. Within the United States, Central Florida is home to the Indian River Lagoon, which is abundant with dinoflagellates in the summer and bioluminescent ctenophores in the winter. Lipid and sterol production Dinoflagellates produce characteristic lipids and sterols. One of these sterols is typical of dinoflagellates and is called dinosterol. Transport Dinoflagellate theca can sink rapidly to the seafloor in marine snow. Life cycle Introduction Dinoflagellates have a haplontic life cycle, with the possible exception of Noctiluca and its relatives. The life cycle usually involves asexual reproduction by means of mitosis, either through desmoschisis or eleuteroschisis. More complex life cycles occur, more particularly with parasitic dinoflagellates. Sexual reproduction also occurs, though this mode of reproduction is only known in a small percentage of dinoflagellates. This takes place by fusion of two individuals to form a zygote, which may remain mobile in typical dinoflagellate fashion and is then called a planozygote.
This zygote may later form a resting stage or hypnozygote, which is called a dinoflagellate cyst or dinocyst. After (or before) germination of the cyst, the hatchling undergoes meiosis to produce new haploid cells. Dinoflagellates appear to be capable of carrying out several DNA repair processes that can deal with different types of DNA damage. Dinoflagellate cysts The life cycle of many dinoflagellates includes at least one nonflagellated benthic stage as a cyst. Different types of dinoflagellate cysts are mainly defined based on morphological (number and type of layers in the cell wall) and functional (long- or short-term endurance) differences. These characteristics were initially thought to clearly distinguish pellicle (thin-walled) cysts from resting (double-walled) dinoflagellate cysts. The former were considered short-term (temporal) and the latter long-term (resting) cysts. However, during the last two decades further knowledge has highlighted the great intricacy of dinoflagellate life histories. More than 10% of the approximately 2000 known marine dinoflagellate species produce cysts as part of their life cycle (see diagram on the right). These benthic phases play an important role in the ecology of the species, as part of a planktonic-benthic link in which the cysts remain in the sediment layer during conditions unfavorable for vegetative growth and, from there, reinoculate the water column when favorable conditions are restored. Indeed, during dinoflagellate evolution the need to adapt to fluctuating environments and/or to seasonality is thought to have driven the development of this life cycle stage. Most protists form dormant cysts in order to withstand starvation and UV damage. However, there are enormous differences in the main phenotypic, physiological and resistance properties of each dinoflagellate species cysts. Unlike in higher plants most of this variability, for example in dormancy periods, has not been proven yet to be attributed to latitude adaptation or to depend on other life cycle traits. Thus, despite recent advances in the understanding of the life histories of many dinoflagellate species, including the role of cyst stages, many gaps remain in knowledge about their origin and functionality. Recognition of the capacity of dinoflagellates to encyst dates back to the early 20th century, in biostratigraphic studies of fossil dinoflagellate cysts. Paul Reinsch was the first to identify cysts as the fossilized remains of dinoflagellates. Later, cyst formation from gamete fusion was reported, which led to the conclusion that encystment is associated with sexual reproduction. These observations also gave credence to the idea that microalgal encystment is essentially a process whereby zygotes prepare themselves for a dormant period. Because the resting cysts studied until that time came from sexual processes, dormancy was associated with sexuality, a presumption that was maintained for many years. This attribution was coincident with evolutionary theories about the origin of eukaryotic cell fusion and sexuality, which postulated advantages for species with diploid resting stages, in their ability to withstand nutrient stress and mutational UV radiation through recombinational repair, and for those with haploid vegetative stages, as asexual division doubles the number of cells. 
Nonetheless, certain environmental conditions may limit the advantages of recombination and sexuality, such that in fungi, for example, complex combinations of haploid and diploid cycles have evolved that include asexual and sexual resting stages. However, in the general life cycle of cyst-producing dinoflagellates as outlined in the 1960s and 1970s, resting cysts were assumed to be the fate of sexuality, which itself was regarded as a response to stress or unfavorable conditions. Sexuality involves the fusion of haploid gametes from motile planktonic vegetative stages to produce diploid planozygotes that eventually form cysts, or hypnozygotes, whose germination is subject to both endogenous and exogenous controls. Endogenously, a species-specific physiological maturation minimum period (dormancy) is mandatory before germination can occur. Thus, hypnozygotes were also referred to as "resting" or "resistant" cysts, in reference to this physiological trait and their capacity following dormancy to remain viable in the sediments for long periods of time. Exogenously, germination is only possible within a window of favorable environmental conditions. Yet, with the discovery that planozygotes were also able to divide it became apparent that the complexity of dinoflagellate life cycles was greater than originally thought. Following corroboration of this behavior in several species, the capacity of dinoflagellate sexual phases to restore the vegetative phase, bypassing cyst formation, became well accepted. Further, in 2006 Kremp and Parrow showed the dormant resting cysts of the Baltic cold water dinoflagellates Scrippsiella hangoei and Gymnodinium sp. were formed by the direct encystment of haploid vegetative cells, i.e., asexually. In addition, for the zygotic cysts of Pfiesteria piscicida dormancy was not essential. Genomics One of the most striking features of dinoflagellates is the large amount of cellular DNA that they contain. Most eukaryotic algae contain on average about 0.54 pg DNA/cell, whereas estimates of dinoflagellate DNA content range from 3–250 pg/cell, corresponding to roughly 3000–215 000 Mb (in comparison, the haploid human genome is 3180 Mb and hexaploid Triticum wheat is 16 000 Mb). Polyploidy or polyteny may account for this large cellular DNA content, but earlier studies of DNA reassociation kinetics and recent genome analyses do not support this hypothesis. Rather, this has been attributed, hypothetically, to the rampant retroposition found in dinoflagellate genomes. In addition to their disproportionately large genomes, dinoflagellate nuclei are unique in their morphology, regulation, and composition. Their DNA is so tightly packed that exactly how many chromosomes they have is still uncertain. The dinoflagellates share an unusual mitochondrial genome organisation with their relatives, the Apicomplexa. Both groups have very reduced mitochondrial genomes (around 6 kilobases (kb) in the Apicomplexa vs ~16kb for human mitochondria). One species, Amoebophrya ceratii, has lost its mitochondrial genome completely, yet still has functional mitochondria. The genes on the dinoflagellate genomes have undergone a number of reorganisations, including massive genome amplification and recombination which have resulted in multiple copies of each gene and gene fragments linked in numerous combinations. Loss of the standard stop codons, trans-splicing of mRNAs for the mRNA of cox3, and extensive RNA editing recoding of most genes has occurred. 
The reasons for this transformation are unknown. In a small group of dinoflagellates, called 'dinotoms' (Durinskia and Kryptoperidinium), the endosymbionts (diatoms) still have mitochondria, making them the only organisms with two evolutionarily distinct mitochondria. In most of the species, the plastid genome consists of just 14 genes. The DNA of the plastid in the peridinin-containing dinoflagellates is contained in a series of small circles called minicircles. Each circle contains one or two polypeptide genes. The genes for these polypeptides are chloroplast-specific because their homologs from other photosynthetic eukaryotes are exclusively encoded in the chloroplast genome. Within each circle is a distinguishable 'core' region. Genes are always in the same orientation with respect to this core region. In terms of DNA barcoding, ITS sequences can be used to identify species, where a genetic distance of p≥0.04 can be used to delimit species; this approach has been successfully applied to resolve long-standing taxonomic confusion, as in the case of resolving the Alexandrium tamarense complex into five species. A recent study revealed that a substantial proportion of dinoflagellate genes encode unknown functions, and that these genes could be conserved and lineage-specific. Evolutionary history Dinoflagellates are mainly represented as fossils by dinocysts, which have a long geological record with lowest occurrences during the mid-Triassic, whilst geochemical markers suggest a presence to the Early Cambrian. Some evidence indicates dinosteroids in many Paleozoic and Precambrian rocks might be the product of ancestral dinoflagellates (protodinoflagellates). Dinoflagellates show a classic radiation of morphologies during the Late Triassic through the Middle Jurassic. More modern-looking forms proliferate during the later Jurassic and Cretaceous. This trend continues into the Cenozoic, albeit with some loss of diversity. Molecular phylogenetics shows that dinoflagellates are grouped with ciliates and apicomplexans (=Sporozoa) in a well-supported clade, the alveolates. The closest relatives to dinokaryotic dinoflagellates appear to be apicomplexans, Perkinsus, Parvilucifera, syndinians, and Oxyrrhis. Molecular phylogenies are similar to phylogenies based on morphology. The earliest stages of dinoflagellate evolution appear to be dominated by parasitic lineages, such as perkinsids and syndinians (e.g. Amoebophrya and Hematodinium). All dinoflagellates contain red algal plastids or remnant (nonphotosynthetic) organelles of red algal origin. The parasitic dinoflagellate Hematodinium, however, lacks a plastid entirely. Some groups that have lost the photosynthetic properties of their original red algal plastids have obtained new photosynthetic plastids (chloroplasts) through so-called serial endosymbiosis, both secondary and tertiary: Lepidodinium unusually possesses a green algae-derived plastid (all other serially acquired plastids can be traced back to red algae). The plastid is most closely related to that of the free-living Pedinomonas (hence likely secondary). Two previously undescribed dinoflagellates ("MGD" and "TGD") contain a closely related plastid. Karenia, Karlodinium, and Takayama possess plastids of haptophyte origin, produced in three separate events. "Dinotoms" (Durinskia and Kryptoperidinium) have plastids derived from diatoms. Some species also perform kleptoplasty: Dinophysis has plastids from a cryptomonad, acquired by kleptoplasty from a ciliate prey.
The Kareniaceae (which contains the three haptophyte-having genera) contains two separate cases of kleptoplasty. Dinoflagellate evolution has been summarized into five principal organizational types: prorocentroid, dinophysoid, gonyaulacoid, peridinioid, and gymnodinoid. The transitions of marine species into fresh water have been frequent events during the diversification of dinoflagellates and have occurred recently. Many dinoflagellates also have a symbiotic relationship with cyanobacteria, called cyanobionts, which have a reduced genome and have not been found outside their hosts. The dinophysoid dinoflagellates have two genera, Amphisolenia and Triposolenia, that contain intracellular cyanobionts, and four genera, Citharistes, Histioneis, Parahistioneis, and Ornithocercus, that contain extracellular cyanobionts. Most of the cyanobionts are used for nitrogen fixation, not for photosynthesis, but some do not have the ability to fix nitrogen. The dinoflagellate Ornithocercus magnificus is host to symbionts which reside in an extracellular chamber. While it is not fully known how the dinoflagellate benefits from them, it has been suggested that it is farming the cyanobacteria in specialized chambers and regularly digests some of them. Recently, the living fossil Dapsilidinium pastielsii was found inhabiting the Indo-Pacific Warm Pool, which served as a refugium for thermophilic dinoflagellates, and others such as Calciodinellum operosum and Posoniella tricarinelloides were also described from fossils before later being found alive. Examples Alexandrium Gonyaulax Gymnodinium Lingulodinium polyedrum See also Ciguatera Paralytic shellfish poisoning Yessotoxin Thin layers (oceanography) References Bibliography External links International Society for the Study of Harmful Algae Classic dinoflagellate monographs Japanese dinoflagellate site Noctiluca scintillans—Guide to the Marine Zooplankton of south eastern Australia, Tasmanian Aquaculture & Fisheries Institute Tree of Life Dinoflagellates Centre of Excellence for Dinophyte Taxonomy CEDiT Dinoflagellates Endosymbiotic events Olenekian first appearances Extant Early Triassic first appearances
Dinoflagellate
[ "Biology" ]
7,931
[ "Endosymbiotic events", "Symbiosis", "Algae", "Dinoflagellates" ]
62,304
https://en.wikipedia.org/wiki/Pfizer
Pfizer Inc. ( ) is an American multinational pharmaceutical and biotechnology corporation headquartered at The Spiral in Manhattan, New York City. The company was established in 1849, in New York by two German entrepreneurs, Charles Pfizer (1824–1906) and his cousin Charles F. Erhart (1821–1891). Pfizer develops and produces medicines and vaccines for immunology, oncology, cardiology, endocrinology, and neurology. The company's largest products by sales are the Pfizer–BioNTech COVID-19 vaccine ($11 billion in 2023 revenues), apixaban ($6 billion in 2023 revenues), a pneumococcal conjugate vaccine ($6 billion in 2023 revenues), palbociclib ($4 billion in 2023 revenues), and tafamidis ($3 billion in 2023 revenues). In 2023, 46% of the company's revenues came from the United States, 6% came from Japan, and 48% came from other countries. As of 2024, the company ranks 69th on the Fortune 500. History 1849–1950: Early history Pfizer was founded in 1849 as "Charles Pfizer & Company" by Charles Pfizer and Charles F. Erhart, two cousins who had immigrated to the United States from Ludwigsburg, Germany. The business produced chemical compounds, and was headquartered on Bartlett Street in Williamsburgh, New York where they produced an antiparasitic called santonin. This was an immediate success, although it was production of citric acid that led to Pfizer's growth in the 1880s. Pfizer continued to buy property in the area (by now the Williamsburg district of the city of Brooklyn, New York and beginning in 1898, the City of Greater New York) to expand its lab and factory, retaining offices on Flushing Avenue until the 1960s; the Brooklyn plant ultimately closed in 2009. Following their success with citric acid, Pfizer (at the now-demolished 295 Washington Avenue) and Erhart (at 280 Washington Avenue) established their main residences in the nearby Clinton Hill district, known for its concentration of Gilded Age wealth. In 1881, Pfizer moved its administrative headquarters to 81 Maiden Lane in Manhattan, presaging the company's expansion to Chicago, Illinois a year later. By 1906 sales exceeded $3million. World War I caused a shortage of calcium citrate. Pfizer imported the compound from Italy for the manufacture of citric acid, and due to the disruption in supply, the company began a search for an alternative. They found this in the form of a fungus capable of fermenting sugar to citric acid. By 1919, the company was able to commercialize production of citric acid from this source. The company developed expertise in fermentation technology as a result. These skills were applied to the deep-submergence mass production of penicillin, an antibiotic, during World War II in response to the need to treat injured Allied soldiers. The company also embarked on a global soil collection program related to improving production yields of penicillin which ultimately resulted in 135,000 samples. On June 2, 1942, the company incorporated under the Delaware General Corporation Law. 1950–1980: Pivot to pharmaceutical research and global expansion Due to price declines for penicillin, Pfizer searched for new antibiotics with greater profit potential. Pfizer discovered oxytetracycline in 1950, and this changed the company from a manufacturer of fine chemicals to a research-based pharmaceutical company. Pfizer developed a drug discovery program focused on in vitro synthesis to augment its research in fermentation technology. 
In 1959, the company established an animal health division with a farm and research facility in Terre Haute, Indiana. By the 1950s, Pfizer had established offices in Belgium, Brazil, Canada, Cuba, Mexico, Panama, Puerto Rico, and the United Kingdom. In 1960, the company moved its medical research laboratory operations out of New York City to a new facility in Groton, Connecticut. In 1980, Pfizer launched Feldene (piroxicam), a prescription anti-inflammatory medication that became Pfizer's first product to reach $1 billion in revenue. In 1965, John Powers, Jr. became chief executive officer of the company, succeeding John McKeen. As the area surrounding its Brooklyn, NY plant fell into decline in the 1970s and 1980s, the company formed a public-private partnership with New York City that encompassed the construction of low- and middle-income housing, the refurbishment of apartment buildings for the homeless, and the establishment of a charter school. In 1972, Edmund T. Pratt Jr. became chief executive officer of the company, succeeding John Powers, Jr. 1980–2000: Development of Viagra, Zoloft, and Lipitor In 1981, the company received approval for Diflucan (fluconazole), the first oral treatment for severe fungal infections including candidiasis, blastomycosis, coccidioidomycosis, cryptococcosis, histoplasmosis, dermatophytosis, and pityriasis versicolor. In 1986, Pfizer acquired the worldwide rights to Zithromax (azithromycin), a macrolide antibiotic that is recommended by the Infectious Diseases Society of America as a first-line treatment for certain cases of community-acquired pneumonia, from Pliva. In 1989, Pfizer scientists Peter Dunn and Albert Wood created Viagra (sildenafil) for treating high blood pressure and angina, chest pain associated with coronary artery disease. In 1991, it was patented in the United Kingdom as a heart medication. Early trials for the medication showed that it did not work for the treatment of heart disease, but volunteers in the clinical trials had increased erections several days after taking the drug. It was patented in the United States in 1996 and received approval by the Food and Drug Administration in March 1998. In December 1998, Pfizer hired Bob Dole as a spokesperson for the drug. The patents for Viagra expired in 2020. In 1991, William C. Steere, Jr. became chief executive officer of the company, succeeding Edmund T. Pratt Jr. In 1991, Pfizer also began marketing Zoloft (sertraline), an antidepressant of the selective serotonin reuptake inhibitor (SSRI) class developed nine years earlier by Pfizer chemists Kenneth Koe and Willard Welch. Sertraline is primarily prescribed for major depressive disorder in adult outpatients as well as obsessive-compulsive disorder, panic disorder, and social anxiety disorder in both adults and children. In 2005, the year before it became a generic drug, sales were over $3 billion and over 100 million people had been treated with the drug. The patent for Zoloft expired in the summer of 2006. In 1996, Eisai, in partnership with Pfizer, received approval from the Food and Drug Administration for donepezil under the brand Aricept for treatment of Alzheimer's disease; Pfizer also received approval for Norvasc (amlodipine), an antihypertensive drug of the dihydropyridine calcium channel blocker class. In 1997, the company entered into a co-marketing agreement with Warner–Lambert for Lipitor (atorvastatin), a statin for the treatment of hypercholesterolemia. 
Although atorvastatin was the fifth statin to be developed, clinical trials showed that atorvastatin caused a more dramatic reduction in low-density lipoprotein cholesterol (LDL-C) than the other statin drugs. Upon its patent expiration in 2011, Lipitor was the best-selling drug ever, with approximately $125 billion in sales over 14.5 years. 2000–2010: Further expansion In 2001, Henry McKinnell became chief executive officer of the company, replacing William C. Steere, Jr. In 2002, The Bill & Melinda Gates Foundation purchased stock in Pfizer. In 2004, the company received approval for Lyrica (pregabalin), an anticonvulsant and anxiolytic medication used to treat epilepsy, neuropathic pain, fibromyalgia, restless legs syndrome, and generalized anxiety disorder. The United States patent on Lyrica was challenged by generic manufacturers and was upheld in 2014, extending the expiration date to 2018. In July 2006, Jeff Kindler was named chief executive officer of the company, replacing Henry McKinnell. On December 3, 2006, Pfizer ceased development of torcetrapib, a drug that increases production of HDL, which in turn was expected to reduce the LDL thought to be correlated with heart disease. During a Phase III clinical trial involving 15,000 patients, more deaths than expected occurred in the group that took the medicine, and the mortality rate of patients taking the combination of torcetrapib and Lipitor (82 deaths during the study) was 60% higher than that of those taking Lipitor alone (52 deaths during the study). Lipitor alone was not implicated in the results, but Pfizer lost nearly $1 billion developing the failed drug and its stock price dropped 11% on the day of the announcement. Between 2007 and 2010, Pfizer spent $3.3 million on investigations and legal fees and recovered about $5.1 million, and had another $5 million of pending recoveries from civil lawsuits against makers of counterfeit prescription drugs. Pfizer has hired customs and narcotics experts worldwide to track down fakes and assemble evidence that can be used to pursue civil suits for trademark infringement. In July 2008, Pfizer announced 275 job cuts at its manufacturing facility in Portage, Michigan. Portage was previously the world headquarters of Upjohn Company, which had been acquired as part of Pharmacia. Acquisitions and mergers In June 2000, Pfizer acquired Warner-Lambert outright for $116 billion. To satisfy conditions imposed by antitrust regulators at the Federal Trade Commission, Pfizer sold off or transferred stakes in several minor products, including RID (a shampoo for treatment of head lice, sold to Bayer) and Warner-Lambert's antidepressant Celexa (which competes with Zoloft). The acquisition created what was, at the time, the second-largest pharmaceutical company worldwide. In 2003, Pfizer merged with Pharmacia, and in the process acquired Searle and SUGEN. Searle had developed Flagyl (metronidazole), a nitroimidazole antibiotic medication used particularly for anaerobic bacteria and protozoa. Searle also developed celecoxib (Celebrex), a COX-2 inhibitor and nonsteroidal anti-inflammatory drug (NSAID) used to treat pain and inflammation in osteoarthritis, acute pain in adults, rheumatoid arthritis, ankylosing spondylitis, painful menstruation, and juvenile rheumatoid arthritis. SUGEN, a company focused on protein kinase inhibitors, had pioneered the use of ATP-mimetic small molecules to block signal transduction. 
The SUGEN facility was shut down in 2003 by Pfizer, with the loss of more than 300 jobs, and several programs were transferred to Pfizer. These included sunitinib (Sutent), a cancer medication which was approved for human use by the FDA in January 2006. A related compound, SU11654 (toceranib), was also approved for cancer in dogs, and the ALK inhibitor crizotinib also grew out of a SUGEN program. In October 2006, the company announced it would acquire PowerMed. On October 15, 2009, Pfizer acquired Wyeth for $68 billion in cash and stock, including the assumption of debt, making Pfizer the largest pharmaceutical company in the world. The acquisition of Wyeth provided Pfizer with a pneumococcal conjugate vaccine, trademarked Prevnar 13; this is used for the prevention of invasive pneumococcal infections. The introduction of the original, 7-valent version of the vaccine, developed by Wyeth in February 2000, led to a 75% reduction in the incidence of invasive pneumococcal infections among children under age 5 in the United States. Pfizer introduced an improved version of the vaccine in 2010, for which it was granted a patent in India in 2017. Prevnar 13 provides coverage of 13 bacterial serotypes, expanding beyond the original 7-valent version. By 2012, the rate of invasive infections among children under age 5 had been reduced by an additional 50%. 2010–2020: Further discoveries and acquisitions In 2010, Ian Read was named chief executive officer of the company. In February 2011, Pfizer announced the closure of its UK research and development facility (formerly also a manufacturing plant) in Sandwich, Kent, which at the time employed 2,400 people. In March 2011, Pfizer acquired King Pharmaceuticals for $3.6 billion in cash. King produced emergency injectables such as the EpiPen. On September 4, 2012, the FDA approved bosutinib (Bosulif) for chronic myelogenous leukemia (CML), a rare type of leukemia and a blood and bone marrow disease that affects primarily older adults. In November 2012, Pfizer received approval from the Food and Drug Administration for Xeljanz (tofacitinib) for rheumatoid arthritis and ulcerative colitis. The drug had sales of $1.77 billion in 2018, and in January 2019, it was the top drug in the United States for direct-to-consumer advertising, passing adalimumab (Humira). On February 1, 2013, Zoetis, formerly the Agriculture Division of Pfizer and later Pfizer Animal Health, became a public company via an initial public offering, raising $2.2 billion. Later in 2013, Pfizer completed the corporate spin-off of its remaining stake in Zoetis. In September 2014, the company acquired Innopharma for $225 million, plus up to $135 million in milestone payments, in a deal that expanded Pfizer's range of generic and injectable drugs. On January 5, 2015, the company announced it would acquire a controlling interest in Redvax, expanding its vaccine portfolio targeting human cytomegalovirus. In February 2015, the company received approval from the Food and Drug Administration for palbociclib (Ibrance) for treatment of certain types of breast cancer. In March 2015, the company announced it would restart its collaboration with Eli Lilly and Company surrounding the Phase III trial of tanezumab. In May 2015, Pfizer and a Bar-Ilan University laboratory announced a partnership based on the development of medical DNA nanotechnology. In June 2015, the company acquired Nimenrix and Mencevax, meningococcal vaccines, from GlaxoSmithKline for around $130 million. 
In September 2015, Pfizer acquired Hospira for $17billion, including the assumption of debt. Hospira was the largest producer of generic injectable pharmaceuticals in the world. On November 23, 2015, Pfizer and Allergan announced a planned $160billion merger, in the largest pharmaceutical deal ever and the third largest corporate merger in history. The proposed transaction contemplated that the merged company maintain Allergan's Republic of Ireland domicile, resulting in the new company being subject to corporation tax at the relatively low rate of 12.5%. The deal was to constitute a reverse merger, whereby Allergan acquired Pfizer, with the new company then changing its name to "Pfizer, plc". On April 6, 2016, Pfizer and Allergan terminated the merger agreement after the Obama administration and the United States Department of the Treasury introduced new laws intended to limit corporate inversions (the extent to which companies could move their headquarters overseas in order to reduce the amount of taxes they pay). In June 2016, the company acquired Anacor Pharmaceuticals for $5.2billion, expanding its portfolio in both inflammation and immunology drugs areas. In August 2016, the company made a $40million bid for the assets of BIND Therapeutics, which was in bankruptcy. The same month, the company acquired Bamboo Therapeutics for $645million, expanding its gene therapy offerings. In September 2016, the company acquired cancer drug-maker Medivation for $14billion. In October 2016, the company licensed the anti-CTLA4 monoclonal antibody, ONC-392, from OncoImmune. In November 2016, Pfizer funded a $3,435,600 study with the CDC Foundation to research "screen-and-treat" strategies for cryptococcal disease in Botswana. In December 2016, Pfizer acquired AstraZeneca's small-molecule antibiotics business for $1.575 billion. In January 2018, Pfizer announced that it would end its work on research into treatments for Alzheimer's disease and Parkinsonism (a symptom of Parkinson's disease and other conditions). The company said about 300 researchers would lose their jobs. In July 2018, the Food and Drug Administration approved enzalutamide, developed by Pfizer and Astellas Pharma for patients with castration-resistant prostate cancer. In August 2018, Pfizer signed an agreement with BioNTech to conduct joint research and development activities regarding mRNA-based influenza vaccines. In October 2018, effective January 1, 2019, Albert Bourla was promoted to chief executive officer, succeeding Ian Read, his mentor. In July 2019, the company acquired Therachon for up to $810million, expanding its rare disease portfolio through Therachon's recombinant human fibroblast growth factor receptor 3 compound, aimed at treating conditions such as achondroplasia. Also in July, Pfizer acquired Array Biopharma for $10.6billion, boosting its oncology pipeline. In August 2019, Pfizer merged its consumer health business with that of GlaxoSmithKline, into a joint venture owned 68% by GlaxoSmithKline and 32% by Pfizer, with plans to make it a public company. The transaction built on a 2018 transaction where GlaxoSmithKline acquired Novartis' stake in the GSK-Novartis consumer healthcare joint business. The transaction followed negotiations with other companies including Reckitt Benckiser, Sanofi, Johnson & Johnson, and Procter & Gamble. In September 2019, Pfizer initiated a study with the CDC Foundation to investigate the tracking of healthcare-associated infections, scheduled to run through to June 2023. 
In December 2019, Pfizer awarded the CDC Foundation a further $1,948,482 to continue its cryptococcal disease screening and treatment research in nine African countries. 2020: COVID-19 pandemic and vaccine development In March 2020, Pfizer joined the COVID-19 Therapeutics Accelerator funding vehicle to expedite development of treatments against COVID-19. The $125 million initiative was launched by the Bill & Melinda Gates Foundation in partnership with Mastercard and Wellcome Trust, with additional funding announced shortly after from Chan Zuckerberg Initiative, UK Foreign, Commonwealth and Development Office and Madonna. The following month, the Foundation for the National Institutes of Health announced the Accelerating COVID-19 Therapeutic Interventions and Vaccines (ACTIV) public-private partnership to develop a coordinated research strategy for prioritizing and speeding up development of COVID-19 vaccines and pharmaceutical products. Pfizer joined the partnership as an industry "leadership organization", and participated as a collaborator in ACTIV-led clinical trials. CEO Albert Bourla attended the GAVI COVAX AMC 2021 Investment Opportunity Launch Event, otherwise named One World Protected, on April 15, 2021. In Canada, Pfizer endorsed the use of a vaccine passport mobile app developed by CANImmunize in order to record and track status of COVID-19 vaccination. As the scale of the COVID-19 pandemic became apparent, Pfizer partnered with BioNTech to study and develop COVID-19 mRNA vaccine candidates. Unlike many of its competitors, Pfizer took no initial research funds from the United States' Operation Warp Speed vaccine development program, instead choosing to invest roughly $2 billion of its own funds. Pfizer CEO Albert Bourla has said that he declined money from Operation Warp Speed to avoid government intervention, stating later that "when you get money from someone that always comes with strings. They want to see how we are going to progress, what type of moves you are going to do. They want reports. And also, I wanted to keep Pfizer out of politics, by the way." In May 2020, Pfizer began testing four different COVID-19 vaccine variations using lipid nanoparticle technology provided by Canadian biotechnology company Acuitas Therapeutics. Vaccines were injected into the first human participants in the U.S. in early May. In July 2020, Pfizer and BioNTech announced that two of the partners' four mRNA vaccine candidates had won fast track designation from the FDA. The company began PhaseII-III testing on 30,000 people in the last week of July 2020 and was slated to be paid $1.95billion for 100million doses of the vaccine by the US government. In September 2020, Pfizer and BioNTech announced that they had completed talks with the European Commission to provide an initial 200million vaccine doses to the EU, with the option to supply another 100million doses at a later date. On November 9, 2020, Pfizer announced that BioNTech's COVID-19 vaccine, tested on 43,500 people, was found to be 90% effective at preventing symptomatic COVID-19. The efficacy was updated to 95% a week later. Akiko Iwasaki, an immunologist interviewed by The New York Times, described the efficacy figure as "really a spectacular number." The announcement made Pfizer and BioNTech the first companies to develop and test a working vaccine for COVID-19. Over the following month and a half, regulators in various countries approved Pfizer's vaccine for emergency use. 
Controversy In February 2021, after a year-long investigation relying on unnamed officials, Pfizer was accused by The Bureau of Investigative Journalism (TBIJ) of employing "high-level bullying" against at least two Latin American countries during negotiations to acquire COVID-19 vaccines, including requesting that the countries put up sovereign assets as collateral for payments. According to TBIJ, these negotiation tactics resulted in a months-long delay in Pfizer reaching a vaccine agreement with one country and a complete failure to reach agreements with two other countries, including Argentina and Brazil. In November 2021, The BMJ published an article after obtaining information from a whistleblower from the Ventavia Research Group. Ventavia was hired by Pfizer as a research subcontractor. A regional director (the whistleblower) who was employed at Ventavia Research Group told The BMJ that the company falsified data, unblinded patients, employed inadequately trained vaccinators, and was slow to follow up on adverse events reported in Pfizer's pivotal phase III trial. The regional director, Brook Jackson, emailed a complaint to the US Food and Drug Administration (FDA). Ventavia fired her later the same day. The European Medicines Agency (EMA) stated in a response to the European Parliament that "the deficiencies identified do not jeopardize the quality and integrity of the data from the main Comirnaty trial and have no impact on the benefit-risk assessment or on the conclusions on the safety, effectiveness and quality of the vaccine". Science-Based Medicine emphasized that Ventavia oversaw just three of the 153 clinical sites involved with Pfizer's trial and "a small fraction (~1,000 by the time the whistleblower was fired) of the trial's over ~44,000 subjects." On 10 October 2022, during a session of the European Parliament's Special Committee on the COVID-19 Pandemic, Pfizer executive Janine Small testified that the company had not evaluated its COVID-19 vaccine for its ability to reduce transmission of the SARS-CoV-2 virus prior to its release to the general public. Dutch MEP Rob Roos described the admission as "scandalous". CEO Albert Bourla was slated to attend, but withdrew. Roos' statements in turn have been described as "misleading". Development of oral antivirals In November 2021, Pfizer launched a new COVID-19 oral antiviral treatment known as Paxlovid. In January 2022, Pfizer CEO Albert Bourla confirmed that the trial results of a fourth dose were pending until March 2022. He said that the firm was setting up a collaboration to develop an anti-COVID pill treatment along with a French company, Novasep. He also said the COVID vaccine was "safe and efficient" for children. In May 2022, reports emerged of patients experiencing "rebound" symptoms after completing a five-day course of Paxlovid. The FDA responded by announcing it had performed additional analyses of the drug's clinical trial data, and had decided against changing its recommendations. U.S. President Joe Biden and Dr. Anthony Fauci were both reported to have experienced this rebound syndrome in the months that followed, while continuing to recommend the drug for those who may benefit from it. 
In November 2020, using a Reverse Morris Trust structure, Pfizer merged its off-patent branded and generic drug business, known as Upjohn, with Mylan to form Viatris, owned 57% by Pfizer shareholders. On January 5, 2021, Pfizer introduced a new logo. In April 2021, Pfizer acquired Amplyx Pharmaceuticals and its anti-fungal compound fosmanogepix (APX001). In August, the company announced it would acquire Trillium Therapeutics Inc and its immuno-oncology portfolio for $2.3 billion. In March 2022, the company acquired Arena Pharmaceuticals for $6.7 billion in cash. In June 2022, the company acquired ReViral Ltd, for up to $525 million, gaining access to experimental drugs used to combat respiratory syncytial virus infections. In October 2022, the company acquired Biohaven Pharma and its calcitonin gene-related peptide programs for $11.6 billion. It also acquired Global Blood Therapeutics for $5.4 billion, boosting Pfizer's rare disease business. In April 2023, Pfizer moved its world headquarters from 42nd Street in Midtown Manhattan to the Spiral at Hudson Yards. In December 2023, the company acquired Seagen, a pioneer of antibody–drug conjugates for the treatment of cancer, for $43billion. On Sept 30, 2024, Pfizer announced its intentions to sell 540 million Haleon shares whose worth is about  £2.1 billion ($2.8 billion)  according to Bloomberg calculations. Acquisition history Pfizer Warner–Lambert William R. Warner Lambert Pharmacal Company Parke-Davis Wilkinson Sword Agouron Pharmacia Pharmacia & Upjohn Pharmacia Farmitalia Carlo Erba Kabi Pharmacia Pharmacia Aktiebolaget The Upjohn Company Monsanto Searle Esperion Therapeutics Meridica Vicuron Pharmaceuticals Idun Angiosyn Powermed Rinat Coley Pharmaceutical Group CovX Encysive Pharmaceuticals Inc Wyeth Chef Boyardee S.M.A. Corporation Ayerst Laboratories Fort Dodge Serum Company Bristol-Myers Parke-Davis A.H. Robins Sherwood Medical Genetics Institute, Inc. American Cyanamid Lederle Laboratories Solvay King Pharmaceuticals Monarch Pharmaceuticals, Inc. King Pharmaceuticals Research and Development, Inc. Meridian Medical Technologies, Inc. Parkedale Pharmaceuticals, Inc. King Pharmaceuticals Canada Inc. Monarch Pharmaceuticals Ireland Limited Synbiotics Corporation Icagen Ferrosan Excaliard Pharmaceuticals Alacer Corp NextWave Pharmaceuticals, Inc Innopharma Redvax GmbH Hospira Mayne Pharma Ltd Pliva-Croatia Orchid Chemicals & Pharmaceuticals Ltd. Javelin Pharmaceuticals, Inc. TheraDoc Arixa Pharmaceuticals Anacor Pharmaceuticals Bamboo Therapeutics Medivation AstraZeneca Array BioPharma Amplyx Pharmaceuticals Trillium Therapeutics Arena Pharmaceuticals ReViral Ltd Biohaven Pharma Kleo Pharmaceuticals, Inc. Seagen Cascadian Therapeutics Legal issues Aggressive pharmaceutical marketing Pfizer has been accused of aggressive pharmaceutical marketing. 2004 Illegal marketing of gabapentin for off-label uses settlement In 1993, the Food and Drug Administration (FDA) approved gabapentin only for treatment of seizures. Warner–Lambert, which merged with Pfizer in 2000, used continuing medical education and medical research, sponsored articles about the drug for the medical literature, and alleged suppression of unfavorable study results, to promote gabapentin. Within five years, the drug was being widely used for off-label uses such as treatment of pain and psychiatric conditions. Warner–Lambert admitted to violating FDA regulations by promoting the drug for pain, psychiatric conditions, migraine, and other unapproved uses. 
In 2004, the company paid $430million in one of the largest settlements to resolve criminal and civil health care liability charges. It was the first off-label promotion case successfully brought under the False Claims Act. A Cochrane review concluded that gabapentin is ineffective in migraine prophylaxis. The American Academy of Neurology rates it as having unproven efficacy, while the Canadian Headache Society and the European Federation of Neurological Societies rate its use as being supported by moderate and low-quality evidence. 2009 Illegal marketing of Bextra settlement In September 2009, Pfizer pleaded guilty to the illegal marketing of arthritis drug valdecoxib (Bextra) and agreed to a $2.3billion settlement, the largest health care fraud settlement at that time. Pfizer promoted the sale of the drug for several uses and dosages that the Food and Drug Administration specifically declined to approve due to safety concerns. The drug was pulled from the market in 2005. It was Pfizer's fourth such settlement in a decade. The payment included $1.195billion in criminal penalties for felony violations of the Federal Food, Drug, and Cosmetic Act, and $1.0billion to settle allegations it had illegally promoted the drugs for uses that were not approved by the Food and Drug Administration (FDA) leading to violations under the False Claims Act as reimbursements were requested from Federal and State programs. The criminal fine was the largest ever assessed in the United States to date. Pfizer entered a corporate integrity agreement with the Office of Inspector General that required it to make substantial structural reforms within the company, and publish to its website its post approval commitments and a searchable database of all payments to physicians made by the company. Termination of Peter Rost Peter Rost was vice president in charge of the endocrinology division at Pharmacia before its acquisition by Pfizer. During that time he raised concerns internally about kickbacks and off-label marketing of Genotropin, Pharmacia's human growth hormone drug. Pfizer reported the Pharmacia marketing practices to the FDA and Department of Justice; Rost was unaware of this and filed an FCA lawsuit against Pfizer. Pfizer kept him employed, but isolated him until the FCA suit was unsealed in 2005. The Justice Department declined to intervene, and Pfizer fired him, and he filed a wrongful termination suit against Pfizer. Pfizer won a summary dismissal of the case, with the court ruling that the evidence showed Pfizer had decided to fire Rost prior to learning of his whistleblower activities. 2014 Illegal marketing of Rapamune settlement A "whistleblower suit" was filed in 2005 against Wyeth, which was acquired by Pfizer in 2009, alleging that the company illegally marketed sirolimus (Rapamune) for off-label uses, targeted specific doctors and medical facilities to increase sales of Rapamune, tried to get transplant patients to change from their transplant drugs to Rapamune, and specifically targeted African-Americans. According to the whistleblowers, Wyeth also provided doctors and hospitals that prescribed the drug with kickbacks such as grants, donations, and other money. In 2013, the company pleaded guilty to criminal mis-branding violations under the Federal Food, Drug, and Cosmetic Act. By August 2014, it had paid $491million in civil and criminal penalties related to Rapamune. 
2014 Illegal marketing settlement In June 2010, health insurance network Blue Cross Blue Shield (BCBS) filed a lawsuit against Pfizer for allegedly illegally marketing the drugs Bextra, Geodon, and Lyrica. BCBS alleged that Pfizer used kickbacks and wrongly persuaded doctors to prescribe the drugs. According to the lawsuit, Pfizer handed out 'misleading' materials on off-label uses, sent over 5,000 doctors on trips to the Caribbean or around the United States, and paid them $2,000 honoraria in return for listening to lectures about Bextra. Despite Pfizer's claims that "the company's intent was pure" in fostering a legal exchange of information among doctors, an internal marketing plan revealed that Pfizer intended to train physicians "to serve as public relations spokespeople." The case was settled in 2014 for $325 million. Fearing that Pfizer was "too big to fail" and that prosecuting the company would result in disruptions to Medicare and Medicaid, federal prosecutors instead charged a subsidiary of a subsidiary of a subsidiary of Pfizer, described as "nothing more than a shell company whose only function is to plead guilty." 2013 Quigley Company asbestos settlement The Quigley Company, which sold asbestos-containing insulation products until the early 1970s, was acquired by Pfizer in 1968. In June 2013, asbestos victims and Pfizer negotiated a settlement that required Pfizer to pay a total of $964 million: $430 million to 80% of existing plaintiffs, and to place an additional $535 million into a settlement trust that would compensate future plaintiffs as well as the remaining 20% of plaintiffs with claims against Pfizer and Quigley. Of that $535 million, $405 million is in a 40-year note from Pfizer, while $100 million is from insurance policies. 1994 Shiley defective heart valves settlement Pfizer purchased Shiley in 1979, at the onset of its Convexo-Concave valve ordeal, involving the Bjork–Shiley valve. Approximately 500 people died when defective heart valves fractured and, in 1994, Pfizer agreed to pay $10.75 million to settle claims by the United States Department of Justice that the company lied to get approval for the valves. 2010 Firing of employee who filed suit A federal lawsuit was filed by a scientist claiming she contracted an infection from a genetically modified lentivirus while working for Pfizer, resulting in intermittent paralysis. A judge dismissed the case, citing a lack of evidence that the illness was caused by the virus, but the jury ruled that by firing the employee, Pfizer violated laws protecting freedom of speech and whistleblowers, and awarded her $1.37 million. 2012 Celebrex intellectual property settlement Brigham Young University (BYU) said a professor of chemistry, Dr. Daniel L. Simmons, discovered an enzyme in the 1990s that led toward the development of Celebrex. BYU was originally seeking a 15% royalty on sales, equating to $9.7 billion. A research agreement had been made between BYU and Monsanto, whose pharmaceutical business was later acquired by Pfizer, to develop a better aspirin. The enzyme Dr. Simmons claims to have discovered would induce pain and inflammation while causing gastrointestinal problems, and Celebrex is used to reduce those issues. A six-year battle ensued because BYU claimed that Pfizer did not give Dr. Simmons credit or compensation, while Pfizer claimed that it had met all obligations regarding the Monsanto agreement. In May 2012, Pfizer settled the allegations, agreeing to pay $450 million. 
2011 Nigeria Trovafloxacin lawsuit settlement In 1996, an outbreak of measles, cholera, and bacterial meningitis occurred in Nigeria. Pfizer representatives and personnel from a contract research organization (CRO) traveled to Kano to set up a clinical trial and administer an experimental antibiotic, trovafloxacin, to approximately 200 children. Local Kano officials reported that more than fifty children died in the experiment, while many others developed mental and physical deformities. The nature and frequency of both fatalities and other adverse outcomes were similar to those historically found among pediatric patients treated for meningitis in sub-Saharan Africa. In 2001, families of the children, as well as the governments of Kano and Nigeria, filed lawsuits regarding the treatment. According to Democracy Now!, "[r]esearchers did not obtain signed consent forms, and medical personnel said Pfizer did not tell parents their children were getting the experimental drug." The lawsuits also accused Pfizer of using the outbreak to perform unapproved human testing, as well as allegedly under-dosing a control group being treated with traditional antibiotics in order to skew the results of the trial in favor of Trovan. Nigerian medical personnel as well as at least one Pfizer physician said the trial was conducted without regulatory approval. In 2007, Pfizer published a Statement of Defense letter, stating that the drug's oral form was safer and easier to administer. Trovan had been used safely in more than five thousand Americans prior to the Nigerian trial, and mortality in the patients treated by Pfizer was lower than that observed historically in African meningitis epidemics. No unusual side effects, unrelated to meningitis, were observed after four weeks. In June 2010, the US Supreme Court rejected Pfizer's appeal against a ruling allowing lawsuits by the Nigerian families to proceed. In December 2010, the United States diplomatic cables leak indicated that Pfizer hired investigators to find evidence of corruption against Nigerian attorney general Michael Aondoakaa to persuade him to drop legal action. The Washington Post reporter Joe Stephens, who helped break the story in 2000, called these actions "dangerously close to blackmail". In response, the company released a press statement describing the allegations as "preposterous" and saying that it acted in good faith. Aondoakaa, who had allegedly demanded bribes from Pfizer in return for a settlement of the case, was declared unfit for office and had his U.S. visa revoked in association with corruption charges in 2010. The lawsuits were eventually settled out of court. Pfizer committed to paying "to compensate the families of children in the study", another $30 million to "support healthcare initiatives in Kano", and $10 million to cover legal costs. Payouts began in 2011. 2022 Inflating Prices fine In July 2022, UK antitrust authorities fined Pfizer £63 million for charging an unfairly high price for a drug that aids in controlling epileptic seizures. The Competition and Markets Authority stated that the company took advantage of loopholes by de-branding the epilepsy drug Epanutin; by doing so, the price of Epanutin was no longer regulated to the same standards as before, and the price of the drug was therefore raised. It was stated that over a four-year period, Pfizer had billed Epanutin at between around 780% and 1,600% above its standard price. 
2022 Allegations of patent infringement on mRNA technology In August 2022, Moderna announced that it will sue Pfizer and its partner BioNTech for infringing their patent on the mRNA technology. In May 2024, the European Patent Office upheld the validity of Moderna's EP949 patent, one of the two patents asserted against Pfizer and BioNTech. Environmental record Since 2000, the company has implemented more than 4,000 greenhouse gas reduction projects. Pfizer has inherited Wyeth's liabilities in the American Cyanamid site in Bridgewater Township, New Jersey, a highly toxic EPA Superfund site. Pfizer has since attempted to remediate this land in order to clean and develop it for future profits and potential public uses. The Sierra Club and the Edison Wetlands Association have opposed the cleanup plan, arguing that the area is subject to flooding, which could cause pollutants to leach. The EPA considers the plan the most reasonable from considerations of safety and cost-effectiveness, arguing that an alternative plan involving trucking contaminated soil off site could expose cleanup workers. The EPA's position is backed by the environmental watchdog group CRISIS. In June 2002, a chemical explosion at the Groton plant injured 7 people and caused the evacuation of more than 100 homes in the surrounding area. Public-private engagement Pfizer engages with the public and private sectors in a variety of settings including to promote research and development, academic funding, event sponsorship, philanthropy, and political lobbying. Academia Institute for Advanced Study – Matching gifts and direct donor. University of Toronto – Donor to the Boundless Campaign, and member of the President's Circle. University of Washington – Member of the Honor Roll of Donors, having contributed between $10 million and $50 million to funding the school as of 2020. Activism Habitat for Humanity – Donor. Human Rights Campaign (HRC) – Corporate partner. National Women's Law Center – Donor. Share Our Strength – Donor. WaterAid – Partner. Conferences and summits Women in Medicine Summit – Sponsor. World Neuroscience Innovation Forum – Strategic partner. Media During the COVID-19 pandemic, Pfizer engaged many forms of media to promote their COVID-19 vaccine, including a commissioned National Geographic documentary. Pfizer is also a donor to the National Geographic Society. Pfizer was a prominent sponsor of the 2022 Oscars ceremony alongside BioNTech. Pfizer has been a major donor to the National Press Foundation. Pfizer sponsored a program for the NPF called "Cancer Issues 2010" to train journalists to "understand the latest research" on various cancers, including the role of pharmaceutical products and vaccines. MicroRNA (miRNA) was also a listed topic. Pfizer sponsors 19 to Zero, a "coalition of academics, public health experts, behavioural economists, and creative professionals" that develops media and educational materials to influence public perception surrounding COVID-19 and COVID-19 vaccines. Medical societies American Society of Hematology – Sponsor. Arthritis Society – National partner. Pfizer also supports the organization's provincial branches in Alberta, British Columbia, Manitoba, New Brunswick, Newfoundland and Labrador, Nova Scotia, Ontario, Prince Edward Island, and Quebec. Canadian Cancer Society – Sponsor. Canadian Paediatric Society – Funding. CPS is the organization that administers the Canadian Immunization Monitoring Program, Active (IMPACT) vaccine safety program. 
Canadian Society of Internal Medicine – Annual conference sponsor with Bristol Myers Squibb. Endocrine Society – Corporate Liaison Board member. European Society of Cardiology – Sponsor of the EURObservational Research Programme. Spanish Cardiac Society – Strategic partner. Political lobbying Pfizer is affiliated with a variety of industry organizations engaging in political lobbying, and has made substantial direct donations to government and regulatory agencies: Adult Vaccine Access Coalition – Member. Alliance for a Stronger FDA – Member. AMR Industry Alliance – Member. BIOTECanada – Member company. Bipartisan Policy Center – Donor. The Business Council – Member, represented by CEO Albert Bourla. Business Council for the United Nations – Member. Center on Budget and Policy Priorities – Funder. Community Anti-Drug Coalitions of America (CADCA) – Partner. COVID-19 Vaccine Education and Equity Project – Sponsor. European Federation of Pharmaceutical Industries and Associations – Member. Foundation for the National Institutes of Health (FNIH) – Donor. Pfizer has given between $5,000,000 and $9,999,999 to the foundation between 1997 and 2020, contributing to funding the activities of the National Institutes of Health. Global Health Council – Member. Immunisation Coalition (Australia) – Sponsor. Innovative Medicines Canada – Member. IMC is an association of pharmaceutical companies doing business in Canada. The group lobbies the Government of Ontario and House of Commons of Canada through Rubicon Strategy, a firm owned by Progressive Conservative Party of Ontario campaign manager Kory Teneycke. International Federation of Pharmaceutical Manufacturers & Associations (IFPMA) – Member. Life Sciences British Columbia (LSBC) – Member company and Platinum Sponsor. National Health Council (NHC) – Member organization. NHC is a non-profit organization that lobbies the U.S. Government on issues related to healthcare reform. National Pharmaceutical Council (NPC) – Member company. Personalized Medicine Coalition (PMC) – Member. Pharmaceutical Advertising Advisory Board (PAAB) – Client. Pharmaceutical Research and Manufacturers of America (PhRMA) – Member company. Reagan-Udall Foundation for the Food and Drug Administration – Donor. Research!America – Member organization. U.S. Global Leadership Coalition – Member. World Economic Forum – Member organization. Scott Gottlieb, who resigned as FDA commissioner in April 2019, joined the Pfizer board of directors three months later, in July 2019. Pfizer lobbied various officials in the Government of British Columbia between April and November 2012, including then-premier Christy Clark, future premier John Horgan, future health minister Adrian Dix, and future deputy premier, minister of public safety and solicitor general Mike Farnworth. The disclosed purpose was to "provide health policy and pharmaceutical information and communications on behalf of Pfizer Canada," and "learn and understand the budgetary, policy and strategic directions of the Government." Professional associations Academy of Surgical Research (ASR) – 2021 Annual Meeting sponsor. American Statistical Association (ASA) – Corporate supporter. Bioscience Association Manitoba (BAM) – Sponsor. British Columbia Pharmacy Association (BCPA) – Event sponsor. Canadian Association for Clinical Microbiology and Infectious Diseases (CACMID) – Patron (former). Canadian Association of Emergency Physicians (CAEP) – Corporate partner. Canadian Association of Medical Oncologists – Annual meeting sponsor. 
Canadian Medical Association – Sponsor. In 2009, Pfizer partnered with the CMA to launch a continuing medical education course for physicians. Canadian Pharmacists Association and Canadian Pharmacists Journal – Sponsor. Canadian Public Health Association - Sponsor. Canadian Rheumatology Association – Sponsor. Canadian Urological Association – Sponsor. Ontario Medical Association (OMA) – Donor to the Ontario Medical Foundation. Pharmacy Association of Nova Scotia – Sponsor. Public health Pfizer has engaged in a number of public health and global health initiatives worldwide, and provides funding for health care facilities of various specialties in Canada and the United States: CANImmunize – Endorsing partner. CANImmunize is a vaccine passport software company funded primarily by the Public Health Agency of Canada, and partnered with governments, health agencies, academia and pharmaceutical companies across Canada. Centre for Addiction and Mental Health – Donor. Dana–Farber Cancer Institute – Donor. Federation of Medical Women of Canada – Sponsor. Food Allergy Canada – Corporate partner, providing funding and advocacy support. Hospital for Sick Children (SickKids) – Donor to the SickKids Foundation. Medical Teams International – Corporate donor. North Bay Regional Health Center – Donor to the NBRHC Foundation. Princess Margaret Cancer Centre (PMCC) – Conference sponsor, and donor to the Princess Margaret Cancer Foundation. Scarborough Health Network (SHN) – Donor to the SHN Foundation. Sinai Health Foundation – Donor. The foundation funds Mount Sinai Hospital, Bridgepoint Active Healthcare, and the Lunenfeld-Tanenbaum Research Institute in Toronto, Ontario. Sunnybrook Health Sciences Centre – Donor. University Hospitals Kingston Foundation – Donor. UHKF raises funds for the Kingston Health Sciences Centre and Providence Care. William Osler Health System – Event sponsor. Pfizer sponsored a presentation in January 2020 delivered by Julie Bettinger through British Columbia's Provincial Health Services Authority (PHSA) titled "Vaccine hesitancy: It doesn't matter if the vaccine works if nobody gets it." In 2020, Pfizer provided funding in the range of $100,000.00 – $250,000.00 to Ronald McDonald House Charities “to provide resources that directly improve the health and well-being of children and their families.” Research and development Pfizer has partnered with and sponsored many medical research networks and professional associations in the United States, Canada and globally: ABC Global Alliance – Main sponsor. The alliance is a Portuguese not-for-profit society supporting research into advanced breast cancer. Accelerating COVID-19 Therapeutic Interventions and Vaccines (ACTIV) – Industry partner. AdvaMed – Member (former). Alliance for Regenerative Medicine – Member organization. The alliance is an international advocacy organization supporting the development of regenerative medicines including gene therapy and stem-cell therapy. Arthritis Australia – Donor. BioFIT – Sponsor. BioFIT holds events to connect academia, pharmaceutical companies, and investors in the field of life sciences and biotechnology. Canadian Frailty Network – Industry partner. CFN has provided research grants related to COVID-19. Colorectal Cancer Canada – Sponsor. Drugs for Neglected Diseases Initiative – Partner. DNDI is a non-profit drug research and development organization that expedites creation and delivery of medicines for diseases including leishmaniasis, sleeping sickness, and hepatitis C. 
GISAID – Funding for COVID-19 operations. Heart and Stroke Foundation of Canada – National corporate partner and sponsor. Lung Health Foundation – Partner. Funds research into infectious lung disease and lobbying for policy changes. Mentoring in IBD – Sponsor. Annual educational program for Canadian gastroenterologists. Mount Sinai Hospital (Toronto) – Sponsor for research into infectious diseases such as COVID-19 through educational grants. Nova Scotia Chronic Pain Collaborative Care Network – Investment in Canadian health research. Ontario Hospital Research Institute (OHRI) – Research grants. Pinnacle Research Group – Sponsor. Radcliffe Cardiology – Industry partner. Truth Initiative – Featured partner. The initiative performs research and policy studies related to the reduction of tobacco use in youth. Corporate affairs Board of directors The company's board consisted of the following directors: Ronald E. Blaylock, Managing Partner of GenNx360 Capital Partners Albert Bourla, CEO of Pfizer Mortimer J. Buckley, former CEO of The Vanguard Group Sue Desmond-Hellmann, former CEO of The Bill and Melinda Gates Foundation Joseph J. Echevarria, former CEO of Deloitte LLP Scott Gottlieb, former Commissioner of the FDA Helen Hobbs, Professor at the University of Texas Southwestern Medical Center Susan Hockfield, 16th President of the Massachusetts Institute of Technology Dan Littman, professor of Molecular Immunology at New York University Shantanu Narayen, CEO of Adobe Suzanne Nora Johnson, former Vice Chairman of Goldman Sachs James Quincey, CEO of The Coca-Cola Company James C. Smith, former CEO of Thomson Reuters Cyrus Taraporevala, former President and CEO of State Street Global Advisors Ownership The largest shareholders of Pfizer were: The Vanguard Group (9.11%) BlackRock (7.69%) State Street (5.13%) Wellington Management Group (2.89%) Charles Schwab Corporation (2.30%) Geode Capital Management (2.08%) Norges Bank (1.48%) Morgan Stanley (1.38%) Massachusetts Financial Services (1.26%) State Farm (0.96%) See also Biotech and pharmaceutical companies in the New York metropolitan area Companies of the United States with untaxed profits Fire in the Blood (2013 film) List of pharmaceutical companies References External links 1849 establishments in New York (state) 1940s initial public offerings American brands American companies established in 1849 Biotechnology companies of the United States Chemical companies established in 1849 Chemical companies of the United States Clinical trial organizations Companies based in Manhattan Companies listed on the New York Stock Exchange Companies listed on the Bombay Stock Exchange Former components of the Dow Jones Industrial Average Companies in the Dow Jones Global Titans 50 Life sciences industry Multinational companies based in New York City Pharmaceutical companies established in 1849 Pharmaceutical companies of the United States Publicly traded companies based in New York City Research and development in the United States Vaccine producers COVID-19 vaccine producers
Pfizer
[ "Biology" ]
11,095
[ "Life sciences industry" ]
62,313
https://en.wikipedia.org/wiki/Pharmacia
Pharmacia was a pharmaceutical and biotechnological company in Sweden that merged with the American pharmaceutical company Upjohn in 1995. History Pharmacia was founded in 1911 in Stockholm, Sweden, by pharmacist Gustav Felix Grönfeldt at the Elgen Pharmacy. The company was named after the Greek word φαρμακεία, transliterated pharmakeia, which means 'sorcery'. In the company's early days, much of its profits were derived from the "miracle medicine" Phospho-Energon. During World War II, Swedish chemist Björn Ingelman (who worked for Arne Tiselius at Uppsala University) researched various uses for the polysaccharide dextran. Together with the medical researcher Anders Grönwall, he discovered that dextran could be used as a replacement for blood plasma in blood transfusions, for which there could be a large need in wartime. Pharmacia, which then was still a small company, was contacted in 1943, and its CEO Elis Göth was very interested. The product Macrodex, a dextran solution, was launched four years later. Dextran-based products were to play a significant role in the further expansion of Pharmacia. In 1951, the company moved to Uppsala, Sweden, to get closer to the scientists with whom it cooperated, and Ingelman became its head of research. In 1959, Pharmacia pioneered gel filtration with its Sephadex products. These were also based on dextran and discoveries in Tiselius' department, this time by Jerker Porath and Per Flodin. In 1967, Pharmacia Fine Chemicals was established in Uppsala. In 1986, Pharmacia Fine Chemicals acquired LKB-produkter AB and changed its name to Pharmacia Biotech. Pharmacia Biotech expanded its role in the "biotech revolution" through its acquisition of PL Laboratories from Pabst Brewery, which offered a line of recombinant DNA specialty research chemicals. Sold to private interests in the 1990s, Pharmacia was first merged with "Kabi Vitrum" to form Kabi Pharmacia, with headquarters in Uppsala. In 1993, Kabi Pharmacia bought Farmitalia, an Italian company that had developed doxorubicin, a chemotherapeutic. In 1995, the company merged with the American pharmaceutical company Upjohn, becoming known as Pharmacia & Upjohn, and moved its headquarters to London. In 1998, the company was divided into two business areas. The pharmaceutical business became Pharmacia & Upjohn. The scientific instruments group, which sold chromatography resins, purification equipment, molecular biology reagents, and electrophoresis products, was purchased by Amersham in 1998 and was named Amersham Pharmacia Biotech. It later changed its name to Amersham Biosciences and ran its radiochemical and reagents business along with the highly profitable chromatography business. The Pharmacia drop logo remained a highly recognized brand. Amersham Biosciences was sold to GE Healthcare in 2004 to become GE Healthcare Life Sciences. On 1 April 2020, GE Healthcare Life Sciences was renamed Cytiva, following the sale of GE Healthcare Life Sciences from General Electric to Danaher Corporation in a $21.4 billion acquisition. 
Overview The following is an illustration of the company's mergers, acquisitions, spin-offs and historical predecessors: References External links Cytiva Pharmaceutical companies of Sweden Biotechnology companies of Sweden Defunct companies of Sweden Pfizer Pharmaceutical companies established in 1911 Pharmaceutical companies disestablished in 2002 Life sciences industry Biotechnology companies disestablished in 2002 Uppsala Swedish companies established in 1911 Swedish companies disestablished in 2002
Pharmacia
[ "Biology" ]
792
[ "Life sciences industry" ]
62,323
https://en.wikipedia.org/wiki/Adamant
Adamant in classical mythology is an archaic form of diamond. In fact, the English word diamond is ultimately derived from adamas, via Late Latin and Old French. In ancient Greek the word is ἀδάμας (adamas), genitive ἀδάμαντος (adamantos), literally 'unconquerable, untameable'. In those days, the qualities of hard metal (probably steel) were attributed to it, and adamant became an independent concept as a result. In the Middle Ages adamant also became confused with the magnetic rock lodestone, and a folk etymology connected it with the Latin adamare, 'to love or be attached to'. Another connection was the belief that adamant (the diamond definition) could block the effects of a magnet. This was addressed in chapter III of Pseudodoxia Epidemica, for instance. Since the contemporary word diamond is now used for the hardest gemstone, the increasingly archaic noun adamant has been reduced to mostly poetic or anachronistic use. In that capacity, the name, and various derivatives of it, are frequently used in modern media to refer to a variety of fictional substances. In mythology Adamant is used as a translation in the King James Bible in Ezekiel 3:9 for the word שמיר (Shamir), the original word in the Hebrew Bible. In Greek mythology, Cronus castrated his father Uranus using an adamant sickle given to him by his mother Gaia. An adamantine sickle or sword was also used by the hero Perseus to decapitate the Gorgon Medusa while she slept. In the Greek tragedy Prometheus Bound (translated by G. M. Cookson), Hephaestus is to bind Prometheus "to the jagged rocks in adamantine bonds infrangible". In Virgil's Aeneid, the gate of Tartarus is framed with pillars of solid adamant, "that no might of man, nay, not even the sons of heaven, could uproot in war". In John Milton's epic poem Paradise Lost, adamant or adamantine is mentioned eight times. First in Book 1, Satan is hurled "to bottomless perdition, there to dwell in adamantine chains and penal fire" (lines 47–48). Three times in Book 2 the gates of hell are described as being made of adamantine (lines 436, 646 and 853). In Book 6, Satan "Came towring [sic], armd [sic] in Adamant and Gold" (line 110), his shield is described as "of tenfold adamant" (line 255), and the armor worn by the fallen angels is described as "adamantine" (line 542). Finally, in Book 10 the metaphorical "Pinns [sic] of Adamant and Chains" (lines 318–319) bind the world to Satan, and thus to sin and death. In some versions of the Alexander Romance, Alexander the Great builds walls of Adamantine, the Gates of Alexander, to keep the giants Gog and Magog from pillaging the peaceful southern lands. In The Hypostasis of the Archons, a Gnostic scripture from the Nag Hammadi Library, the Adamantine Land is an incorruptible place 'above' from whence the spirit came to dwell within man so that he became Adam, he who moves upon the ground with a living soul. In popular culture In The Divine Comedy by Dante, completed 1320, the angel at purgatory's gate sits on adamant. In the Early Modern epic poem The Faerie Queene, published 1590, Sir Artegal's sword is made of Adamant. In Holy Sonnet I, published 1620, John Donne states in line 14, "And thou like adamant draw mine iron heart". In Gulliver's Travels by Jonathan Swift, the base of the fictitious flying island of Laputa (Part III of Gulliver's Travels) is constructed of Adamant. In J. R. R. Tolkien's The Lord of the Rings, Nenya, one of the Three Rings of Power, is set with a gem of adamant; the fortress of Barad-dûr is also partly built from "adamant".
The crown of Gondor is described as having "seven gems of adamant". In the tabletop roleplaying game Dungeons & Dragons, Adamantine is an exotic metal of great strength. In His Dark Materials by Philip Pullman, in the third book, The Amber Spyglass (2000), Lord Asriel's tower is made of adamant. See also Adamant (1811 ship) Adamant Mountain, in Canada Adam Ant, musician adamant, a noun defined at Wiktionary Adamant, Vermont, a village in Washington County, Vermont, US Adamantane, a bulky hydrocarbon Adamantine spar, a real mineral adamantine, an adjective defined at Wiktionary Aggregated diamond nanorods, ultrahard, nanocrystalline form of diamond Unobtainium, a name given to exotic, fictional materials used in science fiction Adamantina, a Brazilian municipality in the state of São Paulo. Adamantium, a fictional metal alloy in the Marvel Universe References Fictional metals Mythological substances Superhard materials Objects in Greek mythology
Adamant
[ "Physics", "Chemistry" ]
1,068
[ "Mythological substances", "Materials", "Superhard materials", "Matter" ]
4,257,207
https://en.wikipedia.org/wiki/Poisson%E2%80%93Lie%20group
In mathematics, a Poisson–Lie group is a Poisson manifold that is also a Lie group, with the group multiplication being compatible with the Poisson algebra structure on the manifold. The infinitesimal counterpart of a Poisson–Lie group is a Lie bialgebra, in analogy to Lie algebras as the infinitesimal counterparts of Lie groups. Many quantum groups are quantizations of the Poisson algebra of functions on a Poisson–Lie group. Definition A Poisson–Lie group is a Lie group equipped with a Poisson bracket for which the group multiplication with is a Poisson map, where the manifold has been given the structure of a product Poisson manifold. Explicitly, the following identity must hold for a Poisson–Lie group: where and are real-valued, smooth functions on the Lie group, while and are elements of the Lie group. Here, denotes left-multiplication and denotes right-multiplication. If denotes the corresponding Poisson bivector on , the condition above can be equivalently stated as In particular, taking one obtains , or equivalently . Applying Weinstein splitting theorem to one sees that non-trivial Poisson-Lie structure is never symplectic, not even of constant rank. Poisson-Lie groups - Lie bialgebra correspondence The Lie algebra of a Poisson–Lie group has a natural structure of Lie coalgebra given by linearising the Poisson tensor at the identity, i.e. is a comultiplication. Moreover, the algebra and the coalgebra structure are compatible, i.e. is a Lie bialgebra, The classical Lie group–Lie algebra correspondence, which gives an equivalence of categories between simply connected Lie groups and finite-dimensional Lie algebras, was extended by Drinfeld to an equivalence of categories between simply connected Poisson–Lie groups and finite-dimensional Lie bialgebras. Thanks to Drinfeld theorem, any Poisson–Lie group has a dual Poisson–Lie group, defined as the Poisson–Lie group integrating the dual of its bialgebra. Homomorphisms A Poisson–Lie group homomorphism is defined to be both a Lie group homomorphism and a Poisson map. Although this is the "obvious" definition, neither left translations nor right translations are Poisson maps. Also, the inversion map taking is not a Poisson map either, although it is an anti-Poisson map: for any two smooth functions on . Examples Trivial examples Any trivial Poisson structure on a Lie group defines a Poisson–Lie group structure, whose bialgebra is simply with the trivial comultiplication. The dual of a Lie algebra, together with its linear Poisson structure, is an additive Poisson–Lie group. These two example are dual of each other via Drinfeld theorem, in the sense explained above. Other examples Let be any semisimple Lie group. Choose a maximal torus and a choice of positive roots. Let be the corresponding opposite Borel subgroups, so that and there is a natural projection . Then define a Lie group which is a subgroup of the product , and has the same dimension as . The standard Poisson–Lie group structure on is determined by identifying the Lie algebra of with the dual of the Lie algebra of , as in the standard Lie bialgebra example. This defines a Poisson–Lie group structure on both and on the dual Poisson Lie group . This is the "standard" example: the Drinfeld-Jimbo quantum group is a quantization of the Poisson algebra of functions on the group . Note that is solvable, whereas is semisimple. See also Lie bialgebra Quantum group Affine quantum group Quantum affine algebras References Lie groups Symplectic geometry Structures on manifolds
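The displayed formulas in the definition above did not survive in this text. The block below is a hedged reconstruction of the standard multiplicativity condition, assuming the usual conventions (f1 and f2 smooth real-valued functions on G, g and g' group elements, Lg and Rg' left and right translation, π the Poisson bivector); it should be checked against a standard reference such as Drinfeld's original papers.

```latex
% Hedged reconstruction of the defining identity of a Poisson-Lie group;
% the article's displayed formulas were lost, standard conventions assumed.
\[
  \{f_1, f_2\}(g g')
  \;=\; \{f_1 \circ L_g,\; f_2 \circ L_g\}(g')
  \;+\; \{f_1 \circ R_{g'},\; f_2 \circ R_{g'}\}(g).
\]
% Equivalently, in terms of the Poisson bivector \pi on G:
\[
  \pi(g g') \;=\; (L_g)_{*}\,\pi(g') \;+\; (R_{g'})_{*}\,\pi(g),
\]
% so taking g = g' = e gives \pi(e) = 2\,\pi(e), hence \pi(e) = 0: the
% Poisson structure of a Poisson-Lie group always vanishes at the identity.
```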
Poisson–Lie group
[ "Mathematics" ]
783
[ "Lie groups", "Mathematical structures", "Algebraic structures" ]
4,257,261
https://en.wikipedia.org/wiki/Aubertite
Aubertite is a mineral with the chemical formula CuAl(SO4)2Cl·14H2O. It is colored blue. Its crystals are triclinic pedial. It is transparent. It has vitreous luster. It is not radioactive. Aubertite is rated 2-3 on the Mohs Scale. The sample was collected by J. Aubert (born 1929), assistant director, National Institute of Geophysics, France, in the year 1961. Its type locality is Queténa Mine, Toki Cu deposit, Chuquicamata District, Calama, El Loa Province, Antofagasta Region, Chile. References Webmineral.com - Aubertite Mindat.org - Aubertite Handbook of Mineralogy - Aubertite Copper(II) minerals Aluminium minerals Sulfate minerals Chloride minerals 14 Triclinic minerals Minerals in space group 2
Aubertite
[ "Chemistry" ]
186
[ "Hydrate minerals", "Hydrates" ]
4,257,408
https://en.wikipedia.org/wiki/Denjoy%27s%20theorem%20on%20rotation%20number
In mathematics, the Denjoy theorem gives a sufficient condition for a diffeomorphism of the circle to be topologically conjugate to a diffeomorphism of a special kind, namely an irrational rotation. Arnaud Denjoy proved the theorem in the course of his topological classification of homeomorphisms of the circle. He also gave an example of a C1 diffeomorphism with an irrational rotation number that is not conjugate to a rotation. Statement of the theorem Let ƒ: S1 → S1 be an orientation-preserving diffeomorphism of the circle whose rotation number θ = ρ(ƒ) is irrational. Assume that it has a positive derivative ƒ′(x) > 0 that is a continuous function with bounded variation on the interval [0,1). Then ƒ is topologically conjugate to the irrational rotation by θ. Moreover, every orbit is dense and every nontrivial interval I of the circle intersects its forward image ƒ°q(I), for some q > 0 (this means that the non-wandering set of ƒ is the whole circle). Complements If ƒ is a C2 map, then the hypothesis on the derivative holds; however, for any irrational rotation number Denjoy constructed an example showing that this condition cannot be relaxed to C1, continuous differentiability of ƒ. Vladimir Arnold showed that the conjugating map need not be smooth, even for an analytic diffeomorphism of the circle. Later Michel Herman proved that nonetheless, the conjugating map of an analytic diffeomorphism is itself analytic for "most" rotation numbers, forming a set of full Lebesgue measure, namely, for those that are badly approximable by rational numbers. His results are even more general and specify the differentiability class of the conjugating map for Cr diffeomorphisms with any r ≥ 3. See also Circle map References Kornfeld, Sinai, Fomin, Ergodic theory. External links John Milnor, Denjoy Theorem Dynamical systems Diffeomorphisms Theorems in topology Theorems in dynamical systems
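For reference, the rotation number ρ(ƒ) appearing in the statement above is the standard one; since no displayed definition survives in this text, the following is an illustrative (not article-original) definition in terms of a lift of ƒ.

```latex
% Illustrative definition of the rotation number, not taken from the article.
% F is any lift of the orientation-preserving circle map f to the real line,
% i.e. a monotone map with F(x + 1) = F(x) + 1 that projects to f.
\[
  \rho(f) \;=\; \lim_{n \to \infty} \frac{F^{\,n}(x) - x}{n} \pmod{1},
\]
% the limit exists for every x and is independent of both x and the chosen lift.
```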
Denjoy's theorem on rotation number
[ "Physics", "Mathematics" ]
432
[ "Theorems in dynamical systems", "Theorems in topology", "Topology", "Mechanics", "Mathematical problems", "Mathematical theorems", "Dynamical systems" ]
4,258,134
https://en.wikipedia.org/wiki/Lebesgue%27s%20density%20theorem
In mathematics, Lebesgue's density theorem states that for any Lebesgue measurable set A ⊆ Rn, the "density" of A is 0 or 1 at almost every point in Rn. Additionally, the "density" of A is 1 at almost every point in A. Intuitively, this means that the "edge" of A, the set of points in A whose "neighborhood" is partially in A and partially outside of A, is negligible. Let μ be the Lebesgue measure on the Euclidean space Rn and A be a Lebesgue measurable subset of Rn. Define the approximate density of A in an ε-neighborhood of a point x in Rn as the ratio μ(A ∩ Bε)/μ(Bε), where Bε denotes the closed ball of radius ε centered at x. Lebesgue's density theorem asserts that for almost every point x of Rn the density exists and is equal to 0 or 1. In other words, for every measurable set A, the density of A is 0 or 1 almost everywhere in Rn. However, if μ(A) > 0 and μ(Rn ∖ A) > 0, then there are always points of Rn where the density is neither 0 nor 1. For example, given a square in the plane, the density at every point inside the square is 1, on the edges is 1/2, and at the corners is 1/4. The set of points in the plane at which the density is neither 0 nor 1 is non-empty (the square boundary), but it is negligible. The Lebesgue density theorem is a particular case of the Lebesgue differentiation theorem. Thus, this theorem is also true for every finite Borel measure on Rn instead of Lebesgue measure, see Discussion. See also References Hallard T. Croft. Three lattice-point problems of Steinhaus. Quart. J. Math. Oxford (2), 33:71-83, 1982. Theorems in measure theory Integral calculus
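The displayed formula in the definition of the approximate density was lost in this text; the block below restates it, together with the theorem, in standard notation (a reconstruction, with Bε(x) the closed ball of radius ε centred at x).

```latex
% Reconstruction of the stripped display: approximate density and the theorem.
\[
  d_{\varepsilon}(x)
  \;=\;
  \frac{\mu\bigl(A \cap B_{\varepsilon}(x)\bigr)}{\mu\bigl(B_{\varepsilon}(x)\bigr)},
  \qquad
  d(x) \;=\; \lim_{\varepsilon \to 0^{+}} d_{\varepsilon}(x).
\]
% Lebesgue's density theorem: d(x) = 1 for almost every x in A, and
% d(x) = 0 for almost every x in R^n \setminus A.
```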
Lebesgue's density theorem
[ "Mathematics" ]
389
[ "Theorems in mathematical analysis", "Theorems in measure theory", "Integral calculus", "Calculus" ]
4,258,383
https://en.wikipedia.org/wiki/Compasso%20d%27Oro
The Compasso d'Oro is an industrial design award that originated in Italy in 1954. Initially sponsored by La Rinascente, a Milanese department store, the award has been organised and managed by the Associazione per il Disegno Industriale (ADI) since 1964. The Compasso d'Oro is the first design award of its kind and among the most recognized and respected. It aims to acknowledge and promote quality in its field in Italy and internationally, and has been called both the "Nobel" and the "Oscar" of design. History The Compasso d'Oro was established in 1954, and is now the highest honour in the field of industrial design in Italy, comparable to other prestigious international awards such as the Good Design award, iF Design Award, Red Dot Award, the Cooper-Hewitt National Design Awards, and the Good Design Award (Japan). It was the first award of its kind in Europe and soon took on an international dimension and relevance, multiplying the occasions on which the exhibitions of award-winning objects were held in Europe, the United States, Canada and Japan. The original idea for the award is credited to Gio Ponti and . Many other leading architects and designers of the era including the Castiglioni brothers (Livio, Pier Giacomo, and Achille), , Enzo Mari and Marco Zanuso were involved in aspects of its inception and early development. The Compasso d'Oro logo (designed by Steiner) and the award trophy itself invoke a drafting compass invented by Adalbert Göringer in 1893 to measure the Golden Section. At present the Compasso d'Oro is managed by the Associazione per il Disegno Industriale (ADI), which is also a member of the International Industrial Designing Committee and the European Designing Bureau. Since its inception, approximately 350 designers have been honoured with the Award, for designs covering a wide range – from automobiles and bicycles to furniture and household objects, portable sewing machines, typewriters, calculators, clocks, lighting as well as concepts and systems, technical equipment, and yachts. For the first time, the 2020 Compasso d'Oro included a "Products Career Award" which was given to three historical designs that have proven to be highly successful over time but were not awarded at the time of their inception: a 1962 floor lamp called Arco by Pier Giacomo and Achille Castiglioni; a bed design by Vico Magistretti from 1978 called "Nathalie"; and the now famous "Sacco" bean-bag chair designed by Piero Gatti, Cesare Paolini, and Franco Teodoro in 1968. The ADI Design Museum in Milan houses the historical collection of the ADI Compasso d'Oro Foundation, as well as temporary exhibitions, public talks and initiatives. On 22 April 2004, the Ministry of Cultural Heritage and Activities and Tourism – through its Superintendency for Lombardy – declared the collection to be of "exceptional artistic and historical interest", thus making it part of the national cultural heritage. In 2020, the Milan square where the ADI Design Museum is situated was renamed "Piazza Compasso d'Oro" to honour the cultural and historical significance of the award. The inaugural ADI "Compasso d'Oro International Award" will be held at the Expo 2025 in Osaka, Japan. The winning entries will be exhibited in the Italian pavilion.
List of Compasso d'Oro Awards Gallery See also Industrial design List of industrial designers List of Compasso d'Oro recipients by year (in Italian) List of design awards References Further reading External links Official site of the Associazione per il Disegno Industriale List of all Compasso d'Oro winners since 1954 by year/edition ADI Design Museum Compasso d'Oro, brief documentary film by RAI Culture television Science and technology in Italy Italian design Awards established in 1954 1954 establishments in Italy Italian awards Design awards Industrial design awards
Compasso d'Oro
[ "Engineering" ]
796
[ "Design", "Design awards" ]
4,258,398
https://en.wikipedia.org/wiki/Locally%20integrable%20function
In mathematics, a locally integrable function (sometimes also called locally summable function) is a function which is integrable (so its integral is finite) on every compact subset of its domain of definition. The importance of such functions lies in the fact that their function space is similar to spaces, but its members are not required to satisfy any growth restriction on their behavior at the boundary of their domain (at infinity if the domain is unbounded): in other words, locally integrable functions can grow arbitrarily fast at the domain boundary, but are still manageable in a way similar to ordinary integrable functions. Definition Standard definition . Let be an open set in the Euclidean space and be a Lebesgue measurable function. If on is such that i.e. its Lebesgue integral is finite on all compact subsets of , then is called locally integrable. The set of all such functions is denoted by : where denotes the restriction of to the set . The classical definition of a locally integrable function involves only measure theoretic and topological concepts and can be carried over abstract to complex-valued functions on a topological measure space : however, since the most common application of such functions is to distribution theory on Euclidean spaces, all the definitions in this and the following sections deal explicitly only with this important case. An alternative definition . Let be an open set in the Euclidean space . Then a function such that for each test function is called locally integrable, and the set of such functions is denoted by . Here denotes the set of all infinitely differentiable functions with compact support contained in . This definition has its roots in the approach to measure and integration theory based on the concept of continuous linear functional on a topological vector space, developed by the Nicolas Bourbaki school: it is also the one adopted by and by . This "distribution theoretic" definition is equivalent to the standard one, as the following lemma proves: . A given function is locally integrable according to if and only if it is locally integrable according to , i.e. Proof of If part: Let be a test function. It is bounded by its supremum norm , measurable, and has a compact support, let's call it . Hence by . Only if part: Let be a compact subset of the open set . We will first construct a test function which majorises the indicator function of . The usual set distance between and the boundary is strictly greater than zero, i.e. hence it is possible to choose a real number such that (if is the empty set, take ). Let and denote the closed -neighborhood and -neighborhood of , respectively. They are likewise compact and satisfy Now use convolution to define the function by where is a mollifier constructed by using the standard positive symmetric one. Obviously is non-negative in the sense that , infinitely differentiable, and its support is contained in , in particular it is a test function. Since for all , we have that . Let be a locally integrable function according to . Then Since this holds for every compact subset of , the function is locally integrable according to . □ Generalization: locally p-integrable functions . Let be an open set in the Euclidean space and be a Lebesgue measurable function. If, for a given with , satisfies i.e., it belongs to for all compact subsets of , then is called locally -integrable or also -locally integrable. 
The set of all such functions is denoted by : An alternative definition, completely analogous to the one given for locally integrable functions, can also be given for locally -integrable functions: it can also be and proven equivalent to the one in this section. Despite their apparent higher generality, locally -integrable functions form a subset of locally integrable functions for every such that . Notation Apart from the different glyphs which may be used for the uppercase "L", there are few variants for the notation of the set of locally integrable functions adopted by , and . adopted by and . adopted by and . Properties Lp,loc is a complete metric space for all p ≥ 1 . is a complete metrizable space: its topology can be generated by the following metric: where is a family of non empty open sets such that , meaning that is compactly included in i.e. it is a set having compact closure strictly included in the set of higher index. . , k ∈ is an indexed family of seminorms, defined as In references , , and , this theorem is stated but not proved on a formal basis: a complete proof of a more general result, which includes it, is found in . Lp is a subspace of L1,loc for all p ≥ 1 . Every function belonging to , , where is an open subset of , is locally integrable. Proof. The case is trivial, therefore in the sequel of the proof it is assumed that . Consider the characteristic function of a compact subset of : then, for , where is a positive number such that = for a given is the Lebesgue measure of the compact set Then for any belonging to , by Hölder's inequality, the product is integrable i.e. belongs to and therefore Note that since the following inequality is true the theorem is true also for functions belonging only to the space of locally -integrable functions, therefore the theorem implies also the following result. . Every function in , , is locally integrable, i. e. belongs to . Note: If is an open subset of that is also bounded, then one has the standard inclusion which makes sense given the above inclusion . But the first of these statements is not true if is not bounded; then it is still true that for any , but not that . To see this, one typically considers the function , which is in but not in for any finite . L1,loc is the space of densities of absolutely continuous measures . A function is the density of an absolutely continuous measure if and only if . The proof of this result is sketched by . Rephrasing its statement, this theorem asserts that every locally integrable function defines an absolutely continuous measure and conversely that every absolutely continuous measures defines a locally integrable function: this is also, in the abstract measure theory framework, the form of the important Radon–Nikodym theorem given by Stanisław Saks in his treatise. Examples The constant function defined on the real line is locally integrable but not globally integrable since the real line has infinite measure. More generally, constants, continuous functions and integrable functions are locally integrable. The function for x ∈ (0, 1) is locally but not globally integrable on (0, 1). It is locally integrable since any compact set K ⊆ (0, 1) has positive distance from 0 and f is hence bounded on K. This example underpins the initial claim that locally integrable functions do not require the satisfaction of growth conditions near the boundary in bounded domains. 
The function is not locally integrable in : it is indeed locally integrable near this point since its integral over every compact set not including it is finite. Formally speaking, : however, this function can be extended to a distribution on the whole as a Cauchy principal value. The preceding example raises a question: does every function which is locally integrable in ⊊ admit an extension to the whole as a distribution? The answer is negative, and a counterexample is provided by the following function: does not define any distribution on . The following example, similar to the preceding one, is a function belonging to ( \ 0) which serves as an elementary counterexample in the application of the theory of distributions to differential operators with irregular singular coefficients: where and are complex constants, is a general solution of the following elementary non-Fuchsian differential equation of first order Again it does not define any distribution on the whole , if or are not zero: the only distributional global solution of such equation is therefore the zero distribution, and this shows how, in this branch of the theory of differential equations, the methods of the theory of distributions cannot be expected to have the same success achieved in other branches of the same theory, notably in the theory of linear differential equations with constant coefficients. Applications Locally integrable functions play a prominent role in distribution theory and they occur in the definition of various classes of functions and function spaces, like functions of bounded variation. Moreover, they appear in the Radon–Nikodym theorem by characterizing the absolutely continuous part of every measure. See also Compact set Distribution (mathematics) Lebesgue's density theorem Lebesgue differentiation theorem Lebesgue integral Lp space Notes References . Measure and integration (as the English translation of the title reads) is a definitive monograph on integration and measure theory: the treatment of the limiting behavior of the integral of various kind of sequences of measure-related structures (measurable functions, measurable sets, measures and their combinations) is somewhat conclusive. . Translated from the original 1958 Russian edition by Eugene Saletan, this is an important monograph on the theory of generalized functions, dealing both with distributions and analytic functionals. . (available also as ). (available also as ). . . . . . English translation by Laurence Chisholm Young, with two additional notes by Stefan Banach: the Mathematical Reviews number refers to the Dover Publications 1964 edition, which is basically a reprint. . . . A monograph on the theory of generalized functions written with an eye towards their applications to several complex variables and mathematical physics, as is customary for the Author. External links Measure theory Integral calculus Types of functions Lp spaces
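Most of the displayed formulas in the "Definition" section above were stripped from this text. The block below is a hedged restatement of the standard and distribution-theoretic definitions in LaTeX; the symbol Ω for the open set and the notation for test functions are assumptions, not recovered from the article.

```latex
% Hedged restatement of the stripped definitions; \Omega denotes an open
% subset of R^n and f a Lebesgue measurable function on it.
\[
  f \in L^{1}_{\mathrm{loc}}(\Omega)
  \;\iff\;
  \int_{K} |f(x)|\,\mathrm{d}x < \infty
  \quad \text{for every compact } K \subset \Omega .
\]
% Equivalent "distribution theoretic" definition via test functions:
\[
  \int_{\Omega} |f(x)\,\varphi(x)|\,\mathrm{d}x < \infty
  \qquad \text{for every } \varphi \in C^{\infty}_{c}(\Omega).
\]
% For finite p >= 1, locally p-integrable functions are defined the same way
% with |f| replaced by |f|^{p}; Hölder's inequality on each compact set then
% gives L^{p}_{\mathrm{loc}}(\Omega) \subseteq L^{1}_{\mathrm{loc}}(\Omega).
```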
Locally integrable function
[ "Mathematics" ]
2,003
[ "Functions and mappings", "Calculus", "Mathematical objects", "Mathematical relations", "Types of functions", "Integral calculus" ]
4,258,536
https://en.wikipedia.org/wiki/ASCOM%20%28standard%29
ASCOM (an abbreviation for AStronomy Common Object Model) is an open initiative to provide a standard interface to a range of astronomy equipment including mounts, focusers and imaging devices in a Microsoft Windows environment. History ASCOM was invented in late 1997 and early 1998 by Bob Denny, when he released two commercial programs and several freeware utilities that showcased the technology. He also induced Doug George to include ASCOM capabilities in commercial CCD camera control software. The first observatory to adopt ASCOM was Junk Bond Observatory, in early 1998. It was used at this facility to implement a robotic telescope dedicated to observing asteroids. The successful use of ASCOM there was covered in an article in Sky & Telescope magazine. This helped ASCOM to become more widely adopted. The ASCOM standards were placed under the control of the ASCOM Initiative, a group of astronomy software developers who volunteered to develop the standards further. Under the influence of Denny, George, Tim Long, and others, ASCOM developed into a set of device driver standards. In 2004, over 150 astronomy-related devices were supported by ASCOM device drivers, which were released as freeware. Most of the drivers are also open source. As ASCOM developed, the term became less associated with the Component Object Model, and has been used more broadly to describe not only the standards and software based on them, but also to describe an observing system architecture and a robotic telescope design philosophy. In 2004, ASCOM remained formally a reference to the Component Object Model, but the term is expected to stand on its own as new technologies such as Microsoft .NET take over functions provided by the Component Object Model, and additional ASCOM projects are adopted that dilute its concentration on device drivers. Jonathan Fay contributed to the ASCOM standard. During his work on the WorldWide Telescope ASCOM client he created the reference .NET Framework prototype classes that led to the ASCOM Version 5 redesign. The release of version 6 of the ASCOM Platform in June 2011 marked a transition to an open source development paradigm, with several developers contributing to the effort and all of the platform source code being made available under a Creative Commons license. Initially, the Platform developer team used servers hosted by TiGra Networks (Long's IT consulting company) for source code control, issue tracking and project management, with server licenses contributed by Atlassian and JetBrains. In 2012, due in part to differences in development style, TiGra Networks' involvement with the software development effort ceased and the source code was relocated to SourceForge. What is it? The Ascom Platform is a collection of computer drivers for different astronomy-related devices. It uses agreed standards that allow different computer programs ('apps') and devices to communicate with each other simultaneously. This means that you can have things like mounts, focusers, cameras and filter wheels all controlled by a single computer, even with several computers sharing access to those resources. For example, you can use one program to find targets and another to guide your telescope, with both of them sharing control of your mount at the same time. 
An ASCOM driver acts as an abstraction layer between the client and the hardware, thus removing any hardware dependency in the client and making the client automatically compatible with all devices that support the minimum required properties and methods. For example, this abstraction allows an ASCOM client to use an imaging device without needing to know whether the device is attached via a serial or network connection. ASCOM defines a collection of required Properties and Methods that ASCOM compliant software can use to communicate with an ASCOM compliant device. ASCOM also defines a range of optional Properties and Methods to take advantage of common features that may not be available for every manufacturer's device. By testing various properties, an ASCOM client application can determine what features are available for use. Properties and Methods are accessible via scripting interfaces, allowing control of devices by standard scripting languages such as VBScript and JavaScript. In fact, any language that supports access to Microsoft COM objects can interface with ASCOM. An ASCOM Platform software package is available for download, which installs some common libraries and documentation as well as a collection of ASCOM drivers for a broad range of equipment. Additional ASCOM drivers for devices not included in the ASCOM Platform package can be downloaded and installed separately. Although ASCOM is predominantly used by the amateur community, because the standard is freely available it is also used in some professional installations. Licensing There are no particular licensing requirements other than that the ASCOM logo may only be used if the client application is ASCOM compatible, and an ASCOM driver must implement all the required properties and methods (but need not implement any of the optional properties and methods). End user From an astronomer's point of view, it is a simple matter of installing the ASCOM platform and suitable client software; no programming is required. ASCOM drivers allow computer-based control of devices, for example letting planetarium software direct a telescope to point at a selected object. Using a combination of mount, focuser and imaging device ASCOM drivers, it is possible to build a fully automated environment for deep sky imaging. Developer Developers can enhance the power of ASCOM by writing their own clients using the scripting or object interface. ASCOM Alpaca A recent initiative called ASCOM Alpaca is currently under development. The Alpaca API uses RESTful techniques and TCP/IP to enable ASCOM applications and devices to communicate across modern network environments. This will enable ASCOM compatible devices to work across different operating systems, including Linux and macOS, in the near future.
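Because ASCOM drivers are ordinary COM objects, they can be scripted from any COM-capable language, as noted above. The following sketch is illustrative only: it assumes the ASCOM Platform and the pywin32 package are installed and uses the Platform's telescope simulator ProgID; the member names follow the published ITelescope interface, but the ProgID and capabilities of a real driver should be taken from its own documentation.

```python
# Illustrative sketch (not from the article): driving an ASCOM telescope
# driver through its COM scripting interface from Python on Windows.
# Assumes the ASCOM Platform and pywin32 are installed; the ProgID
# "ASCOM.Simulator.Telescope" and the member names below follow the
# published ITelescope interface but should be verified per driver.
import win32com.client


def slew_to(ra_hours: float, dec_degrees: float) -> None:
    scope = win32com.client.Dispatch("ASCOM.Simulator.Telescope")
    scope.Connected = True              # open the connection to the device
    try:
        if scope.CanSlew:               # capability flag defined by the interface
            scope.Tracking = True       # sidereal tracking is usually required
            scope.SlewToCoordinates(ra_hours, dec_degrees)  # blocking slew
    finally:
        scope.Connected = False         # always release the device


if __name__ == "__main__":
    slew_to(5.59, -5.39)  # roughly the Orion Nebula (J2000)
```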
ASCOM (standard)
[ "Astronomy" ]
1,145
[ "Astronomy software", "Works about astronomy" ]
4,259,833
https://en.wikipedia.org/wiki/Alberte%20Pullman
Alberte Pullman (née Bucher, 26 August 1920 – 7 January 2011) was a French theoretical and quantum chemist. She studied at the Sorbonne starting in 1938. During her studies she worked on calculations at the Centre National de la Recherche Scientifique (CNRS). From 1943 she worked with Raymond Daudel. She completed her doctorate in 1946. When Bernard Pullman returned from war service in 1946, she married him. She and her husband worked together until his death in 1996. Together they wrote several books, including Quantum Biochemistry (Interscience Publishers, 1963). Their work in the 1950s and 1960s was the beginning of the new field of quantum biochemistry. They pioneered the application of quantum chemistry to predicting the carcinogenic properties of aromatic hydrocarbons. Pullman was born in Nantes, France. She was a member of the International Academy of Quantum Molecular Science and a member and former President of the International Society of Quantum Biology and Pharmacology. References External links An interview with Mme Prof. Dr. Alberte Pullman 1920 births 20th-century French chemists 2011 deaths University of Paris alumni Members of the International Academy of Quantum Molecular Science Theoretical chemists Scientists from Nantes
Alberte Pullman
[ "Chemistry" ]
243
[ "Quantum chemistry", "Theoretical chemistry", "Theoretical chemists", "Physical chemists" ]
4,259,939
https://en.wikipedia.org/wiki/X-ray%20optics
X-ray optics is the branch of optics dealing with X-rays, rather than visible light. It deals with focusing and other ways of manipulating the X-ray beams for research techniques such as X-ray diffraction, X-ray crystallography, X-ray fluorescence, small-angle X-ray scattering, X-ray microscopy, X-ray phase-contrast imaging, and X-ray astronomy. X-rays and visible light are both electromagnetic waves, and propagate in space in the same way, but because of the much higher frequency and photon energy of X-rays they interact with matter very differently. Visible light is easily redirected using lenses and mirrors, but because the real part of the complex refractive index of all materials is very close to 1 for X-rays, they instead tend to initially penetrate and eventually get absorbed in most materials without significant change of direction. X-ray techniques There are many different techniques used to redirect X-rays, most of them changing the directions by only minute angles. The most common principle used is reflection at grazing incidence angles, either using total external reflection at very small angles or multilayer coatings. Other principles used include diffraction and interference in the form of zone plates, refraction in compound refractive lenses that use many small X-ray lenses in series to compensate by their number for the minute index of refraction, and Bragg reflection from a crystal plane in flat or bent crystals. X-ray beams are often collimated (reduced in size) using pinholes or movable slits typically made of tungsten or some other high-Z material. Narrow parts of an X-ray spectrum can be selected with monochromators based on one or multiple Bragg reflections by crystals. X-ray spectra can also be manipulated by passing the X-rays through a filter that typically reduces the low-energy part of the spectrum, and possibly parts above absorption edges of the elements used for the filter. Focusing optics Analytical X-ray techniques such as X-ray crystallography, small-angle X-ray scattering, wide-angle X-ray scattering, X-ray fluorescence, X-ray spectroscopy and X-ray photoelectron spectroscopy all benefit from high X-ray flux densities on the samples being investigated. This is achieved by focusing the divergent beam from the X-ray source onto the sample using one of several possible focusing optical components. This is also useful for scanning probe techniques such as scanning transmission X-ray microscopy and scanning X-ray fluorescence imaging. Polycapillary optics Polycapillary lenses are arrays of small hollow glass tubes that guide the X-rays with many total external reflections on the inside of the tubes. The array is tapered so that one end of the capillaries points at the X-ray source and the other at the sample. Polycapillary optics are achromatic and thus suitable for scanning fluorescence imaging and other applications where a broad X-ray spectrum is useful. They collect X-rays efficiently for photon energies of 0.1 to 30 keV and can achieve gains of 100 to 10000 in flux over using a pinhole at 100 mm from the X-ray source. Since only X-rays entering the capillaries within a very narrow angle will be totally internally reflected, only X-rays coming from a small spot will be transmitted through the optic. Polycapillary optics cannot image more than one point to another, so they are used for illumination and collection of X-rays. 
Zone plates Zone plates consist of a substrate with concentric zones of a phase-shifting or absorbing material with zones getting narrower the larger their radius. The zone widths are designed so that a transmitted wave gets constructive interference in a single point giving a focus. Zone plates can be used as condensers to collect light, but also for direct full-field imaging in e.g. an X-ray microscope. Zone plates are highly chromatic and usually designed only for a narrow energy span, making it necessary to have monochromatic X-rays for efficient collection and high-resolution imaging. Compound refractive lenses Since refractive indices at X-ray wavelengths are so close to 1, the focal lengths of normal lenses get impractically long. To overcome this, lenses with very small radii of curvature are used, and they are stacked in long rows, so that the combined focusing power becomes appreciable. Since the refractive index is less than 1 for X-rays, these lenses must be concave to achieve focusing, contrary to visible-light lenses, which are convex for a focusing effect. Radii of curvature are typically less than one millimeter, making the usable X-ray beam width at most about 1 mm. To reduce the absorption of X-rays in these stacks, materials with very low atomic number such as beryllium or lithium are often used. Lenses from other materials are also available: radiation-resistant polymer (Epoxy based) such as SU-8, nickel and silicon. Since the refractive index depends strongly on X-ray wavelength, these lenses are highly chromatic, and the variation of the focal length with wavelength must be taken into account for any application. Reflection The basic idea is to reflect a beam of X-rays from a surface and to measure the intensity of X-rays reflected in the specular direction (reflected angle equal to incident angle). It has been shown that a reflection off a parabolic mirror followed by a reflection off a hyperbolic mirror leads to the focusing of X-rays. Since the incoming X-rays must strike the tilted surface of the mirror, the collecting area is small. It can, however, be increased by nesting arrangements of mirrors inside each other. The ratio of reflected intensity to incident intensity is the X-ray reflectivity for the surface. If the interface is not perfectly sharp and smooth, the reflected intensity will deviate from that predicted by the Fresnel reflectivity law; the deviations can be analyzed to obtain the density profile of the interface normal to the surface. For films with multiple layers, X-ray reflectivity may show oscillations with wavelength, analogous to the Fabry–Pérot effect. These oscillations can be used to infer layer thicknesses and other properties. Diffraction In X-ray diffraction a beam strikes a crystal and diffracts into many specific directions. The angles and intensities of the diffracted beams indicate a three-dimensional density of electrons within the crystal. X-rays produce a diffraction pattern because their wavelength typically has the same order of magnitude (0.1–10.0 nm) as the spacing between the atomic planes in the crystal. Each atom re-radiates a small portion of an incoming beam's intensity as a spherical wave. If the atoms are arranged symmetrically (as is found in a crystal) with a separation d, these spherical waves will be in phase (add constructively) only in directions where their path-length difference 2d sin θ is equal to an integer multiple of the wavelength λ. 
The incoming beam therefore appears to have been deflected by an angle 2θ, producing a reflection spot in the diffraction pattern. X-ray diffraction is a form of elastic scattering in the forward direction; the outgoing X-rays have the same energy, and thus the same wavelength, as the incoming X-rays, only with altered direction. By contrast, inelastic scattering occurs when energy is transferred from the incoming X-ray to an inner-shell electron, exciting it to a higher energy level. Such inelastic scattering reduces the energy (or increases the wavelength) of the outgoing beam. Inelastic scattering is useful for probing such electron excitation, but not in determining the distribution of atoms within the crystal. Longer-wavelength photons (such as ultraviolet radiation) would not have sufficient resolution to determine the atomic positions. At the other extreme, shorter-wavelength photons such as gamma rays are difficult to produce in large numbers, difficult to focus, and interact too strongly with matter, producing particle–antiparticle pairs. Similar diffraction patterns can be produced by scattering electrons or neutrons. X-rays are usually not diffracted from atomic nuclei, but only from the electrons surrounding them. Interference X-ray interference due to the superposition of two or more X-ray waves produces a new wave pattern. X-ray interference usually refers to the interaction of waves that are correlated or coherent with each other, either because they come from the same source or because they have the same or nearly the same frequency. Two non-monochromatic X-ray waves are only fully coherent with each other if they both have exactly the same range of wavelengths and the same phase differences at each of the constituent wavelengths. The total phase difference is derived from the sum of the path difference and the initial phase difference (if the X-ray waves are generated from two or more different sources). It can then be concluded whether the X-ray waves reaching a point are in phase (constructive interference) or out of phase (destructive interference). Technologies There are a variety of techniques used to funnel X-ray photons to the appropriate location on an X-ray detector: Lobster-eye optics Grazing incidence mirrors in a Wolter telescope, or a Kirkpatrick–Baez X-ray reflection microscope. Zone plates. Bent crystals. Normal-incidence mirrors making use of multilayer coatings. A normal-incidence lens much like an optical lens, such as a compound refractive lens. Microstructured optical arrays, namely, capillary/polycapillary optical systems. Coded aperture imaging. Modulation collimators. X-ray waveguides. Most X-ray optical elements (with the exception of grazing-incidence mirrors) are very small and must be designed for a particular incident angle and energy, thus limiting their applications in divergent radiation. , although the technology had advanced rapidly, its practical uses outside research were limited. Efforts were ongoing to introduce X-ray optics in medical X-ray imaging. For instance, one of the applications showing greater promise is in enhancing both the contrast and resolution of mammographic images, compared to conventional anti-scatter grids. Another application is to optimize the energy distribution of the X-ray beam to improve contrast-to-noise ratio over conventional energy filtering. Mirrors for X-ray optics X-ray mirrors can be made of glass, ceramic, or metal foil, coated by a reflective layer. 
The most commonly used reflective materials for X-ray mirrors are gold and iridium. Even with these the critical reflection angle is energy-dependent. For gold at 1 keV, the critical reflection angle is 2.4°. The use of X-ray mirrors simultaneously requires: the ability to determine the location of the arrival of an X-ray photon in two dimensions, a reasonable detection efficiency. Multilayers for X-Rays No material has substantial reflection for X-rays, except at very small grazing angles. Multilayers enhance the small reflectivity from a single boundary by adding the small reflected amplitudes from many boundaries coherently in phase. For example, if a single boundary has a reflectivity of R = 10−4 (amplitude r = 10−2), then the addition of 100 amplitudes from 100 boundaries can give reflectivity R close to one. The period Λ of the multilayer that provides the in-phase addition is that of the standing wave produced by the input and output beam, Λ = λ/2 sin θ, where λ is the wavelength, and 2θ the half angle between the two beams. For θ = 90°, or reflection at normal incidence, the period of the multilayer is Λ = λ/2. The shortest period that can be used in a multilayer is limited by the size of the atoms to about 2 nm, corresponding to wavelengths above 4 nm. For shorter wavelength a reduction of the incidence angle θ toward more grazing has to be used. The materials for multilayers are selected to give the highest possible reflection at each boundary and the smallest absorption or the propagation through the structure. This is usually achieved by light, low-density materials for the spacer layer and a heavier material that produces high contrast. The absorption in the heavier material can be reduced by positioning it close to the nodes of the standing-wave field inside the structure. Good low-absorption spacer materials are Be, C, B, B4C and Si. Some examples of the heavier materials with good contrast are W, Rh, Ru and Mo. Applications include: normal and grazing-incidence optics for telescopes from EUV to hard X-rays, microscopes, beam lines at synchrotron and FEL facilities, EUV lithography. Mo/Si is the material selection used for the near-normal incidence reflectors for EUV lithography. Hard X-ray mirrors An X-ray mirror optic for the NuSTAR space telescope working at 79 keV (hard, i.e. high-energy X-radiation) was made using multilayered coatings, computer-aided manufacturing, and other techniques. The mirrors use a tungsten/silicon (W/Si) or platinum/silicon-carbide (Pt/SiC) multicoating on slumped glass, allowing a Wolter telescope design. See also Kirkpatrick–Baez mirror X-ray telescope Wolter telescope, a type of X-ray telescope built with glancing-incidence mirrors XMM-Newton and Chandra X-ray Observatory, orbiting observatories using X-ray optics X-ray spectroscopy, X-ray photoelectron spectroscopy, X-ray crystallography References External links Optics Optics Optics Radiography Optics
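As a numerical check on the multilayer relation quoted above, the short sketch below evaluates Λ = λ/(2 sin θ), converting photon energy to wavelength with hc ≈ 1.2398 keV·nm. It is a zeroth-order design estimate that ignores refraction and absorption corrections, and is not taken from the article.

```python
# Zeroth-order multilayer period Lambda = lambda / (2 sin theta), with the
# photon energy converted to wavelength via lambda[nm] = hc / E[keV].
import math

HC_KEV_NM = 1.239841984  # h*c expressed in keV*nm


def wavelength_nm(energy_kev: float) -> float:
    """X-ray wavelength in nanometres for a photon energy in keV."""
    return HC_KEV_NM / energy_kev


def multilayer_period_nm(energy_kev: float, grazing_angle_deg: float) -> float:
    """Design period of a periodic multilayer, ignoring refraction corrections."""
    lam = wavelength_nm(energy_kev)
    return lam / (2.0 * math.sin(math.radians(grazing_angle_deg)))


if __name__ == "__main__":
    # Normal incidence at 13.5 nm (EUV lithography): Lambda = lambda/2 ~ 6.75 nm.
    print(round(multilayer_period_nm(HC_KEV_NM / 13.5, 90.0), 2))
    # 30 keV hard X-rays at a 0.5 degree grazing angle: Lambda ~ 2.4 nm,
    # still above the ~2 nm limit set by atomic sizes mentioned in the text.
    print(round(multilayer_period_nm(30.0, 0.5), 2))
```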
X-ray optics
[ "Physics", "Chemistry", "Astronomy", "Technology", "Engineering" ]
2,827
[ "Applied and interdisciplinary physics", "Spectrum (physical sciences)", "Optics", "X-rays", "Electromagnetic spectrum", "Measuring instruments", "X-ray instrumentation", " molecular", "X-ray astronomy", "Atomic", "Astronomical sub-disciplines", " and optical physics" ]
4,260,103
https://en.wikipedia.org/wiki/Hazard%20%28logic%29
In digital logic, a hazard is an undesirable effect caused by either a deficiency in the system or external influences in both synchronous and asynchronous circuits. Logic hazards are manifestations of a problem in which changes in the input variables do not change the output correctly due to some form of delay caused by logic elements (NOT, AND, OR gates, etc.). This results in the logic not performing its function properly. The three most common kinds of hazards are usually referred to as static, dynamic and function hazards. Hazards are a temporary problem, as the logic circuit will eventually settle to the desired function. Therefore, in synchronous designs, it is standard practice to register the output of a circuit before it is used in a different clock domain or routed out of the system, so that hazards do not cause any problems. If that is not the case, however, it is imperative that hazards be eliminated, as they can have an effect on other connected systems. Static hazards A static hazard is a change of a signal state twice in a row when the signal is expected to stay constant. When one input signal changes, the output changes momentarily before stabilizing to the correct value. There are two types of static hazards: Static-1 Hazard: the output is currently 1 and after the inputs change, the output momentarily changes to 0, then 1, before settling on 1 Static-0 Hazard: the output is currently 0 and after the inputs change, the output momentarily changes to 1, then 0, before settling on 0 In properly formed two-level AND-OR logic based on a Sum of Products expression, there will be no static-0 hazards (though it may still have static-1 hazards). Conversely, there will be no static-1 hazards in an OR-AND implementation of a Product of Sums expression (though it may still have static-0 hazards). The most commonly used method to eliminate static hazards is to add redundant logic (consensus terms in the logic expression). Example of a static hazard Consider an imperfect circuit that suffers from delays in its physical logic elements (AND gates, etc.). The simple circuit implements a two-level AND-OR function of the inputs A, B and C. From a look at the starting diagram it is clear that if no delays were to occur, then the circuit would function normally. However, no two gates are ever manufactured exactly the same. Due to this imperfection, the delay for the first AND gate will be slightly different from that of its counterpart. Thus an error occurs when the input changes from 111 to 011, i.e. when A changes state. Now that we know roughly how the hazard occurs, we can look to the Karnaugh map for a clearer picture and a solution. A theorem proved by Huffman tells us that adding a redundant loop 'BC' will eliminate the hazard. The amended function adds this consensus term to the original expression. Now we can see that even with imperfect logic elements, our example will not show signs of hazards when A changes state. This theory can be applied to any logic system. Computer programs deal with most of this work now, but for simple examples it is quicker to do the debugging by hand. When there are many input variables (say 6 or more) it will become quite difficult to 'see' the errors on a Karnaugh map. Dynamic hazards A dynamic hazard is a series of changes of a signal state that happen several times in a row when the signal is expected to change state only once. In other words, a dynamic hazard is the possibility of an output changing more than once as a result of a single input change.
Dynamic hazards often occur in larger logic circuits where there are different routes to the output (from the input). If each route has a different delay, then it quickly becomes clear that there is the potential for transient output values that differ from the required or expected output. For example, a logic circuit that is meant to change output state from 1 to 0 might instead change from 1 to 0, then back to 1, and finally rest at the correct value 0. This is a dynamic hazard. As a rule, dynamic hazards are more complex to resolve, but note that if all static hazards have been eliminated from a circuit, then dynamic hazards cannot occur. Functional hazards In contrast to static and dynamic hazards, functional hazards are ones caused by a change applied to more than one input. There is no specific logical solution to eliminate them. One reliable method is to prevent inputs from changing simultaneously, but this is not applicable in some cases. So, circuits should be carefully designed to have equal delays in each path. Others Combinational logic hazards In combinational logic, these are hazards that depend on the distribution of signal propagation delays in the logic circuit and on the overall design of the implemented logic function. Combinational functional hazards In combinational logic, these are hazards that can be detected and suppressed at a higher level of programming, by studying and modifying the output logic function. Sequential hazards These are undesirable signal changes found in looped systems. See also Don't care Glitch Hazard (computer architecture) Race condition Floating body effect, a probable cause of hazards in silicon-on-insulator devices References http://www.ee.surrey.ac.uk/Projects/Labview/Sequential/Course/02-Hazards/hazards.htm#FunctionHazards Digital electronics
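The Boolean expressions in the worked example above did not survive in this text. The sketch below therefore assumes the classic function consistent with the description (F = AB + A′C, inputs changing from 111 to 011, repaired by the consensus term BC) and models the inverter with a one-step propagation delay; it is an illustration of the mechanism, not the article's original circuit.

```python
# Hedged illustration of a static-1 hazard.  Assumed function (the article's
# own formula was lost): F = A*B + (not A)*C, with the inverter one step slow.
# Inputs change 111 -> 011; the consensus (redundant) term B*C removes the glitch.

def simulate(add_consensus: bool) -> list:
    A, B, C = 1, 1, 1
    not_a = 1 - A                 # inverter output, initially settled
    outputs = []
    for step in range(4):
        if step == 1:
            A = 0                 # the input change 111 -> 011
        f = (A and B) or (not_a and C)
        if add_consensus:
            f = f or (B and C)    # redundant term covers the transition
        outputs.append(int(bool(f)))
        not_a = 1 - A             # inverter only now sees the new A (the delay)
    return outputs


if __name__ == "__main__":
    print(simulate(add_consensus=False))  # [1, 0, 1, 1] -- momentary drop to 0
    print(simulate(add_consensus=True))   # [1, 1, 1, 1] -- hazard eliminated
```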
Hazard (logic)
[ "Engineering" ]
1,076
[ "Electronic engineering", "Digital electronics" ]
4,260,539
https://en.wikipedia.org/wiki/Labtec
Labtec Enterprises Inc. was an American manufacturer of computer accessories active as an independent company from 1980 to 2001. They were best known for their budget range of peripherals such as keyboards, mice, microphones, speakers and webcams. In the United States, the company had cornered the market for computer speakers and headphones for much of the 1990s before being acquired by Logitech in 2001. History Labtec Enterprises Inc. was founded in 1980 by Charles Dunn and based in Vancouver, Washington, for most of its independent existence. The company was initially focused on providing audio gear (primarily headsets) for the airline industry before branching out to providing peripherals for personal computers in 1990. By the mid-1990s Labtec catered to three segments: the personal computer buyer, providing speakers and microphones; the airline industry, providing headphones and headsets; and the professional audiovisual and telephonics industry, providing audio cables, switches, and junction boxes. The company employed 20 people domestically at the company's combined headquarters and warehouse in Vancouver, Washington, in 1993. Meanwhile, the bulk of the company's products were manufactured overseas in Hong Kong and Taiwan. In 1993, the company was selling about 150,000 speakers to consumers a month. In 1998, Labtec merged with Spacetec IMC Corporation, becoming a new publicly traded corporation in the process. The combined company changed its name to Labtec Inc. in February 1999. Spacetec IMC had manufactured 6DOF controllers for use with CAD software. A Spaceball 2003 controller was used to control the Mars Pathfinder spacecraft in 2000. In 2001, Logitech bought Labtec for approximately USD$125 million in cash, stock and debt in order to expand its line of audio products for personal computers and other devices. References External links Telecommunications companies of the United States Telecommunications equipment vendors Videotelephony Companies based in Vancouver, Washington Telecommunications companies established in 1981 Technology companies disestablished in 2001 Logitech Defunct computer companies of the United States Defunct computer hardware companies
Labtec
[ "Technology" ]
413
[ "Computing stubs", "Computer hardware stubs" ]
4,260,775
https://en.wikipedia.org/wiki/Transport%20Safety%20Investigation%20Bureau
The Transport Safety Investigation Bureau (TSIB) is a department within the Ministry of Transport of the Government of Singapore and is an independent investigation authority, responsible for the investigation of air, marine and land transport accidents and incidents in Singapore. The head office is in Passenger Terminal 2, Changi Airport, Changi, Singapore. It was formed on 1 August 2016 as a restructuring of the Air Accident Investigation Bureau (AAIB) of Singapore. History The AAIB was set up in 2002 after the SilkAir Flight 185 and Singapore Airlines Flight 006 crashes. The bureau set up a facility in 2007 to analyze data from flight data recorders (informally known as "black boxes") installed on commercial aircraft. On 1 August 2016, the AAIB was restructured and subsumed into an entity within TSIB. Responsibilities The TSIB consists of the following entities: Air Accident Investigation Bureau (AAIB) Marine Safety Investigation Branch (MSIB) The AAIB is responsible for the investigation of air accidents and serious incidents in Singapore involving both local and foreign commercial aircraft. The AAIB also participates in overseas investigations of accidents and serious incidents involving Singapore aircraft or aircraft operated by a Singapore air operator. The AAIB conducts investigations in accordance with the Singapore Air Navigation (Investigation of Accidents and Incidents) Order 2003 and with Annex 13 to the Convention on International Civil Aviation, which governs how the member states of the International Civil Aviation Organization conduct such investigations. The MSIB is responsible for the investigation of very serious marine casualties within Singapore territorial waters, as well as accidents involving Singapore-registered ships. The MSIB carries out investigations in accordance with the Code of International Standards and Recommended Practices for a Safety Investigation into a Marine Casualty or Incident of the International Maritime Organization. It took over the role of conducting independent safety investigations from the Maritime and Port Authority of Singapore. For an investigated accident or incident, the TSIB will produce an investigation report. The investigative process involves the collection and analysis of data, from which causes and contributing factors are determined. Whenever safety issues are identified, the TSIB may make safety recommendations. Notable cases As part of a Memorandum of Understanding on Cooperation Relating to Aircraft Accident and Incident Investigation between MOT and Nepal's Ministry of Culture, Tourism and Civil Aviation, the TSIB assisted in the investigation into the crash of Yeti Airlines Flight 691. References External links Aviation in Singapore 2002 establishments in Singapore Government agencies established in 2002 Singapore Transport organisations based in Singapore
Transport Safety Investigation Bureau
[ "Technology" ]
489
[ "Railway accidents and incidents", "Rail accident investigators" ]
4,260,804
https://en.wikipedia.org/wiki/Tetra-amido%20macrocyclic%20ligand
Tetra-amido macrocyclic ligands (TAMLs) constitute a class of macrocyclic ligands. When complexed to metals, TAMLs are proposed as environmentally friendly catalysts. Although never commercialized, iron-TAML complexes catalyze the degradation of pesticides, effluent streams from paper mills, dibenzothiophenes from diesel fuels, and anthrax spores. References Macrocycles
Tetra-amido macrocyclic ligand
[ "Chemistry" ]
93
[ "Organic compounds", "Chemical reaction stubs", "Macrocycles" ]
4,261,263
https://en.wikipedia.org/wiki/XPDL
The XML Process Definition Language (XPDL) is a format standardized by the Workflow Management Coalition (WfMC) to interchange business process definitions between different workflow products, i.e. between different modeling tools and management suites. XPDL defines an XML schema for specifying the declarative part of workflow / business process. XPDL is designed to exchange the process definition, both the graphics and the semantics of a workflow business process. XPDL is currently the best file format for exchange of BPMN diagrams; it has been designed specifically to store all aspects of a BPMN diagram. XPDL contains elements to hold graphical information, such as the X and Y position of the nodes, as well as executable aspects which would be used to run a process. This distinguishes XPDL from BPEL which focuses exclusively on the executable aspects of the process. BPEL does not contain elements to represent the graphical aspects of a process diagram. It is possible to say that XPDL is the XML Serialization of BPMN. History The Workflow Management Coalition, founded in August 1993, began by defining the Workflow Reference Model (ultimately published in 1995) that outlined the five key interfaces that a workflow management system must have. Interface 1 was for defining the business process, which includes two aspects: a process definition expression language and a programmatic interface to transfer the process definition to/from the workflow management system. The first revision of a process definition expression language was called Workflow Process Definition Language (WPDL) which was published in 1998. This process meta-model contained all the key concepts required to support workflow automation expressed using URL Encoding. Interoperability demonstrations were held to confirm the usefulness of this language as a way to communicate process models. By 1998, the first standards based on XML began to appear. The Workflow Management Coalition Working Group 1 produced an updated process definition expression language called XML Process Definition Language (XPDL) now known as XPDL 1.0. This second revision was an XML based interchange language that contained many of the same concepts as WPDL, with some improvements. XPDL 1.0 was ratified by the WfMC in 2002, and was subsequently implemented by more than two dozen workflow/BPM products to exchange process definitions. There was a large number of research projects and academic studies on workflow capabilities around XPDL, which was essentially the only standard language at the time for interchange of process design. The WfMC continued to update and improve the process definition interchange language. In 2004 the WfMC endorsed BPMN, a graphical formalism to standardize the way that process definitions were visualized. XPDL was extended specifically with the goal of representing in XML all the concepts present in a BPMN diagram. This third revision of a process definition expression language is known as XPDL 2.0 and was ratified by the WfMC in October 2005. In April 2008, the WfMC ratified XPDL 2.1 as the fourth revision of this specification. XPDL 2.1 includes extension to handle new BPMN 1.1 constructs, as well as clarification of conformance criteria for implementations. In spring 2012, the WfMC completed XPDL 2.2 as the fifth revision of this specification. XPDL 2.2 builds on version 2.1 by introducing support for the process modeling extensions added to BPMN 2.0. References Wil M.P. 
van der Aalst, "Business Process Management Demystified: A Tutorial on Models, Systems and Standards for Workflow Management", Springer Lecture Notes in Computer Science, Vol 3098/2004. Wil M.P. van der Aalst, "Patterns and XPDL: A Critical Evaluation of the XML Process Definition Language", Eindhoven University of Technology, PDF. Jiang Ping, Q. Mair, J. Newman, "Using UML to design distributed collaborative workflows: from UML to XPDL", Twelfth IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises, 2003. WET ICE 2003. Proceedings, . W.M.P. van der Aalst, "Don't go with the flow: Web services composition standards exposed", IEEE Intelligent Systems, Jan/Feb 2003. Jürgen Jung, "Mapping Business Process Models to Workflow Schemata An Example Using Memo-ORGML And XPDL", Universität Koblenz-Landau, April 2004, PDF. Volker Gruhn, Ralf Laue, "Using Timed Model Checking for Verifying Workflows", José Cordeiro and Joaquim Filipe (Eds.): Proceedings of the 2nd Workshop on Computer Supported Activity Coordination, Miami, USA, 23.05.2005 - 24.05.2005, 75-88. INSTICC Press . Nicolas Guelfi, Amel Mammar, "A formal framework to generate XPDL specifications from UML activity diagrams", Proceedings of the 2006 ACM symposium on Applied computing, 2006. Peter Hrastnik, "Execution of business processes based on web services", International Journal of Electronic Business, Volume 2, Number 5 / 2004. Petr Matousek, "An ASM Specication of the XPDL Language Semantics", Symposium on the Effectiveness of Logic in Computer Science, March 2002, PS. F. Puente, A. Rivero, J.D. Sandoval, P. Hernández, and C.J. Molina, "Improved Workflow Management System based on XPDL", Editor(s): M. Boumedine, S. Ranka, Proceedings of the IASTED Conference on Knowledge Sharing and Collaborative Engineering, St. Thomas, US Virgin Islands, November 29-December 1, 2006, . Petr Matousek, "Verification method proposal for business processes and workflows specified using the XPDL standard language", PhD thesis, Jan 2003. Thomas Hornung, Agnes Koschmider, Jan Mendling, "Integration of Heterogeneous BPM Schemas: The Case of XPDL and BPEL", Technical Report JM-2005-03, Vienna University of Economics and Business Administration, 2006 PDF. Wei Ge, Baoyan Song, Derong Shen, Ge Yu, "e_SWDL: An XML Based Workflow Definition Language for Complicated Applications in Web Environments" Web Technologies and Applications: 5th Asia-Pacific Web Conference, APWeb 2003, Xian, China, April 23–25, 2003. Proceedings, . Ryan K. L. Ko, Stephen S. G. Lee, Eng Wah Lee (2009) Business Process Management (BPM) Standards: A Survey. In: Business Process Management Journal, Emerald Group Publishing Limited. Volume 15 Issue 5. . PDF References See also Business Process Management BPMN Workflow Management Coalition External links XPDL & Workflow Patterns PDF Critical comments on XPDL 1.0 Enterprise Workflow National Project supported by the Office of the Deputy Prime Minister endorses WfMC standards for use in all workflow projects in UK. Open Source Java XPDL Editor XML-based standards Workflow technology Specification languages Modeling languages
XPDL
[ "Technology", "Engineering" ]
1,474
[ "Software engineering", "Computer standards", "Specification languages", "XML-based standards" ]
4,261,465
https://en.wikipedia.org/wiki/Institute%20for%20Plasma%20Research
The Institute for Plasma Research (IPR) is a public research institute in India. The institute conducts research in plasma science, including basic plasma physics, magnetically confined hot plasmas, and plasma technologies for industrial applications. It is the leading plasma physics organization of India and houses the largest tokamak in India, SST-1. IPR plays a major scientific and technical role in India's partnership in the international fusion energy initiative ITER. It is part of the IndiGO consortium for research on gravitational waves. It is an autonomous body funded by the Department of Atomic Energy. History In 1982, the Government of India initiated the Plasma Physics Programme (PPP) for research on magnetically confined high-temperature plasmas. In 1986, the PPP evolved into the autonomous Institute for Plasma Research under the Department of Science and Technology. With the commissioning of ADITYA in 1989, full-fledged tokamak experiments started at IPR. A 1995 decision led to the second generation superconducting steady-state tokamak SST-1, capable of 1000-second operation. Due to this, the institute grew rapidly and came under the Department of Atomic Energy. The industrial plasma activities were reorganized under the Facilitation Centre for Industrial Plasma Technologies (FCIPT) and moved to a separate campus in Gandhinagar in 1998. Location The institute is located on the banks of the Sabarmati river in Gandhinagar district. It is approximately midway between the cities of Ahmedabad and Gandhinagar. It is 5 km from the Ahmedabad airport and 14 km from the Ahmedabad railway station. Remote campuses Centre of Plasma Physics - Institute for Plasma Research (CPP-IPR) ITER-India Facilitation Centre for Industrial Plasma Technologies The Facilitation Centre for Industrial Plasma Technologies (FCIPT) works in industrial plasma technologies. The centre was set up in 1997 to promote, foster, develop, demonstrate, and transfer industrially relevant plasma-based technologies to industries, thus enabling technology commercialization. The centre acts as an interface between the institute and industries. While working on industrial projects, FCIPT maintained and improved its R&D strengths and, at the same time, advanced industrial uses. FCIPT works with national and international industries, such as Johnson & Johnson, ASP Ethicon Inc. USA, UVSYSTEC GmbH Germany, Thermax India Ltd., Mahindra & Mahindra Ltd., IPCL, Larsen & Toubro Ltd., NHPC Ltd., GE India Technology Centre Bangalore, BHEL, Triton Valves Ltd. Mysore, etc., and organizations such as BARC, DRDO, ISRO, IIT Kharagpur, National Aerospace Laboratories, and other CSIR labs. FCIPT has a material characterization laboratory with instruments such as Transmission Electron Microscope, Field Emission Scanning Electron Microscope with EDAX, Atomic Force Microscope, X-ray diffractometer, Spectroscopic Ellipsometer, UV-VIS spectroscopy, solar simulator, thickness profilometer, optical metallurgical microscope with phase analyser, full-fledged metallography laboratory, Vickers hardness tester, and an ASTM B117 corrosion testing setup. Other infrastructure includes an electronics and instrumentation lab, process demonstration systems, etc.
FCIPT developed technologies related to waste remediation and recovery of energy from waste, surface hardening, and heat treatment technologies such as plasma nitriding and plasma nitrocarburising, plasma-assisted metallization technologies using magnetron sputter deposition, plasma-enhanced CVD for functional coatings on substrates, plasma melting, plasma diagnostics, and space-related plasma technologies. Ion irradiation-induced patterning of semiconductor materials and amorphous solids is another focus. To this end, ion beams are used to generate patterns such as nanoripples or nanodots, which are then coated with silver for research in plasmonics. Center of Plasma Physics – Institute for Plasma Research (CPP-IPR) The Centre of Plasma Physics is an autonomous institute that pursues basic research in theoretical and experimental plasma physics. Its Governing Council consists of four scientists with representatives from the Institute for Plasma Research, Gandhinagar, Physical Research Laboratory, Ahmedabad, Bhabha Atomic Research Center, Bombay, and Saha Institute of Nuclear Physics, Calcutta; state government officers and local members. History The government of Assam established the Centre of Plasma Physics in 1991. The centre started functioning in April 1991 in a rented house located at Saptaswahid Pathi. The first chairman of the Governing Council was Professor Predhiman Krishan Kaw (died 18 June 2017), a world-renowned plasma scientist. After its three-year term, the Governing Council was reconstituted by the Education Department with Prof. A.C. Das, Dean of the Physical Research Laboratory, as its chairman. The founding director of the centre, Prof. Sarbeswar Bujarbarua, is a distinguished plasma scientist and a recipient of the 'Vikram Sarabhai Research Award' in 1989 and Kamal Kumari National Award in 1993. Thereafter, the centre began theoretical investigations of fundamental plasma processes such as nonlinear phenomena, instabilities, and dusty plasma. It has set up facilities for conducting basic plasma physics experiments. With funds available from several central government agencies (e.g. the Department of Atomic Energy and the Department of Science and Technology), the centre has taken up experimental programs in the frontline areas of plasma physics, such as dense plasma focus and dusty plasma. The centre has published more than 50 original research papers. The scientists work in close collaboration with national and international institutes like the Institute for Plasma Research, Gandhinagar; Physical Research Laboratory, Ahmedabad; Bhabha Atomic Research Centre, Bombay; Regional Research Laboratory, Bhubaneswar; Saha Institute of Nuclear Physics, Calcutta; Kyushu University, Japan; University of Bayreuth, Germany; Culham Laboratory, UK; and Flinders University, Australia. The centre runs a PhD programme with students registering with Guwahati University. Another component of the academic activity of the centre consists of holding lectures and colloquia on plasma physics and other branches of physical sciences. The Centre of Plasma Physics, Institute for Plasma Research, Sonapur, Kamrup, Assam, became a new campus of IPR as the Centre of Plasma Physics, Sonapur, was formally merged with IPR effective 29 May 2009. CPP-IPR is headed by Centre Director Dr K. S. Goswami and is managed by a Managing Board headed by the director of IPR. It has twelve faculty members, fourteen other staff and research scholars and project scientists.
The research is oriented towards essential plasma physics and programs that complement the significant programmes at IPR. Campus The institute's campus is at Nazirakhat, Sonapur, about 32 km from Guwahati, the headquarters of the Kamrup(M) district of Assam. Nazirakhat is a rural area whose residents are of diverse castes, religions, and languages. Nazirakhat is linked by a PWD road to National Highway No. 37 and is about 800 metres from NH-37. It is connected by road with the rest of the state and the country. The institute is surrounded by greenery near the Air-India flying base at Sonapur. Publication Since the institute's establishment, research papers have been published in journals such as Phys. Scr., Phys. Lett. A, Phys. Rev. Lett., and so on. Collaboration The Centre collaborates with the following institutes and universities: The Bhabha Atomic Research Centre, Bombay; Raja Ramanna Centre for Advanced Technology, Indore; Institute for Plasma Research, Gandhinagar; IPP, Juelich, Germany; IPP, Garching, Germany; Kyushu University, Fukuoka, Japan; Physical Research Laboratory, Ahmedabad; National Institute for Interdisciplinary Science and Technology, Bhubaneswar; Ruhr University Bochum, Bochum, Germany; Saha Institute of Nuclear Physics, Calcutta; St. Andrews University, UK; Tokyo Metropolitan Institute of Technology, Tokyo; University of Bayreuth, Germany; and University of Kyoto, Japan. Recognition S. Sen, Associate Professor, was awarded the EPSRC Professorship Award (1998), UK; the JSPS Professorship Award (1999), Japan; the Junior Membership Award (1999), Isaac Newton Institute for Mathematical Sciences, Cambridge, UK; and the Associateship Award (1999–2005), ICTP, Trieste, Italy. S.R. Mohanty, presently assistant professor, was awarded a PhD degree by the University of Delhi for his thesis entitled "X-ray studies on dense plasma focus and plasma processing". M. Kakati, Research Scientist, was awarded a Senior Research Fellowship of the Council of Scientific & Industrial Research (1999–2001). K. R. Rajkhowa was awarded the Plasma Science Society of India Fellowship in 1999. B.J. Saikia, Research Scientist, was awarded a Japan Society for the Promotion of Science post-doctoral fellowship for two years in 1999. B. Kakati, Research Scholar, was awarded the BUTI Young Scientist Award in 2011. ITER-India ITER will be built mostly through in-kind contributions from the participant countries, in the form of components manufactured and then delivered/installed at ITER. ITER-India is the Indian Domestic Agency (DA), formed with the responsibility of providing the Indian contribution to ITER. See also Aditya (tokamak) Indira Gandhi Centre for Atomic Research (IGCAR) Department of Atomic Energy References External links Official website of the Institute for Plasma Research Aditya tokamak SST-1 Tokamak Facilitation Centre for Industrial Plasma Technologies Atomic and nuclear energy research in India Nuclear technology in India Homi Bhabha National Institute Plasma physics facilities Research institutes in Ahmedabad Research institutes in Gujarat 1986 establishments in Gujarat Research institutes established in 1986
Institute for Plasma Research
[ "Physics" ]
2,036
[ "Plasma physics facilities", "Plasma physics" ]
4,261,562
https://en.wikipedia.org/wiki/Adaptive%20sort
A sorting algorithm falls into the adaptive sort family if it takes advantage of existing order in its input. It benefits from the presortedness in the input sequence – or a limited amount of disorder for various definitions of measures of disorder – and sorts faster. Adaptive sorting is usually performed by modifying existing sorting algorithms. Motivation Comparison-based sorting algorithms have traditionally aimed at achieving the optimal bound of O(n log n) for time complexity. Adaptive sort takes advantage of the existing order of the input to try to achieve better times, so that the time taken by the algorithm to sort is a smoothly growing function of the size of the sequence and the disorder in the sequence. In other words, the more presorted the input is, the faster it should be sorted. This is an attractive feature for a sorting algorithm because sequences that are nearly sorted are common in practice. Thus, the performance of existing sorting algorithms can be improved by taking into account the existing order in the input. Most sorting algorithms that do optimally well in the worst case, notably heap sort and merge sort, do not take existing order within their input into account, although this deficiency is easily rectified in the case of merge sort by checking if the last element of the left-hand group is less than (or equal to) the first element of the right-hand group, in which case a merge operation may be replaced by simple concatenation – a modification that is well within the scope of making an algorithm adaptive. Examples A classic example of an adaptive sorting algorithm is insertion sort. In this sorting algorithm, the input is scanned from left to right, repeatedly finding the position of the current item, and inserting it into an array of previously sorted items. Pseudo-code for the insertion sort algorithm follows (array X is zero-based): procedure Insertion Sort (X): for j = 1 to length(X) - 1 do t ← X[j] i ← j while i > 0 and X[i - 1] > t do X[i] ← X[i - 1] i ← i - 1 end X[i] ← t end The performance of this algorithm can be described in terms of the number of inversions in the input: the running time is then roughly equal to n + d, where d is the number of inversions. Using this measure of presortedness – relative to the number of inversions – insertion sort takes less time to sort the closer the array of data is to being sorted. Other examples of adaptive sorting algorithms are adaptive heap sort, adaptive merge sort, patience sort, Shellsort, smoothsort, splaysort, Timsort, and Cartesian tree sorting. See also Sorting algorithms References Sorting algorithms
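A minimal runnable Python version of the pseudocode above (the function name, variable names, and test inputs are mine, not from the article), with a shift counter added to make the roughly n + d behaviour visible, d being the number of inversions:

# Insertion sort on a copy of the input; `shifts` counts element moves,
# which for insertion sort equals the number of inversions d in the input.
def insertion_sort(xs):
    xs = list(xs)
    shifts = 0
    for j in range(1, len(xs)):
        t = xs[j]
        i = j
        while i > 0 and xs[i - 1] > t:
            xs[i] = xs[i - 1]   # shift the larger element one slot to the right
            i -= 1
            shifts += 1
        xs[i] = t
    return xs, shifts

print(insertion_sort([1, 2, 4, 3, 5, 6, 8, 7]))   # nearly sorted: only 2 shifts
print(insertion_sort([8, 7, 6, 5, 4, 3, 2, 1]))   # fully reversed: 28 shifts

The nearly sorted input needs only 2 shifts while the reversed input needs 28, matching their inversion counts and illustrating why insertion sort is adaptive.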
Adaptive sort
[ "Mathematics" ]
556
[ "Order theory", "Sorting algorithms" ]
4,262,395
https://en.wikipedia.org/wiki/Ewan%20Birney
John Frederick William Birney (known as Ewan Birney) (born 6 December 1972) is joint director of EMBL's European Bioinformatics Institute (EMBL-EBI), in Hinxton, Cambridgeshire, and deputy director general of the European Molecular Biology Laboratory (EMBL). He also serves as non-executive director of Genomics England, chair of the Global Alliance for Genomics and Health (GA4GH) and honorary professor of bioinformatics at the University of Cambridge. Birney has made significant contributions to genomics through his development of innovative bioinformatics and computational biology tools. He previously served as an associate faculty member at the Wellcome Trust Sanger Institute. Education Birney was privately educated at Eton College as an Oppidan Scholar. Before going to university, Birney completed a gap year internship at Cold Spring Harbor Laboratory supervised by James Watson and Adrian Krainer. Birney completed his Bachelor of Arts degree in Biochemistry at the University of Oxford in 1996, where he was an undergraduate student at Balliol College, Oxford. He completed his PhD at the Sanger Institute, supervised by Richard Durbin while he was a postgraduate student at St John's College, Cambridge. His doctoral research used dynamic programming, finite-state machines and probabilistic automata for sequence alignment. While he was a student, he completed internships in the office of the Mayor of Baltimore and also in financial services, working on the valuation of options for the Swiss Bank Corporation. Research and career From 2000 to 2003, Birney organised a scientific wager and sweepstake known as GeneSweep for the genomics community, taking bets on estimates of the total number of genes (and noncoding DNA) in the human genome. Birney is one of the founders of the Ensembl genome browser and other databases, and has played a role in the sequencing of the Human Genome in 2000 and the analysis of genome function in the ENCODE project. He has played a role in annotating the genome sequences of the human, mouse, chicken and several other organisms. His research group focuses on computational genomics and inter-individual differences in humans and other animals. Birney is known for his role in the ENCODE consortium. Prior to the ENCODE project, Birney was involved in the creation of a number of widely used bioinformatics and computational biology tools, either directly (PairWise, GeneWise, GenomeWise), or in collaboration with students and postdocs, e.g. Exonerate (with Guy Slater), Enredo (Javier Herrero), Pecan (Benedict Paten), the Velvet assembler (Daniel Zerbino) and CRAM (Markus Hsi-Yang Fritz, Rasko Leinonen and Vadim Zalunin). Birney has also contributed to several other projects including the Pfam database, InterPro, BioPerl, HMMER, and the Ensembl genome database project. Birney's research group focuses on genomic algorithms and the study of inter-individual differences in both humans and other species. He has supervised several PhD students and postdoctoral researchers who have worked in his laboratory. His research has been funded by the Biotechnology and Biological Sciences Research Council (BBSRC), the Medical Research Council (MRC), the National Human Genome Research Institute (NHGRI), the Wellcome Trust and the European Union. Birney serves as a consultant to Oxford Nanopore Technologies and on the scientific advisory board of the Earlham Institute (formerly TGAC) in Norwich. Since 2022, he has served on the governing board at Eton College.
He has also served on the boards of the Biotechnology and Biological Sciences Research Council (BBSRC), the German Cancer Research Center (DKFZ), The Institute of Cancer Research (ICR), the Ontario Institute for Cancer Research (OICR), the Institut Pasteur and the RIKEN institute. Awards and honours In 2002, Birney was named as one of the MIT Technology Review TR100 top 100 innovators in the world under the age of 35. In 2003, he gave the inaugural Francis Crick Lecture at the Royal Society. In 2005, he was awarded the Overton Prize by the International Society for Computational Biology (ISCB) for his advocacy of open source bioinformatics, contributions to the BioPerl community and leadership of the Ensembl genome annotation project. In 2005 Birney was also awarded the Benjamin Franklin Award in Bioinformatics. Birney was awarded membership of the European Molecular Biology Organization (EMBO) in 2012 and elected a Fellow of the Royal Society (FRS) in 2014. Birney has been awarded honorary Doctor of Science (DSc) degrees: in 2014 from Brunel University London and in 2021 from the University of Tartu, Estonia. In 2015, Birney was elected a Fellow of the Academy of Medical Sciences (FMedSci). Birney was appointed Commander of the Order of the British Empire (CBE) in the 2019 New Year Honours. Personal life Birney married in 2003 and has two children. References Living people Members of the European Molecular Biology Organization People educated at Eton College Alumni of Balliol College, Oxford Alumni of St John's College, Cambridge British bioinformaticians Commanders of the Order of the British Empire Overton Prize winners Fellows of Churchill College, Cambridge Wellcome Trust 1972 births Fellows of the Royal Society Fellows of the International Society for Computational Biology Fellows of the Academy of Medical Sciences (United Kingdom) Human Genome Project scientists
Ewan Birney
[ "Engineering" ]
1,148
[ "Human Genome Project scientists" ]
4,262,587
https://en.wikipedia.org/wiki/Thermodynamic%20diagrams
Thermodynamic diagrams are diagrams used to represent the thermodynamic states of a material (typically a fluid) and the consequences of manipulating this material. For instance, a temperature–entropy diagram (T–s diagram) may be used to demonstrate the behavior of a fluid as it is changed by a compressor. Overview Especially in meteorology they are used to analyze the actual state of the atmosphere derived from the measurements of radiosondes, usually obtained with weather balloons. In such diagrams, temperature and humidity values (represented by the dew point) are displayed with respect to pressure. Thus the diagram gives at first glance the actual atmospheric stratification and vertical water vapor distribution. Further analysis gives the actual base and top height of convective clouds or possible instabilities in the stratification. By estimating the amount of energy supplied by solar radiation, it is possible to predict the 2 m (6.6 ft) temperature, humidity, and wind during the day, the development of the boundary layer of the atmosphere, the occurrence and development of clouds, and the conditions for soaring flight during the day. The main feature of thermodynamic diagrams is the equivalence between the area in the diagram and energy. When air changes pressure and temperature during a process and traces out a closed curve within the diagram, the area enclosed by this curve is proportional to the energy which has been gained or released by the air. Types of thermodynamic diagrams General purpose diagrams include: PV diagram T–s diagram h–s (Mollier) diagram Psychrometric chart Cooling curve Indicator diagram Saturation vapor curve Thermodynamic surface Specific to weather services, there are mainly four different types of diagrams used: Skew-T log-P diagram Tephigram Emagram Stüve diagram All four diagrams are derived from the physical P–alpha diagram which combines pressure (P) and specific volume (alpha) as its basic coordinates. The P–alpha diagram shows a strong deformation of the grid for atmospheric conditions and is therefore not useful in atmospheric sciences. The first three diagrams are constructed from the P–alpha diagram by using appropriate coordinate transformations. The Stüve diagram is not a thermodynamic diagram in a strict sense, since it does not display the energy–area equivalence, but due to its simpler construction it is preferred in education. Another widely-used diagram that does not display the energy–area equivalence is the θ-z diagram (Theta-height diagram), extensively used in boundary layer meteorology. Characteristics Thermodynamic diagrams usually show a net of five different lines: isobars = lines of constant pressure isotherms = lines of constant temperature dry adiabats = lines of constant potential temperature representing the temperature of a rising parcel of dry air saturated adiabats or pseudoadiabats = lines representing the temperature of a rising parcel saturated with water vapor mixing ratio = lines representing the dewpoint of a rising parcel From these, the dry adiabatic lapse rate (DALR) and the moist adiabatic lapse rate (MALR) are obtained. With the help of these lines, parameters such as the cloud condensation level, the level of free convection, the onset of cloud formation, etc. can be derived from the soundings.
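As a small, hedged illustration of how one of these line families is defined (the formula is the standard definition of potential temperature; the function name and the numbers are assumptions, not taken from the article), dry adiabats are lines of constant potential temperature θ = T·(p0/p)^(Rd/cp):

# Potential temperature in kelvin; kappa = R_d / c_p ~ 0.2854 for dry air.
def potential_temperature(T_kelvin, p_hpa, p0_hpa=1000.0, kappa=0.2854):
    return T_kelvin * (p0_hpa / p_hpa) ** kappa

# Air at 700 hPa and 0 degrees Celsius lies on roughly the 302 K dry adiabat.
print(round(potential_temperature(273.15, 700.0), 1))

Every point on a given dry adiabat maps back to the same temperature when brought adiabatically to the reference pressure p0 = 1000 hPa.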
Example The path, or series of states, through which a system passes from an initial equilibrium state to a final equilibrium state can be viewed graphically on pressure–volume (P-V), pressure–temperature (P-T), and temperature–entropy (T-s) diagrams. There are an infinite number of possible paths from an initial point to an end point in a process. In many cases the path matters; however, changes in the thermodynamic properties depend only on the initial and final states and not upon the path. Consider a gas in a cylinder with a free-floating piston resting on top of a volume of gas at a temperature T1. If the gas is heated so that its temperature rises to T2 while the piston is allowed to rise, as in Figure 1, then the pressure is kept the same in this process, because the free-floating piston is allowed to rise, making the process an isobaric or constant-pressure process. This process path is a straight horizontal line from state one to state two on a P-V diagram. It is often valuable to calculate the work done in a process. The work done in a process is the area beneath the process path on a P-V diagram (Figure 2). If the process is isobaric, then the work done on the piston is easily calculated. For example, if the gas expands slowly against the piston, the work done by the gas to raise the piston is the force F times the distance d. But the force is just the pressure P of the gas times the area A of the piston, F = PA. Thus W = Fd W = PAd W = P(V2 − V1) Now suppose that the piston is not able to move smoothly within the cylinder due to static friction with the walls of the cylinder. Assuming that the temperature is increased slowly, the process path is no longer straight and no longer isobaric: the gas instead undergoes an isometric process until the force exceeds the frictional force, and then undergoes an isothermal process back to an equilibrium state. This process is repeated until the end state is reached. See Figure 3. The work done on the piston in this case would be different, due to the additional work required to overcome the friction. The work done due to friction would be the difference between the work done along these two process paths. Many engineers neglect friction at first in order to generate a simplified model. For a more accurate description, the height of the highest point (the maximum pressure needed to surpass the static friction) would be proportional to the coefficient of friction, and the slope going back down to the normal pressure would be the same as for an isothermal process, provided the temperature is increased at a slow enough rate. Another path in this process is an isometric process. This is a process where the volume is held constant, which appears as a vertical line on a P-V diagram (Figure 3). Since the piston is not moving during this process, no work is done. See also Thermodynamics Timeline of thermodynamics References The Physics of Atmospheres by John Houghton, Cambridge University Press 2002. Especially chapter 3.3 deals solely with the tephigram. German version of Handbook of meteorological soaring flight from the Organisation Scientifique et Technique Internationale du Vol à Voile (OSTIV) (chapter 2.3) Further reading Handbook of meteorological forecasting for soaring flight, WMO Technical Note No. 158, especially chapter 2.3. External links www.met.tamu.edu/../aws-tr79-006.pdf A very large technical manual (164 pages) on how to use the diagrams. 
www.comet.ucar.edu/../sld010.htm A course on how to use diagrams at Comet, the 'Cooperative Program for Operational Meteorology, Education and Training'. diagrams Diagrams
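Returning to the isobaric case in the Example section above, here is a short numeric illustration of the formula W = P(V2 − V1), using assumed values that are not from the article:

# Work done by a gas expanding at constant pressure: W = P * (V2 - V1).
P = 100e3     # pressure in pascals (100 kPa), held constant by the free piston
V1 = 0.010    # initial volume in cubic metres
V2 = 0.030    # final volume in cubic metres
W = P * (V2 - V1)
print(f"W = {W:.0f} J")   # 2000 J of work done by the gas on the piston

The result is simply the rectangular area under the horizontal process path on the P-V diagram.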
Thermodynamic diagrams
[ "Physics", "Chemistry", "Mathematics" ]
1,472
[ "Thermodynamics", "Dynamical systems" ]
4,262,738
https://en.wikipedia.org/wiki/Bernard%20Hollander
Bernard Hollander (1864 – 6 February 1934) was a London psychiatrist and one of the main proponents of the new interest in phrenology in the early 20th century. Life and work Hollander was born in Vienna, and settled in London in 1883, where he attended King's College. After graduation he was appointed to the post of physician at the British Hospital for Mental Disorders and Brain Diseases. Hollander was naturalized a British citizen in 1894. Hollander first received critical acclaim for his Positive Philosophy of the Mind (L. N. Fowler, 1891). His main works, The Mental Function of the Brain (1901) and Scientific Phrenology (1902), are an appraisal of the teachings of Franz Joseph Gall. Hollander also introduced a quantitative approach to the phrenological diagnosis, defining a methodology for measuring the skull and comparing the measurements with statistical averages. Hollander founded the Ethological Society, and was the first editor of the Ethological Journal. Notes Further reading Works by Hollander: The revival of phrenology (London and New York, G. P. Putnam's sons, 1901). Scientific Phrenology: being a practical mental science and guide to human character (London, Grant Richards, 1902) The mental symptoms of brain disease: an aid to the surgical treatment of insanity, due to injury, haemorrhage, tumours, and other circumscribed lesions of the brain (London, Rebman, 1910). Nervous disorders of men; the modern psychological conception of their causes, effects, and rational treatment (London, K. Paul, Trench, Trübner & Co. [etc.], 1916). Abnormal children : nervous, mischievous, precocious, and backward (London : K. Paul, Trench, Trubner, 1916) In search of the soul: and the mechanism of thought, emotion, and conduct. Volume 1, Volume 2 (London: Kegan Paul, Trench, Trubner, 1920). The psychology of misconduct, vice, and crime (London : G. Allen & Unwin, ltd., 1922). Methods and Uses of Hypnosis & Self-Hypnosis: A Treatise on the Powers of the Subconscious Mind (London : G. Allen & Unwin, ltd., 1928). About Hollander: Culbertson, J.C. (ed.) (1890) "The Old and New Phrenologies" The Cincinnati Lancet-Clinic vol. 63 (New Series, vol. 24) pp. 176-177, reprinted from the British Medical Journal. 1864 births 1934 deaths Phrenologists English psychiatrists Ethologists Emigrants from Austria-Hungary to the United Kingdom
Bernard Hollander
[ "Biology" ]
560
[ "Ethology", "Behavior", "Ethologists" ]
4,262,791
https://en.wikipedia.org/wiki/Animal%20model%20of%20ischemic%20stroke
Animal models of ischemic stroke are procedures inducing cerebral ischemia. The aim is the study of basic processes or potential therapeutic interventions in this disease, and the extension of the pathophysiological knowledge on and/or the improvement of medical treatment of human ischemic stroke. Ischemic stroke has a complex pathophysiology involving the interplay of many different cells and tissues such as neurons, glia, endothelium, and the immune system. These events cannot be mimicked satisfactorily in vitro yet. Thus a large portion of stroke research is conducted on animals. Overview Several models in different species are currently known to produce cerebral ischemia. Global ischemia models, both complete and incomplete, tend to be easier to perform. However, they are less immediately relevant to human stroke than the focal stroke models, because global ischemia is not a common feature of human stroke. However, in various settings global ischemia is also relevant, e.g. in global anoxic brain damage due to cardiac arrest. Different species also vary in their susceptibility to the various types of ischemic insults. An example is gerbils. They do not have a Circle of Willis and stroke can be induced by common carotid artery occlusion alone. Mechanisms of inducing ischemic stroke Some of the mechanisms which have been used are: Complete global ischemia Decapitation Aorta/vena cava occlusion External neck tourniquet or cuff Cardiac arrest Incomplete global ischemia Hemorrhage or hypotension Hypoxic ischemia Intracranial hypertension and common carotid artery occlusion Two-vessel occlusion and hypotension Four-vessel occlusion Unilateral common carotid artery occlusion (in some species only) Focal cerebral ischemia Endothelin-1-induced constriction of arteries and veins Middle cerebral artery occlusion Spontaneous brain infarction (in spontaneously hypertensive rats) Macrosphere embolization Multifocal cerebral ischemia Blood clot embolization Microsphere embolization Photothrombosis Hypoxic Ischemia models One of the most commonly used animal models of hypoxic ischemia was originally described by Levine in 1960 and later refined by Rice et al., in 1981. This approach is useful to study hypoxic ischemia in the developing brain, since newborn rat pups are utilized in this model. Briefly, 7 day old rat pups undergo a permanent unilateral carotid artery ligation with a subsequent 3 hour exposure to a hypoxic environment (8% oxygen). This model creates a unilateral infarct in the hemisphere ipsilateral to the ligation, since the hypoxia alone is subthreshold for injury at this age. The area of injury is typically concentrated in periventricular regions of the brain, especially cortical and hippocampal areas. Focal ischemia models They are divided into techniques including reperfusion of the ischemic tissue (transient focal cerebral ischemia) and those without reperfusion (permanent focal cerebral ischemia). 
The following models are established: Endothelin-1-induced constriction of arteries and veins Middle cerebral artery occlusion (MCAO) MCAO avoiding craniotomy Embolic middle cerebral artery occlusion Endovascular filament middle cerebral artery occlusion (transient or permanent) MCAO involving craniotomy Permanent transcranial middle cerebral artery occlusion Transient transcranial middle cerebral artery occlusion Direct tissue damage Cerebrocortical photothrombosis Endothelin-1-induced constriction of arteries and veins Endothelin-1 is a potent vasoconstrictor which is produced endogenously during ischemic stroke and which contributes to overall loss of cells and disability. Exogenous endothelin-1 can also be used to induce stroke and cell death after sustained vasoconstriction with reperfusion. It can be microinjected to induce focal stroke in small tissue volumes (e.g., cortical grey matter, white matter or subcortical tissue) or after injection near the middle cerebral artery. It is often used as a model of focal stroke to evaluate candidate pro-regenerative therapies. One advantage of this model of stroke is that it causes highly reproducible infarcts. Another benefit is that it can be used in elderly rats with only very low resulting mortality. Embolic middle cerebral artery occlusion Middle cerebral artery (MCA) occlusion is achieved in this model by injecting particles like blood clots (thromboembolic MCAO) or artificial spheres into the carotid artery of animals as an animal model of ischemic stroke. Thromboembolic MCAO is achieved either by injecting clots that were formed in vitro or by endovascular instillation of thrombin for in situ clotting. The thromboembolic model is closest to the pathophysiology of human cardioembolic stroke. When injecting spheres into the cerebral circulation, their size determines the pattern of brain infarction: macrospheres (300–400 μm) induce infarcts similar to those achieved by occlusion of the proximal MCA, whereas microsphere (~50 μm) injection results in distal, diffuse embolism. However, the quality of MCAO – and thus the volume of brain infarcts – is very variable, a fact which is further aggravated by a certain rate of spontaneous lysis of injected blood clots. Endovascular filament middle cerebral artery occlusion The technique of endovascular filament (intraluminal suture) MCAO as an animal model of ischemic stroke was described first by Koizumi. It is applied to rats and mice. A piece of surgical filament is introduced into the internal carotid artery and advanced until the tip occludes the origin of the middle cerebral artery, resulting in a cessation of blood flow and subsequent brain infarction in its area of supply. If the suture is removed after a certain interval, reperfusion is achieved (transient MCAO); if the filament is left in place, the procedure is suitable as a model of permanent MCAO, too. The most common modification is based on Longa (1989), who described filament introduction via the external carotid artery, allowing closure of the access point with preserved blood supply via the common and internal carotid artery to the brain after the removal of the filament. Known pitfalls of this method are insufficient occlusion, subarachnoid hemorrhage, hyperthermia, and necrosis of the ipsilateral extracranial tissue. Filament MCAO is not applicable to all rat strains. 
Permanent transcranial middle cerebral artery occlusion In this animal model of ischemic stroke, the middle cerebral artery (MCA) is surgically dissected and subsequently permanently occluded, e.g. by electrocautery or ligation. Occlusion can be performed on the proximal or distal part of the MCA. In the latter, ischemic damage is restricted to the cerebral cortex. MCAO can be combined with temporary or permanent common carotid artery occlusion. These models require a small craniotomy. Transient transcranial middle cerebral artery occlusion The technique of modeling ischemic stroke by transient transcranial MCAO is similar to that of permanent transcranial MCAO, with the MCA being reperfused after a defined period of focal cerebral ischemia. As with permanent MCAO, a craniotomy is required, and common carotid artery (CCA) occlusion can be combined with it. Occluding one MCA and both CCAs is referred to as the three-vessel occlusion model of focal cerebral ischemia. Cerebrocortical photothrombosis Photothrombotic models of ischemic stroke use local intravascular photocoagulation of circumscribed cortical areas. After intravenous injection of photosensitive dyes like rose-bengal, the brain is irradiated through the skull via a small hole or a thinned cranial window, leading to photochemical occlusion of the irradiated vessels with secondary tissue ischemia. This approach was initially proposed by Rosenblum and El-Sabban in 1977, and improved by Watson in 1985 in the rat brain. This method has also been adapted for use in mice. See also animal models of stroke References Ischemia Ischemic stroke Stroke
Animal model of ischemic stroke
[ "Biology" ]
1,850
[ "Model organisms", "Animal models" ]
4,262,792
https://en.wikipedia.org/wiki/Perfectly%20matched%20layer
A perfectly matched layer (PML) is an artificial absorbing layer for wave equations, commonly used to truncate computational regions in numerical methods to simulate problems with open boundaries, especially in the FDTD and FE methods. The key property of a PML that distinguishes it from an ordinary absorbing material is that it is designed so that waves incident upon the PML from a non-PML medium do not reflect at the interface—this property allows the PML to strongly absorb outgoing waves from the interior of a computational region without reflecting them back into the interior. PML was originally formulated by Berenger in 1994 for use with Maxwell's equations, and since that time there have been several related reformulations of PML for both Maxwell's equations and for other wave-type equations, such as elastodynamics, the linearized Euler equations, Helmholtz equations, and poroelasticity. Berenger's original formulation is called a split-field PML, because it splits the electromagnetic fields into two unphysical fields in the PML region. A later formulation that has become more popular because of its simplicity and efficiency is called uniaxial PML or UPML, in which the PML is described as an artificial anisotropic absorbing material. Although both Berenger's formulation and UPML were initially derived by manually constructing the conditions under which incident plane waves do not reflect from the PML interface from a homogeneous medium, both formulations were later shown to be equivalent to a much more elegant and general approach: stretched-coordinate PML. In particular, PMLs were shown to correspond to a coordinate transformation in which one (or more) coordinates are mapped to complex numbers; more technically, this is actually an analytic continuation of the wave equation into complex coordinates, replacing propagating (oscillating) waves by exponentially decaying waves. This viewpoint allows PMLs to be derived for inhomogeneous media such as waveguides, as well as for other coordinate systems and wave equations. Technical description Specifically, for a PML designed to absorb waves propagating in the x direction, the following transformation is included in the wave equation. Wherever an x derivative ∂/∂x appears in the wave equation, it is replaced by (1/(1 + iσ(x)/ω)) ∂/∂x, where ω is the angular frequency and σ(x) is some function of x. Wherever σ is positive, propagating waves are attenuated: a planewave exp(i(kx − ωt)) propagating in the +x direction (for k > 0) becomes, after applying the transformation (analytic continuation) to complex coordinates x → x + (i/ω)∫σ(x′)dx′, the decaying wave exp(i(kx − ωt))·exp(−(k/ω)∫σ(x′)dx′). The same coordinate transformation causes waves to attenuate whenever their x dependence is of the form exp(ikx) for some propagation constant k: this includes planewaves propagating at some angle with the x axis and also transverse modes of a waveguide. The above coordinate transformation can be left as-is in the transformed wave equations, or can be combined with the material description (e.g. the permittivity and permeability in Maxwell's equations) to form a UPML description. The coefficient σ/ω depends upon frequency—this is so the attenuation rate is proportional to k/ω, which is independent of frequency in a homogeneous material (not including material dispersion, e.g. for vacuum) because of the dispersion relation between ω and k. However, this frequency-dependence means that a time domain implementation of PML, e.g. 
in the FDTD method, is more complicated than for a frequency-independent absorber, and involves the auxiliary differential equation (ADE) approach (equivalently, i/ω appears as an integral or convolution in time domain). Perfectly matched layers, in their original form, only attenuate propagating waves; purely evanescent waves (exponentially decaying fields) oscillate in the PML but do not decay more quickly. However, the attenuation of evanescent waves can also be accelerated by including a real coordinate stretching in the PML: this corresponds to making σ in the above expression a complex number, where the imaginary part yields a real coordinate stretching that causes evanescent waves to decay more quickly. Limitations of perfectly matched layers PML is widely used and has become the absorbing boundary technique of choice in much of computational electromagnetism. Although it works well in most cases, there are a few important cases in which it breaks down, suffering from unavoidable reflections or even exponential growth. One caveat with perfectly matched layers is that they are only reflectionless for the exact, continuous wave equation. Once the wave equation is discretized for simulation on a computer, some small numerical reflections appear (which vanish with increasing resolution). For this reason, the PML absorption coefficient σ is typically turned on gradually from zero (e.g. quadratically) over a short distance on the scale of the wavelength of the wave. In general, any absorber, whether PML or not, is reflectionless in the limit where it turns on sufficiently gradually (and the absorbing layer becomes thicker), but in a discretized system the benefit of PML is to reduce the finite-thickness "transition" reflection by many orders of magnitude compared to a simple isotropic absorption coefficient. In certain materials, there are "backward-wave" solutions in which group and phase velocity are opposite to one another. This occurs in "left-handed" negative index metamaterials for electromagnetism and also for acoustic waves in certain solid materials, and in these cases the standard PML formulation is unstable: it leads to exponential growth rather than decay, simply because the sign of k is flipped in the analysis above. Fortunately, there is a simple solution in a left-handed medium (for which all waves are backwards): merely flip the sign of σ. A complication, however, is that physical left-handed materials are dispersive: they are only left-handed within a certain frequency range, and therefore the σ coefficient must be made frequency-dependent. Unfortunately, even without exotic materials, one can design certain waveguiding structures (such as a hollow metal tube with a high-index cylinder in its center) that exhibit both backwards- and forwards-wave solutions at the same frequency, such that any sign choice for σ will lead to exponential growth, and in such cases PML appears to be irrecoverably unstable. Another important limitation of PML is that it requires that the medium be invariant in the direction orthogonal to the boundary, in order to support the analytic continuation of the solution to complex coordinates (the complex "coordinate stretching"). As a consequence, the PML approach is no longer valid (no longer reflectionless at infinite resolution) in the case of periodic media (e.g. photonic crystals or phononic crystals) or even simply a waveguide that enters the boundary at an oblique angle. 
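To make the coordinate-stretching viewpoint described above concrete, here is a small illustrative Python sketch (my own construction, not from the article; the quadratic σ profile and all numbers are assumptions). It evaluates a plane wave on the complex-stretched coordinate and shows that the wave keeps unit amplitude where σ = 0 and decays exponentially inside the layer where σ > 0.

# Plane wave exp(i k x~) on the stretched coordinate x~ = x + (i/omega) * integral of sigma,
# with omega = k (vacuum-like dispersion) and a quadratic sigma ramp for x > 1.
import numpy as np

k = omega = 2 * np.pi
x = np.linspace(0.0, 2.0, 401)
sigma = np.where(x > 1.0, 20.0 * (x - 1.0) ** 2, 0.0)

# trapezoidal cumulative integral of sigma, then the complex stretched coordinate
int_sigma = np.concatenate(([0.0], np.cumsum(0.5 * (sigma[1:] + sigma[:-1]) * np.diff(x))))
x_stretched = x + 1j * int_sigma / omega

wave = np.exp(1j * k * x_stretched)
print("amplitude at x = 1.0:", abs(wave[x.searchsorted(1.0)]))   # ~1, just outside the layer
print("amplitude at x = 2.0:", abs(wave[-1]))                    # ~1e-3, deep inside the layer

The gradual (here quadratic) ramp of σ mirrors the practice mentioned above of turning the absorption on slowly so that the discretized transition reflection stays small.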
See also Cagniard–de Hoop method References External links Animation on the effects of PML (YouTube) Numerical differential equations Partial differential equations Wave mechanics Computational electromagnetics
Perfectly matched layer
[ "Physics" ]
1,447
[ "Physical phenomena", "Computational electromagnetics", "Classical mechanics", "Computational physics", "Waves", "Wave mechanics" ]
4,263,176
https://en.wikipedia.org/wiki/Launch%20and%20Early%20Orbit%20phase
In spacecraft operations, Launch and Early Orbit Phase (LEOP) is one of the most critical phases of a mission. Spacecraft operations engineers take control of the satellite after it separates from the launch vehicle. LEOP generally concludes once the satellite is safely positioned in its final orbit. During this period, operations staff typically work 24 hours a day to activate, monitor and control the various subsystems of the satellite, including the deployment of any satellite appendages (such as antennas, solar arrays, reflectors, and radiators), and undertake critical orbit and attitude control manoeuvres. Extra support staff are typically on hand and on-call during LEOP, relative to staffing during normal operations. For geostationary satellites, the launch vehicle typically carries the spacecraft to Geostationary Transfer Orbit, or GTO. From this elliptical orbit, the LEOP generally includes a sequence of apogee engine firings to reach the circular geostationary orbit. Autonomous commissioning For some spacecraft, such as the Intuitive Machines Nova-C lunar lander, initial commissioning is performed autonomously. See also Ground segment Satellite space segment References ESA Spacecraft Operations website Spaceflight concepts Spaceflight
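As a rough, hedged illustration of the apogee firings mentioned above (textbook orbital values; the assumed 250 km perigee altitude and the variable names are mine, not from the article), the velocity change needed to circularise a GTO into geostationary orbit can be estimated with the vis-viva equation v² = μ(2/r − 1/a):

# Apogee burn to circularise a geostationary transfer orbit into GEO.
from math import sqrt

mu = 3.986004e14          # Earth's gravitational parameter, m^3/s^2
r_per = 6_378e3 + 250e3   # GTO perigee radius (assumed 250 km altitude), m
r_geo = 42_164e3          # geostationary orbit radius, m

a_gto = (r_per + r_geo) / 2                  # semi-major axis of the transfer orbit
v_apo = sqrt(mu * (2 / r_geo - 1 / a_gto))   # speed at GTO apogee (vis-viva)
v_geo = sqrt(mu / r_geo)                     # circular GEO speed

print(f"delta-v at apogee ~ {v_geo - v_apo:.0f} m/s")   # on the order of 1.5 km/s

In practice, this total is delivered over the sequence of apogee engine firings described above rather than in a single burn.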
Launch and Early Orbit phase
[ "Astronomy" ]
236
[ "Spaceflight", "Outer space" ]
4,263,231
https://en.wikipedia.org/wiki/Replisome
The replisome is a complex molecular machine that carries out replication of DNA. The replisome first unwinds double stranded DNA into two single strands. For each of the resulting single strands, a new complementary sequence of DNA is synthesized. The total result is formation of two new double stranded DNA sequences that are exact copies of the original double stranded DNA sequence. In terms of structure, the replisome is composed of two replicative polymerase complexes, one of which synthesizes the leading strand, while the other synthesizes the lagging strand. The replisome is composed of a number of proteins including helicase, RFC, PCNA, gyrase/topoisomerase, SSB/RPA, primase, DNA polymerase III, RNAse H, and DNA ligase. Overview of prokaryotic DNA replication process For prokaryotes, each dividing nucleoid (region containing genetic material which is not a nucleus) requires two replisomes for bidirectional replication. The two replisomes continue replication at both forks in the middle of the cell. Finally, as the termination site replicates, the two replisomes separate from the DNA. The replisome remains at a fixed, midcell location in the cell, attached to the membrane, and the template DNA threads through it. DNA is fed through the stationary pair of replisomes located at the cell membrane. Overview of eukaryotic DNA replication process For eukaryotes, numerous replication bubbles form at origins of replication throughout the chromosome. As with prokaryotes, two replisomes are required, one at each replication fork located at the terminus of the replication bubble. Because of significant differences in chromosome size, and the associated complexities of highly condensed chromosomes, various aspects of the DNA replication process in eukaryotes, including the terminal phases, are less well-characterised than for prokaryotes. Challenges of DNA replication The replisome is a system in which various factors work together to solve the structural and chemical challenges of DNA replication. Chromosome size and structure varies between organisms, but since DNA molecules are the reservoir of genetic information for all forms of life, many replication challenges and solutions are the same for different organisms. As a result, the replication factors that solve these problems are highly conserved in terms of structure, chemistry, functionality, or sequence. General structural and chemical challenges include the following: Efficient replisome assembly at origins of replication (origin recognition complexes or specific replication origin sequences in some organisms) Separating the duplex into the leading and lagging template strands (helicases) Protecting the leading and lagging strands from damage after duplex separation (SSB and RPA factors) Priming of the leading and lagging template strands (primase or DNA polymerase alpha) Ensuring processivity (clamp loading factors, ring-shaped clamp proteins, strand binding proteins) High-fidelity DNA replication (DNA polymerase III, DNA polymerase delta, DNA polymerase epsilon. All have intrinsically low error rates because of their structure and chemistry.) 
Error correction (replicative polymerase active sites sense errors; 3' to 5' exonuclease domains of replicative polymerases fix errors) Synchronised polymerisation of leading and lagging strands despite anti-parallel structure (replication fork structure, dimerisation of replicative polymerases) Primer removal (DNA polymerase I, RNAse H, flap endonucleases such as FEN1, or other DNA repair factors) Formation of phosphodiester bonds at gaps between Okazaki fragments (ligase) In general, the challenges of DNA replication involve the structure of the molecules, the chemistry of the molecules, and, from a systems perspective, the underlying relationships between the structure and the chemistry. Solving the challenges of DNA replication Many of the structural and chemical problems associated with DNA replication are managed by molecular machinery that is highly conserved across organisms. This section discusses how replisome factors solve the structural and chemical challenges of DNA replication. Replisome assembly DNA replication begins at sites called origins of replication. In organisms with small genomes and simple chromosome structure, such as bacteria, there may be only a few origins of replication on each chromosome. Organisms with large genomes and complex chromosome structure, such as humans, may have hundreds, or even thousands, of origins of replication spread across multiple chromosomes. DNA structure varies with time, space, and sequence, and it is thought that these variations, in addition to their role in gene expression, also play active roles in replisome assembly during DNA synthesis. Replisome assembly at an origin of replication is roughly divided into three phases. For bacteria: Formation of pre-replication complex. DnaA binds to the origin recognition complex and separates the duplex. This attracts DnaB helicase and DnaC, which maintain the replication bubble. Formation of pre-initiation complex. SSB binds to the single strand and then gamma (clamp loading factor) binds to SSB. Formation of initiation complex. Gamma deposits the sliding clamp (beta) and attracts DNA polymerase III. For eukaryotes: Formation of pre-replication complex. MCM factors bind to the origin recognition complex and separate the duplex, forming a replication bubble. Formation of pre-initiation complex. Replication protein A (RPA) binds to the single stranded DNA and then RFC (clamp loading factor) binds to RPA. Formation of initiation complex. RFC deposits the sliding clamp (PCNA) and attracts DNA polymerases such as alpha (α), delta (δ), epsilon (ε). For both bacteria and eukaryotes, the next stage is generally referred to as 'elongation', and it is during this phase that the majority of DNA synthesis occurs. Separating the duplex DNA is a duplex formed by two anti-parallel strands. Following Meselson-Stahl, the process of DNA replication is semi-conservative, whereby during replication the original DNA duplex is separated into two daughter strands (referred to as the leading and lagging strand templates). Each daughter strand becomes part of a new DNA duplex. Factors generically referred to as helicases unwind the duplex. Helicases Helicase is an enzyme which breaks hydrogen bonds between the base pairs in the middle of the DNA duplex. Its doughnut like structure wraps around DNA and separates the strands ahead of DNA synthesis. In eukaryotes, the Mcm2-7 complex acts as a helicase, though which subunits are required for helicase activity is not entirely clear. 
This helicase translocates in the same direction as the DNA polymerase (3' to 5' with respect to the template strand). In prokaryotic organisms, the helicases are better identified and include DnaB, which moves 5' to 3' on the strand opposite the DNA polymerase. Unwinding supercoils and decatenation As helicase unwinds the double helix, topological changes induced by the rotational motion of the helicase lead to supercoil formation ahead of the helicase (much as a piece of thread coils up when it is twisted). Gyrase and topoisomerases Gyrase (a form of topoisomerase) relaxes and undoes the supercoiling caused by helicase. It does this by cutting the DNA strands, allowing the DNA to rotate and release the supercoil, and then rejoining the strands. Gyrase is most commonly found upstream of the replication fork, where the supercoils form. Protecting the leading and lagging strands Single-stranded DNA is highly unstable and can form hydrogen bonds with itself that are referred to as 'hairpins' (or the single strand can improperly bond to the other single strand). To counteract this instability, single-strand binding proteins (SSB in prokaryotes and Replication protein A in eukaryotes) bind to the exposed bases to prevent improper base pairing. Each exposed strand is flexible, so there is considerable structural opportunity for improper pairing; the underlying chemical problem is the potential for hydrogen bond formation between unrelated bases. Binding proteins stabilise the single strand and protect it from damage caused by unlicensed chemical reactions. The combination of a single strand and its binding proteins serves as a better substrate for replicative polymerases than a naked single strand (binding proteins provide extra thermodynamic driving force for the polymerisation reaction). Strand binding proteins are removed by replicative polymerases. Priming the leading and lagging strands From both a structural and chemical perspective, a single strand of DNA by itself (and the associated single strand binding proteins) is not suitable for polymerisation. This is because the chemical reactions catalysed by replicative polymerases require a free 3' OH in order to initiate nucleotide chain elongation. In terms of structure, the conformation of replicative polymerase active sites (which is highly related to the inherent accuracy of replicative polymerases) means these factors cannot start chain elongation without a pre-existing chain of nucleotides, because no known replicative polymerase can start chain elongation de novo. Priming enzymes (which are DNA-dependent RNA polymerases) solve this problem by creating an RNA primer on the leading and lagging strands. The leading strand is primed once, and the lagging strand is primed approximately every 1000 (+/- 200) base pairs (one primer for each Okazaki fragment on the lagging strand). Each RNA primer is approximately 10 bases long. The primer-template junction contains a free 3' OH that is chemically suitable for the reaction catalysed by replicative polymerases, and its "overhang" configuration is structurally suitable for chain elongation by a replicative polymerase. Thus, replicative polymerases can begin chain elongation at the primer-template junction. Primase In prokaryotes, the primase creates an RNA primer at the beginning of the newly separated leading and lagging strands.
DNA polymerase alpha In eukaryotes, DNA polymerase alpha creates an RNA primer at the beginning of the newly separated leading and lagging strands, and, unlike primase, DNA polymerase alpha also synthesizes a short chain of deoxynucleotides after creating the primer. Ensuring processivity and synchronisation Processivity refers to both speed and continuity of DNA replication, and high processivity is a requirement for timely replication. High processivity is in part ensured by ring-shaped proteins referred to as 'clamps' that help replicative polymerases stay associated with the leading and lagging strands. There are other variables as well: from a chemical perspective, strand binding proteins stimulate polymerisation and provide extra thermodynamic energy for the reaction. From a systems perspective, the structure and chemistry of many replisome factors (such as the AAA+ ATPase features of the individual clamp loading sub-units, along with the helical conformation they adopt), and the associations between clamp loading factors and other accessory factors, also increases processivity. To this point, according to research by Kuriyan et al., due to their role in recruiting and binding other factors such as priming enzymes and replicative polymerases, clamp loaders and sliding clamps are at the heart of the replisome machinery. Research has found that clamp loading and sliding clamp factors are absolutely essential to replication, which explains the high degree of structural conservation observed for clamp loading and sliding clamp factors. This architectural and structural conservation is seen in organisms as diverse as bacteria, phages, yeast, and humans. That such a significant degree of structural conservation is observed without sequence homology further underpins the significance of these structural solutions to replication challenges. Clamp loader Clamp loader is a generic term that refers to replication factors called gamma (bacteria) or RFC (eukaryotes). The combination of template DNA and primer RNA is referred to as 'A-form DNA' and it is thought that clamp loading replication proteins (helical heteropentamers) want to associate with A-form DNA because of its shape (the structure of the major/minor groove) and chemistry (patterns of hydrogen bond donors and acceptors). Thus, clamp loading proteins associate with the primed region of the strand which causes hydrolysis of ATP and provides energy to open the clamp and attach it to the strand. Sliding clamp Sliding clamp is a generic term that refers to ring-shaped replication factors called beta (bacteria) or PCNA (eukaryotes and archaea). Clamp proteins attract and tether replicative polymerases, such as DNA polymerase III, in order to extend the amount of time that a replicative polymerase stays associated with the strand. From a chemical perspective, the clamp has a slightly positive charge at its centre that is a near perfect match for the slightly negative charge of the DNA strand. In some organisms, the clamp is a dimer, and in other organisms the clamp is a trimer. Regardless, the conserved ring architecture allows the clamp to enclose the strand. Dimerisation of replicative polymerases Replicative polymerases form an asymmetric dimer at the replication fork by binding to sub-units of the clamp loading factor. This asymmetric conformation is capable of simultaneously replicating the leading and lagging strands, and the collection of factors that includes the replicative polymerases is generally referred to as a holoenzyme. 
However, significant challenges remain: the leading and lagging strands are anti-parallel. This means that nucleotide synthesis on the leading strand naturally occurs in the 5' to 3' direction. However, the lagging strand runs in the opposite direction and this presents quite a challenge since no known replicative polymerases can synthesise DNA in the 3' to 5' direction. The dimerisation of the replicative polymerases solves the problems related to efficient synchronisation of leading and lagging strand synthesis at the replication fork, but the tight spatial-structural coupling of the replicative polymerases, while solving the difficult issue of synchronisation, creates another challenge: dimerisation of the replicative polymerases at the replication fork means that nucleotide synthesis for both strands must take place at the same spatial location, despite the fact that the lagging strand must be synthesised backwards relative to the leading strand. Lagging strand synthesis takes place after the helicase has unwound a sufficient quantity of the lagging strand, and this "sufficient quantity of the lagging strand" is polymerised in discrete nucleotide chains called Okazaki fragments. Consider the following: the helicase continuously unwinds the parental duplex, but the lagging strand must be polymerised in the opposite direction. This means that, while polymerisation of the leading strand proceeds, polymerisation of the lagging strand only occurs after enough of the lagging strand has been unwound by the helicase. At this point, the lagging strand replicative polymerase associates with the clamp and primer in order to start polymerisation. During lagging strand synthesis, the replicative polymerase sends the lagging strand back toward the replication fork. The replicative polymerase disassociates when it reaches an RNA primer. Helicase continues to unwind the parental duplex, the priming enzyme affixes another primer, and the replicative polymerase reassociates with the clamp and primer when a sufficient quantity of the lagging strand has unwound. Collectively, leading and lagging strand synthesis is referred to as being 'semidiscontinuous'. High-fidelity DNA replication Prokaryotic and eukaryotic organisms use a variety of replicative polymerases, some of which are well-characterised: DNA polymerase III DNA polymerase delta DNA polymerase epsilon DNA polymerase III This polymerase synthesizes leading and lagging strand DNA in bacteria. DNA polymerase delta This polymerase synthesizes lagging strand DNA in eukaryotes. (Thought to form an asymmetric dimer with DNA polymerase epsilon.) DNA polymerase epsilon This polymerase synthesizes leading strand DNA in eukaryotes. (Thought to form an asymmetric dimer with DNA polymerase delta.) Proof-reading and error correction Although rare, incorrect base pairing polymerisation does occur during chain elongation. (The structure and chemistry of replicative polymerases mean that errors are unlikely, but they do occur.) Many replicative polymerases contain an "error correction" mechanism in the form of a 3' to 5' exonuclease domain that is capable of removing base pairs from the exposed 3' end of the growing chain. Error correction is possible because base pair errors distort the position of the magnesium ions in the polymerisation sub-unit, and the structural-chemical distortion of the polymerisation unit effectively stalls the polymerisation process by slowing the reaction. 
Subsequently, the chemical reaction in the exonuclease unit takes over and removes nucleotides from the exposed 3' end of the growing chain. Once an error is removed, the structure and chemistry of the polymerisation unit returns to normal and DNA replication continues. Working collectively in this fashion, the polymerisation active site can be thought of as the "proof-reader", since it senses mismatches, and the exonuclease is the "editor", since it corrects the errors. Base pair errors distort the polymerase active site for between 4 and 6 nucleotides, which means, depending on the type of mismatch, there are up to six chances for error correction. The error sensing and error correction features, combined with the inherent accuracy that arises from the structure and chemistry of replicative polymerases, contribute to an error rate of approximately 1 base pair mismatch in 10^8 to 10^10 base pairs. Errors can be classified into three categories: purine-purine mismatches, pyrimidine-pyrimidine mismatches, and pyrimidine-purine mismatches. The chemistry of each mismatch varies, and so does the behaviour of the replicative polymerase with respect to its mismatch sensing activity. The replication of bacteriophage T4 DNA upon infection of E. coli is a well-studied DNA replication system. During the period of exponential DNA increase at 37°C, the rate of elongation is 749 nucleotides per second. The mutation rate during replication is 1.7 mutations per 10^8 base pairs. Thus DNA replication in this system is both very rapid and highly accurate. Primer removal and nick ligation There are two problems after leading and lagging strand synthesis: RNA remains in the duplex and there are nicks between each Okazaki fragment in the lagging duplex. These problems are solved by a variety of DNA repair enzymes that vary by organism, including: DNA polymerase I, DNA polymerase beta, RNAse H, ligase, and DNA2. This process is well-characterised in bacteria and much less well-characterised in many eukaryotes. In general, DNA repair enzymes complete the Okazaki fragments through a variety of means, including base pair excision and 5' to 3' exonuclease activity that removes the chemically unstable ribonucleotides from the lagging duplex and replaces them with stable deoxynucleotides. This process is referred to as 'maturation of Okazaki fragments', and ligase (see below) completes the final step in the maturation process. Primer removal and nick ligation can be thought of as DNA repair processes that produce a chemically stable, error-free duplex. To this point, with respect to the chemistry of an RNA-DNA duplex, in addition to the presence of uracil in the duplex, the presence of ribose (which has a reactive 2' OH) tends to make the duplex much less chemically stable than a duplex containing only deoxyribose (which has a non-reactive 2' H). DNA polymerase I DNA polymerase I is an enzyme that repairs DNA. RNAse H RNAse H is an enzyme that removes RNA from an RNA-DNA duplex. Ligase After DNA repair factors replace the ribonucleotides of the primer with deoxynucleotides, a single gap remains in the sugar-phosphate backbone between each Okazaki fragment in the lagging duplex. An enzyme called DNA ligase seals each of these gaps by forming a phosphodiester bond across the nick that separates adjacent Okazaki fragments. The structural and chemical aspects of this process, generally referred to as 'nick translation', exceed the scope of this article.
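To put the T4 figures quoted above into perspective, a back-of-the-envelope calculation is sketched below. The elongation rate (749 nucleotides per second) and mutation rate (1.7 per 10^8 base pairs) are taken from the text; the T4 genome size of roughly 169,000 base pairs is an assumed round figure used only for illustration.

# Back-of-the-envelope numbers for bacteriophage T4 replication.
# Genome size is an assumed approximate figure; rates are those quoted above.
genome_bp = 169_000
rate_nt_per_s = 749            # elongation rate at 37 degrees C (from the text)
mutation_rate = 1.7e-8         # mutations per base pair replicated (from the text)

seconds_per_fork = genome_bp / rate_nt_per_s
mutations_per_copy = genome_bp * mutation_rate

print(f"A single fork would copy the genome in about {seconds_per_fork/60:.1f} minutes")
print(f"Expected mutations per genome copy: about {mutations_per_copy:.3f}")
print(f"i.e. roughly one mutation every {1/mutations_per_copy:.0f} genome replications")

Under these assumptions a single fork copies the genome in a few minutes and introduces a mutation only once in several hundred replications, which is the sense in which the system is described as both rapid and accurate.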
Replication stress Replication stress can result in a stalled replication fork. One type of replicative stress results from DNA damage such as inter-strand cross-links (ICLs). An ICL can block replicative fork progression due to failure of DNA strand separation. In vertebrate cells, replication of an ICL-containing chromatin template triggers recruitment of more than 90 DNA repair and genome maintenance factors. These factors include proteins that perform sequential incisions and homologous recombination. History Katherine Lemon and Alan Grossman showed using Bacillus subtilis that replisomes do not move like trains along a track but DNA is actually fed through a stationary pair of replisomes located at the cell membrane. In their experiment, the replisomes in B. subtilis were each tagged with green fluorescent protein, and the location of the complex was monitored in replicating cells using fluorescence microscopy. If the replisomes moved like a train on a track, the polymerase-GFP protein would be found at different positions in each cell. Instead, however, in every replicating cell, replisomes were observed as distinct fluorescent foci located at or near midcell. Cellular DNA stained with a blue fluorescent dye (DAPI) clearly occupied most of the cytoplasmic space. References Further reading External links Molecular genetics DNA replication
Replisome
[ "Chemistry", "Biology" ]
4,673
[ "Genetics techniques", "DNA replication", "Molecular genetics", "Molecular biology" ]
4,263,420
https://en.wikipedia.org/wiki/Rafael%20Ximeno%20y%20Planes
Rafael Ximeno y Planes (1759/1760–1825) was a Spanish painter and draughtsman. Biography He was the son of a silversmith and first learned the painter's profession from his maternal uncle Luis Planes. Later he studied at the Real Academia de San Fernando in Madrid thanks to a scholarship. He also studied in Rome in 1783. In 1786 he was appointed vice-director (teniente director) of the Real Academia de San Carlos of Valencia, and in 1793 he moved to Mexico City as the director of painting at the Academia de San Carlos. In addition to academic canvases, Ximeno also created the frescos in the churches of Jesús María and La Profesa, in Mexico City. His fresco ‘The Assumption of the Virgin' can be found in the dome of Catedral Metropolitana de Ciudad de México. Some of his work also appears in the Basílica de la Asunción, in the town of Cieza, Spain. Throughout his career, he made drawings which were preparatory for prints. Notable among these are his illustrations for the very popular first Spanish translation of Robinson Crusoe, by Tomás de Iriarte (1750–91), published in Madrid in 1789 (which is in fact a translation not of Daniel Defoe's original text, but of Joachim Heinrich Campe's adaptation, published in Hamburg 1779–80). Four of these preparatory drawings by Rafael Ximeno y Planes are preserved in the British Library. There are also drawings by José Juan Camarón y Meliá (1760–1819) for this edition of Robinson Crusoe in the British Library. More prints after drawings by Ximeno y Planes appear as illustrations in two editions of Don Quixote, one published by the Real Academia de la Lengua in Madrid (now Real Academia Española) and printed by Ibarra in 1780, and another published between 1797 and 1798. In 1779 he illustrated an edition of Crónica de Juan II by Hernando del Pulgar. The artist is also known for the engraved portraits of Charles IV of Spain, Francisco de Quevedo and Pedro Calderón de la Barca, appearing in the series Retratos de Españoles Ilustres, for which he drew the designs. Paintings at the Museo Nacional de Arte, Mexico City References Further reading VV.AA., La col·lecció Raimon Casellas, exhibition catalog, Palacio Nacional de Montjuic, Publicacions del Mnac/Museo del Prado (1992) Clara Isabel Senent del Caño, Rafael Ximeno y Planes. Academicismo en la Nueva España, doctoral thesis, University of Valencia (2017) External links 1759 births 1825 deaths 18th-century Spanish painters 18th-century Spanish male artists Spanish male painters 19th-century Spanish painters 19th-century Spanish male artists Artists from Valencia Draughtsmen 18th-century Mexican painters Immigrants to New Spain
Rafael Ximeno y Planes
[ "Engineering" ]
598
[ "Design engineering", "Draughtsmen" ]
4,263,491
https://en.wikipedia.org/wiki/Cavity%20method
The cavity method is a mathematical method presented by Marc Mézard, Giorgio Parisi and Miguel Angel Virasoro in 1987 to derive and solve some mean-field-type models in statistical physics, specially adapted to disordered systems. The method has been used to compute properties of ground states in many condensed matter and optimization problems. Initially invented to deal with the Sherrington–Kirkpatrick model of spin glasses, the cavity method has shown wider applicability. It can be regarded as a generalization of the Bethe–Peierls iterative method from tree-like graphs to graphs with loops that are not too short. The cavity method can solve many problems also solvable using the replica trick, but it has the advantage of being more intuitive and less mathematically subtle than replica-based methods. The cavity method proceeds by perturbing a large system with the addition of a non-thermodynamic number of additional constituents and approximating the response of the entire system perturbatively. Applying the resulting approximation, along with an assumption that certain observables are self-averaging, yields a self-consistency equation for the statistics of the added constituents. The added constituents are then considered to be the mean-field variables. The cavity method has proved useful in solving optimization problems such as k-satisfiability and graph coloring. It has not only yielded average-case predictions of ground-state energies but has also inspired algorithmic methods. See also The cavity method originated in the context of statistical physics, but is also closely related to methods from other areas such as belief propagation. References Further reading Condensed matter physics
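A minimal sketch of the kind of self-consistency equation the cavity method produces is given below, for a ferromagnetic Ising model on a random c-regular graph at the replica-symmetric level, where homogeneity reduces the cavity equations to a single fixed-point equation for the cavity field. The parameter values (temperature, coupling, degree, external field) are illustrative assumptions, not taken from the text.

# Replica-symmetric cavity self-consistency for a ferromagnetic Ising model on a
# random c-regular graph. Parameter values are illustrative assumptions.
import math

beta, J, H, c = 0.8, 1.0, 0.001, 3   # inverse temperature, coupling, small field, degree

def u(h):
    # Cavity bias passed along an edge, given the incoming cavity field h.
    return math.atanh(math.tanh(beta * J) * math.tanh(beta * h)) / beta

h = 0.5                               # initial guess for the cavity field
for _ in range(1000):
    h = H + (c - 1) * u(h)            # fixed-point iteration of the self-consistency equation

m = math.tanh(beta * (H + c * u(h)))  # local magnetization once all c neighbours are included
print(f"cavity field h = {h:.4f}, magnetization m = {m:.4f}")

With these assumed values the system sits in the ferromagnetic phase ((c - 1) tanh(beta J) > 1), so the iteration converges to a non-zero cavity field and magnetization; at higher temperature the same iteration collapses to the paramagnetic solution.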
Cavity method
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
334
[ "Materials science stubs", "Phases of matter", "Materials science", "Condensed matter physics", "Condensed matter stubs", "Matter" ]
4,264,157
https://en.wikipedia.org/wiki/Burning%20Ship%20fractal
The Burning Ship fractal, first described and created by Michael Michelitsch and Otto E. Rössler in 1992, is generated by iterating the function z_{n+1} = (|Re(z_n)| + i |Im(z_n)|)^2 + c in the complex plane; for each value of c the resulting sequence will either escape or remain bounded. The difference between this calculation and that for the Mandelbrot set is that the real and imaginary components are set to their respective absolute values before squaring at each iteration. The mapping is non-analytic because its real and imaginary parts do not obey the Cauchy–Riemann equations. Virtually all images of the Burning Ship fractal are reflected vertically for aesthetic purposes, and some are also reflected horizontally. Implementation The pseudocode implementation below hardcodes the complex operations for z. Consider implementing complex number operations to allow for more dynamic and reusable code.
for each pixel (x, y) on the screen, do:
    x := scaled x coordinate of pixel (scaled to lie in the Mandelbrot X scale (-2.5, 1))
    y := scaled y coordinate of pixel (scaled to lie in the Mandelbrot Y scale (-1, 1))
    zx := x // zx represents the real part of z
    zy := y // zy represents the imaginary part of z
    iteration := 0
    max_iteration := 100
    while (zx*zx + zy*zy < 4 and iteration < max_iteration) do
        xtemp := zx*zx - zy*zy + x
        zy := abs(2*zx*zy) + y // abs returns the absolute value
        zx := xtemp
        iteration := iteration + 1
    if iteration = max_iteration then // Belongs to the set
        return INSIDE_COLOR
    return (max_iteration / iteration) × color // Assign color to pixel outside the set
Gallery References External links About properties and symmetries of the Burning Ship fractal, featured by Theory.org Burning Ship Fractal, Description and C source code. Burning Ship with its Mset of higher powers and Julia Sets Burningship, Video, Fractal webpage includes the first representations and the original paper cited above on the Burning Ship fractal. 3D representations of the Burning Ship fractal FractalTS Mandelbrot, Burning ship and corresponding Julia set generator. Fractals Articles with example pseudocode
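The pseudocode above can be turned into a small self-contained program; the sketch below is one possible NumPy version of the same escape-time loop. The resolution, iteration limit and viewing window are arbitrary choices (the window matches the ranges in the pseudocode), not part of the fractal's definition.

# Escape-time rendering of the Burning Ship fractal, vectorised with NumPy.
# Window and resolution are arbitrary; the window matches the pseudocode above.
import numpy as np

width, height, max_iter = 600, 400, 100
xs = np.linspace(-2.5, 1.0, width)     # real-axis range
ys = np.linspace(-1.0, 1.0, height)    # imaginary-axis range
cx, cy = np.meshgrid(xs, ys)

zx, zy = cx.copy(), cy.copy()          # the pseudocode starts the orbit at c
counts = np.zeros(cx.shape, dtype=int)

for _ in range(max_iter):
    inside = zx * zx + zy * zy < 4.0   # points whose orbit has not escaped yet
    counts[inside] += 1
    # Burning Ship step: take absolute values of both components before squaring
    zx_new = zx * zx - zy * zy + cx
    zy_new = np.abs(2.0 * zx * zy) + cy
    # freeze escaped points at a value outside the bailout radius to avoid overflow
    zx = np.where(inside, zx_new, 2.0)
    zy = np.where(inside, zy_new, 2.0)

# 'counts' now holds escape times; pixels with counts == max_iter are treated as
# belonging to the set. Optional plotting:
# import matplotlib.pyplot as plt; plt.imshow(counts, cmap="hot", origin="lower"); plt.show()

Because (|a| + i|b|)^2 = a^2 - b^2 + i|2ab|, the update of zx and zy above is exactly the hardcoded step used in the pseudocode.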
Burning Ship fractal
[ "Mathematics" ]
493
[ "Mathematical analysis", "Functions and mappings", "Mathematical objects", "Fractals", "Mathematical relations" ]
4,264,274
https://en.wikipedia.org/wiki/Methylation%20specific%20oligonucleotide%20microarray
Methylation specific oligonucleotide microarray, also known as MSO microarray, was developed as a technique to map epigenetic methylation changes in DNA of cancer cells. The general process starts with modification of DNA with bisulfite, specifically to convert unmethylated cytosine in CpG sites to uracil, while leaving methylated cytosines untouched. The modified DNA region of interest is amplified via PCR and during the process, uracils are converted to thymine. The amplicons are labelled with a fluorescent dye and hybridized to oligonucleotide probes that are fixed to a glass slide. The probes differentially bind to cytosine and thymine residues, which ultimately allows discrimination between methylated and unmethylated CpG sites, respectively. A calibration curve is produced and compared with the microarray results of the amplified DNA samples. This allows a general quantification of the proportion of methylation present in the region of interest. This microarray technique was developed by Tim Hui-Ming Huang and his laboratory and was officially published in 2002. Implications for cancer research Cancer cells often develop atypical methylation patterns, at CpG sites in promoters of tumour suppressor genes. High levels of methylation at a promoter leads to downregulation of the corresponding genes and is characteristic of carcinogenesis. It is one of the most consistent changes observed in early stage tumour cells. Methylation specific oligonucleotide microarray allows for the high resolution and high throughput detection of numerous methylation events on multiple gene promoters. Therefore, this technique can be used to detect aberrant methylation in tumour suppressor promoters at an early stage and has been used in gastric and colon cancers and multiple others. Because it allows one to detect presence of atypical methylations in cancer cells, it can also be used to reveal the major cause behind the malignancy, whether its main contributor is mutations on chromosomes or epigenetic modifications, as well as which tumour suppressor genes' transcription levels are affected. An interesting use of this microarray includes specific classification of cancers based on the methylation patterns alone, such as differentiating between classes of leukemia, suggesting that different classes of cancer show relatively unique methylation patterns. This technique has also been proposed to monitor cancer treatments that involve modifying the methylation patterns in mutant cancer cells. References External links Resources, information and specific protocols for DNA Methylation Analysis Software for DNA Methylation Analysis Cancer research Microarrays
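The bisulfite-conversion step described above can be mimicked in a few lines of code. The sketch below is a toy model only: the function name and the example sequence are invented for illustration, and real bisulfite chemistry (incomplete conversion, strand specificity) is ignored. Unmethylated cytosines are converted as they would ultimately read after PCR (C to U, amplified as T), while methylated cytosines are left intact.

# Toy model of bisulfite conversion followed by PCR read-out:
# unmethylated C -> U -> read as T; methylated C stays C.
def bisulfite_converted_read(seq, methylated_positions):
    # seq: DNA string (5'->3'); methylated_positions: set of 0-based indices of
    # methylated cytosines (typically cytosines in CpG context).
    out = []
    for i, base in enumerate(seq.upper()):
        if base == "C" and i not in methylated_positions:
            out.append("T")          # unmethylated C deaminated to U, amplified as T
        else:
            out.append(base)         # methylated C (and all other bases) unchanged
    return "".join(out)

# Example: the CpG cytosine at index 2 is methylated, the cytosines at 5 and 8 are not.
print(bisulfite_converted_read("ATCGACGTCA", {2}))   # -> "ATCGATGTTA"

The sequence differences produced in this way (C versus T at former CpG sites) are what the C- and T-specific oligonucleotide probes on the array discriminate.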
Methylation specific oligonucleotide microarray
[ "Chemistry", "Materials_science", "Biology" ]
526
[ "Biochemistry methods", "Genetics techniques", "Microtechnology", "Microarrays", "Bioinformatics", "Molecular biology techniques" ]
4,264,467
https://en.wikipedia.org/wiki/Wood%27s%20glass
Wood's glass is an optical filter glass invented in 1903 by American physicist Robert Williams Wood (1868–1955), which allows ultraviolet and infrared light to pass through, while blocking most visible light. History Wood's glass was developed as a light filter used in communications during World War I. The glass filter worked both in infrared daylight communication and ultraviolet night communications by removing the visible components of a light beam, leaving only the "invisible radiation" as a signal beam. Wood's glass was commonly used to form the envelope for fluorescent and incandescent ultraviolet bulbs ("black lights"). In recent years, due to its disadvantages, other filter materials have largely replaced it. Composition Wood's glass is special barium-sodium-silicate glass incorporating about 9% nickel oxide. It is a very deep violet-blue glass, opaque to all visible light rays except longest red and shortest violet. It is quite transparent in the violet/ultraviolet in a band between 320 and 400 nanometres with a peak at 365 nanometres, and a fairly broad range of infrared and the longest, least visible red wavelengths. Properties and uses Wood's glass has lower mechanical strength and higher thermal expansion than commonly used glasses, making it more vulnerable to thermal shocks and mechanical damage. The nickel and barium oxides are also chemically reactive, with tendency to slowly form a layer of hydroxides and carbonates in contact with atmospheric moisture and carbon dioxide. The susceptibility to thermal shock makes manufacture of hermetically sealed glass bulbs difficult and costly. Therefore, most contemporary "black-light" bulbs are made of structurally more suitable glass with only a layer of a UV-filtering enamel on its surface; such bulbs, however, pass much more visible light, appearing brighter to the eye. Due to manufacturing difficulties, Wood's glass is now more commonly used in standalone flat or dome-shaped filters, instead of being the material of the light bulb. With prolonged exposure to ultraviolet radiation, Wood's glass undergoes solarization, gradually losing transparency for UV. Photographic filters for ultraviolet photography, notably the Kodak Wratten 18A and 18B, are based on Wood's glass. Health effects Bulbs made of Wood's glass are potentially hazardous in comparison with those made of enameled glass, since the reduced visible light output may cause observers to be exposed to unsafe levels of UV, because the source appears dim. The low output of black lights is not considered sufficient to cause DNA damage or cellular mutations, but excessive exposure to UV can cause temporary or permanent damage to the eye. See also Black light Dichroic filter Wood's lamp References Further reading Optical filters Glass trademarks and brands Glass compositions American inventions
Wood's glass
[ "Chemistry" ]
553
[ "Glass compositions", "Glass chemistry", "Optical filters", "Filters" ]
4,264,509
https://en.wikipedia.org/wiki/Ovoid%20%28projective%20geometry%29
In projective geometry an ovoid is a sphere-like pointset (surface) in a projective space of dimension d ≥ 3. Simple examples in a real projective space are hyperspheres (quadrics). The essential geometric properties of an ovoid O are: Any line intersects O in at most 2 points, The tangents at a point cover a hyperplane (and nothing more), and O contains no lines. Property 2) excludes degenerated cases (cones,...). Property 3) excludes ruled surfaces (hyperboloids of one sheet, ...). An ovoid is the spatial analog of an oval in a projective plane. An ovoid is a special type of a quadratic set. Ovoids play an essential role in constructing examples of Möbius planes and higher dimensional Möbius geometries. Definition of an ovoid In a projective space of dimension d ≥ 3 a set O of points is called an ovoid, if (1) Any line g meets O in at most 2 points. In the case of |g ∩ O| = 0 the line is called a passing (or exterior) line, if |g ∩ O| = 1 the line is a tangent line, and if |g ∩ O| = 2 the line is a secant line. (2) At any point P of O the tangent lines through P cover a hyperplane, the tangent hyperplane (i.e., a projective subspace of dimension d − 1). (3) O contains no lines. From the viewpoint of the hyperplane sections, an ovoid is a rather homogeneous object, because For an ovoid O and a hyperplane e, which contains at least two points of O, the subset e ∩ O is an ovoid (or an oval, if d = 3) within the hyperplane e. For finite projective spaces of dimension d ≥ 3 (i.e., the point set is finite, the space is pappian), the following result is true: If O is an ovoid in a finite projective space of dimension d ≥ 3, then d = 3. (In the finite case, ovoids exist only in 3-dimensional spaces.) In a finite projective space of order n > 2 (i.e. any line contains exactly n + 1 points) and dimension d = 3 any pointset O is an ovoid if and only if |O| = n^2 + 1 and no three points are collinear (on a common line). Replacing the word projective in the definition of an ovoid by affine, gives the definition of an affine ovoid. If for a (projective) ovoid there is a suitable hyperplane H not intersecting it, one can call this hyperplane the hyperplane at infinity and the ovoid becomes an affine ovoid in the affine space corresponding to H. Also, any affine ovoid can be considered a projective ovoid in the projective closure (adding a hyperplane at infinity) of the affine space. Examples In real projective space (inhomogeneous representation): x1^2 + x2^2 + ... + xd^2 = 1 (hypersphere) and xd = x1^2 + ... + x(d-1)^2. These two examples are quadrics and are projectively equivalent. Simple examples, which are not quadrics, can be obtained by the following constructions: (a) Glue one half of a hypersphere to a suitable hyperellipsoid in a smooth way. (b) In the first two examples replace the expression x1^2 by x1^4. Remark: The real examples can not be converted into the complex case (projective space over the complex numbers). In a complex projective space of dimension d ≥ 3 there are no ovoidal quadrics, because in that case any non degenerated quadric contains lines. But the following method guarantees many non quadric ovoids: For any non-finite projective space the existence of ovoids can be proven using transfinite induction. Finite examples Any ovoid in a finite projective space of dimension d = 3 over a field of odd characteristic is a quadric. The last result can not be extended to even characteristic, because of the following non-quadric examples: For q = 2^(2k+1), k ≥ 1, and the automorphism σ : x → x^(2^(k+1)) of GF(q), a suitable pointset (given explicitly in inhomogeneous coordinates) is an ovoid in the 3-dimensional projective space over GF(q) that is not a quadric. It is called the Tits-Suzuki-ovoid. Criteria for an ovoid to be a quadric An ovoidal quadric has many symmetries.
In particular: Let be an ovoid in a projective space of dimension and a hyperplane. If the ovoid is symmetric to any point (i.e. there is an involutory perspectivity with center which leaves invariant), then is pappian and a quadric. An ovoid in a projective space is a quadric, if the group of projectivities, which leave invariant operates 3-transitively on , i.e. for two triples there exists a projectivity with . In the finite case one gets from Segre's theorem: Let be an ovoid in a finite 3-dimensional desarguesian projective space of odd order, then is pappian and is a quadric. Generalization: semi ovoid Removing condition (1) from the definition of an ovoid results in the definition of a semi-ovoid: A point set of a projective space is called a semi-ovoid if the following conditions hold: (SO1) For any point the tangents through point exactly cover a hyperplane. (SO2) contains no lines. A semi ovoid is a special semi-quadratic set which is a generalization of a quadratic set. The essential difference between a semi-quadratic set and a quadratic set is the fact, that there can be lines which have 3 points in common with the set and the lines are not contained in the set. Examples of semi-ovoids are the sets of isotropic points of an hermitian form. They are called hermitian quadrics. As for ovoids in literature there are criteria, which make a semi-ovoid to a hermitian quadric. See, for example. Semi-ovoids are used in the construction of examples of Möbius geometries. See also Ovoid (polar space) Möbius plane Notes References Further reading External links E. Hartmann: Planar Circle Geometries, an Introduction to Moebius-, Laguerre- and Minkowski Planes. Skript, TH Darmstadt (PDF; 891 kB), S. 121-123. Projective geometry Incidence geometry
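The finite characterisation stated earlier (in PG(3, q) an ovoid is exactly a set of q^2 + 1 points, no three collinear) can be checked by brute force in a small case. The sketch below verifies it for an elliptic quadric over GF(3); the particular quadratic form x0*x1 = x2^2 + x3^2 is one standard choice made for illustration, not dictated by the text.

# Brute-force check that the elliptic quadric x0*x1 = x2^2 + x3^2 in PG(3, 3)
# has q^2 + 1 = 10 points, no three of which are collinear.
from itertools import product, combinations

q = 3

def on_quadric(p):
    x0, x1, x2, x3 = p
    return (x0 * x1 - x2 * x2 - x3 * x3) % q == 0

def normalize(p):
    # Scale so that the first nonzero coordinate is 1 (canonical projective representative).
    for c in p:
        if c % q != 0:
            inv = pow(c, q - 2, q)               # inverse in the prime field GF(q)
            return tuple((inv * x) % q for x in p)
    return None                                   # the zero vector is not a projective point

points = {normalize(p) for p in product(range(q), repeat=4)
          if any(p) and on_quadric(p)}
print(len(points))                                # expected: q**2 + 1 == 10

def collinear(a, b, c):
    # c is on the line through a and b iff it equals some combination s*a + t*b.
    line = {normalize(tuple((s * a[i] + t * b[i]) % q for i in range(4)))
            for s in range(q) for t in range(q) if s or t}
    return c in line

print(any(collinear(a, b, c) for a, b, c in combinations(points, 3)))  # expected: False

Running this prints 10 and False, matching the characterisation; the same brute force applied to a set of the wrong size, or one containing three collinear points, would fail one of the two checks.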
Ovoid (projective geometry)
[ "Mathematics" ]
1,303
[ "Incidence geometry", "Combinatorics" ]
4,264,592
https://en.wikipedia.org/wiki/2-valued%20morphism
In mathematics, a 2-valued morphism is a homomorphism that sends a Boolean algebra B onto the two-element Boolean algebra 2 = {0,1}. It is essentially the same thing as an ultrafilter on B, and, in a different way, also the same things as a maximal ideal of B. 2-valued morphisms have also been proposed as a tool for unifying the language of physics. 2-valued morphisms, ultrafilters and maximal ideals Suppose B is a Boolean algebra. If s : B → 2 is a 2-valued morphism, then the set of elements of B that are sent to 1 is an ultrafilter on B, and the set of elements of B that are sent to 0 is a maximal ideal of B. If U is an ultrafilter on B, then the complement of U is a maximal ideal of B, and there is exactly one 2-valued morphism s : B → 2 that sends the ultrafilter to 1 and the maximal ideal to 0. If M is a maximal ideal of B, then the complement of M is an ultrafilter on B, and there is exactly one 2-valued morphism s : B → 2 that sends the ultrafilter to 1 and the maximal ideal to 0. Physics If the elements of B are viewed as "propositions about some object", then a 2-valued morphism on B can be interpreted as representing a particular "state of that object", namely the one where the propositions of B which are mapped to 1 are true, and the propositions mapped to 0 are false. Since the morphism conserves the Boolean operators (negation, conjunction, etc.), the set of true propositions will not be inconsistent but will correspond to a particular maximal conjunction of propositions, denoting the (atomic) state. (The true propositions form an ultrafilter, the false propositions form a maximal ideal, as mentioned above.) The transition between two states s1 and s2 of B, represented by 2-valued morphisms, can then be represented by an automorphism f from B to B, such that s2 o f = s1. The possible states of different objects defined in this way can be conceived as representing potential events. The set of events can then be structured in the same way as invariance of causal structure, or local-to-global causal connections or even formal properties of global causal connections. The morphisms between (non-trivial) objects could be viewed as representing causal connections leading from one event to another one. For example, the morphism f above leads form event s1 to event s2. The sequences or "paths" of morphisms for which there is no inverse morphism, could then be interpreted as defining horismotic or chronological precedence relations. These relations would then determine a temporal order, a topology, and possibly a metric. According to, "A minimal realization of such a relationally determined space-time structure can be found". In this model there are, however, no explicit distinctions. This is equivalent to a model where each object is characterized by only one distinction: (presence, absence) or (existence, non-existence) of an event. In this manner, "the 'arrows' or the 'structural language' can then be interpreted as morphisms which conserve this unique distinction". If more than one distinction is considered, however, the model becomes much more complex, and the interpretation of distinction states as events, or morphisms as processes, is much less straightforward. References External links "Representation and Change - A metarepresentational framework for the foundations of physical and cognitive science" Boolean algebra
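The correspondence described above can be made concrete on a very small Boolean algebra. The sketch below builds the algebra of subsets of a two-element set and checks that "evaluation at a point" (sending a subset to 1 exactly when it contains a chosen element) preserves the Boolean operations, and that the preimage of 1 is an ultrafilter while the preimage of 0 is a maximal ideal. The underlying set and the chosen point are arbitrary illustrative choices.

# The Boolean algebra B of subsets of {a, b}, and the 2-valued morphism
# s(A) = 1 iff the chosen point 'a' belongs to A (evaluation at a point).
from itertools import chain, combinations

universe = frozenset({"a", "b"})
B = [frozenset(s) for s in chain.from_iterable(
        combinations(sorted(universe), r) for r in range(len(universe) + 1))]

point = "a"
s = lambda A: 1 if point in A else 0

# s is a homomorphism onto {0, 1}: it preserves meet, join and complement.
for A in B:
    for C in B:
        assert s(A & C) == s(A) & s(C)          # conjunction
        assert s(A | C) == s(A) | s(C)          # disjunction
    assert s(universe - A) == 1 - s(A)          # negation
assert s(universe) == 1 and s(frozenset()) == 0

ultrafilter   = [A for A in B if s(A) == 1]     # everything sent to 1
maximal_ideal = [A for A in B if s(A) == 0]     # everything sent to 0 (its complement)
print(sorted(map(set, ultrafilter), key=len))   # [{'a'}, {'a', 'b'}]
print(sorted(map(set, maximal_ideal), key=len)) # [set(), {'b'}]

Here the ultrafilter is the principal ultrafilter generated by {a}, and the maximal ideal is its complement in B, exactly as described in the text.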
2-valued morphism
[ "Mathematics" ]
778
[ "Boolean algebra", "Fields of abstract algebra", "Mathematical logic" ]
4,264,683
https://en.wikipedia.org/wiki/Ian%20Ridpath
Ian William Ridpath (born 1 May 1947, in Ilford, Essex) is an English science writer and broadcaster best known as a popularizer of astronomy and a biographer of constellation history. As a UFO sceptic, he investigated and explained the Rendlesham Forest Incident of December 1980. Life and career Ridpath attended Beal Grammar School in Ilford where he wrote astronomy articles for the school magazine. Before entering publishing he was an assistant in the lunar research group at the University of London Observatory, Mill Hill. He now lives in Brentford, Middlesex. He is editor of the Oxford Dictionary of Astronomy and Norton's Star Atlas, and author of observing guides such as The Monthly Sky Guide and the Collins Stars and Planets Guide (the latter two with charts by Wil Tirion, and both continuously in print for over 30 years). His other books include Star Tales, about the origins and mythology of the constellations, and the children's book Exploring Stars and Planets, now in its fifth edition. He is a contributor to the Dorling Kindersley encyclopedia Universe, and a former editor of the UK quarterly magazine Popular Astronomy. He is also currently editor of The Antiquarian Astronomer, the journal of the Society for the History of Astronomy. His early books on the subject of extraterrestrial life and interstellar travel – Worlds Beyond (1975), Messages from the Stars (1978) and Life off Earth (1983) – led him to investigate UFOs. But he became a sceptic, a position reinforced by his findings about the Rendlesham case. He was one of the first to offer an explanation for the so-called Sirius Mystery involving the supposedly advanced astronomical knowledge of the Dogon people of Mali, west Africa. He was a space expert for LBC Radio from the 1970s into the 1990s, and was also seen on BBC TV's Breakfast Time programme in its early years. It was for Breakfast Time that he first investigated the Rendlesham Forest UFO case. His star show Planet Earth ran at the London Planetarium from February 1993 to January 1995; it was the last show to use the planetarium's original Zeiss optical projector. Awards In 2012 he received the Astronomical Society of the Pacific's Klumpke-Roberts Award for outstanding contributions to the public understanding and appreciation of astronomy. In 1990 he won an award in The Aventis Prizes for Science Books (in the under-8 children's books category) for The Giant Book of Space. Other interests From 1993 to 1995 he was Race Director of the Polytechnic Marathon from Windsor to Chiswick, Britain's oldest marathon race which traced its origins back to the 1908 Olympic Marathon. In that role, he was involved in a public controversy over the ownership of the Sporting Life marathon trophy, originally awarded to winners of the Polytechnic Marathon, which was claimed in 1994 by the London Marathon. The Polytechnic Marathon was last held in 1996. A keen astro-philatelist, he is chairman of the Astro Space Stamp Society. Selected bibliography Stars and Planets Guide. Collins (UK). Princeton University Press (US). The Monthly Sky Guide. Dover. Astronomy: A Visual Guide. Dorling Kindersley. Gem Stars. Collins. Times Universe. Times Books. Exploring Stars and Planets. Philip's. Star Tales. Lutterworth. Oxford Dictionary of Astronomy (ed.). Oxford University Press. Norton's Star Atlas and Reference Handbook (ed.). Dutton.
References External links Personal website CV 1947 births Amateur astronomers English sceptics English science writers Fellows of the Royal Astronomical Society Living people People from Ilford UFO skeptics
Ian Ridpath
[ "Astronomy" ]
747
[ "Astronomers", "Amateur astronomers" ]
4,265,026
https://en.wikipedia.org/wiki/Kartlis%20Deda
Kartlis Deda (Mother of Kartvel or Mother of Georgians) is a monument in Georgia's capital Tbilisi. The statue was erected on the top of Sololaki hill in 1958, the year Tbilisi celebrated its 1500th anniversary. Prominent Georgian sculptor Elguja Amashukeli designed the twenty-metre aluminium figure of a woman in Georgian national dress. Symbolism She symbolizes the Georgian national character: in her left hand she holds a bowl of wine to greet those who come as friends, and in her right hand is a sword for those who come as enemies. History In 1966 Elguja Amashukeli was awarded the Shota Rustaveli State Prize for this sculpture. He called the statue "Capital", and it commonly became known as "Mother of Kartvel". The accessories of the sculpture, the cup of wine and the sword, are an expression of the history of Tbilisi: the endless battles with enemies and the welcoming of friendly guests. The original statue erected on Sololaki Hill in 1958 was a wooden allegorical figure intended as a temporary decoration of the capital. It was later decided to make it permanent, and the wooden surface was clad in aluminium in 1963 to limit environmental damage. In 1997, the old statue was replaced with a new one. Gallery See also List of tallest statues Mother Armenia Mother Ukraine References Colossal statues Monuments and memorials in Tbilisi National symbols of Georgia (country) 1958 sculptures Aluminium sculptures Georgian words and phrases National personifications
Kartlis Deda
[ "Physics", "Mathematics" ]
302
[ "Quantity", "Colossal statues", "Physical quantities", "Size" ]
4,265,190
https://en.wikipedia.org/wiki/Kutta%20condition
The Kutta condition is a principle in steady-flow fluid dynamics, especially aerodynamics, that is applicable to solid bodies with sharp corners, such as the trailing edges of airfoils. It is named for German mathematician and aerodynamicist Martin Kutta. Kuethe and Schetzer state the Kutta condition as follows:A body with a sharp trailing edge which is moving through a fluid will create about itself a circulation of sufficient strength to hold the rear stagnation point at the trailing edge. In fluid flow around a body with a sharp corner, the Kutta condition refers to the flow pattern in which fluid approaches the corner from above and below, meets at the corner, and then flows away from the body. None of the fluid flows around the sharp corner. The Kutta condition is significant when using the Kutta–Joukowski theorem to calculate the lift created by an airfoil with a sharp trailing edge. The value of circulation of the flow around the airfoil must be that value which would cause the Kutta condition to exist. The Kutta condition applied to airfoils Applying 2-D potential flow, if an airfoil with a sharp trailing edge begins to move with an angle of attack through air, the two stagnation points are initially located on the underside near the leading edge and on the topside near the trailing edge, just as with the cylinder. As the air passing the underside of the airfoil reaches the trailing edge it must flow around the trailing edge and along the topside of the airfoil toward the stagnation point on the topside of the airfoil. Vortex flow occurs at the trailing edge and, because the radius of the sharp trailing edge is zero, the speed of the air around the trailing edge should be infinitely fast. Though real fluids cannot move at infinite speed, they can move very fast. The high airspeed around the trailing edge causes strong viscous forces to act on the air adjacent to the trailing edge of the airfoil and the result is that a strong vortex accumulates on the topside of the airfoil, near the trailing edge. As the airfoil begins to move it carries this vortex, known as the starting vortex, along with it. Pioneering aerodynamicists were able to photograph starting vortices in liquids to confirm their existence. The vorticity in the starting vortex is matched by the vorticity in the bound vortex in the airfoil, in accordance with Kelvin's circulation theorem. As the vorticity in the starting vortex progressively increases the vorticity in the bound vortex also progressively increases and causes the flow over the topside of the airfoil to increase in speed. The starting vortex is soon cast off the airfoil and is left behind, spinning in the air where the airfoil left it. The stagnation point on the topside of the airfoil then moves until it reaches the trailing edge. The starting vortex eventually dissipates due to viscous forces. As the airfoil continues on its way, there is a stagnation point at the trailing edge. The flow over the topside conforms to the upper surface of the airfoil. The flow over both the topside and the underside join up at the trailing edge and leave the airfoil travelling parallel to one another. This is known as the Kutta condition. When an airfoil is moving with an angle of attack, the starting vortex has been cast off and the Kutta condition has become established, there is a finite circulation of the air around the airfoil. The airfoil is generating lift, and the magnitude of the lift is given by the Kutta–Joukowski theorem. 
One of the consequences of the Kutta condition is that the airflow over the topside of the airfoil travels much faster than the airflow under the underside. A parcel of air which approaches the airfoil along the stagnation streamline is cleaved in two at the stagnation point, one half traveling over the topside and the other half traveling along the underside. The flow over the topside is so much faster than the flow along the underside that these two halves never meet again. They do not even re-join in the wake long after the airfoil has passed. There is a popular fallacy called the equal transit-time fallacy that claims the two halves rejoin at the trailing edge of the airfoil. This has been understood as a fallacy since Martin Kutta's discovery. Whenever the speed or angle of attack of an airfoil changes there is a weak starting vortex which begins to form, either above or below the trailing edge. This weak starting vortex causes the Kutta condition to be re-established for the new speed or angle of attack. As a result, the circulation around the airfoil changes and so too does the lift in response to the changed speed or angle of attack."This starting vortex formation occurs not only when a wing is first set into motion, but also when the circulation around the wing is subsequently changed for any reason whatever." Millikan, Clark B. (1941), Aerodynamics of the Airplane, p.65, John Wiley & Sons, New York The Kutta condition gives some insight into why airfoils have sharp trailing edges, even though this is undesirable from structural and manufacturing viewpoints. In irrotational, inviscid, incompressible flow (potential flow) over an airfoil, the Kutta condition can be implemented by calculating the stream function over the airfoil surface. The same Kutta condition implementation method is also used for solving two dimensional subsonic (subcritical) inviscid steady compressible flows over isolated airfoils. The viscous correction for the Kutta condition can be found in some of the recent studies. The Kutta condition in aerodynamics The Kutta condition allows an aerodynamicist to incorporate a significant effect of viscosity while neglecting viscous effects in the underlying conservation of momentum equation. It is important in the practical calculation of lift on a wing. The equations of conservation of mass and conservation of momentum applied to an inviscid fluid flow, such as a potential flow, around a solid body result in an infinite number of valid solutions. One way to choose the correct solution would be to apply the viscous equations, in the form of the Navier–Stokes equations. However, these normally do not result in a closed-form solution. The Kutta condition is an alternative method of incorporating some aspects of viscous effects, while neglecting others, such as skin friction and some other boundary layer effects. The condition can be expressed in a number of ways. One is that there cannot be an infinite change in velocity at the trailing edge. Although an inviscid fluid can have abrupt changes in velocity, in reality viscosity smooths out sharp velocity changes. If the trailing edge has a non-zero angle, the flow velocity there must be zero. At a cusped trailing edge, however, the velocity can be non-zero although it must still be identical above and below the airfoil. Another formulation is that the pressure must be continuous at the trailing edge. The Kutta condition does not apply to unsteady flow. 
Experimental observations show that the stagnation point (one of two points on the surface of an airfoil where the flow speed is zero) begins on the top surface of an airfoil (assuming positive effective angle of attack) as flow accelerates from zero, and moves backwards as the flow accelerates. Once the initial transient effects have died out, the stagnation point is at the trailing edge as required by the Kutta condition. Mathematically, the Kutta condition enforces a specific choice among the infinite allowed values of circulation. See also Kutta–Joukowski theorem Horseshoe vortex Starting vortex References L. J. Clancy (1975) Aerodynamics, Pitman Publishing Limited, London. "Flow around an airfoil" at the University of Geneva "Kutta condition for lifting flows" by Praveen Chandrashekar of the National Aerospace Laboratories of India A.M. Kuethe and J.D. Schetzer, Foundations of Aerodynamics, John Wiley & Sons, Inc. New York (1959) Massey, B.S. Mechanics of Fluids. Section 9.10, 2nd Edition. Van Nostrand Reinhold Co. London (1970) Library of Congress Catalog Card No. 67-25005 C. Xu, "Kutta condition for sharp edge flows", Mechanics Research Communications 25(4):415-420 (1998). E.L. Houghton and P.W. Carpenter, Aerodynamics for Engineering Students, 5th edition, pp. 160-162, Butterworth-Heinemann, An imprint of Elsevier Science, Jordan Hill, Oxford (2003) Notes Fluid dynamics Aircraft aerodynamics
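As a numerical illustration of how the Kutta condition selects one circulation out of the infinitely many allowed by inviscid flow, the sketch below uses the classical thin-airfoil result for a flat plate, where enforcing the condition at the trailing edge gives the circulation Gamma = pi * U * c * sin(alpha), and then applies the Kutta–Joukowski theorem L' = rho * U * Gamma. The flow values chosen are illustrative assumptions, not taken from the references above.

# Thin flat plate at angle of attack: the Kutta condition fixes the circulation
# Gamma = pi * U * c * sin(alpha); Kutta-Joukowski then gives the lift per unit span.
# The flow values below are illustrative assumptions.
import math

rho   = 1.225        # air density, kg/m^3 (sea level)
U     = 50.0         # freestream speed, m/s
chord = 1.2          # chord length, m
alpha = math.radians(5.0)

gamma = math.pi * U * chord * math.sin(alpha)     # circulation selected by the Kutta condition
lift_per_span = rho * U * gamma                   # Kutta-Joukowski theorem, N per metre of span
cl = lift_per_span / (0.5 * rho * U**2 * chord)   # lift coefficient

print(f"Circulation: {gamma:.2f} m^2/s")
print(f"Lift per unit span: {lift_per_span:.1f} N/m")
print(f"Cl = {cl:.3f} (thin-airfoil value 2*pi*sin(alpha) = {2*math.pi*math.sin(alpha):.3f})")

With these assumed values the lift coefficient reproduces the thin-airfoil slope of about 2*pi per radian of angle of attack, which is the standard consequence of imposing the Kutta condition on a sharp trailing edge.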
Kutta condition
[ "Chemistry", "Engineering" ]
1,804
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
13,630,054
https://en.wikipedia.org/wiki/Old%20Frisian%20longhouse
Old Frisian longhouses were, as the name indicates, long-bodied houses which can be found in the Dutch province of Friesland. This type of house had two or more different parts placed behind or beside each other. It is the forerunner of the "Head-Neck-Body farmhouse". There were a variety of types, but most have vanished over time. The only remaining longhouse is located in the Dutch village Wartena. One of the oldest records of this type of residence comes from an undated manuscript from around 1850 by J.H. Halbertsma, the Lexicon Frisicum (A and B). Halbertsma made an untidy schematic drawing that shows the lines of the outer walls of a longhouse. References See also Old Frisian farmhouse House types Vernacular architecture Agricultural buildings in the Netherlands Architecture in Frisia
Old Frisian longhouse
[ "Engineering" ]
177
[ "Architecture stubs", "Architecture" ]
13,631,038
https://en.wikipedia.org/wiki/Mason%20Contractors%20Association%20of%20America
The Mason Contractors Association of America (MCAA) is a trade association in the United States of America representing mason contractors. Activities MCAA promotes building codes and standards for mason contractors and designers such as the ASTM, MSJC, ASCE, and the IBC. Each year, MCAA conducts the MCAA Convention which includes the annual meeting and educational programming. The association conducts various classes. Some programs include Masonry Foreman Development, Basic Masonry Estimating, Masonry Quality Institute, and other topics such as Masonry Wall Bracing and Understanding Masonry Codes and Standards. The MCAA provides information on careers in masonry to students, parents and high schools. The MCAA supports the establishment of both pre-apprentice and apprenticeship programs and assists local training programs. Organizational structure The full board consists of a Chairman, Chairman Elect, Treasurer, Secretary and nine Regional Vice Presidents. In addition, State Chairmen serve for each state and thirteen Committees help drive the association. The Mason Contractors Association of America has a full-time staff in Washington, D.C. References External links Official MCAA Site Masonry Trade associations based in the United States
Mason Contractors Association of America
[ "Engineering" ]
224
[ "Construction", "Masonry" ]
13,631,685
https://en.wikipedia.org/wiki/Hosaka%E2%80%93Cohen%20transformation
Hosaka–Cohen transformation (also called H–C transformation) is a mathematical method of converting a particular two-dimensional scalar magnetic field map to a particular two-dimensional vector map. The scalar field map is of the component of magnetic field which is normal to a two-dimensional surface of a volume conductor; this volume conductor contains the currents producing the magnetic field. The resulting vector map, sometimes called an "arrow map", roughly mimics those currents under the surface which are parallel to the surface and which produced the field. Therefore, the purpose in performing the transformation is to allow a rough visualization of the underlying, parallel currents. The transformation was proposed by Cohen and Hosaka of the biomagnetism group at MIT, then was used by Hosaka and Cohen to visualize the current sources of the magnetocardiogram. Each arrow is defined as a = (∂Bz/∂y) ex − (∂Bz/∂x) ey, where the z axis of the local coordinate system is normal to the volume conductor surface, ex and ey are unit vectors, and Bz is the normal component of magnetic field. This is a form of two-dimensional gradient of the scalar quantity Bz and is rotated by 90° from the conventional gradient. Almost any scalar field, magnetic or otherwise, can be displayed in this way, if desired, as an aid to the eye, to help see the underlying sources of the field. See also Biomagnetism Bioelectromagnetism Electrophysiology Magnetic field Magnetocardiography Magnetometer Notes Further reading Biophysics Medical imaging
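A short numerical sketch of the transformation is given below: the two-dimensional gradient of the normal field component Bz, rotated by 90 degrees, evaluated on a regular grid. The Bz map used here is a synthetic example invented for illustration; in practice it would come from measured magnetometer data.

# Arrow map from a grid of the normal field component Bz(x, y):
# a = (dBz/dy) * x_hat - (dBz/dx) * y_hat, i.e. the 2-D gradient rotated by 90 degrees.
# The field below is a synthetic example, not measured data.
import numpy as np

x = np.linspace(-5, 5, 41)
y = np.linspace(-5, 5, 41)
X, Y = np.meshgrid(x, y, indexing="xy")
Bz = X * np.exp(-(X**2 + Y**2) / 4.0)        # synthetic normal-field map

dBz_dy, dBz_dx = np.gradient(Bz, y, x)       # axis 0 varies with y, axis 1 with x
arrow_x = dBz_dy                             # x-component of each arrow
arrow_y = -dBz_dx                            # y-component of each arrow

# The arrows roughly mimic currents parallel to the surface; plot e.g. with
# import matplotlib.pyplot as plt; plt.quiver(X, Y, arrow_x, arrow_y); plt.show()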
Hosaka–Cohen transformation
[ "Physics", "Biology" ]
300
[ "Applied and interdisciplinary physics", "Biophysics" ]