id (int64, 39–79M) | url (string, 31–227 chars) | text (string, 6–334k chars) | source (string, 1–150 chars, nullable) | categories (list, 1–6 items) | token_count (int64, 3–71.8k) | subcategories (list, 0–30 items) |
|---|---|---|---|---|---|---|
64,333 | https://en.wikipedia.org/wiki/Reed%27s%20law | Reed's law is the assertion of David P. Reed that the utility of large networks, particularly social networks, can scale exponentially with the size of the network.
The reason for this is that the number of possible sub-groups of network participants is 2^N − N − 1, where N is the number of participants. This grows much more rapidly than either
the number of participants, N, or
the number of possible pair connections, N(N − 1)/2 (which follows Metcalfe's law),
so that even if the utility of groups available to be joined is very small on a per-group basis, eventually the network effect of potential group membership can dominate the overall economics of the system.
Derivation
Given a set A of N people, it has 2^N possible subsets. This is not difficult to see, since we can form each possible subset by simply choosing for each element of A one of two possibilities: whether to include that element, or not.
However, this includes the (one) empty set, and N singletons, which are not properly subgroups. So 2^N − N − 1 subsets remain, which is exponential, like 2^N.
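As a rough illustration, the sketch below (Python; the function names are ours, purely for illustration) compares how the number of pair connections and the number of possible sub-groups grow with N:

```python
def metcalfe(n: int) -> int:
    """Number of possible pair connections, N(N - 1)/2."""
    return n * (n - 1) // 2

def reed(n: int) -> int:
    """Number of possible sub-groups: all subsets except the empty set and the N singletons."""
    return 2**n - n - 1

for n in (5, 10, 20, 30):
    print(f"N={n:>2}  pairs={metcalfe(n):>10}  sub-groups={reed(n):>12}")
# N= 5  pairs=        10  sub-groups=          26
# N=10  pairs=        45  sub-groups=        1013
# N=20  pairs=       190  sub-groups=     1048555
# N=30  pairs=       435  sub-groups=  1073741793
```

Even at modest N, the 2^N − N − 1 term dwarfs the pairwise count, which is the point of the derivation above.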
Quote
From David P. Reed's "The Law of the Pack" (Harvard Business Review, February 2001, pp. 23–24):
"[E]ven Metcalfe's law understates the value created by a group-forming network [GFN] as it grows. Let's say you have a GFN with n members. If you add up all the potential two-person groups, three-person groups, and so on that those members could form, the number of possible groups equals 2n. So the value of a GFN increases exponentially, in proportion to 2n. I call that Reed's Law. And its implications are profound."
Business implications
Reed's Law is often mentioned when explaining the competitive dynamics of internet platforms. The law states that a network becomes more valuable when people can easily form subgroups to collaborate, and that this value increases exponentially with the number of connections; a business platform that reaches a sufficient number of members can therefore generate network effects that dominate the overall economics of the system.
Criticism
Other analysts of network value functions, including Andrew Odlyzko, have argued that both Reed's Law and Metcalfe's Law overstate network value because they fail to account for the restrictive impact of human cognitive limits on network formation. According to this argument, the research around Dunbar's number implies a limit on the number of inbound and outbound connections a human in a group-forming network can manage, so that the actual maximum-value structure is much sparser than the set-of-subsets measured by Reed's law or the complete graph measured by Metcalfe's law.
See also
Andrew Odlyzko's "Content is Not King"
Beckstrom's law
Coase's penguin
List of eponymous laws
Metcalfe's law
Six Degrees of Kevin Bacon
Sarnoff's law
Social capital
References
External links
That Sneaky Exponential—Beyond Metcalfe's Law to the Power of Community Building
Weapon of Math Destruction: A simple formula explains why the Internet is wreaking havoc on business models.
KK-law for Group Forming Services, XVth International Symposium on Services and Local Access, Edinburgh, March 2004, presents an alternative way to model the effect of social networks.
Computer architecture statements
Eponymous laws of economics
Information theory
Network theory | Reed's law | [
"Mathematics",
"Technology",
"Engineering"
] | 720 | [
"Telecommunications engineering",
"Applied mathematics",
"Graph theory",
"Network theory",
"Computer science",
"Information theory",
"Mathematical relations"
] |
64,343 | https://en.wikipedia.org/wiki/Moir%C3%A9%20pattern | In mathematics, physics, and art, moiré patterns or moiré fringes are large-scale interference patterns that can be produced when a partially opaque ruled pattern with transparent gaps is overlaid on another similar pattern. For the moiré interference pattern to appear, the two patterns must not be completely identical, but rather displaced, rotated, or have slightly different pitch.
Moiré patterns appear in many situations. In printing, the printed pattern of dots can interfere with the image. In television and digital photography, a pattern on an object being photographed can interfere with the shape of the light sensors to generate unwanted artifacts. They are also sometimes created deliberately; in micrometers, they are used to amplify the effects of very small movements.
In physics, its manifestation is wave interference like that seen in the double-slit experiment and the beat phenomenon in acoustics.
Etymology
The term originates from moire (moiré in its French adjectival form), a type of textile, traditionally made of silk but now also made of cotton or synthetic fiber, with a rippled or "watered" appearance. Moire, or "watered textile", is made by pressing two layers of the textile when wet. The similar but imperfect spacing of the threads creates a characteristic pattern which remains after the fabric dries.
In French, the noun moire is in use from the 17th century, for "watered silk". It was a loan of the English mohair (attested 1610). In French usage, the noun gave rise to the verb moirer, "to produce a watered textile by weaving or pressing", by the 18th century. The adjective moiré formed from this verb is in use from at least 1823.
Pattern formation
Moiré patterns are often an artifact of images produced by various digital imaging and computer graphics techniques, for example when scanning a halftone picture or ray tracing a checkered plane (the latter being a special case of aliasing, due to undersampling a fine regular pattern). This can be overcome in texture mapping through the use of mipmapping and anisotropic filtering.
The drawing on the upper right shows a moiré pattern. The lines could represent fibers in moiré silk, or lines drawn on paper or on a computer screen. The nonlinear interaction of the optical patterns of lines creates a real and visible pattern of roughly parallel dark and light bands, the moiré pattern, superimposed on the lines.
The moiré effect also occurs between overlapping transparent objects. For example, an invisible phase mask is made of a transparent polymer with a wavy thickness profile. As light shines through two overlaid masks of similar phase patterns, a broad moiré pattern occurs on a screen some distance away. This phase moiré effect and the classical moiré effect from opaque lines are two ends of a continuous spectrum in optics, which is called the universal moiré effect. The phase moiré effect is the basis for a type of broadband interferometer in x-ray and particle wave applications. It also provides a way to reveal hidden patterns in invisible layers.
Line moiré
Line moiré is one type of moiré pattern; a pattern that appears when superposing two transparent layers containing correlated opaque patterns. Line moiré is the case when the superposed patterns comprise straight or curved lines. When moving the layer patterns, the moiré patterns transform or move at a faster speed. This effect is called optical moiré speedup.
More complex line moiré patterns are created if the lines are curved or not exactly parallel.
Shape moiré
Shape moiré is one type of moiré pattern demonstrating the phenomenon of moiré magnification. 1D shape moiré is the particular simplified case of 2D shape moiré. One-dimensional patterns may appear when superimposing an opaque layer containing tiny horizontal transparent lines on top of a layer containing a complex shape which is periodically repeating along the vertical axis.
Moiré patterns revealing complex shapes, or sequences of symbols embedded in one of the layers (in form of periodically repeated compressed shapes) are created with shape moiré, otherwise called band moiré patterns. One of the most important properties of shape moiré is its ability to magnify tiny shapes along either one or both axes, that is, stretching. A common 2D example of moiré magnification occurs when viewing a chain-link fence through a second chain-link fence of identical design. The fine structure of the design is visible even at great distances.
Calculations
Moiré of parallel patterns
Geometrical approach
Consider two patterns made of parallel and equidistant lines, e.g., vertical lines. The step of the first pattern is p, the step of the second is p + δp, with 0 < δp ≪ p.
If the lines of the patterns are superimposed at the left of the figure, the shift between the lines increases when going to the right. After a given number of lines, the patterns are opposed: the lines of the second pattern are between the lines of the first pattern. If we look from a far distance, we have the feeling of pale zones when the lines are superimposed (there is white between the lines), and of dark zones when the lines are "opposed".
The middle of the first dark zone occurs when the shift is equal to p/2. The nth line of the second pattern is shifted by n·δp compared to the nth line of the first pattern. The middle of the first dark zone thus corresponds to
n·δp = p/2,
that is
n = p/(2·δp).
The distance d between the middle of a pale zone and a dark zone is
d = n·p = p²/(2·δp);
the distance between the middle of two dark zones, which is also the distance between two pale zones, is
2d = p²/δp.
From this formula, we can see that:
the bigger the step p, the bigger the distance between the pale and dark zones;
the bigger the discrepancy δp, the closer the dark and pale zones; a great spacing between dark and pale zones means that the patterns have very close steps.
The principle of the moiré is similar to the Vernier scale.
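As a quick numerical sketch of these relations (Python, with arbitrary illustrative values for the step and its discrepancy):

```python
# Moiré of two parallel line patterns with steps p and p + dp.
# d = p**2 / (2*dp) is the pale-to-dark spacing; 2*d is the fringe period.
p = 1.0  # step of the first pattern (arbitrary units)
for dp in (0.2, 0.1, 0.05, 0.01):
    n = p / (2 * dp)   # index of the line where the first dark zone is centred
    d = n * p          # distance between the middle of a pale zone and a dark zone
    print(f"dp={dp:<5}  n={n:6.1f}  d={d:6.1f}  fringe period={2 * d:7.1f}")
# As dp shrinks (the steps become nearly equal), the moiré fringes spread far apart,
# which is why the effect works like a Vernier scale for tiny differences.
```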
Mathematical function approach
The essence of the moiré effect is the (mainly visual) perception of a distinctly different third pattern which is caused by inexact superimposition of two similar patterns. The mathematical representation of these patterns is not trivially obtained and can seem somewhat arbitrary. In this section we shall give a mathematical example of two parallel patterns whose superimposition forms a moiré pattern, and show one way (of many possible ways) these patterns and the moiré effect can be rendered mathematically.
The visibility of these patterns is dependent on the medium or substrate in which they appear, and these may be opaque (as for example on paper) or transparent (as for example in plastic film). For purposes of discussion we shall assume the two primary patterns are each printed in greyscale ink on a white sheet, where the opacity (e.g., shade of grey) of the "printed" part is given by a value between 0 (white) and 1 (black) inclusive, with 1/2 representing neutral grey. Any value less than 0 or greater than 1 using this grey scale is essentially "unprintable".
We shall also choose to represent the opacity of the pattern resulting from printing one pattern atop the other at a given point on the paper as the average (i.e. the arithmetic mean) of each pattern's opacity at that position, which is half their sum, and, as calculated, does not exceed 1. (This choice is not unique. Any other method to combine the functions that satisfies keeping the resultant function value within the bounds [0,1] will also serve; arithmetic averaging has the virtue of simplicity—with hopefully minimal damage to one's concepts of the printmaking process.)
We now consider the "printing" superimposition of two almost similar, sinusoidally varying, grey-scale patterns to show how they produce a moiré effect in first printing one pattern on the paper, and then printing the other pattern over the first, keeping their coordinate axes in register. We represent the grey intensity in each pattern by a positive opacity function of distance along a fixed direction (say, the x-coordinate) in the paper plane, in the form
where the presence of 1 keeps the function positive definite, and the division by 2 prevents function values greater than 1.
The quantity represents the periodic variation (i.e., spatial frequency) of the pattern's grey intensity, measured as the number of intensity cycles per unit distance. Since the sine function is cyclic over argument changes of , the distance increment per intensity cycle (the wavelength) obtains when , or .
Consider now two such patterns, where one has a slightly different periodic variation from the other:
f1(x) = (1 + sin(k1·x))/2,
f2(x) = (1 + sin(k2·x))/2,
such that k1 ≈ k2.
The average of these two functions, representing the superimposed printed image, evaluates as follows (using the product-to-sum, or prosthaphaeresis, identities):
f3(x) = (f1(x) + f2(x))/2 = (1 + sin(kA·x)·cos(kB·x))/2,
where it is easily shown that
kA = (k1 + k2)/2
and
kB = (k1 − k2)/2.
This function average, f3, clearly lies in the range [0,1]. Since its periodic variation kA is the average of, and therefore close to, k1 and k2, the moiré effect is distinctively demonstrated by the sinusoidal envelope "beat" function cos(kB·x), whose periodic variation kB is half the difference of the periodic variations k1 and k2 (and evidently much lower in frequency).
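A short numerical check of this derivation (Python with NumPy; the spatial frequencies below are arbitrary illustrative values):

```python
import numpy as np

# Two nearly identical sinusoidal opacity patterns and their printed average.
k1, k2 = 2 * np.pi / 1.00, 2 * np.pi / 1.05   # slightly different spatial frequencies
x = np.linspace(0, 40, 10_000)

f1 = (1 + np.sin(k1 * x)) / 2
f2 = (1 + np.sin(k2 * x)) / 2
f3 = (f1 + f2) / 2                            # the superimposed print

# Same result via the product-to-sum identity: carrier sin(kA*x) times slow envelope cos(kB*x)
kA, kB = (k1 + k2) / 2, (k1 - k2) / 2
f3_beat = (1 + np.sin(kA * x) * np.cos(kB * x)) / 2

print(np.allclose(f3, f3_beat))               # True: the two forms agree
print("beat envelope period:", 2 * np.pi / abs(kB))
```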
Other one-dimensional moiré effects include the classic beat frequency tone which is heard when two pure notes of almost identical pitch are sounded simultaneously. This is an acoustic version of the moiré effect in the one dimension of time: the original two notes are still present—but the listener's perception is of two pitches that are the average of and half the difference of the frequencies of the two notes. Aliasing in sampling of time-varying signals also belongs to this moiré paradigm.
Rotated patterns
Consider two patterns with the same step p, but the second pattern is rotated by an angle α. Seen from afar, we can also see darker and paler lines: the pale lines correspond to the lines of nodes, that is, lines passing through the intersections of the two patterns.
If we consider a cell of the lattice formed, we can see that it is a rhombus with the four sides equal to d = p/sin α (we have a right triangle whose hypotenuse is d and the side opposite to the angle α is p).
The pale lines correspond to the small diagonal of the rhombus. As the diagonals are the bisectors of the neighbouring sides, we can see that the pale line makes an angle equal to α/2 with the perpendicular of each pattern's lines.
Additionally, the spacing D between two pale lines is half of the long diagonal. The long diagonal 2D is the hypotenuse of a right triangle whose legs are d·(1 + cos α) and p. The Pythagorean theorem gives:
(2D)² = d²·(1 + cos α)² + p²
that is:
(2D)² = (p²/sin²α)·(1 + cos α)² + p² = 2p²·(1 + cos α)/sin²α = p²/sin²(α/2)
thus
D = p/(2·sin(α/2)).
When α is very small (α < π/6), the following small-angle approximations can be made: sin α ≈ α and cos α ≈ 1,
thus
D ≈ p/α.
We can see that the smaller α is, the farther apart the pale lines; when both patterns are parallel (α = 0), the spacing between the pale lines is infinite (there is no pale line).
There are thus two ways to determine α: by the orientation of the pale lines and by their spacing (for small angles, α ≈ p/D).
If we choose to measure the angle, the final error is proportional to the measurement error. If we choose to measure the spacing, the final error is proportional to the inverse of the spacing. Thus, for the small angles, it is best to measure the spacing.
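A small sketch of these relations (Python; the step and angles below are illustrative only):

```python
import math

# Fringe spacing D for two identical line patterns of step p, one rotated by alpha:
# exact:        D = p / (2 * sin(alpha / 2))
# small angles: D ~ p / alpha, so a measured spacing gives back alpha ~ p / D
p = 0.5  # step of both patterns, in mm (illustrative)
for alpha_deg in (10, 5, 2, 1):
    alpha = math.radians(alpha_deg)
    D_exact = p / (2 * math.sin(alpha / 2))
    D_small = p / alpha
    print(f"alpha={alpha_deg:>2} deg  D={D_exact:6.2f} mm  (small-angle approx {D_small:6.2f} mm)")
# The smaller the angle, the wider the fringe spacing, which is why measuring the
# spacing gives a more precise estimate of alpha than measuring the fringe orientation.
```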
Implications and applications
Printing full-color images
In graphic arts and prepress, the usual technology for printing full-color images involves the superimposition of halftone screens. These are regular rectangular dot patterns—often four of them, printed in cyan, yellow, magenta, and black. Some kind of moiré pattern is inevitable, but in favorable circumstances the pattern is "tight"; that is, the spatial frequency of the moiré is so high that it is not noticeable. In the graphic arts, the term moiré means an excessively visible moiré pattern. Part of the prepress art consists of selecting screen angles and halftone frequencies which minimize moiré. The visibility of moiré is not entirely predictable. The same set of screens may produce good results with some images, but visible moiré with others.
Television screens and photographs
Moiré patterns are commonly seen on television screens when a person is wearing a shirt or jacket of a particular weave or pattern, such as a houndstooth jacket. This is due to interlaced scanning in televisions and non-film cameras, referred to as interline twitter. As the person moves about, the moiré pattern is quite noticeable. Because of this, newscasters and other professionals who regularly appear on TV are instructed to avoid clothing which could cause the effect.
Photographs of a TV screen taken with a digital camera often exhibit moiré patterns. Since both the TV screen and the digital camera use a scanning technique to produce or to capture pictures with horizontal scan lines, the conflicting sets of lines cause the moiré patterns. To avoid the effect, the digital camera can be aimed at an angle of 30 degrees to the TV screen.
Marine navigation
The moiré effect is used in shoreside beacons called "Inogon leading marks" or "Inogon lights", manufactured by Inogon Licens AB, Sweden, to designate the safest path of travel for ships heading to locks, marinas, ports, etc., or to indicate underwater hazards (such as pipelines or cables). The moiré effect creates arrows that point towards an imaginary line marking the hazard or line of safe passage; as navigators pass over the line, the arrows on the beacon appear to become vertical bands before changing back to arrows pointing in the reverse direction. An example can be found in the UK on the eastern shore of Southampton Water, opposite Fawley oil refinery. Similar moiré effect beacons can be used to guide mariners to the centre point of an oncoming bridge; when the vessel is aligned with the centreline, vertical lines are visible.
Inogon lights are deployed at airports to help pilots on the ground keep to the centreline while docking on stand.
Strain measurement
In manufacturing industries, these patterns are used for studying microscopic strain in materials: by deforming a grid with respect to a reference grid and measuring the moiré pattern, the stress levels and patterns can be deduced. This technique is attractive because the scale of the moiré pattern is much larger than the deflection that causes it, making measurement easier.
The moiré effect can be used in strain measurement: the operator just has to draw a pattern on the object, and superimpose the reference pattern to the deformed pattern on the deformed object.
A similar effect can be obtained by the superposition of a holographic image of the object to the object itself: the hologram is the reference step, and the difference with the object are the deformations, which appear as pale and dark lines.
Image processing
Some image scanner computer programs provide an optional filter, called a "descreen" filter, to remove moiré pattern artifacts which would otherwise be produced when scanning printed halftone images to produce digital images.
Banknotes
Many banknotes exploit the tendency of digital scanners to produce moiré patterns by including fine circular or wavy designs that are likely to exhibit a moiré pattern when scanned and printed.
Microscopy
In super-resolution microscopy, the moiré pattern can be used to obtain images with a resolution higher than the diffraction limit, using a technique known as structured illumination microscopy.
In scanning tunneling microscopy, moiré fringes appear if surface atomic layers have a different crystal structure than the bulk crystal. This can for example be due to surface reconstruction of the crystal, or when a thin layer of a second crystal is on the surface, e.g. single-layer, double-layer graphene, or Van der Waals heterostructure of graphene and hBN, or bismuth and antimony nanostructures.
In transmission electron microscopy (TEM), translational moiré fringes can be seen as parallel contrast lines formed in phase-contrast TEM imaging by the interference of diffracting crystal lattice planes that are overlapping, and which might have different spacing and/or orientation. Most of the moiré contrast observations reported in the literature are obtained using high-resolution phase contrast imaging in TEM. However, if probe aberration-corrected high-angle annular dark field scanning transmission electron microscopy (HAADF-STEM) imaging is used, more direct interpretation of the crystal structure in terms of atom types and positions is obtained.
Materials science and condensed matter physics
In condensed matter physics, the moiré phenomenon is commonly discussed for two-dimensional materials. The effect occurs when there is mismatch between the lattice parameter or angle of the 2D layer and that of the underlying substrate, or another 2D layer, such as in 2D material heterostructures. The phenomenon is exploited as a means of engineering the electronic structure or optical properties of materials, which some call moiré materials. The often significant changes in electronic properties when twisting two atomic layers and the prospect of electronic applications has led to the name twistronics of this field. A prominent example is in twisted bi-layer graphene, which forms a moiré pattern and at a particular magic angle exhibits superconductivity and other important electronic properties.
In materials science, known examples exhibiting moiré contrast are thin films or nanoparticles of MX-type (M = Ti, Nb; X = C, N) overlapping with austenitic matrix. Both phases, MX and the matrix, have face-centered cubic crystal structure and cube-on-cube orientation relationship. However, they have significant lattice misfit of about 20 to 24% (based on the chemical composition of alloy), which produces a moiré effect.
See also
Aliasing
Angle-sensitive pixel
Barrier grid animation and stereography (kinegram)
Beat (acoustics)
Euclid's orchard
Guardian (sculpture)
Kell factor
Lenticular printing
Moiré Phase Tracking
Multidimensional sampling
References
External links
A series of oil paintings based on moiré principles by British artist, Pip Dickens
A live demonstration of the moiré effect that stems from interferences between circles
An interactive example of various moiré patterns Use arrow keys and mouse to manipulate layers.
A universal moiré effect and application in X-ray phase-contrast imaging
"The Moiré Effect Lights That Guide Ships Home", an article on YouTube by Tom Scott about the Moiré Inogon light in Southampton
"The Moiré Museum", interactive vector graphics with links to the physics and mathematics of the Moiré effect and artistic contributions
Geometry
Interference
Patterns
Printing | Moiré pattern | [
"Mathematics"
] | 3,808 | [
"Geometry"
] |
64,386 | https://en.wikipedia.org/wiki/Jan%20Brueghel%20the%20Elder | Jan Brueghel (also Bruegel or Breughel) the Elder ( , ; ; 1568 – 13 January 1625) was a Flemish painter and draughtsman. He was the younger son of the eminent Flemish Renaissance painter Pieter Bruegel the Elder. A close friend and frequent collaborator with Peter Paul Rubens, the two artists were the leading Flemish painters in the Flemish Baroque painting of the first three decades of the 17th century.
Brueghel worked in many genres including history paintings, flower still lifes, allegorical and mythological scenes, landscapes and seascapes, hunting pieces, village scenes, battle scenes and scenes of hellfire and the underworld. He was an important innovator who invented new types of paintings such as flower garland paintings, paradise landscapes, and gallery paintings in the first quarter of the 17th century. However, he generally avoided painting large figures such as portraits; instead he often collaborated with other painters who supplied such figures, while he painted the landscape backgrounds and sometimes the clothes.
He further created genre paintings that were imitations, pastiches and reworkings of his father's works, in particular his father's genre scenes and landscapes with peasants. Brueghel represented the type of the pictor doctus, the erudite painter whose works are informed by the religious motifs and aspirations of the Catholic Counter-Reformation as well as the scientific revolution with its interest in accurate description and classification. He was court painter of the Archduke and Duchess Albrecht and Isabella, sovereigns of the Spanish Netherlands.
The artist was nicknamed "Velvet" Brueghel, "Flower" Brueghel, and "Paradise" Brueghel. The first is believed to have been given him because of his mastery in the rendering of fabrics. The second nickname is a reference to his fame as a painter of (although not a specialist in) flower pieces and the last one to his invention of the genre of the paradise landscape. His brother Pieter Brueghel the Younger was traditionally nicknamed "de helse Brueghel" or "Hell Brueghel" because it was believed he was the author of a number of paintings with fantastic depictions of fire and grotesque imagery. These paintings have now been reattributed to Jan Brueghel the Elder.
Life
Jan Brueghel the Elder was born in Brussels as the son of Pieter Bruegel the Elder and Maria (called 'Mayken') Coecke van Aelst. His mother was the daughter of the prominent Flemish Renaissance artists Pieter Coecke van Aelst and Mayken Verhulst. His father died about a year after Jan's birth in 1569. It is believed that after the death of his mother in 1578, Jan, together with his older brother Pieter Brueghel the Younger and sister Marie, went to live with their grandmother Verhulst, who was by then widowed. The early Flemish biographer Karel van Mander wrote in his Schilder-boeck published in 1604 that Verhulst was the first art teacher of her two grandsons. She taught them drawing and watercolour painting of miniatures. Jan and his brother may also have trained with local artists in Brussels who were active as tapestry designers.
Jan and his brother Pieter were then sent to Antwerp to study oil painting. According to Karel van Mander he studied under Peter Goetkint, an important dealer with a large collection of paintings in his shop. Goetkint died on 15 July 1583 not very long after Jan had started his training. It is possible that Jan continued his studies in this shop, which was taken over by Goetkint's widow, as no other master is recorded.
It was common for Flemish painters of that time to travel to Italy to complete their studies. Jan Brueghel left for Italy, first travelling to Cologne where his sister Marie and her family lived. He later visited Frankenthal, an important cultural centre where a number of Flemish landscape artists were active.
He then travelled on to Naples after probably spending time in Venice. In Naples he produced some drawings after June 1590 which show his interest in landscapes and monumental architecture. He worked for Don Francesco Caracciolo, a prominent nobleman and priest and founder of the Clerics Regular Minor. Jan produced small-scale cabinet paintings for Don Francesco.
Brueghel left Naples for Rome where he lived from 1592 to 1594. He befriended Paul Bril, a landscape specialist from Antwerp who had moved to Rome in the late 16th century. Together with his brother Mathijs Bril, he created atmospheric landscapes for many Roman residences. Brueghel took inspiration from Bril's lively drawings and small-scale landscapes of the mid-1590s. During his time in Rome Jan Brueghel became acquainted with Hans Rottenhammer, a German painter of small highly finished cabinet paintings on copper. Rottenhammer painted religious and mythological compositions, combining German and Italian elements of style, which were highly esteemed. Brueghel collaborated with both Paul Bril and Rottenhammer. Brueghel also spent time making watercolours of Rome's antique monuments and seemed particularly fascinated by the vaulted interiors of the Colosseum.
He enjoyed the protection of Cardinal Ascanio Colonna. In Rome he also met Cardinal Federico Borromeo, who played an important role in the Counter-Reformation and was also an avid art collector. The Cardinal became Brueghel's lifelong friend and patron. Brueghel took up residence in Borromeo's Palazzo Vercelli. When Borromeo became archbishop of Milan in June 1595, Brueghel followed him and became part of the Cardinal's household. He produced many landscape and flower paintings for the Cardinal.
Brueghel stayed about a year in Milan, and by 1596 he had returned to Antwerp, where he remained active, save for a few interruptions, for the rest of his life. A year after his return Jan Brueghel was admitted as a Free Master in Antwerp's Guild of Saint Luke as the son of a master. The artist married on 23 January 1599 in the Cathedral of Our Lady in Antwerp. The bride was Isabella de Jode, the daughter of the cartographer, engraver and publisher Gerard de Jode. Their son Jan was born on 13 September 1601. This first-born had Rubens as his godfather and later took over his father's workshop and was known as Jan Brueghel the Younger.
Brueghel was registered as a burgher of Antwerp on 4 October 1601 as 'Jan Bruegel, Peetersone, schilder, van Bruessele' ('Jan Bruegel, son of Peeter, painter, of Brussels'). Just a month before, Brueghel had been elected dean of the Guild of Saint Luke, but he had not been able to take up the position as he was not a burgher of Antwerp. Upon becoming formally registered as a burgher the same year Brueghel could finally be the dean. The next year he was re-elected as dean.
In 1603 his daughter Paschasia Brueghel was born. Rubens was also her godfather. His wife Isabella de Jode died the same year, leaving him with two young children. It has been speculated that the death of his wife was linked to the birth of her last child.
In mid-1604 Brueghel visited Prague, the main location of the court of Rudolf II, Holy Roman Emperor, who promoted artistic innovation. The Emperor's court had attracted many Northern artists such as Bartholomeus Spranger and Hans von Aachen who created a new affected style, full of conceits, today known as Northern Mannerism.
Upon returning to Antwerp in September 1604, Brueghel bought a large house called "De Meerminne" (The Mermaid) in the Lange Nieuwstraat in Antwerp on 20 September 1604. The artist remarried in April 1605. With his second wife Catharina van Mariënburg he had eight children, of whom Ambrosius became a painter.
After his appointment in 1606 as court painter to the Archduke and Duchess Albrecht and Isabella, sovereigns of the Spanish Netherlands, the artist was present in Brussels for periods in 1606, 1609, 1610 and 1613. On 28 August 1613 the court in Brussels paid Brueghel 3625 guilders for completing various works.
From October 1610 onwards Rubens started taking on the role of intermediary for his friend Jan Brueghel. By 1625 Rubens had written about 25 letters to Cardinal Borromeo on behalf of Brueghel. In a letter to Borromeo Brueghel referred, jokingly, to his friend's role as that of "mio secretario Rubens" (my secretary Rubens). In 1612 or 1613 Peter Paul Rubens painted a portrait of Jan Brueghel and his family (Courtauld Institute, London). In 1613 he accompanied Rubens and Hendrick van Balen the Elder on a diplomatic mission to the Dutch Republic. Here they met Hendrick Goltzius and other Haarlem artists.
When John Ernest, Duke of Saxe-Eisenach passed through Antwerp in 1614 he took time to pay a visit to Rubens and Brueghel in their workshops. Brueghel received many official commissions from the Antwerp city magistrate. Four of his paintings were offered by the Antwerp city magistrates to the Archduke and Duchess Albrecht and Isabella on 27 August 1615. He was in 1618 one of twelve important painters from Antwerp who were commissioned by the Antwerp city magistrates to produce a series of paintings for the Archduke and Duchess Albrecht and Isabella. For this commission, Brueghel coordinated the work on a painting cycle depicting an Allegory of the Five Senses. The artists participating in the project included Rubens, Frans Snyders, Frans Francken the Younger, Joos de Momper, Hendrick van Balen the Elder and Sebastiaen Vrancx. The works were destroyed in a fire in 1713.
On 9 March 1619 Brueghel bought a third house called Den Bock (the Billy Goat) located in the Antwerp Arenbergstraat. When on 6 August 1623 his daughter Clara Eugenia was baptized, Archduchess Isabella and Cardinal Borromeo were her godparents. Jan Brueghel died on 13 January 1625 in Antwerp from complications arising from a gastrointestinal upset.
The artist's estate was distributed on 3 June and 23 June 1627 among his surviving wife and his children from both marriages. Rubens, Hendrick van Balen the Elder, Cornelis Schut and Paulus van Halmaele were the executors of his last will. Rubens was the guardian of the surviving Brueghel children.
His students included his son Jan as well as Daniel Seghers. Brueghel's daughter Paschasia married the painter Hieronymus van Kessel the Younger, and their son Jan van Kessel the Elder studied with Jan Brueghel the Younger. Brueghel's daughter Anna married David Teniers the Younger in 1637.
Work
General
Jan Brueghel the Elder was a versatile artist who practised in many genres and introduced various new subjects into Flemish art. He was an innovator who contributed to the development of the various genres to which he put his hand such as flower still lifes, landscapes and seascapes, hunting pieces, battle scenes and scenes of hellfire and the underworld. His best-known innovations are the new types of paintings, which he introduced into the repertoire of Flemish art in the first quarter of the 17th century such as flower garland paintings, paradise landscapes and paintings of art galleries. Unlike contemporary Flemish Baroque artists, such as Rubens, he did not produce large altarpieces for the local churches.
Jan Brueghel the Elder achieved a superb technical mastery, which enabled him to render materials, animals and landscapes with remarkable accuracy and a high degree of finish. He had an accomplished miniaturist technique allowing him to achieve an accurate description of nature.
Little is known about the workshop practices of Brueghel. He operated a large workshop that allowed him to produce a large quantity of works, which were in turn reproduced in his workshop. After Brueghel's death in 1625, Jan Brueghel the Younger took charge of his father's workshop, which he operated in the same way as his father had. This is clear from the style of the surviving paintings, which are in the vein of his father's, and from the continued collaboration with his father's former collaborators such as Rubens and Hendrick van Balen. This workshop production contributed to the wide distribution of Jan Brueghel the Elder's creations.
While his brother Pieter was engaged in the large-scale production of numerous works for the Antwerp art market, Jan Brueghel worked for a select clientele of aristocratic patrons and collectors of pictures to create more expensive and exclusive images. His works, such as his paradise landscapes, appealed to the aesthetic preferences of aristocrats who loved collecting such precious objects. His works, often painted on copper, were luxury objects intended for the simple pleasure of viewing as well as contemplation.
Collaborations
Collaboration between artists specialised in distinctive genres was a defining feature of artistic practice in 17th-century Antwerp. Jan Brueghel was likewise a frequent collaborator with fellow artists. As he was an artist with a wide range of skills he worked with a number of collaborators in various genres. His collaborators included landscape artists Paul Bril and Joos de Momper, architectural painter Paul Vredeman de Vries and figure painters Frans Francken the Younger, Hendrick de Clerck, Pieter van Avont and Hendrick van Balen.
His collaborations with figure painter Hans Rottenhammer began in Rome around 1595 and ended in 1610. Rottenhammer was a gifted figure painter and known for his skill in painting nudes. Initially when the artists both lived in Venice, their collaborative works were executed on canvas, but in their later collaborations after Brueghel had returned to Antwerp they typically used copper. After Brueghel's return to Antwerp, their collaboration practice was for Brueghel to send the coppers with the landscape to Rottenhammer in Venice, who painted in the figures and then returned the coppers. In a few instances, the process was the other way around. Brueghel and Rottenhammer did not collaborate only on landscape paintings with figures; they jointly created one of the earliest devotional garland paintings, made for Cardinal Federico Borromeo, depicting a Virgin and Child surrounded by a flower garland (Pinacoteca Ambrosiana).
While in his collaborations with Hans Rottenhammer, the landscapes were made by Brueghel, the roles were reversed when he worked with Joos de Momper as it was Brueghel who provided the figures to the landscapes painted by de Momper. An example of their collaboration is Mountain Landscape with Pilgrims in a Grotto Chapel (, Liechtenstein Museum). There are about 59 known collaborations between Brueghel and de Momper making de Momper his most frequent collaborator. Hendrick van Balen the Elder was another regular collaborator with Jan Brueghel. Their collaboration was simplified by the fact that from 1604 onwards both painters had moved to the Lange Nieuwstraat, which made it easier to carry the panels and copper plates on which they collaborated back and forth.
Another frequent collaborator of Jan Brueghel was Rubens. The two artists executed about 25 joint works between 1598 and 1625. Their first collaboration was on The Battle of the Amazons (-1600, Sanssouci Picture Gallery). The artists worked together in the development of the genre of the devotional garland painting with works such as the Madonna in a Floral Wreath (-1618, Alte Pinakothek). They further jointly made mythological scenes and an allegorical series representing the Five Senses. The collaboration between the two friends was remarkable because they worked in very different styles and specialisations and were artists of equal status. They were able to preserve the individuality of their respective styles in these joint works.
Brueghel appears to have been the principal initiator of their joint works, which were made principally during the second half of the 1610s when their method of collaboration had become more systemised and included Rubens' workshop. Usually it would be Brueghel who started a painting and he would leave space for Rubens to add the figures. In their early collaborations they seem to have made major corrections to the work of the other. For instance, in the early collaborative effort The Return from War: Mars Disarmed by Venus Rubens overpainted most of the lower-right corner with grey paint so he could enlarge his figures. In later collaborations the artists seem to have streamlined their collaboration and agreed on the composition early on so that these later works show little underdrawing. As court painters to the archdukes their collaborations reflected the court's desire to emphasise the continuity of its reign with the previous Burgundian and Habsburg rulers as well as the rulers' piousness. While they were mindful of the prevailing tastes in courtly circles, which favoured subjects such as the hunt, the two artists were creative in their response to the court's preferences by devising new iconography and genres, such as the devotional garland paintings, which were equally capable of conveying the devoutness and splendour of the archducal court. The joint artistic output of Brueghel and Rubens was highly prized by collectors all over Europe.
Ideological context
Jan Brueghel's work reflects the various ideological currents at work in the Catholic Spanish Netherlands during his lifetime. The Catholic Counter-Reformation's worldview played an important role in the artist's practice. Central in this worldview was the belief that the earth and its inhabitants were revelations of a supreme being, God. Artistic representation of, and scientific investigation into, that divine revelation was encouraged and valued. Breughel's friend and patron, the Counter-Reformation Cardinal Federico Borromeo, particularly emphasised the beauty and diversity of the animal world. In his I tre libri delle laudi divine (published only posthumously in 1632) Borromeo wrote: 'Looking then with attentive study at animals' construction and formation, and at their parts, members, and characters, can it not be said how excellently divine wisdom has demonstrated the value of its great works?' Jan Brueghel's realistic depictions of nature in all its various forms, in flowers, landscapes, animals, etc., was clearly in line with the view that study of God's creation was an important source for knowing God.
Brueghel's era also saw a growing interest in the study of nature through empirical evidence as opposed to relying on inherited tradition. The increased access to new animals and exotic plants from the newly discovered territories played an important role in this intellectual exploration. This resulted in the appearance of the first scholarly catalogues and encyclopedias, including the illustrated natural history catalogues of 16th-century naturalists Conrad Gesner and Ulisse Aldrovandi. Their major contribution to natural history was the creation of an extensive system of description of each animal. Gesner placed all the species within four general categories: quadrupeds, birds, fish and serpents. He described animals in alphabetical order and in terms of nomenclature, geographic origins, mode of living and behaviour. Aldrovandi took another approach and did not order animals alphabetically. He relied on visual resemblance as the classifying factor. For example, he grouped the horse together with analogous animals, such as the donkey and mule, and separated species into categories, such as birds with webbed feet and nocturnal birds.
Brueghel's works reflect this contemporary encyclopedic interest in the classification and ordering of all of the natural world. This is evidenced in his flower pieces, landscapes, allegorical works and gallery paintings. In his paradise landscapes, for instance, Brueghel grouped most of the species according to their basic categories of biological classification, in other words, according to the main groups of related species that resemble one another, such as birds or quadrupeds. He further classified most of them into subdivisions consisting of similar morphological and behavioural characteristics. His paradise landscapes thus constituted a visual catalogue of animals and birds which fulfilled the role of micro-encyclopedia.
Brueghel's endeavour to represent the world through ordering and classifying its many elements based on empirical observation did not stop with the natural world. In Prague he had acquired knowledge of the large collections of Emperor Rudolf II, which were divided in natural, artificial and scientific objects. Brueghel's allegorical paintings of the four elements and of the five senses reveal the same classifying obsession, using each element or sense to organise natural, man-made instruments and scientific objects. In this skillful union of the areas of art, science, and nature Brueghel demonstrates his mastery of these various disciplines. His paintings serve the same purpose to that of encyclopedic collections, then known as cabinets of curiosities, by linking between the mundus sensibilis and the mundus intelligibilis. His approach to describing and cataloguing nature in art resembles the distinction natural historians were starting to make between perceptual experience and theoretical knowledge.
Brueghel's obsession with classifying the world was completely in line with the encyclopedic tastes of the court in Brussels as is demonstrated by their large art collection of predominantly Flemish paintings, menagerie of exotic species and extensive library.
Flower paintings
Jan Brueghel the Elder was one of the first artists in the Habsburg Netherlands who started to paint pure flower still lifes. A pure flower still life depicts flowers, typically arranged in a vase or other vessel, as the principal subject of the picture, rather than as a subordinate part of another work such as a history painting. Jan Brueghel is regarded as an important contributor to the emerging genre of the flower piece in Northern art, a contribution that was already appreciated in his time when he received the nickname 'Flower Brueghel'. While the traditional interpretation of these flower pieces was that they were vanitas symbols or allegories of transience with hidden meanings, it is now more common to interpret them as mere depictions of the natural world. Brueghel's approach to these works was informed by his desire to display his skill in giving a realistic, almost scientific rendering of nature. These works reflected the ideological concerns demonstrated in his work, which combined the worldview that nature was a revelation of a god with the interest in gaining a scientific understanding of nature.
Brueghel's flower pieces are dominated by the floral arrangements, which are placed against a neutral dark background. Minor details such as insects, butterflies, snails and separate sprays of flowers or rosemary may occasionally be added but are subordinate to the principal subject. While Brueghel sought out very rare flowers, he used certain common blooms such as tulips, irises and roses to anchor his bouquets. This may have been a response to his patrons' wishes as well as compositional considerations. His bouquets were typically composed of flowers blooming in different seasons of the year so they could never have been painted together directly from nature. Brueghel was in the habit of travelling to make drawings of flowers that were not available in Antwerp, so that he could paint them into his bouquets. Brueghel rendered the flowers with an almost scientific precision. He arranged each flower with hardly any overlap so that they are shown off to their best advantage, and many are shown at different angles. The flowers are arranged by size with smaller ones at the bottom of the bouquet, larger flowers such as tulips, cornflowers, peonies and guelder roses in the centre and large flowers, such as white lilies and blue irises, at the top of the bouquet.
This arrangement is clear in the Flowers in a Ceramic Vase (c. 1620, Royal Museum of Fine Arts Antwerp). The vase in which the flowers are arranged is decorated with motifs in relief. The two cartouches - separated by a fantastic figure - show Amphitrite, a sea goddess from Greek mythology, on the left, and Ceres, the Roman corn goddess, on the right. These two goddesses were typically used in allegorical representations of the four elements to symbolise water and earth respectively. The other two cartouches on that part of the vase that is invisible likely show Vulcan, who was associated with fire, and Apollo, who was associated with air. The occurrence of the four elements and the flowers in a single work can be interpreted as the emanation of the macrocosm in the microcosm.
Brueghel often repeated motifs in his flower pieces. Even so, he was able to give each work a remarkable freshness and vitality of its own.
Garland paintings
Jan Brueghel the Elder played a key role in the invention and development of the genre of garland paintings. Garland paintings typically show a flower garland around a devotional image or portrait. Together with Hendrick van Balen, he painted around 1607-1608 the first known garland painting for Italian cardinal Federico Borromeo, a passionate art collector and Catholic reformer. Borromeo requested the painting to respond to the destruction of images of the Virgin in the preceding century and it thus combined both the cardinal's interests in Catholic reform and the arts. Brueghel, the still life specialist, painted the flower garland, while van Balen, a specialist figure painter, was responsible for the image of the Virgin.
The genre of garland paintings was inspired by the cult of veneration and devotion to Mary prevalent at the Habsburg court (then the rulers over the Habsburg Netherlands) and in Antwerp generally. The genre was initially connected to the visual imagery of the Counter-Reformation movement. Garland paintings were usually collaborations between a still life and a figure painter. Brueghel's collaborators on garland paintings included Rubens, Frans Francken the Younger and Pieter van Avont.
An example of a collaborative garland painting made by Jan Brueghel the Elder and Rubens is the Madonna in Floral Wreath (1621, Alte Pinakothek).
An example of a collaborative garland painting he made with Hendrick van Balen is the Garland of Fruit surrounding a Depiction of a Goddess Receiving Gifts from Personifications of the Four Seasons of which there are two versions, one in the Belfius collection and a second in the Mauritshuis in The Hague. Both versions are considered to be autograph paintings, but small differences between the two suggest that the panel in the Belfius collection is the original version. The medallion in the centre is traditionally believed to depict Cybele, the ancient Phrygian goddess of the earth and nature as it was described as such in 1774 when it was catalogued in the collection of William V, Prince of Orange in The Hague. More recently an identification of the goddess with Ceres, the Roman goddess of agriculture, grain crops, fertility and motherly relationships, has been proposed. The reason is that the goddess in the medallion has none of the attributes traditionally connected with Cybele. Around the medallion is suspended a garland of flowers, vegetables and fruit – a tribute to the goddess and an ode to plenty and fertility. Van Balen painted the medallion while Brueghel painted the abundant garland, the surrounding figures and the numerous animals.
Landscapes
Jan Brueghel's father, Pieter Bruegel the Elder, is regarded as an important innovator of landscape art. By introducing greater naturalism in his Alpine mountain settings, his father had expanded on the world landscape tradition that had been founded mainly by Joachim Patinir. Some of Pieter the Elder's works also foreshadowed the forest landscape that would start to dominate landscape painting around the turn of the 16th century. Pieter the Elder also developed the village and rural landscape, placing Flemish hamlets and farms in exotic prospects of mountains and river valleys.
Jan developed on the formula he learned from his father of arranging country figures travelling a road, which recedes into the distance. He emphasised the recession into space by carefully diminishing the scale of figures in the foreground, middle-ground, and far distance. To further the sense of atmospheric perspective, he used varying tones of brown, green, and blue progressively to characterise the recession of space. His landscapes with their vast depth are balanced through his attention to the peasant figures and their humble activities in the foreground.
Like his father, Jan Brueghel also painted various village landscapes. He used the surrounding landscapes as the stage for the crowds of anecdotal, colourfully dressed peasants who engage in various activities in the market, the country roads and during the rowdy kermesses. Jan Brueghel's landscape paintings with their strong narrative elements and attention to detail had a significant influence on Flemish and Dutch landscape artists in the second decade of the 17th century. His river views were certainly known to painters working in Haarlem, including Esaias van de Velde and Willem Buytewech, whom Brueghel may have met there when he accompanied Peter Paul Rubens on a diplomatic mission to the Dutch Republic in 1613.
Jan Brueghel was along with artists such as Gillis van Coninxloo one of the prime developers of the dense forest landscape in the 17th century. Jan Breughel experimented with such works before Coninxloo's first dated wooded landscape of 1598. In his forest landscapes Brueghel depicted heavily wooded glades in which he captured the verdant density, and even mystery, of the forest. Although on occasion inhabited by humans and animals, these forest scenes contain dark recesses, virtually no open sky and no outlet for the eye to penetrate beyond the thick trees.
Paradise landscapes
Jan Brueghel invented the 'paradise landscape', a subgenre that involved a combination of landscape and animal painting. Works in this genre are typically crawling with numerous animals from exotic and native European species who coexist harmoniously in a lush landscape setting. These landscapes are inspired by episodes from Genesis, the book of the Bible that tells the story of the creation of the world and of man. The favourite themes taken from Genesis were the creation of man, Adam and Eve in paradise, the fall of man and the entry of the animals into Noah's ark.
Like his flower pieces, these landscapes were informed by the Catholic Counter-Reformation's worldview, which regarded earth and its inhabitants as revelations of their god and valued artistic representation of, and scientific investigation into, that divine revelation. As described above, Breughel's friend and patron, the Counter-Reformation Cardinal Federico Borromeo, had particularly emphasised the beauty and diversity of the animal world. Brueghel tried to render this worldview in his paradise landscapes. The novelty of Brueghel's paradise landscapes lies not only in the impressive variety of animals, which the artist studied mainly from life but also in their presentation as both figures of a religious narrative and as subjects of a scientific order.
Brueghel developed his earliest paradise landscapes during his stay in Venice in the early 1590s. His first paradise landscape known as The Garden of Eden with the Fall of Man is now in the Doria Pamphilj Gallery in Rome. The reference to Genesis in the picture appears in a small vignette representing the creation of Man in the background, but the main focus is on the animals and the landscape itself. This work was the first paradise landscape in which Brueghel 'catalogued' animals and depicts common and domesticated types. Brueghel's interest in the cataloguing of animals was stimulated by his visit to the court of Rudolf II, Holy Roman Emperor in Prague. The emperor had established an encyclopedic collection of rarities and animals. While in his early paradise landscapes Brueghel seems to have based some of his renderings of the animals on works by other artists, he later could rely on studies from life of the various species in the menagerie of the court in Brussels. Brueghel had also seen Albrecht Dürer's depiction of animals during his visit to Prague and had made a painted copy of Dürer's watercolour The Madonna with a Multitude of Animals (1503). Dürer's representations of animals play a pivotal role in Renaissance zoology, since they are the purest artistic form of nature study. The studies of animals by Flemish artists Hans Bol and Joris Hoefnagel also had an important influence on Brueghel. In particular Hoefnagel's Four Elements (1575–1582) was the first artistic work to categorise animals in a book format. Hoefnagel's approach to the representation of the animal world combined natural historical, classical, emblematic, and biblical references, which incorporated the various species into the categories of the four elements of the cosmos: earth, water, air, and fire. Brueghel's paradise landscapes also embodied the encyclopedic attitudes of his time by depicting a wide variety of species.
Brueghel continued refining his treatment of the subject of paradise landscapes throughout his career. The many renderings and variations of the paradise landscape produced by Brueghel earned him the nickname Paradise Brueghel.
Allegorical paintings
Jan Brueghel the Elder produced various sets of allegorical paintings, in particular on the themes of the Five senses and the Four Elements. These paintings were often collaborations with other painters such as is the case with the five paintings representing the Five senses on which Brueghel and Rubens collaborated and which are now in the Prado Museum in Madrid. He also collaborated with Hendrick van Balen on various allegorical compositions such as a series on the Four Elements as well as an Allegory of Public Welfare (Museum of Fine Arts, Budapest).
In his allegories Jan Breughel illustrated an abstract concept, such as one of the senses or one of the four elements through a multitude of concrete objects that can be associated with it. He thus represented a concept by means of descriptive tropes. Brueghel resorted in these allegorical compositions to the encyclopedic imagery that he also displayed in his paradise landscapes. This is demonstrated in his composition Allegory of Fire; Venus in the Forge of Vulcan of which there are various versions of which one (Doria Pamphilj Gallery, Rome) is a collaboration with Hendrick van Balen and another (Pinacoteca Ambrosiana, Milan) is attributed to Jan Brueghel alone. Brueghel's encyclopedic approach in this composition offers such detail that historians of science have relied on the composition as a source of information on the types of tools used in 17th-century metallurgical practice.
Scenes of hell and demons
Jan was early on nicknamed 'Hell Brueghel' but by the 19th century that name had become erroneously associated with his brother Pieter the Younger. Jan Brueghel was given the nickname because of his scenes with demons and hell scenes. An example is the Temptation of St. Anthony (Kunsthistorisches Museum, Vienna), which reprises a subject first explored by Hieronymus Bosch. In this demon-plagued scene the monsters are seen attacking the small saint in the corner of a large and dense forest landscape, rather than within the expanded panoramas of Patinir.
Jan Brueghel is believed to have produced his hell scenes for a newer, elite audience of learned and sophisticated collectors. To appeal to this erudite clientele he often populated the hell scenes with mythological rather than religious subjects, in particular the Vergilian scene of Aeneas in Hades, escorted by the Cumaean Sibyl. An example is Aeneas and the Sibyl in the Underworld (1619, Kunsthistorisches Museum, Vienna). Other mythological themes appearing in his hell scenes included the image of Juno visiting Hades and Orpheus in the Underworld from Ovid's Metamorphoses. An example of the latter is Orpheus in the Underworld (Palazzo Pitti). In these compositions brightly coloured monsters provide the 'recreational terror' of the later manifestations of Boschian design.
Brueghel's hell scenes were influential and Jacob van Swanenburg, one of Rembrandt's teachers, was inspired by them to create his own hell scenes.
Gallery paintings
Jan Brueghel the Elder and Frans Francken the Younger were the first artists to create paintings of art and curiosity collections in the 1620s. Gallery paintings depict large rooms in which many paintings and other precious items are displayed in elegant surroundings. These rooms were often referred to as Kunstkammern ("art rooms") or Wunderkammern ("wonder rooms"). The earliest works in this genre depicted art objects together with other items such as scientific instruments or peculiar natural specimens. Some gallery paintings include portraits of the owners or collectors of the art objects or artists at work. The paintings are heavy with symbolism and allegory and are a reflection of the intellectual preoccupations of the age, including the cultivation of personal virtue and the importance of connoisseurship. The genre became immediately quite popular and was followed by other artists such as Jan Brueghel the Younger, Cornelis de Baellieur, Hans Jordaens, David Teniers the Younger, Gillis van Tilborch and Hieronymus Janssens.
A famous example of a gallery painting by Jan Brueghel is The Archdukes Albert and Isabella Visiting a Collector's Cabinet (now referred to as The Archdukes Albert and Isabella Visiting the Collection of Pieter Roose) (c. 1621–1623, Walters Art Museum, Baltimore). The work is believed to be a collaboration between Jan Brueghel the Elder and Hieronymus Francken II. This gallery painting represents the early phase of the genre of collector's cabinets. During this early 'encyclopaedic' phase, the genre reflected the culture of curiosity of that time, when art works, scientific instruments, naturalia and artificialia were equally the object of study and admiration. As a result, the cabinets depicted in these compositions are populated by persons who appear to be as interested in discussing scientific instruments as in admiring paintings. Later the genre concentrated more on galleries solely containing works of art. The compositions depicted in The Archdukes Albert and Isabella Visiting a Collector's Cabinet are predominantly allegories of iconoclasm and the victory of painting (art) over ignorance. They are references to the iconoclasm of the Beeldenstorm that had raged in the Low Countries in the 16th century and the victory over the iconoclasts during the reign of the Archdukes Albert and Isabella who jointly ruled the Spanish Netherlands in the beginning of the 17th century. Jan Brueghel was responsible for the large vase of flowers, which is crowned by a large sunflower. This South American flower, which could grow very tall and would turn towards the sun, was first seen by Europeans in the mid-16th century. It had been illustrated as a New World wonder in botanical treatises, but Jan Brueghel was the first to include the flower in a painting and use it as a symbol of princely patronage in this composition. By turning toward Albert and Isabella (taking the position of the sun), the sunflower symbolises the way that the arts were able to grow and blossom in the light and warmth of princely patronage.
Singeries
Brueghel contributed to the development of the genre of the 'monkey scene', also called 'singerie' (a word derived from the French for 'monkey' and meaning a 'comical grimace, behaviour or trick'). Comical scenes with monkeys appearing in human attire and a human environment are a pictorial genre that was initiated in Flemish painting in the 16th century and further developed in the 17th century. Monkeys were regarded as shameless and impish creatures and excellent imitators of human behaviour. These depictions of monkeys enacting various human roles were a playful metaphor for all the folly in the world.
The Flemish engraver Pieter van der Borcht introduced the singerie as an independent theme around 1575 in a series of prints, which were strongly embedded in the artistic tradition of Pieter Bruegel the Elder. These prints were widely disseminated and the theme was then picked up by other Flemish artists. The first to do so was the Antwerp artist Frans Francken the Younger, who was quickly followed by Jan Brueghel the Elder and the Younger, Sebastiaen Vrancx and Jan van Kessel the Elder. Jan Brueghel the Elder's son-in-law David Teniers the Younger became the principal practitioner of the genre and developed it further with his younger brother Abraham Teniers. Later in the 17th century Nicolaes van Verendael started to paint these 'monkey scenes' as well.
An example of a singerie by Jan Brueghel is the Monkeys feasting, which dates from his early years as an artist (private collection, on long-term loan to the Rubenshuis, Antwerp). This painting on copper was probably one of the first examples of a singerie painting. Jan Brueghel likely drew his monkeys in the zoo of the Archdukes in Brussels. While the composition shows the monkeys engaged in all kinds of mischief, it includes a painting above the door jamb, a work from Rubens' studio called "Ceres and Pan". The representation of Ceres and Pan provides a contrast between that cultivated world and the wild world of the monkeys below.
References
Sources
Leopoldine van Hogendorp Prosperetti, 'Landscape and Philosophy in the Art of Jan Brueghel the Elder (1568–1625)', Ashgate Publishing, Ltd., 2009
Arianne Faber Kolb, Jan Brueghel the Elder: The Entry of the Animals into Noah’s Ark, Getty Publications, 2005
Larry Silver, Peasant Scenes and Landscapes: The Rise of Pictorial Genres in the Antwerp Art Market, University of Pennsylvania Press, 4 January 2012
Anne T. Woollett and Ariane van Suchtelen; with contributions by Tiarna Doherty, Mark Leonard, and Jørgen Wadum, Rubens and Brueghel: A Working Friendship, 2006
External links
1568 births
1625 deaths
Flemish Baroque painters
Flemish genre painters
Flemish history painters
Flemish landscape painters
Flemish still life painters
Bruegel family
Pieter Bruegel the Elder
Painters from Antwerp
Painters from Brussels
Artists from the Habsburg Netherlands
16th-century Flemish painters
17th-century Flemish painters | Jan Brueghel the Elder | [
"Engineering"
] | 8,847 | [] |
64,393 | https://en.wikipedia.org/wiki/Video%20game%20crash%20of%201983 | The video game crash of 1983 (known in Japan as the Atari shock) was a large-scale recession in the video game industry that occurred from 1983 to 1985 in the United States. The crash was attributed to several factors, including market saturation in the number of video game consoles and available games, many of which were of poor quality. Waning interest in console games in favor of personal computers also played a role. Home video game revenue peaked at around $3.2 billion in 1983, then fell to around $100 million by 1985 (a drop of almost 97 percent). The crash abruptly ended what is retrospectively considered the second generation of console video gaming in North America. To a lesser extent, the arcade video game market also weakened as the golden age of arcade video games came to an end.
Lasting about two years, the crash shook a then-booming video game industry and led to the bankruptcy of several companies producing home computers and video game consoles. Analysts of the time expressed doubts about the long-term viability of video game consoles and software.
The North American video game console industry recovered a few years later, mostly due to the widespread success of Nintendo's Western branding for its Famicom console, the Nintendo Entertainment System (NES), released in October 1985. The NES was designed to avoid the missteps that caused the 1983 crash and the stigma associated with video games at that time.
Causes and factors
Flooded console market
The Atari VCS (renamed the Atari 2600 in late 1982) was not the first home system with swappable game cartridges, but by 1980 it was the most popular second-generation console by a wide margin. Launched in 1977 just ahead of the collapse of the market for home Pong console clones, the Atari VCS experienced modest sales for its first few years. In 1980, Atari's licensed version of Space Invaders from Taito became the console's killer application; sales of the VCS quadrupled, and the game was the first title to sell more than a million copies. Spurred by the success of the Atari VCS, other consoles were introduced, both from Atari and other companies: Odyssey², Intellivision, ColecoVision, Atari 5200, and Vectrex. Notably, Coleco sold an add-on allowing Atari VCS games to be played on its ColecoVision, as well as bundling the console with a licensed home version of Nintendo's arcade hit Donkey Kong. In 1982, the ColecoVision held roughly 17% of the hardware market, compared to Atari VCS' 58%. This was the first real threat to Atari's dominance of the home console market.
Each new console had its own library of games produced exclusively by the console maker, while the Atari VCS also had a large selection of titles produced by third-party developers. In 1982, analysts marked trends of saturation, mentioning that the amount of new software coming in would only allow a few big hits, that retailers had devoted too much floor space to systems, and that price drops for home computers could result in an industry shakeup. Atari had a large inventory after significant portions of the 1982 orders were returned.
In addition, the rapid growth of the video game industry led to increased demand, which the manufacturers over-projected. In 1983, an analyst for Goldman Sachs stated the demand for video games was up 100% from the previous year, but the manufacturing output had increased by 175%, creating a significant surplus. Atari CEO Raymond Kassar recognized in 1982 that the industry's saturation point was imminent. However, Kassar expected this to occur when about half of American households had a video game console. The crash occurred when about 15 million machines had been sold, well short of Kassar's estimate. Michael Katz, the president of Atari's electronic division, stated that the console market was too saturated, as 30 million consoles had been sold by 1982 among the 35 million households with children between the ages of six and sixteen.
Loss of publishing control
Prior to 1979, there were no third-party developers, with console manufacturers like Atari publishing all the games for their respective platforms. This changed with the formation of Activision in 1979. Activision was founded by four former Atari video game programmers who left the company because they felt that Atari's developers should receive the same recognition and accolades (specifically in the form of sales-based royalties and public-facing credits) as the actors, directors, and musicians working for other subsidiaries of Warner Communications (Atari's parent company at the time). Already being quite familiar with the Atari VCS, the four programmers developed their own games and cartridge manufacturing processes. Atari quickly sued to block sales of Activision's products but failed to secure a restraining order, and they ultimately settled the case in 1982. While the settlement stipulated that Activision pay royalties to Atari, this case ultimately legitimized the viability of third-party game developers. Activision's games were as popular as Atari's, with Pitfall! (released in 1982) selling over 4 million units.
Prior to 1982, Activision was one of only a handful of third parties publishing games for the Atari VCS. By 1982, Activision's success emboldened numerous other competitors to penetrate the market. However, Activision's founder David Crane observed that several of these companies were supported by venture capitalists attempting to emulate the success of Activision. Without the experience and skill of Activision's team, these inexperienced competitors mostly created games of poor quality. Crane notably described these as "the worst games you can imagine". While Activision's success could be attributed to the team's existing familiarity with the Atari VCS, other publishers had no such advantage.
The rapid growth of the third-party game industry was easily illustrated by the number of vendors present at the semi-annual Consumer Electronics Show (CES). According to Crane, the number of third-party developers jumped from 3 to 30 between two consecutive events. At the Summer 1982 CES, there were 17 companies, including MCA Inc. and Fox Video Games, announcing a combined 90 new Atari games. By 1983, an estimated 100 companies were attempting to leverage the CES into a foothold in the market. AtariAge documented 158 different vendors that had developed for the Atari VCS. In June 1982 there were just 100 Atari games on the market; by December, the number had grown to over 400. Experts predicted a glut in 1983, with only 10% of games producing 75% of sales.
BYTE stated in December, "in 1982 few games broke new ground in either design or format ... If the public really likes an idea, it is milked for all its worth, and numerous clones of a different color soon crowd the shelves. That is, until the public stops buying or something better comes along. Companies who believe that microcomputer games are the hula hoop of the 1980s only want to play Quick Profit." Bill Kunkel said in January 1983 that companies had "licensed everything that moves, walks, crawls, or tunnels beneath the earth. You have to wonder how tenuous the connection will be between the game and the movie Marathon Man. What are you going to do, present a video game root canal?" By September 1983, the Phoenix stated that 2600 cartridges were "no longer a growth industry". Activision, Atari, and Mattel all had experienced programmers, but many of the new companies rushing to join the market did not have the expertise or talent to create quality games. Titles such as the Kaboom!-like Lost Luggage, rock band tie-in Journey Escape, and plate-spinning game Dishaster, were examples of games made in the hopes of taking advantage of the video-game boom, but later proved unsuccessful with retailers and potential customers.
The flood of new games was released into a limited competitive space. According to Activision's Jim Levy, they had projected that the total cartridge market in 1982 would be around 60 million, anticipating Activision would be able to secure between 12% and 15% of that market for their production numbers. However, with at least 50 different companies in the new marketspace, each having produced between one and two million cartridges, along with Atari's own estimated 60 million cartridges in 1982, production ran at over 200% of the actual demand for cartridges in 1982, which contributed to the stockpiling of unsold inventory during the crash.
Competition from home computers
Inexpensive home computers had been first introduced in 1977. By 1979, Atari, Inc. unveiled the Atari 400 and 800 computers, built around a chipset originally meant for use in a game console, and which retailed for the same price as their respective names. In 1981, IBM introduced the first IBM Personal Computer with a $1,565 base price, while Sinclair Research introduced its low-end ZX81 microcomputer for £70. By 1982, new desktop computer designs were commonly providing better color graphics and sound than game consoles, and personal computer sales were booming. The TI-99/4A and the Atari 400 were both at $349, the TRS-80 Color Computer sold at $379, and Commodore International had just reduced the price of the VIC-20 to $199 and the Commodore 64 to $499.
Because computers generally had more memory and faster processors than a console, they permitted more sophisticated games. A 1984 compendium of reviews of Atari 8-bit software used 198 pages for games compared to 167 for all other software types. Home computers could also be used for tasks such as word processing and home accounting. Games were easier to distribute, since they could be sold on floppy disks or cassette tapes instead of ROM cartridges. This opened the field to a cottage industry of third-party software developers. Writeable storage media allowed players to save games in progress, a useful feature for increasingly complex games which was not available on the consoles of the era.
In 1982, a price war that began between Commodore and Texas Instruments led to home computers becoming as inexpensive as video-game consoles; after Commodore cut the retail price of the C64 to $300 in June 1983, some stores began selling it for as little as $199. Dan Gutman, who founded Video Games Player magazine in 1982, recalled in a 1987 article that in 1983 "People asked themselves, 'Why should I buy a video game system when I can buy a computer that will play games and do so much more?'" The Boston Phoenix stated in September 1983 about the cancellation of the Intellivision III, "Who was going to pay $200-plus for a machine that could only play games?" Commodore explicitly targeted video game players. Spokesman William Shatner asked in VIC-20 commercials "Why buy just a video game from Atari or Intellivision?", stating that "unlike games, it has a real computer keyboard" yet "plays great games too". Commodore's ownership of chip fabricator MOS Technology allowed manufacture of integrated circuits in-house, so the VIC-20 and C64 sold for much lower prices than competing home computers. In addition, both Commodore computers were designed to utilize the ubiquitous Atari controllers so they could tap into the existing controller market.
"I've been in retailing 30 years and I have never seen any category of goods get on a self-destruct pattern like this", a Service Merchandise executive told The New York Times in June 1983. The price war was so severe that in September Coleco CEO Arnold Greenberg welcomed rumors of an IBM 'Peanut' home computer because although IBM was a competitor, it "is a company that knows how to make money". "I look back a year or two in the videogame field, or the home-computer field", Greenberg added, "how much better everyone was, when most people were making money, rather than very few". Companies reduced production in the middle of the year because of weak demand even as prices remained low, causing shortages as sales suddenly rose during the Christmas season; only the Commodore 64 was widely available, with an estimated more than 500,000 computers sold during Christmas. The 99/4A was such a disaster for TI, that the company's stock immediately rose by 25% after the company discontinued it and exited the home-computer market in late 1983. JCPenney announced in December 1983 that it would soon no longer sell home computers, because of the combination of low supply and low prices. Radio Shack avoided drastic price cuts for its home computers and remained profitable in 1983.
By that year, Gutman wrote, "Video games were officially dead and computers were hot". He renamed his magazine Computer Games in October 1983, but "I noticed that the word games became a dirty word in the press. We started replacing it with simulations as often as possible". Soon "The computer slump began ... Suddenly, everyone was saying that the home computer was a fad, just another hula hoop". Computer Games published its last issue in late 1984. In 1988, Computer Gaming World founder Russell Sipe noted that "the arcade game crash of 1984 took down the majority of the computer game magazines with it." He stated that, by "the winter of 1984, only a few computer game magazines remained", and by mid-1985, Computer Gaming World "was the only 4-color computer game magazine left".
Immediate effects
With so many new games released in 1982 flooding the market, most stores had insufficient space to carry new games and consoles. As stores tried to return the surplus games to the new publishers, the publishers had neither new products nor cash to issue refunds to the retailers. Many publishers, including Games by Apollo and U.S. Games, quickly folded. Unable to return the unsold games to defunct publishers, stores marked down the titles and placed them in discount bins and sale tables. Recently released games which initially sold for US$35 (equivalent to $99 in 2021) were in bins for $5 ($14 in 2021).
Third-party sales drew market share away from the console manufacturers. Atari's share of the cartridge-game market fell from 75% in 1981 to less than 40% in 1982, which negatively affected its finances. The bargain sales of poor-quality titles drew further sales away from the more successful third-party companies like Activision, as poorly informed consumers were drawn by price to purchase the bargain titles rather than quality ones. By June 1983, the market for the more expensive games had shrunk dramatically and was replaced by a new market of rushed-to-market, low-budget games. Crane said that "those awful games flooded the market at huge discounts, and ruined the video game business".
A massive industry shakeout resulted. Magnavox abandoned the video game business entirely. Imagic withdrew its IPO the day before its stock was to go public; the company later collapsed. Activision had to downsize across 1984 and 1985 due to loss of revenue, and to stay competitive and maintain financial security, began development of games for the personal computer. Within a few years, Activision no longer produced cartridge-based games and focused solely on personal computer games.
Atari was one of the companies most affected by the crash. As a company, its revenue dropped significantly due to dramatically lower sales and the cost of returned stock. By mid-1983, the company had lost , was forced to lay off 30% of its 10,000 employees, and moved all manufacturing to Hong Kong and Taiwan. Unsold Pac-Man, E.T. the Extra-Terrestrial, and other 1982 and 1983 games and consoles started to fill their warehouses. In September 1983, Atari discreetly buried much of this excess stock in a landfill near Alamogordo, New Mexico, though Atari did not comment about the activity at the time. Misinformation related to sales of Pac-Man and E.T. led to the urban legend of the Atari video game burial, that millions of unsold cartridges were buried there. Gaming historians received permission to dig up the landfill as part of a documentary in 2014, during which former Atari executive James Heller, who had overseen the original burial, clarified that only about 728,000 cartridges had been buried in 1982, backed by estimates made during the excavation, disproving the scale of the urban legend. Atari's burial remains an iconic representation of the 1983 video game crash. By the end of 1983, Atari had over in losses, leading Warner Communications to sell Atari's consumer products division in July 1984 to Jack Tramiel, who had recently departed Commodore International. Tramiel's new company took the name Atari Corporation, and it directed its efforts into developing its new personal computer line, the Atari ST, over the console business.
Lack of confidence in the video game sector caused many retailers to stop selling video game consoles or reduced their stock significantly, reserving floor or shelf space for other products. Retailers established to exclusively sell video games folded, which impacted sales of personal computer games.
The full effects of the industry crash were not felt until 1985. Despite Atari's claim of 1 million in worldwide sales of its 2600 game system that year, recovery was slow. The sales of home video games had dropped from $3.2 billion in 1982 to $100 million in 1985. Analysts doubted the long-term viability of the video game industry, and, according to Electronic Arts' Trip Hawkins, it had been very difficult to convince retailers to carry video games due to the stigma carried by the fall of Atari until 1985.
Two major events of 1985 helped to revitalize the video game industry. One factor came from increased sales of personal computers from Commodore and Tandy, which helped to maintain revenue for game developers like Activision and Electronic Arts, keeping the video game market alive. The other was the initial limited release of the Nintendo Entertainment System in North America in late 1985, followed by the full national release early the following year. Following 1986, the industry began recovering, and by 1988, annual sales in the industry exceeded $2.3 billion, with 70% of the market dominated by Nintendo. In 1986, Nintendo president Hiroshi Yamauchi noted that "Atari collapsed because they gave too much freedom to third-party developers and the market was swamped with rubbish games". In response, Nintendo limited the number of titles that third-party developers could release for their system each year, and promoted its "Seal of Quality", which it allowed to be used on games and peripherals by publishers that met Nintendo's quality standards.
The end of the crash allowed Commodore to raise the price of the C64 for the first time upon the June 1986 introduction of the Commodore 64c—a Commodore 64 redesigned for lower cost of manufacture—which Compute! cited as the end of the home-computer price war, one of the causes of the crash.
Long-term effects
The crash in 1983 had the largest impact in the United States. It rippled through all sectors of the global video game market, though the beleaguered American companies' sales of video games still remained strong in Japan, Europe, and Canada. It took several years for the U.S. industry to recover. The estimated worldwide market in 1982, including arcade, console, and computer games, dropped to by 1985. There was also a significant shift in the home video game market, away from consoles to personal computer software, between 1983 and 1985.
1984 is when some of the longer-term effects began to take a toll on the video game console industry. Companies like Magnavox decided to pull out of the video game console business entirely. The general consensus was that video games had been just a fad that had come and gone. But outside of North America the video game industry was doing very well: home consoles were growing in popularity in Japan while home computers were surging across Europe.
United States sales fell from $3 billion to around $100 million in 1985. During the holiday season of 1985, Hiroshi Yamauchi decided to approach small retailers in New York about putting Nintendo's products in their stores. Minoru Arakawa offered a money-back guarantee from Nintendo: it would pay retailers back for any stock that was left unsold. In total Nintendo sold 50,000 units, about half of the units it shipped to the US.
Japanese domination
The U.S. video game crash had two long-lasting results. The first result was that dominance in the home console market shifted from the United States to Japan. The crash did not directly affect the financial viability of the video game market in Japan, but it still came as a surprise there and created repercussions that changed that industry, and thus became known as the "Atari shock".
Prior to the crash, Jonathan Greenberg of Forbes had predicted in early 1981 that Japanese companies would eventually dominate the North American video game industry, as American video game companies were increasingly licensing products from Japanese companies, who in turn were opening up North American branches. By 1982–1983, Japanese manufacturers had captured a large share of the North American arcade market, which Gene Lipkin of Data East USA partly attributed to Japanese companies having more finances to invest in new ideas.
As the crash was happening in the United States, Japan's game industry started to shift its attention from arcade games to home consoles. Within one month in 1983, two new home consoles were released in Japan: the Nintendo Family Computer (Famicom) and Sega's SG-1000 (which was later supplanted by the Master System), heralding what is retrospectively considered the third generation of home consoles. These two consoles were very popular, buoyed by an economic bubble in Japan. The units readily outsold Atari and Mattel's existing systems, and with both Atari and Mattel focusing on recovering domestic sales, the Japanese consoles effectively went uncontested over the next few years. By 1986, three years after its introduction, 6.5 million Japanese homes – 19% of the population – owned a Famicom, and Nintendo began exporting it to the U.S., where the home console industry was only just recovering from the crash.
The impact on the retail sector of the crash was the most formidable barrier that confronted Nintendo as it tried to market the Famicom in the United States. A planned deal with Atari to distribute the Famicom in North America fell apart in the wake of the crash, resulting in Nintendo handling the international release themselves two years later. Additionally, retailer opposition to video games was directly responsible for causing Nintendo to brand its product the Nintendo Entertainment System (NES) rather than a "video game system", and using terms such as "control deck" and "Game Pak", as well as producing a toy robot called R.O.B. to convince toy retailers to allow it in their stores. Furthermore, the design for the NES used a front-loading cartridge slot to mimic how video cassette recorders, popular at that time, were loaded, further pulling the NES away from previous console designs.
By the time the U.S. video game market recovered in the late 1980s, the NES was by far the dominant console in the United States, leaving only a fraction of the market to Atari. By 1989, home video game sales in the United States had reached $5 billion, surpassing the 1982 peak of $3 billion during the previous generation. A large majority of the market was controlled by Nintendo; it sold more than 35 million units in the United States, exceeding the sales of other consoles and personal computers by a considerable margin. New Japanese competitors entered the market to challenge Nintendo's success in the United States with NEC's TurboGrafx-16 and the Sega Genesis, both released in the U.S. in 1989. While the TurboGrafx underwhelmed in the market, the Genesis' release set the stage for a major rivalry between Sega and Nintendo in the early 1990s in the United States video game market.
Impact on third-party software development
A second, highly visible result of the crash was the advancement of measures to control third-party development of software. Using secrecy to combat industrial espionage had failed to stop rival companies from reverse engineering the Mattel and Atari systems and hiring away their trained game programmers. While Mattel and Coleco implemented lockout measures to control third-party development (the ColecoVision BIOS checked for a copyright string on power-up), the Atari 2600 was completely unprotected and once information on its hardware became available, little prevented anyone from making games for the system. Nintendo thus instituted a strict licensing policy for the NES that included equipping the cartridge and console with lockout chips, which were region-specific, and had to match in order for a game to work. In addition to preventing the use of unlicensed games, it also was designed to combat software piracy, rarely a problem in North America or Western Europe, but rampant in East Asia. The concepts of such a control system remain in use on every major video game console produced today, even with fewer cartridge-based consoles on the market than in the 8/16-bit era. Replacing the security chips in most modern consoles are specially encoded optical discs that cannot be copied by most users and can only be read by a particular console under normal circumstances. Accolade achieved a technical victory in one court case against Sega, challenging this control, even though it ultimately yielded and signed the Sega licensing agreement. Several publishers, notably Tengen (Atari Games), Color Dreams, and Camerica, challenged Nintendo's control system during the 8-bit era by producing unlicensed NES games.
Initially, Nintendo was the only developer for the Famicom. Under pressure from Namco and Hudson Soft, it opened the Famicom to third-party development, but instituted a license fee of 30% per game cartridge for these third parties to develop games, a system used by console manufacturers to this day. Nintendo maintained strict manufacturing control and required payment in full before manufacturing. Cartridges could not be returned to Nintendo, so publishers assumed all the financial risk of selling all units ordered. Nintendo limited most third-party publishers to only five games per year on its systems (some companies tried to get around this by creating additional company labels like Konami's Ultra Games label). Nintendo ultimately dropped this rule by 1993, after the release of the successor Super Nintendo Entertainment System. Nintendo's strong-armed oversight of Famicom cartridge manufacturing led to both legitimate and bootleg unlicensed cartridges being made in the Asian regions. Outside of Japan, Nintendo placed its Nintendo Seal of Quality on all licensed games released for the system to try to promote authenticity and discourage bootleg sales, but failed to gain significant traction in stalling these sales.
As Nintendo prepared to release the Famicom in the United States, it wanted to avoid both the bootleg problem it had in Asia as well as the mistakes that led up to the 1983 crash. The company created the proprietary 10NES system, a lockout chip which was designed to prevent cartridges made without the chip from being played on the NES. The 10NES lockout was not perfect, as later in the NES' lifecycle methods were found to bypass it, but it did sufficiently allow Nintendo to strengthen its publishing control to avoid the mistakes Atari had made and initially prevent bootleg cartridges in the Western markets. These strict licensing measures backfired somewhat after Nintendo was accused of monopolistic behavior. In the long run, this pushed many western third-party publishers such as Electronic Arts away from Nintendo consoles and supported competing consoles such as the Sega Genesis or Sony PlayStation. Most of the Nintendo platform-control measures were adopted by later console manufacturers such as Sega, Sony, and Microsoft, although not as stringently.
Computer game growth
With waning console interest in the United States, the computer game market was able to gain a strong foothold in 1983 and beyond. Developers that had been primarily in the console game space, like Activision, turned their attention to developing computer game titles to stay viable. Newer companies were also founded to capture the growing interest in the computer games space with novel elements that borrowed from console games, as well as taking advantage of low-cost dial-up modems that allowed for multiplayer capabilities. The computer game market grew between 1983 and 1984, overtaking the console market, but overall video game revenue had declined significantly due to the considerable decline of the console market as well as, to an extent, the arcade market. The home computer industry, however, experienced a downturn in mid-1984, with global computer game sales declining somewhat in 1985.
Microcomputers dominated the European market throughout the 1980s and with domestic production for those formats thriving over the same period, there was minimal trans-Atlantic ripple from American game production and trends. Partly as a distant knock-on effect of the crash and partly due to the continuing quality of homegrown computer and microcomputer games, consoles did not achieve a dominant position in some European markets until the early 1990s. In the United Kingdom, there was a short-lived home console market between 1980 and 1982, but the 1983 crash led to the decline of consoles in the UK, which was offset by the rise of LCD games in 1983 and then the rise of computer games in 1984. It was not until the late 1980s with the arrival of the Master System and NES that the home console market recovered in the UK. Computer games remained the dominant sector of the UK home video game market up until they were surpassed by Sega and Nintendo consoles in 1991.
References
Works cited
Further reading
DeMaria, Rusel & Wilson, Johnny L. (2003). High Score!: The Illustrated History of Electronic Games (2nd ed.). New York: McGraw-Hill/Osborne. .
Gallagher, Scott & Park, Seung Ho (2002). "Innovation and Competition in Standard-Based Industries: A Historical Analysis of the U.S. Home Video Game Market". IEEE Transactions on Engineering Management, vol. 49, no. 1, February 2002, pp.67–82. doi: 10.1109/17.985749
External links
The Dot Eaters.com: "Chronicle of the Great Videogame Crash"
Twin Galaxies Official Video Game & Pinball Book of World Records: "The Golden Age of Video Game Arcades" — story within the 1998 book.
Intellivisionlives.com: Official Intellivision History — by the original programmers.
The History of Computer Games: The Atari Years — by Chris Crawford, a game designer at Atari during the crash.
Pctimeline.info: Chronology of the Commodore 64 Computer— Events & Game release dates (1982–1990).
1983 in video gaming
1980s in video gaming
History of video games
Business failures
Economic bubbles
1983 in North America
1983 in economic history
Second-generation video game consoles | Video game crash of 1983 | [
"Technology"
] | 6,306 | [
"History of video games",
"History of computing"
] |
64,474 | https://en.wikipedia.org/wiki/Concatenation | In formal language theory and computer programming, string concatenation is the operation of joining character strings end-to-end. For example, the concatenation of "snow" and "ball" is "snowball". In certain formalizations of concatenation theory, also called string theory, string concatenation is a primitive notion.
Syntax
In many programming languages, string concatenation is a binary infix operator, and in some it is written without an operator. This is implemented in different ways:
Overloading the plus sign +. Example from C#: "Hello, " + "World" has the value "Hello, World".
Dedicated operator, such as . in PHP, & in Visual Basic and || in SQL. This has the advantage over reusing + that it allows implicit type conversion to string.
String literal concatenation, which means that adjacent strings are concatenated without any operator. Example from C: "Hello, " "World" has the value "Hello, World".
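Python is not one of the languages named above, but a brief sketch in it shows the same flavours of syntax side by side; the operator.concat call stands in for a dedicated, named concatenation operation.

```python
import operator

# Overloaded operator: '+' joins two strings, as in the C# example above.
greeting = "Hello, " + "World"                # "Hello, World"

# Juxtaposition: adjacent string literals are merged, as in the C example.
banner = "Hello, " "World"                    # "Hello, World"

# A named concatenation operation, comparable to a dedicated operator.
joined = operator.concat("Hello, ", "World")  # "Hello, World"

assert greeting == banner == joined == "Hello, World"
```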
Implementation
In programming, string concatenation generally occurs at run time, as string values are typically not known until run time. However, in the case of string literals, the values are known at compile time, and thus string concatenation can be done at compile time, either via string literal concatenation or via constant folding, a potential run-time optimization.
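To make the compile-time/run-time distinction concrete, here is a small sketch; the constant-folding behaviour shown is specific to CPython, and its exact bytecode output varies between versions, so treat the expected results as assumptions rather than guarantees.

```python
import dis

def folded():
    # Both operands are literals known at compile time, so CPython's
    # compiler typically folds this into the single constant "Hello, World".
    return "Hello, " + "World"

def runtime(a, b):
    # The operands are unknown until run time, so the concatenation
    # has to happen here, when the function is called.
    return a + b

dis.dis(folded)   # expected: one LOAD_CONST of 'Hello, World', no addition
dis.dis(runtime)  # expected: a BINARY_ADD / BINARY_OP performing the join
```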
Concatenation of sets of strings
In formal language theory and pattern matching (including regular expressions), the concatenation operation on strings is generalised to an operation on sets of strings as follows:
For two sets of strings S1 and S2, the concatenation S1S2 consists of all strings of the form vw where v is a string from S1 and w is a string from S2, or formally S1S2 = {vw : v ∈ S1, w ∈ S2}. Many authors also use concatenation of a string set and a single string, and vice versa, which are defined similarly by S1w = {vw : v ∈ S1} and vS2 = {vw : w ∈ S2}. In these definitions, the string vw is the ordinary concatenation of strings v and w as defined in the introductory section.
For example, if F = {a, b, c, d, e, f, g, h} and R = {1, 2, 3, 4, 5, 6, 7, 8}, then FR denotes the set of all chess board coordinates in algebraic notation, while eR denotes the set of all coordinates of the kings' file.
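A short sketch of this set operation in Python, using the chess-coordinate example; the helper name concat_sets is purely illustrative.

```python
def concat_sets(s1, s2):
    """Concatenation of two sets of strings: { vw : v in S1, w in S2 }."""
    return {v + w for v in s1 for w in s2}

F = set("abcdefgh")                 # the files of a chess board
R = {str(n) for n in range(1, 9)}   # the ranks 1 through 8

FR = concat_sets(F, R)              # all 64 coordinates in algebraic notation
eR = concat_sets({"e"}, R)          # the kings' file, e1 through e8

assert len(FR) == 64
assert eR == {"e1", "e2", "e3", "e4", "e5", "e6", "e7", "e8"}
```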
In this context, sets of strings are often referred to as formal languages. The concatenation operator is usually expressed as simple juxtaposition (as with multiplication).
Algebraic properties
The strings over an alphabet, with the concatenation operation, form an associative algebraic structure with identity element the null string—a free monoid.
Sets of strings with concatenation and alternation form a semiring, with concatenation (*) distributing over alternation (+); 0 is the empty set and 1 the set consisting of just the null string.
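These algebraic identities are easy to spot-check; the following is a minimal property check, not a proof, with cat standing in for set concatenation.

```python
# Strings with concatenation form a monoid: associative, with "" as identity.
a, b, c = "foo", "bar", "baz"
assert (a + b) + c == a + (b + c)   # associativity
assert "" + a == a + "" == a        # the null string is the identity element

# Sets of strings: concatenation distributes over alternation (union).
def cat(s1, s2):
    return {v + w for v in s1 for w in s2}

S, T, U = {"x", "y"}, {"0"}, {"1", "2"}
assert cat(S, T | U) == cat(S, T) | cat(S, U)   # distributivity
assert cat(S, set()) == set()                   # the empty set acts like 0
assert cat(S, {""}) == S                        # {""} acts like 1
```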
Applications
Audio and telephony
In programming for telephony, concatenation is used to provide dynamic audio feedback to a user. For example, in a "time of day" speaking clock, concatenation is used to give the correct time by playing the appropriate recordings concatenated together. For example: "at the tone, the time will be", "eight", "thirty", "five", "and", "twenty", "five", "seconds".
The recordings themselves exist separately, but playing them one after the other provides a grammatically correct sentence to the listener.
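A toy sketch of this technique: prerecorded fragments are chosen and played back in order, represented here simply as an ordered list of invented file names.

```python
def number_clips(n):
    """Fragments needed to speak a number below 60, e.g. 35 -> 'thirty', 'five'."""
    if n < 20 or n % 10 == 0:
        return [f"{n}.wav"]                        # a single recording exists
    return [f"{n - n % 10}.wav", f"{n % 10}.wav"]  # tens part, then units part

def time_announcement(hour, minute, second):
    """Ordered playlist of clips for a speaking clock announcement."""
    clips = ["at_the_tone_the_time_will_be.wav"]
    clips += number_clips(hour) + number_clips(minute)
    clips += ["and.wav"] + number_clips(second) + ["seconds.wav"]
    return clips

# "eight", "thirty", "five", "and", "twenty", "five", "seconds" -- as above
print(time_announcement(8, 35, 25))
```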
This technique is also used in number change announcements, voice mail systems, or most telephony applications that provide dynamic feedback to the caller (e.g. moviefone, tellme, and others).
Programming for any kind of computerised public address system can also employ concatenation for dynamic public announcements (for example, flights in an airport). The system would archive recorded speech of numbers, routes or airlines, destinations, times, etc. and play them back in a specific sequence to produce a grammatically correct sentence that is announced throughout the facility.
Database theory
One of the principles of relational database design is that the fields of data tables should reflect a single characteristic of the table's subject, which means that they should not contain concatenated strings. When concatenation is desired in a report, it should be provided at the time of running the report. For example, to display the physical address of a certain customer, the data might include building number, street name, building sub-unit number, city name, state/province name, postal code, and country name, e.g., "123 Fake St Apt 4, Boulder, CO 80302, USA", which combines seven fields. However, the customers data table should not use one field to store that concatenated string; rather, the concatenation of the seven fields should happen upon running the report. The reason for such principles is that without them, the entry and updating of large volumes of data becomes error-prone and labor-intensive. Separately entering the city, state, ZIP code, and nation allows data-entry validation (such as detecting an invalid state abbreviation). Then those separate items can be used for sorting or indexing the records, such as all with "Boulder" as the city name.
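A brief sketch of the report-time concatenation described above; the field names and sample values are invented for illustration.

```python
customer = {
    "building_number": "123",
    "street_name": "Fake St",
    "sub_unit": "Apt 4",
    "city": "Boulder",
    "state": "CO",
    "postal_code": "80302",
    "country": "USA",
}

def mailing_address(row):
    """Concatenate the separately stored fields only when producing a report."""
    return ("{building_number} {street_name} {sub_unit}, "
            "{city}, {state} {postal_code}, {country}").format(**row)

print(mailing_address(customer))  # 123 Fake St Apt 4, Boulder, CO 80302, USA
```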
Recreational mathematics
In recreational mathematics, many problems concern the properties of numbers under concatenation of their numerals in some base. Examples include home primes (primes obtained by repeatedly factoring the increasing concatenation of prime factors of a given number), Smarandache–Wellin numbers (the concatenations of the first prime numbers), and the Champernowne and Copeland–Erdős constants (the real numbers formed by the decimal representations of the positive integers and the prime numbers, respectively).
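As an illustration, a small sketch that builds Smarandache–Wellin numbers and the leading digits of the Copeland–Erdős constant by concatenating prime numerals in base 10; the helper names are invented.

```python
def first_primes(count):
    """Return the first `count` primes by simple trial division."""
    found = []
    n = 2
    while len(found) < count:
        if all(n % p for p in found):
            found.append(n)
        n += 1
    return found

def smarandache_wellin(k):
    """Concatenate the decimal numerals of the first k primes: 2, 23, 235, ..."""
    return int("".join(str(p) for p in first_primes(k)))

print([smarandache_wellin(k) for k in range(1, 6)])
# [2, 23, 235, 2357, 235711]

# The Copeland–Erdős constant begins with the same concatenation after "0."
print("0." + "".join(str(p) for p in first_primes(10)))
# 0.2357111317192329
```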
See also
Rope (data structure)
References
Citations
Sources
Formal languages
Operators (programming)
String (computer science) | Concatenation | [
"Mathematics",
"Technology"
] | 1,200 | [
"Sequences and series",
"String (computer science)",
"Mathematical structures",
"Formal languages",
"Mathematical logic",
"Computer science"
] |
64,487 | https://en.wikipedia.org/wiki/Tour%20Montparnasse | Tour Maine-Montparnasse (Maine-Montparnasse Tower), also commonly named Tour Montparnasse, is a office skyscraper in the Montparnasse area of Paris, France. Constructed from 1969 to 1973, it was the tallest skyscraper in France until 2011, when it was surpassed by the Tour First in the La Défense business district west of Paris's city limits. It remains the tallest building in Paris proper and the third tallest in France, behind Tour First and Tour Hekla. , it is the 53rd-tallest building in Europe.
The tower was designed by architects Eugène Beaudouin, Urbain Cassan, and Louis Hoym de Marien and built by Campenon Bernard. On 21 September 2017, Nouvelle AOM won a competition to redesign the building's façade.
Description
Built on top of the Montparnasse–Bienvenüe Paris Métro station, the building has 59 floors.
The 56th floor, from the ground, is home to Paris Montparnasse, an observation deck owned by Magnicity, a French company which also operates the Berlin TV Tower in Berlin and 360 CHICAGO at the former John Hancock Center in Chicago. Visitors to the observation deck can also visit the scenic rooftop terrace or make reservations for the 56th-floor restaurant called Ciel de Paris.
The view covers a radius of ; aircraft can be seen taking off from Orly Airport.
The guard rail, to which various antennae are attached, can be pneumatically lowered.
History
The project
In 1934, the old Montparnasse station, located on the edge of the similarly named boulevard, opposite the Rue de Rennes, appeared ill-suited to traffic. The city of Paris planned to reorganize the district and build a new station. But the project, entrusted to Raoul Dautry (who would give his name to the square of the tower), met strong opposition and was cancelled.
In 1956, on the occasion of the adoption of the new master plan for the Paris traffic plan, the Société d'économie mixte pour l'Aménagement du secteur Maine Montparnasse (SEMMAM) was created, as well as the l'Agence pour l'Opération Maine Montparnasse (AOM). Their mission was to redevelop the neighbourhood, which required razing many streets, often dilapidated and unsanitary. The site then occupied up to .
In 1958, the first studies of the tower were well launched, but the project was strongly criticized because of the height of the building. A controversy ensued, led by the Minister of Public Works Edgard Pisani, who obtained the support of André Malraux, then Minister of Culture under General de Gaulle, which led to slowdowns in the project.
However, the reconstruction of the Montparnasse station a few hundred metres south of the old one and the destruction of the Gare du Maine, which was included in the real estate project of the AOM, a joint agency which brought together the architects Urbain Cassan, Eugène Beaudouin and Louis de Hoÿm de Marien, were carried out from June 1966 to the spring of 1969 with the assistance of the architect Jean Saubot.
In 1968, André Malraux granted the building permit for the Tower to the AOM and work began that same year. The project was spearheaded by the American real estate developer Wylie Tuttle, who enlisted a consortium of 17 French insurance companies and seven banks in the $140 million multiple-building project, but who later distanced himself from it; only his 2002 obituary revealed that the building had been his original "brainchild".
In 1969, the decision to build a shopping centre was finally made, and Georges Pompidou, then President of the Republic, wanted to provide the capital with modern infrastructure. Despite a major controversy, the construction of the tower was started.
For geographer Anne Clerval, this construction symbolizes the service economy of Paris in the 1970s resulting from deindustrialization policies which, from the 1960s, favoured "bypassing by space the most working class strongholds at the time".
Construction
The Tour Montparnasse was built between 1969 and 1973 on the site of the old Montparnasse station. The first stone was laid in 1970 and the inauguration took place in 1973.
The foundations of the tower are made up of 56 reinforced concrete pillars sinking underground. For urban planning reasons, the tower had to be built just above a Metro line; and to avoid using the same support and weakening it, the Metro structures were protected by a reinforced concrete shield. Long horizontal beams were installed in order to free up the space needed in the basement to fit out the tracks for trains.
Occupation
The tower is mainly occupied by offices. Various companies and organizations have settled in the tower:
The International Union of Architects, Axa and MMA insurers, the mining and metallurgy company Eramet, Al Jazeera
Political parties and politicians have had campaign offices in the tower, including François Mitterrand in 1974, the RPR in the late 1970s, Emmanuel Macron's La République En Marche! in 2016, and Benoît Hamon since 2018
Previously Tour Maine-Montparnasse housed the executive management of Accor.
The 56th floor, with its terrace, bars and restaurant, has been used for private or public events. During the 1980s and 1990s, the live National Lottery draw was broadcast on TF1 from the 56th floor.
Climbing the tower
French urban climber Alain Robert scaled the building's exterior glass and steel wall to the top twice, in 1995 and in 2015, both times using no equipment or safety devices. The feat was also undertaken by Polish climber Marcin Banot in 2020 and 2023.
Criticism
The tower's simple architecture, large proportions and monolithic appearance have been often criticized by Parisians for being out of place in Paris's landscape. As a result, two years after its completion the construction of buildings over seven storeys tall in the city centre was banned in Paris. This ban was lifted in 2015.
The design of the tower predates architectural trends of more modern skyscrapers today that are often designed to provide a window for every office. Only the offices around the perimeter of each floor of Tour Montparnasse have windows.
It is said as a joke among Parisians that the tower's observation deck enjoys the most beautiful view in all of Paris, because it is the only place from which the tower cannot be seen.
A 2008 poll of editors on Virtualtourist voted the building the second-ugliest building in the world, behind Boston City Hall in the United States.
Asbestos contamination
In 2005, studies showed that the tower contained asbestos material. When inhaled, for instance during repairs, asbestos is a carcinogen. Monitoring revealed that legal limits of fibres per litre were surpassed and, on at least one occasion, reached 20 times the legal limit. Due to health and legal concerns, some tenants abandoned their offices in the building.
Removal of the asbestos was originally expected to take three years. After a nearly three-year delay, removal began in 2009 alongside regular operation of the building. In 2012, it was reported the tower was 90% free of asbestos.
See also
List of tallest buildings and structures in the Paris region
References
External links
Photos of Tour Montparnasse
Tour Montparnasse
Pictures and info
Buildings and structures in the 15th arrondissement of Paris
Office buildings in Paris
Montparnasse
Office buildings completed in 1972
Montparnasse
Tourist attractions in Paris
20th-century architecture in France | Tour Montparnasse | [
"Engineering"
] | 1,541 | [
"Architectural controversies",
"Architecture"
] |
64,489 | https://en.wikipedia.org/wiki/Purchasing%20power%20parity | Purchasing power parity (PPP) is a measure of the price of specific goods in different countries and is used to compare the absolute purchasing power of the countries' currencies. PPP is effectively the ratio of the price of a market basket at one location divided by the price
of the basket of goods at a different location. The PPP inflation and exchange rate may differ from the market exchange rate because of tariffs, and other transaction costs.
The purchasing power parity indicator can be used to compare economies regarding their gross domestic product (GDP), labour productivity and actual individual consumption, and in some cases to analyse price convergence and to compare the cost of living between places. The calculation of the PPP, according to the OECD, is made through a basket of goods that contains a "final product list [that] covers around 3,000 consumer goods and services, 30 occupations in government, 200 types of equipment goods and about 15 construction projects".
Concept
Purchasing power parity is an economic term for measuring prices at different locations. It is based on the law of one price, which says that, if there are no transaction costs nor trade barriers for a particular good, then the price for that good should be the same at every location. Ideally, a computer in New York and in Hong Kong should have the same price. If its price is 500 US dollars in New York and the same computer costs 2,000 HK dollars in Hong Kong, PPP theory says the exchange rate should be 4 HK dollars for every 1 US dollar.
Poverty, tariffs, transportation, and other frictions prevent the trading and purchasing of various goods, so measuring a single good can cause a large error. The PPP term accounts for this by using a basket of goods, that is, many goods with different quantities. PPP then computes an inflation and exchange rate as the ratio of the price of the basket in one location to the price of the basket in the other location. For example, if a basket consisting of 1 computer, 1 ton of rice, and half a ton of steel was 1000 US dollars in New York and the same goods cost 6000 HK dollars in Hong Kong, the PPP exchange rate would be 6 HK dollars for every 1 US dollar.
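A small arithmetic sketch of the basket calculation just described; the per-item prices are hypothetical, chosen so that the basket totals match the US$1,000 and HK$6,000 figures in the example.

```python
# Basket: 1 computer, 1 ton of rice, and half a ton of steel.
basket = {"computer": 1.0, "rice_ton": 1.0, "steel_ton": 0.5}

# Hypothetical unit prices at each location, in local currency.
prices_new_york_usd = {"computer": 500, "rice_ton": 350, "steel_ton": 300}
prices_hong_kong_hkd = {"computer": 2000, "rice_ton": 2500, "steel_ton": 3000}

def basket_cost(prices, quantities):
    return sum(prices[item] * qty for item, qty in quantities.items())

ny_cost = basket_cost(prices_new_york_usd, basket)    # 1000 US dollars
hk_cost = basket_cost(prices_hong_kong_hkd, basket)   # 6000 HK dollars

ppp_rate = hk_cost / ny_cost   # implied rate: 6 HK dollars per US dollar
print(ny_cost, hk_cost, ppp_rate)
```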
The name purchasing power parity comes from the idea that, with the right exchange rate, consumers in every location will have the same purchasing power.
The value of the PPP exchange rate is very dependent on the basket of goods chosen. In general, goods are chosen that might closely obey the law of one price. Thus, one attempts to select goods which are traded easily and are commonly available in both locations. Organizations that compute PPP exchange rates use different baskets of goods and can come up with different values.
The PPP exchange rate may not match the market exchange rate. The market rate is more volatile because it reacts to changes in demand at each location. Also, tariffs and differences in the price of labour (see Balassa–Samuelson theorem) can contribute to longer-term differences between the two rates. One use of PPP is to predict longer-term exchange rates.
Because PPP exchange rates are more stable and are less affected by tariffs, they are used for many international comparisons, such as comparing countries' GDPs or other national income statistics. These numbers often come with the label PPP-adjusted.
There can be marked differences between purchasing power adjusted incomes and those converted via market exchange rates. A well-known purchasing power adjustment is the Geary–Khamis dollar (the GK dollar or international dollar). The World Bank's World Development Indicators 2005 estimated that in 2003, one Geary–Khamis dollar was equivalent to about 1.8 Chinese yuan by purchasing power parity—considerably different from the nominal exchange rate. This discrepancy has large implications; for instance, when converted via the nominal exchange rates, GDP per capita in India is about US$1,965 while on a PPP basis, it is about Int$7,197. At the other extreme, Denmark's nominal GDP per capita is around US$53,242, but its PPP figure is Int$46,602, in line with other developed nations.
Variations
There are variations in calculating PPP. The EKS method (developed by Ö. Éltető, P. Köves and B. Szulc) uses the geometric mean of the exchange rates computed for individual goods. The EKS-S method (by Éltető, Köves, Szulc, and Sergeev) uses two different baskets, one for each country, and then averages the result. While these methods work for 2 countries, the exchange rates may be inconsistent if applied to 3 countries, so further adjustment may be necessary so that the rate from currency A to B times the rate from B to C equals the rate from A to C.
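The geometric-mean idea behind the EKS method can be sketched for two locations as follows; this is a simplification of the full EKS procedure, and the price data are hypothetical.

```python
from math import prod

def geometric_mean_rate(prices_a, prices_b):
    """Geometric mean of per-good price relatives between two locations.

    Both arguments map the same good names to prices in local currency;
    the result is units of B's currency per unit of A's currency.
    """
    relatives = [prices_b[good] / prices_a[good] for good in prices_a]
    return prod(relatives) ** (1 / len(relatives))

prices_us = {"computer": 500, "rice_ton": 350, "steel_ton": 300}
prices_hk = {"computer": 2000, "rice_ton": 2500, "steel_ton": 3000}

print(geometric_mean_rate(prices_us, prices_hk))  # about 6.6 HKD per USD
```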
Relative PPP
Relative PPP is a weaker statement based on the law of one price, covering changes in exchange rates and in inflation rates. It tends to track the market exchange rate more closely than absolute PPP does.
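A minimal sketch of the relative-PPP relation in its common textbook form: the change in the exchange rate offsets the inflation differential between the two locations. The numbers below are placeholders, not data.

# Relative PPP: S1 / S0 = (1 + pi_domestic) / (1 + pi_foreign)
s0 = 7.80           # current exchange rate (domestic currency per foreign unit), assumed
pi_domestic = 0.05  # domestic inflation over the period, assumed
pi_foreign = 0.02   # foreign inflation over the period, assumed
s1 = s0 * (1 + pi_domestic) / (1 + pi_foreign)
print(round(s1, 3))  # exchange rate implied by relative PPP at the end of the period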
Usage
Conversion
Purchasing power parity exchange rate is used when comparing national production and consumption and other places where the prices of non-traded goods are considered important. (Market exchange rates are used for individual goods that are traded). PPP rates are more stable over time and can be used when that attribute is important.
PPP exchange rates help costing but exclude profits and above all do not consider the different quality of goods among countries. The same product, for instance, can have a different level of quality and even safety in different countries, and may be subject to different taxes and transport costs. Since market exchange rates fluctuate substantially, when the GDP of one country measured in its own currency is converted to the other country's currency using market exchange rates, one country might be inferred to have higher real GDP than the other country in one year but lower in the other. Both of these inferences would fail to reflect the reality of their relative levels of production.
If one country's GDP is converted into the other country's currency using PPP exchange rates instead of observed market exchange rates, the false inference will not occur. Essentially GDP measured at PPP controls for the different costs of living and price levels, usually relative to the United States dollar, enabling a more accurate estimate of a nation's level of production.
The exchange rate reflects transaction values for traded goods between countries in contrast to non-traded goods, that is, goods produced for home-country use. Also, currencies are traded for purposes other than trade in goods and services, e.g., to buy capital assets whose prices vary more than those of physical goods. Also, different interest rates, speculation, hedging or interventions by central banks can influence the purchasing power parity of a country in the international markets.
The PPP method is used as an alternative to correct for possible statistical bias. The Penn World Table is a widely cited source of PPP adjustments, and the associated Penn effect reflects such a systematic bias in using market exchange rates to compare outputs among countries.
For example, if the value of the Mexican peso falls by half compared to the US dollar, the Mexican gross domestic product measured in dollars will also halve. However, this exchange rate results from international trade and financial markets. It does not necessarily mean that Mexicans are poorer by a half; if incomes and prices measured in pesos stay the same, they will be no worse off assuming that imported goods are not essential to the quality of life of individuals.
Measuring income in different countries using PPP exchange rates helps to avoid this problem, as the metrics give an understanding of relative wealth regarding local goods and services at domestic markets. On the other hand, it is poor for measuring the relative cost of goods and services in international markets. The reason is it does not take into account how much US$1 stands for in a respective country. Using the above-mentioned example: in an international market, Mexicans can buy less than Americans after the fall of their currency, though their GDP PPP changed a little.
Exchange rate prediction
PPP exchange rates are also valued because market exchange rates tend to move in their general direction over a period of years. There is some value in knowing in which direction the exchange rate is more likely to shift over the long run.
In neoclassical economic theory, the purchasing power parity theory assumes that the exchange rate between two currencies actually observed in the different international markets is the one that is used in the purchasing power parity comparisons, so that the same amount of goods could actually be purchased in either currency with the same beginning amount of funds. Depending on the particular theory, purchasing power parity is assumed to hold either in the long run or, more strongly, in the short run. Theories that invoke purchasing power parity assume that in some circumstances a fall in either currency's purchasing power (a rise in its price level) would lead to a proportional decrease in that currency's valuation on the foreign exchange market.
Identifying manipulation
PPP exchange rates are especially useful when official exchange rates are artificially manipulated by governments. Countries with strong government control of the economy sometimes enforce official exchange rates that make their own currency artificially strong. By contrast, the currency's black market exchange rate is artificially weak. In such cases, a PPP exchange rate is likely the most realistic basis for economic comparison. Similarly, when exchange rates deviate significantly from their long term equilibrium due to speculative attacks or carry trade, a PPP exchange rate offers a better alternative for comparison.
In 2011, the Big Mac Index was used to identify manipulation of inflation numbers by Argentina.
Issues
The PPP exchange-rate calculation is controversial because of the difficulties of finding comparable baskets of goods to compare purchasing power across countries.
Estimation of purchasing power parity is complicated by the fact that countries do not simply differ in a uniform price level; rather, the difference in food prices may be greater than the difference in housing prices, while also less than the difference in entertainment prices. People in different countries typically consume different baskets of goods. It is necessary to compare the cost of baskets of goods and services using a price index. This is a difficult task because purchasing patterns and even the goods available to purchase differ across countries.
Thus, it is necessary to make adjustments for differences in the quality of goods and services. Furthermore, the basket of goods representative of one economy will vary from that of another: Americans eat more bread; Chinese more rice. Hence a PPP calculated using the US consumption as a base will differ from that calculated using China as a base. Additional statistical difficulties arise with multilateral comparisons when (as is usually the case) more than two countries are to be compared.
Various ways of averaging bilateral PPPs can provide a more stable multilateral comparison, but at the cost of distorting bilateral ones. These are all general issues of indexing; as with other price indices there is no way to reduce complexity to a single number that is equally satisfying for all purposes. Nevertheless, PPPs are typically robust in the face of the many problems that arise in using market exchange rates to make comparisons.
For example, in 2005 the price of a gallon of gasoline in Saudi Arabia was US$0.91, and in Norway the price was US$6.27. Such a large price difference for a single good would not, on its own, make a PPP analysis accurate, whatever the underlying reasons for the difference; many more goods have to be compared and used as variables in the overall formulation of the PPP.
When PPP comparisons are to be made over some interval of time, proper account needs to be made of inflationary effects.
In addition to methodological issues presented by the selection of a basket of goods, PPP estimates can also vary based on the statistical capacity of participating countries. The International Comparison Program (ICP), on which PPP estimates are based, requires the disaggregation of national accounts into production, expenditure or (in some cases) income, and not all participating countries routinely disaggregate their data into such categories.
Some aspects of PPP comparison are theoretically impossible or unclear. For example, there is no basis for comparison between the Ethiopian labourer who lives on teff with the Thai labourer who lives on rice, because teff is not commercially available in Thailand and rice is not in Ethiopia, so the price of rice in Ethiopia or teff in Thailand cannot be determined. As a general rule, the more similar the price structure between countries, the more valid the PPP comparison.
PPP levels will also vary based on the formula used to calculate price matrices. Possible formulas include GEKS-Fisher, Geary-Khamis, IDB, and the superlative method. Each has advantages and disadvantages.
Linking regions presents another methodological difficulty. In the 2005 ICP round, regions were compared by using a list of some 1,000 identical items for which a price could be found for 18 countries, selected so that at least two countries would be in each region. While this was superior to earlier "bridging" methods, which do not fully take into account differing quality between goods, it may serve to overstate the PPP basis of poorer countries, because the price indexing on which PPP is based will assign to poorer countries the greater weight of goods consumed in greater shares in richer countries.
There are a number of reasons that different measures do not perfectly reflect standard of living. In 2011, interviewed by the Financial Times, a spokesperson for the IMF declared:
Range and quality of goods
The goods that the currency has the "power" to purchase are a basket of goods of different types:
Local, non-tradable goods and services (like electric power) that are produced and sold domestically.
Tradable goods such as non-perishable commodities that can be sold on the international market (like diamonds).
The more that a product falls into category 1, the further its price will be from the currency exchange rate, moving towards the PPP exchange rate. Conversely, category 2 products tend to trade close to the currency exchange rate. (See also Penn effect).
More processed and expensive products are likely to be tradable, falling into the second category, and drifting from the PPP exchange rate towards the currency exchange rate. Even if the PPP "value" of the Ethiopian currency is three times stronger than the currency exchange rate, it will not buy three times as much of internationally traded goods like steel, cars and microchips; it will only go that much further on non-traded goods like housing, services ("haircuts"), and domestically produced crops. The relative price differential between tradables and non-tradables from high-income to low-income countries is a consequence of the Balassa–Samuelson effect and gives a big cost advantage to labour-intensive production of tradable goods in low-income countries (like Ethiopia), as against high-income countries (like Switzerland).
The corporate cost advantage is nothing more sophisticated than access to cheaper workers, but because the pay of those workers goes farther in low-income countries than high, the relative pay differentials (inter-country) can be sustained for longer than would be the case otherwise. (This is another way of saying that the wage rate is based on average local productivity and that this is below the per capita productivity that factories selling tradable goods to international markets can achieve.) An equivalent cost benefit comes from non-traded goods that can be sourced locally (nearer the PPP exchange rate than the nominal exchange rate in which receipts are paid). These act as a cheaper factor of production than is available to factories in richer countries. It is also difficult for GDP measured at PPP to account for differences in the quality of goods among countries.
The Bhagwati–Kravis–Lipsey view provides a somewhat different explanation from the Balassa–Samuelson theory. This view states that price levels for nontradables are lower in poorer countries because of differences in endowment of labor and capital, not because of lower levels of productivity. Poor countries have more labor relative to capital, so marginal productivity of labor is greater in rich countries than in poor countries. Nontradables tend to be labor-intensive; therefore, because labor is less expensive in poor countries and is used mostly for nontradables, nontradables are cheaper in poor countries. Wages are high in rich countries, so nontradables are relatively more expensive.
PPP calculations tend to overemphasise the primary sectoral contribution, and underemphasise the industrial and service sectoral contributions to the economy of a nation.
Trade barriers and nontradables
The law of one price is weakened by transport costs and governmental trade restrictions, which make it expensive to move goods between markets located in different countries. Transport costs sever the link between exchange rates and the prices of goods implied by the law of one price. The greater the transport costs, the larger the range over which exchange rates can fluctuate. The same is true for official trade restrictions because the customs fees affect importers' profits in the same way as shipping fees. According to Krugman and Obstfeld, "Either type of trade impediment weakens the basis of PPP by allowing the purchasing power of a given currency to differ more widely from country to country." They cite the example that a dollar in London should purchase the same goods as a dollar in Chicago, which is certainly not the case.
Nontradables are primarily services and the output of the construction industry. Nontradables also lead to deviations in PPP because the prices of nontradables are not linked internationally. The prices are determined by domestic supply and demand, and shifts in those curves lead to changes in the market basket of some goods relative to the foreign price of the same basket. If the prices of nontradables rise, the purchasing power of any given currency will fall in that country.
Departures from free competition
Linkages between national price levels are also weakened when trade barriers and imperfectly competitive market structures occur together. Pricing to market occurs when a firm sells the same product for different prices in different markets. This is a reflection of inter-country differences in conditions on both the demand side (e.g., virtually no demand for pork in Islamic states) and the supply side (e.g., whether the existing market for a prospective entrant's product features few suppliers or instead is already near-saturated). According to Krugman and Obstfeld, this occurrence of product differentiation and segmented markets results in violations of the law of one price and absolute PPP. Over time, shifts in market structure and demand will occur, which may invalidate relative PPP.
Differences in price level measurement
Measurement of price levels differ from country to country. Inflation data from different countries are based on different commodity baskets; therefore, exchange rate changes do not offset official measures of inflation differences. Because it makes predictions about price changes rather than price levels, relative PPP is still a useful concept. However, change in the relative prices of basket components can cause relative PPP to fail tests that are based on official price indexes.
Global poverty line
The global poverty line is a worldwide count of people who live below an international poverty line, referred to as the dollar-a-day line. This line represents an average of the national poverty lines of the world's poorest countries, expressed in international dollars. These national poverty lines are converted to international currency and the global line is converted back to local currency using the PPP exchange rates from the ICP. PPP exchange rates include data from the sales of high-end non-poverty-related items, which skews the value of food items and necessary goods, which make up 70 percent of poor people's consumption. Angus Deaton argues that PPP indices need to be reweighted for use in poverty measurement; they need to be redefined to reflect local poverty measures, not global measures, weighting local food items and excluding luxury items that are not prevalent or are not of equal value in all localities.
History
The idea originated with the School of Salamanca in the 16th century, and was developed in its modern form by Gustav Cassel in 1916, in The Present Situation of the Foreign Trade. While Gustav Cassel's use of the PPP concept has traditionally been interpreted as his attempt to formulate a positive theory of exchange rate determination, the policy and theoretical context in which Cassel wrote about exchange rates suggests a different interpretation. In the years immediately preceding and following the end of WWI, economists and politicians were involved in discussions on possible ways of restoring the gold standard, which would automatically restore the system of fixed exchange rates among participating nations.
The stability of exchange rates was widely believed to be crucial for restoring international trade and for its further stable and balanced growth. Nobody then was mentally prepared for the idea that flexible exchange rates determined by market forces do not necessarily cause chaos and instability in peacetime (which is what the abandonment of the gold standard during the war was blamed for). Gustav Cassel was among those who supported the idea of restoring the gold standard, although with some alterations. The question which Gustav Cassel tried to answer in his works written during that period was not how exchange rates are determined in the free market, but rather how to determine the appropriate level at which exchange rates were to be fixed during the restoration of the system of fixed exchange rates.
His recommendation was to fix exchange rates at the level corresponding to the PPP, as he believed that this would prevent trade imbalances between trading nations. Thus, PPP doctrine proposed by Cassel was not really a positive (descriptive) theory of exchange rate determination (as Cassel was perfectly aware of numerous factors that prevent exchange rates from stabilizing at PPP level if allowed to float), but rather a normative (prescriptive) policy advice, formulated in the context of discussions on returning to the gold standard.
Examples
Professional
OECD comparative price levels
Each month, the Organisation for Economic Co-operation and Development (OECD) measures the differences in price levels between its member countries by calculating the ratios of PPPs for private final consumption expenditure to exchange rates. The OECD table below indicates the number of US dollars needed in each of the countries listed to buy the same representative basket of consumer goods and services that would cost US$100 in the United States.
According to the table, an American living or travelling in Switzerland on an income denominated in US dollars would find that country to be the most expensive of the group, having to spend 27% more US dollars to maintain a standard of living comparable to the US in terms of consumption.
Extrapolating PPP rates
Since global PPP estimates—such as those provided by the ICP—are not calculated annually, but for a single year, PPP exchange rates for years other than the benchmark year need to be extrapolated. One way of doing this is by using the country's GDP deflator. To calculate a country's PPP exchange rate in Geary–Khamis dollars for a particular year, the calculation proceeds in the following manner:
PPPrateX,i = [PPPrateX,b × (GDPdefX,i / GDPdefX,b)] / [PPPrateU,b × (GDPdefU,i / GDPdefU,b)]
Where PPPrateX,i is the PPP exchange rate of country X for year i, PPPrateX,b is the PPP exchange rate of country X for the benchmark year, PPPrateU,b is the PPP exchange rate of the United States (US) for the benchmark year (equal to 1), GDPdefX,i is the GDP deflator of country X for year i, GDPdefX,b is the GDP deflator of country X for the benchmark year, GDPdefU,i is the GDP deflator of the US for year i, and GDPdefU,b is the GDP deflator of the US for the benchmark year.
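A small Python sketch of this extrapolation, with variable names following the definitions above; the numeric values are placeholders rather than actual ICP or deflator data.

def extrapolate_ppp(ppp_x_b, gdp_def_x_i, gdp_def_x_b, gdp_def_u_i, gdp_def_u_b, ppp_u_b=1.0):
    """Extrapolate country X's PPP exchange rate from benchmark year b to year i using GDP deflators."""
    return (ppp_x_b * (gdp_def_x_i / gdp_def_x_b)) / (ppp_u_b * (gdp_def_u_i / gdp_def_u_b))

# Placeholder inputs for illustration only.
print(extrapolate_ppp(ppp_x_b=3.4, gdp_def_x_i=130.0, gdp_def_x_b=100.0,
                      gdp_def_u_i=110.0, gdp_def_u_b=100.0))  # about 4.02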
UBS
The bank UBS produces its "Prices and Earnings" report every three years. The 2012 report says, "Our reference basket of goods is based on European consumer habits and includes 122 positions".
Educational
To teach PPP, the basket of goods is often simplified to a single good.
Big Mac Index
The Big Mac Index is a simple implementation of PPP where the basket contains a single good: a Big Mac burger from McDonald's restaurants. The index was created and popularized by The Economist in 1986 as a way to teach economics and to identify over- and under-valued currencies.
The Big Mac has the value of being a relatively standardized consumer product that includes input costs from a wide range of sectors in the local economy, such as agricultural commodities (beef, bread, lettuce, cheese), labor (blue and white collar), advertising, rent and real estate costs, transportation, etc.
There are some problems with the Big Mac Index. A Big Mac is perishable and not easily transported. That means the law of one price is not likely to keep prices the same in different locations. McDonald's restaurants are not present in every country, which limits the index's usage. Moreover, Big Macs are not sold at every McDonald's (notably in India), which limits its usage further.
In the white paper, "Burgernomics", the authors computed a correlation of 0.73 between the Big Mac Index's prices and prices calculated using the Penn World Tables. This single-good index captures most, but not all, of the effects captured by more professional (and more complex) PPP measurement.
The Economist uses The Big Mac Index to identify overvalued and undervalued currencies. That is, ones where the Big Mac is expensive or cheap, when measured using current exchange rates. The January 2019 article states that a Big Mac costs HK$20.00 in Hong Kong and US$5.58 in the United States. The implied PPP exchange rate is 3.58 HK$ per US$. The difference between this and the actual exchange rate of 7.83 suggests that the Hong Kong dollar is 54.2% undervalued. That is, it is cheaper to convert US dollars into Hong Kong dollars and buy a Big Mac in Hong Kong than it is to buy a Big Mac directly in US dollars.
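The calculation in the paragraph above can be reproduced with a few lines of Python, using the prices and market rate quoted from the January 2019 article:

price_hk = 20.00    # Big Mac price in Hong Kong dollars
price_us = 5.58     # Big Mac price in US dollars
market_rate = 7.83  # actual HKD per USD at the time

implied_ppp = price_hk / price_us                      # about 3.58 HKD per USD
valuation = (implied_ppp - market_rate) / market_rate  # about -0.542, i.e. 54.2% undervalued
print(round(implied_ppp, 2), round(valuation * 100, 1))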
KFC Index
Similar to the Big Mac Index, the KFC Index measures PPP with a basket that contains a single item: a KFC Original 12/15 pc. bucket. The Big Mac Index cannot be used for most countries in Africa because most do not have a McDonald's restaurant. Thus, the KFC Index was created by Sagaci Research (a market research firm focusing solely on Africa) to identify over- and under-valued currencies in Africa.
For example, the average price of KFC's Original 12 pc. Bucket in the United States in January 2016 was $20.50; while in Namibia it was only $13.40 at market exchange rates. Therefore, the index states the Namibian dollar was undervalued by 33% at that time.
iPad Index
Like the Big Mac Index, the iPad index (elaborated by CommSec) compares an item's price in various locations. Unlike the Big Mac, however, each iPad is produced in the same place (except for the model sold in Brazil) and all iPads (within the same model) have identical performance characteristics. Price differences are therefore a function of transportation costs, taxes, and the prices that may be realized in individual markets. In 2013, an iPad cost about twice as much in Argentina as in the United States.
PPP vs. CPI
Consumer price index (CPI) and purchasing power parity (PPP) conversion factors share conceptual similarities. The CPI measures changes in the level of prices of goods and services over time within a country, whereas PPPs measure differences in price levels across countries or regions at a given point in time.
See also
List of countries by GDP (PPP)
List of countries by GDP (PPP) per capita
List of IMF ranked countries by GDP, Includes IMF ranked PPP of 186 countries
Measures of national income and output
Relative purchasing power parity
References
External links
Penn World Table
Purchasing power parities updated by Organisation of Cooperation and Development (OECD) from OECD data
Explanations from the U. of British Columbia (also provides daily updated PPP charts)
Purchasing power parities as example of international statistical cooperation from Eurostat – Statistics Explained
World Bank International Comparison Project provides PPP estimates for a large number of countries
UBS's "Prices and Earnings" Report 2006 Good report on purchasing power containing a Big Mac index as well as for staples such as bread and rice for 71 world cities.
"Understanding PPPs and PPP based national accounts" provides an overview of methodological issues in calculating PPP and in designing the ICP under which the main PPP tables (Maddison, Penn World Tables, and World Bank WDI) are based.
List of Countries by Purchasing Power Parity since 1990 (World Bank)
The Big Mac Index
Purchasing power parity Definition, Unesco
Purchasing power parity Converter (PPP Converter
Purchasing power
Gross domestic product
International economics
Inequalities
Trade
Currency | Purchasing power parity | [
"Mathematics"
] | 5,965 | [
"Binary relations",
"Mathematical relations",
"Inequalities (mathematics)",
"Mathematical problems",
"Mathematical theorems"
] |
64,493 | https://en.wikipedia.org/wiki/Percentage | In mathematics, a percentage () is a number or ratio expressed as a fraction of 100. It is often denoted using the percent sign (%), although the abbreviations pct., pct, and sometimes pc are also used. A percentage is a dimensionless number (pure number), primarily used for expressing proportions, but percent is nonetheless a unit of measurement in its orthography and usage.
Examples
For example, 45% (read as "forty-five percent") is equal to the fraction 45/100, the ratio 45:55 (or 45:100 when comparing to the total rather than the other portion), or 0.45.
Percentages are often used to express a proportionate part of a total.
(Similarly, one can also express a number as a fraction of 1,000, using the term "per mille" or the symbol "‰".)
Example 1
If 50% of the total number of students in the class are male, that means that 50 out of every 100 students are male. If there are 500 students, then 250 of them are male.
Example 2
An increase of $0.15 on a price of $2.50 is an increase by a fraction of 0.15/2.50 = 0.06. Expressed as a percentage, this is a 6% increase.
While many percentage values are between 0 and 100, there is no mathematical restriction and percentages may take on other values. For example, it is common to refer to 111% or −35%, especially for percent changes and comparisons.
History
In Ancient Rome, long before the existence of the decimal system, computations were often made in fractions in multiples of 1/100. For example, Augustus levied a tax of 1/100 on goods sold at auction known as centesima rerum venalium. Computation with these fractions was equivalent to computing percentages.
As denominations of money grew in the Middle Ages, computations with a denominator of 100 became increasingly standard, such that from the late 15th century to the early 16th century, it became common for arithmetic texts to include such computations. Many of these texts applied these methods to profit and loss, interest rates, and the Rule of Three. By the 17th century, it was standard to quote interest rates in hundredths.
Percent sign
The term "percent" is derived from the Latin per centum, meaning "hundred" or "by the hundred".
The sign for "percent" evolved by gradual contraction of the Italian term per cento, meaning "for a hundred". The "per" was often abbreviated as "p."—eventually disappeared entirely. The "cento" was contracted to two circles separated by a horizontal line, from which the modern "%" symbol is derived.
Calculations
The percent value is computed by multiplying the numeric value of the ratio by 100. For example, to find 50 apples as a percentage of 1,250 apples, one first computes the ratio 50/1,250 = 0.04, and then multiplies by 100 to obtain 4%. The percent value can also be found by multiplying first instead of later, so in this example, the 50 would be multiplied by 100 to give 5,000, and this result would be divided by 1,250 to give 4%.
To calculate a percentage of a percentage, convert both percentages to fractions of 100, or to decimals, and multiply them. For example, 50% of 40% is: (50/100) × (40/100) = 0.20 = 20%.
It is not correct to divide by 100 and use the percent sign at the same time; it would literally imply division by 10,000. For example, 25% = 25/100 = 0.25, not 25%/100, which actually is (25/100)/100 = 0.0025. A term such as (100/100)% would also be incorrect, since it would be read as 1 percent, even if the intent was to say 100%.
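The calculations in this section can be checked with a short Python sketch (the numbers simply reproduce the worked examples above):

print(50 / 1250 * 100)    # 4.0    -> 50 apples are 4% of 1,250 apples
print(0.50 * 0.40 * 100)  # 20.0   -> 50% of 40% is 20%
print(25 / 100 / 100)     # 0.0025 -> dividing by 100 and keeping the percent sign means dividing by 10,000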
Whenever communicating about a percentage, it is important to specify what it is relative to (i.e., what is the total that corresponds to 100%). The following problem illustrates this point.
We are asked to compute the ratio of female computer science majors to all computer science majors. We know that 60% of all students are female, and among these 5% are computer science majors, so we conclude that (60/100) × (5/100) = 3/100 or 3% of all students are female computer science majors. Dividing this by the 10% of all students that are computer science majors, we arrive at the answer: 3%/10% = 30/100 or 30% of all computer science majors are female.
This example is closely related to the concept of conditional probability.
Because of the commutative property of multiplication, reversing expressions does not change the result; for example, 50% of 20 is 10, and 20% of 50 is 10.
Variants of the percentage calculation
The calculation of percentages is carried out and taught in different ways depending on the prerequisites and requirements. The usual formulas can be obtained with proportions, which avoids having to memorize them. In so-called mental arithmetic, the intermediary question usually asked is what 100% or 1% is (or corresponds to).
Example:
42 kg is 7%. How much is (corresponds to) 100%? Given are W (the percentage value, 42 kg) and p% (the percentage, 7%). We are looking for G (the basic value): since 1% corresponds to 42 kg / 7 = 6 kg, 100% corresponds to G = 600 kg.
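A one-line check of this example, with variable names mirroring W, p and G as used above:

W = 42           # percentage value, in kg
p = 7            # percentage
G = W / p * 100  # basic value
print(G)         # 600.0 -> 42 kg corresponds to 7%, so 100% corresponds to 600 kg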
Percentage increase and decrease
Due to inconsistent usage, it is not always clear from the context what a percentage is relative to. When speaking of a "10% rise" or a "10% fall" in a quantity, the usual interpretation is that this is relative to the initial value of that quantity. For example, if an item is initially priced at $200 and the price rises 10% (an increase of $20), the new price will be $220. Note that this final price is 110% of the initial price (100% + 10% = 110%).
Some other examples of percent changes:
An increase of 100% in a quantity means that the final amount is 200% of the initial amount (100% of initial + 100% of increase = 200% of initial). In other words, the quantity has doubled.
An increase of 800% means the final amount is 9 times the original (100% + 800% = 900% = 9 times as large).
A decrease of 60% means the final amount is 40% of the original (100% – 60% = 40%).
A decrease of 100% means the final amount is zero (100% – 100% = 0%).
In general, a change of x percent in a quantity results in a final amount that is 100 + x percent of the original amount (equivalently, (1 + 0.01x) times the original amount).
Compounding percentages
Percent changes applied sequentially do not add up in the usual way. For example, if the 10% increase in price considered earlier (on the $200 item, raising its price to $220) is followed by a 10% decrease in the price (a decrease of $22), then the final price will be $198—not the original price of $200. The reason for this apparent discrepancy is that the two percent changes (+10% and −10%) are measured relative to different initial values ($200 and $220, respectively), and thus do not "cancel out".
In general, if an increase of x percent is followed by a decrease of x percent, and the initial amount was p, the final amount is p(1 + 0.01x)(1 − 0.01x) = p(1 − (0.01x)²); hence the net change is an overall decrease by x percent of x percent (the square of the original percent change when expressed as a decimal number). Thus, in the above example, after an increase and decrease of x = 10 percent, the final amount, $198, was 10% of 10%, or 1%, less than the initial amount of $200. The net change is the same for a decrease of x percent, followed by an increase of x percent; the final amount is p(1 − 0.01x)(1 + 0.01x) = p(1 − (0.01x)²).
This can be expanded for a case where one does not have the same percent change. If the initial amount p leads to a percent change x, and the second percent change is y, then the final amount is p(1 + 0.01x)(1 + 0.01y). To change the above example, after an increase of x = 10 percent and a decrease of y = 5 percent, the final amount, $209, is 4.5% more than the initial amount of $200.
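The sequential changes described above can be verified directly in Python:

price = 200
after_increase = price * (1 + 0.10)           # 10% increase
after_decrease = after_increase * (1 - 0.10)  # subsequent 10% decrease
print(round(after_increase, 2), round(after_decrease, 2))  # 220.0 198.0

# An increase of 10% followed by a decrease of 5%:
print(round(200 * (1 + 0.10) * (1 - 0.05), 2))  # 209.0, i.e. 4.5% more than the original 200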
As shown above, percent changes can be applied in any order and have the same effect.
In the case of interest rates, a very common but ambiguous way to say that an interest rate rose from 10% per annum to 15% per annum, for example, is to say that the interest rate increased by 5%, which could theoretically mean that it increased from 10% per annum to 10.5% per annum. It is clearer to say that the interest rate increased by 5 percentage points (pp). The same confusion between the different concepts of percent(age) and percentage points can potentially cause a major misunderstanding when journalists report about election results, for example, expressing both new results and differences with earlier results as percentages. For example, if a party obtains 41% of the vote and this is said to be a 2.5% increase, does that mean the earlier result was 40% (since 41 = 40 × (1 + 2.5%)) or 38.5% (since 41 = 38.5 + 2.5)?
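The two readings of the "2.5% increase" mentioned above can be made explicit:

current = 41.0
print(round(current / 1.025, 1))  # 40.0 -> earlier result if the change was 2.5 percent (relative)
print(current - 2.5)              # 38.5 -> earlier result if the change was 2.5 percentage points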
In financial markets, it is common to refer to an increase of one percentage point (e.g. from 3% per annum to 4% per annum) as an increase of "100 basis points".
Word and symbol
In most forms of English, percent is usually written as two words (per cent), although percentage and percentile are written as one word. In American English, percent is the most common variant (but per mille is written as two words).
In the early 20th century, there was a dotted abbreviation form "per cent.", as opposed to "per cent". The form "per cent." is still in use in the highly formal language found in certain documents like commercial loan agreements (particularly those subject to, or inspired by, common law), as well as in the Hansard transcripts of British Parliamentary proceedings. The term has been attributed to Latin per centum. The symbol for percent (%) evolved from a symbol abbreviating the Italian per cento. In some other languages, the form procent or prosent is used instead. Some languages use both a word derived from percent and an expression in that language meaning the same thing, e.g. Romanian procent and la sută (thus, 10% can be read or sometimes written ten for [each] hundred, similarly with the English one out of ten). Other abbreviations are rarer, but sometimes seen.
Grammar and style guides often differ as to how percentages are to be written. For instance, it is commonly suggested that the word percent (or per cent) be spelled out in all texts, as in "1 percent" and not "1%". Other guides prefer the word to be written out in humanistic texts, but the symbol to be used in scientific texts. Most guides agree that they always be written with a numeral, as in "5 percent" and not "five percent", the only exception being at the beginning of a sentence: "Ten percent of all writers love style guides." Decimals are also to be used instead of fractions, as in "3.5 percent of the gain" and not "3½ percent of the gain". However, the titles of bonds issued by governments and other issuers use the fractional form, e.g. "% Unsecured Loan Stock 2032 Series 2". (When interest rates are very low, the number 0 is included if the interest rate is less than 1%, e.g. "% Treasury Stock", not "% Treasury Stock".) It is also widely accepted to use the percent symbol (%) in tabular and graphic material.
In line with common English practice, style guides—such as The Chicago Manual of Style—generally state that the number and percent sign are written without any space in between.
However, the International System of Units and the ISO 31-0 standard require a space.
Other uses
The word "percentage" is often a misnomer in the context of sports statistics, when the referenced number is expressed as a decimal proportion, not a percentage: "The Phoenix Suns' Shaquille O'Neal led the NBA with a .609 field goal percentage (FG%) during the 2008–09 season." (O'Neal made 60.9% of his shots, not 0.609%.) Likewise, the winning percentage of a team, the fraction of matches that the club has won, is also usually expressed as a decimal proportion; a team that has a .500 winning percentage has won 50% of their matches. The practice is probably related to the similar way that batting averages are quoted.
As "percent" it is used to describe the grade or slope, the steepness of a road or railway, formula for which is 100 × which could also be expressed as the tangent of the angle of inclination times 100. This is the ratio of distances a vehicle would advance vertically and horizontally, respectively, when going up- or downhill, expressed in percent.
Percentage is also used to express composition of a mixture by mass percent and mole percent.
Related units
Percentage point difference of 1 part in 100
Per mille (‰) 1 part in 1,000
Basis point (bp) difference of 1 part in 10,000
Permyriad (‱) 1 part in 10,000
Per cent mille (pcm) 1 part in 100,000
Centiturn
Practical applications
Baker percentage
Volume percent
See also
1000 percent
Relative change and difference
Percent difference
Percentage change
Parts-per notation
Per-unit system
Percent point function
References
External links
100 (number)
Elementary arithmetic | Percentage | [
"Mathematics"
] | 2,816 | [
"Elementary mathematics",
"Arithmetic",
"Elementary arithmetic"
] |
64,506 | https://en.wikipedia.org/wiki/Fast%20Ethernet | In computer networking, Fast Ethernet physical layers carry traffic at the nominal rate of . The prior Ethernet speed was . Of the Fast Ethernet physical layers, 100BASE-TX is by far the most common.
Fast Ethernet was introduced in 1995 as the IEEE 802.3u standard and remained the fastest version of Ethernet for three years before the introduction of Gigabit Ethernet. The acronym GE/FE is sometimes used for devices supporting both standards.
Nomenclature
The 100 in the media type designation refers to the transmission speed of 100 Mbit/s, while the BASE refers to baseband signaling. The letter following the dash (T or F) refers to the physical medium that carries the signal (twisted pair or fiber, respectively), while the last character (X, 4, etc.) refers to the line code method used. Fast Ethernet is sometimes referred to as 100BASE-X, where X is a placeholder for the FX and TX variants.
General design
Fast Ethernet is an extension of the 10-megabit Ethernet standard. It runs on twisted pair or optical fiber cable in a star wired bus topology, similar to the IEEE standard 802.3i called 10BASE-T, itself an evolution of 10BASE5 (802.3) and 10BASE2 (802.3a). Fast Ethernet devices are generally backward compatible with existing 10BASE-T systems, enabling plug-and-play upgrades from 10BASE-T. Most switches and other networking devices with ports capable of Fast Ethernet can perform autonegotiation, sensing a piece of 10BASE-T equipment and setting the port to 10BASE-T half duplex if the 10BASE-T equipment cannot perform autonegotiation itself. The standard specifies the use of CSMA/CD for media access control. A full-duplex mode is also specified and in practice, all modern networks use Ethernet switches and operate in full-duplex mode, even as legacy devices that use half duplex still exist.
A Fast Ethernet adapter can be logically divided into a media access controller (MAC), which deals with the higher-level issues of medium availability, and a physical layer interface (PHY). The MAC is typically linked to the PHY by a four-bit 25 MHz synchronous parallel interface known as a media-independent interface (MII), or by a two-bit 50 MHz variant called reduced media independent interface (RMII). In rare cases, the MII may be an external connection but is usually a connection between ICs in a network adapter or even two sections within a single IC. The specs are written based on the assumption that the interface between MAC and PHY will be an MII but they do not require it. Fast Ethernet or Ethernet hubs may use the MII to connect to multiple PHYs for their different interfaces.
The MII fixes the theoretical maximum data bit rate for all versions of Fast Ethernet to 100 Mbit/s. The information rate actually observed on real networks is less than the theoretical maximum, due to the necessary header and trailer (addressing and error-detection bits) on every Ethernet frame, and the required interpacket gap between transmissions.
Copper
100BASE-T is any of several Fast Ethernet standards for twisted pair cables, including: 100BASE-TX (100 Mbit/s over two-pair Cat5 or better cable), 100BASE-T4 (100 Mbit/s over four-pair Cat3 or better cable, defunct), and 100BASE-T2 (100 Mbit/s over two-pair Cat3 or better cable, also defunct). The segment length for a 100BASE-T cable is limited to 100 metres (the same limit as 10BASE-T and gigabit Ethernet). All are or were standards under IEEE 802.3 (approved 1995). Almost all 100BASE-T installations are 100BASE-TX.
100BASE-TX
100BASE-TX is the predominant form of Fast Ethernet, and runs over two pairs of wire inside a Category 5 or above cable. Cable distance between nodes can be up to 100 metres. One pair is used for each direction, providing full-duplex operation at 100 Mbit/s in each direction.
Like 10BASE-T, the active pairs in a standard connection are terminated on pins 1, 2, 3 and 6. Since a typical Category 5 cable contains four pairs and the performance requirements of 100BASE-TX do not exceed the capabilities of even the worst-performing pair, one typical cable can carry two 100BASE-TX links with a simple wiring adaptor on each end. Cabling is conventionally wired to one of ANSI/TIA-568's termination standards, T568A or T568B. 100BASE-TX uses pairs 2 and 3 (orange and green).
The configuration of 100BASE-TX networks is very similar to 10BASE-T. When used to build a local area network, the devices on the network (computers, printers etc.) are typically connected to a hub or switch, creating a star network. Alternatively, it is possible to connect two devices directly using a crossover cable. With today's equipment, crossover cables are generally not needed as most equipment supports auto-negotiation along with auto MDI-X to select and match speed, duplex and pairing.
With 100BASE-TX hardware, the raw bits, presented 4 bits wide clocked at 25 MHz at the MII, go through 4B5B binary encoding to generate a series of 0 and 1 symbols clocked at a 125 MHz symbol rate. The 4B5B encoding provides DC equalization and spectrum shaping. Just as in the 100BASE-FX case, the bits are then transferred to the physical medium attachment layer using NRZI encoding. However, 100BASE-TX introduces an additional, medium-dependent sublayer, which employs MLT-3 as a final encoding of the data stream before transmission, resulting in a maximum fundamental frequency of 31.25 MHz. The procedure is borrowed from the ANSI X3.263 FDDI specifications, with minor changes.
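A small Python sketch of the rate arithmetic behind 100BASE-TX; it only reproduces the figures quoted above and ignores framing overhead and the details of the 4B5B code table.

mii_width_bits = 4
mii_clock_hz = 25e6
data_rate = mii_width_bits * mii_clock_hz  # 100 Mbit/s presented at the MII

symbol_rate = data_rate * 5 / 4     # 125 Mbaud after 4B5B encoding (5 code bits per 4 data bits)
mlt3_fundamental = symbol_rate / 4  # 31.25 MHz: MLT-3 takes four symbol periods per full cycle
print(data_rate, symbol_rate, mlt3_fundamental)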
100BASE-T1
In 100BASE-T1 the data is transmitted over a single copper pair, 3 bits at a time, each group of 3 bits transmitted as a pair of ternary symbols using PAM3. It supports full-duplex transmission. The twisted-pair cable is required to support 66 MHz, with a maximum length of 15 m. No specific connector is defined. The standard is intended for automotive applications or when Fast Ethernet is to be integrated into another application. It was developed as Open Alliance BroadR-Reach (OABR) before IEEE standardization.
100BASE-T2
In 100BASE-T2, standardized in IEEE 802.3y, the data is transmitted over two copper pairs, but these pairs are only required to be Category 3 rather than the Category 5 required by 100BASE-TX. Data is transmitted and received on both pairs simultaneously thus allowing full-duplex operation. Transmission uses 4 bits per symbol. The 4-bit symbol is expanded into two 3-bit symbols through a non-trivial scrambling procedure based on a linear-feedback shift register. This is needed to flatten the bandwidth and emission spectrum of the signal, as well as to match transmission line properties. The mapping of the original bits to the symbol codes is not constant in time and has a fairly large period (appearing as a pseudo-random sequence). The final mapping from symbols to PAM-5 line modulation levels obeys a fixed mapping table. 100BASE-T2 was not widely adopted but the technology developed for it is used in 1000BASE-T.
100BASE-T4
100BASE-T4 was an early implementation of Fast Ethernet. It required four pairs of voice-grade twisted pair, a lower-performing cable compared to the Category 5 cable used by 100BASE-TX. Maximum distance was limited to 100 meters. One pair was reserved for transmit and one for receive, and the remaining two switched direction. The fact that three pairs were used to transmit in each direction made 100BASE-T4 inherently half-duplex. Using three cable pairs allowed it to reach 100 Mbit/s while running at lower carrier frequencies, which allowed it to run on older cabling that many companies had recently installed for 10BASE-T networks.
A very unusual 8B6T code was used to convert 8 data bits into 6 base-3 digits (the signal shaping is possible as there are nearly three times as many 6-digit base-3 numbers as there are 8-digit base-2 numbers). The two resulting 3-digit base-3 symbols were sent in parallel over three pairs using 3-level pulse-amplitude modulation (PAM-3).
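The claim that there are nearly three times as many 6-digit base-3 numbers as 8-digit base-2 numbers can be checked in one line of Python:

print(3 ** 6, 2 ** 8, 3 ** 6 / 2 ** 8)  # 729 ternary code words versus 256 byte values, a ratio of about 2.85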
100BASE-T4 was not widely adopted but some of the technology developed for it is used in 1000BASE-T. Very few hubs were released with 100BASE-T4 support. Some examples include the 3com 3C250-T4 Superstack II HUB 100, IBM 8225 Fast Ethernet Stackable Hub and Intel LinkBuilder FMS 100 T4. The same applies to network interface controllers. Bridging 100BASE-T4 with 100BASE-TX required additional network equipment.
100BaseVG
Proposed and marketed by Hewlett-Packard, 100BaseVG was an alternative design using category 3 cabling and a token concept instead of CSMA/CD. It was slated for standardization as IEEE 802.12 but it quickly vanished when switched 100BASE-TX became popular. The IEEE standard was later withdrawn.
VG was similar to T4 in that it used more cable pairs combined with a lower carrier frequency to allow it to reach 100 Mbit/s on voice-grade cables. It differed in the way those cables were assigned. Whereas T4 would use the two extra pairs in different directions depending on the direction of data exchange, VG instead used two transmission modes. In one, control, two pairs are used for transmission and reception as in classic Ethernet, while the other two pairs are used for flow control. In the second mode, transmission, all four are used to transfer data in a single direction. The hubs implemented a token passing scheme to choose which of the attached nodes were allowed to communicate at any given time, based on signals sent to it from the nodes using control mode. When one node was selected to become active, it would switch to transfer mode, send or receive a packet, and return to control mode.
This concept was intended to solve two problems. The first was that it eliminated the need for collision detection and thereby reduced contention on busy networks. While any particular node may find itself throttled due to heavy traffic, the network as a whole would not end up losing efficiency due to collisions and the resulting rebroadcasts. Under heavy use, the total throughput was increased compared to the other standards. The other was that the hubs could examine the payload types and schedule the nodes based on their bandwidth requirements. For instance, a node sending a video signal may not require much bandwidth but will require it to be predictable in terms of when it is delivered. A VG hub could schedule access on that node to ensure it received the transmission timeslots it needed while opening up the network at all other times to the other nodes. This style of access was known as demand priority.
Fiber optics
Fiber variants use fiber-optic cable with the listed interface types. Interfaces may be fixed or modular, often as small form-factor pluggable (SFP).
Fast Ethernet SFP ports
Fast Ethernet speed is not available on all SFP ports, but is supported by some devices. An SFP port for Gigabit Ethernet should not be assumed to be backwards compatible with Fast Ethernet.
Optical interoperability
To have interoperability there are some criteria that have to be met:
Line encoding
Wavelength
Duplex mode
Media count
Media type and dimension
100BASE-X Ethernet is not backward compatible with 10BASE-F and is not forward compatible with 1000BASE-X.
100BASE-FX
100BASE-FX is a version of Fast Ethernet over optical fiber. The 100BASE-FX physical medium dependent (PMD) sublayer is defined by FDDI's PMD, so 100BASE-FX is not compatible with 10BASE-FL, the 10 Mbit/s version of Ethernet over optical fiber.
100BASE-FX is still used for existing installation of multimode fiber where more speed is not required, like industrial automation plants.
100BASE-LFX
100BASE-LFX is a non-standard term for Fast Ethernet transmission. It is very similar to 100BASE-FX but achieves longer distances, up to 4–5 km, over a pair of multi-mode fibers through the use of a Fabry–Pérot laser transmitter running at a 1310 nm wavelength. The signal attenuation per km at 1300 nm is about half the loss at 850 nm.
100BASE-SX
100BASE-SX is a version of Fast Ethernet over optical fiber standardized in TIA/EIA-785-1-2002. It is a lower-cost, shorter-distance alternative to 100BASE-FX. Because of the shorter wavelength used (850 nm) and the shorter distance supported, 100BASE-SX uses less expensive optical components (LEDs instead of lasers).
Because it uses the same wavelength as 10BASE-FL, the 10 Mbit/s version of Ethernet over optical fiber, 100BASE-SX can be backward-compatible with 10BASE-FL. Cost and compatibility make 100BASE-SX an attractive option for those upgrading from 10BASE-FL and those who do not require long distances.
100BASE-LX10
100BASE-LX10 is a version of Fast Ethernet over optical fiber standardized in 802.3ah-2004 clause 58. It has a 10 km reach over a pair of single-mode fibers.
100BASE-BX10
100BASE-BX10 is a version of Fast Ethernet over optical fiber standardized in 802.3ah-2004 clause 58. It uses an optical multiplexer to split TX and RX signals into different wavelengths on the same fiber. It has a 10 km reach over a single strand of single-mode fiber.
100BASE-EX
100BASE-EX is very similar to 100BASE-LX10 but achieves longer distances up to 40 km over a pair of single-mode fibers due to higher quality optics than a LX10, running on 1310 nm wavelength lasers. 100BASE-EX is not a formal standard but industry-accepted term. It is sometimes referred to as 100BASE-LH (long haul), and is easily confused with 100BASE-LX10 or 100BASE-ZX because the use of -LX(10), -LH, -EX, and -ZX is ambiguous between vendors.
100BASE-ZX
100BASE-ZX is a non-standard but multi-vendor term to refer to Fast Ethernet transmission using 1,550 nm wavelength to achieve distances of at least 70 km over single-mode fiber. Some vendors specify distances up to 160 km over single-mode fiber, sometimes called 100BASE-EZX. Ranges beyond 80 km are highly dependent upon the path loss of the fiber in use, specifically the attenuation figure in dB per km, the number and quality of connectors/patch panels and splices located between transceivers.
See also
List of interface bit rates
Notes
References
External links
Common Hardware Variations
Origins and History of Ethernet
IEEE802.3 standards free download
ProCurve Networking 100BASE-FX Technical Brief
Ethernet standards
Computer networking | Fast Ethernet | [
"Technology",
"Engineering"
] | 3,147 | [
"Computer networking",
"Computer science",
"Computer engineering"
] |
64,516 | https://en.wikipedia.org/wiki/2 | 2 (two) is a number, numeral and digit. It is the natural number following 1 and preceding 3. It is the smallest and the only even prime number.
Because it forms the basis of a duality, it has religious and spiritual significance in many cultures.
As a word
Two is most commonly a determiner used with plural countable nouns, as in two days or I'll take these two. Two is a noun when it refers to the number two as in two plus two is four.
Etymology of two
The word two is derived from the Old English words (feminine), (neuter), and (masculine, which survives today in the form twain).
The pronunciation , like that of who is due to the labialization of the vowel by the w, which then disappeared before the related sound. The successive stages of pronunciation for the Old English would thus be , , , , and finally .
Mathematics
An integer is determined to be even if it is divisible by two. When written in base 10, all multiples of 2 will end in 0, 2, 4, 6, or 8. 2 is the smallest and the only even prime number, and the first Ramanujan prime. It is also the first superior highly composite number, and the first colossally abundant number.
Geometry
A digon is a polygon with two sides (or edges) and two vertices. Two distinct points in a plane are always sufficient to define a unique line in a nontrivial Euclidean space.
Set theory
A set that is a field has a minimum of two elements. A Cantor space is a topological space homeomorphic to the Cantor set.
Base 2
Binary is a number system with a base of two; it is used extensively in computing.
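For example, in Python an integer can be converted to and from its base-2 representation:

print(bin(10))         # '0b1010' -> ten written in base two
print(int("1010", 2))  # 10      -> and back to base ten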
List of basic calculations
Evolution of the Arabic digit
The digit used in the modern Western world to represent the number 2 traces its roots back to the Indic Brahmic script, where "2" was written as two horizontal lines. The modern Chinese and Japanese languages (and Korean Hanja) still use this method. The Gupta script rotated the two lines 45 degrees, making them diagonal. The top line was sometimes also shortened and had its bottom end curve towards the center of the bottom line. In the Nagari script, the top line was written more like a curve connecting to the bottom line. In the Arabic Ghubar writing, the bottom line was completely vertical, and the digit looked like a dotless closing question mark. Restoring the bottom line to its original horizontal position, but keeping the top line as a curve that connects to the bottom line leads to our modern digit.
In fonts with text figures, digit 2 usually is of x-height.
In science
The first magic number.
See also
Binary number
References
External links
Prime curiosities: 2
2 (number)
Integers | 2 | [
"Mathematics"
] | 578 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
64,592 | https://en.wikipedia.org/wiki/Hallway | A hallway (also passage, passageway, corridor or hall) is an interior space in a building that is used to connect other rooms. Hallways are generally long and narrow.
Hallways must be sufficiently wide to ensure buildings can be evacuated during a fire, and to allow people in wheelchairs to navigate them. The minimum width of a hallway is governed by building codes. Minimum widths in residences are in the United States. Hallways are wider in higher-traffic settings, such as schools and hospitals.
In 1597 John Thorpe is the first recorded architect to replace multiple connected rooms with rooms along a corridor each accessed by a separate door.
References
External links
Rooms | Hallway | [
"Engineering"
] | 131 | [
"Rooms",
"Architecture"
] |
64,597 | https://en.wikipedia.org/wiki/Ludwig%20von%20Bertalanffy | Karl Ludwig von Bertalanffy (19 September 1901 – 12 June 1972) was an Austrian biologist known as one of the founders of general systems theory (GST). This is an interdisciplinary practice that describes systems with interacting components, applicable to biology, cybernetics and other fields. Bertalanffy proposed that the classical laws of thermodynamics might be applied to closed systems, but not necessarily to "open systems" such as living things. His mathematical model of an organism's growth over time, published in 1934, is still in use today.
Bertalanffy grew up in Austria and subsequently worked in Vienna, London, Canada, and the United States.
Biography
Ludwig von Bertalanffy was born and grew up in the little village of Atzgersdorf (now Liesing) near Vienna. Ludwig's mother Caroline Agnes Vogel was seventeen when she married the thirty-four-year-old Gustav. Ludwig von Bertalanffy grew up as an only child, educated at home by private tutors until he was ten; his parents divorced and both remarried outside the Catholic Church in civil ceremonies. When he arrived at his Gymnasium (a form of grammar school) he was already well habituated in learning by reading, and he continued to study on his own. His neighbour, the famous biologist Paul Kammerer, became a mentor and an example to the young Ludwig.
The Bertalanffy family had roots in the 16th century nobility of Hungary which included several scholars and court officials. His grandfather Charles Joseph von Bertalanffy (1833–1912) had settled in Austria and was a state theatre director in Klagenfurt, Graz and Vienna, which were important sites in imperial Austria. Ludwig's father Gustav von Bertalanffy (1861–1919) was a prominent railway administrator. On his mother's side Ludwig's grandfather Joseph Vogel was an imperial counsellor and a wealthy Vienna publisher.
In 1918, Bertalanffy started his studies at the university level in philosophy and art history, first at the University of Innsbruck and then at the University of Vienna. Ultimately, Bertalanffy had to make a choice between studying philosophy of science and biology; he chose the latter because, according to him, one could always become a philosopher later, but not a biologist. In 1926 he finished his PhD thesis (Fechner und das Problem der Integration höherer Ordnung, translated title: Fechner and the Problem of Higher-Order Integration) on the psychologist and philosopher Gustav Theodor Fechner. For the next six years he concentrated on a project of "theoretical biology" which focused on the philosophy of biology. He received his habilitation in 1934 in "theoretical biology".
Bertalanffy was appointed Privatdozent at the University of Vienna in 1934. The post yielded little income, and Bertalanffy faced continuing financial difficulties. He applied for promotion to the status of associate professor, but funding from the Rockefeller Foundation enabled him to make a trip to Chicago in 1937 to work with Nicolas Rashevsky. He was also able to visit the Marine Biological Laboratory in Massachusetts.
Bertalanffy was still in the US when he heard of the Anschluss in March 1938. However, his attempts to remain in the US failed, and he returned to Vienna in October of that year. Within a month of his return, he joined the Nazi Party, which facilitated his promotion to professor at the University of Vienna in 1940. During the Second World War, he linked his "organismic" philosophy of biology to the dominant Nazi ideology, principally that of the Führerprinzip.
Following the defeat of Nazism, Bertalanffy found denazification problematic and left Vienna in 1948. He moved to the University of London (1948–49); the Université de Montréal (1949); the University of Ottawa (1950–54); the University of Southern California (1955–58); the Menninger Foundation (1958–60); the University of Alberta (1961–68); and the State University of New York at Buffalo (SUNY) (1969–72).
In 1972, he died from a heart attack.
Family life
Bertalanffy met his wife, Maria, in April 1924 in the Austrian Alps. They were hardly ever apart for the next forty-eight years. She wanted to finish studying but never did, instead devoting her life to Bertalanffy's career. Later, in Canada, she would work both for him and with him in his career, and after his death she compiled two of Bertalanffy's last works. They had a son, Felix D. Bertalanffy (1926-1999), who was a professor at the University of Manitoba and followed in his father's footsteps by making his profession in the field of cancer research.
Work
Today, Bertalanffy is considered to be a founder and one of the principal authors of the interdisciplinary school of thought known as general systems theory, which was pioneered by Alexander Bogdanov. According to Weckowicz (1989), he "occupies an important position in the intellectual history of the twentieth century. His contributions went beyond biology, and extended into cybernetics, education, history, philosophy, psychiatry, psychology and sociology. Some of his admirers even believe that this theory will one day provide a conceptual framework for all these disciplines".
Individual growth model
The individual growth model published by Ludwig von Bertalanffy in 1934 is widely used in biological models and exists in a number of permutations.
In its simplest version the so-called Bertalanffy growth equation is expressed as a differential equation of length (L) over time (t):
L′(t) = r_B (L∞ − L(t)),
where r_B is the Bertalanffy growth rate and L∞ the ultimate length of the individual. This model was proposed earlier by August Friedrich Robert Pütter (1879–1929), writing in 1920.
The dynamic energy budget theory provides a mechanistic explanation of this model in the case of isomorphs that experience a constant food availability. The inverse of the Bertalanffy growth rate appears to depend linearly on the ultimate length, when different food levels are compared. The intercept relates to the maintenance costs, the slope to the rate at which reserve is mobilized for use by metabolism. The ultimate length equals the maximum length at high food availabilities.
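As a quick numerical illustration (a minimal sketch; the parameter values and helper names below are illustrative assumptions, not data from the article), the closed-form curve L(t) = L∞ − (L∞ − L0)·e^(−r_B·t) can be compared against a direct Euler integration of the differential equation:

```python
import math

def bertalanffy_length(t, L_inf, r_B, L0=0.0):
    """Closed-form von Bertalanffy growth curve L(t) = L_inf - (L_inf - L0)*exp(-r_B*t)."""
    return L_inf - (L_inf - L0) * math.exp(-r_B * t)

def euler_integrate(L_inf, r_B, L0=0.0, t_end=10.0, dt=0.01):
    """Euler integration of dL/dt = r_B * (L_inf - L)."""
    L, t = L0, 0.0
    while t < t_end:
        L += r_B * (L_inf - L) * dt
        t += dt
    return L

if __name__ == "__main__":
    L_inf, r_B = 100.0, 0.3   # illustrative values: ultimate length 100, growth rate 0.3 per year
    for years in (1, 5, 10):
        print(years, round(bertalanffy_length(years, L_inf, r_B), 2),
              round(euler_integrate(L_inf, r_B, t_end=years), 2))
```

Both computations approach the ultimate length L∞ as t grows, which is the defining feature of the model.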
Bertalanffy equation
The Bertalanffy equation describes the growth of a biological organism. It was presented by Ludwig von Bertalanffy in 1969:
dW/dt = η S − k V
Here W is organism weight, t is the time, S is the area of the organism's surface, and V is the physical volume of the organism.
The coefficients η and k are (by Bertalanffy's definition) the "coefficient of anabolism" and "coefficient of catabolism" respectively.
The solution of the Bertalanffy equation is the function:
where the remaining symbols are constants.
Bertalanffy could not explain the meaning of the parameters η (the coefficient of anabolism) and k (the coefficient of catabolism) in his works, which prompted criticism from biologists. However, the Bertalanffy equation is a special case of the Tetearing equation, which is a more general equation of the growth of a biological organism. The Tetearing equation does provide a physical meaning for these coefficients.
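Assuming the common reading in which the surface term scales as W^(2/3) and the volume term as W (an assumption made here only for illustration, not Bertalanffy's own text), the weight equation can be integrated numerically; the function name and parameter values below are hypothetical:

```python
def bertalanffy_weight(eta, k, W0, t_end, dt=0.001):
    """Euler integration of dW/dt = eta*S - k*V, with S ~ W**(2/3) and V ~ W (illustrative assumption)."""
    W, t = W0, 0.0
    while t < t_end:
        W += (eta * W ** (2.0 / 3.0) - k * W) * dt
        t += dt
    return W

# Illustrative parameters: under this assumption the weight levels off at (eta/k)**3.
eta, k = 3.0, 1.0
print(round(bertalanffy_weight(eta, k, W0=1.0, t_end=30.0), 2))   # approaches (3/1)**3 = 27
```

With this scaling assumption the asymptotic weight (η/k)^3 plays the same role as the ultimate length L∞ in the growth model above.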
Bertalanffy module
To honour Bertalanffy, ecological systems engineer and scientist Howard T. Odum named the storage symbol of his General Systems Language as the Bertalanffy module (see image right).
General system theory
In the late 1920s, the Soviet philosopher Alexander Bogdanov pioneered "Tektology", which Johann Plenge referred to as the theory of "general systems". However, in the West, Bertalanffy is widely recognized for the development of a theory known as general system theory (GST). The theory attempted to provide alternatives to conventional models of organization. GST defined new foundations and developments as a generalized theory of systems with applications to numerous areas of study, emphasizing holism over reductionism, organism over mechanism.
Foundational to GST are the inter-relationships between elements which all together form the whole.
Publications
1928, Kritische Theorie der Formbildung, Borntraeger. In English: Modern Theories of Development: An Introduction to Theoretical Biology, Oxford University Press, New York: Harper, 1933
1928, Nikolaus von Kues, G. Müller, München 1928.
1930, Lebenswissenschaft und Bildung, Stenger, Erfurt 1930
1937, Das Gefüge des Lebens, Leipzig: Teubner.
1940, Vom Molekül zur Organismenwelt, Potsdam: Akademische Verlagsgesellschaft Athenaion.
1949, Das biologische Weltbild, Bern: Europäische Rundschau. In English: Problems of Life: An Evaluation of Modern Biological and Scientific Thought, New York: Harper, 1952.
1953, Biophysik des Fliessgleichgewichts, Braunschweig: Vieweg. 2nd rev. ed. by W. Beier and R. Laue, East Berlin: Akademischer Verlag, 1977
1953, "Die Evolution der Organismen", in Schöpfungsglaube und Evolutionstheorie, Stuttgart: Alfred Kröner Verlag, pp 53–66
1955, "An Essay on the Relativity of Categories." Philosophy of Science, Vol. 22, No. 4, pp. 243–263.
1959, Stammesgeschichte, Umwelt und Menschenbild, Schriften zur wissenschaftlichen Weltorientierung Vol 5. Berlin: Lüttke
1962, Modern Theories of Development, New York: Harper
1967, Robots, Men and Minds: Psychology in the Modern World, New York: George Braziller, 1969 hardcover: , paperback:
1968, General System Theory: Foundations, Development, Applications, New York: George Braziller, revised edition 1976:
1968, The Organismic Psychology and Systems Theory, Heinz Werner lectures, Worcester: Clark University Press.
1975, Perspectives on General Systems Theory. Scientific-Philosophical Studies, E. Taschdjian (eds.), New York: George Braziller,
1981, A Systems View of Man: Collected Essays, editor Paul A. LaViolette, Boulder: Westview Press,
The first articles from Bertalanffy on general systems theory:
1945, "Zu einer allgemeinen Systemlehre", Blätter für deutsche Philosophie, 3/4. (Extract in: Biologia Generalis, 19 (1949), 139-164).
1950, "An Outline of General System Theory", British Journal for the Philosophy of Science 1, p. 114-129.
1951, "General system theory – A new approach to unity of science" (Symposium), Human Biology, Dec. 1951, Vol. 23, p. 303-361.
See also
Bowman–Heidenhain hypothesis
Integrative level
Population dynamics
References
Further reading
Sabine Brauckmann (1999). Ludwig von Bertalanffy (1901--1972), ISSS Luminaries of the Systemics Movement, January 1999.
Peter Corning (2001). Fulfilling von Bertalanffy's Vision: The Synergism Hypothesis as a General Theory of Biological and Social Systems, ISCS 2001.
Mark Davidson (1983). Uncommon Sense: The Life and Thought of Ludwig Von Bertalanffy, Los Angeles: J. P. Tarcher.
Debora Hammond (2005). Philosophical and Ethical Foundations of Systems Thinking, tripleC 3(2): pp. 20–27.
Ervin László eds. (1972). The Relevance of General Systems Theory: Papers Presented to Ludwig Von Bertalanffy on His Seventieth Birthday, New York: George Braziller, 1972.
David Pouvreau (2013). "Une histoire de la 'systémologie générale' de Ludwig von Bertalanffy - Généalogie, genèse, actualisation et postérité d'un projet herméneutique", Doctoral Thesis (1138 pages), Ecole des Hautes Etudes en Sciences Sociales (EHESS), Paris : http://tel.archives-ouvertes.fr/tel-00804157
Thaddus E. Weckowicz (1989). Ludwig von Bertalanffy (1901-1972): A Pioneer of General Systems Theory, Center for Systems Research Working Paper No. 89-2. Edmonton AB: University of Alberta, February 1989.
External links
International Society for the Systems Sciences' biography of Ludwig von Bertalanffy.
http://isss.org/projects/primer International Society for the Systems Sciences' THE PRIMER PROJECT: INTEGRATIVE SYSTEMICS (organismics)
Bertalanffy Center for the Study of Systems Science BCSSS in Vienna.
Ludwig von Bertalanffy (1901-1972): A Pioneer of General Systems Theory working paper by T.E. Weckowicz, University of Alberta Center for Systems Research.
Ludwig von Bertalanffy, General System Theory - Passages (1968)
1901 births
1972 deaths
20th-century Austrian biologists
Systems biologists
Theoretical biologists
Austrian emigrants to the United States
Academic staff of the University of Alberta
Academic staff of the Université de Montréal
Academic staff of the University of Ottawa
Austrian people of Hungarian descent
Austrian untitled nobility
Scientists from Vienna
People from Liesing
Hungarian nobility
Nobility in the Nazi Party
20th-century Austrian nobility
Center for Advanced Study in the Behavioral Sciences fellows
20th-century Canadian biologists
University at Buffalo faculty | Ludwig von Bertalanffy | [
"Biology"
] | 2,808 | [
"Bioinformatics",
"Theoretical biologists"
] |
64,599 | https://en.wikipedia.org/wiki/Soil%20salinity | Soil salinity is the salt content in the soil; the process of increasing the salt content is known as salinization. Salts occur naturally within soils and water. Salination can be caused by natural processes such as mineral weathering or by the gradual withdrawal of an ocean. It can also come about through artificial processes such as irrigation and road salt.
Natural occurrence
Salts are a natural component in soils and water.
The ions responsible for salination are: Na+, K+, Ca2+, Mg2+ and Cl−.
Over long periods of time, as soil minerals weather and release salts, these salts are flushed or leached out of the soil by drainage water in areas with sufficient precipitation. In addition to mineral weathering, salts are also deposited via dust and precipitation. Salts may accumulate in dry regions, leading to naturally saline soils. This is the case, for example, in large parts of Australia.
Human practices can increase the salinity of soils by the addition of salts in irrigation water. Proper irrigation management can prevent salt accumulation by providing adequate drainage water to leach added salts from the soil. Disrupting drainage patterns that provide leaching can also result in salt accumulations. An example of this occurred in Egypt in 1970 when the Aswan High Dam was built. The change in the level of ground water before the construction had enabled soil erosion, which led to high concentration of salts in the water table. After the construction, the continuous high level of the water table led to the salination of arable land.
Sodic soils
When the Na+ (sodium) predominates, soils can become sodic. The pH of sodic soils may be acidic, neutral or alkaline.
Sodic soils present particular challenges because they tend to have very poor structure which limits or prevents water infiltration and drainage. They tend to accumulate certain elements like boron and molybdenum in the root zone at levels that may be toxic for plants. The most common compound used for reclamation of sodic soil is gypsum, and some plants that are tolerant to salt and ion toxicity may present strategies for improvement.
The term "sodic soil" is sometimes used imprecisely in scholarship. It's been used interchangeably with the term alkali soil, which is used in two meanings: 1) a soil with a pH greater than 8.2, 2) soil with an exchangeable sodium content above 15% of exchange capacity. The term "alkali soil" is often, but not always, used for soils that meet both of these characteristics.
Dry land salinity
Salinity in drylands can occur when the water table is between two and three metres from the surface of the soil. The salts from the groundwater are raised by capillary action to the surface of the soil. This occurs when groundwater is saline (which is true in many areas), and is favored by land use practices allowing more rainwater to enter the aquifer than it could accommodate. For example, the clearing of trees for agriculture is a major reason for dryland salinity in some areas, since deep rooting of trees has been replaced by shallow rooting of annual crops.
Salinity due to irrigation
Salinity from irrigation can occur over time wherever irrigation occurs, since almost all water (even natural rainfall) contains some dissolved salts. When the plants use the water, the salts are left behind in the soil and eventually begin to accumulate. This water in excess of plant needs is called the leaching fraction. Salination from irrigation water is also greatly increased by poor drainage and use of saline water for irrigating agricultural crops.
Salinity in urban areas often results from the combination of irrigation and groundwater processes. Irrigation is also now common in cities (gardens and recreation areas).
Consequences of soil salinity
The consequences of salinity are
Detrimental effects on plant growth and yield
Damage to infrastructure (roads, bricks, corrosion of pipes and cables)
Reduction of water quality for users, sedimentation problems, increased leaching of metals, especially copper, cadmium, manganese and zinc.
Soil erosion ultimately, when crops are too strongly affected by the amounts of salts.
More energy required to desalinate
Salinity is an important land degradation problem. Soil salinity can be reduced by leaching soluble salts out of soil with excess irrigation water. Soil salinity control involves watertable control and flushing in combination with tile drainage or another form of subsurface drainage. A comprehensive treatment of soil salinity is available from the United Nations Food and Agriculture Organization.
Salt tolerance of crops
High levels of soil salinity can be tolerated if salt-tolerant plants are grown. Sensitive crops lose their vigor even in slightly saline soils, most crops are negatively affected by (moderately) saline soils, and only salinity-resistant crops thrive in severely saline soils. The University of Wyoming and the Government of Alberta report data on the salt tolerance of plants.
Field data in irrigated lands, under farmers' conditions, are scarce, especially in developing countries. However, some on-farm surveys have been made in Egypt, India, and Pakistan. Some examples are shown in the following gallery, with crops arranged from sensitive to very tolerant.
Calcium has been found to have a positive effect in combating salinity in soils. It has been shown to ameliorate the negative effects that salinity has such as reduced water usage of plants.
Soil salinity activates genes associated with stress conditions for plants. These genes initiate the production of plant stress enzymes such as superoxide dismutase, L-ascorbate oxidase, and Delta 1 DNA polymerase. Limiting this process can be achieved by administering exogenous glutamine to plants. The decrease in the level of expression of genes responsible for the synthesis of superoxide dismutase increases with the increase in glutamine concentration.
Regions affected
From the FAO/UNESCO Soil Map of the World the following salinised areas can be derived.
See also
References
External links
Article on water and salt balances in the soil
Download leaching model for saline soils
Salt of the Earth Documentary produced by Prairie Public Television
Soil science
Salts
Environmental soil science
Energy conversion
Water and the environment | Soil salinity | [
"Chemistry",
"Environmental_science"
] | 1,271 | [
"Environmental soil science",
"Salts"
] |
64,669 | https://en.wikipedia.org/wiki/De%20Morgan%27s%20laws | In propositional logic and Boolean algebra, De Morgan's laws, also known as De Morgan's theorem, are a pair of transformation rules that are both valid rules of inference. They are named after Augustus De Morgan, a 19th-century British mathematician. The rules allow the expression of conjunctions and disjunctions purely in terms of each other via negation.
The rules can be expressed in English as:
The negation of "A and B" is the same as "not A or not B".
The negation of "A or B" is the same as "not A and not B".
or
The complement of the union of two sets is the same as the intersection of their complements
The complement of the intersection of two sets is the same as the union of their complements
or
not (A or B) = (not A) and (not B)
not (A and B) = (not A) or (not B)
where "A or B" is an "inclusive or" meaning at least one of A or B rather than an "exclusive or" that means exactly one of A or B.
Another form of De Morgan's law is the following as seen below.
Applications of the rules include simplification of logical expressions in computer programs and digital circuit designs. De Morgan's laws are an example of a more general concept of mathematical duality.
Formal notation
The negation of conjunction rule may be written in sequent notation:
¬(P ∧ Q) ⊢ (¬P ∨ ¬Q)
The negation of disjunction rule may be written as:
¬(P ∨ Q) ⊢ (¬P ∧ ¬Q)
In rule form: negation of conjunction
¬(P ∧ Q) ∴ ¬P ∨ ¬Q
and negation of disjunction
¬(P ∨ Q) ∴ ¬P ∧ ¬Q
and expressed as truth-functional tautologies or theorems of propositional logic:
¬(P ∧ Q) ↔ (¬P ∨ ¬Q),
¬(P ∨ Q) ↔ (¬P ∧ ¬Q),
where P and Q are propositions expressed in some formal system.
The generalized De Morgan's laws provide an equivalence for negating a conjunction or disjunction involving multiple terms. For a set of propositions P1, P2, ..., Pn, the generalized De Morgan's laws are as follows:
¬(P1 ∧ P2 ∧ ... ∧ Pn) ↔ (¬P1 ∨ ¬P2 ∨ ... ∨ ¬Pn)
¬(P1 ∨ P2 ∨ ... ∨ Pn) ↔ (¬P1 ∧ ¬P2 ∧ ... ∧ ¬Pn)
These laws generalize De Morgan's original laws for negating conjunctions and disjunctions.
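In a programming setting the generalized laws correspond to the equivalence of negating `all` with `any` of the negations, and vice versa. The following small check (an illustrative sketch, not part of the original text) verifies this exhaustively for a few sizes:

```python
from itertools import product

# Exhaustively check the generalized De Morgan's laws for up to 4 propositions.
for n in range(1, 5):
    for values in product([False, True], repeat=n):
        assert (not all(values)) == any(not v for v in values)   # negated conjunction
        assert (not any(values)) == all(not v for v in values)   # negated disjunction
print("Generalized De Morgan's laws hold for all tested assignments.")
```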
Substitution form
De Morgan's laws are normally shown in the compact form above, with the negation of the output on the left and negation of the inputs on the right. A clearer form for substitution can be stated as:
(P ∧ Q) ≡ ¬(¬P ∨ ¬Q)
(P ∨ Q) ≡ ¬(¬P ∧ ¬Q)
This emphasizes the need to invert both the inputs and the output, as well as change the operator when doing a substitution.
Set theory
In set theory, it is often stated as "union and intersection interchange under complementation", which can be formally expressed as:
\overline{A ∪ B} = \overline{A} ∩ \overline{B}
\overline{A ∩ B} = \overline{A} ∪ \overline{B}
where:
\overline{A} is the negation of A, the overline being written above the terms to be negated,
∩ is the intersection operator (AND),
∪ is the union operator (OR).
Unions and intersections of any number of sets
The generalized form is
\overline{⋃_{i∈I} A_i} = ⋂_{i∈I} \overline{A_i}    and    \overline{⋂_{i∈I} A_i} = ⋃_{i∈I} \overline{A_i},
where I is some, possibly countably or uncountably infinite, indexing set.
In set notation, De Morgan's laws can be remembered using the mnemonic "break the line, change the sign".
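A concrete finite-set check is straightforward; in the sketch below (the universe U and the sets A and B are arbitrary illustrative choices) complements are taken relative to U with set difference:

```python
U = set(range(10))          # universe of discourse
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

complement = lambda S: U - S

# Complement of a union equals intersection of complements, and vice versa.
assert complement(A | B) == complement(A) & complement(B)
assert complement(A & B) == complement(A) | complement(B)
print("De Morgan's laws verified for these sets.")
```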
Boolean algebra
In Boolean algebra, similarly, these laws can be formally expressed as:
\overline{a ∧ b} = \overline{a} ∨ \overline{b}
\overline{a ∨ b} = \overline{a} ∧ \overline{b}
where:
\overline{a} is the negation of a, the overline being written above the terms to be negated,
∧ is the logical conjunction operator (AND),
∨ is the logical disjunction operator (OR).
which can be generalized to
Engineering
In electrical and computer engineering, De Morgan's laws are commonly written as:
\overline{A · B} = \overline{A} + \overline{B}
and
\overline{A + B} = \overline{A} · \overline{B}
where:
· is the logical AND,
+ is the logical OR,
the overbar is the logical NOT of what is underneath the overbar.
Text searching
De Morgan's laws commonly apply to text searching using Boolean operators AND, OR, and NOT. Consider a set of documents containing the words "cats" and "dogs". De Morgan's laws hold that these two searches will return the same set of documents:
Search A: NOT (cats OR dogs)
Search B: (NOT cats) AND (NOT dogs)
The corpus of documents containing "cats" or "dogs" can be represented by four documents:
Document 1: Contains only the word "cats".
Document 2: Contains only "dogs".
Document 3: Contains both "cats" and "dogs".
Document 4: Contains neither "cats" nor "dogs".
To evaluate Search A, clearly the search "(cats OR dogs)" will hit on Documents 1, 2, and 3. So the negation of that search (which is Search A) will hit everything else, which is Document 4.
Evaluating Search B, the search "(NOT cats)" will hit on documents that do not contain "cats", which is Documents 2 and 4. Similarly the search "(NOT dogs)" will hit on Documents 1 and 4. Applying the AND operator to these two searches (which is Search B) will hit on the documents that are common to these two searches, which is Document 4.
A similar evaluation can be applied to show that the following two searches will both return Documents 1, 2, and 4:
Search C: NOT (cats AND dogs),
Search D: (NOT cats) OR (NOT dogs).
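The four-document example can be checked mechanically. In the sketch below (an illustrative model in which each document is reduced to the set of words it contains), Searches A and B return the same documents, as do Searches C and D:

```python
docs = {
    1: {"cats"},
    2: {"dogs"},
    3: {"cats", "dogs"},
    4: set(),            # contains neither word
}
all_ids = set(docs)

def hits(predicate):
    """Return the ids of documents whose word set satisfies the predicate."""
    return {d for d, words in docs.items() if predicate(words)}

search_a = all_ids - hits(lambda w: "cats" in w or "dogs" in w)       # NOT (cats OR dogs)
search_b = hits(lambda w: "cats" not in w) & hits(lambda w: "dogs" not in w)
search_c = all_ids - hits(lambda w: "cats" in w and "dogs" in w)      # NOT (cats AND dogs)
search_d = hits(lambda w: "cats" not in w) | hits(lambda w: "dogs" not in w)

assert search_a == search_b == {4}
assert search_c == search_d == {1, 2, 4}
print("Search A == Search B:", search_a, "| Search C == Search D:", search_c)
```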
History
The laws are named after Augustus De Morgan (1806–1871), who introduced a formal version of the laws to classical propositional logic. De Morgan's formulation was influenced by the algebraization of logic undertaken by George Boole, which later cemented De Morgan's claim to the discovery. Nevertheless, a similar observation was made by Aristotle, and was known to Greek and Medieval logicians. For example, in the 14th century, William of Ockham wrote down the words that would result by reading the laws out. Jean Buridan also describes rules of conversion that follow the lines of De Morgan's laws. Still, De Morgan is given credit for stating the laws in the terms of modern formal logic, and incorporating them into the language of logic. De Morgan's laws can be proved easily, and may even seem trivial. Nonetheless, these laws are helpful in making valid inferences in proofs and deductive arguments.
Proof for Boolean algebra
De Morgan's theorem may be applied to the negation of a disjunction or the negation of a conjunction in all or part of a formula.
Negation of a disjunction
In the case of its application to a disjunction, consider the following claim: "it is false that either of A or B is true", which is written as:
In that it has been established that neither A nor B is true, then it must follow that both A is not true and B is not true, which may be written directly as:
If either A or B were true, then the disjunction of A and B would be true, making its negation false. Presented in English, this follows the logic that "since two things are both false, it is also false that either of them is true".
Working in the opposite direction, the second expression asserts that A is false and B is false (or equivalently that "not A" and "not B" are true). Knowing this, a disjunction of A and B must be false also. The negation of said disjunction must thus be true, and the result is identical to the first claim.
Negation of a conjunction
The application of De Morgan's theorem to conjunction is very similar to its application to a disjunction both in form and rationale. Consider the following claim: "it is false that A and B are both true", which is written as:
In order for this claim to be true, either or both of A or B must be false, for if they both were true, then the conjunction of A and B would be true, making its negation false. Thus, one (at least) or more of A and B must be false (or equivalently, one or more of "not A" and "not B" must be true). This may be written directly as,
Presented in English, this follows the logic that "since it is false that two things are both true, at least one of them must be false".
Working in the opposite direction again, the second expression asserts that at least one of "not A" and "not B" must be true, or equivalently that at least one of A and B must be false. Since at least one of them must be false, then their conjunction would likewise be false. Negating said conjunction thus results in a true expression, and this expression is identical to the first claim.
Proof for set theory
Here we use \overline{A} to denote the complement of A, as in the set theory section above. The proof that \overline{A ∩ B} = \overline{A} ∪ \overline{B} is completed in 2 steps by proving both \overline{A ∩ B} ⊆ \overline{A} ∪ \overline{B} and \overline{A} ∪ \overline{B} ⊆ \overline{A ∩ B}.
Part 1
Let x ∈ \overline{A ∩ B}. Then, x ∉ A ∩ B.
Because x ∉ A ∩ B, it must be the case that x ∉ A or x ∉ B.
If x ∉ A, then x ∈ \overline{A}, so x ∈ \overline{A} ∪ \overline{B}.
Similarly, if x ∉ B, then x ∈ \overline{B}, so x ∈ \overline{A} ∪ \overline{B}.
Thus, every element of \overline{A ∩ B} also lies in \overline{A} ∪ \overline{B};
that is, \overline{A ∩ B} ⊆ \overline{A} ∪ \overline{B}.
Part 2
To prove the reverse direction, let x ∈ \overline{A} ∪ \overline{B}, and for contradiction assume x ∉ \overline{A ∩ B}.
Under that assumption, it must be the case that x ∈ A ∩ B,
so it follows that x ∈ A and x ∈ B, and thus x ∉ \overline{A} and x ∉ \overline{B}.
However, that means x ∉ \overline{A} ∪ \overline{B}, in contradiction to the hypothesis that x ∈ \overline{A} ∪ \overline{B};
therefore, the assumption must not be the case, meaning that x ∈ \overline{A ∩ B}.
Hence, every element of \overline{A} ∪ \overline{B} also lies in \overline{A ∩ B},
that is, \overline{A} ∪ \overline{B} ⊆ \overline{A ∩ B}.
Conclusion
If \overline{A ∩ B} ⊆ \overline{A} ∪ \overline{B} and \overline{A} ∪ \overline{B} ⊆ \overline{A ∩ B}, then \overline{A ∩ B} = \overline{A} ∪ \overline{B}; this concludes the proof of De Morgan's law.
The other De Morgan's law, \overline{A ∪ B} = \overline{A} ∩ \overline{B}, is proven similarly.
Generalising De Morgan duality
In extensions of classical propositional logic, the duality still holds (that is, to any logical operator one can always find its dual), since in the presence of the identities governing negation, one may always introduce an operator that is the De Morgan dual of another. This leads to an important property of logics based on classical logic, namely the existence of negation normal forms: any formula is equivalent to another formula where negations only occur applied to the non-logical atoms of the formula. The existence of negation normal forms drives many applications, for example in digital circuit design, where it is used to manipulate the types of logic gates, and in formal logic, where it is needed to find the conjunctive normal form and disjunctive normal form of a formula. Computer programmers use them to simplify or properly negate complicated logical conditions. They are also often useful in computations in elementary probability theory.
Let one define the dual of any propositional operator P(p, q, ...) depending on elementary propositions p, q, ... to be the operator defined by
Extension to predicate and modal logic
This duality can be generalised to quantifiers, so for example the universal quantifier and existential quantifier are duals:
∀x P(x) ≡ ¬∃x ¬P(x)
∃x P(x) ≡ ¬∀x ¬P(x)
To relate these quantifier dualities to the De Morgan laws, consider a domain of discourse D (with some small number of entities) to which properties are ascribed universally and existentially, such as
D = {a, b, c}.
Then express the universal quantifier equivalently by a conjunction of individual statements,
∀x P(x) ≡ P(a) ∧ P(b) ∧ P(c),
and the existential quantifier by a disjunction of individual statements,
∃x P(x) ≡ P(a) ∨ P(b) ∨ P(c).
But, using De Morgan's laws,
¬(P(a) ∧ P(b) ∧ P(c)) ≡ ¬P(a) ∨ ¬P(b) ∨ ¬P(c)
and
¬(P(a) ∨ P(b) ∨ P(c)) ≡ ¬P(a) ∧ ¬P(b) ∧ ¬P(c),
verifying the quantifier dualities in the model.
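Over a finite domain such as D, this verification can also be carried out by brute force; the sketch below (with the property P ranging over every possible assignment of truth values, purely for illustration) checks both dualities:

```python
from itertools import product

D = ["a", "b", "c"]

# Try every possible assignment of the property P over the domain D.
for truth_values in product([False, True], repeat=len(D)):
    P = dict(zip(D, truth_values))
    forall_P = all(P[x] for x in D)
    exists_P = any(P[x] for x in D)
    assert forall_P == (not any(not P[x] for x in D))   # forall x P(x)  <=>  not exists x not P(x)
    assert exists_P == (not all(not P[x] for x in D))   # exists x P(x)  <=>  not forall x not P(x)
print("Quantifier dualities verified over D =", D)
```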
Then, the quantifier dualities can be extended further to modal logic, relating the box ("necessarily") and diamond ("possibly") operators:
□p ≡ ¬◇¬p
◇p ≡ ¬□¬p
In its application to the alethic modalities of possibility and necessity, Aristotle observed this case, and in the case of normal modal logic, the relationship of these modal operators to the quantification can be understood by setting up models using Kripke semantics.
In intuitionistic logic
Three out of the four implications of de Morgan's laws hold in intuitionistic logic. Specifically, we have
and
The converse of the last implication does not hold in pure intuitionistic logic. That is, the failure of the joint proposition cannot necessarily be resolved to the failure of either of the two conjuncts. For example, from knowing it not to be the case that both Alice and Bob showed up to their date, it does not follow who did not show up. The latter principle is equivalent to the principle of the weak excluded middle ,
This weak form can be used as a foundation for an intermediate logic.
For a refined version of the failing law concerning existential statements, see the lesser limited principle of omniscience , which however is different from .
The validity of the other three De Morgan's laws remains true if negation is replaced by implication for some arbitrary constant predicate C, meaning that the above laws are still true in minimal logic.
Similarly to the above, the quantifier laws:
and
are tautologies even in minimal logic with negation replaced with implying a fixed , while the converse of the last law does not have to be true in general.
Further, one still has
but their inversion implies excluded middle, .
In computer engineering
De Morgan's laws are widely used in computer engineering and digital logic for the purpose of simplifying circuit designs.
In modern programming languages, due to the optimisation of compilers and interpreters, the performance difference between a negated compound condition and its De Morgan-transformed equivalent is negligible or completely absent.
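As a small illustration of such a rewrite (the variable names are invented for the example), negating a compound condition with De Morgan's laws leaves its value unchanged:

```python
logged_in, is_admin = True, False

# The guard "not (logged_in and is_admin)" can be rewritten with De Morgan's laws:
original  = not (logged_in and is_admin)
rewritten = (not logged_in) or (not is_admin)
assert original == rewritten

# Likewise for a negated disjunction:
assert (not (logged_in or is_admin)) == ((not logged_in) and (not is_admin))
print("Both forms of each condition agree:", original)
```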
See also
Conjunction/disjunction duality
Homogeneity (linguistics)
Isomorphism
List of Boolean algebra topics
List of set identities and relations
Positive logic
De Morgan algebra
References
External links
Duality in Logic and Language, Internet Encyclopedia of Philosophy.
Boolean algebra
Duality theories
Rules of inference
Articles containing proofs
Theorems in propositional logic | De Morgan's laws | [
"Mathematics"
] | 2,771 | [
"Boolean algebra",
"Mathematical structures",
"Proof theory",
"Mathematical logic",
"Rules of inference",
"Fields of abstract algebra",
"Theorems in propositional logic",
"Category theory",
"Duality theories",
"Geometry",
"Articles containing proofs",
"Theorems in the foundations of mathematic... |
64,685 | https://en.wikipedia.org/wiki/Post%20correspondence%20problem | The Post correspondence problem is an undecidable decision problem that was introduced by Emil Post in 1946. Because it is simpler than the halting problem and the Entscheidungsproblem it is often used in proofs of undecidability.
Definition of the problem
Let A be an alphabet with at least two symbols. The input of the problem consists of two finite lists α1, ..., αN and β1, ..., βN of words over A. A solution to this problem is a sequence of indices (i_k) with K ≥ 1 and 1 ≤ i_k ≤ N for all 1 ≤ k ≤ K, such that
α_{i_1} α_{i_2} ... α_{i_K} = β_{i_1} β_{i_2} ... β_{i_K}.
The decision problem then is to decide whether such a solution exists or not.
Alternative definition
This gives rise to an equivalent alternative definition often found in the literature, according to which any two homomorphisms g, h with a common domain and a common codomain form an instance of the Post correspondence problem, which now asks whether there exists a nonempty word w in the domain such that
g(w) = h(w).
Another definition describes this problem easily as a type of puzzle. We begin with a collection of dominos, each containing two strings, one on each side. An individual domino looks like
and a collection of dominos looks like
.
The task is to make a list of these dominos (repetition permitted) so that the string we get by reading off the symbols on the top is the same as the string of symbols on the bottom. This list is called a match. The Post correspondence problem is to determine whether a collection of dominos has a match.
For example, the following list is a match for this puzzle.
.
For some collections of dominos, finding a match may not be possible. For example, the collection
.
cannot contain a match because every top string is longer than the corresponding bottom string.
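Although deciding whether any match exists is undecidable in general, a proposed match is easy to verify. The sketch below (with a small illustrative instance and 0-based indices; the function name is hypothetical) simply concatenates the chosen tops and bottoms and compares them:

```python
def is_match(tops, bottoms, indices):
    """Return True if the chosen domino sequence reads the same on top and bottom."""
    top = "".join(tops[i] for i in indices)
    bottom = "".join(bottoms[i] for i in indices)
    return top == bottom

# Illustrative instance: tops (a, ab, bba) and bottoms (baa, aa, bb).
alpha = ["a", "ab", "bba"]
beta  = ["baa", "aa", "bb"]

print(is_match(alpha, beta, [2, 1, 2, 0]))   # True: both sides read "bbaabbbaa"
print(is_match(alpha, beta, [0, 1, 2]))      # False: the two sides differ
```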
Example instances of the problem
Example 1
Consider the following two lists:
α1 = a, α2 = ab, α3 = bba and β1 = baa, β2 = aa, β3 = bb.
A solution to this problem would be the sequence (3, 2, 3, 1), because
α3 α2 α3 α1 = bba · ab · bba · a = bbaabbbaa = bb · aa · bb · baa = β3 β2 β3 β1.
Furthermore, since (3, 2, 3, 1) is a solution, so are all of its "repetitions", such as (3, 2, 3, 1, 3, 2, 3, 1), etc.; that is, when a solution exists, there are infinitely many solutions of this repetitive kind.
However, if the two lists had consisted of only the second and third words from those sets, then there would have been no solution (the last letter of any such α string is not the same as the letter before it, whereas β only constructs pairs of the same letter).
A convenient way to view an instance of a Post correspondence problem is as a collection of blocks of the form
there being an unlimited supply of each type of block. Thus the above example is viewed as
i = 1
i = 2
i = 3
where the solver has an endless supply of each of these three block types. A solution corresponds to some way of laying blocks next to each other so that the string in the top cells corresponds to the string in the bottom cells. Then the solution to the above example corresponds to:
i1 = 3
i2 = 2
i3 = 3
i4 = 1
Example 2
Again using blocks to represent an instance of the problem, the following is an example that has infinitely many solutions in addition to the kind obtained by merely "repeating" a solution.
1
2
3
In this instance, every sequence of the form (1, 2, 2, . . ., 2, 3) is a solution (in addition to all their repetitions):
1
2
2
2
3
Proof sketch of undecidability
The most common proof for the undecidability of PCP describes an instance of PCP that can simulate the computation of an arbitrary Turing machine on a particular input. A match will occur if and only if the input would be accepted by the Turing machine. Because deciding if a Turing machine will accept an input is a basic undecidable problem, PCP cannot be decidable either. The following discussion is based on Michael Sipser's textbook Introduction to the Theory of Computation.
In more detail, the idea is that the string along the top and bottom will be a computation history of the Turing machine's computation. This means it will list a string describing the initial state, followed by a string describing the next state, and so on until it ends with a string describing an accepting state. The state strings are separated by some separator symbol (usually written #). According to the definition of a Turing machine, the full state of the machine consists of three parts:
The current contents of the tape.
The current state of the finite-state machine which operates the tape head.
The current position of the tape head on the tape.
Although the tape has infinitely many cells, only some finite prefix of these will be non-blank. We write these down as part of our state. To describe the state of the finite control, we create new symbols, labelled q1 through qk, for each of the finite-state machine's k states. We insert the correct symbol into the string describing the tape's contents at the position of the tape head, thereby indicating both the tape head's position and the current state of the finite control. For the alphabet {0,1}, a typical state might look something like:
101101110q700110.
A simple computation history would then look something like this:
q0101#1q401#11q21#1q810.
We start out with this block, where x is the input string and q0 is the start state:
The top starts out "lagging" the bottom by one state, and keeps this lag until the very end stage. Next, for each symbol a in the tape alphabet, as well as #, we have a "copy" block, which copies it unmodified from one state to the next:
We also have a block for each position transition the machine can make, showing how the tape head moves, how the finite state changes, and what happens to the surrounding symbols. For example, here the tape head is over a 0 in state 4, and then writes a 1 and moves right, changing to state 7:
Finally, when the top reaches an accepting state, the bottom needs a chance to finally catch up to complete the match. To allow this, we extend the computation so that once an accepting state is reached, each subsequent machine step will cause a symbol near the tape head to vanish, one at a time, until none remain. If qf is an accepting state, we can represent this with the following transition blocks, where a is a tape alphabet symbol:
There are a number of details to work out, such as dealing with boundaries between states, making sure that our initial tile goes first in the match, and so on, but this shows the general idea of how a static tile puzzle can simulate a Turing machine computation.
The previous example
q0101#1q401#11q21#1q810.
is represented as the following solution to the Post correspondence problem:
{| class="wikitable" style="text-align:center;"
| bgcolor="#55FF83" width="40" | q8 1
|-
| bgcolor="#87E6FF" width="40" | q8
|}
...
Variants
Many variants of PCP have been considered. One reason is that, when one tries to prove undecidability of some new problem by reducing from PCP, it often happens that the first reduction one finds is not from PCP itself but from an apparently weaker version.
The problem may be phrased in terms of monoid morphisms f, g from the free monoid B∗ to the free monoid A∗ where B is of size n. The problem is to determine whether there is a word w in B+ such that f(w) = g(w).
The condition that the alphabet have at least two symbols is required since the problem is decidable if the alphabet has only one symbol.
A simple variant is to fix n, the number of tiles. This problem is decidable if n ≤ 2, but remains undecidable for n ≥ 5. It is unknown whether the problem is decidable for 3 ≤ n ≤ 4.
The circular Post correspondence problem asks whether indexes i1, ..., iK can be found such that α_{i_1} ... α_{i_K} and β_{i_1} ... β_{i_K} are conjugate words, i.e., they are equal modulo rotation. This variant is undecidable.
One of the most important variants of PCP is the bounded Post correspondence problem, which asks if we can find a match using no more than k tiles, including repeated tiles. A brute force search solves the problem in time O(2k), but this may be difficult to improve upon, since the problem is NP-complete. Unlike some NP-complete problems like the boolean satisfiability problem, a small variation of the bounded problem was also shown to be complete for RNP, which means that it remains hard even if the inputs are chosen at random (it is hard on average over uniformly distributed inputs).
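A brute-force solver for the bounded variant can simply enumerate every tile sequence of length at most k, which makes the exponential cost explicit. The following sketch (the function name and test instances are illustrative, not from the article) does exactly that:

```python
from itertools import product

def bounded_pcp(tops, bottoms, k):
    """Search for a match using at most k tiles; return the index sequence or None."""
    n = len(tops)
    for length in range(1, k + 1):
        for indices in product(range(n), repeat=length):
            top = "".join(tops[i] for i in indices)
            bottom = "".join(bottoms[i] for i in indices)
            if top == bottom:
                return list(indices)
    return None

print(bounded_pcp(["a", "ab", "bba"], ["baa", "aa", "bb"], k=4))   # [2, 1, 2, 0]
print(bounded_pcp(["ab", "ba"], ["a", "b"], k=3))                  # None: every top is longer than its bottom
```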
Another variant of PCP is called the marked Post Correspondence Problem, in which each α_i must begin with a different symbol, and each β_i must also begin with a different symbol. Halava, Hirvensalo, and de Wolf showed that this variation is decidable in exponential time. Moreover, they showed that if this requirement is slightly loosened so that only one of the first two characters need to differ (the so-called 2-marked Post Correspondence Problem), the problem becomes undecidable again.
The Post Embedding Problem is another variant where one looks for indexes i1, ..., iK such that α_{i_1} ... α_{i_K} is a (scattered) subword of β_{i_1} ... β_{i_K}. This variant is easily decidable since, when some solutions exist, in particular a length-one solution exists. More interesting is the Regular Post Embedding Problem, a further variant where one looks for solutions that belong to a given regular language (submitted, e.g., under the form of a regular expression on the set of indexes). The Regular Post Embedding Problem is still decidable but, because of the added regular constraint, it has a very high complexity that dominates every multiply recursive function.
The Identity Correspondence Problem (ICP) asks whether a finite set of pairs of words (over a group alphabet) can generate an identity pair by a sequence of concatenations. The problem is undecidable and equivalent to the following Group Problem: is the semigroup generated by a finite set of pairs of words (over a group alphabet) a group?
References
External links
Eitan M. Gurari. An Introduction to the Theory of Computation'', Chapter 4, Post's Correspondence Problem. A proof of the undecidability of PCP based on Chomsky type-0 grammars.
Dong, Jing. "The Analysis and Solution of a PCP Instance." 2012 National Conference on Information Technology and Computer Science. The paper describes a heuristic rule for solving some specific PCP instances.
Online PHP Based PCP Solver
PCP AT HOME
PCP - a nice problem
PCP solver in Java
Post Correspondence Problem
Theory of computation
Computability theory
Undecidable problems | Post correspondence problem | [
"Mathematics"
] | 2,292 | [
"Mathematical logic",
"Computational problems",
"Undecidable problems",
"Computability theory",
"Mathematical problems"
] |
64,740 | https://en.wikipedia.org/wiki/Jacob%20Bernoulli | Jacob Bernoulli (also known as James in English or Jacques in French; – 16 August 1705) was one of the many prominent mathematicians in the Swiss Bernoulli family. He sided with Gottfried Wilhelm Leibniz during the Leibniz–Newton calculus controversy and was an early proponent of Leibnizian calculus, which he made numerous contributions to; along with his brother Johann, he was one of the founders of the calculus of variations. He also discovered the fundamental mathematical constant . However, his most important contribution was in the field of probability, where he derived the first version of the law of large numbers in his work Ars Conjectandi.
Biography
Jacob Bernoulli was born in Basel in the Old Swiss Confederacy. Following his father's wish, he studied theology and entered the ministry. But contrary to the desires of his parents, he also studied mathematics and astronomy. He traveled throughout Europe from 1676 to 1682, learning about the latest discoveries in mathematics and the sciences under leading figures of the time. This included the work of Johannes Hudde, Robert Boyle, and Robert Hooke. During this time he also produced an incorrect theory of comets.
Bernoulli returned to Switzerland, and began teaching mechanics at the University of Basel from 1683. His doctoral dissertation Solutionem tergemini problematis was submitted in 1684. It appeared in print in 1687.
In 1684, Bernoulli married Judith Stupanus; they had two children. During this decade, he also began a fertile research career. His travels allowed him to establish correspondence with many leading mathematicians and scientists of his era, which he maintained throughout his life. During this time, he studied the new discoveries in mathematics, including Christiaan Huygens's De ratiociniis in aleae ludo, Descartes' La Géométrie and Frans van Schooten's supplements of it. He also studied Isaac Barrow and John Wallis, leading to his interest in infinitesimal geometry. Apart from these, it was between 1684 and 1689 that many of the results that were to make up Ars Conjectandi were discovered.
He was appointed professor of mathematics at the University of Basel in 1687, remaining in this position for the rest of his life. By that time, he had begun tutoring his brother Johann Bernoulli on mathematical topics. The two brothers began to study the calculus as presented by Leibniz in his 1684 paper on the differential calculus in "Nova Methodus pro Maximis et Minimis" published in Acta Eruditorum. They also studied the publications of von Tschirnhaus. It must be understood that Leibniz's publications on the calculus were very obscure to mathematicians of that time and the Bernoullis were among the first to try to understand and apply Leibniz's theories.
Jacob collaborated with his brother on various applications of calculus. However the atmosphere of collaboration between the two brothers turned into rivalry as Johann's own mathematical genius began to mature, with both of them attacking each other in print, and posing difficult mathematical challenges to test each other's skills. By 1697, the relationship had completely broken down.
The lunar crater Bernoulli is also named after him jointly with his brother Johann.
Important works
Jacob Bernoulli's first important contributions were a pamphlet on the parallels of logic and algebra published in 1685, work on probability in 1685 and geometry in 1687. His geometry result gave a construction to divide any triangle into four equal parts with two perpendicular lines.
By 1689, he had published important work on infinite series and published his law of large numbers in probability theory. Jacob Bernoulli published five treatises on infinite series between 1682 and 1704. The first two of these contained many results, such as the fundamental result that the harmonic series ∑ 1/n diverges, which Bernoulli believed was new but which had actually been proved by Pietro Mengoli 40 years earlier and by Nicole Oresme as early as the 14th century. Bernoulli could not find a closed form for ∑ 1/n², but he did show that it converged to a finite limit less than 2. Euler was the first to find the limit of this series in 1737. Bernoulli also studied the exponential series which came out of examining compound interest.
In May 1690, in a paper published in Acta Eruditorum, Jacob Bernoulli showed that the problem of determining the isochrone is equivalent to solving a first-order nonlinear differential equation. The isochrone, or curve of constant descent, is the curve along which a particle will descend under gravity from any point to the bottom in exactly the same time, no matter what the starting point. It had been studied by Huygens in 1687 and Leibniz in 1689. After finding the differential equation, Bernoulli then solved it by what we now call separation of variables. Jacob Bernoulli's paper of 1690 is important for the history of calculus, since the term integral appears for the first time with its integration meaning. In 1696, Bernoulli solved the equation now called the Bernoulli differential equation:
y′ = p(x) y + q(x) y^n.
Jacob Bernoulli also discovered a general method to determine evolutes of a curve as the envelope of its circles of curvature. He also investigated caustic curves and in particular he studied these associated curves of the parabola, the logarithmic spiral and epicycloids around 1692. The lemniscate of Bernoulli was first conceived by Jacob Bernoulli in 1694. In 1695, he investigated the drawbridge problem which seeks the curve required so that a weight sliding along the cable always keeps the drawbridge balanced.
Bernoulli's most original work was Ars Conjectandi, published in Basel in 1713, eight years after his death. The work was incomplete at the time of his death but it is still a work of the greatest significance in the theory of probability. The book also covers other related subjects, including a review of combinatorics, in particular the work of van Schooten, Leibniz, and Prestet, as well as the use of Bernoulli numbers in a discussion of the exponential series. Inspired by Huygens' work, Bernoulli also gives many examples on how much one would expect to win playing various games of chance. The term Bernoulli trial resulted from this work.
In the last part of the book, Bernoulli sketches many areas of mathematical probability, including probability as a measurable degree of certainty; necessity and chance; moral versus mathematical expectation; a priori and a posteriori probability; expectation of winning when players are divided according to dexterity; regard of all available arguments, their valuation, and their calculable evaluation; and the law of large numbers.
Bernoulli was one of the most significant promoters of the formal methods of higher analysis. Astuteness and elegance are seldom found in his method of presentation and expression, but there is a maximum of integrity.
Discovery of the mathematical constant e
In 1683, Bernoulli discovered the constant e by studying a question about compound interest which required him to find the value of the following expression (which is in fact e):
lim_{n→∞} (1 + 1/n)^n
One example is an account that starts with $1.00 and pays 100 percent interest per year. If the interest is credited once, at the end of the year, the value is $2.00; but if the interest is computed and added twice in the year, the $1 is multiplied by 1.5 twice, yielding $1.00×1.52 = $2.25. Compounding quarterly yields $1.00×1.254 = $2.4414..., and compounding monthly yields $1.00×(1.0833...)12 = $2.613035....
Bernoulli noticed that this sequence approaches a limit (the force of interest) for more and smaller compounding intervals. Compounding weekly yields $2.692597..., while compounding daily yields $2.714567..., just two cents more. Using n as the number of compounding intervals, with interest of 100%/n in each interval, the limit for large n is the number that Euler later named e; with continuous compounding, the account value will reach $2.7182818.... More generally, an account that starts at $1, and yields (1+R) dollars at compound interest, will yield e^R dollars with continuous compounding.
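The limiting process Bernoulli studied is easy to reproduce; the short sketch below (an illustration, not Bernoulli's own computation) evaluates (1 + 1/n)^n for ever finer compounding:

```python
for n in (1, 2, 4, 12, 52, 365, 10_000, 1_000_000):
    print(f"{n:>9} compounding periods -> {(1 + 1 / n) ** n:.6f}")
# The printed values approach e = 2.718281828..., the limit Bernoulli was estimating.
```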
Tombstone
Bernoulli wanted a logarithmic spiral and the motto Eadem mutata resurgo ('Although changed, I rise again the same') engraved on his tombstone. He wrote that the self-similar spiral "may be used as a symbol, either of fortitude and constancy in adversity, or of the human body, which after all its changes, even after death, will be restored to its exact and perfect self". Bernoulli died in 1705, but an Archimedean spiral was engraved rather than a logarithmic one.
Translation of Latin inscription:
Jacob Bernoulli, the incomparable mathematician.
Professor at the University of Basel For more than 18 years;
member of the Royal Academies of Paris and Berlin; famous for his writings.
Of a chronic illness, of sound mind to the end;
succumbed in the year of grace 1705, the 16th of August, at the age of 50 years and 7 months, awaiting the resurrection.
Judith Stupanus,
his wife for 20 years,
and his two children have erected a monument to the husband and father they miss so much.
Works
(title roughly translates as "A new hypothesis for the system of comets".)
Ars conjectandi, opus posthumum, Basileae, impensis Thurnisiorum Fratrum, 1713.
Notes
References
Further reading
External links
Gottfried Leibniz and Jakob Bernoulli Correspondence Regarding the Art of Conjecturing"
1655 births
1705 deaths
17th-century apocalypticists
17th-century Swiss mathematicians
18th-century apocalypticists
18th-century writers in Latin
18th-century male writers
18th-century Swiss mathematicians
Jacob
Burials at Basel Münster
Members of the French Academy of Sciences
Number theorists
Scientists from Basel-Stadt
Probability theorists
Swiss mathematicians | Jacob Bernoulli | [
"Mathematics"
] | 2,097 | [
"Number theorists",
"Number theory"
] |
64,749 | https://en.wikipedia.org/wiki/Alva%20Myrdal | Alva Myrdal ( , ; née Reimer; 31 January 1902 – 1 February 1986) was a Swedish sociologist, diplomat and politician. She was a prominent leader of the disarmament movement. She, along with Alfonso García Robles, received the Nobel Peace Prize in 1982. She married Gunnar Myrdal in 1924; he received the Nobel Memorial Prize in Economic Sciences in 1974, making them the fourth ever married couple to have won Nobel Prizes, and the first to win independent of each other (versus a shared Nobel Prize by scientist spouses).
Biography
Early life and studies
Alva Myrdal was born in Uppsala and grew up as the first child of a modest family, the daughter of Albert Reimer and Lowa Jonsson. She had four siblings: Ruth (1904–1980), Folke (1906–1977), May (1909–1941) and Stig (1912–1977). Her father was a socialist and modern liberal. During her childhood the family moved around to different places. For example, they were residents of Eskilstuna, Älvsjö, and Stockholm. Her academic studies involved psychology and family sociology. She earned a Bachelor of Science degree in Stockholm in 1924.
In 1929, Myrdal and her husband Gunnar Myrdal had the opportunity to travel to the US as Rockefeller Fellows. Myrdal further deepened her studies in the fields of psychology, education and sociology whilst in the US. She had the special chance to broaden her knowledge of children's education. Myrdal's observation of the great social and economic disparities in the United States also led to an increased political commitment – "radical" was the term that she and her husband came to use to describe their shared political outlook. They then moved to Geneva for further studies, where they started to study the population decline that worried many Europeans during the interwar period.
Politics of the family and population issue
Myrdal first came to public attention in the 1930s, and was one of the main driving forces in the creation of the Swedish welfare state. She coauthored the book Crisis in the Population Question (with Gunnar Myrdal in 1934). The basic premise of Crisis in the Population Question is to find what social reforms are needed to allow for individual liberty (especially for women) while also promoting child-bearing, and encouraging Swedes to have children. The book also detailed the importance of shared responsibility for children's education, both between the parents and with the community through trained child educators.
Myrdal was highly critical of developments in the operation of preschools for children in Sweden. Consequently, she published the book Urban Children (1935), where she presented her ideas for a newly reformed Swedish preschool system. She argued that contemporary child care was flawed. The system was polarized between two extremes – measures of 'poor relief' for the less well-off contrasted with those measures which prepared children from wealthier families for private schools. She stressed that there were material obstacles in the way of being able to access a good education. Therefore, social and economic reforms were needed. Myrdal wanted to combine and integrate the two extremes.
A year later, she was able to put her theory into practice, as she became director of the National Educational Seminar, which she cofounded in 1936. She personally worked there as a teacher and pedagogue by training preschool teachers. Myrdal emphasized the lack of recent educational research in regards to preschool teacher training. Her teaching tried to integrate the new discoveries in child psychology in education. Social studies were also emphasized, as was women's personal development.
With architect Sven Markelius, Myrdal designed Stockholm's cooperative Collective House in 1937, with an eye towards developing more domestic liberty for women. She was a member of the Committee for Increased Women's Representation, founded in 1937 to increase women's political representation.
In 1938, Alva and Gunnar Myrdal moved to the United States. While in the US, Myrdal published the book Nation and Family (1941) concerning the Swedish family unit and population policy. During World War II, she also periodically lived in Sweden.
Postwar career takeoff
A long-time prominent member of the Swedish Social Democratic Party, in the late 1940s she became involved in international issues with the United Nations, appointed to head its section on welfare policy in 1949. From 1950 to 1955 she was chairman of UNESCO's social science section—the first woman to hold such prominent positions in the UN. In 1955–1956, she served as a Swedish envoy to New Delhi, India, Yangon, Myanmar and Colombo, Sri Lanka.
From 1951 she had collaborated with British-based sociologist Viola Klein and in 1958 they co-wrote the book Women's Two Roles: Home and Work, supported by the International Federation of University Women "to make an international survey of the needs for social reforms if women are to be put into a position to reconcile family and professional life".
In 1962, Myrdal was elected to the Riksdag, and the same year she was sent as the Swedish delegate to the UN disarmament conference in Geneva, a role she kept until 1973. During the negotiations in Geneva, she played an extremely active role, emerging as the leader of the group of nonaligned nations which endeavored to bring pressure to bear on the two superpowers (the US and the USSR) to show greater concern for concrete disarmament measures. Her experiences from the years spent in Geneva found an outlet in her book "The game of disarmament", in which she expresses her disappointment at the reluctance of the US and the USSR to disarm.
Myrdal participated in the creation of the Stockholm International Peace Research Institute, becoming the first chairman of the governing board in 1966. In 1967 she was also named consultative Cabinet minister for disarmament, an office she held until 1973. Myrdal also wrote the acclaimed book The Game of Disarmament, originally published in 1976. A vocal supporter of disarmament, Myrdal received the Nobel Peace Prize in 1982 together with Alfonso García Robles. In 1983 Myrdal effectively ended the heated controversy over the future of Adolf Fredrik's Music School, "The AF-fight" (Swedish: AF-striden).
Myrdal promoted reforms in child care and later became a government commission on women's work and chair of the Federation of Business and Professional Women.
Personal life
In 1924, she married Professor Gunnar Myrdal. Together they had children Jan Myrdal (born 1927), Sissela Bok (born 1934) and Kaj Fölster (born 1936).
Her grandchildren include Hilary Bok and Stefan Fölster.
Death
She died in 1986, the day after her 84th birthday.
Awards and honours
West German Peace Prize (1970; jointly with her husband Gunnar Myrdal)
Wateler Peace Prize (1973)
Royal Institute of Technology's Great Prize (1975)
Monismanien Prize (1976)
Albert Einstein Peace Prize (1980)
Jawaharlal Nehru Award for International Understanding (1981)
Nobel Peace Prize (1982; jointly with Alfonso García Robles)
Honorary degrees
Mount Holyoke College (1950)
University of Leeds, Doctor of Letters (1962)
University of Edinburgh (1964)
Columbia University, Doctor of Humane Letters (1965)
Temple University, Doctor of Humane Letters (1968)
Gustavus Adolphus College, Doctor of Divinity (1971)
Brandeis University, Doctor of Laws (19 May 1974)
University of Gothenburg, Doctor of Philosophy (1975)
University of East Anglia (1976)
University of Helsinki (1980)
University of Oslo (1981)
Linköping University, Doctor of Medicine (1982)
Memberships
Member of the American Philosophical Society (1982)
See also
List of female Nobel laureates
Social engineering (political science)
References
Further reading
External links
Alva Myrdal at Svenskt biografiskt lexikon
1902 births
1986 deaths
Ambassadors of Sweden to India
Ambassadors of Sweden to Myanmar
Ambassadors of Sweden to Nepal
Ambassadors of Sweden to Sri Lanka
People from Uppsala
Swedish Social Democratic Party politicians
Nobel Peace Prize laureates
Uppsala University alumni
Swedish Nobel laureates
Swedish pacifists
Women members of the Riksdag
Women Nobel laureates
Swedish women sociologists
Swedish sociologists
Members of the Första kammaren
20th-century women scientists
20th-century Swedish women politicians
20th-century Swedish politicians
Swedish women ambassadors
Swedish anti–nuclear weapons activists
Members of the American Philosophical Society
Women's International League for Peace and Freedom people | Alva Myrdal | [
"Technology"
] | 1,726 | [
"Women Nobel laureates",
"Women in science and technology"
] |
64,750 | https://en.wikipedia.org/wiki/Computer%20language | A computer language is a formal language used to communicate with a computer. Types of computer languages include:
Construction language – all forms of communication by which a human can specify an executable problem solution to a computer
Command language – a language used to control the tasks of the computer itself, such as starting programs
Configuration language – a language used to write configuration files
Programming language – a formal language designed to communicate instructions to a machine, particularly a computer
Scripting language – a type of programming language which typically is interpreted at runtime rather than being compiled
Query language – a language used to make queries in databases and information systems
Transformation language – designed to transform some input text in a certain formal language into a modified output text that meets some specific goal
Data exchange language – a language that is domain-independent and can be used for data from any kind of discipline; examples: JSON, XML
Markup language – a grammar for annotating a document in a way that is syntactically distinguishable from the text, such as HTML
Modeling language – an artificial language used to express information or knowledge, often for use in computer system design
Architecture description language – used as a language (or a conceptual model) to describe and represent system architectures
Hardware description language – used to model integrated circuits
Page description language – describes the appearance of a printed page in a higher level than an actual output bitmap
Simulation language – a language used to describe simulations
Specification language – a language used to describe what a system should do
Style sheet language – a computer language that expresses the presentation of structured documents, such as CSS
See also
Serialization
Domain-specific language – a language specialized to a particular application domain
Expression language
General-purpose language – a language that is broadly applicable across application domains and lacks specialized features for a particular domain
Lists of programming languages
Natural language processing – the use of computers to process text or speech in human language
External links | Computer language | [
"Technology"
] | 384 | [
"Computer science",
"Computer languages"
] |
64,814 | https://en.wikipedia.org/wiki/Andr%C3%A9-Louis%20Danjon | André-Louis Danjon (; 6 April 1890 – 21 April 1967) was a French astronomer who served as director of the Observatory of Strasbourg from 1930 to 1945 and of the Paris Observatory from 1945 to 1963. He developed several astronomical instruments to examine the regularity of the rotation of the earth and among his discoveries was an acceleration of the rotation of the Earth during periods of intense solar activity occurring in 11-year cycles correlated with an increase in earthquakes. The Danjon scale is used for measuring the intensity of lunar eclipses. He noted an increase in the number of dark lunar eclipses with solar activity which is termed as the Danjon effect.
Life and work
Danjon was born in Caen to drapers Louis Dominique Danjon and Marie Justine Binet. He studied at the Lycée Malherbe and then went to the École Normale Supérieure, during which time he worked at the observatory of the Société astronomique de France. He graduated in 1914 and was conscripted into the army during World War I. He served under Ernest Esclangon and lost an eye in combat in Champagne. He received war honours in 1915, and in 1919 he was appointed aide-astronome at Strasbourg University. He took up duties as an observer at the Strasbourg meridian observatory and began to work on the improvement of the observatory. He was involved in establishing a new observatory, the Observatoire de Haute-Provence, which became operational in 1923.
Danjon devised a method to measure "earthshine" on the dark side of the Moon using a telescope in which a prism split the Moon's image into two identical side-by-side images. By adjusting a diaphragm to dim one of the images until the sunlit portion had the same apparent brightness as the earthlit portion on the unadjusted image, he could quantify the diaphragm adjustment, and thus had a real measurement for the brightness of earthshine. He recorded measurements using his method (now known as the Danjon scale, on which zero equates to a barely visible Moon) from 1925 until the 1950s. He extended similar methods to study the albedo of Venus and Mercury, which became the subject of his doctoral dissertation Recherches de photométrie astronomique (1928) at Paris University. In 1930 he succeeded Ernest Esclangon as director of the Strasbourg Observatory. He was also appointed as a professor at Strasbourg University. In 1939, the German invasion forced the faculty to move to Clermont-Ferrand, near Vichy. He was arrested in November 1943; he escaped being sent to Auschwitz and was released in January. After World War II, Esclangon retired from his position at the Paris Observatory and Danjon replaced him. Here he taught at the Sorbonne. In the 1960s he persuaded the government to establish the European Southern Observatory sites at La Silla and Paranal. He also supported the establishment of radio astronomy at Nançay in 1956.
Among his notable contributions to astronomy was the design of the impersonal (prismatic) astrolabe based on an earlier prismatic astrolabe developed by François Auguste Claude which is now known as the Danjon astrolabe, which led to an improvement in the accuracy of fundamental optical astrometry. An account of this instrument, and of the results of some early years of its operation, are given in Danjon's 1958 George Darwin Lecture to the Royal Astronomical Society.
The "Danjon limit", a proposed measure of the minimum angular separation between the Sun and the Moon at which a lunar crescent is visible is named after him. However, this limit may not exist. The Danjon effect is a name given for his observation that there is an increase in the number of "dark" total lunar eclipses during the 11 year solar sunspot maxima. He developed an astrolabe to identify irregularity in the rotational periodicity and concluded that there was increases in the Earth's rotation during intense solar activity. He suggested that the atmospheric darkness might be due to an increase in aerosols in the atmosphere due to increased volcanic activity.
Danjon was the President of the Société astronomique de France (SAF), the French astronomical society, during two periods: 1947–49 and 1962–64. He was awarded the Prix Jules Janssen of the Société astronomique de France in 1950, and the Gold Medal of the Royal Astronomical Society in 1958. In 1946 he was made Officier of the Légion d'honneur and in 1954 he was made Commandeur.
Danjon died in 1967 in Suresnes, Hauts-de-Seine. He was married to Madeleine Renoult (m. 1919, died 1965) and they had four children.
References
20th-century French astronomers
Members of the French Academy of Sciences
Scientists from Caen
Recipients of the Gold Medal of the Royal Astronomical Society
Academic staff of the University of Strasbourg
1890 births
1967 deaths
Presidents of the International Astronomical Union | André-Louis Danjon | [
"Astronomy"
] | 1,011 | [
"Astronomers",
"Presidents of the International Astronomical Union"
] |
64,861 | https://en.wikipedia.org/wiki/Snowball | A snowball is a spherical object made from snow, usually created by scooping snow with the hands and pressing the snow together to compact it into a ball. Snowballs are often used in games such as snowball fights.
A snowball may also be a large ball of snow formed by rolling a smaller snowball on a snow-covered surface. The smaller snowball grows by picking up additional snow as it rolls. The terms "snowball effect" and "snowballing" are derived from this process. The Welsh dance "Y Gasseg Eira" also takes its name from an analogy with rolling a large snowball. This method of forming a large snowball is often used to create the components needed to build a snowman.
The underlying physical process that makes snowballs possible is sintering, in which a solid mass is compacted while near the melting point. Scientific theories about snowball formation began with a lecture by Michael Faraday in 1842, examining the attractive forces between ice particles. An influential early explanation by James Thomson invoked regelation, in which a solid is melted by pressure and then re-frozen.
When and how
When forming a snowball by packing, the pressure exerted by the hands on the snow is a determinant for the final result. Reduced pressure leads to a light and soft snowball. Compacting humid or "packing" snow by applying a high pressure produces a harder snowball, sometimes called an ice ball, which can injure an opponent during a snowball fight.
Temperature is important for snowball formation. It is hard to make a good snowball if the snow is too cold. In addition, snowballs are difficult to form with dry powdery snow. In temperatures below , there is little free water in the snow, which leads to crumbly snowballs. At or above, melted water in the snow results in better cohesion. Above a certain temperature, however, the snowball becomes slush, which lacks mechanical strength and no longer sticks together. This effect is used in the rule that, in skiing areas, there is a high risk of avalanche if it is possible to squeeze water out of a snowball.
Natural snowballs
Under certain unusual circumstances, natural snowballs form as a result of wind, without human intervention. These circumstances are:
The ground must have a top layer of ice. This will prevent the snowball from sticking to the ground.
That ice must have some wet and loose snow that is near its melting point.
The wind must be strong enough to push the snowballs, but not too strong.
In Antarctica, small windblown frost balls form through a different process that relies on electrostatic attraction; these wind-rolled frost balls are known as yukimarimo.
Under other rare circumstances, in coastal and river areas, wave action on ice and snow may create beach snowballs or ball ice.
Snow lanterns
A snow lantern is a decorative structure made from snowballs, typically shaped into a hollow cone. It is commonly used as a housing for a light source, such as a candle or a Japanese stone garden lantern known as Yukimi Gata. Snow lanterns are part of winter traditions in countries such as Sweden, Finland, and Norway, where they are created and lit during the Christmas season. These structures illuminate the winter landscape and are associated with festive celebrations in snowy regions.
Literary allusion
A snowball that turns into a child is a protagonist in a 1969 children's fantasy novel, The Snowball, by Barbara Sleigh.
Gallery
References
Ball
Play (activity) | Snowball | [
"Biology"
] | 722 | [
"Play (activity)",
"Behavior",
"Human behavior"
] |
64,919 | https://en.wikipedia.org/wiki/Environmental%20science | Environmental science is an interdisciplinary academic field that integrates physics, biology, meteorology, mathematics and geography (including ecology, chemistry, plant science, zoology, mineralogy, oceanography, limnology, soil science, geology and physical geography, and atmospheric science) to the study of the environment, and the solution of environmental problems. Environmental science emerged from the fields of natural history and medicine during the Enlightenment. Today it provides an integrated, quantitative, and interdisciplinary approach to the study of environmental systems.
Environmental studies incorporates more of the social sciences for understanding human relationships, perceptions and policies towards the environment. Environmental engineering focuses on design and technology for improving environmental quality in every aspect.
Environmental scientists seek to understand the earth's physical, chemical, biological, and geological processes, and to use that knowledge to understand how issues such as alternative energy systems, pollution control and mitigation, natural resource management, and the effects of global warming and climate change influence and affect the natural systems and processes of earth.
Environmental issues almost always include an interaction of physical, chemical, and biological processes. Environmental scientists bring a systems approach to the analysis of environmental problems. Key skills of an effective environmental scientist include the ability to relate spatial and temporal relationships as well as to perform quantitative analysis.
Environmental science came alive as a substantive, active field of scientific investigation in the 1960s and 1970s, driven by (a) the need for a multi-disciplinary approach to analyze complex environmental problems, (b) the arrival of substantive environmental laws requiring specific environmental protocols of investigation, and (c) the growing public awareness of a need for action in addressing environmental problems. Events that spurred this development included the publication of Rachel Carson's landmark environmental book Silent Spring, along with major environmental issues becoming very public, such as the 1969 Santa Barbara oil spill and the Cuyahoga River in Cleveland, Ohio, "catching fire" (also in 1969); these events helped increase the visibility of environmental issues and create this new field of study.
Terminology
In common usage, "environmental science" and "ecology" are often used interchangeably, but technically, ecology refers only to the study of organisms and their interactions with each other as well as how they interrelate with environment. Ecology could be considered a subset of environmental science, which also could involve purely chemical or public health issues (for example) ecologists would be unlikely to study. In practice, there are considerable similarities between the work of ecologists and other environmental scientists. There is substantial overlap between ecology and environmental science with the disciplines of fisheries, forestry, and wildlife.
History
Ancient civilizations
Historical concern for environmental issues is well documented in archives around the world. Ancient civilizations were mainly concerned with what is now known as environmental science insofar as it related to agriculture and natural resources. Scholars believe that early interest in the environment began around 6000 BCE when ancient civilizations in Israel and Jordan collapsed due to deforestation. As a result, in 2700 BCE the first legislation limiting deforestation was established in Mesopotamia. Two hundred years later, in 2500 BCE, a community residing in the Indus River Valley observed the nearby river system in order to improve sanitation. This involved manipulating the flow of water to account for public health. In the Western Hemisphere, numerous ancient Central American city-states collapsed around 1500 BCE due to soil erosion from intensive agriculture. Those remaining from these civilizations paid greater attention to the impact of farming practices on the sustainability of the land and its stable food production. Furthermore, in 1450 BCE the Minoan civilization on the Greek island of Crete declined due to deforestation and the resulting environmental degradation of natural resources. Pliny the Elder somewhat addressed the environmental concerns of ancient civilizations in the text Naturalis Historia, written between 77 and 79 CE, which provided an overview of many related subsets of the discipline.
Although warfare and disease were of primary concern in ancient society, environmental issues played a crucial role in the survival and power of different civilizations. As more communities recognized the importance of the natural world to their long-term success, an interest in studying the environment came into existence.
Beginnings of environmental science
18th century
In 1735, the concept of binomial nomenclature was introduced by Carolus Linnaeus as a way to classify all living organisms, influenced by the earlier works of Aristotle. His text, Systema Naturae, represents one of the earliest culminations of knowledge on the subject, providing a means to identify different species based partially on how they interact with their environment.
19th century
In the 1820s, scientists were studying the properties of gases, particularly those in the Earth's atmosphere and their interactions with heat from the Sun. Later that century, studies suggested that the Earth had experienced an Ice Age and that warming of the Earth was partially due to what are now known as greenhouse gases (GHG). The greenhouse effect was introduced, although climate science was not yet recognized as an important topic in environmental science due to minimal industrialization and lower rates of greenhouse gas emissions at the time.
20th century
In the 1900s, the discipline of environmental science as it is known today began to take shape. The century is marked by significant research, literature, and international cooperation in the field.
In the early 20th century, criticism from dissenters downplayed the effects of global warming. At this time, few researchers were studying the dangers of fossil fuels. After a 1.3 degrees Celsius temperature anomaly was found in the Atlantic Ocean in the 1940s, however, scientists renewed their studies of gaseous heat trapping from the greenhouse effect (although only carbon dioxide and water vapor were known to be greenhouse gases then). Nuclear development following the Second World War allowed environmental scientists to intensively study the effects of carbon and make advancements in the field. Further knowledge from archaeological evidence brought to light the changes in climate over time, particularly ice core sampling.
Environmental science was brought to the forefront of society in 1962 when Rachel Carson published an influential piece of environmental literature, Silent Spring. Carson's writing led the American public to pursue environmental safeguards, such as bans on harmful chemicals like the insecticide DDT. Another important work, The Tragedy of the Commons, was published by Garrett Hardin in 1968 in response to accelerating natural degradation. In 1969, environmental science once again became a household term after two striking disasters: Ohio's Cuyahoga River caught fire due to the amount of pollution in its waters and a Santa Barbara oil spill endangered thousands of marine animals, both receiving prolific media coverage. Consequently, the United States passed an abundance of legislation, including the Clean Water Act and the Great Lakes Water Quality Agreement. The following year, in 1970, the first ever Earth Day was celebrated worldwide and the United States Environmental Protection Agency (EPA) was formed, legitimizing the study of environmental science in government policy. In the next two years, the United Nations created the United Nations Environment Programme (UNEP) in Stockholm, Sweden to address global environmental degradation.
Much of the interest in environmental science throughout the 1970s and the 1980s was characterized by major disasters and social movements. In 1978, hundreds of people were relocated from Love Canal, New York after carcinogenic pollutants were found to be buried underground near residential areas. The next year, in 1979, the nuclear power plant on Three Mile Island in Pennsylvania suffered a meltdown and raised concerns about the dangers of radioactive waste and the safety of nuclear energy. In response to landfills and toxic waste often disposed of near their homes, the official Environmental Justice Movement was started by a Black community in North Carolina in 1982. Two years later, toxic methyl isocyanate gas was released from a pesticide plant disaster in Bhopal, India, harming hundreds of thousands of people living near the disaster site, the effects of which are still felt today. In a groundbreaking discovery in 1985, a British team of researchers studying Antarctica found evidence of a hole in the ozone layer, inspiring global agreements banning the use of chlorofluorocarbons (CFCs), which were previously used in nearly all aerosols and refrigerants. Notably, in 1986, the meltdown at the Chernobyl nuclear power plant in Ukraine released radioactive waste to the public, leading to international studies on the ramifications of environmental disasters. Over the next couple of years, the Brundtland Commission (formally the World Commission on Environment and Development) published a report titled Our Common Future, the Montreal Protocol was adopted to phase out ozone-depleting substances, and the Intergovernmental Panel on Climate Change (IPCC) was formed as international communication focused on finding solutions for climate change and degradation. In the late 1980s, Exxon was fined after its tanker Exxon Valdez spilled large quantities of crude oil off the coast of Alaska, and the resulting cleanup involved the work of environmental scientists. After hundreds of oil wells were set on fire during the 1991 Gulf War between Iraq and Kuwait, the surrounding atmosphere was polluted to levels just below the air quality threshold believed to be life-threatening.
21st century
Many niche disciplines of environmental science have emerged over the years, although climatology is one of the best-known topics. Since the 2000s, environmental scientists have focused on modeling the effects of climate change and encouraging global cooperation to minimize potential damages. In 2002, the Society for the Environment as well as the Institute of Air Quality Management were founded to share knowledge and develop solutions around the world. Later, in 2008, the United Kingdom became the first country to pass legislation (the Climate Change Act) that aims to reduce carbon dioxide output to a specified threshold. In 2016 the Paris Agreement, the successor to the Kyoto Protocol, entered into force; it sets concrete goals to reduce greenhouse gas emissions and aims to restrict the rise in Earth's temperature to a maximum of 2 degrees Celsius. The agreement is one of the most expansive international efforts to limit the effects of global warming to date.
Most environmental disasters in this time period involve crude oil pollution or the effects of rising temperatures. In 2010, BP was responsible for the largest American oil spill in the Gulf of Mexico, known as the Deepwater Horizon spill, which killed a number of the company's workers and released large amounts of crude oil into the water. Furthermore, throughout this century, much of the world has been ravaged by widespread wildfires and water scarcity, prompting regulations on the sustainable use of natural resources as determined by environmental scientists.
The 21st century is marked by significant technological advancements. New technology in environmental science has transformed how researchers gather information about various topics in the field. Research in engines, fuel efficiency, and decreasing emissions from vehicles since the times of the Industrial Revolution has reduced the amount of carbon and other pollutants into the atmosphere. Furthermore, investment in researching and developing clean energy (i.e. wind, solar, hydroelectric, and geothermal power) has significantly increased in recent years, indicating the beginnings of the divestment from fossil fuel use. Geographic information systems (GIS) are used to observe sources of air or water pollution through satellites and digital imagery analysis. This technology allows for advanced farming techniques like precision agriculture as well as monitoring water usage in order to set market prices. In the field of water quality, developed strains of natural and manmade bacteria contribute to bioremediation, the treatment of wastewaters for future use. This method is more eco-friendly and cheaper than manual cleanup or treatment of wastewaters. Most notably, the expansion of computer technology has allowed for large data collection, advanced analysis, historical archives, public awareness of environmental issues, and international scientific communication. The ability to crowdsource on the Internet, for example, represents the process of collectivizing knowledge from researchers around the world to create increased opportunity for scientific progress. With crowdsourcing, data is released to the public for personal analyses which can later be shared as new information is found. Another technological development, blockchain technology, monitors and regulates global fisheries. By tracking the path of fish through global markets, environmental scientists can observe whether certain species are being overharvested to the point of extinction. Additionally, remote sensing allows for the detection of features of the environment without physical intervention. The resulting digital imagery is used to create increasingly accurate models of environmental processes, climate change, and much more. Advancements to remote sensing technology are particularly useful in locating the nonpoint sources of pollution and analyzing ecosystem health through image analysis across the electromagnetic spectrum. Lastly, thermal imaging technology is used in wildlife management to catch and discourage poachers and other illegal wildlife traffickers from killing endangered animals, proving useful for conservation efforts. Artificial intelligence has also been used to predict the movement of animal populations and protect the habitats of wildlife.
Components
Atmospheric sciences
Atmospheric sciences focus on the Earth's atmosphere, with an emphasis upon its interrelation to other systems. Atmospheric sciences can include studies of meteorology, greenhouse gas phenomena, atmospheric dispersion modeling of airborne contaminants, sound propagation phenomena related to noise pollution, and even light pollution.
Taking the example of the global warming phenomena, physicists create computer models of atmospheric circulation and infrared radiation transmission, chemists examine the inventory of atmospheric chemicals and their reactions, biologists analyze the plant and animal contributions to carbon dioxide fluxes, and specialists such as meteorologists and oceanographers add additional breadth in understanding the atmospheric dynamics.
Ecology
As defined by the Ecological Society of America, "Ecology is the study of the relationships between living organisms, including humans, and their physical environment; it seeks to understand the vital connections between plants and animals and the world around them." Ecologists might investigate the relationship between a population of organisms and some physical characteristic of their environment, such as concentration of a chemical; or they might investigate the interaction between two populations of different organisms through some symbiotic or competitive relationship. For example, an interdisciplinary analysis of an ecological system which is being impacted by one or more stressors might include several related environmental science fields. In an estuarine setting where a proposed industrial development could impact certain species by water and air pollution, biologists would describe the flora and fauna, chemists would analyze the transport of water pollutants to the marsh, physicists would calculate air pollution emissions and geologists would assist in understanding the marsh soils and bay muds.
Environmental chemistry
Environmental chemistry is the study of chemical alterations in the environment. Principal areas of study include soil contamination and water pollution. The topics of analysis include chemical degradation in the environment, multi-phase transport of chemicals (for example, evaporation of a solvent containing lake to yield solvent as an air pollutant), and chemical effects upon biota.
As an example study, consider the case of a leaking solvent tank which has entered the habitat soil of an endangered species of amphibian. As a method to resolve or understand the extent of soil contamination and subsurface transport of solvent, a computer model would be implemented. Chemists would then characterize the molecular bonding of the solvent to the specific soil type, and biologists would study the impacts upon soil arthropods, plants, and ultimately pond-dwelling organisms that are the food of the endangered amphibian.
Geosciences
Geosciences include environmental geology, environmental soil science, volcanic phenomena and evolution of the Earth's crust. In some classification systems this can also include hydrology, including oceanography.
As an example study, of soils erosion, calculations would be made of surface runoff by soil scientists. Fluvial geomorphologists would assist in examining sediment transport in overland flow. Physicists would contribute by assessing the changes in light transmission in the receiving waters. Biologists would analyze subsequent impacts to aquatic flora and fauna from increases in water turbidity.
Regulations driving the studies
In the United States the National Environmental Policy Act (NEPA) of 1969 set forth requirements for analysis of federal government actions (such as highway construction projects and land management decisions) in terms of specific environmental criteria. Numerous state laws have echoed these mandates, applying the principles to local-scale actions. The upshot has been an explosion of documentation and study of environmental consequences before the fact of development actions.
One can examine the specifics of environmental science by reading examples of Environmental Impact Statements prepared under NEPA, such as: wastewater treatment expansion options discharging into the San Diego/Tijuana Estuary, expansion of the San Francisco International Airport, development of the Houston Metro transportation system, expansion of the metropolitan Boston MBTA transit system, and construction of Interstate 66 through Arlington, Virginia.
In England and Wales the Environment Agency (EA), formed in 1996, is a public body for protecting and improving the environment and enforces the regulations listed on the Communities and Local Government site (formerly the Office of the Deputy Prime Minister). The agency was set up under the Environment Act 1995 as an independent body and works closely with the UK Government to enforce the regulations.
See also
Environmental monitoring
Environmental planning
Environmental statistics
Environmental informatics
Glossary of environmental science
List of environmental studies topics
References
External links
Glossary of environmental terms – Global Development Research Center
Earth sciences | Environmental science | [
"Environmental_science"
] | 3,408 | [
"nan"
] |
16,406,885 | https://en.wikipedia.org/wiki/Proportionator | The proportionator is the most efficient unbiased stereological method used to estimate population size in samples.
A typical application is counting the number of cells in an organ. The proportionator is related to the optical fractionator and physical dissector methods, which also estimate population size. The optical fractionator and physical dissector use a sampling method called systematic uniform random sampling, or SURS. Unlike these two methods, the proportionator introduces sampling with probability proportional to size, or PPS. With SURS all sampling sites are treated equally; with PPS, sites are not sampled with the same probability. The reason for using PPS is to improve the efficiency of the estimation process.
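As a rough sketch of SURS (with hypothetical fields of view on a slide, not taken from the stereological literature), the method picks every k-th candidate site after a single random start, so every site has the same inclusion probability:

import random

def surs(sites, period):
    # Systematic uniform random sampling: one random start, then a fixed step.
    # Every site has the same inclusion probability, 1 / period.
    start = random.randrange(period)
    return sites[start::period]

fields = list(range(100))          # hypothetical candidate fields of view on a slide
sampled = surs(fields, period=10)  # about one tenth of the fields end up in the sample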
Efficiency is the notion of how much is gained for a given amount of work. A more efficient method provides better results for the same amount of work. The proportionator provides a more precise estimate than either the optical fractionator or the physical dissector. The PPS is implemented by assigning a value to each sampling site. This value is the characteristic of the sampling site. The proportionator becomes the optical fractionator if the characteristic is constant, i.e. the same, for all sampling sites: if there is no difference between sampling sites, the proportionator behaves the same as the optical fractionator. In actual sampling, the characteristic varies across the tissue being studied, and information about the distribution of the characteristic is used to refine the sampling. The greater the variance of the characteristic, the greater the efficiency of the proportionator. What this means to the stereologist is simple: if you need to count more and more to get the CE needed to publish, stop and switch to the proportionator.
The proportionator is a patented process that is not generally available. The only current licensee for the patent is Visiopharm.
Introduction
The proportionator is the de facto standard method used to count cells in large projects. The increased efficiency provided by the proportionator makes more work-intensive methods such as the optical fractionator less attractive except in small projects.
A common misconception in the stereological literature is that design-based methodologies require that all objects of interest have the same probability of being selected. It is true that making such a design decision ensures an unbiased result, but it is not necessary. Nonuniform sampling is often used in stereological work. The point-sampled intercept method selects cells using a point probe. The result is a volume-weighted estimate of the size of the cells. This is not a biased result.
A sampling method known as probability proportional to size, or PPS, selects objects based on a characteristic that differs between objects. An excellent example of this is the selection of trees based on their diameter, or selecting a cell based on volume. The PSI selects cells with points. DeVries estimators select trees with lines. Sections select objects based on their height. These are examples of objects being selected with varying probability by probes. In these examples the characteristic is a function of the objects themselves. That does not have to be the case.
The proportionator applies PPS to counting cells. The PPS is employed to gain efficiency in the sampling, and not to produce a weighted estimate, such as a volume weighted estimate. The optical fractionator is the older standard for estimating the number of cells in an unbiased manner. The optical fractionator, and other sampling methods, has some statistical uncertainty. This uncertainty is due to the variance of the sampling even though the result is unbiased. The efficiency of the sampling can be determined by use of the coefficient of error, or CE. This value describes the variance of the sampling method. Often, biological sampling is done at a CE of .05.
The efficiency of a sampling method is the amount of work it takes to obtain a desired CE. A more efficient method is one that requires less work to obtain a desired CE. A method is less efficient if the same amount of work results in a larger CE.
Suppose that every sample always gave the same result. There would be no difference between samples. This means that the variance in this case is 0. No more than 1 sample would be required to obtain a good result. (Understand that this might not be efficient if the sampling requires a great deal of work and there is no need for a CE this low.) If samples differ, then the variance is positive, and so is the CE.
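As a simplified illustration (hypothetical counts, and treating the samples as independent, which systematic samples strictly are not; the stereological literature uses more refined CE estimators), the CE of the estimated total falls as the between-sample variance falls or as more samples are counted:

import math, statistics

def coefficient_of_error(counts):
    # CE of the mean (and hence of the scaled total) for independent counts:
    # (standard deviation / sqrt(n)) / mean.
    n = len(counts)
    return (statistics.stdev(counts) / math.sqrt(n)) / statistics.mean(counts)

counts = [52, 61, 47, 58, 55, 49]      # hypothetical cell counts from six sampling sites
print(coefficient_of_error(counts))    # if this exceeds the target (say 0.05), count more sites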
The typical method of controlling the CE is to do more counting. The literature on the optical fractionator recommends methods of deciding where to increase the workload: more slices, or more optical dissectors. In keeping with this notion some amount of effort has been made to perform automatic image acquisition and counting to facilitate the process. The proportionator provides a superior result by avoiding more counting.
Plotless sampling
One of the earliest stereological methods that employed PPS was introduced by Walter Bitterlich in 1939 to improve the efficiency of fieldwork in the forest sciences. Bitterlich developed a sampling method that revolutionized the forest sciences. Up to this time the sampling quadrat method proposed by Pound and Clements in 1898 was still in use. Laying out sampling quadrats at each sampling site was at times a difficult process because of the physical obstructions of the natural world. Besides the physical issues it was also a costly procedure: it took a considerable amount of time to lay out a rectangle and to measure the trees included in the quadrat. Bitterlich realized that PPS could be used in the field. He proposed the use of a sampling angle: all of the trees that subtended a larger angle than a fixed gauge angle, as seen from a sampling point, would be counted. The quadrat, or plot as it was often called, was not required.
The quantity being estimated by the researchers was tree volume. The original sampling method was to choose a number of sampling points. The researcher traveled to each sampling point. A quadrat, rectangular sampling area, was laid out at each sampling point. Measurements of the trees in the quadrats was used to estimate tree volume. A typical measurement is basal area.
Bitterlich's method was to choose a number of sampling points. The researcher traveled to each sampling point just as in the quadrat method. At each sampling point the researcher used an angle gauge to see if a tree had a larger apparent angle than the gauge. If so, the tree was counted. No quadrat and no measurements! Just count and go. The result of this procedure was an estimate of tree volume.
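A minimal sketch of the angle-count rule (hypothetical tree positions and diameters; converting the tally to basal area per hectare depends on the specific gauge and is only indicated in a comment):

import math

def angle_count(trees, gauge_angle_rad, point=(0.0, 0.0)):
    # Tally a tree when its apparent angle from the sampling point exceeds the
    # gauge angle, i.e. when diameter / distance > 2 * tan(gauge_angle / 2).
    px, py = point
    limit = 2.0 * math.tan(gauge_angle_rad / 2.0)
    return sum(
        1
        for x, y, diameter in trees
        if diameter / math.hypot(x - px, y - py) > limit
    )

trees = [(3.0, 4.0, 0.35), (12.0, 1.0, 0.60), (25.0, 8.0, 0.20)]  # (x m, y m, diameter m), hypothetical
tally = angle_count(trees, gauge_angle_rad=0.02)
# Multiplying the tally by the gauge-specific basal-area factor estimates basal area per hectare.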
Lou Grosenbaugh realized the importance of Bitterlich's work and wrote a number of articles describing the method. Soon a host of devices from angle gauge, to relascope, to sampling prism were developed. The Bitterlich method, employing PPS, and these devices profoundly increased the efficiency of fieldwork.
The proportionator reduces the workload by avoiding the expense of increased counting. The efficiency increase is attained by employing PPS. Efforts to automate the counting process attack the variance problem at the wrong level of sampling. The better solution is to reduce the workload before going to the counting step. The optimal situation is to have all samples providing identical counts. The next best situation is to reduce the difference between samples.
The proportionator adjusts the sampling scheme to select samples that are likely to provide estimates that have a smaller difference. Thus the variance of the estimator is addressed without changing the workload. That results in a gain in efficiency due to the reduction in variance for a given cost.
The main steps in sampling biological tissue are:
Selection of a set of animals
Selection of tissue, usually organs from the animals in step 1
Sampling of the organs by means such as slabbing, cutting bars from organs in step 2
Selecting a sample of the slices produced from the material in step 3
Selection of sampling sites on slices from step 4
Sampling in an optical dissector within the sampling sites chosen in step 5
The typical attempt at increasing efficiency is the counting which occurs in step 6. The proportionator adjusts the sampling at step 5. This is accomplished by assigning a characteristic to each sampling site. Since each of the sampling sites is viewed it is possible for the automated systems to make a visual record of the site. The image collected at each site is used to determine a value for the site. The values for the sites are the characteristic. Recall that the characteristic may, but does not have to a function of the objects being counted. The potential sampling sites are then sampled based on the observed characteristic. Sites are chosen in a non-uniform manner, but still an unbiased method. Not only is the result unbiased, but the result is not weighted by the characteristic. The end result is that the difference between samples is reduced. This reduces the variance. Therefore, the workload is reduced.
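A toy sketch of why the non-uniform selection stays unbiased (hypothetical image-derived characteristics and per-site cell counts; the weighting shown is the standard Hansen-Hurwitz form for with-replacement PPS and is used here purely as an illustration, not as the patented procedure itself):

import random

def pps_estimate(characteristics, site_counts, n_draws):
    # Select sites with probability proportional to their characteristic, then
    # divide each observed count by its selection probability, so the estimate
    # of the total stays unbiased even though sites are sampled unequally.
    total = sum(characteristics)
    probs = [c / total for c in characteristics]
    picks = random.choices(range(len(probs)), weights=probs, k=n_draws)
    return sum(site_counts[i] / probs[i] for i in picks) / n_draws

chars  = [1, 2, 8, 3, 6]     # hypothetical characteristic measured on each site's image
counts = [2, 5, 21, 7, 15]   # hypothetical cells actually present at each site
print(pps_estimate(chars, counts, n_draws=3))
# The better the characteristic tracks the counts, the less the estimates vary
# between repetitions, which is the CE reduction the proportionator exploits.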
Experimental evidence demonstrates that the proportionator significantly reduces the variance between samples, especially in situations where the tissue distribution is heterogeneous. This means that the situations where it is harder to reduce the variance, or improve the CE, are just the situations where the proportionator excels. Another way to look at this is that the proportionator is designed to take the CE reduction issue out of the hands of the researcher.
Suppose that the goal is to have a CE of .05. If the CE is larger than that value, then the only option available in the optical fractionator method is to increase the counting by either using more slices or more sampling sites on the slices. The proportionator is able to adjust the sampling to decrease the CE without increasing the counting. In fact, if the proportionator is able to reduce the CE below .05, then it is possible to reduce the counting workload and allow the CE to come up to the .05 requirement.
PPS revolutionized the forestry sciences. The application of PPS to cell counting makes larger scale research projects possible, while saving time and reducing expenses.
Sources
Gardi, J.E., J.R. Nyengaard, H.J.G. Gundersen, Using unbiased image analysis for improving unbiased stereological number estimates - a pilot simulation study using the smooth fractionator, Journal of Microscopy, 2006, Vol. 222, Pt. 3, pp. 242–250
Gardi, J.E., J.R. Nyengaard, H.J.G. Gundersen, Automatic sampling for unbiased and efficient stereological estimation using the proportionator in biological studies, Journal of Microscopy, 2008, Vol. 230, Pt. 1, pp. 108–120
Grosenbaugh, L.R., Plotless Timber Estimates - New, Fast, Easy, Journal of Forestry, 1952, Vol. 52, pp. 32–37
Grosenbaugh, L.R., The Gains from Sample-Tree Selection with Unequal Probabilities, Journal of Forestry, 1967, Vol. 65, No. 3, pp. 203–206
Keller, K.K., Andersen, I.T., Andersen, J.B., Hahn, U., Stengaard-Pedersen, K., Hauge, E.M., Nyengaard, J.R., Improving efficiency in stereology: a study applying the proportionator and the autodisector on virtual slides, Journal of Microscopy, 2013
Commercial products
newCast by Visiopharm
Microscopy
Forest modelling | Proportionator | [
"Chemistry"
] | 2,326 | [
"Microscopy"
] |
16,407,499 | https://en.wikipedia.org/wiki/Frobenius%20solution%20to%20the%20hypergeometric%20equation | In the following we solve the second-order differential equation called the hypergeometric differential equation using Frobenius method, named after Ferdinand Georg Frobenius. This is a method that uses the series solution for a differential equation, where we assume the solution takes the form of a series. This is usually the method we use for complicated ordinary differential equations.
The solution of the hypergeometric differential equation is very important. For instance, Legendre's differential equation can be shown to be a special case of the hypergeometric differential equation. Hence, by solving the hypergeometric differential equation, one may directly compare its solutions to get the solutions of Legendre's differential equation, after making the necessary substitutions. For more details, see the hypergeometric differential equation.
We shall prove that this equation has three singularities, namely at x = 0, x = 1 and x = ∞. However, as these will turn out to be regular singular points, we will be able to assume a solution in the form of a series. Since this is a second-order differential equation, we must have two linearly independent solutions.
The problem, however, is that our assumed solutions may or may not be independent, or worse, may not even be defined (depending on the value of the parameters of the equation). This is why we shall study the different cases for the parameters and modify our assumed solution accordingly.
The equation
Solve the hypergeometric equation around all singularities:

x(1 - x) y'' + \left[\gamma - (1 + \alpha + \beta) x\right] y' - \alpha \beta \, y = 0
Solution around x = 0
Let
Then
Hence, x = 0 and x = 1 are singular points. Let's start with x = 0. To see if it is regular, we study the following limits:
Hence, both limits exist and x = 0 is a regular singular point. Therefore, we assume the solution takes the form
with a0 ≠ 0. Hence,
Substituting these into the hypergeometric equation, we get
That is,
In order to simplify this equation, we need all powers to be the same, equal to r + c − 1, the smallest power. Hence, we switch the indices as follows:
Thus, isolating the first term of the sums starting from 0 we get
Now, from the linear independence of all powers of x, that is, of the functions 1, x, x², etc., the coefficients of x^k vanish for all k. Hence, from the first term, we have

a_0 \, c (c + \gamma - 1) = 0,

which is the indicial equation. Since a0 ≠ 0, we have

c (c + \gamma - 1) = 0.

Hence,

c_1 = 0, \qquad c_2 = 1 - \gamma.
Also, from the rest of the terms, we have
Hence,
But
Hence, we get the recurrence relation

a_r = \frac{(r + c - 1 + \alpha)(r + c - 1 + \beta)}{(r + c)(r + c + \gamma - 1)} \, a_{r - 1}, \qquad r \geq 1.
Let's now simplify this relation by giving ar in terms of a0 instead of ar−1. From the recurrence relation (note: below, expressions of the form (u)r refer to the Pochhammer symbol).
As we can see,

a_r = \frac{(\alpha + c)_r \, (\beta + c)_r}{(1 + c)_r \, (\gamma + c)_r} \, a_0.

Hence, our assumed solution takes the form

y = a_0 \sum_{r = 0}^{\infty} \frac{(\alpha + c)_r \, (\beta + c)_r}{(1 + c)_r \, (\gamma + c)_r} \, x^{r + c}.
We are now ready to study the solutions corresponding to the different cases for c1 − c2 = γ − 1 (this reduces to studying the nature of the parameter γ: whether it is an integer or not).
Analysis of the solution in terms of the difference γ − 1 of the two roots
γ not an integer
Then y1 = y|c = 0 and y2 = y|c = 1 − γ. Since c1 − c2 = γ − 1 is not an integer, these two solutions are linearly independent. From the series above we have

y_1 = a_0 \, {}_2F_1(\alpha, \beta; \gamma; x), \qquad y_2 = a_0 \, x^{1 - \gamma} \, {}_2F_1(\alpha - \gamma + 1, \beta - \gamma + 1; 2 - \gamma; x).

Hence, y = A′y1 + B′y2. Let A′a0 = A and B′a0 = B. Then

y = A \, {}_2F_1(\alpha, \beta; \gamma; x) + B \, x^{1 - \gamma} \, {}_2F_1(\alpha - \gamma + 1, \beta - \gamma + 1; 2 - \gamma; x).
γ = 1
Then y1 = y|c = 0. Since γ = 1, we have
Hence,
To calculate this derivative, let
Then
But
Hence,
Differentiating both sides of the equation with respect to c, we get:
Hence,
Now,
Hence,
For c = 0, we get
Hence, y = C′y1 + D′y2. Let C′a0 = C and D′a0 = D. Then
γ an integer and γ ≠ 1
γ ≤ 0
The value of is .
To begin with, we shall simplify matters by concentrating on a particular value of
and generalise the result at a later stage.
We shall use the value . The indicial equation
has a root at , and we see from the recurrence relation
that when that that denominator has a factor
which vanishes when . In this case, a solution can be obtained by
putting where is a constant.
With this substitution, the coefficients of vanish when
and . The factor of
in the denominator of the recurrence relation cancels with that of the numerator
when . Hence, our solution takes the form
If we start the summation at rather than
we see that
The result (as we have written it) generalises easily.
For , with then
Obviously, if , then .
The expression for we have just given looks a little
inelegant since we have a multiplicative constant apart from
the usual arbitrary multiplicative constant .
Later, we shall see that we can recast things in such a way
that this extra constant never appears.
The other root to the indicial equation is , but
this gives us (apart from a multiplicative constant) the same result
as found using .
This means we must take the partial derivative (w.r.t. ) of the usual trial solution in order to find a second independent solution.
If we define the linear
operator as
then since in our case,
(We insist that .) Taking the partial derivative w.r.t ,
Note that we must evaluate the partial derivative at
(and not at the other root ). Otherwise the right hand side
is non-zero in the above, and we do not have a solution of .
The factor
is not cancelled for and .
This part of the second independent solution is
Now we can turn our attention to the terms where the factor cancels.
First
After this, the recurrence relations give us
So, if we have
We need the partial derivatives
Similarly, we can write
and
It becomes clear that for
Here, is the th partial sum of the harmonic series,
and by definition and .
Putting these together, for the case
we have a second solution
The two independent solutions for (where
is a positive integer) are then
and
The general solution is as usual
where and are arbitrary constants.
Now, if the reader consults a "standard solution" for this case,
such as given by Abramowitz and Stegun in §15.5.21
(which we shall write down at the end of the next section) it shall be found that the
solution we have found looks somewhat different from the standard solution.
In our solution for , the first term in
the infinite series part of
is a term in . The first term in the corresponding infinite
series in the standard solution is a term in .
The term is missing from the standard solution.
Nonetheless, the two solutions are entirely equivalent.
The "Standard" Form of the Solution γ ≤ 0
The reason for the apparent discrepancy between the solution
given above and the standard solution in Abramowitz and Stegun
§15.5.21 is that there are an infinite number of
ways in which to represent the two independent solutions of the hypergeometric ODE.
In the last section, for instance, we replaced
with . Suppose though, we are given some function
which is continuous and finite everywhere in an arbitrarily
small interval about . Suppose we are also given
and
Then, if instead of replacing
with we replace
with , we still find we have a valid solution of
the hypergeometric equation. Clearly, we have an infinity of possibilities
for . There is, however, a "natural choice" for .
Suppose that is the first non zero term
in the first solution with . If we make the reciprocal
of , then we won't have a multiplicative constant involved in
as we did in the previous section. From another point of
view, we get the same result if we "insist" that is independent of
, and find by using the recurrence relations
backwards.
For the first solution,
the function gives us (apart from multiplicative constant)
the same
as we would have obtained using .
Suppose that using gives rise to two independent solutions
and . In the following we shall
denote the solutions arrived at given some as
and .
The second solution requires us to take the partial derivative w.r.t ,
and substituting the usual trial solution gives us
The operator is the same linear operator discussed in the previous section.
That is to say, the hypergeometric ODE is represented as .
Evaluating the left hand side at will give us a second independent solution.
Note that this second solution is in fact a linear
combination of and .
Any two independent linear combinations ( and ) of and are independent solutions of .
The general solution can be written as a linear combination of and just as well as linear combinations of and .
We shall review the special case where that was considered in the last section. If we "insist" , then the recurrence relations yield
and
These three coefficients are all zero at as expected.
We have three terms involved in by
taking the partial derivative w.r.t , we denote the sum of the
three terms involving these coefficients as
where
The reader may confirm that we can tidy this up and make it easy to generalise by putting
Next we can turn to the other coefficients, the recurrence relations yield
Setting gives us
This is (apart from the multiplicative constant) the same as .
Now, to find we need partial derivatives
Then
we can re-write this as
The pattern soon becomes clear, and for
Clearly, for ,
The infinite series part of is , where
Now we can write (disregarding the arbitrary constant) for
Some authors prefer to express the finite sums in this last result using the
digamma function . In particular, the following results are used
Here, is the Euler-Mascheroni constant. Also
With these results we obtain the form given in Abramamowitz and Stegun §15.5.21, namely
The Standard" Form of the Solution γ > 1
In this section, we shall concentrate on the "standard solution", and
we shall not replace with .
We shall put where .
For the root of the indicial equation we had
where in which case we are in trouble if .
For instance, if , the denominator in the recurrence relations
vanishes for .
We can use exactly the same methods that we have just used for the standard solution in the last
section. We shall not (in the instance where )
replace with as this
will not give us the standard form of solution that we are after.
Rather, we shall "insist" that as we did
in the standard solution for in the last section.
(Recall that this defined the function and
that will now be replaced with .)
Then we may work out the coefficients of to
as functions of using the recurrence relations backwards.
There is nothing new to add here, and the reader may use
the same methods as used in the last section to find
the results of §15.5.18 and §15.5.19,
these are
and
Note that the powers of in the finite sum part
of are now negative
so that this sum diverges as x → 0.
Solution around x = 1
Let us now study the singular point x = 1. To see if it is regular,
Hence, both limits exist and x = 1 is a regular singular point. Now, instead of assuming a solution in the form
we will try to express the solutions of this case in terms of the solutions for the point x = 0. We proceed as follows: we had the hypergeometric equation
Let z = 1 − x. Then
Hence, the equation takes the form
Since z = 1 − x, the solution of the hypergeometric equation at x = 1 is the same as the solution for this equation at z = 0. But the solution at z = 0 is identical to the solution we obtained for the point x = 0, if we replace each γ by α + β − γ + 1. Hence, to get the solutions, we just make this substitution in the previous results. For x = 0, c1 = 0 and c2 = 1 − γ. Hence, in our case, c1 = 0 while c2 = γ − α − β. Let us now write the solutions. In the following we replaced each z by 1 - x.
Analysis of the solution in terms of the difference γ − α − β of the two roots
To simplify notation from now on denote γ − α − β by Δ, therefore γ = Δ + α + β.
Δ not an integer
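Carrying out the substitution just described (replace γ by α + β − γ + 1 in the x = 0 solutions and then z by 1 − x), the standard result for this case is, in hypergeometric-series notation (a reconstruction under that substitution, with A and B arbitrary constants):

y = A \, {}_2F_1(\alpha, \beta; \alpha + \beta - \gamma + 1; 1 - x) + B \, (1 - x)^{\gamma - \alpha - \beta} \, {}_2F_1(\gamma - \alpha, \gamma - \beta; \gamma - \alpha - \beta + 1; 1 - x)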
Δ = 0
Δ is a non-zero integer
Δ > 0
Δ < 0
Solution around infinity
Finally, we study the singularity as x → ∞. Since we can't study this directly, we let x = 1/s. Then the solution of the equation as x → ∞ is identical to the solution of the modified equation when s = 0. We had
Hence, the equation takes the new form
which reduces to
Let
As we said, we shall only study the solution when s = 0. As we can see, this is a singular point since P2(0) = 0. To see if it is regular,
Hence, both limits exist and s = 0 is a regular singular point. Therefore, we assume the solution takes the form

y = \sum_{r = 0}^{\infty} a_r s^{r + c}

with a0 ≠ 0. Hence,

y' = \sum_{r = 0}^{\infty} a_r (r + c) s^{r + c - 1}, \qquad y'' = \sum_{r = 0}^{\infty} a_r (r + c)(r + c - 1) s^{r + c - 2}.
Substituting in the modified hypergeometric equation we get
And therefore:
i.e.,
In order to simplify this equation, we need all powers to be the same, equal to r + c, the smallest power. Hence, we switch the indices as follows
Thus, isolating the first term of the sums starting from 0 we get
Now, from the linear independence of all powers of s (i.e., of the functions 1, s, s², ...), the coefficients of s^k vanish for all k. Hence, from the first term we have

a_0 \, (c - \alpha)(c - \beta) = 0,

which is the indicial equation. Since a0 ≠ 0, we have (c − α)(c − β) = 0.
Hence, c1 = α and c2 = β.
Also, from the rest of the terms we have
Hence,
But
Hence, we get the recurrence relation
Let's now simplify this relation by giving ar in terms of a0 instead of ar−1. From the recurrence relation,
As we can see,
Hence, our assumed solution takes the form
We are now ready to study the solutions corresponding to the different cases for c1 − c2 = α − β.
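Before turning to the case analysis, a short numeric sketch (again assuming mpmath and the standard form of the hypergeometric equation; the parameter values are illustrative only) can confirm that the c = α series just derived, written as x^(−α) 2F1(α, α − γ + 1; α − β + 1; 1/x) as in the non-integer case below, does satisfy the equation for large x:

```python
# Minimal numeric sketch (assumes mpmath; alpha, beta, gamma and the test
# point are arbitrary illustrative values, not taken from the article).
from mpmath import mp, hyp2f1, diff

mp.dps = 30
alpha, beta, gamma_ = mp.mpf('0.4'), mp.mpf('1.1'), mp.mpf('2.3')

def y1(x):
    # the c = alpha solution around infinity, written via s = 1/x
    return x**(-alpha) * hyp2f1(alpha, alpha - gamma_ + 1, alpha - beta + 1, 1 / x)

def residual(x):
    # x(1-x) y'' + [gamma - (alpha+beta+1) x] y' - alpha*beta*y
    return (x * (1 - x) * diff(y1, x, 2)
            + (gamma_ - (alpha + beta + 1) * x) * diff(y1, x, 1)
            - alpha * beta * y1(x))

print(residual(mp.mpf('7.5')))  # ~0 to working precision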
Analysis of the solution in terms of the difference α − β of the two roots
α − β not an integer
Then y1 = y|c = α and y2 = y|c = β. Since
we have
Hence, y = A′y1 + B′y2. Let A′a0 = A and B′a0 = B. Then, noting that s = x−1,
α − β = 0
Then y1 = y|c = α. Since α = β, we have
Hence,
To calculate this derivative, let
Then using the method in the case γ = 1 above, we get
Now,
Hence,
Therefore:
Hence, y = C′y1 + D′y2. Let C′a0 = C and D′a0 = D. Noting that s = x−1,
α − β an integer and α − β ≠ 0
α − β > 0
From the recurrence relation
we see that when c = β (the smaller root), aα−β → ∞. Hence, we must make the substitution a0 = b0(c − ci), where ci is the root for which our solution is infinite. Hence, we take a0 = b0(c − β) and our assumed solution takes the new form
Then y1 = yb|c = β. As we can see, all terms before
vanish because of the c − β in the numerator.
But starting from this term, the c − β in the numerator vanishes. To see this, note that
Hence, our solution takes the form
Now,
To calculate this derivative, let
Then using the method in the case γ = 1 above we get
Now,
Hence,
Hence,
At c = α we get y2. Hence, y = E′y1 + F′y2. Let E′b0 = E and F′b0 = F. Noting that s = x−1 we get
α − β < 0
From the symmetry of the situation here, we see that
References
Hypergeometric functions
Ordinary differential equations
Modular forms | Frobenius solution to the hypergeometric equation | [
"Mathematics"
] | 3,358 | [
"Modular forms",
"Number theory"
] |
16,408,009 | https://en.wikipedia.org/wiki/Cauchy%20momentum%20equation | The Cauchy momentum equation is a vector partial differential equation put forth by Cauchy that describes the non-relativistic momentum transport in any continuum.
Main equation
In convective (or Lagrangian) form the Cauchy momentum equation is written as:
where
is the flow velocity vector field, which depends on time and space, (unit: )
is time, (unit: )
is the material derivative of , equal to , (unit: )
is the density at a given point of the continuum (for which the continuity equation holds), (unit: )
is the stress tensor, (unit: )
is a vector containing all of the accelerations caused by body forces (sometimes simply gravitational acceleration), (unit: )
is the divergence of stress tensor. (unit: )
Commonly used SI units are given in parentheses although the equations are general in nature and other units can be entered into them, or units can be removed altogether by nondimensionalization.
Note that we use only column vectors (in the Cartesian coordinate system) above for clarity, but the equation is written using physical components (which are neither covariant ("row") nor contravariant ("column")). However, if we choose a non-orthogonal curvilinear coordinate system, then we should calculate and write equations in covariant ("row vectors") or contravariant ("column vectors") form.
After an appropriate change of variables, it can also be written in conservation form:
where is the momentum density at a given space-time point, is the flux associated to the momentum density, and contains all of the body forces per unit volume.
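A one-dimensional symbolic sketch can make this change of form explicit (assuming Python with sympy; the symbols rho and u are generic placeholders, not notation reproduced from the article): the conservation-form left-hand side differs from the convective-form left-hand side by exactly u times the continuity equation, so the two forms agree for any mass-conserving flow.

```python
# 1D symbolic sketch (assumes sympy) relating conservation and convective forms.
import sympy as sp

x, t = sp.symbols('x t')
rho = sp.Function('rho')(x, t)   # density field (placeholder)
u = sp.Function('u')(x, t)       # velocity field (placeholder)

conservation_lhs = sp.diff(rho * u, t) + sp.diff(rho * u * u, x)
convective_lhs = rho * (sp.diff(u, t) + u * sp.diff(u, x))
continuity_lhs = sp.diff(rho, t) + sp.diff(rho * u, x)

# The difference is exactly u times the continuity equation, hence zero
# whenever mass is conserved.
print(sp.simplify(conservation_lhs - convective_lhs - u * continuity_lhs))  # 0
```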
Differential derivation
Let us start with the generalized momentum conservation principle which can be written as follows: "The change in system momentum is proportional to the resulting force acting on this system". It is expressed by the formula:
where is momentum at time , and is force averaged over . After dividing by and passing to the limit we get (derivative):
Let us analyse each side of the equation above.
Right side
We split the forces into body forces and surface forces
Surface forces act on walls of the cubic fluid element. For each wall, the X component of these forces was marked in the figure with a cubic element (in the form of a product of stress and surface area e.g. with units ).
Adding forces (their X components) acting on each of the cube walls, we get:
After ordering and performing similar reasoning for components (they have not been shown in the figure, but these would be vectors parallel to the Y and Z axes, respectively) we get:
We can then write it in the symbolic operational form:
There are mass forces acting on the inside of the control volume. We can write them using the acceleration field (e.g. gravitational acceleration):
Left side
Let us calculate momentum of the cube:
Because we assume that the tested mass (the cube) is constant in time, we have
Left and Right side comparison
We have
then
then
Divide both sides by , and because we get:
which finishes the derivation.
Integral derivation
Applying Newton's second law (th component) to a control volume in the continuum being modeled gives:
Then, based on the Reynolds transport theorem and using material derivative notation, one can write
where represents the control volume. Since this equation must hold for any control volume, it must be true that the integrand is zero, from this the Cauchy momentum equation follows. The main step (not done above) in deriving this equation is establishing that the derivative of the stress tensor is one of the forces that constitutes .
Conservation form
The Cauchy momentum equation can also be put in the following form:
simply by defining:
where is the momentum density at the point considered in the continuum (for which the continuity equation holds), is the flux associated to the momentum density, and contains all of the body forces per unit volume. is the dyad of the velocity.
Here and have the same number of dimensions as the flow speed and the body acceleration, while , being a tensor, has .
In the Eulerian forms it is apparent that the assumption of no deviatoric stress brings Cauchy equations to the Euler equations.
Convective acceleration
A significant feature of the Navier–Stokes equations is the presence of convective acceleration: the effect of time-independent acceleration of a flow with respect to space. While individual continuum particles indeed experience time dependent acceleration, the convective acceleration of the flow field is a spatial effect, one example being fluid speeding up in a nozzle.
Regardless of what kind of continuum is being dealt with, convective acceleration is a nonlinear effect. Convective acceleration is present in most flows (exceptions include one-dimensional incompressible flow), but its dynamic effect is disregarded in creeping flow (also called Stokes flow). Convective acceleration is represented by the nonlinear quantity , which may be interpreted either as or as , with the tensor derivative of the velocity vector . Both interpretations give the same result.
Advection operator vs tensor derivative
The convective acceleration can be thought of as the advection operator acting on the velocity field . This contrasts with the expression in terms of tensor derivative , which is the component-wise derivative of the velocity vector defined by , so that
Lamb form
The vector calculus identity of the cross product of a curl holds:
where the Feynman subscript notation is used, which means the subscripted gradient operates only on the factor .
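The following is a small symbolic sketch of this step (assuming Python with sympy, and assuming the identity in question is the usual rewriting (u · ∇)u = ½∇|u|² − u × (∇ × u); the component functions u1, u2, u3 are generic placeholders, not fields from the article):

```python
# Symbolic check (assumes sympy) that (u.grad)u = grad(|u|^2)/2 - u x curl(u)
# for a generic velocity field.
import sympy as sp
from sympy.vector import CoordSys3D, gradient, curl

N = CoordSys3D('N')
u1 = sp.Function('u1')(N.x, N.y, N.z)
u2 = sp.Function('u2')(N.x, N.y, N.z)
u3 = sp.Function('u3')(N.x, N.y, N.z)
u = u1 * N.i + u2 * N.j + u3 * N.k

# advection term (u . grad) u, built component by component
adv = (u.dot(gradient(u1)) * N.i
       + u.dot(gradient(u2)) * N.j
       + u.dot(gradient(u3)) * N.k)

# rotational (Lamb) form of the same term
lamb = sp.Rational(1, 2) * gradient(u.dot(u)) - u.cross(curl(u))

print(sp.simplify((adv - lamb).to_matrix(N)))  # expect a zero column vector
```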
Lamb, in his famous classical book Hydrodynamics (1895), used this identity to change the convective term of the flow velocity into rotational form, i.e. without a tensor derivative:
where the vector is called the Lamb vector. The Cauchy momentum equation becomes:
Using the identity:
the Cauchy equation becomes:
In fact, in the case of an external conservative field, by defining its potential :
In the case of a steady flow the time derivative of the flow velocity disappears, so the momentum equation becomes:
And by projecting the momentum equation on the flow direction, i.e. along a streamline, the cross product disappears due to a vector calculus identity of the triple scalar product:
If the stress tensor is isotropic, then only the pressure enters: (where is the identity tensor), and the Euler momentum equation in the steady incompressible case becomes:
In the steady incompressible case the mass equation is simply:
that is, the mass conservation for a steady incompressible flow states that the density along a streamline is constant. This leads to a considerable simplification of the Euler momentum equation:
The convenience of defining the total head for an inviscid liquid flow is now apparent:
in fact, the above equation can be simply written as:
That is, the momentum balance for a steady inviscid and incompressible flow in an external conservative field states that the total head along a streamline is constant.
Irrotational flows
The Lamb form is also useful in irrotational flow, where the curl of the velocity (called vorticity) is equal to zero. In that case, the convection term in reduces to
Stresses
The effect of stress in the continuum flow is represented by the and terms; these are gradients of surface forces, analogous to stresses in a solid. Here is the pressure gradient and arises from the isotropic part of the Cauchy stress tensor. This part is given by the normal stresses that occur in almost all situations. The anisotropic part of the stress tensor gives rise to , which usually describes viscous forces; for incompressible flow, this is only a shear effect. Thus, is the deviatoric stress tensor, and the stress tensor is equal to:
where is the identity matrix in the space considered and the shear tensor.
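As a small numerical illustration of this decomposition (a sketch assuming Python with numpy; the sample stress values are arbitrary, and taking the mechanical pressure as p = −tr(σ)/3 so that the deviator is trace-free is an assumption of this sketch):

```python
# Split a Cauchy stress tensor into its isotropic (pressure) part and its
# deviatoric (shear) part, sigma = -p*I + tau.  (numpy assumed)
import numpy as np

sigma = np.array([[-3.0, 0.4, 0.1],
                  [ 0.4, -2.5, 0.2],
                  [ 0.1,  0.2, -2.8]])   # an arbitrary symmetric stress state

p = -np.trace(sigma) / 3.0               # mechanical pressure (assumed definition)
tau = sigma + p * np.eye(3)              # deviatoric (trace-free) part

print(p)
print(np.trace(tau))                                # ~0: the deviator carries no pressure
print(np.allclose(sigma, -p * np.eye(3) + tau))     # True: decomposition recovers sigma
```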
All non-relativistic momentum conservation equations, such as the Navier–Stokes equation, can be derived by beginning with the Cauchy momentum equation and specifying the stress tensor through a constitutive relation. By expressing the shear tensor in terms of viscosity and fluid velocity, and assuming constant density and viscosity, the Cauchy momentum equation will lead to the Navier–Stokes equations. By assuming inviscid flow, the Navier–Stokes equations can further simplify to the Euler equations.
The divergence of the stress tensor can be written as
The effect of the pressure gradient on the flow is to accelerate the flow in the direction from high pressure to low pressure.
As written in the Cauchy momentum equation, the stress terms and are yet unknown, so this equation alone cannot be used to solve problems. Besides the equations of motion—Newton's second law—a force model is needed relating the stresses to the flow motion. For this reason, assumptions based on natural observations are often applied to specify the stresses in terms of the other flow variables, such as velocity and density.
External forces
The vector field represents body forces per unit mass. Typically, these consist only of gravitational acceleration, but may include others, such as electromagnetic forces. In non-inertial coordinate frames, other "inertial accelerations" associated with rotating coordinates may arise.
Often, these forces may be represented as the gradient of some scalar quantity , with in which case they are called conservative forces. Gravity in the direction, for example, is the gradient of . Because pressure from such gravitation arises only as a gradient, we may include it in the pressure term as a body force . The pressure and force terms on the right-hand side of the Navier–Stokes equation become
It is also possible to include external influences into the stress term rather than the body force term. This may even include antisymmetric stresses (inputs of angular momentum), in contrast to the usually symmetrical internal contributions to the stress tensor.
Nondimensionalisation
In order to make the equations dimensionless, a characteristic length and a characteristic velocity need to be defined. These should be chosen such that the dimensionless variables are all of order one. The following dimensionless variables are thus obtained:
Substitution of these inverted relations in the Euler momentum equations yields:
and by dividing by the first coefficient:
Now defining the Froude number:
the Euler number:
and the coefficient of skin-friction, usually referred to as the 'drag coefficient' in the field of aerodynamics:
by passing respectively to the conservative variables, i.e. the momentum density and the force density:
the equations are finally expressed (now omitting the indexes):
Cauchy equations in the Froude limit (corresponding to negligible external field) are named free Cauchy equations:
and can eventually be expressed as conservation equations. The limit of high Froude numbers (low external field) is thus notable for such equations and is studied with perturbation theory.
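For illustration, the dimensionless groups above can be evaluated for a concrete set of characteristic scales (a sketch in Python; the numbers below and the definitions Fr = u0/√(g r0) and Eu = p0/(ρ0 u0²) are assumptions made for this example, since the exact expressions are not reproduced in the text):

```python
# Illustrative sketch: dimensionless groups from assumed characteristic scales.
import math

u0   = 2.0      # characteristic velocity, m/s (made-up)
r0   = 0.5      # characteristic length, m (made-up)
rho0 = 1000.0   # characteristic density, kg/m^3 (made-up)
p0   = 2.0e3    # characteristic pressure, Pa (made-up)
g    = 9.81     # external (gravitational) field strength, m/s^2

froude = u0 / math.sqrt(g * r0)   # ratio of inertia to external-field forces (assumed definition)
euler  = p0 / (rho0 * u0**2)      # ratio of pressure to inertia forces (assumed definition)

print(f"Fr = {froude:.3f}, Eu = {euler:.3f}")
# A large Froude number corresponds to a negligible external field,
# which is the free Cauchy equations limit mentioned above.
```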
Finally in convective form the equations are:
3D explicit convective forms
Cartesian 3D coordinates
For asymmetric stress tensors, equations in general take the following forms:
Cylindrical 3D coordinates
Below, we write the main equation in pressure-tau form assuming that the stress tensor is symmetrical ():
See also
Euler equations (fluid dynamics)
Navier–Stokes equations
Burnett equations
Chapman–Enskog expansion
Notes
References
Continuum mechanics
Eponymous equations of physics
Momentum
Partial differential equations | Cauchy momentum equation | [
"Physics",
"Mathematics"
] | 2,330 | [
"Equations of physics",
"Physical quantities",
"Continuum mechanics",
"Quantity",
"Eponymous equations of physics",
"Classical mechanics",
"Momentum",
"Moment (physics)"
] |
16,408,253 | https://en.wikipedia.org/wiki/Russian%20University%20of%20Transport | The Russian University of Transport (RUT (MIIT); ), officially the Autonomous Educational Institution of Higher Education "Russian University of transport" () is a public university founded in 1896 and headquartered in Moscow, Russia. Along with its main campus located in the capital, the university maintains one other regional campus in Sochi.
RUT is a leading transport educational institution and hosts the biggest university complex in Moscow, Russia. RUT is under the governance of the Ministry of Transport of the Russian Federation.
History
Russian University of Transport was established as the Imperial Moscow Engineering School (IMIU) by decree of Emperor Nicholas II on May 23, 1896, in order to train engineers for the construction of the Trans-Siberian Railway. Training courses began on 12 September 1896. The first director of the university was professor F.E. Maksimenko. The first course of engineers graduated in 1901 (a three-year theoretical course followed by two years of building practice). The Russian University of Transport is the oldest institution of higher technical education in Russia, founded in 1896. Today RUT is the largest scientific and academic complex in Russia, the all-Russian leader in the field of training and retraining of specialists and scientific personnel for transport and transport construction. On 15 June 1993, MIIT received university status and its name was changed to the Moscow State University of Railway Engineering.
For more than 120 years of history, more than 650 thousand highly qualified specialists with higher and secondary professional education have left the university. They successfully work in transport and in other various sectors of the economy in Russia and 57 countries of the world. More than 30 Heroes of the Soviet Union and Socialist Labor, transport industry leaders, world-famous scientists, members of the Russian government, governors, mayors of large cities, prominent public figures, prominent representatives of the Russian Orthodox Church and culture, and top businessmen are among the graduates of RUT.
In Soviet times, the university was notable for being one of just a few institutions in the upper echelon that did not participate in the state-directed antisemitism campaign in higher education. This gave many Jewish students in the USSR a chance at a higher education when they would not have otherwise gotten it.
Structure of the university
Institutes:
Institute of Transport Engineering and Operation Systems
Institute of Railway Track, Construction and Structures
Institute of Operation and Digital Technologies
Institute of Economics and Finance
Institute of International Transport Communications
Law Institute
Academies:
Basic Training Academy
Academy of Water Transport
Russian Academy of Railway Engineering
Russian Open Transport Academy
Research Institute:
Scientific Research Institute of Transport and Transport Construction
Branches and representative offices:
Sochi Institute of Transport - branch of the federal state autonomous educational institution of higher education "Russian University of Transport", Sochi branch of RUT (MIIT)
Colleges and technical schools:
Gymnasium
College of the Academy of Water Transport
Medical College
Moscow College of Transport
Law College of the Law Institute
Notable alumni
Semyon Belits-Geiman, Soviet Olympic swimmer
Bidzina Ivanishvili, Georgian businessman and politician
Vitaly Malkin, Russian businessman and politician
Konstantin Borovoi, Russian businessman and politician
Igor Guberman, Russian-Israeli poet
Anatoly Rybakov, Soviet and Russian writer
Roustam Tariko, Russian businessman
Yakov Dzhugashvili, son of Joseph Stalin
Igor Zaitsev, Soviet Grandmaster
Gregory Kaidanov, Russian-American Grandmaster
Viktor Vekselberg, Russian businessman
Len Blavatnik, Russian-American businessman
Vladimir Kuzmin, Russian musician
Victor Cherkashin, Longtime KGB officer in America, handler for Ames and Hanssen
Alexander Maslyakov, Russian television personality, founder of KVN
Isidore Mvouba
Fu Zhihuan
Yaakov Kedmi
References
External links
Official site of Russian University of Transport
Universities in Moscow
Universities and colleges established in 1896
1896 establishments in the Russian Empire
Transport education
Engineering universities and colleges in Russia
Cultural heritage monuments of regional significance in Moscow | Russian University of Transport | [
"Physics"
] | 791 | [
"Physical systems",
"Transport",
"Transport education"
] |
16,408,345 | https://en.wikipedia.org/wiki/Fusaric%20acid | Fusaric acid is a picolinic acid derivative and an antibiotic (wilting agent) first isolated from the fungus Fusarium heterosporium.
It is typically isolated from various Fusarium species, and has been proposed for various therapeutic applications. However, it is primarily used as a research tool.
Its mechanism of action is not well understood. It likely inhibits dopamine beta-hydroxylase (the enzyme that converts dopamine to norepinephrine). It may also have other actions, such as the inhibition of cell proliferation and DNA synthesis. Fusaric acid and its analogues have also been reported as quorum sensing inhibitors.
It is used to make bupicomide.
References
External links
Carboxylic acids
Oxidoreductase inhibitors
Pyridines
Butyl compounds | Fusaric acid | [
"Chemistry"
] | 168 | [
"Carboxylic acids",
"Functional groups"
] |
16,409,230 | https://en.wikipedia.org/wiki/Aleglitazar | Aleglitazar is a peroxisome proliferator-activated receptor agonist (hence a PPAR modulator) with affinity to PPARα and PPARγ, which was under development by Hoffmann–La Roche for the treatment of type II diabetes. It is no longer in phase III clinical trials.
References
Oxazoles
Benzothiophenes
Carboxylic acids
Phenol ethers
PPAR agonists
Methoxy compounds | Aleglitazar | [
"Chemistry"
] | 94 | [
"Carboxylic acids",
"Functional groups"
] |
16,409,709 | https://en.wikipedia.org/wiki/Megaselia%20scalaris | The fly Megaselia scalaris (often called the laboratory fly) is a member of the order Diptera and the family Phoridae, and it is widely distributed in warm regions of the world. The family members are commonly known as the "humpbacked fly", the "coffin fly", and the "scuttle fly". The name "scuttle fly" derives from the jerky, short bursts of running, characteristic to the adult fly. The name "coffin fly" is due to their being found in coffins, digging six feet deep in order to reach buried corpses. It is one of the more common species found within the family Phoridae; more than 370 species have been identified within North America.
Taxonomy
Megaselia scalaris was described by the German entomologist Hermann Loew in 1866.
Description
Adults of this species are about 2 mm long and yellowish with dark markings. The labellum and labrum have trichoid and conical sensilla, and the labellum's ventral surface has five pairs of sharp teeth. The hind femur has hairs below its basal half and these are shorter than hairs in an anteroventral row on the distal half. The hind tibia lacks a clearly differentiated row of spine-like antero-dorsal hairs. There is a pair of translucent wings, in which vein 3 is not or barely broader than the costa.
In males, the labellum has a dense covering of microtrichia, the bristles at the tip of the anal tube are longer than the longest hairs of the cerci, and the longest hair of the left side of the epandrium is almost bristle-like. In females, the tergite of the sixth abdominal segment is short, narrow, shiny, and extends laterally on the segment, unlike tergites of preceding segments.
Larvae of this species are pale, legless and covered in rows of short spines. The anterior end has the mouthparts, which look like a pair of sharp spines and are darker than the surrounding tissue. The posterior end has a pair of spiracles.
Life cycle
Egg and larva
The development of the Megaselia scalaris fly is holometabolous, consisting of four distinct stages: egg, larva, pupa, and adult. There are three distinct larval instars of M. scalaris. The third instar usually lasts longer than the first two because of the dramatic changes from a larva into a fly. The duration of each stage depends on the environmental conditions in which the larvae are feeding or being reared: "at 22-24°C, the first instar lasts 1-2 days, the second 1-2 days, and the third 3-4 days before pupation and a further 1-2 days before pupation." The larvae are usually very small, roughly between 1 and 8 mm in length.
Megaselia scalaris larvae display a unique behavior of swallowing air when they are exposed to small pools of liquid. This intake of air gives them the ability to float, which may prevent drowning in flood conditions in the natural environment.
Pupa and adult
The male Megaselia scalaris pupa matures more quickly than the female pupa, with males emerging two days prior to the females. Emerging before the females gives the males the advantage of feeding first, allowing their sperm to mature by the time the females emerge. Adult Megaselia scalaris reproduce by means of oviposition. The females lay relatively large eggs for their size due to the extended incubation period of the eggs.
Feeding habits
Many of the flies within the family Phoridae prefer nectar as an energy source; however, Megaselia scalaris is an omnivorous species. It has been recorded feeding on plants, wounds, and corpses. Protein food sources are preferred by the females preceding maturation of their eggs. All meals must be a fluid in order for the flies to access the meal because Megaselia scalaris has sponging mouthparts. This is a characteristic common to the family Phoridae.
The sharp teeth possessed by adults are not used in retrieval of a food source, like a piercing mouthpart, but are instead used to aid digestion and breakdown of nutrients. Human cases involving skin inflammation are likely due to these teeth. It is important to note the distinction that while Megaselia scalaris can feed on blood meals, the teeth are not used to puncture the host. The blood must be found on the body as an exudate. One theory to the evolution of these teeth is that Megaselia scalaris uses them in order to exit their pupal casings.
Habitat
The optimal culture temperature for Megaselia scalaris is 28 °C. They are common in many areas but thrive predominantly in moist, unsanitary vicinities such as dumpsters, trash containers, rotting meat, vegetable remains, public washrooms, homes, and sewer pipes. Although referred to as scavengers, adults are known to feed primarily on sugars. The larvae, however, depend on moist decaying plant or animal material and feed on a wide range of additional decaying material.
Importance to forensic entomology
Megaselia scalaris are important in the study of forensic entomology because evidence derived from the lifecycle and behavior of these flies is useful in both medicocriminal and abuse/neglect cases and is admissible in court.
Megaselia scalaris are small in size; this allows them to locate carrion buried within the ground and to locate bodies concealed in coffins. They can travel 0.5 m in a four-day period. They lay their eggs on carrion to provide food for the hatched larvae.
Often, Megaselia scalaris may be the only forensic entomological evidence available if the carrion is obstructed or concealed in a place that is hard for other insects to reach. Larger flies are not always able to reach the carrion. Calculations involving M. scalaris can result in an insect colonization time that can be used for a postmortem interval, which may help establish an estimated time of death. M. scalaris are classified in a secondary forensic role because they prefer older decaying carrion.
Evidence collected by forensic entomologists involving Megaselia scalaris has been used to demonstrate in court that caretakers have neglected the care of their elderly patients. Megaselia scalaris is also involved in cases of myiasis. Megaselia scalaris larvae found on a body can be used in court as a tool to show "time of death" or "time of neglect".
Current and future research
Megaselia scalaris is commonly used in research and within the lab because it is easily cultured; this species is used in experiments involving genetic, developmental, and bioassay studies. Research has also been done on the unique neurophysiology and neuromuscular junction within this fly, giving it its characteristic "scuttle" movement. In comparison to Drosophila melanogaster, M. scalaris has decreased excitatory postsynaptic potentials (EPSPs) and facilitation of EPSPs in response to repetitive stimulation. With such a wide range of food sources, the larvae can be considered facultative predators, parasitoids, or parasites.
References
Bibliography
Peterson B. V. (1987). Phoridae. In: Manual of Nearctic Diptera. Vol. 2. Canada Department of Agriculture Research Branch, Monograph no. 27, p. 689-712. The full text (53 MB)
Phoridae
Forensic entomology
Insects described in 1866
Diptera of North America
Laboratory animals | Megaselia scalaris | [
"Chemistry"
] | 1,580 | [
"Animal testing",
"Laboratory animals"
] |
16,410,813 | https://en.wikipedia.org/wiki/Helferich%20method | The Helferich method may refer to:
Glycosylation of an alcohol using a glycosyl acetate as glycosyl donor and a Lewis acid (e.g. a metal halide) as promoter
Glycosylation of an alcohol using a glycosyl halide as a glycosyl donor and a mercury salt as promoter (cf the Koenigs-Knorr reaction, which uses silver salts as promoters)
References
Organic reactions | Helferich method | [
"Chemistry"
] | 101 | [
"Organic reactions"
] |
16,413,376 | https://en.wikipedia.org/wiki/Crownshaft | An elongated circumferential leaf base formation present on some species of palm is called a crownshaft.
The leaf bases of some pinnate leaved palms (most notable being Roystonea regia or the royal palm but also including the genera Areca, Wodyetia and Pinanga) form a sheath at the top of the trunk surrounding the bud where all the subsequent leaves are formed.
The crownshaft takes the form of a column above the main trunk and beneath the main crown of leaves and is nothing but the collection of the leaf bases of the plant, all tightly wrapped around one another. It is usually green in color but may be a different color from that of the leaves themselves, including white, blue, red, brownish or orange. Each layer of the crownshaft is a distinct leaf base and is usually made of a tough fibrous material with a feel similar to leather and in many parts of the world, it is cured and used to prepare covers, sheets and roofing material. The leaf base of some palms is also used to extract coir.
The oldest leaf forms the outermost layer of the crownshaft. Eventually the lowest palm frond dies back, the outer layer of the crownshaft splits, the leaf unwraps and pulls away from the trunk exposing the new crownshaft surface. In time the old leaf separates at the base and falls away leaving the distinct rings and ridges of the leafbase scars seen on the trunks of many species of palm. These scars usually fade over time and the distance between two successive scars is an approximate indicator of the speed of growth of the palm. In tropical conditions when growing conditions are good, the palm grows faster and the gap between scars is large; conversely when growing conditions are not optimum, plant growth is slow and the gaps are narrower. Juveniles and younger palms usually grow faster than adults; this is demonstrated by the larger gaps between scars at the base as compared to the top.
In some species of palm the shaft is fairly indistinct because the leaf bases are not wrapped around each other very tightly, and the shaft becomes extended and “loose.”
Some palm species do not form a shaft until past the juvenile stage.
Gallery
References
Arecaceae
Plant morphology | Crownshaft | [
"Biology"
] | 459 | [
"Plant morphology",
"Plants"
] |
16,413,778 | https://en.wikipedia.org/wiki/Ageing | Ageing (or aging in American English) is the process of becoming older. The term refers mainly to humans, many other animals, and fungi, whereas for example, bacteria, perennial plants and some simple animals are potentially biologically immortal. In a broader sense, ageing can refer to single cells within an organism which have ceased dividing, or to the population of a species.
In humans, ageing represents the accumulation of changes in a human being over time and can encompass physical, psychological, and social changes. Reaction time, for example, may slow with age, while memories and general knowledge typically increase. Ageing is associated with increased risk of cancer, Alzheimer's disease, diabetes, cardiovascular disease, increased mental health risks, and many more. Of the roughly 150,000 people who die each day across the globe, about two-thirds die from age-related causes. Certain lifestyle choices and socioeconomic conditions have been linked to ageing.
Current ageing theories are assigned to the damage concept, whereby the accumulation of damage (such as DNA oxidation) may cause biological systems to fail, or to the programmed ageing concept, whereby the internal processes (epigenetic maintenance such as DNA methylation) inherently may cause ageing. Programmed ageing should not be confused with programmed cell death (apoptosis).
Ageing versus immortality
Human beings and members of other species, especially animals, age and die. Fungi, too, can age. In contrast, many species can be considered potentially immortal: for example, bacteria fission to produce daughter cells, strawberry plants grow runners to produce clones of themselves, and animals in the genus Hydra have a regenerative ability by which they avoid dying of old age.
Early life forms on Earth, starting at least 3.7 billion years ago, were single-celled organisms. Such organisms (Prokaryotes, Protozoans, algae) multiply by fission into daughter cells; thus single-celled organisms have been thought not to age and to be potentially immortal under favorable conditions. However, evidence has been reported that aging leading to death occurs in the single-celled bacterium Escherichia coli, an organism that reproduces by morphologically symmetrical division. Evidence of aging has also been reported for the bacterium Caulobacter crescentus and the single-celled yeast Saccharomyces cerevisiae.
Ageing and mortality of the individual organism became more evident with the evolution of eukaryotic sexual reproduction, which occurred with the emergence of the fungal/animal kingdoms approximately a billion years ago, and the evolution of seed-producing plants 320 million years ago. The sexual organism could henceforth pass on some of its genetic material to produce new individuals and could itself become disposable with respect to the survival of its species. This classic biological idea has however been perturbed recently by the discovery that the bacterium E. coli may split into distinguishable daughter cells, which opens the theoretical possibility of "age classes" among bacteria.
Even within humans and other mortal species, there are cells with the potential for immortality: cancer cells which have lost the ability to die when maintained in a cell culture such as the HeLa cell line, and specific stem cells such as germ cells (producing ova and spermatozoa). In artificial cloning, adult cells can be rejuvenated to embryonic status and then used to grow a new tissue or animal without ageing. Normal human cells however die after about 50 cell divisions in laboratory culture (the Hayflick Limit, discovered by Leonard Hayflick in 1961).
Symptoms
A number of characteristic ageing symptoms are experienced by a majority, or by a significant proportion of humans during their lifetimes.
Teenagers lose the young child's ability to hear high-frequency sounds above 20 kHz.
Wrinkles develop mainly due to photoageing, particularly affecting sun-exposed areas such as the face.
After peaking from the late teens to the late 20s, female fertility declines.
After age 30, the mass of the human body decreases until about age 70 and then shows damping oscillations.
People over 35 years of age are at increasing risk for losing strength in the ciliary muscle of the eyes, which leads to difficulty focusing on close objects, or presbyopia. Most people experience presbyopia by age 45–50. The cause is lens hardening by decreasing levels of alpha-crystallin, a process which may be sped up by higher temperatures.
Around age 55, hair turns grey. Pattern hair loss by the age of 55 affects about 30–50% of males and a quarter of females.
Menopause typically occurs between 44 and 58 years of age.
In the 60–64 age cohort, the incidence of osteoarthritis rises to 53%. Only 20%, however, report disabling osteoarthritis at this age.
Almost half of people older than 75 have hearing loss (presbycusis), inhibiting spoken communication. Many vertebrates such as fish, birds and amphibians do not develop presbycusis in old age, as they are able to regenerate their cochlear sensory cells; mammals, including humans, have genetically lost this ability.
By age 80, more than half of all Americans either have a cataract or have had cataract surgery.
Frailty, a syndrome of decreased strength, physical activity, physical performance and energy, affects 25% of those over 85. Muscles have a reduced capacity of responding to exercise or injury and loss of muscle mass and strength (sarcopenia) is common. Maximum oxygen use and maximum heart rate decline. Hand strength and mobility decrease.
Atherosclerosis is classified as an ageing disease. It leads to cardiovascular disease (for example, stroke and heart attacks), which, globally, is the most common cause of death. Vessel ageing causes vascular remodelling and loss of arterial elasticity, and as a result, causes the stiffness of the vasculature.
Evidence suggests that age-related risk of death plateaus after the age of 105. The maximum human lifespan is suggested to be 115 years. The oldest reliably recorded human was Jeanne Calment, who died in 1997 at 122.
Dementia becomes more common with age. About 3% of people between the ages of 65 and 74, 19% of those between 75 and 84, and nearly half of those over 85 years old have dementia. The spectrum ranges from mild cognitive impairment to the neurodegenerative diseases of Alzheimer's disease, cerebrovascular disease, Parkinson's disease and Lou Gehrig's disease. Furthermore, many types of memory decline with ageing, but not semantic memory or general knowledge such as vocabulary definitions. These typically increase or remain steady until late adulthood (see Ageing brain). Intelligence declines with age, though the rate varies depending on the type and may, in fact, remain steady throughout most of the human lifespan, dropping suddenly only as people near the end of their lives. Individual variations in the rate of cognitive decline may therefore be explained in terms of people having different lengths of life. There are changes to the brain: after 20 years of age, there is a 10% reduction each decade in the total length of the brain's myelinated axons.
Age can result in visual impairment, whereby non-verbal communication is reduced, which can lead to isolation and possible depression. Older adults, however, may not experience depression as much as younger adults, and were paradoxically found to have improved mood, despite declining physical health. Macular degeneration causes vision loss and increases with age, affecting nearly 12% of those above the age of 80. This degeneration is caused by systemic changes in the circulation of waste products and by the growth of abnormal vessels around the retina.
Other visual diseases that often appear with age are cataracts and glaucoma. A cataract occurs when the lens of the eye becomes cloudy, making vision blurry; it eventually causes blindness if untreated. They develop over time and are seen most often with those that are older. Cataracts can be treated through surgery. Glaucoma is another common visual disease that appears in older adults. Glaucoma is caused by damage to the optic nerve, causing vision loss. Glaucoma usually develops over time, but there are variations to glaucoma, and some have a sudden onset. There are a few procedures for glaucoma, but there is no cure or fix for the damage, once it has occurred. Prevention is the best measure in the case of glaucoma.
In addition to physical symptoms, aging can also cause a number of mental health issues as older adults deal with challenges such as the death of loved ones, retirement and loss of purpose, as well as their own health issues. Some warning signs are: changes in mood or energy, changes in sleep or eating habits, pain, sadness, unhealthy coping mechanisms such as smoking, suicidal ideations, and others. Older adults are more prone to social isolation as well, which can further increase the risk for physical and mental conditions such as anxiety, depression, and cognitive decline.
A distinction can be made between "proximal ageing" (age-based effects that come about because of factors in the recent past) and "distal ageing" (age-based differences that can be traced to a cause in a person's early life, such as childhood poliomyelitis).
Ageing is among the greatest known risk factors for most human diseases. Of the roughly 150,000 people who die each day across the globe, about two-thirds (100,000 per day) die from age-related causes. In industrialized nations, the proportion is higher, reaching 90%.
Biological basis
In the 21st century, researchers are only beginning to investigate the biological basis of ageing even in relatively simple and short-lived organisms, such as yeast. Little is known of mammalian ageing, in part due to the much longer lives of even small mammals, such as the mouse (around 3 years). A model organism for the study of ageing is the nematode C. elegans, whose short lifespan of 2–3 weeks enables genetic manipulations or suppression of gene activity with RNA interference, among other approaches. Most known mutations and RNA interference targets that extend lifespan were first discovered in C. elegans.
The factors proposed to influence biological ageing fall into two main categories, programmed and error-related. Programmed factors follow a biological timetable that might be a continuation of inherent mechanisms that regulate childhood growth and development. This regulation would depend on changes in gene expression that affect the systems responsible for maintenance, repair and defense responses.
Factors causing errors or damage include internal and environmental events that induce cumulative deterioration in one or more organs.
Molecular and cellular hallmarks of ageing
One 2013 review assessed ageing through the lens of the damage theory, proposing nine metabolic "hallmarks" of ageing in various organisms but especially mammals:
genomic instability (mutations accumulated in nuclear DNA, in mtDNA, and in the nuclear lamina)
telomere attrition (the authors note that artificial telomerase confers non-cancerous immortality to otherwise mortal cells)
epigenetic alterations (including DNA methylation patterns, post-translational modification of histones, and chromatin remodelling). Ageing and disease are related to a misregulation of gene expression through impaired methylation patterns, from hypomethylation to hypermethylation.
loss of proteostasis (protein folding and proteolysis)
deregulated nutrient sensing (relating to the Growth hormone/Insulin-like growth factor 1 signalling pathway, which is the most conserved ageing-controlling pathway in evolution and among its targets are the FOXO3/Sirtuin transcription factors and the mTOR complexes, probably responsive to caloric restriction)
mitochondrial dysfunction (the authors point out however that a causal link between ageing and increased mitochondrial production of reactive oxygen species is no longer supported by recent research)
cellular senescence (accumulation of no longer dividing cells in certain tissues, a process induced especially by p16INK4a/Rb and p19ARF/p53 to stop cancerous cells from proliferating)
stem cell exhaustion (in the authors' view caused by damage factors such as those listed above)
altered intercellular communication (encompassing especially inflammation but possibly also other intercellular interactions)
inflammageing, a chronic inflammatory phenotype in the elderly in the absence of viral infection, due to over-activation and a decrease in the precision of the innate immune system
dysbiosis of gut microbiome (e.g., loss of microbial diversity, expansion of enteropathogens, and altered vitamin B12 biosynthesis) is correlated with biological age rather than chronological age.
Metabolic pathways involved in ageing
There are three main metabolic pathways which can influence the rate of ageing, discussed below:
the FOXO3/Sirtuin pathway, probably responsive to caloric restriction
the Growth hormone/Insulin-like growth factor 1 signalling pathway
the activity levels of the electron transport chain in mitochondria and (in plants) in chloroplasts.
It is likely that most of these pathways affect ageing separately, because targeting them simultaneously leads to additive increases in lifespan.
Programmed factors
The rate of ageing varies substantially across different species, and this, to a large extent, is genetically based. For example, numerous perennial plants ranging from strawberries and potatoes to willow trees typically produce clones of themselves by vegetative reproduction and are thus potentially immortal, while annual plants such as wheat and watermelons die each year and reproduce by sexual reproduction. In 2008 it was discovered that inactivation of only two genes in the annual plant Arabidopsis thaliana leads to its conversion into a potentially immortal perennial plant. The oldest animals known so far are 15,000-year-old Antarctic sponges, which can reproduce both sexually and clonally.
Clonal immortality apart, there are certain species whose individual lifespans stand out among Earth's life-forms, including the bristlecone pine at 5062 years or 5067 years, invertebrates like the hard clam (known as quahog in New England) at 508 years, the Greenland shark at 400 years, various deep-sea tube worms at over 300 years, fish like the sturgeon and the rockfish, and the sea anemone and lobster. Such organisms are sometimes said to exhibit negligible senescence. The genetic aspect has also been demonstrated in studies of human centenarians.
Evolution of ageing
Life span, like other phenotypes, is selected for in evolution. Traits that benefit early survival and reproduction will be selected for even if they contribute to an earlier death. Such a genetic effect is called the antagonistic pleiotropy effect when referring to a gene (pleiotropy signifying the gene has a double function – enabling reproduction at a young age but costing the organism life expectancy in old age) and is called the disposable soma effect when referring to an entire genetic programme (the organism diverting limited resources from maintenance to reproduction). The biological mechanisms which regulate lifespan probably evolved with the first multicellular organisms more than a billion years ago. However, even single-celled organisms such as yeast have been used as models in ageing, hence ageing has its biological roots much earlier than multi-cellularity.
Damage-related factors
DNA damage theory of ageing: DNA damage is thought to be the common basis of both cancer and ageing, and it has been argued that intrinsic causes of DNA damage are the most important causes of ageing. Genetic damage (aberrant structural alterations of the DNA), mutations (changes in the DNA sequence), and epimutations (methylation of gene promoter regions or alterations of the DNA scaffolding which regulate gene expression), can cause abnormal gene expression. DNA damage causes the cells to stop dividing or induces apoptosis, often affecting stem cell pools and therefore hindering regeneration. However, lifelong studies of mice suggest that most mutations happen during embryonic and childhood development, when cells divide often, as each cell division is a chance for errors in DNA replication. A meta analysis study of 36 studies with 4,676 participants showed an association between age and DNA damage in humans. In the human hematopoietic stem cell compartment DNA damage accumulates with age. In healthy humans after 50 years of age, chronological age shows a linear association with DNA damage accumulation in blood mononuclear cells. Genome-wide profiles of DNA damage can be used as highly accurate predictors of mammalian age.
Genetic instability: Dogs annually lose approximately 3.3% of the DNA in their heart muscle cells while humans lose approximately 0.6% of their heart muscle DNA each year. These numbers are close to the ratio of the maximum longevities of the two species (120 years vs. 20 years, a 6/1 ratio). The comparative percentage is also similar between the dog and human for yearly DNA loss in the brain and lymphocytes. As stated by lead author, Bernard L. Strehler, "... genetic damage (particularly gene loss) is almost certainly (or probably the) central cause of ageing."
Accumulation of waste:
A buildup of waste products in cells presumably interferes with metabolism. For example, a waste product called lipofuscin is formed by a complex reaction in cells that binds fat to proteins. Lipofuscin may accumulate in the cells as small granules during ageing.
The hallmark of ageing yeast cells appears to be overproduction of certain proteins.
Autophagy induction can enhance clearance of toxic intracellular waste associated with neurodegenerative diseases and has been comprehensively demonstrated to improve lifespan in yeast, worms, flies, rodents and primates. The situation, however, has been complicated by the identification that autophagy up-regulation can also occur during ageing.
Wear-and-tear theory: The general idea that changes associated with ageing are the result of chance damage that accumulates over time.
Accumulation of errors: The idea that ageing results from chance events that escape proofreading mechanisms, which gradually damages the genetic code.
Heterochromatin loss: a model of ageing.
Cross-linkage: The idea that ageing results from accumulation of cross-linked compounds that interfere with normal cell function.
Studies of mtDNA mutator mice have shown that increased levels of somatic mtDNA mutations directly can cause a variety of ageing phenotypes. The authors propose that mtDNA mutations lead to respiratory-chain-deficient cells and thence to apoptosis and cell loss. They cast doubt experimentally however on the common assumption that mitochondrial mutations and dysfunction lead to increased generation of reactive oxygen species (ROS).
Free-radical theory: Damage by free radicals, or more generally reactive oxygen species or oxidative stress, create damage that may give rise to the symptoms we recognise as ageing. The effect of calorie restriction may be due to increased formation of free radicals within the mitochondria, causing a secondary induction of increased antioxidant defence capacity.
Mitochondrial theory of ageing: free radicals produced by mitochondrial activity damage cellular components, leading to ageing.
DNA oxidation and caloric restriction: Caloric restriction reduces 8-OH-dG DNA damage in organs of ageing rats and mice. Thus, reduction of oxidative DNA damage is associated with a slower rate of ageing and increased lifespan. In a 2021 review article, Vijg stated that "Based on an abundance of evidence, DNA damage is now considered as the single most important driver of the degenerative processes that collectively cause ageing."
Research
Diet
The Mediterranean diet is credited with lowering the risk of heart disease and early death. The major contributors to mortality risk reduction appear to be a higher consumption of vegetables, fish, fruits, nuts and monounsaturated fatty acids, such as by consuming olive oil.
As of 2021, there is insufficient clinical evidence that calorie restriction or any dietary practice affects the process of ageing.
Exercise
People who participate in moderate to high levels of physical exercise have a lower mortality rate compared to individuals who are not physically active. The majority of the benefits from exercise are achieved with around 3500 metabolic equivalent (MET) minutes per week. For example, climbing stairs 10 minutes, vacuuming 15 minutes, gardening 20 minutes, running 20 minutes, and walking or bicycling for 25 minutes on a daily basis would together achieve about 3000 MET minutes a week.
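The weekly figure can be checked with simple arithmetic (a sketch in Python; the MET intensities assigned to each activity below are approximate values assumed for illustration, not figures taken from the cited source):

```python
# Rough arithmetic sketch of the weekly MET-minute estimate quoted above.
activities = {
    # activity: (minutes per day, assumed MET intensity)
    "climbing stairs":      (10, 8.0),
    "vacuuming":            (15, 3.3),
    "gardening":            (20, 3.8),
    "running":              (20, 7.0),
    "walking or bicycling": (25, 4.5),
}

daily = sum(minutes * met for minutes, met in activities.values())
weekly = 7 * daily
print(f"approx. {weekly:.0f} MET minutes per week")  # on the order of 3000
```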
Exercise has also been found to be an effective measure to treat declines in neuromuscular function due to age. A meta-analysis found that resistance training with elastic bands or kettlebells provided significant improvements to grip strength, gait speed, and skeletal muscle mass in patients with sarcopenia. Furthermore, another analysis found that the positive effects of resistance exercise on strength, muscle mass, and motor coordination reduce the risk of falls in the elderly, which is a key factor for living a longer and healthier life. In terms of programming, there is no one-size-fits-all approach. General recommendations for improvements to gait speed, strength, and muscle size for reduced fall risk are resistance training programs with two to three 40-60 minute workouts per week, consisting of 1-2 sets of 5-8 repetitions of 2-3 different exercises for each major muscle group, but individual considerations must be taken due to differences in health status, motivation, and accessibility to exercise facilities.
There is also evidence to suggest that exercise of any type may mitigate the degradation of the neuromuscular junction (NMJ) that occurs with age. Current evidence suggests that aerobic exercise causes the most hypertrophy of the NMJ, although resistance training is still somewhat effective. However, further evidence is necessary to identify optimal training protocols for NMJ function and to further understand how exercise affects the mechanisms that cause NMJ degradation.
Social factors
A meta-analysis showed that loneliness carries a higher mortality risk than smoking.
Society and culture
Different cultures express age in different ways. The age of an adult human is commonly measured in whole years since the day of birth. (The most notable exception, East Asian age reckoning, is becoming less common, particularly in official contexts.) Arbitrary divisions set to mark periods of life may include juvenile (from infancy through childhood, preadolescence, and adolescence), early adulthood, middle adulthood, and late adulthood. Informal terms include "tweens", "teenagers", "twentysomething", "thirtysomething", etc. as well as "denarian", "vicenarian", "tricenarian", "quadragenarian", etc.
Most legal systems define a specific age for when an individual is allowed or obliged to do particular activities. These age specifications include voting age, drinking age, age of consent, age of majority, age of criminal responsibility, marriageable age, age of candidacy, and mandatory retirement age. Admission to a movie, for instance, may depend on age according to a motion picture rating system. A bus fare might be discounted for the young or old. Each nation, government, and non-governmental organization has different ways of classifying age. In other words, chronological ageing may be distinguished from "social ageing" (cultural age-expectations of how people should act as they grow older) and "biological ageing" (an organism's physical state as it ages).
Ageism cost the United States $63 billion in one year according to a Yale School of Public Health study. A UNFPA report about ageing in the 21st century highlighted the need to "Develop a new rights-based culture of ageing and a change of mindset and societal attitudes towards ageing and older persons, from welfare recipients to active, contributing members of society". UNFPA said that this "requires, among others, working towards the development of international human rights instruments and their translation into national laws and regulations and affirmative measures that challenge age discrimination and recognise older people as autonomous subjects". Older people's participation in music contributes to maintaining interpersonal relationships and promoting successful ageing. At the same time, older persons can make contributions to society, including caregiving and volunteering. For example, "A study of Bolivian migrants who [had] moved to Spain found that 69% left their children at home, usually with grandparents. In rural China, grandparents care for 38% of children aged under five whose parents have gone to work in cities."
Economics
Population ageing is the increase in the number and proportion of older people in society. Population ageing has three possible causes: migration, longer life expectancy (decreased death rate) and decreased birth rate. Ageing has a significant impact on society. Young people tend to have fewer legal privileges (if they are below the age of majority), they are more likely to push for political and social change, to develop and adopt new technologies, and to need education. Older people have different requirements from society and government, and frequently have differing values as well, such as for property and pension rights.
In the 21st century, one of the most significant population trends is ageing. Currently, over 11% of the world's population are people aged 60 and older, and the United Nations Population Fund (UNFPA) estimates that by 2050 that number will rise to approximately 22%. Ageing has occurred due to development which has enabled better nutrition, sanitation, health care, education and economic well-being. Consequently, fertility rates have continued to decline and life expectancy has risen. Life expectancy at birth is over 80 now in 33 countries. Ageing is a "global phenomenon" that is occurring fastest in developing countries, including those with large youth populations, and poses social and economic challenges that can be overcome with "the right set of policies to equip individuals, families and societies to address these challenges and to reap its benefits".
As life expectancy rises and birth rates decline in developed countries, the median age rises accordingly. According to the United Nations, this process is taking place in nearly every country in the world. A rising median age can have significant social and economic implications, as the workforce gets progressively older and the number of old workers and retirees grows relative to the number of young workers. Older people generally incur more health-related costs than do younger people in the workplace and can also cost more in worker's compensation and pension liabilities. In most developed countries an older workforce is somewhat inevitable. In the United States for instance, the Bureau of Labor Statistics estimates that one in four American workers will be 55 or older by 2020.
Among the most urgent concerns of older persons worldwide is income security. This poses challenges for governments with ageing populations to ensure that investments in pension systems continue to provide economic independence and reduce poverty in old age. These challenges vary for developing and developed countries. UNFPA stated that "Sustainability of these systems is of particular concern, particularly in developed countries, while social protection and old-age pension coverage remain a challenge for developing countries, where a large proportion of the labour force is found in the informal sector."
The global economic crisis has increased financial pressure to ensure economic security and access to health care in old age. To alleviate this pressure, "social protection floors must be implemented in order to guarantee income security and access to essential health and social services for all older persons and provide a safety net that contributes to the postponement of disability and prevention of impoverishment in old age".
It has been argued that population ageing has undermined economic development and can lead to lower inflation because elderly individuals care especially strongly about the value of their pensions and savings. Evidence suggests that pensions, while making a difference to the well-being of older persons, also benefit entire families especially in times of crisis when there may be a shortage or loss of employment within households. A study by the Australian Government in 2003 estimated that "women between the ages of 65 and 74 years contribute A$16 billion per year in unpaid caregiving and voluntary work. Similarly, men in the same age group contributed A$10 billion per year."
Due to the increasing share of the elderly in the population, health care expenditures will continue to grow relative to the economy in coming decades. This has been viewed as a negative development, and effective strategies such as enhancing labour productivity have been proposed to deal with the negative consequences of ageing.
Sociology
In the field of sociology and mental health, ageing is seen in five different views: ageing as maturity, ageing as decline, ageing as a life-cycle event, ageing as generation, and ageing as survival. Positive correlates with ageing often include economics, employment, marriage, children, education, and sense of control, as well as many others. The social science of ageing includes disengagement theory, activity theory, selectivity theory, and continuity theory. Retirement, a common transition faced by the elderly, may have both positive and negative consequences. With cyborg technologies currently on the rise, some theorists argue there is a need to develop new definitions of ageing; for instance, a bio-techno-social definition of ageing has been suggested.
There is a current debate as to whether or not the pursuit of longevity and the postponement of senescence are cost-effective health care goals given finite health care resources. Because of the accumulated infirmities of old age, bioethicist Ezekiel Emanuel opines that the pursuit of longevity via the compression of morbidity hypothesis is a "fantasy" and that human life is not worth living after age 75; longevity then should not be a goal of health care policy. This opinion has been contested by neurosurgeon and medical ethicist Miguel Faria, who states that life can be worthwhile during old age, and that longevity should be pursued in association with the attainment of quality of life. Faria claims that postponement of senescence as well as happiness and wisdom can be attained in old age in a large proportion of those who lead healthy lifestyles and remain intellectually active.
Health care demand
With age, inevitable biological changes occur that increase the risk of illness and disability. UNFPA states that:
"A life-cycle approach to health care – one that starts early, continues through the reproductive years and lasts into old age – is essential for the physical and emotional well-being of older persons, and, indeed, all people. Public policies and programmes should additionally address the needs of older impoverished people who cannot afford health care."
Many societies in Western Europe and Japan have ageing populations. While the effects on society are complex, there is a concern about the impact on health care demand. The large number of suggestions in the literature for specific interventions to cope with the expected increase in demand for long-term care in ageing societies can be organized under four headings: improve system performance; redesign service delivery; support informal caregivers; and shift demographic parameters.
However, the annual growth in national health spending is not mainly due to increasing demand from ageing populations, but rather has been driven by rising incomes, costly new medical technology, a shortage of health care workers and informational asymmetries between providers and patients. A number of health problems become more prevalent as people get older. These include mental health problems as well as physical health problems, especially dementia.
It has been estimated that population ageing only explains 0.2 percentage points of the annual growth rate in medical spending of 4.3% since 1970. In addition, certain reforms to the Medicare system in the United States decreased elderly spending on home health care by 12.5% per year between 1996 and 2000.
Self-perception
Beauty standards have evolved over time, and as scientific research into cosmeceuticals (cosmetic products claimed to have medicinal benefits, such as anti-ageing creams) has increased, the industry has also expanded; the kinds of products it produces, such as serums and creams, have gradually gained popularity and become a part of many people's personal care routines.
The increase in demand for cosmeceuticals has led scientists to find ingredients for these products in unorthodox places. For example, the secretion of Cryptomphalus aspersa (the brown garden snail) has been found to have antioxidant properties, increase skin cell proliferation, and increase extracellular proteins such as collagen and fibronectin (important proteins for cell proliferation). Another substance used to prevent the physical manifestations of ageing is onobotulinumtoxinA, the toxin injected for Botox.
In some cultures, old age is celebrated and honoured. In Korea, for example, a special party called hwangap is held to celebrate and congratulate an individual for turning 60 years old. In China, respect for elderly is often the basis for how a community is organized and has been at the foundation of Chinese culture and morality for thousands of years. Older people are respected for their wisdom and most important decisions have traditionally not been made without consulting them. This is a similar case for most Asian countries such as the Philippines, Thailand, Vietnam, Singapore, etc.
Positive self-perceptions of ageing are associated with better mental and physical health and well-being. Positive self-perception of health has been correlated with higher well-being and reduced mortality among the elderly. Various reasons have been proposed for this association; people who are objectively healthy may naturally rate their health better than their ill counterparts do, though this link has been observed even in studies which have controlled for socioeconomic status, psychological functioning and health status. This finding is generally stronger for men than women, though this relationship is not universal across all studies and may only be true in some circumstances.
As people age, subjective health remains relatively stable, even though objective health worsens. In fact, perceived health improves with age when objective health is controlled in the equation. This phenomenon is known as the "paradox of ageing". This may be a result of social comparison; for instance, the older people get, the more they may consider themselves in better health than their same-aged peers. Elderly people often associate their functional and physical decline with the normal ageing process.
One way to help younger people experience what it feels like to be older is through an ageing suit. There are several different kinds of suits, including the GERT (named as a reference to gerontology), the R70i exoskeleton, and the AGNES (Age Gain Now Empathy Suit) suits. These suits simulate the effects of ageing by adding extra weight and increased pressure at certain points such as the wrists, ankles and other joints. In addition, the various suits have different ways to impair vision and hearing to simulate the loss of these senses. To create the loss of feeling in the hands that the elderly experience, special gloves are part of the suits.
Use of these suits may help to increase the amount of empathy felt for the elderly and could be considered particularly useful for those who are either learning about ageing, or those who work with the elderly, such as nurses or care centre staff.
Design is another field that could benefit from the empathy these suits may cause. When designers understand what it feels like to have the impairments of old age, they can better design buildings, packaging, or even tools to help with the simple day-to-day tasks that are more difficult with less dexterity. Designing with the elderly in mind may help to reduce the negative feelings that are associated with the loss of abilities that the elderly face.
Healthy ageing
The healthy ageing framework, proposed by the World Health Organization, operationalizes health as functional ability, which results from the interaction of intrinsic capacity and the environment.
Intrinsic capacity
Intrinsic capacity is a construct encompassing people's physical and mental abilities which can be drawn upon during ageing. Intrinsic capacity comprises the domains of: cognition, locomotion, vitality/nutrition, psychological and sensory (visual and hearing).
A recent study found four "profiles" or "statuses" of intrinsic capacity among older adults, namely high IC (43% at baseline), low deterioration with impaired locomotion (17%), high deterioration without cognitive impairment (22%) and high deterioration with cognitive impairment (18%). Over half of the study sample remained in the same status at baseline and follow-up (61%). Around one-fourth of participants transitioned from the high IC to the low deterioration status, and only 3% of the participants improved their status. Interestingly, the probability of improvement was observed in the status of high deterioration. Participants in the latent statuses of low and high levels of deterioration had a significantly higher risk of frailty, disability and dementia than their high IC counterparts.
Successful aging
The concept of successful aging can be traced back to the 1950s and was popularized in the 1980s. Traditional definitions of successful aging have emphasized absence of physical and cognitive disabilities. In their 1987 article, Rowe and Kahn characterized successful aging as involving three components: a) freedom from disease and disability, b) high cognitive and physical functioning, and c) social and productive engagement. Since that study dates back to 1987, the factors associated with successful aging may well have changed since then, and more recently researchers have begun to examine the effect of spirituality on successful aging. There are some differences across cultures as to which of these components are the most important. Most often across cultures, social engagement was the most highly rated, but depending on the culture the definition of successful aging changes.
Cultural references
The ancient Greek dramatist Euripides (5th century BC) describes the multiple-headed mythological monster Hydra as having a regenerative capacity which makes it immortal, which is the historical background to the name of the biological genus Hydra. The Book of Job (c. 6th century BC) describes the human lifespan as inherently limited and makes a comparison with the innate immortality that a felled tree may have when undergoing vegetative regeneration.
See also
Ageing brain
Ageing movement control
Ageing of Europe
Ageing studies
Anti-ageing movement
Biodemography of human longevity
Biogerontology
Biological immortality
Biomarkers of ageing
Clinical geropsychology
Death
DNA damage theory of aging
Epigenetic clock
Evolution of ageing
Genetics of ageing
Gerontechnology
Gerontology
Gerascophobia
List of life extension-related topics
Longevity
Mitochondrial theory of ageing
Neuroscience of ageing
Old age
Old person smell
Particulates
Pollutants
Population ageing
Progeria
Rejuvenation
Stem cell theory of ageing
Supercentenarian
Thermoregulation in humans
Transgenerational design
References
Gerontology
Old age | Ageing | [
"Biology"
] | 7,980 | [
"Gerontology"
] |
16,413,876 | https://en.wikipedia.org/wiki/Biotechnology%20consulting | Biotechnology consulting (or biotech consulting) refers to the practice of assisting organizations involved in research and commercialization of biotechnology in improving their methods and efficiency of production, and approaches to R&D. This assistance is usually provided in the form of specialized technological advice and sharing of expertise. Both start-up and established organizations would hire biotechnology consultants mainly to receive independent and professional advice from key opinion leaders, individuals with extensive knowledge and experience in a particular area of biotechnology or biological sciences, and, often, to outsource their projects for implementation by well-qualified individuals. Large management consulting firms would often be able to provide technological advice as well, depending on the qualifications of their consulting team. With the growth of pharmaceutical companies, biotechnology consulting has recently developed into an industry of its own and separated from the management consulting industry that traditionally also provides technological advice on R&D projects to various industries. This has also been fueled by the impact various conflicts of interest can have on commercialization when biotechnology organizations contract services from academic institutions or government scientists.
This is exemplified by the successful emergence of many consulting companies dedicated exclusively to servicing the biotech industry. Occasionally, university professors and PhD students engage in biotechnology consulting, either commercially or free of charge.
A special type of consulting is patent strategy and management consulting, or simply patent consulting, which specifically emphasizes the scope of patent rights versus R&D in industry. It also assists the successful commercialization of patentable matter. The primary aim of a patent consulting company is to assist small, medium and large corporations in steering their research projects toward successful patent registration with minimized danger of infringement and other risks that patent registrations may be subjected to prior to commercialization. One example of a patent consulting firm is The Patent World.
References
Consulting by type
Biotechnology organizations | Biotechnology consulting | [
"Engineering",
"Biology"
] | 353 | [
"Biotechnology organizations"
] |
16,415,701 | https://en.wikipedia.org/wiki/Fast%20Library%20for%20Number%20Theory | The Fast Library for Number Theory (FLINT) is a C library for number theory applications. The two major areas of functionality currently implemented in FLINT are polynomial arithmetic over the integers and a quadratic sieve. The library is designed to be compiled with the GNU Multi-Precision Library (GMP) and is released under the GNU General Public License. It is developed by William Hart of the University of Kaiserslautern (formerly University of Warwick) and David Harvey of University of New South Wales (formerly Harvard University) to address the speed limitations of the PARI and NTL libraries.
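As a concrete illustration of this functionality, the following minimal sketch multiplies two integer polynomials using FLINT's fmpz_poly module. The function names follow the FLINT documentation, but the exact signatures and the build command shown in the comment should be treated as assumptions to verify against the installed version.

```c
/* Sketch: multiply two integer polynomials with FLINT's fmpz_poly module.
 * Build (assumed): gcc flint_demo.c -lflint -lgmp -o flint_demo */
#include <stdio.h>
#include <flint/fmpz_poly.h>

int main(void)
{
    fmpz_poly_t f, g, h;
    fmpz_poly_init(f);
    fmpz_poly_init(g);
    fmpz_poly_init(h);

    /* f = 3x^2 + 2x + 1, g = x + 5 */
    fmpz_poly_set_coeff_ui(f, 0, 1);
    fmpz_poly_set_coeff_ui(f, 1, 2);
    fmpz_poly_set_coeff_ui(f, 2, 3);
    fmpz_poly_set_coeff_ui(g, 0, 5);
    fmpz_poly_set_coeff_ui(g, 1, 1);

    fmpz_poly_mul(h, f, g);            /* h = f * g */

    fmpz_poly_print_pretty(h, "x");    /* prints the product polynomial */
    printf("\n");

    fmpz_poly_clear(f);
    fmpz_poly_clear(g);
    fmpz_poly_clear(h);
    return 0;
}
```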
Design Philosophy
Asymptotically Fast Algorithms
Implementations as Fast as or Faster than Alternatives
Written in Pure C
Reliance on GMP
Extensively Tested
Extensively Profiled
Support for Parallel Computation
Functionality
Polynomial Arithmetic over the Integers
Quadratic Sieve
References
Further reading
FLINT 1.0.9: Fast Library for Number Theory by William Hart and David Harvey
Computational number theory
Free software programmed in C
Integer factorization algorithms
Numerical software | Fast Library for Number Theory | [
"Mathematics"
] | 197 | [
"Computational number theory",
"Computational mathematics",
"Numerical software",
"Number theory",
"Mathematical software"
] |
16,417,451 | https://en.wikipedia.org/wiki/Silva%20Ciminia | The Silva Ciminia, the Ciminian Forest, was the unbroken primeval forest that separated Ancient Rome from Etruria. According to the Roman historian Livy it was, in the 4th century BCE, a feared, pathless wilderness in which few dared tread.
History
The Ciminian Forest received its name from the Monti Cimini, which are still a densely wooded range of volcanic hills northwest of Rome. They form the part of the forerange of the Apennine main range that faces towards the Tyrrhenian Sea.
In the south, the Silva Ciminia stretched from Lake Bracciano to the edges of the flat plain of the Roman Campagna, in the lower Tiber Valley. Stretches of cleared fields round the major Etruscan settlements formed the Ager Veientanus that supported Veii, the Ager Faliscus of the Falisci, and the Ager Capenas of Capena. In the heart of the Ciminian woodlands lay the Ciminus Lake (Lago di Vico). In the northwest, they reached as far as Tarquinia.
The forest was predominantly formed by oak and beech, though second growth in the lower slopes has favoured the aggressively re-seeding Spanish chestnut. A relict stand of beech, rare in Central Italy, remains on the upper slopes of Monte Cimino. Sub-fossil pollen analyses from cores of stratified sediment taken in the region's crater lakes typically reveal a pollen sequence characteristic of tundra lying over an all-but-sterile wind-blown loess sand; this in turn was followed by grassland, with pollen of water-lilies and pondweeds blown from glacial meltwater lakes. The earliest Holocene forest was fir, followed by mixed pine and oak, with a climax forest of beech and oak, including Quercus ilex.
The surface profiles have been transformed since the region was first deforested in Roman times, as settlers worked outwards from strips flanking the Roman roads — the via Cassia, the via Amerina and the via Flaminia — which had been struck through the forest. In the deforested slopes, streams with even moderate flow have cut deeply eroded gullies and valleys in the geologically very recent soft tuff and volcanic ash. A sudden increase in organic sediments in strata corresponding to the third century BCE records this erosion following agrarian deforestation, which, far downstream, would initiate the Tiber's delta. Thereafter the palynological record attests many cultivated plants, and, significantly, nettles, the weed of disturbed, untended corners that follows temperate agriculture everywhere. By the third and fourth centuries CE very little of the primeval forest survived.
To the Romans of the Republic, the forest was as much feared as the trackless Hercynian Forest would be when they encountered that. In 310 BCE the Roman Senate, even after the rout of the Etruscans at Sutrium, charged the consul Fabius Maximus Rullianus not to enter this woodland in pursuit of the Etruscans, and when it emerged that he had done so, all Rome was struck with terror. The Silva formed a natural barrier between Ancient Rome and Etruria.
Notes
Old-growth forests
Forests of Italy
Former forests
Falisci | Silva Ciminia | [
"Biology"
] | 674 | [
"Old-growth forests",
"Ecosystems"
] |
16,420,547 | https://en.wikipedia.org/wiki/Characteristic%20velocity | Characteristic velocity or c*, or C-star, is a measure of the combustion performance of a rocket engine independent of nozzle performance, and is used to compare different propellants and propulsion systems. c* should not be confused with c, the effective exhaust velocity, which is related to the specific impulse by c = Isp·g0. Specific impulse and effective exhaust velocity depend on the nozzle design, unlike the characteristic velocity, which is why C-star is an important value when comparing the efficiencies of different propulsion systems. c* can be useful when comparing actual combustion performance to theoretical performance in order to determine how completely the chemical energy release occurred. This is known as c*-efficiency.
Formula

    c* = p1·At / ṁ = Isp·g0 / Cf = c / Cf = √(γ·R·T1) / ( γ·√( (2/(γ+1))^((γ+1)/(γ−1)) ) )

where:
c* is the characteristic velocity (m/s, ft/s)
p1 is the chamber pressure (Pa, psi)
At is the area of the throat (m2, in2)
ṁ is the mass flow rate of the engine (kg/s, slug/s)
Isp is the specific impulse (s)
g0 is the gravitational acceleration at sea-level (m/s2)
Cf is the thrust coefficient
c is the effective exhaust velocity (m/s)
γ is the specific heat ratio for the exhaust gases
R is the gas constant per unit weight (J/kg-K)
T1 is the chamber temperature (K)
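To make the last form of the relation concrete, the sketch below evaluates c* for illustrative chamber conditions; the values of γ, R and T1 are assumptions chosen only to give a typical order of magnitude, not data for any particular engine.

```c
/* Sketch: c* = sqrt(gamma*R*T1) / ( gamma * (2/(gamma+1))^((gamma+1)/(2*(gamma-1))) )
 * evaluated for assumed, illustrative chamber conditions. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double gamma = 1.2;     /* specific heat ratio of the exhaust gases (assumed) */
    double R     = 350.0;   /* gas constant per unit mass, J/(kg*K) (assumed)     */
    double T1    = 3500.0;  /* chamber temperature, K (assumed)                   */

    double Gamma  = gamma * pow(2.0 / (gamma + 1.0),
                                (gamma + 1.0) / (2.0 * (gamma - 1.0)));
    double c_star = sqrt(gamma * R * T1) / Gamma;

    printf("c* = %.0f m/s\n", c_star);   /* about 1700 m/s for these inputs */
    return 0;
}
```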
References
Rocket Propulsion Elements, 7th Edition by George P. Sutton, Oscar Biblarz
Rocket Propulsion Elements, 9th Edition by George P. Sutton, Oscar Biblarz
Rocketry
Rocket propulsion
Aerospace engineering | Characteristic velocity | [
"Astronomy",
"Engineering"
] | 289 | [
"Rocketry",
"Rocketry stubs",
"Astronomy stubs",
"Aerospace engineering"
] |
16,422,485 | https://en.wikipedia.org/wiki/Ultrafine%20particle | Ultrafine particles (UFPs) are particulate matter of nanoscale size (less than 0.1 μm or 100 nm in diameter). Regulations do not exist for this size class of ambient air pollution particles, which are far smaller than the regulated PM10 and PM2.5 particle classes and are believed to have several more aggressive health implications than those classes of larger particulates.
Although they remain largely unregulated, the World Health Organization has published good practice statements regarding measuring UFPs.
There are two main divisions that categorize types of UFPs. UFPs can either be carbon-based or metallic, and then can be further subdivided by their magnetic properties. Electron microscopy and special physical lab conditions allow scientists to observe UFP morphology. Airborne UFPs can be measured using a condensation particle counter, in which particles are mixed with alcohol vapor and then cooled, allowing the vapor to condense around them, after which they are counted using a light scanner. UFPs are both manufactured and naturally occurring. UFPs are the main constituent of airborne particulate matter. Owing to their large quantity and ability to penetrate deep within the lung, UFPs are a major concern for respiratory exposure and health.
Sources and applications
UFPs are both manufactured and naturally occurring. Hot volcanic lava, ocean spray, and smoke are common natural UFPs sources. UFPs can be intentionally fabricated as fine particles to serve a vast range of applications in both medicine and technology. Other UFPs are byproducts, like emissions, from specific processes, combustion reactions, or equipment such as printer toner and automobile exhaust. Anthropogenic sources of UFPs include combustion of gas, coal or hydrocarbons, biomass burning (i.e. agricultural burning, forest fires and waste disposal), vehicular traffic and industrial emissions, tire wear and tear from car brakes, air traffic, seaport, maritime transportation, construction, demolition, restoration and concrete processing, domestic wood stoves, outdoor burning, kitchen, and cigarette smoke. In 2014, an air quality study found harmful ultrafine particles from the takeoffs and landings at Los Angeles International Airport to be of much greater magnitude than previously thought. There are a multitude of indoor sources that include but are not limited to laser printers, fax machines, photocopiers, the peeling of citrus fruits, cooking, tobacco smoke, penetration of contaminated outdoor air, chimney cracks and vacuum cleaners.
UFPs have a variety of applications in the medical and technology fields. They are used in diagnostic imagining, and novel drug delivery systems that include targeting the circulatory system, and or passage of the blood brain barrier to name just a few. Certain UFPs like silver based nanostructures have antimicrobial properties that are exploited in wound healing and internal instrumental coatings among other uses, in order to prevent infections. In the area of technology, carbon based UFPs have a plethora of applications in computers. This includes the use of graphene and carbon nanotubes in electronic as well as other computer and circuitry components. Some UFPs have characteristics similar to gas or liquid and are useful in powders or lubricants.
Exposure, risk, and health effects
The main exposure to UFPs is through inhalation. Owing to their size, UFPs are considered to be respirable particles. Contrary to the behaviour of inhaled PM10 and PM2.5, ultrafine particles are deposited in the lungs, where they have the ability to penetrate tissue and undergo interstitialization, or to be absorbed directly into the bloodstream—and therefore are not easily removed from the body and may have immediate effect. Exposure to UFPs, even if components are not very toxic, may cause oxidative stress, inflammatory mediator release, and could induce heart disease, lung disease, and other systemic effects.
The exact mechanism through which UFP exposure leads to health effects remains to be elucidated, but effects on blood pressure may play a role. It has recently been reported that UFP is associated with an increase in blood pressure in schoolchildren with the smallest particles inducing the largest effect. According to research, infants whose mothers were exposed to higher levels of UFPs during pregnancy are much more likely to develop asthma.
There is a range of potential human exposures that include occupational, due to the direct manufacturing process or a byproduct from an industrial or office environment, as well as incidental, from contaminated outdoor air and other byproduct emissions. In order to quantify exposure and risk, both in vivo and in vitro studies of various UFP species are currently being done using a variety of animal models including mouse, rat, and fish. These studies aim to establish toxicological profiles necessary for risk assessment, risk management, and potential regulation and legislation.
Some sizes of UFPs may be filtered from the air using ULPA filters.
Regulation and legislation
As the nanotechnology industry has grown, nanoparticles have brought UFPs more public and regulatory attention. UFP risk assessment research is still in the very early stages. There are continuing debates about whether to regulate UFPs and how to research and manage the health risks they may pose. As of March 19, 2008, the EPA does not yet regulate ultrafine particle emissions.
The EPA does require notification of the intentional manufacture of nanoparticles.
In 2008, the EPA drafted a Nanomaterial Research Strategy. There is also debate about how the European Union (EU) should regulate UFPs.
Political disputes
There is political dispute between China and South Korea on ultrafine dust. South Korea claims that about 80% of ultrafine dust comes from China, and that China and South Korea should cooperate to reduce the level of fine dust. China, however, argues that the Chinese government has already implemented its policies on the ecological environment. According to China's government, its air quality has improved by more than 40% since 2013. However, air pollution in South Korea has worsened, and the dispute between China and South Korea has become political. In March 2019, the Seoul Research Institute of Public Health and Environment said that 50% to 70% of the fine dust comes from China, and that China is therefore responsible for the air pollution in South Korea. The issue has provoked disagreement among citizens as well.
In July 2014, China's paramount leader Xi Jinping and the South Korean government agreed to carry out the Korea–China Cooperative Project, covering the sharing of observation data on air pollution, joint research on an air pollution forecast model and air pollution source identification, and human resources exchanges. Following this agreement, in 2018 China and South Korea signed the China–Korea Environmental Cooperation Plan to resolve environmental issues. The China Research Academy of Environmental Studies (CRAES) in Beijing is developing a building for the China–Korea Environmental Cooperation Center, including an office building and a laboratory building. Based on this cooperation, South Korea has already sent 10 environmental experts to China for research, and China will also send more experts for long-term research. Through these bilateral relations, China and the Republic of Korea are seeking to resolve air pollution in the Northeast Asia region and to promote international security.
See also
Diesel particulate matter
Health and safety hazards of nanomaterials
Metal fume fever
Metal working
Microplastics
Nanostructures
Open burning of waste
Power tool
Renovation
Welding
Wildfire
References
Further reading
External links
Current global map of PM distribution
Particulates | Ultrafine particle | [
"Chemistry"
] | 1,531 | [
"Particulates",
"Particle technology"
] |
16,423,973 | https://en.wikipedia.org/wiki/Direct-coupled%20transistor%20logic | Direct-coupled transistor logic (DCTL) is similar to resistor–transistor logic (RTL), but the input transistor bases are connected directly to the collector outputs without any base resistors. Consequently, DCTL gates have fewer components, are more economical, and are simpler to fabricate onto integrated circuits than RTL gates. Unfortunately, DCTL has much smaller signal levels, has more susceptibility to ground noise, and requires matched transistor characteristics. The transistors are also heavily overdriven; this is a good feature in that it reduces the saturation voltage of the output transistors, but it also slows the circuit down due to a high stored charge in the base. Gate fan-out is limited due to "current hogging": if the transistor base–emitter voltages () are not well matched, then the base–emitter junction of one transistor may conduct most of the input drive current at such a low base–emitter voltage that other input transistors fail to turn on.
DCTL is close to the simplest possible digital logic family, using close to fewest possible components per logical element.
A similar logic family, direct-coupled transistor–transistor logic, is faster than ECL.
John T. Wallmark and Sanford M. Marcus described direct-coupled transistor logic using JFETs. It was termed direct-coupled unipolar transistor logic (DCUTL). They published a variety of complex logic functions implemented as integrated circuits using JFETs, including complementary memory circuits.
DCTL in today's life
DCTL served as a stepping stone to later, more convenient logic families. Introduced roughly 65 years ago, it has since been followed by many updated variants, including resistor–transistor logic (RTL) and transistor–transistor logic (TTL). TTL functions similarly to DCTL, except that DCTL has lower signal levels and is more sensitive to ground noise, while TTL and RTL rely more on well-defined voltage polarities and bipolar transistor switching. DCTL itself is not used as much as it was in the past, but these families remain important: they influenced the history of audio electronics and were fundamental stepping stones to higher-quality designs.
Logical functions
A DCTL circuit provides three logical functions: AND gating, OR gating, and signal inversion (NOT gating). These functions are the building blocks from which more complex circuits are constructed. An AND gate requires two or more inputs, all of which must be true for there to be an output; if any of the inputs is 0, there is no output. An OR gate also takes two or more inputs, but unlike the AND gate, only one of the inputs needs to be true for there to be an output. The NOT gate needs only a single input and inverts it: the output is true when the input is false, and false when the input is true. With these three gates, many other logical functions can be built, making the possibilities effectively endless.
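As a purely logic-level illustration of these three primitives (not a simulation of any transistor circuit), the following sketch prints the truth tables of AND, OR and NOT for two binary inputs.

```c
/* Sketch: truth tables for the three DCTL logic primitives (AND, OR, NOT),
 * modelled only at the logic level. */
#include <stdio.h>

int main(void)
{
    printf("A B | AND OR | NOT A\n");
    for (int a = 0; a <= 1; a++) {
        for (int b = 0; b <= 1; b++) {
            int and_out = a && b;   /* 1 only if every input is 1        */
            int or_out  = a || b;   /* 1 if at least one input is 1      */
            int not_a   = !a;       /* single-input inversion of A       */
            printf("%d %d |  %d   %d |   %d\n", a, b, and_out, or_out, not_a);
        }
    }
    return 0;
}
```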
Other functions
A DCLT is known for doing three functions:
Inverters
Series gating
Parallel gating
Each of these configurations keeps the output voltage swing low so that it does not adversely affect the other circuits in the device.
Inverters, also known as NOT gates, can be cascaded through collector resistors. Whether the next DCTL stage turns on depends on the saturation voltage VCE(SAT) presented by the previous stage: if that output voltage is below the next transistor's base–emitter turn-on voltage VBE(ON), the next gate will not turn on. To keep only the intended stages conducting, the VCE(SAT) of the driving transistor therefore needs to be smaller than the VBE(ON) of the following transistor, depending on the desired function.
Series gating behaves a little differently. If even one of the series transistors is off, the output node D is pulled up towards the supply voltage (VCC). The voltage actually seen at D by the next stage then depends entirely on the base–emitter turn-on voltage VBE(ON) of the next transistor. If all the series transistors are on, D sits close to ground, which can cause complications unless the following transistor is held completely off.
Parallel gating uses several transistors (for example three) with individual inputs, rather than a single input as in the other configurations. If any input exceeds the base–emitter turn-on voltage VBE(ON), current flows through the shared load resistor, causing the output voltage to be low.
Disadvantages of using DCTL
Current hogging
Noise problem
One of the main disadvantages of DCTL is current hogging, which arises when two or more input transistors are driven in parallel from the same output. Because no two transistors have exactly the same base–emitter voltage, one of them tends to conduct most of the drive current, which can cause it to overheat and eventually fail. Designers therefore look for closely matched transistors with low base–emitter voltage spread, something DCTL is known to require, but the phenomenon can still occur.
The noise problem relates to voltage noise. It is particularly serious because DCTL circuits operate at high speed and low signal levels, making them very sensitive to disturbances. With several transistors sharing a node, a noise pulse can turn on unwanted transistors, and noise picked up by connecting leads can likewise cause the device to malfunction.
Advantages of using DCTL
Simple circuit
Does not require much power to work
Does not take up too much space
Helps limit voltage output
These advantages have enabled many successful designs. Because DCTL occupies little space and consumes little power, it is convenient to use, and by limiting the voltage swing presented to other transistors it leads to fewer problems in the overall machine.
References
Digital electronics
Logic families | Direct-coupled transistor logic | [
"Engineering"
] | 1,308 | [
"Electronic engineering",
"Digital electronics"
] |
16,426,004 | https://en.wikipedia.org/wiki/Agile%20leadership | Rooted in agile software development, where it initially referred to leading self-organizing development teams (Appelo, 2011), the concept of agile leadership is now used more generally to denote an approach to people and team leadership that is focused on boosting adaptiveness in highly dynamic and complex business environments (Hayward, 2018; Koning, 2020; Solga, 2021).
History
There are many perspectives on the origins of agile leadership, some of which align with the advent of the Agile Software Development manifesto. With the rise of Agile software development, organizations discovered the need for a new leadership approach. The relentless advancement of technology has introduced an ever-growing amount of VUCA (volatility, uncertainty, complexity and ambiguity). As complexity grows, organizations need to be able to respond quickly and to make decisions in ambiguous environments with increasing uncertainty. Traditional management is often seen as too slow in organizations engaged in these markets. Like transformational leadership, agile leadership practices promote enabling individuals and teams through the mandate and freedom to make their own decisions. Through realignment of accountability and decision-making, teams are able to respond more quickly to change and complexity. This technology-driven evolution of leadership approaches looks to the leader to create the right context and environment for self-managing teams. See Workers' self-management.
The framework for business agility has also created a set of Agile Leadership principles.
The agile leadership approach fits well in today's technology-focused culture, providing autonomy to employees while encouraging growth and experimentation to address the unknown needs of the future. By enabling individuals and teams to create clarity on the objectives or desired outcomes and to discover the best ways to achieve them, agile leadership looks to address the near-constant complexity and change intrinsic to organizations. Building from the origins of the Agile Software Development Manifesto, agile leadership practices also fit the emphasis on customer focus, or customer-centricity, in meeting market needs.
Leading self-organizing teams
For some authors, the essence of agile leadership is creating the right environment for self-managing teams. Koning (2019), for example, defines four corresponding areas of action:
Co-create the goals – instead of giving instructions, make sure that the goals are clear, so teams know what to achieve and whether their actions are bringing them any closer to their goal.
Facilitate ownership – create an environment in which agile teams can grow and thrive. Teams cannot be forced to take ownership; leaders can only create the circumstances in which teams take ownership. This is a balancing act between stepping in and letting go: finding the sweet spot where teams have the right amount of freedom, aligned with their level of maturity.
Learn faster – being fit and ready for the future is not about being the best, it is about learning faster. Self-managing teams need fast feedback on their actions and decisions, preferably from users and customers. It is the leader's role to promote learning from experiments and failures.
Design the culture – The agile leader has to envision, design and improve the culture of the organisation.
'Enabler - disruptor' model of agile leadership
Favoring a more general approach and highlighting the leadership demands linked to digitization, Hayward (2018) describes agile leadership as simultaneously enabling and disrupting teams and the organization (a paradox, he refers to as the 'agile leadership paradox'):
Agile leader as 'enabler'
Learning agility
Clarity of direction
Empathy and trust
Empowering
Working together
Agile leader as 'disruptor'
Thoughtfully decisive
Digitally literate
Questioning the status quo
Creating new ways of thinking
Close to customer trends
'Align - empower' model of agile leadership
This framework (Solga, 2021) strives to integrate the various ideas that have been floating around the concept of agile leadership. It defines the purpose of agile leadership as enabling people and teams to meet performance expectations and customer demands in business/task environments that are charged with VUCA (volatility, uncertainty, complexity, and ambiguity) and where process knowledge (knowing how to produce desired results) is weak.
To achieve this, an agile leader needs to simultaneously foster divergence and convergence (Solga, 2021). The former involves enabling and exploiting a multitude and diversity of options and possibilities to boost adaptiveness, that is to say, promote responsiveness, flexibility, and speed to effectively deal with dynamic change and disruptive challenges (the 'empower' component). The latter involves promoting alignment with overarching goals and standards as well as across teams (the 'align' component).
Solga (2021) defines three 'alignment' practices and three 'empowerment' practices:
'Alignment' practices (ensuring convergence):
Motivate: Giving esteem, inspiration, and care to inspire emotional engagement and, with it, 'emotional alignment'
Infuse: Creating value orientation and commitment to the purpose and values of the organization ('normative alignment')
Focus: Creating a shared understanding of goals and priorities, roles, processes, and crucial boundary conditions within teams and across the organization ('task alignment')
'Empowerment' practices (enabling and exploiting divergence):
Facilitate: Providing resources, removing obstacles, enabling self-organization, and giving decision-making discretion ('structural empowerment')
Coach: Enabling people and teams to (co-) operate effectively in 'structurally empowered' task environments ('competency-focused empowerment')
Innovate: Enabling an explorative or iterative approach to problem solving and task delivery (i.e., spiraling between experimentation and reflection, prototyping and feedback); also, promoting a constructive approach to handling tension (understanding frictions and conflicts as 'drivers of development'); since all this is about expanding and testing options to reach improvements and novel solutions, its focus is on 'innovation empowerment'
See also
Agility
VUCA
Digitization
Leadership
References
Organizational behavior
Leadership | Agile leadership | [
"Biology"
] | 1,186 | [
"Behavior",
"Organizational behavior",
"Human behavior"
] |
16,426,096 | https://en.wikipedia.org/wiki/Chevalley%E2%80%93Shephard%E2%80%93Todd%20theorem | In mathematics, the Chevalley–Shephard–Todd theorem in invariant theory of finite groups states that the ring of invariants of a finite group acting on a complex vector space is a polynomial ring if and only if the group is generated by pseudoreflections. In the case of subgroups of the complex general linear group the theorem was first proved by G. C. Shephard and J. A. Todd (1954), who gave a case-by-case proof. Claude Chevalley (1955) soon afterwards gave a uniform proof. It has been extended to finite linear groups over an arbitrary field in the non-modular case by Jean-Pierre Serre.
Statement of the theorem
Let V be a finite-dimensional vector space over a field K and let G be a finite subgroup of the general linear group GL(V). An element s of GL(V) is called a pseudoreflection if it fixes a codimension 1 subspace of V and is not the identity transformation I, or equivalently, if the kernel Ker (s − I) has codimension one in V. Assume that the order of G is relatively prime to the characteristic of K (the so-called non-modular case). Then the following properties are equivalent:
(A) The group G is generated by pseudoreflections.
(B) The algebra of invariants K[V]G is a (free) polynomial algebra.
(B′) The algebra of invariants K[V]G is a regular ring.
(C) The algebra K[V] is a free module over K[V]G.
(C′) The algebra K[V] is a projective module over K[V]G.
In the case when the field K is the field C of complex numbers, the first condition is usually stated as "G is a complex reflection group". Shephard and Todd derived a full classification of such groups.
Examples
Let V be one-dimensional. Then any finite group faithfully acting on V is a subgroup of the multiplicative group of the field K, and hence a cyclic group. It follows that G consists of roots of unity of order dividing n, where n is its order, so G is generated by pseudoreflections. In this case, K[V] = K[x] is the polynomial ring in one variable and the algebra of invariants of G is the subalgebra generated by xn, hence it is a polynomial algebra.
Let V = Kn be the standard n-dimensional vector space and G be the symmetric group Sn acting by permutations of the elements of the standard basis. The symmetric group is generated by transpositions (ij), which act by reflections on V. On the other hand, by the main theorem of symmetric functions, the algebra of invariants is the polynomial algebra generated by the elementary symmetric functions e1, ... en.
Let V = K2 and G be the cyclic group of order 2 acting by ±I. In this case, G is not generated by pseudoreflections, since the nonidentity element s of G acts without fixed points, so that dim Ker (s − I) = 0. On the other hand, the algebra of invariants is the subalgebra of K[V] = K[x, y] generated by the homogeneous elements x2, xy, and y2 of degree 2. This subalgebra is not a polynomial algebra because of the relation x2y2 = (xy)2.
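The structure of this last invariant ring can be summarized by a standard presentation; u, v and w below are names introduced here for the three generators.

```latex
% Invariant ring of G = {I, -I} acting on V = K^2:
% generated by u = x^2, v = xy, w = y^2, subject to the single relation uw = v^2,
% so it is a quotient of a polynomial ring rather than a polynomial ring itself.
K[V]^G = K[x^2,\, xy,\, y^2] \cong K[u, v, w]/(uw - v^2)
```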
Generalizations
An extension of the Chevalley–Shephard–Todd theorem to positive characteristic has also been given.
There has been much work on the question of when a reductive algebraic group acting on a vector space has a polynomial ring of invariants. In the case when the algebraic group is simple, all cases in which the invariant ring is polynomial have been classified.
In general, the ring of invariants of a finite group acting linearly on a complex vector space is Cohen-Macaulay, so it is a finite rank free module over a polynomial subring.
Notes
References
Invariant theory
Theorems about finite groups | Chevalley–Shephard–Todd theorem | [
"Physics"
] | 842 | [
"Invariant theory",
"Group actions",
"Symmetry"
] |
16,426,540 | https://en.wikipedia.org/wiki/1916%20Boreas | 1916 Boreas, provisional designation , is an eccentric, stony asteroid and near-Earth object of the Amor group, approximately 3 kilometers in diameter. After its discovery in 1953, it became a lost asteroid until 1974. It was named after Boreas from Greek mythology.
Discovery
Boreas was discovered on 1 September 1953 by Belgian astronomer Sylvain Arend at the Royal Observatory of Belgium in Uccle. The asteroid was observed for two months and then became a lost asteroid. It was recovered in 1974 by Richard Eugene McCrosky, G. Schwartz and J. H. Bulger, based on a predicted position by Brian G. Marsden.
Orbit and classification
Boreas orbits the Sun at a distance of 1.3–3.3 AU once every 3 years and 5 months (1,251 days). Its orbit has an eccentricity of 0.45 and an inclination of 13° with respect to the ecliptic.
The near-Earth asteroid has an Earth minimum orbit intersection distance of about 0.25 AU (roughly 37.7 million km), which corresponds to 98.2 lunar distances. Its observation arc begins with its official discovery observation at Uccle in 1953.
Physical characteristics
On the Tholen and SMASS taxonomic scheme, Boreas is classified as a common S-type asteroid with a stony composition. It has also been characterized as a Sw-subtype.
Several rotational lightcurves gave a rotation period between 3.4741 and 3.49 hours with a brightness variation between 0.25 and 0.35 magnitude.
In 1994, astronomer Tom Gehrels estimated Boreas to measure 3.5 kilometers in diameter, based on an assumed albedo of 0.15. The Collaborative Asteroid Lightcurve Link assumes a standard albedo for stony asteroids of 0.20 and calculates a diameter of 3.07 kilometers with an absolute magnitude of 14.93.
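The quoted 3.07 km follows from the standard conversion between absolute magnitude H, geometric albedo p and diameter, D = (1329 km/√p)·10^(−H/5); the short sketch below reproduces that arithmetic with the values given above.

```c
/* Sketch: asteroid diameter from absolute magnitude H and geometric albedo p,
 * using the standard relation D = (1329 km / sqrt(p)) * 10^(-H/5). */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double H = 14.93;   /* absolute magnitude (value quoted above)            */
    double p = 0.20;    /* assumed geometric albedo for stony asteroids       */

    double D = (1329.0 / sqrt(p)) * pow(10.0, -H / 5.0);

    printf("Diameter: %.2f km\n", D);   /* about 3.07 km */
    return 0;
}
```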
Naming
This minor planet is named after the Greek god of the north wind, Boreas, as the asteroid was discovered moving rapidly northward after passing the ascending node of its orbit. The official naming citation was published by the Minor Planet Center on 8 April 1982.
Notes
References
External links
Asteroid Lightcurve Database (LCDB), query form (info )
Dictionary of Minor Planet Names, Google books
Asteroids and comets rotation curves, CdR – Observatoire de Genève, Raoul Behrend
Discoveries by Sylvain Arend
Named minor planets
Recovered astronomical objects | 1916 Boreas | [
"Astronomy"
] | 501 | [
"Recovered astronomical objects",
"Astronomical objects"
] |
9,646,204 | https://en.wikipedia.org/wiki/Master%20of%20Business%20Informatics | Master of Business Informatics (MBI) is a postgraduate degree in Business Informatics (BI). BI programs combine information technology (IT) and management courses and are common in central Europe.
The first master programs in Business Informatics were offered by the University of Rostock, as a face-to-face program, and by the Virtual Global University (VGU) together with the European University Viadrina Frankfurt (Oder) as an online program (see virtual education).
An MBI programme, which includes inter-cultural studies affecting business operations in European markets, was first offered by Dublin City University. Within the Bologna process, many Central European universities have been, or are in the process of, setting up master programmes in Business Informatics. Due to legal frameworks and restrictions, however, most of these programs are forced to award an M.Sc. degree instead of an MBI degree.
Example
A typical MBI program is the VGU's "International Master of Business Informatics" program in Germany. Since this program was set up and accredited in accordance with nationwide guidelines for content and structure, it reflects well the state-of-the-art of Business Informatics master programs. If studied full-time, the MBI program is a four-semester program and can be composed of courses from the following areas of study.
Another typical MBI program is the MIAGE (Méthodes Informatiques Appliquées à la Gestion des Entreprises) program in France, present in more than 20 universities (MIAGE Toulouse, MIAGE Nancy, MIAGE Paris Ouest Nanterre La Défense, MIAGE Dauphine, MIAGE Aix-Marseille, MIAGE Grenoble Alpes, MIAGE Sorbonne, MIAGE Lille, MIAGE Rennes, MIAGE Bordeaux).
Some MBI programs are organized in tracks or profiles, guiding the students in the design of their master study plan. This is the case of the "Master in Business Informatics" in Utrecht University, which allows students to specialize in one of the following four career areas: business consultant (provides business advice from an ICT perspective), IT consultant (provides ICT advice from a business perspective), entrepreneur (independent entrepreneur who develops ICT products), and IT researcher (continuing with a PhD or targeting a research and development department in a software company).
Typical areas of study
Basic Technology:
Courses may include topics like applied computer science, computer networks and Internet technology, website engineering, programming, or information security.
Business Informatics Methods:
Courses may focus on information systems development, database management, information systems architectures, business intelligence, or business process modelling.
Management:
Management oriented topics may be studied in courses on management information systems, information management, project management, management control, knowledge management, management and organization of IT departments, or software engineering management.
Applications:
Important application domains of Business Informatics may be investigated in courses like enterprise resource planning, e-commerce and e-business networking, industrial information systems, or electronic finance/electronic banking.
Typical positions for MBI graduates
Graduates in Business Informatics can fill positions like information manager, systems analyst, systems designer, project manager, business solutions developer, IT entrepreneur, IS specialist, consultant in areas like enterprise resource planning, supply chain management, customer relationship management, or knowledge management.
References
See also
Virtual Global University
Virtual education
Virtual University
Business Informatics
Business Administration, Master
Business qualifications
Information systems
Information technology management | Master of Business Informatics | [
"Technology"
] | 699 | [
"Information systems",
"Information technology",
"Information technology management"
] |
9,646,468 | https://en.wikipedia.org/wiki/Disodium%20hydrogen%20phosphite | Disodium hydrogen phosphite is the name for inorganic compounds with the formula Na2HPO3•(H2O)x. The commonly encountered salt is the pentahydrate. A derivative of phosphorous acid (HP(O)(OH)2), it contains the anion HPO32−. Its common name suggests that it contains an acidic hydrogen atom, as in sodium hydrogen carbonate. However, this name is misleading as the hydrogen atom is not acidic, being bonded to phosphorus rather than oxygen. The salt has reducing properties. It is a white or colorless solid, and is little studied.
References
Phosphites
Inorganic phosphorus compounds
Sodium compounds | Disodium hydrogen phosphite | [
"Chemistry"
] | 142 | [
"Inorganic phosphorus compounds",
"Inorganic compounds",
"Inorganic compound stubs"
] |
9,646,527 | https://en.wikipedia.org/wiki/Disodium%20phosphate | Disodium phosphate (DSP), or disodium hydrogen phosphate, or sodium phosphate dibasic, is an inorganic compound with the chemical formula Na2HPO4. It is one of several sodium phosphates. The salt is known in anhydrous form as well as hydrates Na2HPO4·nH2O, where n is 2, 7, 8, and 12. All are water-soluble white powders. The anhydrous salt is hygroscopic.
The pH of a disodium hydrogen phosphate water solution is between 8.0 and 11.0, meaning it is moderately basic:
HPO42− + H2O ⇌ H2PO4− + OH−
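As a rough cross-check of that pH range, the pH of a solution of an amphiprotic salt such as Na2HPO4 can be estimated as the mean of the two neighbouring pKa values of phosphoric acid; the sketch below uses textbook pKa values, which should be treated as assumptions.

```c
/* Sketch: estimate the pH of a Na2HPO4 solution with the amphiprotic-salt
 * approximation pH ~ (pKa2 + pKa3) / 2, using assumed textbook pKa values. */
#include <stdio.h>

int main(void)
{
    double pKa2 = 7.2;    /* H2PO4- / HPO4^2- couple (assumed textbook value) */
    double pKa3 = 12.3;   /* HPO4^2- / PO4^3- couple (assumed textbook value) */

    double pH = (pKa2 + pKa3) / 2.0;

    printf("Estimated pH: %.1f\n", pH);   /* ~9.8, within the 8.0-11.0 range above */
    return 0;
}
```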
Production and reactions
It can be generated by neutralization of phosphoric acid with sodium hydroxide:
H3PO4 + 2 NaOH → Na2HPO4 + 2 H2O
Industrially, it is prepared in a two-step process by treating dicalcium phosphate with sodium bisulfate, which precipitates calcium sulfate:
CaHPO4 + NaHSO4 → NaH2PO4 + CaSO4
In the second step, the resulting solution of monosodium phosphate is partially neutralized:
NaH2PO4 + NaOH → Na2HPO4 + H2O
Uses
It is used in conjunction with trisodium phosphate in foods and water softening treatment. In foods, it is used to adjust pH. Its presence prevents coagulation in the preparation of condensed milk. Similarly, it is used as an anti-caking additive in powdered products. It is used in desserts and puddings, e.g. Cream of Wheat to quicken cook time, and Jell-O Instant Pudding for thickening. In water treatment, it retards calcium scale formation. It is also found in some detergents and cleaning agents.
Heating solid disodium phosphate gives the useful compound tetrasodium pyrophosphate:
2 Na2HPO4 → Na4P2O7 + H2O
Laxative
Monobasic and dibasic sodium phosphate are used as a saline laxative to treat constipation or to clean the bowel before a colonoscopy.
References
External links
solubility in Prophylaxis alcohol
Sodium compounds
Phosphates
Edible thickening agents | Disodium phosphate | [
"Chemistry"
] | 385 | [
"Phosphates",
"Salts"
] |
9,646,634 | https://en.wikipedia.org/wiki/Eco-Runner%20Team%20Delft | Eco-Runner Team Delft is a multidisciplinary student team from Delft University of Technology (TU Delft) focused on developing efficient cars powered by sustainable fuels. Every year, a new car is produced to inspire the world towards sustainable mobility.
At the end of each year, the team aims to achieve a specific year goal. These have included competing in the Shell Eco-marathon, a competition the team won in 2015 and 2022. Until 2019, Eco-Runner participated in the Prototype class of the competition but transitioned to the Urban Concept class in 2020.
Other year goals have taken the form of world record attempts, the most recent success being in June 2023. The Eco-Runner XIII set a Guinness World Record by driving 2488.5 km on 950 grams of hydrogen.
In 2024, the team produced a street-legal hydrogen-powered car, making it the first of their vehicles authorized to drive on the public road.
In 2025 the team plans to develop a car powered by an externally fired gas turbine, which allows them to test alternative sustainable fuels in the future.
Origins & Organization
Eco-Runner Team Delft was founded by 11 students (7 from Belgium and 4 from the Netherlands) in the Netherlands in November 2005. They were second-year students of the Faculty of Aerospace Engineering, Delft University of Technology. The team's goal was to compete in the Shell Eco-marathon Prototype class competition at the Rockingham Motor Speedway in the UK in July 2006. The team built the Eco-Runner 1, an efficient petrol vehicle. The result of the second project was the Eco-Runner H2, which was the first hydrogen-powered vehicle built by the team. Subsequently, all Eco-Runners from H2 to XIV were hydrogen-powered.
Currently, the team consists of 26 TU Delft students from various faculties. The team consists of both full-timers and part-timers across 5 departments: Management, Operations, Bodywork, Powertrain & Electronics, and Vehicle Dynamics. The team's offices and workshops are located in the D:DREAM Hall on the campus of the Delft University of Technology.
Hereunder, an overview of the different departments and the current (XV 2024/25) team members is included.
Teams across the years
Eco-Runner 1
The first Eco-Runner, the Eco-Runner 1, was built with limited time and resources, and it lacked a functional fuel injection system. Despite this, it achieved the team's goal of driving 500 km on 1 liter of petrol. The car performed at 557 kilometers per liter, and this encouraged the team to build a new Eco-Runner and participate again with a new goal: to drive at 2000 kilometers per liter and land a top-5 spot in the Shell Eco-marathon at the Rockingham speedway.
Eco-Runner H2
The second Eco-Runner was named the Eco-Runner H2. Its main improvement with respect to the Eco-Runner 1 was its completely integrated design. This improved the aerodynamic shape and reduced the weight of the vehicle. Furthermore, the team developed two propulsion methods for the Eco-Runner H2, resulting in two physical Eco-Runners H2.
The first propulsion method comprised a fuel cell driving an electric motor. The second propulsion method was a six-stroke petrol combustion engine, which works similarly to a four-stroke engine, with the additional injection of a drop of water after the fourth stroke. The extreme heat remaining in the cylinder head causes the water to expand rapidly, resulting in a "free" working stroke. The disadvantage of this engine was a high level of corrosion, resulting from the combination of water, high temperature and high pressure.
The Eco-Runner H2 participated in the 2007 edition of the Shell Eco-Marathon, where it achieved the Dutch fuel efficiency record of 2282 km/L of petrol using the fuel cell set-up (the hydrogen consumption of the fuel cell is monitored carefully by race officials and then converted to the equivalent of a liter of Shell 95 standard fuel using specific combustion heat of both substances). This was despite a hastily repaired and therefore very poorly functioning cruise control system, a feature essential for keeping all of the components working at maximum efficiency.
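The conversion itself is straightforward arithmetic on heating values. The sketch below illustrates the idea; the heating values and petrol density used are rounded textbook figures and not the official reference constants used by race officials.

```python
# Rough sketch: convert measured hydrogen consumption into a
# "km per litre of petrol equivalent" figure via heating values.
# The constants below are rounded textbook values, NOT the official
# Shell Eco-marathon reference numbers.

LHV_H2_MJ_PER_KG = 120.0        # lower heating value of hydrogen (approx.)
LHV_PETROL_MJ_PER_KG = 43.0     # lower heating value of petrol (approx.)
PETROL_DENSITY_KG_PER_L = 0.75  # approximate density of petrol

def km_per_litre_equivalent(distance_km: float, hydrogen_used_kg: float) -> float:
    """Distance per litre of petrol carrying the same energy as the hydrogen used."""
    energy_mj = hydrogen_used_kg * LHV_H2_MJ_PER_KG
    petrol_litres = energy_mj / (LHV_PETROL_MJ_PER_KG * PETROL_DENSITY_KG_PER_L)
    return distance_km / petrol_litres

# Example: a run of 10 km on 1 gram of hydrogen
print(round(km_per_litre_equivalent(10.0, 0.001)))  # ~2688 km/L with these constants
```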
Eco-Runner 3
The third generation of the Eco-Runner participated in the 2011 Shell Eco-Marathon and placed 2nd. Major improvements were achieved regarding aerodynamics, fuel cell efficiency, and weight reduction. Virtually no components of the Eco-Runner 3 were off-the-shelf: 95% of all components were either designed in-house or modified to suit the team's specific needs.
Eco-Runner 4
The fourth generation of the Eco-Runner participated in the 2014 Shell Eco-marathon in Rotterdam and placed 2nd again. This year marked specification changes in the Prototype class: the car now competed as a hydrogen class vehicle. A conversion was used from the energy in hydrogen to the equivalent energy in gasoline (liters), which allowed the vehicle to achieve a significant result of 3524 km/L. The weight of the Eco-Runner 4 was around 38 kg and the driver's weight was 50 kg. During the race, the average force generated was close to 4 newtons, and a nominal power of 35 watts was achieved.
Eco-Runner 5
The fifth generation of the Eco-Runner participated in the 2015 Shell Eco-marathon in Rotterdam and came 1st. The car was a one-person prototype powered by a hydrogen fuel cell, competing as a category within the hydrogen class. The major specifications of the Eco-Runner 4 were carried over due to its highly aerodynamic design. Once again, the energy conversion from hydrogen to gasoline (liters) allowed the Eco-Runner 5 to achieve a significant result of 3653 km/L. Furthermore, the use of a front-wheel steering mechanism enabled the vehicle to achieve a turning radius of just 8 meters.
Eco-Runner 6
The Eco-Runner 6 participated in the 2016 Shell Eco-Marathon in London. Its aerodynamic shape was designed in such a way that extensive computational fluid dynamics analysis could be carried out for racing conditions. The application of zigzag strips was used to delay flow separation, which ultimately improved race performance. A set of supercapacitors acted as a buffer between the fuel cell and the motor, while also providing the necessary support for the inclines on the London track. This setup allowed for the rapid distribution of power to the motor, enabling the fuel cell to operate at its optimal efficiency continuously.
Eco-Runner 7
The Eco-Runner 7 was built in 9 months to participate in the 2017 Shell Eco-marathon in London. The track in London is very dynamic, consisting of a hill and sharp corners, making it challenging to drive efficiently. The Eco-Runner 7 was therefore custom-made for the London track, and a new concept for the entire powertrain system had to be designed. The powertrain system was altered by implementing an external electric motor connected to the wheel with a chain transmission. In addition, the fuel cell was improved by reducing its weight and increasing its efficiency.
Eco-Runner 8
Eco-Runner 8 placed 3rd in the Shell Eco-marathon, which took place in London. The team also won the “Vehicle Design” prototype award, partially due to its impressive “neural network” which allowed them to effectively communicate with the driver. The hull of the Eco-Runner 8 was designed by means of wind tunnel tests. The vehicle had the electromotor placed in front of the rear wheel so that the vehicle was powered by means of a chain transmission. However, this chain resulted in significant energy losses during the race, prompting the next team to implement an in-wheel motor.
Eco-Runner 9
In comparison to its predecessors, the Eco-Runner 9 reached a milestone in innovation. The team improved the in-depth neural network to provide the driver with real-time information to optimize the race strategy, made changes to the composite thickness enhancing weight reduction, and placed an in-wheel motor to increase efficiency by changing from chain transmission to a direct transmission. The vehicle weighed only 42 kg, a 19% weight reduction compared to the Eco-Runner 8, and was able to claim third place in the Shell Eco-marathon Europe. Additionally, they again received the Vehicle Design Award.
Eco-Runner X
The Eco-Runner X was the first to participate in the Urban Concept class of the Shell Eco-marathon. For this a completely new design was developed, including headlights, windshield wipers and a luggage compartment, approaching the design of a city car. With this new design the Eco-Runner X team once again won the Vehicle Design Award of the Shell Eco-marathon Off-track competition. The On-Track competitions of the Shell Eco-marathon were cancelled due to COVID-19, so the team organized their own on-track competition with HAN Hydromotive and won this race with an efficiency of 2500 km/L hydrogen.
Eco-Runner XI
The Eco-Runner XI was the second iteration to participate in the Urban Concept class of the Shell Eco-marathon. A teardrop profile of the body accompanied with continuous wheel caps and a curved bottom plane minimised the turbulence and maximised performance. This resulted in an overall decrease of 48% in aerodynamic drag compared to the previous car. A smart cruise control system was implemented with newly designed aluminium rims and an in-wheel motor, improving overall efficiency to 3396 km/kg hydrogen. The car won the Hydrogen Efficiency Challenge, a competition organized by the team itself in collaboration with HAN Hydromotive and Green Team Twente, because the On-track competition had once again been cancelled. Additionally, the Eco-Runner XI broke the world record for longest distance travelled in a hydrogen vehicle by driving non-stop for 36 hours on one tank (450 gr) of hydrogen, reaching a record distance of 1195.74 km.
Eco-Runner XII
The Eco-Runner XII was the third Urban Concept car of Eco-Runner Team Delft, but the first to participate in this class at the on-track competition of the Shell Eco-marathon in 2022 at TT Circuit Assen. It finished in 1st place, performing with an efficiency of 468 km/m^3, equivalent to 5407 km/kg. That year, the team had improved various aspects of the car design, including an optimized race strategy, a new fuel cell, an in-wheel electric motor, and significant weight reduction. Specifically, 41% weight reduction was achieved with various strategies. These included making the entire body of the car load-carrying, similar to the concept of a Formula 1 car, and replacing many aluminum parts with carbon fiber, a significantly lighter material.
Eco-Runner XIII
The Eco-Runner XIII was the fourth Urban Concept car of Eco-Runner Team Delft, and set a new Guinness World Record by driving 2488.5 km on 950 grams of hydrogen.
Eco-Runner XIV
The Eco-Runner XIV was designed, for the first time, to be a street-legal hydrogen car, so the car had more than 1300 new regulations to adhere to, some of which were formulated with the help of the RDW. The vehicle class of the car was L7EA2, and over 3 days it drove 2056 km on 1.45 kg of hydrogen, following the route of the Elfstedentocht, an annual Dutch ice-skating tour.
Shell Eco-marathon results
Prototype class results
The results of the Shell Eco-marathon Prototype class competition are obtained by measuring the amount of hydrogen used per km and converting it to petrol (liters).
Urban Concept class results
The results of the Urban Concept class are presented in the following table.
*There was no on-track SEM competition due to COVID-19. 1st in the off-track Vehicle Design Awards and at on-track competition against HAN Hydromotive.
**There was no on-track SEM competition due to COVID-19. 1st in the Hydrogen Efficiency Challenge, an on-track competition against Green Team Twente & HAN Hydromotive.
External links
Shell Eco-marathon challengers
Delft University of Technology
Sustainable transport
Concept cars
Vehicle technology | Eco-Runner Team Delft | [
"Physics",
"Engineering"
] | 2,469 | [
"Physical systems",
"Transport",
"Sustainable transport",
"Vehicle technology",
"Mechanical engineering by discipline"
] |
9,646,648 | https://en.wikipedia.org/wiki/Chlorine%20bombings%20in%20Iraq | Chlorine bombings in Iraq began as early as October 2004, when insurgents in Al Anbar province started using chlorine gas in conjunction with conventional vehicle-borne explosive devices.
The earliest chlorine attacks in Iraq were described as poorly executed, probably because much of the chemical agent was rendered nontoxic by the heat of the accompanying explosives. Subsequent, more refined attacks resulted in hundreds of injuries, but have proven not to be a viable means of inflicting massive loss of life. Their primary impact has therefore been to cause widespread panic, with large numbers of civilians suffering non-life-threatening, but nonetheless highly traumatic, injuries.
Chlorine was used as a poison gas in World War I, but was delivered by artillery shell, unlike the modern stationary or car bombs. Still, its function as a weapon in both instances is similar. Low-level exposure results in burning sensations to the eyes, nose and throat, usually accompanied by dizziness, nausea and vomiting. Higher levels of exposure can cause fatal lung damage. Because the gas is heavier than air, it lingers near the ground and will not dissipate until well after an explosion; even so, it is generally considered ineffective as an improvised chemical weapon.
Western media linking chlorine attacks to 'al Qaeda'
In February 2007, a U.S. military spokesman said that ‘al Qaeda propaganda material’ had been found at a factory for chlorine chemical weapons in Karma, east of Fallujah, which led press agency Reuters to the conclusion that the “chlorine bomb factory was al Qaeda's”.
Attacks
October 21, 2006: A car bomb carrying 12 120 mm mortar shells and two 100-pound chlorine tanks detonated, wounding three Iraqi policemen and a civilian in Ramadi.
January 28, 2007: A suicide bomber drove a dump truck carrying explosives and a chlorine tank into an emergency response unit compound in Ramadi. 16 people were killed by the explosives, but none by the chlorine.
February 19, 2007: A suicide bombing in Ramadi involving chlorine killed two Iraqi security forces and wounded 16 other people.
February 20, 2007: A bomb blew up a tanker carrying chlorine north of Baghdad, killing nine and emitting fumes that made 148 others ill, including 42 women and 52 children.
February 21, 2007: A pickup truck carrying chlorine gas cylinders exploded in Baghdad, killing at least five people and hospitalizing over 50.
March 16, 2007: Three separate suicide attacks on this day used chlorine. The first attack occurred at a checkpoint northeast of Ramadi when a truck bomb wounded one US service member and one Iraqi civilian. A second truck bomb detonated in Falluja, killing two policemen and leaving a hundred Iraqis showing signs of chlorine exposure. Forty minutes later, yet another chlorine-laden truck bomb exploded at the entrance to a housing estate south of Falluja, this time injuring 250 and according to some reports killing six.
March 28, 2007: Suicide bombers detonated a pair of truck bombs, one containing chlorine, as part of a sustained attack aimed at the Fallujah Government Center. The initial bombings along with a subsequent gun battle left 14 American forces and 57 Iraqi forces wounded.
April 6, 2007: A chlorine-laden suicide truck bomb detonated at a police checkpoint in Ramadi, leaving 27 dead. Thirty people were hospitalized with wounds from the explosion, while many more suffered breathing difficulties attributed to the chlorine gas.
April 25, 2007: A chlorine truck bomb detonated at a military checkpoint on the western outskirts of Baghdad, killing one Iraqi and wounding two others.
April 30, 2007: A tanker laden with chlorine exploded near a restaurant west of Ramadi, killing six people and wounding 10.
May 15, 2007: A chlorine bomb exploded in an open-air market in the village of Abu Sayda in Diyala province, killing 32 people and injuring 50.
May 20, 2007: A suicide truck bomber exploded his vehicle Sunday near an Iraqi police checkpoint outside Ramadi, Zangora district west of Ramadi, killing two police officers and wounding 11 others.
June 3, 2007: A car bomb exploded outside a U.S. military base in Diyala, unleashing a noxious cloud of chlorine gas that sickened at least 62 soldiers but caused no serious injuries.
See also
2007 Iraq cholera outbreak
Iraqi insurgency (2003–2011)
References
External links
Chlorine gas attacks hint at new enemy strategy, Associated Press
Concern over Iraqi chemical bombs, BBC News
U.S.: Iraq bomb factory raid nets deadly chlorine supply, CNN
War crimes in the Iraq War
Chlorine
Baghdad in the Iraq War
Fallujah in the Iraq War
Ramadi in the Iraq War
Chemical weapons attacks
Al-Qaeda activities in Iraq
Improvised explosive device bombings in Baghdad
Terrorist incidents in Baghdad in the 2000s
Terrorist incidents in Iraq in the 2000s
Mass murder in the 2000s
Chemical terrorism | Chlorine bombings in Iraq | [
"Chemistry"
] | 1,003 | [
"Chemical terrorism",
"Chemical weapons attacks",
"Chemical weapons"
] |
9,646,826 | https://en.wikipedia.org/wiki/Network%20agility | Network Agility is an architectural discipline for computer networking. It can be defined as:
The ability of network software and hardware to automatically control and configure itself and other network assets across any number of devices on a network.
With regards to network hardware, network agility is used when referring to automatic hardware configuration and reconfiguration of network devices e.g. routers, switches, SNMP devices.
Network agility, as a software discipline, borrows from many fields, both technical and commercial.
On the technical side, network agility solutions leverage techniques from areas such as:
Service-oriented architecture (SOA)
Object-oriented design
Architectural patterns
Loosely coupled data streaming (e.g.: web services)
Iterative design
Artificial intelligence
Inductive scheduling
On-demand computing
Utility computing
Commercially, network agility is about solving real-world business problems using existing technology. It forms a three-way bridge between business processes, hardware resources, and software assets. In more detail, it takes as input:
the business processes – i.e. what the network must achieve in real business terms;
the hardware that resides within the network; and
the set of software assets that run on this hardware.
Much of this input can be obtained through automatic discovery – finding the hardware, its types and locations, software, licenses etc. The business processes can be inferred to a certain degree, but it is these processes that business managers need to be able to control and organize.
Software resources discovered on the network can take a variety of forms – some assets may be licensed software products, others as blocks of software service code that can be accessed via some service enterprise portal, such as (but not necessarily) web services. These services may reside in-house, or they may be 'on-demand' via an on-line subscription service. Indeed, the primary motivation of network agility is to make the most efficient use of the resources available, wherever they may reside, and to identify areas where business process goals are not being satisfied to some benchmark level (and ideally to offer possible solutions).
Network agility tools are then in a position to optimize the existing hardware to run software assets as needed to achieve the business process goals. As network usage is never linear, the hardware/software mix requirements will change dynamically over various time segments (weekly, quarterly, annually etc.), and step changes will be required from time to time when business-process goals change/evolve/are updated (e.g. during/after a company re-organization).
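As a deliberately simplified illustration of this matching problem, the sketch below greedily assigns hypothetical business-process demand onto discovered spare host capacity and flags unmet goals; all host names, processes and figures are invented, and a real network agility tool would work from live discovery data rather than hard-coded tables.

```python
# Toy sketch of the three-way match network agility performs:
# business-process demand -> software/hardware placement.
# All hosts, processes and figures are hypothetical.

hosts = {                      # discovered hardware: spare capacity (arbitrary units)
    "srv-01": 40,
    "srv-02": 25,
    "srv-03": 60,
}
process_demand = {             # business processes and the capacity they need
    "order-entry": 50,
    "reporting": 30,
}

def plan(hosts, demand):
    """Greedy placement: put each process on the hosts with the most spare capacity."""
    spare = dict(hosts)
    placement = {}
    for process, need in sorted(demand.items(), key=lambda kv: -kv[1]):
        for host in sorted(spare, key=spare.get, reverse=True):
            take = min(need, spare[host])
            if take > 0:
                placement.setdefault(process, []).append((host, take))
                spare[host] -= take
                need -= take
            if need == 0:
                break
        if need > 0:           # business-process goal not satisfied: flag it
            placement.setdefault(process, []).append(("UNMET", need))
    return placement

print(plan(hosts, process_demand))
```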
The benefits to business of the network agility approach are obvious – cost savings in software licensing and higher efficiency of hardware assets – leading to better productivity.
See also
Service-oriented analysis and design
Object-oriented design
Design patterns
SOA governance
Business-driven development
Semantic service-oriented architecture
Enterprise service bus
Finite-state machine
Scheduling (computing)
Representational state transfer
Service component architecture
Comparison of business integration software
Service-oriented infrastructure
Enterprise application integration
Grid computing
Distributed computing
References
Erl Thomas, Service-Oriented Architecture: Concepts, Technology, and Design (Prentice Hall) 2005,
Jerome F. DiMarzio, Network Architecture and Design: A Field Guide for IT Consultants (Sams) 2001-5,
University of California, Methodology for Developing Web Design Patterns (White Paper)
Computer networking | Network agility | [
"Technology",
"Engineering"
] | 680 | [
"Computer networking",
"Computer science",
"Computer engineering"
] |
9,648,811 | https://en.wikipedia.org/wiki/Flow%20to%20HDL | Flow to HDL tools and methods convert flow-based system design into a hardware description language (HDL) such as VHDL or Verilog. Typically this is a method of creating designs for field-programmable gate array (FPGA) design, application-specific integrated circuit (ASIC) prototyping and digital signal processing (DSP) design. Flow-based system design is well-suited to field-programmable gate array design as it is easier to specify the innate parallelism of the architecture.
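The core translation step can be illustrated with a toy example: a small dataflow graph is walked and emitted as combinational Verilog text. The sketch below is only illustrative; the graph, signal width and operator set are invented and do not reflect the output format of any particular tool.

```python
# Toy illustration of a flow-to-HDL step: a small dataflow graph
# (each node = an operation plus its input signals) is emitted as
# combinational Verilog. Graph, width and operator set are invented.

flow = {                       # node name -> (operator, [input signals])
    "sum":  ("+", ["a", "b"]),
    "prod": ("*", ["sum", "c"]),
}
inputs, output, width = ["a", "b", "c"], "prod", 8

def to_verilog(name):
    ports = ", ".join(f"input [{width-1}:0] {p}" for p in inputs)
    lines = [f"module {name} ({ports}, output [{width-1}:0] {output});"]
    for node, (op, args) in flow.items():       # nodes listed in dataflow order
        expr = f" {op} ".join(args)
        if node != output:
            lines.append(f"  wire [{width-1}:0] {node};")
        lines.append(f"  assign {node} = {expr};")
    lines.append("endmodule")
    return "\n".join(lines)

print(to_verilog("flow_example"))
```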
History
The use of flow-based design tools in engineering is a reasonably new trend. Unified Modeling Language is the most widely used example for software design. The use of flow-based design tools allows for more holistic system design and faster development. C to HDL tools and flows have a similar aim, but with C or C-like programming languages.
Applications
Most applications are ones which take too long with existing supercomputer architectures. These include bioinformatics, CFD, financial processing and oil and gas survey data analysis. Embedded applications that require high performance or real-time data processing are also an area of use. System-on-a-chip design can also be done using this flow.
Examples
Xilinx System Generator from Xilinx
StarBridge VIVA from the now-defunct Star Bridge Systems
Nimbus from the now-defunct Exsedia
External links
an overview of flows by Daresbury Labs.
Xilinx's ESL initiative, some products listed and C to VHDL tools.
See also
Application Specific Integrated Circuit (ASIC)
C to HDL
Comparison of Free EDA software
Comparison of EDA Software
Complex programmable logic device (CPLD)
ELLA (programming language)
Electronic design automation (EDA)
Embedded C++
Field Programmable Gate Array (FPGA)
Hardware description language (HDL)
Handel-C
Icarus Verilog
Lustre (programming language)
MyHDL
Open source software
Register transfer notation
Register transfer level (RTL)
Ruby (hardware description language)
SpecC
SystemC
SystemVerilog
Systemverilog DPI
VHDL
VHDL-AMS
Verilog
Verilog-A
Verilog-AMS
Hardware description languages | Flow to HDL | [
"Engineering"
] | 440 | [
"Electronic engineering",
"Hardware description languages"
] |
9,649,191 | https://en.wikipedia.org/wiki/Suicide%20crisis | A suicide crisis, suicidal crisis or potential suicide is a situation in which a person is attempting to kill themselves or is seriously contemplating or planning to do so. It is considered by public safety authorities, medical practice, and emergency services to be a medical emergency, requiring immediate suicide intervention and emergency medical treatment. Suicidal presentations occur when an individual faces an emotional, physical, or social problem they feel they cannot overcome and considers suicide to be a solution. Clinicians usually attempt to re-frame suicidal crises, point out that suicide is not a solution and help the individual identify and solve or tolerate the problems.
Nature
Most cases of potential suicide have warning signs. Attempting to kill oneself, talking about or planning suicide, writing a suicide note, talking or thinking frequently about death, exhibiting a death wish by expressing it verbally or by taking potentially deadly risks, or taking steps towards attempting suicide (e.g., obtaining rope and tying it to a ligature point to attempt a hanging or stockpiling pills for an attempted overdose) are all indicators of a suicide crisis. More subtle clues include preparing for death for no apparent reason (such as putting affairs in order, changing a will, etc.), writing goodbye letters, and visiting or calling family members or friends to say farewell. The person may also start giving away previously valued items (because they "no longer need them"). In other cases, the person who seemed depressed and suicidal may become normal or filled with energy or calmness again; these people particularly need to be watched because the return to normalcy could be because they have come to terms with whatever act is next (e.g., a plan to attempt suicide and "escape" from their problems).
Depression is a major causative factor of suicide, and individuals with depression are considered a high-risk group for suicidal behavior. However, suicidal behaviour is not just restricted to patients diagnosed with some form of depression. More than 90% of all suicides are related to a mood disorder, such as bipolar disorder, depression, addiction, PTSD, or other psychiatric illnesses, such as schizophrenia. The deeper the depression, the greater the risk, often manifested in feelings or expressions of apathy, helplessness, hopelessness, or worthlessness.
Suicide is often committed in response to a cause of depression, such as the cessation of a romantic relationship, serious illness or injury (like the loss of a limb or blindness), the death of a loved one, financial problems or poverty, guilt or fear of getting caught for something the person did, drug abuse, old age, concerns with gender identity, among others.
In 2006, WHO conducted a study on suicide around the world. The results in Canada showed that 80-90% of suicide attempts (an estimation, due to the complications of predicting attempted suicide). 90% of attempted suicides investigated led to hospitalizations. 12% of attempts were in hospitals.
Treatments
Ketamine has been tested for treatment-resistant bipolar depression, major depressive disorder, and people in a suicidal crisis in emergency rooms, and is being used this way off-label. The drug is given by a single intravenous infusion at doses less than those used in anesthesia, and preliminary data have indicated it produces a rapid (within 2 hours) and relatively sustained (about 1–2 weeks long) significant reduction in symptoms in some patients. Initial studies with ketamine have sparked scientific and clinical interest due to its rapid onset, and because it appears to work by blocking NMDA receptors for glutamate, a different mechanism from most modern antidepressants that operate on other targets. Some studies have shown that lithium medication can reduce suicidal ideation within 48 hours of administration.
Intervention
Intervention is important to stop someone in a suicidal crisis from harming or killing themselves. Every sign of suicide should be taken seriously. Steps to take in order to help defuse the situation or get the person in crisis to safety include:
Stay with the person so they are not alone.
Call 988 (if in the U.S.) or another suicide hotline, or take the person to the nearest hospital facility.
Reach out to a family member or friend about what is going on.
In many countries police negotiators will be called to respond to situations where a person is at high risk of an immediate suicide crisis. However offers of help are frequently rejected in these situations, because they have not been directly sought by the person in crisis, who wants to maintain a level of independence. Supporting those in crisis to make independent decisions, and adapting terminology, for example using the phrase ‘sort (x) out’ can aid in minimising resistance to the help being offered.
If a friend or loved one is talking about suicide but is not yet in crisis, the following steps should be taken to help them get professional help and feel supported:
Call a suicide hotline number; the U.S. numbers are 988 or 800-273-8255.
Remove dangerous objects, such as guns and knives, from the home.
Offer reassurance and support.
Help the person to seek medical treatment.
See also
Depression
Suicide
Suicidal ideation
Suicide prevention
References
External links
National Suicide Prevention Lifeline (US) - 24-hour, toll-free suicide prevention service for persons in a suicide crisis.
Ligature Resistant - Preventing suicides.
Crisis
Medical emergencies
Mental health
Suicide | Suicide crisis | [
"Biology"
] | 1,092 | [
"Behavior",
"Human behavior",
"Suicide"
] |
9,649,236 | https://en.wikipedia.org/wiki/Lead%28IV%29%20sulfide | Lead(IV) sulfide is a chemical compound with the formula PbS2. This material is generated by the reaction of the more common lead(II) sulfide, PbS, with sulfur at >600 °C and at high pressures. PbS2, like the related tin(IV) sulfide SnS2, crystallises in the cadmium iodide motif, which indicates that Pb should be assigned the formal oxidation state of 4+.
Lead(IV) sulfide is a p-type semiconductor, and is also a thermoelectric material.
References
Lead(IV) compounds
Disulfides | Lead(IV) sulfide | [
"Chemistry"
] | 127 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
9,649,605 | https://en.wikipedia.org/wiki/Event%20correlation | Event correlation is a technique for making sense of a large number of events and pinpointing the few events that are really important in that mass of information. This is accomplished by looking for and analyzing relationships between events.
History
Event correlation has been used in various fields for many years:
since the 1970s, telecommunications and industrial process control;
since the 1980s, network management and systems management;
since the 1990s, IT service management, publish-subscribe systems (pub/sub), Complex Event Processing (CEP) and Security Information and Event Management (SIEM);
since the early 2000s, Distributed Event-Based Systems and Business Activity Monitoring (BAM).
Examples and application domains
Integrated management is traditionally subdivided into various fields:
layer by layer: network management, system management, service management, etc.
by management function: performance management, security management, etc.
Event correlation takes place in different components depending on the field of study:
Within the field of network management, event correlation is performed in a management platform typically known as a Network Management Station or Network Management System (NMS). For example, events may notify that a device has just rebooted or that a network link is currently down.
Within the field of systems management, an event may for instance report that the CPU utilization of an e-business server has been at 100% for over 15 minutes.
Within the field of service management, an event may notify that a Service-Level Objective is not met for a given customer, for example.
Within the field of security management, the management platform is usually known as the Security Information and Event Management (SIEM), and event correlation is often performed in a separate correlation engine. That engine may directly receive events in real time, or it may read them from SIEM storage. In this case, examples of monitored events include activity such as authentication, access to services and data, and output from point security tools such as an Intrusion Detection System (IDS) or antivirus software.
In this article, we focus on event correlation in integrated management and provide links to other fields.
Event correlation in integrated management
The goal of integrated management is to integrate the management of networks (data, telephone and multimedia), systems (servers, databases and applications) and IT services in a coherent manner. The scope of this discipline notably includes network management, systems management and Service-Level Management.
Events and event correlator
Event correlation usually takes place inside one or several management platforms. It is implemented by a piece of software known as the event correlator. This component is automatically fed with events originating from managed elements (applications, devices), monitoring tools, the Trouble Ticket System, etc. Each event captures something special (from the event source standpoint) that happened in the domain of interest to the event correlator, which will vary depending upon the type of analysis the correlator is attempting to perform.
The event correlator plays a key role in integrated management, for only within it do events from many disparate sources come together and allow for comparison across sources. For instance, this is where the failure of a service can be ascribed to a specific failure in the underlying IT infrastructure, or where the root cause of a potential security attack can be identified.
Most event correlators can receive events from trouble ticket systems. However, only some of them are able to notify trouble ticket systems when a problem is solved, which partly explains the difficulty for Service Desks to keep updated with the latest news. In theory, the integration of management in organizations requires the communication between the event correlator and the trouble ticket system to work both ways.
An event may convey an alarm or report an incident (which explains why event correlation used to be called alarm correlation), but not necessarily. It may also report that a situation goes back to normal, or simply send some information that it deems relevant (e.g., policy P has been updated on device D). The severity of the event is an indication given by the event source to the event destination of the priority that this event should be given while being processed.
Step-by-step decomposition
Event correlation can be decomposed into four steps: event filtering, event aggregation, event masking and root cause analysis. A fifth step (action triggering) is often associated with event correlation and therefore briefly mentioned here.
Event filtering
Event filtering consists in discarding events that are deemed to be irrelevant by the event correlator. For instance, a number of bottom-of-the-range devices are difficult to configure and occasionally send events of no interest to the management platform (e.g., printer P needs A4 paper in tray 1). Another example is the filtering of informational or debugging events by an event correlator that is only interested in availability and faults.
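A minimal sketch of such a filter is shown below; the event fields and the discard rule are hypothetical and would, in practice, come from the correlator's configuration.

```python
# Minimal event-filtering sketch: drop events the correlator does not care about.
# Event fields and filtering rules are hypothetical examples.

events = [
    {"source": "printer-7", "severity": "info",     "msg": "tray 1 needs A4 paper"},
    {"source": "router-3",  "severity": "critical", "msg": "link down on eth0"},
    {"source": "server-9",  "severity": "debug",    "msg": "cache warmed"},
]

IGNORED_SEVERITIES = {"debug", "info"}   # correlator only cares about availability/faults

def event_filter(stream):
    for event in stream:
        if event["severity"] in IGNORED_SEVERITIES:
            continue                     # discard irrelevant events
        yield event

print(list(event_filter(events)))        # only the router "link down" event survives
```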
Event aggregation
Event aggregation is a technique where multiple events that are very similar (but not necessarily identical) are combined into an aggregate that represents the underlying event data. Its main objective is to summarize a collection of input events into a smaller collection that can be processed using various analytics methods. For example, the aggregate may provide statistical summaries of the underlying events and the resources that are affected by those events. Another example is temporal aggregation, when the same problem is reported over and over again by the event source, until the problem is finally solved.
Event de-duplication is a special type of event aggregation that consists in merging exact duplicates of the same event. Such duplicates may be caused by network instability (e.g., the same event is sent twice by the event source because the first instance was not acknowledged sufficiently quickly, but both instances eventually reach the event destination).
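The sketch below illustrates de-duplication combined with simple temporal aggregation: repeated reports of the same problem are merged into one aggregate that carries a count and a time window. The grouping key and event shape are hypothetical examples.

```python
# Sketch of event aggregation / de-duplication: repeated reports of the same
# problem are merged into a single aggregate carrying a count and a time window.
# The (source, msg) grouping key and the event shape are hypothetical.

def aggregate(events):
    groups = {}
    for e in events:
        key = (e["source"], e["msg"])                 # "same problem" criterion
        g = groups.setdefault(key, {"source": e["source"], "msg": e["msg"],
                                     "count": 0, "first": e["t"], "last": e["t"]})
        g["count"] += 1
        g["last"] = max(g["last"], e["t"])
    return list(groups.values())

events = [
    {"source": "router-3", "msg": "link down on eth0", "t": 100},
    {"source": "router-3", "msg": "link down on eth0", "t": 160},  # duplicate report
    {"source": "server-9", "msg": "disk 90% full",     "t": 170},
]
print(aggregate(events))   # two aggregates; the link-down one has count == 2
```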
Event masking
Event masking (also known as topological masking in network management) consists of ignoring events pertaining to systems that are downstream of a failed system. For example, servers that are downstream of a crashed router will fail availability polling.
Root cause analysis
Root cause analysis is the last and most complex step of event correlation. It consists of analyzing dependencies between events, based for instance on a model of the environment and dependency graphs, to detect whether some events can be explained by others. For example, if database D runs on server S and this server gets durably overloaded (CPU used at 100% for a long time), the event “the SLA for database D is no longer fulfilled” can be explained by the event “Server S is durably overloaded”.
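A toy sketch of this dependency-based reasoning is given below: given a graph of which components run on which others, a failure event is suppressed when something it depends on has also failed, leaving only candidate root causes (this also subsumes the masking step above). The graph and the failure set are hypothetical.

```python
# Toy root-cause sketch: suppress events that are explained by a failure
# somewhere in the dependency chain, keeping only candidate root causes.
# Dependency graph and failure set are hypothetical examples.

depends_on = {                      # node -> nodes it depends on
    "sla-D":      ["database-D"],
    "database-D": ["server-S"],
    "server-S":   [],
}

failed = {"sla-D", "database-D", "server-S"}   # nodes currently reporting a failure

def upstream(node):
    """All nodes (transitively) that `node` depends on."""
    seen, stack = set(), list(depends_on.get(node, []))
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(depends_on.get(n, []))
    return seen

root_causes = [n for n in failed if not (upstream(n) & failed)]
print(root_causes)                  # ['server-S']: the overloaded server explains the rest
```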
Action triggering
At this stage, the event correlator is left with at most a handful of events that need to be acted upon. Strictly speaking, event correlation ends here. However, by language abuse, the event correlators found on the market (e.g., in network management) sometimes also include problem-solving capabilities. For instance, they may trigger corrective actions or further investigations automatically.
Event correlation in other fields
Event correlation in ITIL
The scope of ITIL is larger than that of integrated management. However, event correlation in ITIL is quite similar to event correlation in integrated management.
In the ITIL version 2 framework, event correlation spans three processes: Incident Management, Problem Management and Service Level Management.
In the ITIL version 3 framework, event correlation takes place in the Event Management process. The event correlator is called a correlation engine.
Event correlation in publish-subscribe systems
Event correlation in complex event processing
Event correlation in business activity monitoring
Event correlation in industrial process control
See also
Business activity monitoring
Causal reasoning
Complex event processing
ECA rules
Event stream processing
Event-driven architecture
Event-driven programming
Event-driven SOA
Incident management
Issue tracking system
IT service management
Network management
Problem management
Root cause analysis
Supervisory control and data acquisition (SCADA)
Systems management
References
M. Hasan, B. Sugla and R. Viswanathan, "A Conceptual Framework for Network Management Event Correlation and Filtering Systems", in Proc. 6th IFIP/IEEE International Symposium on Integrated Network Management (IM 1999), Boston, MA, USA, May 1999, pp. 233–246.
H.G. Hegering, S. Abeck and B. Neumair, Integrated Management of Networked Systems, Morgan Kaufmann, 1998.
G. Jakobson and M. Weissman, "Alarm Correlation", IEEE Network, Vol. 7, No. 6, pp. 52–59, November 1993.
S. Kliger, S. Yemini, Y. Yemini, D. Ohsie and S. Stolfo, "A Coding Approach to Event Correlation", in Proc. 4th IEEE/IFIP International Symposium on Integrated Network Management (ISINM 1995), Santa Barbara, CA, USA, May 1995, pp. 266–277.
J.P. Martin-Flatin, G. Jakobson and L. Lewis, "Event Correlation in Integrated Management: Lessons Learned and Outlook”, Journal of Network and Systems Management, Vol. 17, No. 4, December 2007.
M. Sloman (Ed.), "Network and Distributed Systems Management", Addison-Wesley, 1994.
External links
Softpanorama event correlation technologies page
Events (computing)
Evaluation methods
Causal inference | Event correlation | [
"Technology"
] | 1,887 | [
"Information systems",
"Events (computing)"
] |
9,650,153 | https://en.wikipedia.org/wiki/One%20Watt%20Initiative | The One Watt Initiative is an energy-saving initiative by the International Energy Agency (IEA) to reduce standby power-use by any appliance to no more than one watt in 2010, and 0.5 watts in 2013, which has given rise to regulations in many countries and regions.
Standby power
Standby power, informally called vampire or phantom power, refers to the electricity consumed by many appliances when they are switched off or in standby mode. The standby power drawn per appliance is typically low (from less than 1 W to 25 W), but, when multiplied by the billions of appliances in houses and in commercial buildings, standby losses represent a significant fraction of total world electricity use. According to Alan Meier, a staff scientist at the Lawrence Berkeley National Laboratory, standby power before the One Watt Initiative proposals were implemented as regulations accounted for as much as 10% of household power consumption. A study in France found that standby power accounted for 7% of total residential consumption, and other studies put the proportion of consumption due to standby power at 13%.
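The arithmetic behind such percentages is simple, as the following back-of-the-envelope sketch shows; the device count, average standby draw and total household consumption are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope estimate of household standby losses.
# Device count, standby draw and total consumption are illustrative assumptions.

devices = 20                 # appliances left plugged in
standby_watts = 5.0          # assumed average standby draw per appliance
hours_per_year = 24 * 365

standby_kwh = devices * standby_watts * hours_per_year / 1000
household_kwh = 8000         # assumed total annual household consumption

print(f"standby energy: {standby_kwh:.0f} kWh/year")
print(f"share of consumption: {100 * standby_kwh / household_kwh:.1f}%")
# With these assumptions: 876 kWh/year, roughly 11% of the total;
# cutting each device to 1 W would shrink that to about 175 kWh/year.
```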
The IEA estimated in 2007 that standby produced 1% of the world's carbon dioxide (CO2) emissions. To put the figure into context, total air travel contributes less than 3% of global CO2 emissions.
Standby power can be reduced by technological means, reducing power used without affecting functionality, and by changing users' operating procedures.
Policy
The One Watt Initiative was launched by the IEA in 1999 to ensure through international cooperation that by 2010 all new appliances sold in the world use only one watt in standby mode. This would reduce CO2 emissions by 50 million tons in the OECD countries alone by 2010; the equivalent of removing 18 million cars from the roads.
In 2001, US President George W. Bush issued Executive Order 13221, which states that every government agency, "when it purchases commercially available, off-the-shelf products that use external standby power devices, or that contain an internal standby power function, shall purchase products that use no more than one watt in their standby power consuming mode."
By 2005, South Korea and Australia had introduced the one watt benchmark in all new electrical devices, and according to the IEA other countries, notably Japan and China, had undertaken "strong measures" to reduce standby power use.
In July 2007, California's 2005 appliance standards came into effect, limiting external power supply standby power to 0.5 watts.
On 6 January 2010, the European Commission's EC Regulation 1275/2008 came into force regulating requirements for standby and "off mode" electric power consumption of electrical and electronic household and office equipment. The regulations mandate that from 6 January 2010 "off mode" and standby power shall not exceed 1 W, "standby-plus" power (providing information or status display in addition to possible reactivation function) shall not exceed 2 W (these figures are halved on 6 January 2013). Equipment must, where appropriate, provide off mode and/or standby mode when the equipment is connected to the mains power source.
See also
Carbon footprint
Energy conservation
Energy-Efficient Ethernet
Energy policy
Low-carbon economy
Standby power
Voltage optimisation
References
External links
Things that go blip in the night, Standby power and how to limit it, International Energy Agency/Organisation for Economic Co-operation and Development, Paris, 2001
International Energy Agency
Standby Power Home Page, Lawrence Berkeley National Laboratory California
Electric power
Energy conservation
International Energy Agency | One Watt Initiative | [
"Physics",
"Engineering"
] | 718 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
9,651,443 | https://en.wikipedia.org/wiki/Radial%20basis%20function%20network | In the field of mathematical modeling, a radial basis function network is an artificial neural network that uses radial basis functions as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters. Radial basis function networks have many uses, including function approximation, time series prediction, classification, and system control. They were first formulated in a 1988 paper by Broomhead and Lowe, both researchers at the Royal Signals and Radar Establishment.
Network architecture
Radial basis function (RBF) networks typically have three layers: an input layer, a hidden layer with a non-linear RBF activation function and a linear output layer. The input can be modeled as a vector of real numbers $\mathbf{x} \in \mathbb{R}^n$. The output of the network is then a scalar function of the input vector, $\varphi : \mathbb{R}^n \to \mathbb{R}$, and is given by

$$\varphi(\mathbf{x}) = \sum_{i=1}^{N} a_i \, \rho\left(\left\lVert \mathbf{x} - \mathbf{c}_i \right\rVert\right),$$

where $N$ is the number of neurons in the hidden layer, $\mathbf{c}_i$ is the center vector for neuron $i$, and $a_i$ is the weight of neuron $i$ in the linear output neuron. Functions that depend only on the distance from a center vector are radially symmetric about that vector, hence the name radial basis function. In the basic form, all inputs are connected to each hidden neuron. The norm is typically taken to be the Euclidean distance (although the Mahalanobis distance appears to perform better with pattern recognition) and the radial basis function is commonly taken to be Gaussian

$$\rho\left(\left\lVert \mathbf{x} - \mathbf{c}_i \right\rVert\right) = \exp\left[-\beta_i \left\lVert \mathbf{x} - \mathbf{c}_i \right\rVert^2\right].$$

The Gaussian basis functions are local to the center vector in the sense that

$$\lim_{\lVert \mathbf{x} \rVert \to \infty} \rho\left(\left\lVert \mathbf{x} - \mathbf{c}_i \right\rVert\right) = 0,$$

i.e. changing parameters of one neuron has only a small effect for input values that are far away from the center of that neuron.

Given certain mild conditions on the shape of the activation function, RBF networks are universal approximators on a compact subset of $\mathbb{R}^n$. This means that an RBF network with enough hidden neurons can approximate any continuous function on a closed, bounded set with arbitrary precision.

The parameters $a_i$, $\mathbf{c}_i$, and $\beta_i$ are determined in a manner that optimizes the fit between $\varphi$ and the data.
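A minimal sketch of the forward pass defined by the formulas above, written with NumPy; the centers, widths and weights are arbitrary illustrative values rather than fitted parameters.

```python
# Minimal RBF network forward pass (unnormalized, Gaussian basis functions).
# Centers, widths and weights here are arbitrary illustrative values; in
# practice they are fitted to data as described in the Training section.
import numpy as np

centers = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])   # c_i, shape (N, n)
betas   = np.array([1.0, 2.0, 0.5])                         # beta_i
weights = np.array([0.3, -1.2, 0.7])                        # a_i

def rbf_forward(x, centers=centers, betas=betas, weights=weights):
    """phi(x) = sum_i a_i * exp(-beta_i * ||x - c_i||^2)."""
    d2 = np.sum((centers - x) ** 2, axis=1)     # squared Euclidean distances
    return float(weights @ np.exp(-betas * d2))

print(rbf_forward(np.array([0.5, 0.5])))
```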
Normalized
Normalized architecture
In addition to the above unnormalized architecture, RBF networks can be normalized. In this case the mapping is

$$\varphi(\mathbf{x}) = \frac{\sum_{i=1}^{N} a_i \, \rho\left(\left\lVert \mathbf{x} - \mathbf{c}_i \right\rVert\right)}{\sum_{i=1}^{N} \rho\left(\left\lVert \mathbf{x} - \mathbf{c}_i \right\rVert\right)} = \sum_{i=1}^{N} a_i \, u\left(\left\lVert \mathbf{x} - \mathbf{c}_i \right\rVert\right),$$

where

$$u\left(\left\lVert \mathbf{x} - \mathbf{c}_i \right\rVert\right) = \frac{\rho\left(\left\lVert \mathbf{x} - \mathbf{c}_i \right\rVert\right)}{\sum_{j=1}^{N} \rho\left(\left\lVert \mathbf{x} - \mathbf{c}_j \right\rVert\right)}$$

is known as a normalized radial basis function.
Theoretical motivation for normalization
There is theoretical justification for this architecture in the case of stochastic data flow. Assume a stochastic kernel approximation for the joint probability density
where the weights and are exemplars from the data and we require the kernels to be normalized
and
.
The probability densities in the input and output spaces are
and
The expectation of y given an input is
where
is the conditional probability of y given .
The conditional probability is related to the joint probability through Bayes theorem
which yields
.
This becomes
when the integrations are performed.
Local linear models
It is sometimes convenient to expand the architecture to include local linear models. In that case the architectures become, to first order,
and
in the unnormalized and normalized cases, respectively. Here are weights to be determined. Higher order linear terms are also possible.
This result can be written
where
and
in the unnormalized case and
in the normalized case.
Here is a Kronecker delta function defined as
.
Training
RBF networks are typically trained from pairs of input and target values $\mathbf{x}(t), y(t)$, $t = 1, \dots, T$, by a two-step algorithm.
In the first step, the center vectors of the RBF functions in the hidden layer are chosen. This step can be performed in several ways; centers can be randomly sampled from some set of examples, or they can be determined using k-means clustering. Note that this step is unsupervised.
The second step simply fits a linear model with coefficients $\mathbf{w}$ to the hidden layer's outputs with respect to some objective function. A common objective function, at least for regression/function estimation, is the least squares function:

$$K(\mathbf{w}) = \sum_{t=1}^{T} \left[ y(t) - \varphi\left(\mathbf{x}(t), \mathbf{w}\right) \right]^2 .$$
We have explicitly included the dependence on the weights. Minimization of the least squares objective function by optimal choice of weights optimizes accuracy of fit.
There are occasions in which multiple objectives, such as smoothness as well as accuracy, must be optimized. In that case it is useful to optimize a regularized objective function such as

$$H(\mathbf{w}) = K(\mathbf{w}) + \lambda S(\mathbf{w}),$$

where optimization of $S$ maximizes smoothness and $\lambda$ is known as a regularization parameter.
A third optional backpropagation step can be performed to fine-tune all of the RBF net's parameters.
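A compact sketch of this two-step procedure, with centers sampled from the data, a single shared width taken from a common heuristic, and the linear weights fitted by least squares, is given below; the toy data set and the width heuristic are illustrative choices rather than the only ones mentioned above.

```python
# Sketch of two-step RBF training: (1) pick centers from the data,
# (2) fit the linear output weights by least squares.
# Toy data and the width heuristic are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))            # inputs
y = np.sin(X[:, 0])                              # targets (toy regression problem)

# Step 1: unsupervised choice of centers (random sampling here; k-means also works)
N = 10
centers = X[rng.choice(len(X), size=N, replace=False)]
d_max = np.max(np.linalg.norm(centers[:, None] - centers[None, :], axis=-1))
beta = N / (d_max ** 2 + 1e-12)                  # shared width from the center spread

def design_matrix(X):
    d2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    return np.exp(-beta * d2)                    # G[t, i] = rho(||x_t - c_i||)

# Step 2: linear least squares for the output weights
G = design_matrix(X)
w, *_ = np.linalg.lstsq(G, y, rcond=None)

print("training RMS error:", np.sqrt(np.mean((G @ w - y) ** 2)))
```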
Interpolation
RBF networks can be used to interpolate a function $y : \mathbb{R}^n \to \mathbb{R}$ when the values of that function are known on a finite number of points: $y(\mathbf{x}_i) = b_i$, $i = 1, \dots, N$. Taking the known points $\mathbf{x}_i$ to be the centers of the radial basis functions and evaluating the values of the basis functions at the same points, $g_{ji} = \rho\left(\left\lVert \mathbf{x}_j - \mathbf{x}_i \right\rVert\right)$, the weights can be solved from the equation

$$\begin{bmatrix} g_{11} & \cdots & g_{1N} \\ \vdots & \ddots & \vdots \\ g_{N1} & \cdots & g_{NN} \end{bmatrix} \begin{bmatrix} w_1 \\ \vdots \\ w_N \end{bmatrix} = \begin{bmatrix} b_1 \\ \vdots \\ b_N \end{bmatrix}.$$

It can be shown that the interpolation matrix in the above equation is non-singular, if the points $\mathbf{x}_i$ are distinct, and thus the weights can be solved by simple linear algebra:

$$\mathbf{w} = \mathbf{G}^{-1}\mathbf{b}, \quad \text{where } \mathbf{G} = (g_{ji}).$$
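In code, this exact-interpolation case is a single linear solve; the NumPy sketch below uses an arbitrary Gaussian width and a toy set of sample points, both of which are illustrative assumptions.

```python
# Exact RBF interpolation: solve G w = b where G[j, i] = rho(||x_j - x_i||).
# Width and toy data points are arbitrary illustrative values.
import numpy as np

pts = np.array([0.0, 0.7, 1.3, 2.0, 3.1])        # known sample locations x_i
vals = np.cos(pts)                               # known function values b_i
beta = 1.5

G = np.exp(-beta * (pts[:, None] - pts[None, :]) ** 2)
w = np.linalg.solve(G, vals)                     # non-singular for distinct points

def interpolant(x):
    return np.exp(-beta * (x - pts) ** 2) @ w

print(interpolant(1.0), np.cos(1.0))             # interpolated vs. true value at x = 1
```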
Function approximation
If the purpose is not to perform strict interpolation but instead more general function approximation or classification the optimization is somewhat more complex because there is no obvious choice for the centers. The training is typically done in two phases first fixing the width and centers and then the weights. This can be justified by considering the different nature of the non-linear hidden neurons versus the linear output neuron.
Training the basis function centers
Basis function centers can be randomly sampled among the input instances or obtained by Orthogonal Least Square Learning Algorithm or found by clustering the samples and choosing the cluster means as the centers.
The RBF widths are usually all fixed to the same value, which is proportional to the maximum distance between the chosen centers.
Pseudoinverse solution for the linear weights
After the centers have been fixed, the weights that minimize the error at the output can be computed with a linear pseudoinverse solution:

$$\mathbf{w} = \mathbf{G}^{+}\mathbf{b},$$

where $\mathbf{G}^{+}$ is the Moore–Penrose pseudoinverse of $\mathbf{G}$ and the entries of $\mathbf{G}$ are the values of the radial basis functions evaluated at the data points: $g_{ji} = \rho\left(\left\lVert \mathbf{x}_j - \mathbf{c}_i \right\rVert\right)$.
The existence of this linear solution means that unlike multi-layer perceptron (MLP) networks, RBF networks have an explicit minimizer (when the centers are fixed).
Gradient descent training of the linear weights
Another possible training algorithm is gradient descent. In gradient descent training, the weights are adjusted at each time step by moving them in a direction opposite from the gradient of the objective function (thus allowing the minimum of the objective function to be found),
where is a "learning parameter."
For the case of training the linear weights, , the algorithm becomes
in the unnormalized case and
in the normalized case.
For local-linear-architectures gradient-descent training is
Projection operator training of the linear weights
For the case of training the linear weights, and , the algorithm becomes
in the unnormalized case and
in the normalized case and
in the local-linear case.
For one basis function, projection operator training reduces to Newton's method.
Examples
Logistic map
The basic properties of radial basis functions can be illustrated with a simple mathematical map, the logistic map, which maps the unit interval onto itself. It can be used to generate a convenient prototype data stream. The logistic map can be used to explore function approximation, time series prediction, and control theory. The map originated from the field of population dynamics and became the prototype for chaotic time series. The map, in the fully chaotic regime, is given by

$$x(t+1) = f\left[x(t)\right] = 4\,x(t)\left[1 - x(t)\right],$$

where t is a time index. The value of x at time t+1 is a parabolic function of x at time t. This equation represents the underlying geometry of the chaotic time series generated by the logistic map.
Generation of the time series from this equation is the forward problem. The examples here illustrate the inverse problem: identification of the underlying dynamics, or fundamental equation, of the logistic map from exemplars of the time series. The goal is to find an estimate $\hat{f}$ for $f$.
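A short sketch of generating such a series and arranging it into input and target exemplars (x(t), x(t+1)) for the estimation problem is shown below; the seed value and series length are arbitrary.

```python
# Generate a chaotic logistic-map time series and the (x(t), x(t+1)) training
# pairs used to estimate the underlying map. Seed and length are arbitrary.
import numpy as np

def logistic_series(x0=0.2, length=100):
    xs = [x0]
    for _ in range(length - 1):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))   # x(t+1) = 4 x(t) (1 - x(t))
    return np.array(xs)

series = logistic_series()
X, y = series[:-1], series[1:]      # exemplars: input x(t), target x(t+1)
print(X[:3], y[:3])
```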
Function approximation
Unnormalized radial basis functions
The architecture is
where
.
Since the input is a scalar rather than a vector, the input dimension is one. We choose the number of basis functions as N=5 and the size of the training set to be 100 exemplars generated by the chaotic time series. The weight is taken to be a constant equal to 5. The weights are five exemplars from the time series. The weights are trained with projection operator training:
where the learning rate is taken to be 0.3. The training is performed with one pass through the 100 training points. The rms error is 0.15.
Normalized radial basis functions
The normalized RBF architecture is
where
.
Again:
.
Again, we choose the number of basis functions as five and the size of the training set to be 100 exemplars generated by the chaotic time series. The weight is taken to be a constant equal to 6. The weights are five exemplars from the time series. The weights are trained with projection operator training:
where the learning rate is again taken to be 0.3. The training is performed with one pass through the 100 training points. The rms error on a test set of 100 exemplars is 0.084, smaller than the unnormalized error. Normalization yields accuracy improvement. Typically accuracy with normalized basis functions increases even more over unnormalized functions as input dimensionality increases.
Time series prediction
Once the underlying geometry of the time series is estimated as in the previous examples, a prediction for the time series can be made by iteration:
.
A comparison of the actual and estimated time series is displayed in the figure. The estimated times series starts out at time zero with an exact knowledge of x(0). It then uses the estimate of the dynamics to update the time series estimate for several time steps.
Note that the estimate is accurate for only a few time steps. This is a general characteristic of chaotic time series, a consequence of the sensitive dependence on initial conditions: a small initial error is amplified with time. A measure of the divergence of time series with nearly identical initial conditions is known as the Lyapunov exponent.
Control of a chaotic time series
We assume the output of the logistic map can be manipulated through a control parameter such that
.
The goal is to choose the control parameter in such a way as to drive the time series to a desired output . This can be done if we choose the control parameter to be
where
is an approximation to the underlying natural dynamics of the system.
The learning algorithm is given by
where
.
See also
Radial basis function kernel
instance-based learning
In Situ Adaptive Tabulation
Predictive analytics
Chaos theory
Hierarchical RBF
Cerebellar model articulation controller
Instantaneously trained neural networks
References
Further reading
J. Moody and C. J. Darken, "Fast learning in networks of locally tuned processing units," Neural Computation, 1, 281-294 (1989). Also see Radial basis function networks according to Moody and Darken
T. Poggio and F. Girosi, "Networks for approximation and learning," Proc. IEEE 78(9), 1484-1487 (1990).
Roger D. Jones, Y. C. Lee, C. W. Barnes, G. W. Flake, K. Lee, P. S. Lewis, and S. Qian, Function approximation and time series prediction with neural networks, Proceedings of the International Joint Conference on Neural Networks, June 17–21, p. I-649 (1990).
S. Chen, C. F. N. Cowan, and P. M. Grant, "Orthogonal Least Squares Learning Algorithm for Radial Basis Function Networks", IEEE Transactions on Neural Networks, Vol 2, No 2 (Mar) 1991.
Neural network architectures
Computational statistics
Classification algorithms
Machine learning algorithms
Regression analysis | Radial basis function network | [
"Mathematics"
] | 2,375 | [
"Computational statistics",
"Computational mathematics"
] |
9,651,556 | https://en.wikipedia.org/wiki/Volunteer%20computing | Volunteer computing is a type of distributed computing in which people donate their computers' unused resources to a research-oriented project, and sometimes in exchange for credit points. The fundamental idea behind it is that a modern desktop computer is sufficiently powerful to perform billions of operations a second, but for most users only between 10–15% of its capacity is used. Common tasks such as word processing or web browsing leave the computer mostly idle.
The practice of volunteer computing, which dates back to the mid-1990s, can potentially make substantial processing power available to researchers at minimal cost. Typically, a program running on a volunteer's computer periodically contacts a research application to request jobs and report results. A middleware system usually serves as an intermediary.
History
The first volunteer computing project was the Great Internet Mersenne Prime Search, which started in January 1996. It was followed in 1997 by distributed.net. In 1997 and 1998, several academic research projects developed Java-based systems for volunteer computing; examples include Bayanihan, Popcorn, Superweb, and Charlotte.
The term volunteer computing was coined by Luis F. G. Sarmenta, the developer of Bayanihan. It is also appealing for global efforts on social responsibility, or corporate social responsibility, as reported in a Harvard Business Review article.
In 1999, the SETI@home and Folding@home projects were launched. These projects received considerable media coverage, and each one attracted several hundred thousand volunteers.
Between 1998 and 2002, several companies were formed with business models involving volunteer computing. Examples include Popular Power, Porivo, Entropia, and United Devices.
In 2002, the Berkeley Open Infrastructure for Network Computing (BOINC) project was founded at University of California, Berkeley Space Sciences Laboratory, funded by the National Science Foundation. BOINC provides a complete middleware system for volunteer computing, including a client, client GUI, application runtime system, server software, and software implementing a project web site. The first project based on BOINC was Predictor@home, based at the Scripps Research Institute, which began operation in 2004. Soon thereafter, SETI@home and climateprediction.net began using BOINC. A number of new BOINC-based projects were created over the next few years, including Rosetta@home, Einstein@home, and AQUA@home. In 2007, IBM World Community Grid switched from the United Devices platform to BOINC.
Middleware
The client software of the early volunteer computing projects consisted of a single program that combined the scientific computation and the distributed computing infrastructure. This monolithic architecture was inflexible. For example, it was difficult to deploy new application versions.
More recently, volunteer computing has moved to middleware systems that provide a distributed computing infrastructure independent from the scientific computation. Examples include:
BOINC is the most widely used middleware system. It offers client software for Windows, macOS, Linux, Android, and other Unix variants.
XtremWeb is used primarily as a research tool. It is developed by a group based at the University of Paris-South.
Xgrid is developed by Apple. Its client and server components run only on macOS.
Grid MP is a commercial middleware platform developed by United Devices and was used in volunteer computing projects including grid.org, World Community Grid, Cell Computing, and Hikari Grid.
Most of these systems have the same basic structure: a client program runs on the volunteer's computer. It periodically contacts project-operated servers over the Internet, requesting jobs and reporting the results of completed jobs. This "pull" model is necessary because many volunteer computers are behind firewalls that do not allow incoming connections. The system keeps track of each user's "credit", a numerical measure of how much work that user's computers have done for the project.
Volunteer computing systems must deal with several issues involving volunteered computers: their heterogeneity, their churn (the tendency of individual computers to join and leave the network over time), their sporadic availability, and the need to not interfere with their performance during regular use.
In addition, volunteer computing systems must deal with problems related to correctness:
Volunteers are unaccountable and essentially anonymous.
Some volunteer computers (especially those that are overclocked) occasionally malfunction and return incorrect results.
Some volunteers intentionally return incorrect results or claim excessive credit for results.
One common approach to these problems is replicated computing, in which each job is performed on at least two computers. The results (and the corresponding credit) are accepted only if they agree sufficiently.
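A minimal Python sketch of this validation step is shown below; the numeric tolerance, the two-replica quorum, and the use of the mean of agreeing results are illustrative assumptions rather than the policy of any particular project.

def validate(replica_results, tolerance=1e-6, quorum=2):
    # Accept a job's result only if at least `quorum` replicas agree
    # within `tolerance`; return the agreed value, or None if there is
    # no agreement yet and the job must be sent to another computer.
    for a in replica_results:
        matching = [b for b in replica_results if abs(a - b) <= tolerance]
        if len(matching) >= quorum:
            # Credit would be granted only for the matching replicas;
            # the mean of the agreeing results serves as the canonical value.
            return sum(matching) / len(matching)
    return None

# Example: two honest volunteers agree, one faulty (overclocked) machine does not.
print(validate([3.141592, 3.141593, 2.9]))   # approximately 3.1415925
print(validate([3.141592, 2.9]))             # None: another replica is needed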
Drawbacks for participants
Increased power consumption: A CPU generally uses more electricity when it is active compared to when it is idle. Additionally, the desire to participate may cause the volunteer to leave the PC on overnight or disable power-saving features like suspend. Furthermore, if the computer cannot cool itself adequately, the added load on the volunteer's CPU can cause it to overheat.
Decreased performance of the PC: If the volunteer computing application runs while the computer is in use, it may impact performance of the PC. This is due to increased usage of the CPU, CPU cache, local storage, and network connection. If RAM is a limitation, increased disk cache misses or increased paging can result. Volunteer computing applications typically execute at a lower CPU scheduling priority, which helps to alleviate CPU contention.
These effects may or may not be noticeable, and even if they are noticeable, the volunteer might choose to continue participating. However, the increased power consumption can be remedied to some extent by setting an option to limit the percentage of the processor used by the client, which is available in some client software.
Benefits for researchers
Computing power
Volunteer computing can provide researchers with computing power that is not achievable any other way. For example, Folding@home has been ranked as one of the world's fastest computing systems. With heightened interest and volunteer participation in the project as a result of the COVID-19 pandemic, the system achieved a speed of approximately 1.22 exaflops by late March 2020 and reached 2.43 exaflops by April 12, 2020, making it the world's first exaflop computing system.
Cost
Volunteer computing is often cheaper than other forms of distributed computing, and typically comes at zero cost to the end researcher.
Importance
Although implementing such projects raises issues such as the lack of accountability and of trust between participants and researchers, volunteer computing is crucially important, especially to projects with limited funding.
Supercomputers with huge computing power are extremely expensive and are available only to applications that can afford them. Volunteer computing, by contrast, is not something that can be bought; its power arises from public support. A research project with limited resources and funding can obtain large amounts of computing power by attracting public attention.
By volunteering support and computing power to scientific research, citizens are encouraged to take an interest in science and are given a voice in the direction of research, and ultimately of future science, by choosing whether or not to support particular projects.
See also
Citizen science
Cloud computing
List of volunteer computing projects
Peer-to-peer
Swarm intelligence
Virtual volunteering
References
External links
Wanted: Your computer's spare time Physics.org, September 2009
The Strongest Supercomputer on Earth Still Needs Your Laptop to Cure Cancer Inverse.com, December 2015
Digital labor
Distributed computing architecture
Middleware | Volunteer computing | [
"Technology",
"Engineering"
] | 1,515 | [
"Information and communications technology",
"Digital labor",
"IT infrastructure",
"Software engineering",
"Middleware"
] |
9,651,765 | https://en.wikipedia.org/wiki/Go%20fever | In the US space industry, "go fever" (also "launch fever") is an informal term used to refer to the overall attitude of being in a rush or hurry to get a project or task done while overlooking potential problems or mistakes.
The term was coined after the Apollo 1 fire in 1967 and has been referred to in subsequent NASA incidents such as the Space Shuttle Challenger disaster in 1986 and the Space Shuttle Columbia disaster in 2003.
Causes
"Go fever" results from both individual and collective aspects of human behavior. It is due to the tendency as individuals to be overly committed to a previously chosen course of action based on time and resources already expended (sunk costs) despite reduced or insufficient future benefits, or even considerable risks. It is also due to both general budget concerns and the desire of members of a team not to be seen as not fully committed to the team's goals or even as interfering with the team's progress or success.
"Go fever" is comparable to the "groupthink" phenomenon, where a group makes a bad decision for the sake of cordiality and maintaining the group's atmosphere. The term was coined by social psychologist Irving Janis in 1972. The psychology behind "go fever" is also reminiscent of "get-home-itis", the irrational desire to press on unnecessarily to a desired destination despite significant (but likely temporary) adverse conditions.
See also
Groupthink
Sunk cost
References
Bibliography
The Nation: NASA's Curse?; 'Groupthink' Is 30 Years Old, And Still Going Strong
External links
NASA's Safety Culture (archived at the Internet Archive)
NASA
English phrases | Go fever | [
"Astronomy"
] | 337 | [
"Outer space stubs",
"Outer space",
"Astronomy stubs"
] |
9,652,379 | https://en.wikipedia.org/wiki/Membrane%20fusion%20protein | Membrane fusion proteins (not to be confused with chimeric or fusion proteins) are proteins that cause fusion of biological membranes. Membrane fusion is critical for many biological processes, especially in eukaryotic development and viral entry. Fusion proteins can originate from genes encoded by infectious enveloped viruses, ancient retroviruses integrated into the host genome, or solely by the host genome. Post-transcriptional modifications made to the fusion proteins by the host, namely addition and modification of glycans and acetyl groups, can drastically affect fusogenicity (the ability to fuse).
Fusion in eukaryotes
Eukaryotic genomes contain several gene families, of host and viral origin, which encode products involved in driving membrane fusion. While adult somatic cells do not typically undergo membrane fusion under normal conditions, gametes and embryonic cells follow developmental pathways to non-spontaneously drive membrane fusion, such as in placental formation, syncytiotrophoblast formation, and neurodevelopment. Fusion pathways are also involved in the development of musculoskeletal and nervous system tissues. Vesicle fusion events involved in neurotransmitter trafficking also rely on the catalytic activity of fusion proteins.
SNARE family
The SNARE family include bona fide eukaryotic fusion proteins. They are only found in eukaryotes and their closest archaeal relatives like Heimdallarchaeota.
Retroviral
These proteins originate from the env gene of endogenous retroviruses. They are domesticated viral class I fusion proteins.
Syncytins are responsible for structures of the placenta.
Syncytin-1
Syncytin-2
ERV3 is not fusogenic in humans, but it still plays a role in helping the placenta evade the immune response.
HAP2 family
HAP2 is a fusexin (similar to viral class II) found in diverse eukaryotes including Toxoplasma, vascular plants, and fruit flies. This protein is essential for gamete fusion in these organisms. Its origin is unclear, as the broader grouping of fusexins could be older than the viral class II with the discovery of archaeal homologs.
Pathogenic viral fusion
Enveloped viruses readily overcome the thermodynamic barrier of merging two plasma membranes by storing kinetic energy in fusion (F) proteins. F proteins can be independently expressed on host cell surfaces which can either (1) drive the infected cell to fuse with neighboring cells, forming a syncytium, or (2) be incorporated into a budding virion from the infected cell which leads to the full emancipation of plasma membrane from the host cell. Some F components solely drive fusion while a subset of F proteins can interact with host factors. There are four groups of fusion proteins categorized by their structure and mechanism of fusion.
Despite their very different structure and presumably different origins, classes I, II, and III all work by forming a trimer of hairpins.
Class I
Class I fusion proteins resemble influenzavirus hemagglutinin in their structure. Post-fusion, the active site has a trimer of α-helical coiled-coils. The binding domain is rich in α-helices and hydrophobic fusion peptides located near the N-terminus (some examples show internal fusion peptides, however). Fusion conformation change can often be controlled by pH.
Class II
Class II proteins are rich in β-sheets and the catalytic sites are localized in the core region. The peptide regions required to drive fusion are formed from the turns between the β-sheets. They usually start as dimers, becoming trimers as fusion happens.
Class III
Class III fusion proteins are distinct from I and II. They typically consist of 5 structural domains, where domain 1 and 2 localized to the C-terminal end often contain more β-sheets and domains 2-5 closer to the N-terminal side are richer in α-helices. In the pre-fusion state, the later domains nest and protect domain 1 (i.e. domain 1 is protected by domain 2, which is nested in domain 3, which is protected by domain 4). Domain 1 contains the catalytic site for membrane fusion.
Others
A number of fusion proteins belong to none of the three main classes.
Poxviruses employ a multiprotein system of 11 different genes and their relatives in the broader group of Nucleocytoviricota appear to do likewise. The structure of the fusion complex is not yet resolved. Scientists have produced some information on what each of the components bind to, but still not enough to produce a full picture.
Hepadnaviridae, which includes the Hep B virus, uses different forms of the surface antigen (HBsAg - S, M and L) to fuse. It was found in 2021 that it has a fusion peptide in preS1, which is found in the L form.
FAST
Fusion-associated small transmembrane proteins (FAST) are the smallest type of fusion protein. They are found in reoviruses, which are non-enveloped viruses and are specialized for cell-cell rather than virus-cell fusion, forming syncytia. They are the only known membrane fusion proteins found in non-enveloped viruses. They exploit the cell-cell adhesion machinery to achieve initial attachment. They might encourage fusion by inducing membrane curvature using a variety of hydrophobic motifs and modified residues.
Examples
Cross-group families
Fusexin
The fusexin family consists of eukaryotic HAP2/GCS1, eukaryotic EFF-1, viral "class II", and haloarchaeal Fsx1. They all share a common fold and fuse membranes. In an unrooted phylogenetic tree from 2021, HAP2/GCS1 and EFF-1/AFF-1 occupy two ends of the tree, the middle being occupied by viral sequences; this suggests that they may have been acquired separately. The latest structure-based unrooted phylogenetic tree of Brukman et al. (2022), which takes into account the newly-discovered archaeal sequences, shows that Fsx1 groups with HAP2/GCS1, and that they are separated from EFF-1 by a number of viral sequences. Based on where the root is placed, a number of different hypotheses regarding the history of these families – their horizontal transfer and vertical inheritance – can be generated. Older comparisons excluding archaeal sequences would strongly favor an interpretation where HAP2/GCS1 is acquired from a virus, but the grouping of Fsx1 with HAP2/GCS1 has allowed the possibility of a much more ancient source.
See also
Interbilayer forces in membrane fusion
Viral membrane fusion proteins
References
External links
Membrane proteins | Membrane fusion protein | [
"Biology"
] | 1,392 | [
"Protein classification",
"Membrane proteins"
] |
9,654,085 | https://en.wikipedia.org/wiki/Phred%20quality%20score | A Phred quality score is a measure of the quality of the identification of the nucleobases generated by automated DNA sequencing. It was originally developed for the computer program Phred to help in the automation of DNA sequencing in the Human Genome Project. Phred quality scores are assigned to each nucleotide base call in automated sequencer traces. The FASTQ format encodes phred scores as ASCII characters alongside the read sequences. Phred quality scores have become widely accepted to characterize the quality of DNA sequences, and can be used to compare the efficacy of different sequencing methods. Perhaps the most important use of Phred quality scores is the automatic determination of accurate, quality-based consensus sequences.
Definition
Phred quality scores Q are logarithmically related to the base-calling error probability P and defined as
Q = −10 log10 P.
This relation can also be written as
P = 10^(−Q/10).
For example, if Phred assigns a quality score of 30 to a base, the chances that this base is called incorrectly are 1 in 1000.
The Phred quality score is the negative ratio of the error probability to the reference level of P = 1, expressed in decibels (dB).
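The definition above can be checked numerically. The short Python sketch below converts between error probabilities and quality scores and decodes a FASTQ quality string; it assumes the common Phred+33 (Sanger) ASCII offset, and the example quality string is invented.

import math

def phred_from_error(p):
    # Q = -10 * log10(P)
    return -10 * math.log10(p)

def error_from_phred(q):
    # P = 10^(-Q/10)
    return 10 ** (-q / 10)

print(phred_from_error(0.001))   # 30.0  (an error chance of 1 in 1000)
print(error_from_phred(30))      # 0.001

# Decoding a FASTQ quality string using the Phred+33 ASCII offset
# (an assumption matching the common Sanger convention): 'I' encodes Q = 40.
quality_string = "II=5#"
scores = [ord(c) - 33 for c in quality_string]
print(scores)                    # [40, 40, 28, 20, 2]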
History
The idea of sequence quality scores can be traced back to the original description of the SCF file format by Rodger Staden's group in 1992. In 1995, Bonfield and Staden proposed a method to use base-specific quality scores to improve the accuracy of consensus sequences in DNA sequencing projects.
However, early attempts to develop base-specific quality scores had only limited success.
The first program to develop accurate and powerful base-specific quality scores was the program Phred. Phred was able to calculate highly accurate quality scores that were logarithmically linked to the error probabilities. Phred was quickly adopted by all the major genome sequencing centers as well as many other laboratories; the vast majority of the DNA sequences produced during the Human Genome Project were processed with Phred.
After Phred quality scores became the required standard in DNA sequencing, other manufacturers of DNA sequencing instruments, including Li-Cor and ABI, developed similar quality scoring metrics for their base calling software.
Methods
Phred's approach to base calling and calculating quality scores was outlined by Ewing et al.. To determine quality scores, Phred first calculates several parameters related to peak shape and peak resolution at each base. Phred then uses these parameters to look up a corresponding quality score in huge lookup tables. These lookup tables were generated from sequence traces where the correct sequence was known, and are hard coded in Phred; different lookup tables are used for different sequencing chemistries and machines. An evaluation of the accuracy of Phred quality scores for a number of variations in sequencing chemistry and instrumentation showed that Phred quality scores are highly accurate.
Phred was originally developed for "slab gel" sequencing machines like the ABI373. When originally developed, Phred had a lower base calling error rate than the manufacturer's base calling software, which also did not provide quality scores. However, Phred was only partially adapted to the capillary DNA sequencers that became popular later. In contrast, instrument manufacturers like ABI continued to adapt their base calling software to changes in sequencing chemistry, and have included the ability to create Phred-like quality scores. Therefore, the need to use Phred for base calling of DNA sequencing traces has diminished, and using the manufacturer's current software versions can often give more accurate results.
Applications
Phred quality scores are used for assessment of sequence quality, recognition and removal of low-quality sequence (end clipping), and determination of accurate consensus sequences.
Originally, Phred quality scores were primarily used by the sequence assembly program Phrap. Phrap was routinely used in some of the largest sequencing projects in the Human Genome Sequencing Project and is currently one of the most widely used DNA sequence assembly programs in the biotech industry. Phrap uses Phred quality scores to determine highly accurate consensus sequences and to estimate the quality of the consensus sequences. Phrap also uses Phred quality scores to estimate whether discrepancies between two overlapping sequences are more likely to arise from random errors, or from different copies of a repeated sequence.
Within the Human Genome Project, the most important use of Phred quality scores was for automatic determination of consensus sequences. Before Phred and Phrap, scientists had to carefully look at discrepancies between overlapping DNA fragments; often, this involved manual determination of the highest-quality sequence, and manual editing of any errors. Phrap's use of Phred quality scores effectively automated finding the highest-quality consensus sequence; in most cases, this completely circumvents the need for any manual editing. As a result, the estimated error rate in assemblies that were created automatically with Phred and Phrap is typically substantially lower than the error rate of manually edited sequence.
In 2009, many commonly used software packages make use of Phred quality scores, albeit to a different extent. Programs like Sequencher use quality scores for display, end clipping, and consensus determination; other programs like CodonCode Aligner also implement quality-based consensus methods.
Compression
Quality scores are normally stored together with the nucleotide sequence in the widely accepted FASTQ format. They account for about half of the required disk space in the FASTQ format (before compression), and therefore the compression of the quality values can significantly reduce storage requirements and speed up analysis and transmission of sequencing data. Both lossless and lossy compression are recently being considered in the literature. For example, the algorithm QualComp performs lossy compression with a rate (number of bits per quality value) specified by the user. Based on rate-distortion theory results, it allocates the number of bits so as to minimize the MSE (mean squared error) between the original (uncompressed) and the reconstructed (after compression) quality values. Other algorithms for compression of quality values include SCALCE, Fastqz and more recently QVZ, AQUa and the MPEG-G standard, that is currently under development by the MPEG standardisation working group. Both are lossless compression algorithms that provide an optional controlled lossy transformation approach. For example, SCALCE reduces the alphabet size based on the observation that “neighboring” quality values are similar in general.
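As a rough illustration of the lossy approach, the Python sketch below quantizes quality values into a small set of bins and computes the resulting mean squared error; the bin boundaries are arbitrary assumptions for the example and are not those of QualComp, SCALCE, or any other published scheme.

def quantize(q, bins=(2, 10, 20, 25, 30, 35, 40)):
    # Map a quality value to the representative value of its bin,
    # shrinking the alphabet so that the scores compress better.
    for b in bins:
        if q <= b:
            return b
    return bins[-1]

original = [2, 11, 18, 27, 33, 38, 40, 40]
reconstructed = [quantize(q) for q in original]
mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
print(reconstructed)   # [2, 20, 20, 30, 35, 40, 40, 40]
print(mse)             # 12.75: the distortion paid for the smaller alphabet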
References
External links
Long Reads with the KB Basecaller Comparison of Phred accuracy with a competing program, ABI's KB Basecaller
The Laboratory of Phil Green Phrap's homepage.
Molecular biology
DNA | Phred quality score | [
"Chemistry",
"Biology"
] | 1,338 | [
"Biochemistry",
"Molecular biology"
] |
9,654,388 | https://en.wikipedia.org/wiki/Slip%20%28ceramics%29 | A slip is a clay slurry used to produce pottery and other ceramic wares. Liquified clay, in which there is no fixed ratio of water and clay, is called slip or clay slurry which is used either for joining leather-hard (semi-hardened) clay body (pieces of pottery) together by slipcasting with mould, glazing or decorating the pottery by painting or dipping the pottery with slip. Pottery on which slip has been applied either for glazing or decoration is called slipware.
Engobe, from the French word for slip, is a related term for a liquid suspension of clays and flux, in addition to fillers and other materials. This is in contrast to slips, which are historically considered to be a liquid suspension of only clay or clays in water.
Engobes are commonly used in the ceramic industry, typically to mask the appearance of the underlying clay body. They can be sprayed onto pieces in a similar method to glaze and through the addition of coloring oxides they can achieve a wide variety of colors, though not with the same vibrancy as glazes. Among artists engobes are often confused with slip, and the term is sometimes used interchangeably.
Usage
Joining and molding
An additive with deflocculant properties, such as sodium silicate, can be added to disperse the raw material particles. This allows a higher solids content to be used, or allows a fluid to be produced with a minimal amount of water so that drying shrinkage is minimised, which is important during slipcasting. Usually the mixing of slip is undertaken in a blunger although it can be done using other types of mixers or even by hand.
To join sections of unfired ware or greenware, such as handles and spouts.
To fix into place pieces of relief decoration produced separately, for example by moulding. This technique is known as sprigging; an example is Jasperware.
When slip is used to join two pieces of greenware together, it is generally used with a technique known as scratch and slip, whereby the contact points on both pieces are scored with multiple criss-crossing lines and slip painted on one piece over the scores.
Decoration and protection
Slipware is pottery decorated by slip placed onto a wet or leather-hard clay body surface by dipping, painting or splashing. Some slips will also give decreased permeability, though not as much as a ceramic glaze would give. Often only pottery where the slip creates patterns or images will be described as slipware, as opposed to the many types where a plain slip is applied to the whole body, for example most fine wares in Ancient Roman pottery, such as African red slip ware (note: "slip ware" not "slipware"). Decorative slips may be a different colour than the underlying clay body or offer other decorative qualities such as a shiny surface.
Selectively applying layers of colored slips can create the effect of a painted ceramic, such as in the black-figure or red-figure pottery styles of Ancient Greek pottery. Slip decoration is an ancient technique in Chinese pottery also, used to cover whole vessels over 4,000 years ago. Principal techniques include slip-painting, where the slip is treated like paint and used to create a design with brushes or other implements, and slip-trailing, where the slip, usually rather thick, is dripped onto the body. Slip-trailed wares, especially if Early Modern English, are called slipware.
Chinese pottery also used techniques where patterns, images or calligraphy were created as part-dried slip was cut away to reveal a lower layer of slip or the main clay body in a contrasting colour. The latter of these is called the "cut-glaze" technique.
Slipware may be carved or burnished to change the surface appearance of the ware. Specialized slip recipes may be applied to biscuit ware and then refired.
Barbotine (another French word for slip) covers different techniques in English, but in the sense used of late 19th-century art pottery is a technique for painting wares in polychrome slips to make painting-like images on pottery.
Other uses in pottery
A slip may be made for various other purposes in the production and decoration of ceramics; for example, slip can be used to mix the constituents of a clay body.
See also
Ceramics
Ceramic glazes
Glossary of pottery terms
Porcelain
Pottery
Slipware
References
Vainker, S.J., Chinese Pottery and Porcelain, 1991, British Museum Press, 9780714114705
Ceramic materials
Pottery
Silicates
Types of pottery decoration | Slip (ceramics) | [
"Engineering"
] | 941 | [
"Ceramic engineering",
"Ceramic materials"
] |
9,654,898 | https://en.wikipedia.org/wiki/Log%20management | Log management is the process for generating, transmitting, storing, accessing, and disposing of log data. A log data (or logs) is composed of entries (records), and each entry contains information related to a specific event that occur within an organization's computing assets, including physical and virtual platforms, networks, services, and cloud environments.
The process of log management generally breaks down into:
Log collection - a process of capturing actual data from log files, application standard output stream (stdout), network socket and other sources.
Logs aggregation (centralization) - a process of putting all the log data together in a single place for the sake of further analysis or/and retention.
Log storage and retention - a process of handling large volumes of log data according to corporate or regulatory policies (compliance).
Log analysis - a process that helps operations and security teams handle system performance issues and security incidents.
Overview
The primary drivers for log management implementations are concerns about security, system and network operations (such as system or network administration) and regulatory compliance. Logs are generated by nearly every computing device, and can often be directed to different locations both on a local file system or remote system.
Effectively analyzing large volumes of diverse logs can pose many challenges, such as:
Volume: log data can reach hundreds of gigabytes of data per day for a large organization. Simply collecting, centralizing and storing data at this volume can be challenging.
Normalization: logs are produced in multiple formats. The process of normalization is designed to provide a common output for analysis from diverse sources; a short sketch of such a mapping follows this list.
Velocity: The speed at which logs are produced from devices can make collection and aggregation difficult.
Veracity: Log events may not be accurate. This is especially problematic for systems that perform detection, such as intrusion detection systems.
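The sketch below illustrates the normalization challenge in Python by mapping two invented log formats, an Apache-style access line and a syslog-style line, onto one common record layout; the field names and regular expressions are assumptions for the example only and are not part of any standard.

import re

PATTERNS = [
    # Apache-style access log: host, timestamp, request, status code
    ("access", re.compile(r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<msg>[^"]*)" (?P<status>\d{3})')),
    # syslog-style line: timestamp, host, process, free-text message
    ("syslog", re.compile(r'(?P<time>\w{3} +\d+ [\d:]+) (?P<host>\S+) (?P<proc>[\w/]+): (?P<msg>.*)')),
]

def normalize(line):
    # Return a common record {source, time, host, message}, or None if the
    # format is unknown and the line should be routed to a catch-all store.
    for source, pattern in PATTERNS:
        m = pattern.match(line)
        if m:
            d = m.groupdict()
            return {"source": source, "time": d["time"],
                    "host": d["host"], "message": d["msg"]}
    return None

print(normalize('198.51.100.7 - - [10/Oct/2023:13:55:36 +0000] "GET / HTTP/1.1" 200'))
print(normalize('Oct 10 13:55:36 web01 sshd: Accepted publickey for alice'))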
Users and potential users of log management may purchase complete commercial tools or build their own log-management and intelligence tools, assembling the functionality from various open-source components, or acquire (sub-)systems from commercial vendors. Log management is a complicated process and organizations often make mistakes while approaching it.
Logging can produce technical information usable for the maintenance of applications or websites. It can serve:
to define whether a reported bug is actually a bug
to help analyze, reproduce and solve bugs
to help test new features in a development stage
Terminology
Suggestions were made to change the definition of logging. This change would keep matters both purer and more easily maintainable:
Logging would then be defined as all instantly discardable data on the technical process of an application or website, as it represents and processes data and user input.
Auditing, then, would involve data that is not immediately discardable. In other words: data that is assembled in the auditing process, is stored persistently, is protected by authorization schemes and is, always, connected to some end-user functional requirement.
Deployment life-cycle
One view of assessing the maturity of an organization in terms of the deployment of log-management tools might use successive levels such as:
in the initial stages, organizations use different log-analyzers for analyzing the logs in the devices on the security perimeter. They aim to identify the patterns of attack on the perimeter infrastructure of the organization.
with the increased use of integrated computing, organizations mandate logs to identify the access and usage of confidential data within the security perimeter.
at the next level of maturity, the log analyzer can track and monitor the performance and availability of systems at the level of the enterprise — especially of those information assets whose availability organizations regard as vital.
organizations integrate the logs of various business applications into an enterprise log manager for a better value proposition.
organizations merge the physical-access monitoring and the logical-access monitoring into a single view.
See also
Audit trail
Common Base Event
Common Log Format
DARPA PRODIGAL and Anomaly Detection at Multiple Scales (ADAMS) projects.
Data logging
Log analysis
Log monitor
Log management knowledge base
Security information and event management
Server log
Syslog
Web counter
Web log analysis software
References
Chris MacKinnon: "LMI In The Enterprise". Processor November 18, 2005, Vol.27 Issue 46, page 33. Online at http://www.processor.com/editorial/article.asp?article=articles%2Fp2746%2F09p46%2F09p46.asp, retrieved 2007-09-10
MITRE: Common Event Expression (CEE) Proposed Log Standard. Online at http://cee.mitre.org, retrieved 2010-03-03
External links
InfoWorld review and comparison of commercial Log Management products
Network management
Computer systems | Log management | [
"Technology",
"Engineering"
] | 935 | [
"Computer engineering",
"Computer networks engineering",
"Computer systems",
"Computer logging",
"Computer science",
"Computers",
"Network management"
] |
9,655,514 | https://en.wikipedia.org/wiki/Barium%20chlorate | Barium chlorate, Ba(ClO3)2, is the barium salt of chloric acid. It is a white crystalline solid, and like all soluble barium compounds, irritant and toxic. It is sometimes used in pyrotechnics to produce a green colour. It also finds use in the production of chloric acid.
Reactions
Synthesis
Barium chlorate can be produced through a double replacement reaction between solutions of barium chloride and sodium chlorate:
BaCl2 + 2 NaClO3 → Ba(ClO3)2 + 2 NaCl
After concentrating and cooling the resulting mixture, barium chlorate precipitates. This is perhaps the most common preparation, exploiting the lower solubility of barium chlorate compared to sodium chlorate.
The above method does result in some sodium contamination, which is undesirable for pyrotechnic purposes, where the strong yellow colour of sodium can easily overpower the green of barium. Sodium-free barium chlorate can be produced directly through electrolysis:
BaCl2 + 6 H2O → Ba(ClO3)2 + 6 H2
It can also be produced by the reaction of barium carbonate with boiling ammonium chlorate solution:
BaCO3 + 2 NH4ClO3 → Ba(ClO3)2 + (NH4)2CO3
The reaction initially produces barium chlorate and ammonium carbonate; boiling the solution decomposes the ammonium carbonate and drives off the resulting ammonia and carbon dioxide, leaving only barium chlorate in solution.
Decomposition
When exposed to heat, barium chlorate alone will decompose to barium chloride and oxygen:
Ba(ClO3)2 → BaCl2 + 3 O2
Chloric acid
Barium chlorate is sometimes used to produce chloric acid.
Commercial uses
When barium chlorate is heated with a fuel, it burns to produce a vibrant green light, which is also a flame test for the presence of barium ions. Because it is an oxidizer, a chlorine donor, and contains a metal ion, this compound produces a distinctive green colour. However, due to the instability of all chlorates to sulfur, acids, and ammonium ions, chlorates have been banned from use in class C fireworks in the United States. Therefore, more and more firework producers have begun to use more stable compounds such as barium nitrate and barium carbonate.
Environmental Hazard
Barium chlorate, like all oxidizing agents, is dangerous to human health and is also classed as toxic to the environment. It is very harmful to aquatic organisms if it is leached into bodies of water. Chemical spills of this compound, although not common, can pollute entire ecosystems and should be prevented. It is necessary to dispose of this compound as hazardous waste. The Environmental Protection Agency (EPA) lists barium chlorate as hazardous.
References
Barium compounds
Inorganic compounds
Chlorates
Pyrotechnic oxidizers
Pyrotechnic colorants
Oxidizing agents | Barium chlorate | [
"Chemistry"
] | 567 | [
"Redox",
"Inorganic compounds",
"Chlorates",
"Oxidizing agents",
"Salts"
] |
9,655,587 | https://en.wikipedia.org/wiki/Society%20for%20Applied%20Spectroscopy | The Society for Applied Spectroscopy (SAS) is an organization promoting research and education in the fields of spectroscopy, optics, and analytical chemistry. Founded in 1958, it is currently headquartered in Albany, New York. In 2006 it had about 2,000 members worldwide.
SAS is perhaps best known for its technical conference with the Federation of Analytical Chemistry and Spectroscopy Societies and short courses on various aspects of spectroscopy and data analysis. The society publishes the scientific journal Applied Spectroscopy.
SAS is affiliated with American Institute of Physics (AIP), the Coblentz Society, the Council for Near Infrared Spectroscopy (CNIRS), Federation of Analytical Chemistry and Spectroscopy Societies (FACSS), The Instrumentation, Systems, and Automation Society (ISA), and Optica.
SAS provides a number of awards with honoraria to encourage and recognize outstanding achievements.
See also
Spectroscopy
American Institute of Physics (AIP)
The Instrumentation, Systems, and Automation Society (ISA)
Optical Society of America (OSA)
References
External links
Coblentz
Council for Near Infrared Spectroscopy (CNIRS)
Federation of Analytical Chemistry and Spectroscopy Societies (FACSS)
Scientific societies based in the United States
Spectroscopy
Analytical chemistry | Society for Applied Spectroscopy | [
"Physics",
"Chemistry"
] | 237 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"nan",
"Spectroscopy"
] |
9,655,788 | https://en.wikipedia.org/wiki/Transuranic%20waste | Transuranic waste (TRU) is stated by U.S. regulations, and independent of state or origin, to be waste which has been contaminated with alpha emitting transuranic radionuclides possessing half-lives greater than 20 years and in concentrations greater than 100 nCi/g (3.7 MBq/kg).
Elements having atomic numbers greater than that of uranium are called transuranic. The transuranic elements in TRU are typically man-made and include americium-241 and several isotopes of plutonium. Because of the elements' longer half-lives, TRU is disposed of more cautiously than low level waste and intermediate level waste. In the U.S. it is a byproduct of weapons production, nuclear research and power production, and consists of protective gear, tools, residue, debris and other items contaminated with small amounts of radioactive elements (mainly plutonium).
Under U.S. law, TRU is further categorized into "contact-handled" (CH) and "remote-handled" (RH) on the basis of the radiation field measured on the waste container's surface. CH TRU has a surface dose rate not greater than 2 mSv per hour (200 mrem/h), whereas RH TRU has rates of 2 mSv/h or higher. CH TRU has neither the high radioactivity of high level waste, nor its high heat generation. In contrast, RH TRU can be highly radioactive, with surface dose rates up to 10 Sv/h (1000 rem/h).
The United States currently permanently disposes of TRU generated from defense nuclear activities at the Waste Isolation Pilot Plant, a deep geologic repository.
Other countries do not include this category, favoring variations of High, Medium/Intermediate, and Low Level waste.
References
External links
Final Environmental Assessment for Actinide Chemistry and Repository Science Laboratory - Citing a DOE TRU Definition
US Department of Energy's page on the Waste Isolation Pilot Plant (WIPP)
Radioactive waste | Transuranic waste | [
"Physics",
"Chemistry",
"Technology"
] | 419 | [
"Nuclear chemistry stubs",
"Nuclear and atomic physics stubs",
"Environmental impact of nuclear power",
"Radioactivity",
"Nuclear physics",
"Hazardous waste",
"Radioactive waste"
] |
9,655,935 | https://en.wikipedia.org/wiki/XMPP%20Standards%20Foundation | XMPP Standards Foundation (XSF) is the foundation in charge of the standardization of the protocol extensions of XMPP, the open standard of instant messaging and presence of the IETF.
History
The XSF was originally called the Jabber Software Foundation (JSF). The Jabber Software Foundation was originally established to provide an independent, non-profit, legal entity to support the development community around Jabber technologies (and later XMPP). Originally its main focus was on developing JOSL, the Jabber Open Source License (since deprecated), and an open standards process for documenting the protocols used in the Jabber/XMPP developer community. Its founders included Michael Bauer and Peter Saint-Andre.
Process
Members of the XSF vote on acceptance of new members, a technical Council, and a Board of Directors. However, membership is not required to publish, view, or comment on the standards that it promulgates. The unit of work at the XSF is the XMPP Extension Protocol (XEP); XEP-0001 specifies the process for XEPs to be accepted by the community. Most of the work of the XSF takes place on the XMPP Extension Discussion List, the jdev and the xsf chat room.
Organization
Board of directors
The Board of Directors of the XMPP Standards Foundation oversees the business affairs of the organization. As elected by the XSF membership, the Board of Directors for 2020-2021 consists of the following individuals:
Ralph Meijer (XSF Chair)
Dave Cridland
Severino Ferrer de la Peñita
Arc Riley
Matthew Wild
Council
The XMPP Council is the technical steering group that approves XMPP Extension Protocols, as governed by the XSF Bylaws and XEP-0001. The Council is elected by the members of the XMPP Standards Foundation each year in September. The XMPP Council (2020–2021) consists of the following individuals:
Kim Alvefur
Dave Cridland
Daniel Gultsch
Georg Lukas
Jonas Schäfer
Members
There are currently 66 elected members of the XSF.
Emeritus Members
The following individuals are emeritus members of the XMPP Standards Foundation:
Ryan Eatmon
Peter Millard (deceased)
Jeremie Miller
Julian Missig
Thomas Muldowney
Dave Smith
XEPs
One of the most important outputs of the XSF is a series of "XEPs", or XMPP Extension Protocols, auxiliary protocols defining additional features. Some have chosen to pronounce "XEP" as if it were spelled "JEP", rather than "ZEP", in order to keep with a sense of tradition. Some XEPs of note include:
Data Forms
Service Discovery
Multi-User Chat
Publish-Subscribe
XHTML-IM
Entity Capabilities
Bidirectional-streams Over Synchronous HTTP (BOSH)
Jingle
Serverless Messaging
XMPP Summit
The XSF biannually holds a XMPP Summit where software and protocol developers from all around the world meet and share ideas and discuss topics around the XMPP protocol and the XEPs. In winter it takes place around the FOSDEM event in Brussels, Belgium and in summer it takes place around the RealtimeConf event in Portland, USA. These meetings are open to anyone and focus on discussing both technical and non-technical issues that the XSF members wish to discuss with no costs attached for the participants. However the XSF is open to donations. The first XMPP Summit took place on July 24 and 25, 2006, in Portland.
References
External links
Instant messaging
Standards organizations in the United States
Free and open-source software organizations
Organizations based in Denver
XMPP | XMPP Standards Foundation | [
"Technology"
] | 761 | [
"Instant messaging",
"XMPP"
] |
7,440,425 | https://en.wikipedia.org/wiki/Beauchamp%20Tower | Beauchamp Tower (13 January 1845 – 31 December 1904) was an English inventor and railway engineer who is chiefly known for his discovery of full-film or hydrodynamic lubrication.
Early life
Beauchamp Tower was born the son of Robert Beauchamp Tower, rector of Moreton, Essex and educated at Uppingham School, Rutland. He decided at the age of 16 that he wanted to become an engineer and received early training at the Armstrong Works at Elswick, where he stayed for a few months as a draughtsman after completing his four-year apprenticeship.
Inventions
Beauchamp Tower held several patents regarding an apparatus for maintaining a constant plane in a floating vessel. The apparatus is based on the gyroscopic principle.
One of the possible applications of this patent was steadying guns on shipboard. In 1977, he was named by Duncan Dowson as one of the 23 "Men of Tribology".
Influence
Tower's work on lubrication influenced many other engineers, including Osborne Reynolds, who acknowledged Tower in his 1886 paper on lubrication and the viscosity of olive oil. Lord Kelvin credited Tower with the idea of a chain and pulleys as part of his Tide-predicting machine.
References
1845 births
English railway mechanical engineers
English inventors
1904 deaths
People from Epping Forest District
Tribologists | Beauchamp Tower | [
"Materials_science"
] | 271 | [
"Tribology",
"Tribologists"
] |
7,440,465 | https://en.wikipedia.org/wiki/Agitated%20Nutsche%20Filter | Agitated Nutsche filter (ANF) is a filtration technique used in applications such as dye, paint, and pharmaceutical production and waste water treatment. Safety requirements and environmental concerns due to solvent evaporation led to the development of this type of filter wherein filtration under vacuum or pressure can be carried out in closed vessels and solids can be discharged straightaway into a dryer.
Filter features
A typical unit consists of a dished vessel with a perforated plate. The entire vessel can be kept at the desired temperature by using a limpet jacket, jacketed bottom dish and stirrer (blade and shaft) through which heat transfer media can flow. The vessel can be made completely leak-proof for vacuum or pressure service. It is used for multiple processes: solid-liquid separation, agitating/washing, resuspending/mixing, extraction, crystallizing, and drying can all be performed within a closed system.
Nutsche filter disc
The filter disc is the bottom porous plate of the nutsche filter. The filter disc retains the solids and lets the liquid/ gas passing through. It is the main filtration component of the nutsche filter.
Types of the filter disc:
Perforated support plate with filter mesh (metallic or non-metallic)
Welded multi-layer mesh
Sintered wire mesh
Agitator
A multipurpose agitator is the unique feature of this system. The agitator performs a number of operations through movement in axes both parallel and perpendicular to the shaft.
Important points
Slurry contents can be kept liquidized using heat and agitation until most of the liquid is filtered through.
When filtration is complete, the cake develops cracks causing upsets in the vacuum operation. This hinders removal of mother liquor. The agitator can be used to maintain a uniform cake.
The cake can be washed after filtration by re-slurrying the cake.
After washing, the mother liquor can be refiltered. The cake can then be discharged by lowering the agitator and rotating it in such a manner that it brings all the cake towards the discharge port.
Agitator filters are suitable for filtration of liquids with a high solid content. The liquid is separated mechanically using a permeable layer/ filter medium under vacuum or pressure.
A special height-adjustable agitator design improves the degree of filtration effectiveness and enables the mechanical discharge of the solid. An even filter cake forms on the horizontal base of the filter, which ensures the best possible recovery of the solid.
Power pack
A hydraulic power pack or hydraulic power unit is a unit attached to the ANF's agitator system, discharge valve and bottom removal (for cleaning). It consists of an oil tank on which a pump is provided for circulating high-pressure oil through a control valve system and to hydraulic cylinders. These cylinders provide vertical movement of the agitator, discharge the product, and sometimes detach the bottom to clean the filter before a product change. Operating pressure of the oil varies from 2 kg/cm² to 80 kg/cm² (approximately 200 kPa to 8 MPa).
Materials of construction
Agitated Nutsche filters can be fabricated in materials like Hastelloy C-276, C-22, stainless steel, mild steel, and mild steel with rubber lining as per service requirements. Recently, agitated Nutsche filters have been fabricated out of polypropylene fibre-reinforced plastic (PPFRP). Also, Nutsche filters made from Borosilicate glass 3.3 find use in applications where visibility of the process is important along with chemical inertness.
Advantages
Vacuum or pressure filtration possible.
Inert gas atmosphere can be maintained.
Minimal contamination of the cake.
Very high solvent recovery.
Considerable saving in manpower.
Solvents are in closed systems, so no toxic vapors are let off in the atmosphere.
Personal safety is maintained and heat transfer surfaces can be provided to maintain filtration temperature.
Commercial uses
The Agitated Nutsche Filter Dryer (ANFD) is specifically engineered to meet the stringent demands of the pharmaceutical and fine chemical industries for efficient solids washing, separation, and drying under challenging conditions. This versatile filter-dryer system allows for both filtration and drying processes to be completed within the same vessel, significantly improving process efficiency.
ANFD systems are particularly suited for liquids with a high solid content, where the liquid phase is mechanically separated through a permeable filter medium under vacuum or pressure. The height-adjustable agitator optimizes filtration, enabling uniform filter cake formation on the horizontal base of the filter, ensuring superior solid recovery. The system also supports the mechanical discharge of solids, making it highly efficient for production.
References
Filters | Agitated Nutsche Filter | [
"Chemistry",
"Engineering"
] | 959 | [
"Chemical equipment",
"Filtration",
"Filters"
] |
7,441,268 | https://en.wikipedia.org/wiki/NTFS%20links | NTFS links are the abstraction used in the NTFS file system—the default file system for all Microsoft Windows versions belonging to the Windows NT family—to associate pathnames and certain kinds of metadata, with entries in the NTFS Master File Table (MFT). NTFS broadly adopts a pattern akin to typical Unix file systems in the way it stores and references file data and metadata; the most significant difference is that in NTFS, the MFT "takes the place of" inodes, fulfilling most of the functions which inodes fulfill in a typical Unix filesystem.
In NTFS, an entity in the filesystem fundamentally exists as: a record stored in the MFT of an NTFS volume, the MFT being the core database of the NTFS filesystem; and, any attributes and NTFS streams associated with said record. A link in NTFS is itself a record, stored in the MFT, which "points" to another MFT record: the target of the link. Links are the file "entries" in the volume's hierarchical file tree: every NTFS pathname that names a file or directory is a link. If the volume containing such pathnames were mapped to a drive letter in a Windows system, the links could be referenced by full paths beginning with that drive letter. (Compare and contrast with typical Unix file systems, where a link is an entry in a directory, directories themselves being just a type of file stored in the filesystem, pointing either to another link or to an inode.)
Types of links
NTFS has four types of links. These map relatively closely to the generic hard link and soft link concepts which modern file systems tend to follow.
Hard links
Hard links are typical in behavior. A hard link "points" to an MFT record. That target record will be the record for a "regular" file, such as a text file or executable (assuming the NTFS volume is in a normal "healthy" state). Compare with a typical Unix file system, where a hard link points to an inode. As in such file systems, an NTFS hard link cannot point to a directory.
A typical new file creation event on an NTFS volume, then, simply involves NTFS allocating and creating one new MFT record for storing the new file entity's metadata, including information about any data clusters assigned to the file and the file's data streams; one MFT record for a hard link which points to the first newly-created MFT record as its target; storing a reference to the hard link in a directory file; and setting the reference count of both these MFT records to 1. Any file name provided as part of the file creation event is stored in the hard link. An MFT record can be the target of up to 1024 hard links; each time a new hard link is successfully created, targeting a previously extant MFT record, the target's reference count is incremented.
Symmetrically, the immediate tasks performed by NTFS in a typical file deletion event, when deleting a hard link, are simply: removing the reference to the link from the directory file containing it (the root directory, if applicable); and decrementing by 1 the reference counts of the MFT record targeted by the link and of the entry containing the hard link itself. Any MFT record which now has a refcount of 0 is in the "deleted" state: all its associated resources are considered "free" by NTFS, to be freely overwritten and used as needed.
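This reference counting can be observed from ordinary user code. The Python sketch below, assuming it runs on an NTFS volume and has write access to the current directory, creates a file plus a second hard link to it and reads back the link count reported by the file system; os.link and os.stat are standard-library wrappers over the underlying Windows calls.

import os

# Create a regular file: one MFT record for the data plus one hard link,
# so the reported link count starts at 1.
with open("original.txt", "w") as f:
    f.write("hello")
print(os.stat("original.txt").st_nlink)   # 1

# Add a second hard link pointing at the same MFT record.
os.link("original.txt", "alias.txt")
print(os.stat("original.txt").st_nlink)   # 2
print(os.stat("alias.txt").st_nlink)      # 2 (same underlying record)

# Deleting one link just decrements the count; the data survives
# until the last link is removed and the record's refcount reaches 0.
os.remove("original.txt")
print(os.stat("alias.txt").st_nlink)      # 1
print(open("alias.txt").read())           # "hello"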
Junction points
Junction points are NTFS reparse points and operate similarly to symbolic links in Unix or Linux, but are only defined for directories, and may only be absolute paths on local filesystems (as opposed to remote filesystems being accessed). They are created and behave in a similar way to hard links, except that if the target directory is renamed, moved, or deleted, the link will no longer be valid.
Symbolic links
Symbolic links are reparse points which operate similarly to Junction Points, or symbolic links in Unix or Linux, and accept relative paths and paths to files as well as directories. Support for directory and UNC paths was added in NTFS 3.1.
NTFS volume mount points
All NTFS links are intended to be transparent to applications. This means that the application accessing a link will be seamlessly redirected by the file system driver, and no special handling is needed. To users, they appear as normal directories or files. This also leads to an aliasing effect: writes to a link will pass the write to the underlying, linked file or MFT entry.
Symbolic links and junction points contain the path to the linked file, and a tag identifying the driver which implements the behaviour. Because they record the path, they can link to files on other volumes or even remote files. However this also means that if the referenced file is deleted or renamed, the link becomes invalid, and if the referenced file or directory is replaced with another, the link will now refer to the new file or directory.
Shortcut files
An NTFS symbolic link is not the same as a Windows shortcut file, which is a regular file. The latter may be created on any filesystem (such as the earlier FAT32), may contain metadata (such as an icon to display when the shortcut is viewed in Windows Explorer), and is not transparent to applications.
Implementations of unix-like environments for Windows such as Cygwin and Mingw can use shortcut files to emulate symbolic links where the host operating system does not support them, if configured to do so.
Examples of use
Built-in uses
The Windows Component Store (WinSxS) uses hard links to keep track of different versions of DLLs stored on the hard disk drive.
Basic installations of Windows Server 2008 used symlinks for \Users\All Users\ → \ProgramData\ redirection.
Since Windows Vista, all versions of Windows have used a specific scheme of built-in directories and utilize hidden junctions to maintain backward compatibility with Windows XP and older. Examples of these junctions are:
C:\Documents and Settings pointing to C:\Users
%USERPROFILE%\Application Data pointing to %USERPROFILE%\AppData\Roaming
%USERPROFILE%\My Documents\My Pictures pointing to %USERPROFILE%\Pictures
Program redirection
By setting a junction point that points to a directory containing a particular version of a piece of software, it may be possible to add another version of the software and redirect the junction point to point to the version desired.
Saving storage space
The contents of a junction use almost no storage space (they simply point to the original directory). If an administrator needs to have multiple points of entry to a large directory, junction points can be an effective solution. Junction points should not be confused with a copy of something as junctions simply point to the original. If directories need to be modified separately a junction cannot be used as it does not provide a distinct copy of the directory or files within.
Likewise, symbolic links and hard links are useful for merging the contents of individual files.
Circumventing predefined paths
Since reinstalling Windows (or installing a new version) often requires deleting the contents of the C: drive, it is advantageous to create multiple partitions so only one partition needs to be deleted during the installation. However, some programs don't let the user choose the installation directory, or install some of their files to the C: drive even when they are installed to a different drive. By creating a junction point, the program can be tricked into installing to a different directory.
Command-line tools
Windows comes with several tools capable of creating and manipulating NTFS links.
PowerShell: The New-Item cmdlet of Windows PowerShell that can create empty files, folders, junctions, and hard links. In PowerShell 5.0 and later, it can create symbolic links as well. The Get-Item and Get-ChildItem cmdlets can be used to interrogate file system objects, and if they are NTFS links, find information about them. The Remove-Item cmdlet can remove said items, although there has been a record of a bug preventing this cmdlet from working properly.
Windows Command Prompt: Starting with Windows Vista and Windows Server 2008, the mklink internal command can create junctions, hard links, and symbolic links. This command is also available in ReactOS. In addition, the venerable dir command can display and filter junction points via the /aL switch. Finally, the rd command (also known as rmdir) can delete junction points.
fsutil.exe: A command-line utility introduced with Windows 2000. Its hardlink sub-command can make hard links or list hard links associated with a file. Another sub-command, reparsepoint, can query or delete reparse points, the file system objects that make up junction points, hard links, and symbolic links.
In addition, the following utilities can create NTFS links, even though they don't come with Windows.
A utility included in the Resource Kit for Windows 2000 and Windows Server 2003 can also make junction points.
junction: A free command-line utility from Microsoft, it can create or delete junctions.
PowerShell Community Extensions (PSCX): Hosted on Microsoft PowerShell Gallery, this module adds several cmdlets for dealing with NTFS links, including: New-Hardlink, New-Junction, Get-ReparsePoint, Remove-ReparsePoint, and New-Symlink.
APIs
To create hard links, apps may use the CreateHardLink() function of the Windows API. All versions of the Windows NT family can use GetFileInformationByHandle() to determine the number of hard links associated with a file. There can be up to 1024 links associated with an MFT entry. Similarly, the CreateSymbolicLink() function can create symbolic links. Junctions are more complex to create: they require the reparse point information to be filled in manually. A code example is found in libuv. Junctions are defined for directories only: although the API does not fail when one creates a junction pointing to a file, the junction will not be interpreted successfully when used later.
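A minimal sketch of calling these two functions from Python through ctypes follows; it assumes an NTFS volume, that the example files already exist, and, for the symbolic link, either an elevated process or Developer Mode (the 0x2 flag requests unprivileged creation where the system permits it). The file and directory names are placeholders.

import ctypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

# BOOL CreateHardLinkW(LPCWSTR lpFileName, LPCWSTR lpExistingFileName,
#                      LPSECURITY_ATTRIBUTES lpSecurityAttributes)
ok = kernel32.CreateHardLinkW("alias.txt", "original.txt", None)
if not ok:
    raise ctypes.WinError(ctypes.get_last_error())

# BOOLEAN CreateSymbolicLinkW(LPCWSTR lpSymlinkFileName, LPCWSTR lpTargetFileName,
#                             DWORD dwFlags)
SYMBOLIC_LINK_FLAG_DIRECTORY = 0x1
SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE = 0x2
ok = kernel32.CreateSymbolicLinkW("docs-link", "C:\\Users\\Public\\Documents",
                                  SYMBOLIC_LINK_FLAG_DIRECTORY |
                                  SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE)
if not ok:
    raise ctypes.WinError(ctypes.get_last_error())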
Junctions and symbolic links, even those pointing to directories, can be removed with pNtSetInformationFile. Libuv's implementation of unlink on Windows demonstrates this use. Alternatively, the .NET System.IO.Directory.Delete() method works on them as well.
Hazards
Consistency
Symbolic links and NTFS junctions can point to non-existent targets because the operating system does not continuously ensure that the target exists.
Additional hazards lurk in the use of NTFS directory junctions that:
include links that refer to their own parent folders, such as creating hard link X:\path\to\parent which points to either X:\path\ or X:\path\to\, or
specify targets by using volume drive letters, such as X:, in X:\some\path\.
Recursive structure
The problem in the first case is that it creates recursive paths, which further implies infinite recursion in the directory structure. By introducing reentrancy, the presence of one or more directory junctions changes the structure of the file system from a simple proper tree into a directed graph, but recursive linking further complicates the graph-theoretical character from acyclic to cyclic. Since the same files and directories can now be encountered through multiple paths, applications which traverse reentrant or recursive structures naively may give incorrect or incoherent results, or may never terminate. Worse, if recursively deleting, such programs may attempt to delete a parent of the directory it is currently traversing.
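Programs that traverse NTFS trees therefore often guard against such cycles explicitly. The Python sketch below is one possible defence: it keys each visited directory on its (st_dev, st_ino) pair, which Python derives from the volume serial number and file ID on Windows, so a directory reached a second time through a junction is skipped instead of being recursed into forever.

import os

def walk_safely(root, visited=None):
    # Track directories by (volume serial number, file ID) so a directory
    # reached again through a junction or other link is recognised and skipped.
    if visited is None:
        visited = set()
    try:
        st = os.stat(root)
    except OSError:
        return
    key = (st.st_dev, st.st_ino)
    if key in visited:
        return              # cycle or alias: already traversed this directory
    visited.add(key)
    yield root
    try:
        entries = list(os.scandir(root))
    except OSError:
        return              # e.g. access denied
    for entry in entries:
        if entry.is_dir():
            yield from walk_safely(entry.path, visited)

# Terminates even though C:\ProgramData contains the recursive
# "Application Data" junction listed below.
for path in walk_safely(r"C:\ProgramData"):
    print(path)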
Note that both of the conditions listed above exist in the system of directory junctions established on the C: drive in a default Windows setup. For example, every Windows 10 installation defines the recursive set of paths:
C:\ProgramData\
C:\ProgramData\Application Data\
C:\ProgramData\Application Data\Application Data\
C:\ProgramData\Application Data\Application Data\Application Data\
C:\ProgramData\Application Data\Application Data\Application Data\Application Data\
C:\ProgramData\Application Data\Application Data\Application Data\Application Data\Application Data\ ...
Each additional path name in this seemingly infinite set is an actual valid Windows path which refers to the same location. In practice, path names are limited by the 260-character DOS path limit (or the newer 32,767-character limit), but truncation may result in incomplete or invalid path and file names. Whenever a copy of a Windows installation is archived, with directory junctions intact, to another volume on the same computer or, worse, on another computer, the archived copy may still incorporate active folders from the running installation. For example, depending on the method used for copying, a backup copy of a Windows drive X:\archive\... will include a junction called X:\archive\Users\USERNAME\My Documents which still points to folder C:\Users\USERNAME\Documents\ in the current, active installation.
Cross-volume traversal
The second form of deferred target mis-referral, while conceptually simpler, can have more severe consequences. When a self-consistent volume or directory structure containing junctions which use volume drive-letter path names is copied or moved to another volume (or when the drive letter of a volume is reassigned by some other means), such links may no longer point to the corresponding target in the copied structure. Again the results depend on the software that was used for copying; while some programs may intercede by modifying any fully subsumed junctions in the copy in order to preserve structural consistency, others may ignore, copy exactly, or even traverse into the junctions, copying their contents.
The serious problems occur if junctions are copied exactly such that they become, in the new copy, cross-volume junctions which still point to original files and folders on the source volume. Unintentional cross-volume junctions, such as junctions in an "archive" folder which still point to locations on the original volume (according to drive letter), are catastrophes waiting to happen. For example, deleting what is much later presumed to be an unused archive directory on a disused backup volume may result in deleting current, active user data or system files.
A preventive measure for the drive-letter hazard is to use volume GUID path syntax, rather than paths containing volume drive letters, when specifying the target path for a directory junction. For example, consider creating an alias for X:\Some\Other\Path at X:\Some\Path\Foo.
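As an illustration (the drive letter X: and the paths are hypothetical), such a junction could first be created with the drive-letter form of the target, either with mklink from cmd.exe or with New-Item:
# Drive-letter form of the target: exposed to the drive-letter hazard described below.
cmd /c mklink /J "X:\Some\Path\Foo" "X:\Some\Other\Path"
# Equivalent with PowerShell 5.0 and later:
New-Item -ItemType Junction -Path "X:\Some\Path\Foo" -Target "X:\Some\Other\Path"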
As described above, if the folder structure that contains the resulting link is moved to a disk with a drive letter other than X:, or if the letter is changed on drive X: itself, the data content at the target location is vulnerable to accidental corruption or malicious abuse. A more resilient version of this link can partially mitigate this risk by referencing the target volume by its GUID identifier value (which can be discovered by running the fsutil volume list command).
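A hedged sketch of the more resilient form follows; the volume GUID shown is a placeholder, and the real value for the volume currently mounted as X: would be taken from the output of fsutil volume list:
# List volumes to find the GUID of the volume currently mounted as X:.
fsutil volume list
# Recreate the junction against the volume GUID path instead of the drive letter.
cmd /c mklink /J "X:\Some\Path\Foo" "\\?\Volume{00000000-0000-0000-0000-000000000000}\Some\Other\Path"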
Doing so ensures that the junction will remain valid if drive letter X: is changed by any means.
As a proactive means of avoiding directory junction disasters, the command dir /AL /S /B "X:\Some\Path" can be used to obtain, for careful analysis prior to committing any irreversible file system alterations, a list of all junctions and symbolic links "below" a certain file system location. While by definition every link in the resulting list has a path name that starts with X:\Some\Path\, if any of those links has a target which is not subsumed by X:\Some\Path, then the specified scope has been escaped and the specified starting directory is not fully subsuming. Extra caution is warranted in this case, since the specified directory includes files and directories which reside on other physical volumes, or whose own parent traversal to the root does not include the specified directory.
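A rough PowerShell equivalent of that survey, assuming the same hypothetical path (the Target property name varies by PowerShell version; newer versions call it LinkTarget), is:
# List every reparse point (junction or symbolic link) under X:\Some\Path together with its target,
# so that targets falling outside the specified scope can be spotted before any deletion.
Get-ChildItem -Path 'X:\Some\Path' -Recurse -Force |
    Where-Object { $_.Attributes -band [IO.FileAttributes]::ReparsePoint } |
    Select-Object FullName, LinkType, Target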
Limitations
Privilege requirements
The default security settings in Windows allow only elevated administrators to create symbolic links; junctions are not affected. This behavior can be changed by running "secpol.msc", the Local Security Policy management console (under: Security Settings\Local Policies\User Rights Assignment\Create symbolic links). It can also be worked around by starting cmd.exe with the Run as administrator option or via the runas command. Starting with Windows 10 Insiders build 14972, the requirement for elevated administrator privileges was removed in Windows "Developer Mode", allowing symbolic links to be created without needing to elevate the console as administrator. At the API level, the SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE flag is supplied for this purpose.
Boot time
The Windows startup process does not support junction points, so it is impossible to redirect certain system folders:
\Windows
\Windows\System32
\Windows\System32\Config
Other critical system boot files, such as the hibernation image file hiberfil.sys, also do not support redirection.
System-defined locations
It is technically possible to redirect the following non-critical system folder locations:
\Users
\Documents and Settings
\ProgramData
\Program Files
\Program Files (x86)
Doing this may lead to long-term Windows reliability or compatibility issues. Creating junctions for \Users and \ProgramData pointing to another drive is not recommended as it breaks updates and Windows Store Apps.
Creating junctions for \Users, \ProgramData, \Program Files or \Program Files (x86) pointing to other locations breaks installation or upgrade of Windows.
Creating junctions for \Program Files or \Program Files (x86) pointing to another drive breaks Windows' Component-Based Servicing, which hard-links files from its repository \Windows\WinSxS to their installation directory.
Windows installer
Windows Installer does not fully support symbolic links. Redirecting \Windows\Installer will cause most .msi-based Windows installers to fail with error 2755 and/or error 1632.
Symbolic link support in Windows XP
Windows XP uses the same NTFS format version as later releases, so symbolic link support can be added to it. A third-party driver exists that enables NTFS symbolic links under Windows 2000 and XP by installing itself as a file system filter.
History
Symbolic links to directories or volumes, called junction points and mount points, were introduced with NTFS 3.0, which shipped with Windows 2000. From NTFS 3.1 onwards, symbolic links can be created for any kind of file system object. NTFS 3.1 was introduced together with Windows XP, but the functionality was not made available (through ntfs.sys) to user-mode applications. Third-party filter drivers, such as Masatoshi Kimura's open-source senable driver, could however be installed to make the feature available in user mode as well. The ntfs.sys released with Windows Vista made the functionality available to user-mode applications by default.
Since NTFS 3.1, a symbolic link can also point to a file or remote SMB network path. While NTFS junction points support only absolute paths on local drives, the NTFS symbolic links allow linking using relative paths. Additionally, the NTFS symbolic link implementation provides full support for cross-filesystem links. However, the functionality enabling cross-host symbolic links requires that the remote system also support them, which effectively limits their support to Windows Vista and later Windows operating systems.
See also
NTFS volume mount point
NTFS reparse point
Symbolic link
File shortcut
References
External links
Documentation for NTFS symbolic links on MSDN
CreateSymbolicLink function in the Win32 API
fsutil hardlink create - creates a hard link (Windows 2000 and later)
Microsoft Knowledge Base Article – 'How to Create and Manipulate NTFS Junction Points' (archived version)
Junction command line utility from Microsoft TechNet
Codeproject Article – discussion on the source code of a junction point utility, aimed at programmers
PC Mag Article about adding any directory to the start menu (allowing a preview within the startmenu as a submenu).
Disk file systems
Windows disk file systems
Windows administration
pl:Dowiązanie symboliczne | NTFS links | [
"Technology"
] | 4,247 | [
"Windows commands",
"Computing commands"
] |
7,441,383 | https://en.wikipedia.org/wiki/Eschenmoser%20fragmentation | The Eschenmoser fragmentation, first published in 1967, is the chemical reaction of α,β-epoxyketones (1) with aryl sulfonylhydrazines (2) to give alkynes (3) and carbonyl compounds (4). The reaction is named after the Swiss chemist Albert Eschenmoser, who devised it in collaboration with an industrial research group of Günther Ohloff, and applied it to the production of muscone and related macrocyclic musks. The reaction is also sometimes known as the Eschenmoser–Ohloff fragmentation or the Eschenmoser–Tanabe fragmentation as Masato Tanabe independently published an article on the reaction the same year. The general formula of the fragmentation using p-toluenesulfonylhydrazide is:
Several examples exist in the literature, and the reaction is also carried out on industrial scale.
Reaction mechanism
The mechanism of the Eschenmoser fragmentation begins with the condensation of an α,β-epoxyketone (1) with an aryl sulfonylhydrazine (2) to afford the intermediate hydrazone (3). This hydrazone can either be protonated at the epoxide oxygen or deprotonated at the sulfonamide nitrogen to initiate the fragmentation, and thus the fragmentation is catalyzed by acids or bases. The most common reaction conditions, however, involve treatment with acetic acid in dichloromethane. The proton transfer leads to intermediate (4), which undergoes the key fragmentation to alkyne (6) and the corresponding carbonyl compound (7). The driving force for the reaction is the formation of highly stable molecular nitrogen.
There is a radical variant of this α,β-enone to alkynone fragmentation in which no epoxide is required. 1,3-Dibromo-5,5-dimethylhydantoin (DBDMH) in sec-butanol with the appropriate p-tolylhydrazone has been used to prepare exaltone (cyclopentadecanone) and muscone (the 3-methyl structural analog). The α,β-unsaturated hydrazone is brominated by DBDMH in the allylic position (relative to the sulfonamide nitrogen), leading to a captodatively stabilized radical, and the bromide ion becomes the leaving group in the subsequent nucleophilic attack by an alcoholate ion. This Fehr–Ohloff–Büchi variant of the Eschenmoser–Ohloff fragmentation, in which an epoxidation step is avoided, is suited to sterically demanding substrates where low yields typically result from the classical Eschenmoser fragmentation.
A closely related fragmentation has been reported, employing diazirine derivatives of cyclic α,β-epoxyketones.
See also
Grob fragmentation
Wharton reaction
Shapiro reaction
References
Elimination reactions
Name reactions | Eschenmoser fragmentation | [
"Chemistry"
] | 618 | [
"Name reactions"
] |
7,441,542 | https://en.wikipedia.org/wiki/Pentamer | A pentamer is an entity composed of five subunits.
In chemistry, it applies to molecules made of five monomers.
In biochemistry, it applies to macromolecules, particularly pentameric proteins, made of five protein sub-units.
In microbiology, a pentamer is one of the proteins that compose the polyhedral protein shell that encloses the bacterial micro-compartments known as carboxysomes.
In immunology, an MHC pentamer is a reagent used to detect antigen-specific CD8+ T cells.
See also
penta prefix
-mer suffix
Pentamerous Metamorphosis, an album by Global Communication
Pentamery (botany), having five parts in a distinct whorl of a plant structure
Pentamerous can also refer to animals, such as crinoids
Oligomers | Pentamer | [
"Chemistry",
"Materials_science"
] | 178 | [
"Polymer stubs",
"Organic compounds",
"Polymer chemistry",
"Oligomers",
"Organic chemistry stubs"
] |
7,441,553 | https://en.wikipedia.org/wiki/Moorish%20architecture | Moorish architecture is a style within Islamic architecture which developed in the western Islamic world, including al-Andalus (on the Iberian peninsula) and what is now Morocco, Algeria, and Tunisia (part of the Maghreb). Scholarly references on Islamic architecture often refer to this architectural tradition in terms such as architecture of the Islamic West or architecture of the Western Islamic lands. The use of the term "Moorish" comes from the historical Western European designation of the Muslim inhabitants of these regions as "Moors". Some references on Islamic art and architecture consider this term to be outdated or contested.
This architectural tradition integrated influences from pre-Islamic Roman, Byzantine, and Visigothic architectures, from ongoing artistic currents in the Islamic Middle East, and from North African Berber traditions. Major centers of artistic development included the main capitals of the empires and Muslim states in the region's history, such as Córdoba, Kairouan, Fes, Marrakesh, Seville, Granada and Tlemcen. While Kairouan and Córdoba were some of the most important centers during the 8th to 10th centuries, a wider regional style was later synthesized and shared across the Maghreb and al-Andalus thanks to the empires of the Almoravids and the Almohads, which unified both regions for much of the 11th to 13th centuries. Within this wider region, a certain difference remained between architectural styles in the more easterly region of Ifriqiya (roughly present-day Tunisia) and a more specific style in the western Maghreb (present-day Morocco and western Algeria) and al-Andalus, sometimes referred to as Hispano-Moresque or Hispano-Maghrebi.
This architectural style came to encompass distinctive features such as the horseshoe arch, riad gardens (courtyard gardens with a symmetrical four-part division), square (cuboid) minarets, and elaborate geometric and arabesque motifs in wood, stucco, and tilework (notably zellij). Over time, it made increasing use of surface decoration while also retaining a tradition of focusing attention on the interior of buildings rather than their exterior. Unlike Islamic architecture further east, western Islamic architecture did not make prominent use of large vaults and domes.
Even as Muslim rule ended on the Iberian Peninsula, the traditions of Moorish architecture continued in North Africa as well as in the Mudéjar style in Spain, which adapted Moorish techniques and designs for Christian patrons. In Algeria and Tunisia local styles were subjected to Ottoman influence and other changes from the 16th century onward, while in Morocco the earlier Hispano-Maghrebi style was largely perpetuated up to modern times with fewer external influences. In the 19th century and after, the Moorish style was frequently imitated in the form of Neo-Moorish or Moorish Revival architecture in Europe and America, including Neo-Mudéjar in Spain. Some scholarly references associate the term "Moorish" or "Moorish style" more narrowly with this 19th-century trend in Western architecture.
Historical development
Earliest Islamic monuments (8th–9th centuries)
In the 7th century the region of North Africa became steadily integrated into the emerging Muslim world during the Early Arab-Muslim Conquests. The territory of Ifriqiya (roughly present-day Tunisia), and its newly-founded capital city of Kairouan (also transliterated as "Qayrawan") became an early center of Islamic culture for the region. According to tradition, the Great Mosque of Kairouan was founded here by Uqba ibn Nafi in 670, although the current structure dates from later.
Al-Andalus
In 711 most of the Iberian Peninsula, part of the Visigothic Kingdom at the time, was conquered by a Muslim (largely Berber) army led by Tariq ibn Ziyad and became known as Al-Andalus. The city of Cordoba became its capital. In 756 Abd ar-Rahman I established the independent Emirate of Cordoba here and in 785 he also founded the Great Mosque of Cordoba, one of the most important architectural monuments of the western Islamic world. The mosque was notable for its vast hypostyle hall composed of rows of columns connected by double tiers of arches (including horseshoe arches on the lower tier) made of alternating red brick and light-colored stone. The mosque was subsequently expanded by Abd ar-Rahman II in 836, who preserved the original design while extending its dimensions. The mosque was again embellished with new features by his successors Muhammad, Al-Mundhir, and Abdallah. One of the western gates of the mosque, known as Bab al-Wuzara (today known as Puerta de San Esteban), dates from this period and is often noted as an important prototype of later Moorish architectural forms and motifs: the horseshoe arch has voussoirs that alternate in colour and decoration and the arch is set inside a decorative rectangular frame (alfiz). The influence of ancient Classical architecture is strongly felt in the Islamic architecture of the peninsula during this early Emirate period. The most obvious example of this was the reuse of columns and capitals from earlier periods in the initial construction of the Great Mosque of Cordoba. When new, richly-carved capitals were produced for the mosque's 9th-century expansion, they emulated the form of classical Corinthian capitals.
In Seville, the Mosque of Ibn Adabbas was founded in 829 and was considered the second-oldest Muslim building in Spain (after the Great Mosque of Cordoba) until it was demolished in 1671. This mosque had a hypostyle form consisting of eleven aisles divided by rows of brick arches supported on marble columns. Of the brief Muslim presence in southern France during the 8th century, only a few funerary stelae have been found. In 1952 French archaeologist Jean Lacam excavated the Cour de la Madeleine ('Courtyard of Madeline') in Narbonne, where he discovered remains which he interpreted as those of a mosque from the 8th-century Muslim occupation of the city (Islam Outside the Arab World, David Westerlund, Ingvar Svanberg, Palgrave Macmillan, 1999, p. 342).
Ifriqiya
In Ifriqiya, the Ribat of Sousse and the Ribat of Monastir are two military structures dated to the late 8th century, making them the oldest surviving Islamic-era monuments in Tunisia – although subjected to later modifications. The Ribat of Sousse contains a small vaulted room with a mihrab (niche symbolizing the direction of prayer) which is the oldest preserved mosque or prayer hall in North Africa. Another small room in the fortress, located above the front gate, is covered by a dome supported on squinches, which is the oldest example of this construction technique in Islamic North Africa. The tall cylindrical tower inside the ribat, most likely intended as a lighthouse, has a marble plaque over its entrance inscribed with the name of Ziyadat Allah I and the date 821, which in turn is the oldest Islamic-era monumental inscription to survive in Tunisia.
In the 9th century Ifriqiya was controlled by the Aghlabid dynasty, who ruled nominally on behalf of the Abbasid Caliphs in Baghdad but were de facto autonomous. The Aghlabids were major builders and erected many of Tunisia's oldest Islamic religious buildings and practical infrastructure works like the Aghlabid Reservoirs of Kairouan. Much of their architecture, even their mosques, had a heavy and almost fortress-like appearance, but they nonetheless left an influential artistic legacy.
One of the most important Aghlabid monuments is the Great Mosque of Kairouan, which was completely rebuilt in 836 by the emir Ziyadat Allah I (r. 817–838), although various additions and repairs were effected later which complicate the chronology of its construction. Its design was a major reference point in the architectural history of mosques in the Maghreb. The mosque features an enormous rectangular courtyard, a large hypostyle prayer hall, and a thick three-story minaret (tower from which the call to prayer is issued). The prayer hall's layout reflects an early use of the so-called "T-plan", in which the central nave of the hypostyle hall (the one leading to the mihrab) and the transverse aisle running along the qibla wall are wider than the other aisles and intersect in front of the mihrab. The mihrab of the prayer hall is among the oldest examples of its kind, richly decorated with marble panels carved in high-relief vegetal motifs and with ceramic tiles with overglaze and luster. Next to the mihrab is the oldest surviving minbar (pulpit) in the world, made of richly-carved teakwood panels. Both the carved panels of the minbar and the ceramic tiles of the mihrab are believed to be imports from Abbasid Iraq. An elegant dome in front of the mihrab with an elaborately-decorated drum is one of architectural highlights of this period. Its light construction contrasts with the bulky structure of the surrounding mosque and the dome's drum is elaborately decorated with a frieze of blind arches, squinches carved in the shape of shells, and various motifs carved in low-relief. The mosque's minaret is the oldest surviving one in North Africa and the western Islamic world. Its form was modeled on older Roman lighthouses in North Africa, quite possibly the lighthouse at Salakta (Sullecthum) in particular.
The Great Mosque of al-Zaytuna in Tunis, which was founded earlier around 698, owes its overall current form to a reconstruction during the reign of the Aghlabid emir Abu Ibrahim Ahmad (r. 856–863). Its layout is very similar to the Great Mosque of Kairouan. Two other congregational mosques in Tunisia, the Great Mosque of Sfax (circa 849) and the Great Mosque of Sousse (851), were also built by the Aghlabids but have different forms. The small Mosque of Ibn Khayrun in Kairouan (also known as the "Mosque of the Three Doors"), dated to 866 and commissioned by a private patron, possesses what is considered by some to be the oldest decorated external façade in Islamic architecture, featuring carved Kufic inscriptions and vegetal motifs. Apart from its limestone façade, most of the mosque was rebuilt at a later period. Another small local mosque from this period is the Mosque of Bu Fatata in Sousse, dated to the reign of Abu Iqal al-Aghlab ibn Ibrahim (r. 838–841), which has a hypostyle prayer hall fronted by an external portico of three arches. Both the Ibn Khayrun and Bu Fatata mosques are early examples of the "nine-bay" mosque, meaning that the interior has a square plan subdivided into nine smaller square spaces, usually vaulted, arranged in three rows of three. This type of layout is found later in al-Andalus and as far as Central Asia, suggesting that it may be a design that was disseminated widely by Muslim pilgrims returning from Mecca.
Western and central Maghreb
Further west, the Rustamid dynasty, who were Ibadi Kharijites and did not recognize the Abbasid Caliphs, held sway over much of the central Maghreb. Their capital, Tahart (near present-day Tiaret), was founded in the second half of the 8th century by Abd al-Rahman ibn Rustam and was occupied seasonally by its semi-nomadic inhabitants. It was destroyed by the Fatimids in 909 but its remains were excavated in the 20th century. The city was surrounded by a fortified wall interspersed with square towers. It contained a hypostyle mosque, a fortified citadel on higher ground, and a palace structure with a large courtyard similar to the design of traditional houses.
The Islamization of present-day Morocco, the westernmost territory of the Muslim world (known as the Maghreb al-Aqsa), became more definitive with the advent of the Idrisid dynasty at the end of the 8th century. The Idrisids founded the city of Fes, which became their capital and the major political and cultural center of early Islamic Morocco. In this early period Morocco also absorbed waves of immigrants from Tunisia and al-Andalus who brought in cultural and artistic influences from their home countries. The well-known Qarawiyyin and Andalusiyyin mosques in Fes, founded in the 9th century, were built in hypostyle form, but the structures themselves were rebuilt during later expansions. The layouts of two other mosques from this era, the Mosque of Agadir and the Mosque of Aghmat, are known thanks to modern archeological investigations. The Mosque of Agadir was founded in 790 by Idris I on the site of the former Roman town of Pomeria (present-day Tlemcen in Algeria), while the Mosque of Aghmat, in a town about 30 km southeast of present-day Marrakesh, was founded in 859 by Wattas Ibn Kardus.
The rival caliphates (10th century)
The Caliphate of Córdoba
In the 10th century Abd ar-Rahman III declared a new Caliphate in al-Andalus and inaugurated the height of Andalusi power in the region. He marked this political evolution with the creation of a vast and lavish palace-city called Madinat al-Zahra, located just outside Cordoba on the lower slopes of the Sierra Morena. Its construction started in 936 and continued for decades during his reign and that of his son. The site was later destroyed and pillaged after the end of the Caliphate, but its remains have been excavated since 1911. The site covers a vast area divided into three terraced levels: the highest level contained the caliph's palaces, the level below this contained official buildings and dwellings of high officials, and the lowest and largest level was inhabited by common workers, craftsmen, and soldiers. The most lavish building discovered so far, known today as the Salón Rico ("Rich Hall" in Spanish), is the reception hall of Abd ar-Rahman III, which is fronted by sunken gardens and reflective pools on a terrace overlooking the landscape below. Its main hall is a rectangular space divided into three naves by two rows of horseshoe arches and nearly every wall surface is covered in exceptional stone-carved decoration with geometric and tree of life motifs. While garden estates were built by the Umayyad rulers and elites of Cordoba before this, the gardens of Madinat al-Zahra are the oldest archeologically documented example of geometrically-divided gardens (related to the chahar bagh type) in the western Islamic world, among the oldest examples in the Islamic world generally, and the oldest known example to combine this type of garden with a system of terraces.
Andalusi decoration and craftsmanship of this period became more standardized. While Classical inspirations are still present, they are interpreted more freely and are mixed with influences from the Middle East, including ancient Sasanian or more recent Abbasid motifs. This is seen for example in the stylized vegetal motifs intricately carved onto limestone panels on the walls at Madinat al-Zahra. It is also at Madinat al-Zahra that the "caliphal" style of horseshoe arch was formalized: the curve of the arch forms about three quarters of a circle, the voussoirs are aligned with the imposts rather than the center of the arch, the curve of the extrados is "stilted" in relation to that of the intrados, and the arch is set within a decorative alfiz. Back in Cordoba itself, Abd ar-Rahman III also expanded the courtyard (sahn) of the Great Mosque and built its first true minaret. The minaret, with a cuboid shape, became the model followed for later minarets in the region. Abd ar-Rahman III's cultured son and successor, al-Hakam II, further expanded the mosque's prayer hall, starting in 962. He endowed it with some of its most significant architectural flourishes and innovations, which included a maqsura enclosed by intersecting multifoil arches, four ornate ribbed domes, and a richly-ornamented mihrab with Byzantine-influenced gold mosaics.
A much smaller but notable work from the late caliphate period is the Bab al-Mardum Mosque (now known as the Church of San Cristo de la Luz) in Toledo, which has a nine-bay layout covered by a variety of ribbed domes and an exterior façade with an Arabic inscription carved in brick. Other monuments from the Caliphate period in al-Andalus include some of Toledo's old city gates (e.g. Puerta de Bisagra), the former mosque (and later monastery) of Almonaster la Real, the Castle of Tarifa, the Burgalimar Castle, the Caliphal Baths of Cordoba, and, possibly, the Baths of Jaen.
In the 10th century much of northern Morocco also came directly within the sphere of influence of the Ummayyad Caliphate of Cordoba, with competition from the Fatimid Caliphate further east. Early contributions to Moroccan architecture from this period include expansions to the Qarawiyyin and Andalusiyyin mosques in Fes and the addition of their square-shafted minarets, carried out under the sponsorship of Abd ar-Rahman III and following the example of the minaret he built for the Great Mosque of Cordoba.
The Fatimid Caliphate
In Ifriqiya, the Fatimids also built extensively, most notably with the creation of a new fortified capital on the coast, Mahdia. Construction began in 916 and the new city was officially inaugurated on 20 February 921, although some construction continued. In addition to its heavy fortified walls, the city included the Fatimid palaces, an artificial harbor, and a congregational mosque (the Great Mosque of Mahdia). Much of this has not survived to the present day. Fragments of mosaic pavements from the palaces have been discovered from modern excavations. The mosque is one of the most well-preserved Fatimid monuments in the Maghreb, although it too has been extensively damaged over time and was in large part reconstructed by archeologists in the 1960s. It consists of a hypostyle prayer hall with a roughly square courtyard. The mosque's original main entrance, a monumental portal projecting from the wall, was relatively unusual at the time and may have been inspired by ancient Roman triumphal arches. Another unusual feature was the absence of a minaret, which may have reflected an early Fatimid rejection of such structures as unnecessary innovations.
In 946 the Fatimids began construction of a new capital, al-Mansuriyya, near Kairouan. Unlike Mahdia, which was built with more strategic and defensive considerations in mind, this capital was built as a display of power and wealth. The city had a round layout with the caliph's palace at the center, possibly modeled on the Round City of Baghdad. While only sparse remains of the city have been uncovered, it appears to have differed from earlier Fatimid palaces in its extensive use of water. One excavated structure had a vast rectangular courtyard mostly occupied by a large pool. This use of water was reminiscent of earlier Aghlabid palaces at nearby Raqqada and of contemporary palaces at Madinat al-Zahra, but not of older Umayyad and Abbasid palaces further east, suggesting that displays of waterworks were evolving as symbols of power in the Maghreb and al-Andalus.
Political fragmentation (11th century)
The Taifas in Al-Andalus
The collapse of the Cordoban caliphate in the early 11th century gave rise to the first Taifas period, during which al-Andalus was politically fragmented into a number of smaller kingdoms. The disintegration of central authority resulted in the ruin and pillage of Madinat al-Zahra. Despite this political decline, the culture of the Taifa emirates was vibrant and productive, with the architectural forms of the Caliphate period continuing to evolve. A number of important palaces or fortresses, in various cities, were begun or expanded by local dynasties. The Alcazaba of Malaga, begun in the early 11th century and subsequently modified, is one of the most important examples. The earliest part of the palace features horseshoe arches with carved vegetal decoration that appear to imitate, with less sophistication, the style of Madinat al-Zahra. Another part contains intersecting multifoil arches that resemble those of al-Hakam II's maqsura in the Cordoba mosque, though serving a purely decorative and non-structural purpose here. The Alcazar of Seville and the Alcazaba of the Alhambra were also the site of earlier fortresses or palaces by the Abbadids (in Seville) and the Zirids (in Granada), respectively. The Alcazaba of Almería, along with a preserved section of Almería's defensive walls, dates from the 11th century, though little remains of the palaces built inside the Alcazaba. The Bañuelo of Granada, another historic Islamic bathhouse, is also traditionally dated to the 11th century, though recent studies suggest it may date from slightly later, the 12th century.
The Aljaferia Palace in Zaragoza, though much restored in modern times, is one of the most significant and best-preserved examples of this period, built during the second half of the 11th century by the Banu Hud. Inside its enclosure of fortified walls, one courtyard has been preserved from this period, occupied by pools and sunken gardens and wide rectangular halls fronted by porticos at either end. The arches of this courtyard have elaborate intersecting and mixed-linear designs and intricately-carved stucco decoration. The carved stucco of the southern portico, enveloping a simple brick core, is especially dizzying and complex, drawing on the forms of plain and multifoil arches but manipulating them into motifs outside their normal structural logic. Next to the northern hall of the courtyard, which was probably al-Muqtadir's audience hall, is an unusual small octagonal room with a mihrab, most likely a private oratory for the ruler. The designs and decoration of the palace appear to be a further elaboration of 10th-century Cordoban architecture, in particular al-Hakam II's extension in the Mosque of Cordoba, and of the Taifa-period aesthetic that followed it. Remains of another palace at Balaguer, further east in Catalonia today, are contemporary with the Aljaferia. Fragments of stucco decoration found here show that it was built in a very similar style. However, they also include rare surviving examples of figural sculpture in western Islamic architectural decoration, such as the carved image of a tree occupied by birds and harpies.
Zirids and Hammadids in North Africa
In North Africa, new Berber dynasties such as the Zirids ruled on behalf of the Fatimids, who had moved their base of power to Cairo in the late 10th century. The Zirid palace at 'Ashir (near the present town of Kef Lakhdar in Algeria) was built in 934 by Ziri ibn Manad while in the service of the Fatimid caliph al-Qa'im. It is one of the oldest palaces in the Maghreb to have been discovered and excavated. It was built in stone and has a carefully-designed symmetrical plan which included a large central courtyard and two smaller courtyards in each of the side wings of the palace. Some scholars believe this design imitated the now-lost Fatimid palaces of Mahdia. As independent rulers, however, the Zirids of Ifriqiya built relatively few grand structures. They reportedly built a new palace at al-Mansuriyya, a former Fatimid capital near Kairouan, but it has not been found by archeologists. In Kairouan itself the Great Mosque was restored by Al-Mu'izz ibn Badis. The wooden maqsura within the mosque today is believed to date from this time. It is the oldest maqsura in the Islamic world to be preserved in situ and was commissioned by al-Mu῾izz ibn Badis in the first half of the 11th century (though later restored). It is notable for its woodwork, which includes an elaborately carved Kufic inscription dedicated to al-Mu'izz. The Qubbat al-Bahw, an elegant dome at the entrance of the prayer hall of the Zaytuna Mosque in Tunis, dates from 991 and can be attributed to Al-Mansur ibn Buluggin.
The Hammadids, an offshoot of the Zirids, ruled in the central Maghreb (present-day Algeria) during the 11th and 12th centuries. They built an entirely new fortified capital known as Qal'at Bani Hammad, founded in 1007. Although abandoned and destroyed in the 12th century, the city has been excavated by modern archeologists and the site is one of the best-preserved medieval Islamic capitals in the world. It contains several palaces, various amenities, and a grand mosque, in an arrangement that bears similarities to other palace-cities such as Madinat al-Zahra. The largest palace, Qasr al-Bahr ("Palace of the Sea"), was built around an enormous rectangular water basin. The architecture of the site has been compared to Fatimid architecture, but bears specific resemblances to contemporary architecture in the western Maghreb, Al-Andalus, and Arab-Norman Sicily. For example, while the Fatimids usually built no minarets, the grand mosque of Qal'at Bani Hammad has a large square-based minaret with interlacing and polylobed arch decoration, which are features of architecture in al-Andalus. Various remnants of tile decoration have been discovered at the site, including the earliest known use of glazed tile decoration in western Islamic architecture. Archeologists also discovered fragments of plaster which have been identified by some as the earliest appearance of muqarnas ("stalactite" or "honeycomb" sculpting) in the western Islamic world, but their identification as true muqarnas has been questioned or rejected by some other scholars.
The Berber Empires (11th–13th centuries)
The late 11th century saw the significant advance of Christian kingdoms into Muslim al-Andalus, particularly with the fall of Toledo to Alfonso VI of Castile in 1085, and the rise of major Berber empires originating in northwestern Africa. The latter included first the Almoravids (11th–12th centuries) and then the Almohads (12th–13th centuries), both of whom created empires that stretched across large parts of western and northern Africa and took over the remaining Muslim territories of al-Andalus in Europe. Both empires had their capital at Marrakesh, which was founded by the Almoravids in the second half of the 11th century. This period is one of the most formative stages of architecture in al-Andalus and the Maghreb, establishing many of the forms and motifs that were refined in subsequent centuries.
Almoravids
The Almoravids made use of Andalusi craftsmen throughout their realms, thus helping to spread the highly ornate architectural style of al-Andalus to North Africa. Almoravid architecture assimilated the motifs and innovations of Andalusi architecture, such as the complex interlacing arches of the Great Mosque in Cordoba and of the Aljaferia palace in Zaragoza, but it also introduced new ornamental techniques from the east, such as muqarnas, and added its own innovations, such as the lambrequin arch and the use of pillars instead of columns in mosques. Stucco-carved decoration began to appear more and more as part of these compositions and would become even more elaborate in subsequent periods. Almoravid patronage thus marks a period of transition for architecture in the region, setting the stage for future developments.
Some of the oldest and most significant surviving examples of Almoravid religious architecture, although with later modifications, are the Great Mosque of Algiers (1096–1097), the Great Mosque of Tlemcen (1136), and the Great Mosque of Nedroma (1145), all located in Algeria today. The highly ornate, semi-transparent plaster dome in front of the mihrab of the Great Mosque of Tlemcen, dating from the reign of Ali ibn Yusuf (r. 1106–1143), is one of the highlights of this period. The design of the dome traces its origins to the earlier ribbed domes of Al-Andalus and, in turn, it probably influenced the design of similar ornamental domes in later mosques in Fez and Taza.
In Morocco, the only notable remnants of Almoravid religious architecture are the Qubba Ba'adiyyin, a small but highly ornate ablutions pavilion in Marrakesh, and the Almoravid expansion of the Qarawiyyin Mosque in Fez. These two monuments also contain the earliest clear examples of muqarnas decoration in the region, with the first complete muqarnas vault appearing in the central nave of the Qarawiyyin Mosque. The Almoravid palace of Ali Ibn Yusuf in Marrakesh, excavated in the 20th century, contains the earliest known example of a riad garden (an interior garden symmetrically divided into four parts) in Morocco.
In present-day Spain, the oldest surviving muqarnas fragments were found in a palace built by Muhammad Ibn Mardanish, the independent ruler of Murcia (1147–1172). The remains of the palace, known as al-Qasr al-Seghir (or Alcázar Seguir in Spanish) are part of the present-day Monastery of Santa Clara in Murcia. The muqarnas fragments are painted with images of musicians and other figures. Ibn Mardanish also constructed what is now known as the Castillejo de Monteagudo, a hilltop castle and fortified palace outside the city that is one of the best-preserved examples of Almoravid-era architecture in the Iberian Peninsula. It has a rectangular plan and contained a large riad garden courtyard with symmetrical reception halls facing each other across the long axis of the garden.
Almohads
Almohad architecture showed more restraint than Almoravid architecture in its use of ornamental richness, giving greater attention to wider forms, contours, and overall proportions. Earlier motifs were refined and were given a grander scale. While surface ornament remained important, architects strove for a balance between decorated surfaces and empty spaces, allowing the interaction of light and shadows across carved surfaces to play a role.
The Almohad Kutubiyya and Tinmal mosques are often considered the prototypes of medieval mosque architecture in the region. The so-called "T-plan", combined with a hierarchical use of decoration that emphasizes the wider central and transverse qibla aisles of the mosque, became an established feature of this architecture. The monumental minarets of the Kutubiyya Mosque, the Giralda of the Great Mosque of Seville (now part of the city's cathedral), and the Hassan Tower of Rabat, as well as the ornamental gateways of Bab Agnaou in Marrakesh and Bab Oudaia and Bab er-Rouah in Rabat, were all models that established the overall decorative schemes that became recurrent in these architectural elements from then on. The minaret of the Kasbah Mosque of Marrakesh, with its façades covered by sebka motifs and glazed tile, was particularly influential and set a style that was repeated, with minor elaborations, in the following period under the Marinids and other dynasties.
The Almohad caliphs constructed their own palace complexes in several cities. They founded the Kasbah of Marrakesh in the late 12th century as their main residence, imitating earlier examples of self-contained palace-cities such as Madinat al-Zahra in the 10th century. The Almohads also made Tunis the regional capital of their territories in Ifriqiya (present-day Tunisia), establishing the city's own kasbah (citadel). The caliphs also constructed multiple country estates and gardens right outside some of these cities, continuing a tradition that existed under the Almoravids. These estates were typically centered around a large artificial water reservoir that sustained orchards of fruit trees and other plants, while small palaces or pleasure pavilions were built along the water's edge. In Marrakesh, the present-day Agdal and Menara gardens both developed from such Almohad creations. In Seville, the remains of the Almohad al-Buḥayra garden, founded in 1171, were excavated in the 1970s. Sunken gardens were also part of Almohad palace courtyards. In some cases the gardens were divided symmetrically into four parts, much like a riad garden. Examples of these have been found in some courtyards of the Alcázar of Seville, where the former Almohad palaces once stood.
Arab-Norman architecture in Sicily (11th-12th centuries)
Sicily was progressively brought under Muslim control in the 9th century, when the Aghlabids conquered it from the Byzantines. The island was subsequently settled by Arabs and Berbers from North Africa. In the following century the island passed into the control of the Fatimids, who left the island under the governorship of the Kalbids. By the mid-11th century the island was fragmented into smaller Muslim states and by the end of that century the Normans had conquered it under the leadership of Robert Guiscard and Roger de Hauteville (Roger I).
Virtually no examples of architecture from the period of the Emirate of Sicily have survived today. However, the following period of Norman domination, especially under Roger II in the 12th century, was notable for its unique blending of Norman, Byzantine and Arab-Islamic cultures. Multiple examples of this "Arab-Norman" architecture – which was also heavily influenced by Byzantine architecture – have survived today and are even classified together as a UNESCO World Heritage Site (since 2015). While the Arab-Islamic elements of this architecture are closely linked to Fatimid architecture, they also come from Moorish architecture and are stylistically similar to the preceding Almoravid period.
The Palazzo dei Normanni (Palace of the Normans) in Palermo contains the Cappella Palatina, one of the most important masterpieces of this style, built under Roger II in the 1130s and 1140s. It combines harmoniously a variety of styles: the Norman architecture and door decor, the Arabic arches and scripts adorning the roof, the Byzantine dome and mosaics. The central nave of the chapel is covered by a large rectangular vault ceiling made of painted wood and carved in muqarnas: the largest rectangular muqarnas vault of its kind.
Marinids, Nasrids, and Zayyanids (13th–15th centuries)
The eventual collapse of the Almohad Empire in the 13th century was precipitated by its defeat at the Battle of Las Navas de Tolosa (1212) in al-Andalus and by the advance of the Berber Marinid dynasty in the western Maghreb, the Zayyanids in the central Maghreb, and the Hafsids in Ifriqiya. What remained of the Muslim-controlled territories in al-Andalus was consolidated by the Nasrid dynasty into the Emirate of Granada, which lasted another 250 years until its final conquest by the Catholic Monarchs in 1492, at the end of the Reconquista. Both the Nasrids in al-Andalus to the north and the Marinids in Morocco to the south were important in further refining the artistic legacy established by their predecessors. When Granada was conquered in 1492 by Catholic Spain and the last Muslim realm of al-Andalus came to an end, many of the remaining Spanish Muslims (and Jews) fled to Morocco and other parts of North Africa, further increasing the Andalusian influence in these regions in subsequent generations.
The architectural styles of the Marinids, Zayyanids, and Nasrids were very similar to each other. Craftsmen probably travelled between royal courts and from region to region, resulting in mutual influences between the arts of the three kingdoms. Compared with the relatively restrained decoration of Almohad architecture, the monuments of all three dynasties during this period are marked by increasingly extensive and intricate decoration on every surface, particularly in wood, stucco, and zellij (mosaic tilework in complex geometric patterns). Some differences are still found between the styles of each dynasty, such as the wider use of marble columns in Nasrid palaces and the increasing use of wooden elements in Marinid architecture. Nasrid architecture also exhibits details influenced by Granada's closer interactions with Christian kingdoms like Castile.
The Marinids, who chose Fes as their capital, were also the first to build madrasas in this region, a type of institution which originated in Iran and had spread west. The madrasas of Fes, such as the Bou Inania, al-Attarine, and as-Sahrij madrasas, as well as the Marinid madrasa of Salé and the other Bou Inania in Meknes, are considered among the greatest architectural works of this period. The Marinids also imitated previous dynasties by founding their own fortified palace-city to the west of Fes, known afterwards as Fes el-Jdid ("New Fez"), which remained a frequent center of power in Morocco even during later dynasties such as the 'Alawis. Unlike the Alhambra of Granada, the grand palaces of Fes el-Jdid have not survived, though they may have been comparable in splendor. The Great Mosque of Fes el-Jdid, on the other hand, is one of the major Marinid mosques that is still well-preserved today, while numerous other mosques were built throughout Fes and in other cities during this period, including the Lalla az-Zhar Mosque in Fes, the Ben Salah Mosque in Marrakesh, the Zawiya an-Nussak in Salé, the Great Mosque of Oujda, and others.
The most famous architectural legacy of the Nasrids in Granada is the Alhambra, a hilltop palace district protected by heavy fortifications and containing some of the most famous and best-preserved palaces of western Islamic architecture. Initially a fortress built by the Zirids in the 11th century (corresponding to the current Alcazaba), it was expanded into a self-contained and well-fortified palace district complete with habitations for servants and workers. The oldest remaining palace there today, built under Muhammad III (ruled 1302–1309), is the Palacio del Partal which, although only partly preserved, demonstrates the typical layout which would be repeated in other palaces nearby: a courtyard centered on a large reflective pool with porticos at either end and a mirador (lookout) tower at one end which looked down on the city from the edge of the palace walls. The most famous palaces, the Comares Palace and the Palace of the Lions, were added afterwards. The Comares Palace, which includes a lavish hammam (bathhouse) and the Hall of the Ambassadors (a throne room), was begun under Isma'il I (ruled 1314–1325) but mostly constructed under Yusuf I (1333–1354) and Muhammad V (ruled 1354–1359 and 1362–1391). The Palace of the Lions was built under Muhammad V and possibly finished around 1380. It features a courtyard with a central marble fountain decorated with twelve lion sculptures. The galleries and chambers around the courtyard are notable for their extremely fine stucco decoration and some exceptional muqarnas vault ceilings. Four other nearby palaces in the Alhambra were demolished at various points after the end of the Reconquista (1492). The summer palace and gardens known as the Generalife were also created nearby – at the end of the 13th century or in the early 14th century – in a tradition reminiscent of the Almohad-era Agdal Gardens of Marrakesh and the Marinid Royal Gardens of Fes. The Nasrids also built other structures throughout the city – such as the Madrasa and the Corral del Carbón – and left their mark on other structures and fortifications throughout their territory, though not many significant structures have survived intact to the present day.
Meanwhile, in the former territories of al-Andalus under the control of the Spanish kingdoms of León, Castile and Aragon, Andalusi art and architecture continued to be employed for many years as a prestigious style under new Christian patrons, becoming what is known as Mudéjar art (named after the Mudéjars or Muslims under Christian rule). This type of architecture, created by Muslim craftsmen or by other craftsmen following the same tradition, continued many of the same forms and motifs with minor variations. Numerous examples are found in the early churches of Toledo (e.g. the Church of San Román, 13th century), as well as in other cities in Aragon such as Zaragoza and Teruel. Among the most famous and celebrated examples is the Alcazar of Seville, which was the former palace of the Abbadids and the Almohads in the city but was rebuilt by Christian rulers, including Peter the Cruel, who added lavish sections in Moorish style starting in 1364 with the help of craftsmen from Granada and Toledo. Other smaller but notable examples in Cordoba include the Chapel of San Bartolomé and the Royal Chapel (Capilla Real) in the Great Mosque (which was converted to a cathedral in 1236). Some surviving 13th- and 14th-century Jewish synagogues were also built (or rebuilt) in Mudéjar Moorish style while under Christian rule, such as the Synagogue of Santa Maria la Blanca in Toledo (rebuilt in its current form in 1250), the Synagogue of Cordoba (1315), and the Synagogue of El Tránsito (1355–1357).
Further east, in Algeria, the Berber Zayyanid or Abd al-Wadid dynasty controlled their own state and built monuments in their main capital at Tlemcen. Yaghmorasan (r. 1236–1283), the founder of the dynasty, added minarets to the earlier Mosque of Agadir and the Great Mosque of Tlemcen while his successor, Abu Sa'id 'Uthman (r. 1283–1304), founded the Mosque of Sidi Bel Hasan in 1296. The Zayyanids built other religious foundations in the area, but many have not survived to the present day or have preserved little of their original appearance. In addition to mosques, they built the first madrasas in Tlemcen. The Madrasa Tashfiniya, founded by Abu Tashfin I (r. 1318–1337), was celebrated for its rich decoration, including zellij tile decoration with sophisticated arabesque and geometric motifs whose style was repeated in some subsequent Marinid monuments. The Marinids also intermittently occupied Tlemcen in the 14th century and left their mark on the area. During his siege of the city at the beginning of the century, the Marinid leader Abu Ya'qub built a fortified settlement nearby named al-Mansurah, which includes the monumental Mansurah Mosque (begun in 1303, only partly preserved today). Further east, Abu al-Hasan founded the Mosque of Sidi Bu Madyan in the city in 1338–39.
The Hafsids of Tunisia (13th–16th centuries)
In Ifriqiya (Tunisia), the Hafsids, a branch of the Almohad ruling class, declared their independence from the Almohads in 1229 and developed their own state which came to control much of the surrounding region. They were also significant builders, particularly under the reigns of successful leaders like Abu Zakariya (ruled 1229–1249) and Abu Faris (ruled 1394–1434), though not many of their monuments have survived intact to the present-day. While Kairouan remained an important religious center, Tunis was the capital and progressively replaced it as the main city of the region and the main center of architectural patronage. Unlike the architecture further west, Hafsid architecture was built primarily in stone (rather than brick or mudbrick) and appears to have featured much less decoration. In reviewing the history of architecture in the region, scholar Jonathan Bloom remarks that Hafsid architecture seems to have "largely charted a course independent of the developments elsewhere in the Maghrib [North Africa]".
The Kasbah Mosque of Tunis was one of the first works of this period, built by Abu Zakariya (the first independent Hafsid ruler) at the beginning of his reign. Its floor plan had noticeable differences from previous Almohad-period mosques, but the minaret, completed in 1233, bears a very strong resemblance to the minaret of the earlier Almohad Kasbah Mosque in Marrakesh. Other foundations from the Hafsid period in Tunis include the Haliq Mosque (13th century) and the al-Hawa Mosque (1375). The Bardo Palace (today a national museum) was also begun by the Hafsids in the 15th century, and is mentioned in historical records for the first time during the reign of Abu Faris. The Hafsids also made significant renovations to the much older Great Mosque of Kairouan – renovating its ceiling, reinforcing its walls, and building or rebuilding two of its entrance gates in 1293 – as well as to the al-Zaytuna Mosque in Tunis.
The Hafsids also introduced the first madrasas to the region, beginning with the Madrasa al-Shamma῾iyya built in Tunis in 1238 (or in 1249 according to some sources). This was followed by many others (almost all of them in Tunis) such as the Madrasa al-Hawa founded in the 1250s, the Madrasa al-Ma'ridiya (1282), and the Madrasa al-Unqiya (1341). Many of these early madrasas, however, have been poorly preserved or have been considerably modified in the centuries since their foundation. The Madrasa al-Muntasiriya, completed in 1437, is among the best preserved madrasas of the Hafsid period.
The Hafsids were eventually supplanted by the Ottomans who took over most of the Maghreb in the 16th century, with the exception of Morocco, which remained an independent kingdom. This resulted in an even greater divergence between the architecture of Morocco to the west, which continued to follow essentially the same Andalusi-Maghrebi traditions of art as before, and the architecture of Algeria and Tunisia to the east, which increasingly blended influences from Ottoman architecture into local designs.
The Sharifian dynasties in Morocco: Saadians and 'Alawis (16th century and after)
In Morocco, after the Marinids came the Saadian dynasty in the 16th century, which marked a political shift from Berber-led empires to sultanates led by Arab sharifian dynasties. Artistically and architecturally, however, there was broad continuity and the Saadians are seen by modern scholars as continuing to refine the existing Moorish-Moroccan style, with some considering the Saadian Tombs in Marrakesh as one of the apogees of this style. Starting with the Saadians, and continuing with the 'Alawis (their successors and the reigning monarchy of Morocco today), Moroccan art and architecture is portrayed by modern scholars as having remained essentially "conservative"; meaning that it continued to reproduce the existing style with high fidelity but did not introduce major new innovations.
The Saadians, especially under the sultans Abdallah al-Ghalib and Ahmad al-Mansur, were extensive builders and benefitted from great economic resources at the height of their power in the late 16th century. In addition to the Saadian Tombs, they also built several major mosques in Marrakesh including the Mouassine Mosque and the Bab Doukkala Mosque, which are notable for being part of larger multi-purpose charitable complexes including several other structures like public fountains, hammams, madrasas, and libraries. This marked a shift from the previous patterns of architectural patronage and may have been influenced by the tradition of building such complexes in Mamluk architecture in Egypt and the külliyes of Ottoman architecture. The Saadians also rebuilt the royal palace complex in the Kasbah of Marrakesh for themselves, where Ahmad al-Mansur constructed the famous El Badi Palace (built between 1578 and 1593) which was known for its superlative decoration and costly building materials including Italian marble.
The 'Alawis, starting with Moulay Rashid in the mid-17th century, succeeded the Saadians as rulers of Morocco and continue to be the reigning monarchy of the country to this day. As a result, many of the mosques and palaces standing in Morocco today have been built or restored by the 'Alawis at some point or another in recent centuries. Ornate architectural elements from Saadian buildings, most infamously from the lavish El Badi Palace, were also stripped and reused in buildings elsewhere during the reign of Moulay Isma'il (1672–1727). Moulay Isma'il is also notable for having built a vast imperial capital in Meknes, where the remains of his monumental structures can still be seen today. In 1765 Sultan Mohammed ben Abdallah (one of Moulay Isma'il's sons) started the construction of a new port city called Essaouira (formerly Mogador), located along the Atlantic coast as close as possible to his capital at Marrakesh, to which he tried to move and restrict European trade. He hired European architects to design the city, resulting in a relatively unique historic city built by Moroccans but with Western European architecture, particularly in the style of its fortifications. Similar maritime fortifications or bastions, usually called a sqala, were built at the same time in other port cities like Anfa (present-day Casablanca), Rabat, Larache, and Tangier. Later sultans were also significant builders. Up until the late 19th century and early 20th century, both the sultans and their ministers continued to build beautiful palaces, many of which are now used as museums or tourist attractions, such as the Bahia Palace in Marrakesh, the Dar Jamaï in Meknes, and the Dar Batha in Fes.
Ottoman rule in Algeria and Tunisia (16th century and after)
Over the course of the 16th century the central and eastern Maghreb – Algeria, Tunisia, and Libya – came under Ottoman control. Major port cities such as Algiers, Tunis, and Tripoli also became centers of pirate activity, which brought in wealth to local elites but also attracted intrusions by European powers, who occupied and fortified some coastal positions. In the late 17th century and early 18th century, Ottoman control became largely nominal: the Regency of Algiers (Algeria) was de facto ruled by the local deys until the French conquest of 1830, Tunisia was ruled by the Muradid dynasty (after 1602) and the Husaynid dynasty (after 1705), and Libya was ruled by the Qaramanli dynasty until the return of direct Ottoman control in 1835. Whereas architecture in Morocco remained largely traditional during the same period, architecture in Algeria and Tunisia was blended with Ottoman architecture, especially in the coastal cities where Ottoman influence was strongest. Some European influences were also introduced, particularly through the importation of materials from Italy such as marble.
Tunisia
In Tunis, the Mosque complex of Yusuf Dey, built or begun around 1614–15 by Yusuf Dey (r. 1610–1637), is one of the earliest and most important examples that imported Ottoman elements into local architecture. Its congregational mosque is accompanied by a madrasa, a primary school, fountains, latrines, and even a café, many of which provided revenues for the upkeep of the complex. This arrangement is similar to Ottoman külliye complexes. It was also the first example of a "funerary mosque" in Tunis, as the complex includes the founder's mausoleum, dated to 1639. While the hypostyle form of the mosque and the pyramidal roof of the mausoleum reflect traditional architecture in the region, the minaret's octagonal shaft reflects the influence of the "pencil"-shaped Ottoman minarets. In this period, octagonal minarets often distinguished mosques following the Hanafi madhhab (which was associated with the Ottomans), while mosques which continued to follow the Maliki madhhab (predominant in the Maghreb) continued to employ traditional square-shaft minarets.
The Mosque of Hammuda Pasha, built by Hammuda Pasha (r. 1631–1664) between 1631 and 1654, repeats many of the same elements as the Yusuf Dey Mosque. Both mosques make use of marble columns and capitals that were imported from Italy and possibly even carved by Italian craftsmen in Tunis. Hammuda Pasha was also responsible for beginning, in 1629, a major restoration and expansion of the Zawiya of Abu al-Balawi or "Mosque of the Barber" in Kairouan. While the Zawiya has been further modified since, one of its characteristic 17th-century features is the decoration of underglaze-painted Qallalin tiles on many of its walls. These tiles, generally produced in the Qallalin district of Tunis, are painted with motifs of vases, plants, and arches in predominantly blue, green, and ochre-like yellow colours which distinguish them from contemporary Ottoman tiles. The artistic height of these tiles was in the 17th and 18th centuries.
It was not until the end of the 17th century that the first and only Ottoman-style domed mosque in Tunisia was built: the Sidi Mahrez Mosque, begun by Muhammad Bey and completed by his successor, Ramadan ibn Murad, between 1696 and 1699. The mosque's prayer hall is covered by a dome system typical of Classical Ottoman architecture and first employed by Sinan for the Şehzade Mosque (c. 1548) in Istanbul: a large central dome flanked by four semi-domes, with four smaller domes at the corners and pendentives in the transitional zones between the semi-domes. The interior is decorated with marble paneling and Ottoman Iznik tiles.
Algeria
During this period Algiers developed into a major town and witnessed regular architectural patronage, and as such most of the major monuments from this period are concentrated there. By contrast, the city of Tlemcen, the former major capital of the region, went into relative decline and saw far less architectural activity. Mosque architecture in Algiers during this period demonstrates the convergence of multiple influences as well as peculiarities that may be attributed to the innovations of local architects. Domes of Ottoman influence were introduced into the design of mosques, but minarets generally continued to be built with square shafts instead of round or octagonal ones, thus retaining local tradition, unlike contemporary architecture in Ottoman Tunisia and other Ottoman provinces, where the "pencil"-shaped minaret was a symbol of Ottoman sovereignty.
The oldest surviving mosque from the Ottoman period in Algeria is the Ali Bitchin (or 'Ali Bitshin) Mosque in Algiers, commissioned by an admiral of the same name, a convert of Italian origin, in 1622. The mosque is built on top of a raised platform and was once associated with various annexes including a hospice, a hammam, and a mill. A minaret and public fountain stand on its northeast corner. The interior prayer hall is centered around a square space covered by a large octagonal dome supported on four large pillars and pendentives. This space is surrounded on all four sides with galleries or aisles covered by rows of smaller domes. On the west side of the central space this gallery is two bays deep (i.e. composed of two aisles instead of one), while on the other sides, including on the side of the mihrab, the galleries are just one bay deep. Several other mosques in Algiers built from the 17th to early 19th centuries had a similar floor plan. This particular design was unprecedented in the Maghreb. The use of a large central dome was a clear connection with Ottoman architecture. However, the rest of the layout is quite different from the mosques of metropolitan Ottoman architecture in cities like Istanbul. Some scholars, such as Georges Marçais, suggested that the architects or patrons could have been influenced by Ottoman-era mosques built in the Levantine provinces of the empire, where many of the rulers of Algiers had originated.
The most notable monument from this period in Algiers is the New Mosque (Djamaa el Djedid) in Algiers, built in 1660–1661. The mosque has a large central dome supported by four pillars, but instead of being surrounded by smaller domes it is flanked on four sides by wide barrel-vaulted spaces, with small domed or vaulted bays occupying the corners between these barrel vaults. The barrel-vaulted space on the north side of the dome (the entrance side) is elongated, giving the main vaulted spaces of the mosque a cross-like configuration resembling a Christian cathedral. The mosque's minaret has a traditional form with a square shaft surmounted by a small lantern structure. Its simple decoration includes tilework; the clock faces visible today were added at a later period. The mihrab has a more traditional western Islamic form, with a horseshoe-arch shape and stucco decoration, although the decoration around it is crowned with Ottoman-style half-medallion and quarter-medallion shapes. The mosque's overall design and its details thus attest to an apparent mix of Ottoman, Maghrebi, and European influences. As the architect is unknown, Jonathan Bloom suggests that it could very well have been a local architect who simply took the general idea of Ottoman mosque architecture and developed his own interpretation of it.
Beyond the Islamic world
Certain aspects and traditions of Moorish architecture were brought to the Iberian colonies in the Americas. Scholars have outlined the influence of Arab and Amazigh substrates in popular architecture in Brazil, noting the considerable number of architectural terms in Portuguese inherited from Arabic. Elements of Mudéjar architecture, derived from Islamic architectural traditions and assimilated into Spanish architecture, are found in the architecture of the Spanish colonies. The Islamic and Mudéjar style of decorative wooden ceilings, known in Spanish as armadura, proved particularly popular in both Spain and its colonies. Examples of Mudéjar-influenced colonial architecture are concentrated in Mexico and Central America, including some in what is now the southwestern United States.
Later, particularly in the 19th century, the Moorish Islamic style was frequently imitated by the Neo-Moorish or Moorish Revival style which emerged in Europe and North America as part of the Romanticist interest in the "Orient". The term "Moorish" or "neo-Moorish" sometimes also covered an appropriation of motifs from a wider range of Islamic architecture. This style was a recurring choice for Jewish synagogue architecture of the era, where it was seen as an appropriate way to mark Judaism's non-European origins. Similar to Neo-Moorish, Néo-Mudéjar was a revivalist style evident in late 19th and early 20th-century Spain and in some Spanish Colonial architecture in northern Morocco. During the French occupation of Algeria, Tunisia, and Morocco, the French colonial administration also encouraged, in some cases, the use of indigenous North African or arabisant ("Arabizing") motifs in new buildings.
Architectural features
General characteristics
The architecture of the western Islamic world is exemplified by mosques, madrasas, palaces, fortifications, hammams (bathhouses), funduqs (caravanserais), and other historic building types common to Islamic architecture. Characteristic elements of the western regional style include horseshoe-shaped, intersecting, and polylobed arches, often with voussoirs of alternating colors or patterns, as well as internal courtyards, riad gardens, ribbed domes, and cuboid (square-base) minarets. Decoration typically consists of vegetal arabesques, geometric motifs, muqarnas sculpting, Arabic inscriptions, and epigraphic motifs. These motifs were translated into woodwork, carved stucco, and mosaic tilework known as zellij. The nature of the medieval Islamic world encouraged people to travel, which made it possible for artists, craftsmen, and ideas from other parts of the Islamic world to reach this region. Some features, such as muqarnas and tile revetments, were transmitted from the east but were realized differently in this region.
As scholar Jonathan Bloom remarks in his introduction to this topic, traditional Islamic-era architecture in the Maghreb and Al-Andalus was in some respects more "conservative" than other regional styles of Islamic architecture, in the sense that these buildings were less structurally ambitious than, for example, the increasingly audacious domed or vaulted structures that developed in Ottoman architecture and Iranian architecture. With the exception of minarets, Moorish monuments were rarely very tall and Moorish architecture persisted in using the hypostyle hall – one of the earliest types of structures in Islamic architecture – as the main type of interior space throughout its history. Moreover, Moorish architecture also continued an early Islamic tradition of avoiding ostentatious exterior decoration or exterior monumentality. With the important exception of gateways and minarets, the exteriors of buildings were often very plain, while the interiors were the focus of architectural innovation and could be lavishly decorated. By contrast, architectural styles in the eastern parts of the Islamic world developed significantly different and innovative spatial arrangements in their construction of domed halls or vaulted iwans and featured increasingly imposing and elaborate exteriors that dominated their surroundings.
Arches
Horseshoe arch
Perhaps the most characteristic arch type of western Islamic architecture generally is the so-called "Moorish" or "horseshoe" arch. This is an arch where the curves of the arch continue downward past the horizontal middle axis of the circle and begin to curve towards each other, rather than just forming a half circle. This arch profile became nearly ubiquitous in the region from the very beginning of the Islamic period. The origin of this arch appears to date back to the preceding Byzantine period across the Mediterranean, as versions of it appear in Byzantine-era buildings in Cappadocia, Armenia, and Syria. They also appear frequently in Visigothic churches in the Iberian peninsula (5th–7th centuries). Perhaps due to this Visigothic influence, horseshoe arches were particularly predominant afterwards in al-Andalus under the Umayyads of Cordoba, although the "Moorish" arch was of a slightly different and more sophisticated form than the Visigothic arch. Arches were not only used for supporting the weight of the structure above them. Blind arches and arched niches were also used as decorative elements. The mihrab of a mosque was almost invariably in the shape of a horseshoe arch.
Starting in the Almoravid period, the first pointed or "broken" horseshoe arches began to appear in the region and became more widespread during the Almohad period. This arch is likely of North African origin, since pointed arches were already present in earlier Fatimid architecture further east.
Polylobed arch
Polylobed (or multifoil) arches have their earliest precedents in Fatimid architecture in Ifriqiya and Egypt and had also appeared in Andalusi Taifa architecture such as the Aljaferia palace and the Alcazaba of Malaga, which elaborated on the existing examples of al-Hakam II's extension to the Great Mosque of Cordoba. In the Almoravid and Almohad periods, this type of arch was further refined for decorative functions while horseshoe arches continued to be standard elsewhere. Some early examples appear in the Great Mosque of Tlemcen (in Algeria) and the Mosque of Tinmal.
"Lambrequin" arch
The so-called "lambrequin" arch, with a more intricate profile of lobes and points, was also introduced in the Almoravid period, with an early appearance in the funerary section of the Qarawiyyin Mosque (in Fes) dating from the early 12th century. It then became common in subsequent Almohad, Marinid, and Nasrid architecture, in many cases used to highlight the arches near the mihrab area of a mosque. This type of arch is also sometimes referred to as a "muqarnas" arch due to its similarities with a muqarnas profile and because of its speculated derivation from the use of muqarnas itself. Moreover, this type of arch was indeed commonly used with muqarnas sculpting along the intrados (inner surfaces) of the arch.
Domes
Although domes and vaulting were not extensively used in western Islamic architecture, domes were still employed as decorative features to highlight certain areas, such as the space in front of the mihrab in a mosque. In the extension of the Great Mosque of Córdoba by al-Hakam II in the late 10th century, three domes were built over the maqsura (the privileged space in front of the mihrab) and another one in the central nave or aisle of the prayer hall at the beginning of the new extension. These domes were constructed as ribbed vaults. Rather than meeting in the centre of the dome, the "ribs" intersect one another off-center, forming a square or an octagon in the centre.
The ribbed domes of the Mosque of Córdoba served as models for later mosque buildings in Al-Andalus and the Maghreb. Around 1000 AD, the Bab al-Mardum Mosque in Toledo was constructed with a similar, eight-ribbed dome, surrounded by eight other ribbed domes of varying design. Similar domes are also seen in the mosque building of the Aljafería of Zaragoza. The architectural form of the ribbed dome was further developed in the Maghreb: the central dome of the Great Mosque of Tlemcen, a masterpiece of the Almoravids founded in 1082 and redecorated in 1136, has twelve slender ribs; the shell between the ribs is filled with filigree stucco work.
In Ifriqiya, certain domes from the 9th and 10th centuries, of a quite different style, are also particularly accomplished in their design and decoration. These are the 9th-century (Aghlabid) dome in front of the mihrab in the Great Mosque of Kairouan and the 10th-century (Zirid) Qubbat al-Bahw dome in the Al-Zaytuna Mosque in Tunis. Both are elegant ribbed domes with stonework flourishes such as decorative niches, inscriptions, and shell-shaped squinches.
Decorative motifs
Floral and vegetal motifs
Arabesques, or stylized floral and vegetal motifs, derive from a long tradition of similar motifs in Syrian, Hellenistic, and Roman architectural ornamentation. Early arabesque motifs in Umayyad Cordoba, such as those seen at the Great Mosque or Madinat al-Zahra, continued to make use of acanthus leaves and grapevine motifs from this Hellenistic tradition. Almoravid and Almohad architecture made more use of a general striated leaf motif, often curling and splitting into unequal parts along an axis of symmetry. Palmettes and, to a lesser extent, seashell and pine cone images were also featured. In the late 16th century, Saadian architecture sometimes made use of a mandorla-type (or almond-shaped) motif which may have been of Ottoman influence.
Sebka motif
Various types of interlacing lozenge-like motifs are heavily featured on the surface of minarets starting in the Almohad period (12th–13th centuries) and are later found in other decoration such as carved stucco along walls in Marinid and Nasrid architecture, eventually becoming a standard feature in the western Islamic ornamental repertoire in combination with arabesques. This motif, typically called sebka (meaning "net"), is believed by some scholars to have originated with the large interlacing arches in the 10th-century extension of the Great Mosque of Cordoba by Caliph al-Hakam II. It was then miniaturized and widened into a repeating net-like pattern that can cover surfaces. This motif, in turn, had many detailed variations. One common version, called darj wa ktaf ("step and shoulder") by Moroccan craftsmen, makes use of alternating straight and curved lines which cross each other on their symmetrical axes, forming a motif that looks roughly like a fleur-de-lys or palmette shape. Another version, also commonly found on minarets in alternation with the darj wa ktaf, consists of interlacing multifoil/polylobed arches which form a repeating partial trefoil shape.
Geometric patterns
Geometric patterns, most typically making use of intersecting straight lines which are rotated to form a radiating star-like pattern, were common in Islamic architecture generally and across Moorish architecture. These are found in carved stucco and wood decoration, and most notably in zellij mosaic tilework which became commonplace in Moorish architecture from the 13th century onward. Other polygon motifs are also found, often in combination with arabesques.
In addition to zellij tiles, geometric motifs were also predominant in the decoration and composition of wooden ceilings. One of the most famous examples of such ceilings, considered the masterpiece of its kind, is the ceiling of the Salón de Embajadores in the Comares Palace at the Alhambra in Granada, Spain. The ceiling, composed of 8,017 individual wooden pieces joined together into a pyramid-like dome, consists of a recurring 16-pointed star motif which is believed to have symbolized the Seven Heavens of Paradise described in the Qur'an (specifically the Surat al-Mulk, which is also inscribed at the ceiling's base). Like other stucco and wood decoration, it would have originally been painted in different colours in order to enhance its motifs.
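For a concrete sense of how a radiating star figure can arise from rotated, intersecting lines, the short Python sketch below computes the vertex path of a {16/5} star polygon: sixteen points on a circle, connected by skipping five points at a time. This is a hypothetical simplification for illustration only; the choice of sixteen points echoes the ceiling described above, but it is not a description of how craftsmen actually lay out zellij or ceiling strapwork.

```python
from math import cos, sin, pi

def star_polygon(n=16, step=5, radius=1.0):
    """Vertices of an {n/step} star polygon: n points on a circle,
    visited by jumping `step` points at a time until the path closes."""
    points = [(radius * cos(2 * pi * k / n), radius * sin(2 * pi * k / n))
              for k in range(n)]
    order = [(k * step) % n for k in range(n + 1)]  # n + 1 entries close the path
    return [points[i] for i in order]

# Print the path of a 16-pointed star; drawing these segments produces the
# intersecting chords that form a radiating star-like figure.
for x, y in star_polygon():
    print(f"{x:+.3f}, {y:+.3f}")
```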
Arabic calligraphy
Many Islamic monuments feature inscriptions of one kind or another which serve to either decorate or inform, or both. Arabic calligraphy, as in other parts of the Muslim world, was also an art form. Many buildings had foundation inscriptions which record the date of their construction and the patron who sponsored it. Inscriptions could also feature Qur'anic verses, exhortations of God, and other religiously significant passages. Early inscriptions were generally written in the Kufic script, a style where letters were written with straight lines and had fewer flourishes. At a slightly later period, mainly in the 11th century, Kufic letters were enhanced with ornamentation, particularly to fill the empty spaces that were usually present above the letters. This resulted in the addition of floral forms or arabesque backgrounds to calligraphic compositions.
In the 12th century the cursive Naskh script began to appear, though it only became commonplace in monuments from the Marinid and Nasrid period (13th–15th century) onward. Kufic was still employed, especially for more formal or solemn inscriptions such as religious content. However, from the 13th century onward Kufic became increasingly stylized and almost illegible. In the decoration of the Alhambra, one can find examples of "Knotted" Kufic, a particularly elaborate style where the letters tie together in intricate knots. This style is also found in other parts of the Islamic world and may have had its origins in Iran. The extensions of the letters could turn into strips or lines that continued to form more motifs or form the edges of a cartouche encompassing the rest of the inscription. As a result, Kufic script could be used in a more strictly decorative form, as the starting point for an interlacing or knotted motif that could be woven into a larger arabesque background.
Muqarnas
Muqarnas (also called mocárabe in Spain), sometimes referred to as "honeycomb" or "stalactite" carvings, consists of a three-dimensional geometric prismatic motif which is among the most characteristic features of Islamic architecture. This technique originated further east in Iran before spreading across the Muslim world. It was first introduced into al-Andalus and the western Maghreb by the Almoravids, who made early use of it in the early 12th century in the Qubba Ba'adiyyin in Marrakesh and in the Qarawiyyin Mosque in Fes. While the earliest forms of muqarnas in Islamic architecture were used as squinches or pendentives at the corners of domes, they were quickly adapted to other architectural uses. In the western Islamic world they were particularly dynamic and were used, among other examples, to enhance entire vaulted ceilings, fill in certain vertical transitions between different architectural elements, and even to highlight the presence of windows on otherwise flat surfaces.
Zellij (tilework)
Tilework, particularly in the form of mosaic tilework called zellij, is a standard decorative element along lower walls and for the paving of floors across the region. It consists of hand-cut pieces of faience in different colours fitted together to form elaborate geometric motifs, often based on radiating star patterns. Zellij made its appearance in the region during the 10th century and became widespread by the 14th century during the Marinid and Nasrid period. It may have been inspired or derived from Byzantine mosaics and then adapted by Muslim craftsmen for faience tiles.
In the traditional Moroccan craft of zellij-making, the tiles are first fabricated in glazed squares, typically 10 cm per side, then cut by hand into a variety of pre-established shapes (usually memorized by heart) necessary to form the overall pattern. This pre-established repertoire of shapes combined to generate a variety of complex patterns is also known as the hasba method. Although the exact patterns vary from case to case, the underlying principles have been constant for centuries and Moroccan craftsmen are still adept at making them today.
Riads and gardens
A riad (sometimes spelled riyad) is an interior garden found in many Moorish palaces and mansions. It is typically rectangular and divided into four parts along its central axes, with a fountain at its middle. Riad gardens probably originated in Persian architecture (where this layout is also known as chahar bagh) and became a prominent feature in Moorish palaces in Spain (such as Madinat al-Zahra, the Aljaferia, and the Alhambra). In Morocco, they became especially widespread in the palaces and mansions of Marrakesh, where the combination of available space and warm climate made them particularly appealing. The term is nowadays applied in a broader way to traditional Moroccan houses that have been converted into hotels and tourist guesthouses.
Many royal palaces were also accompanied by vast pleasure gardens, sometimes built outside the main defensive walls or within their own defensive enclosure. This tradition is evident in the gardens of the Madinat al-Zahra built by the Caliphs of Cordoba (10th century), in the Agdal Gardens south of the Kasbah of Marrakesh created by the Almohads (12th century), the Mosara Garden created by the Marinids north of their palace-city of Fes el-Jdid (13th century), and the Generalife created by the Nasrids east of the Alhambra (13th century).
Building types
Mosques
Historically, there was a distinction between regular mosques and Friday mosques, which were larger and had a more important status by virtue of being the venue where the khutba (sermon) was delivered on Fridays. In the early Islamic era there was typically only one Friday mosque per city, but over time Friday mosques multiplied until it was common practice to have one in every neighbourhood or district of the city. Mosques could also frequently be accompanied by other facilities which served the community.
Most mosques in the region have roughly rectangular floor plans and follow the hypostyle format: they consist of a large prayer hall divided into naves or aisles by rows of horseshoe arches that run either parallel or perpendicular to the qibla wall (the wall towards which prayers faced). The qibla (direction of prayer) is symbolized by a decorative niche or alcove in the qibla wall, known as a mihrab. Next to the mihrab there is usually a symbolic pulpit known as a minbar, usually in the form of a staircase leading to a small kiosk or platform, where the imam would stand to deliver the khutba. The mosque also normally includes a sahn (courtyard) which often has a fountain or water basin to assist with ablutions. In early periods this courtyard could be relatively minor in proportion to the rest of the mosque, but in later periods (in Morocco at least) it became progressively larger until it was equal in size to the prayer hall and sometimes larger.
Hypostyle mosques also frequently follow the "T-type" model, in which the nave between the arches running towards the mihrab (perpendicular to the qibla wall) was wider than the others, as was also the aisle directly in front of and along the qibla wall (running parallel to it), thus forming a T-shaped space in the floor plan of the mosque. This part of the plan was often accentuated by greater decoration, such as more elaborate arch shapes or decorative cupola ceilings at each end of the "T".
From afar, mosque buildings are distinguished by their minaret towers. Minarets traditionally have a square shaft and are arranged in two tiers: the main shaft, which makes up most of its height, and a much smaller secondary tower above this which is in turn topped by a finial of copper or brass spheres. Some minarets in North Africa have octagonal shafts, though this is more characteristic of certain regions or periods. Inside the main shaft, a staircase, or in some cases a ramp, ascends to the top.
The floor plan of a mosque is also aligned with the direction of prayer, sometimes even at odds with the orientation of the streets around it. Today it is standard practice that the direction of prayer is the line marking the shortest distance between oneself and the Kaaba in Mecca. In the western Mediterranean, this corresponds to a generally eastern orientation (varying slightly depending on one's exact position). However, in early Islamic periods there were other interpretations of what the qibla should be. In the western Islamic world in particular, early mosques often had a southern orientation, as can be seen in major monuments like the Great Mosque of Cordoba and the Qarawiyyin Mosque in Fes. This was based on a reported hadith of the Islamic prophet Muhammad which stated that "what is between the east and west is a qibla", as well as on a popular view that mosques should follow the cardinal alignment of the Kaaba itself, whose axes are aligned according to certain astronomical references (e.g. its minor axis is aligned with the sunrise of the summer solstice).
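The modern convention mentioned above, taking the qibla as the direction of the shortest (great-circle) path to the Kaaba, can be illustrated with a short calculation. The following Python sketch is not from the source; it assumes the standard initial-bearing formula for great circles, approximate coordinates for the Kaaba (about 21.42° N, 39.83° E), and rough coordinates for Cordoba in the example.

```python
from math import radians, degrees, sin, cos, atan2

KAABA_LAT, KAABA_LON = 21.4225, 39.8262  # approximate coordinates of the Kaaba

def qibla_bearing(lat, lon):
    """Initial great-circle bearing, in degrees clockwise from true north,
    from the point (lat, lon) in degrees towards the Kaaba."""
    phi1, phi2 = radians(lat), radians(KAABA_LAT)
    dlon = radians(KAABA_LON - lon)
    x = sin(dlon) * cos(phi2)
    y = cos(phi1) * sin(phi2) - sin(phi1) * cos(phi2) * cos(dlon)
    return (degrees(atan2(x, y)) + 360.0) % 360.0

# From Cordoba (roughly 37.9 N, 4.8 W) the great-circle bearing is about 100 degrees,
# a little south of due east, whereas the Great Mosque of Cordoba itself is oriented
# considerably further towards the south, as discussed above.
print(round(qibla_bearing(37.9, -4.8), 1))
```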
Synagogues
Synagogues had a very different layout from mosques but in North Africa and Al-Andalus they often shared similar decorative trends as the traditional Islamic architecture around them, such as colourful tilework and carved stucco, though later synagogues in North Africa were built in other styles too. Notable examples of historic synagogues in Spain include the Synagogue of Santa Maria la Blanca in Toledo (rebuilt in its current form in 1250), the Synagogue of Cordoba (1315), and the Synagogue of El Tránsito in Toledo (1355–1357). In Morocco they include the Ibn Danan Synagogue in Fes, the Slat al-Azama Synagogue in Marrakesh, and the Beth-El Synagogue in Casablanca, though numerous other examples exist. One of the most famous historic synagogues in Tunisia is the 19th-century El Ghriba synagogue.
Madrasas
The madrasa was an institution which originated in northeastern Iran by the early 11th century and was progressively adopted further west. It provided higher education and served to train Islamic scholars, particularly in Islamic law and jurisprudence (fiqh), most commonly in the Maliki branch of Sunni legal thought. The madrasa of the Sunni world was generally antithetical to more heterodox religious doctrines, including the doctrine espoused by the Almohads. As such, in the westernmost parts of the Islamic world it only came to flourish in the late 13th century, under the Marinid, Zayyanid, and Hafsid dynasties that succeeded the Almohads.
In other parts of the Muslim world, the founders of madrasas could name themselves or their family members as administrators of the foundation's waqf (a charitable and inalienable endowment), making them a convenient means of protecting family fortunes, but this was not allowed under the Maliki school of law that was dominant in the western Islamic lands. As a result, the construction of madrasas was less prolific in the Maghreb and al-Andalus than it was further east. Madrasas in this region are also frequently named after their location or some other distinctive physical feature, rather than after their founders (as was common further east).
Madrasas were generally centered around a main courtyard with a central fountain, off which other rooms could be accessed. Student living quarters were typically distributed on an upper floor around the courtyard. Many madrasas also included a prayer hall with a mihrab, though only the Bou Inania Madrasa of Fes officially functioned as a full mosque and featured its own minaret.
Mausoleums and zawiyas
Most Muslim graves are traditionally simple and unadorned, but in North Africa the graves of important figures were often covered in a domed structure (or a cupola of often pyramidal shape) called a qubba (also spelled koubba). This was especially characteristic of the tombs of "saints" such as walis and marabouts: individuals who came to be venerated for their strong piety, reputed miracles, or other mystical attributes. Many of these existed within the wider category of Islamic mysticism known as Sufism. Some of these tombs became the focus of entire religious complexes built around them, known as a zawiya (also spelled zaouia). They typically included a mosque, school, and other charitable facilities. Such religious establishments were major centers of Sufism across the region and grew in power and influence over the centuries, often associated with specific Sufi Brotherhoods or schools of thought.
Funduqs (merchant inns)
A funduq (also spelled foundouk or fondouk) was a caravanserai or commercial building which served as both an inn for merchants and a warehouse for their goods and merchandise. In North Africa some funduqs also housed the workshops of local artisans. As a result of this function, they also became centers for other commercial activities such as auctions and markets. They typically consisted of a large central courtyard surrounded by a gallery, around which storage rooms and sleeping quarters were arranged, frequently over multiple floors. Some were relatively simple and plain, while others, like the Funduq al-Najjarin in Fes, were quite richly decorated. While many structures of this kind can be found in historic North African cities, the only one in Al-Andalus to have been preserved is the Nasrid-era Corral del Carbón in Granada.
Hammams (bathhouses)
Hammams are public bathhouses which were ubiquitous in Muslim cities. Essentially derived from the Roman bathhouse model, hammams normally consisted of four main chambers: a changing room, from which one then moved on to a cold room, a warm room, and a hot room. Heat and steam were generated by a hypocaust system which heated the floors. The furnace re-used natural organic materials (such as wood shavings, olive pits, or other organic waste byproducts) by burning them for fuel. The smoke generated by this furnace helped with heating the floors while excess smoke was evacuated through chimneys. Of the different rooms, only the changing room was heavily decorated with zellij, stucco, or carved wood. The cold, warm, and hot rooms were usually vaulted or domed chambers without windows, designed to keep steam from escaping, but partially lit thanks to small holes in the ceiling which could be covered by ceramic or coloured glass. Many historic hammams have been preserved in cities like Marrakesh and Fez in Morocco, partly thanks to their continued use by locals up to the present day. In Al-Andalus, by contrast, they fell out of use after the expulsion of Muslims from the Iberian Peninsula and are only preserved as archeological sites or historic monuments.
Palaces
The main palaces of rulers were usually located inside a separate fortified district or citadel of the capital city. These citadels included a complex of different structures including administrative offices, official venues for ceremonies and receptions, functional amenities (such as warehouses, kitchens, and hammams), and the private residences of the ruler and his family. Although palace architecture varied from one period and region to the next, certain traits recurred such as the predominance of courtyards and internal gardens around which elements of the palace were typically centered.
In some cases, rulers were installed in the existing fortified citadel of the city, such as the many Alcazabas and Alcázars in Spain, or the Kasbahs of North Africa. The original Alcazar of Cordoba, used by the Umayyad emirs and their predecessors, was an early example of this. When Cordoba first became the capital of Al-Andalus in the 8th century the early Muslim governors simply moved into the former Visigothic palace, which was eventually redeveloped and modified by the Umayyad rulers after them. The Alcázar of Seville was also occupied and rebuilt in different periods by different rulers. In Marrakesh, Morocco, the Almohad Caliphs in the late 12th century built a large new palace district, the Kasbah, on the south side of the city, which was subsequently occupied and rebuilt by the later Saadian and 'Alawi dynasties. In Al-Andalus many palace enclosures were highly fortified alcazabas located on hilltops overlooking the rest of the city, such as the Alcazaba of Almería and the Alcazaba of Málaga, which were occupied by the various governors and local rulers. The most famous of all these, however, is the Alhambra of Granada, which was built up by the Nasrid dynasty during the 13th to 15th centuries.
Rulers with enough resources sometimes founded entirely separate and autonomous royal cities outside their capital cities, such as Madinat al-Zahra, built by Abd ar-Rahman III outside Cordoba, or Fes el-Jdid built by the Marinids outside old Fez. Some rulers even built entirely new capital cities centered on their palaces, such as the Qal'at Bani Hammad, founded in 1007 by the Hammadids in present-day Algeria, and Mahdia, begun in 916 by the Fatimid Caliphs in present-day Tunisia. In many periods and regions rulers also built outlying private estates with gardens in the countryside. As early as the 8th century, for example, Abd ar-Rahman I possessed such estates in the countryside outside Cordoba. The later Nasrid-built Generalife, located on the mountainside a short distance outside the Alhambra, is also an example of outlying residence and garden made for the private use of the rulers. Moroccan sultans also built pleasure pavilions or residences within the vast gardens and orchards that they maintained outside their cities, notably the Menara Gardens and Agdal Gardens on the outskirts of Marrakesh.
Fortifications
In Al-Andalus
The remains of castles and fortifications from various periods of Al-Andalus have survived across Spain and Portugal, often situated on hilltops and elevated positions that command the surrounding countryside. A large number of Arabic terms were used to denote different types and functions, many of which were borrowed into Spanish and are found in present-day toponyms, such as Alcazaba, meaning a fortified enclosure or citadel where the governor or ruler was typically installed, and Alcázar, which was typically a palace protected by fortifications. Fortifications were built either in stone or in rammed earth. Stone was used more commonly in the Umayyad period (8th–10th centuries) while rammed earth became more common in subsequent periods and was also more common in the south.
In the Umayyad period (8th–10th centuries) an extensive network of border fortifications stretched in a wide line roughly from Lisbon in the west then up through the Central System of mountains in Spain, around the region of Madrid, and up to the region of Navarre and Huesca in the northeast. Castles and fortified garrisons existed in the interior of the realm as well. Many of these early fortifications had relatively simple designs with no barbicans and only a single line of walls. The gates were typically straight entrances with an inner and outer doorway on the same axis. Castles typically had quadrangular layouts with walls reinforced by rectangular towers. The authorities also built multitudes of small, usually round, watch towers which could rapidly send messages to each other via fire or smoke signals.
Following the collapse of the Caliphate in the 11th century, the resulting political insecurity encouraged further fortification of cities. Military architecture also became steadily more complex. Fortified gates began to regularly include bent entrances. Military technology grew still more sophisticated during the Almohad period (12th and early 13th centuries), with barbicans appearing in front of city walls and albarrana towers appearing as a recurring innovation. Fortification towers also became taller and more massive, sometimes with round or polygonal bases but more commonly still rectangular. Some of the more famous tower fortifications from this period include the Calahorra Tower in Cordoba and the Torre del Oro in Seville. The latter is a dodecagonal tower which fortified a corner of the city walls and which, along with another tower across the river, protected the city's harbour.
In the final period from the 13th to 15th centuries, fortresses and towns were again refortified. In addition to the fortifications of Granada and its Alhambra, the Nasrids built or rebuilt the Gibralfaro Castle of Málaga and the castle of Antequera, and many smaller strategic hilltop forts like that of Tabernas. This late period saw the construction of massive towers and keeps which likely reflected a growing influence of Christian military architecture.
In the Maghreb
Some of the oldest surviving Islamic-era monuments in the Maghreb are military structures in present-day Tunisia. The best-known examples are the Ribat of Sousse and the Ribat of Monastir, both dating generally from the Aghlabid period in the 9th century. A ribat was a type of residential fortress which was built to guard the early frontiers of Muslim territory in North Africa. They were built at intervals along the coastline so that they could signal each other from afar. Also dating from the same period are the city walls of Sousse and Sfax, both made in stone and bearing similarities to earlier Byzantine-Roman walls in Africa.
Several ruling dynasties in the region built fortified capitals or citadels. The Fatimids built a heavily-fortified new capital at Mahdia in present-day Tunisia, located on a narrow peninsula extending from the coastline into the sea and surrounded by walls and a single land gate. The Hammadids also built a new fortified capital in present-day Algeria known as Qal'at Bani Hammad in the 11th century, located on a strategic elevated site. Along with the earlier Zirid fortifications of Bijaya and 'Ashir, its walls were made mainly of rough stone or rubble stone, demonstrating a slow shift in construction methods away from earlier Byzantine-Roman methods and towards more characteristically North African and Berber architecture. The later Marinids fortified their palace-city of Fes el-Jdid, built in the late 13th century, with a line of double walls.
Starting with the Almoravid and Almohad domination of the 11th–13th centuries, most medieval fortifications in the western Maghreb shared many characteristics with those of Al-Andalus. City walls in Morocco were generally built out of rammed earth, reinforced at regular intervals by square towers, as exemplified by the walls of Marrakesh, the walls of Fes, and the walls of Rabat. In western Algeria, the walls of Tlemcen (formerly Tagrart) were likewise partly built by the Almoravids with a mix of rubble stone at the base and rammed earth above. As elsewhere, the gates were often the weakest points of a defensive wall and so were usually more heavily fortified than the surrounding wall. In Morocco, gates were often designed with a bent entrance.
In later centuries, Moroccan rulers continued to build traditional walls and fortifications while at the same time borrowing elements from European military architecture in the new gunpowder age, most likely through their encounters with the Portuguese and other European powers at this time. The Saadian bastions of Fes, such as Borj Nord, are one early example of these architectural innovations.
"Kasbah", or tighremt in Amazigh, can also refer to various fortresses or fortified mansions in the Atlas Mountains and the desert oases regions of Morocco. In these regions, often traditionally Amazigh (Berber) areas, kasbahs are again made of rammed earth and mud-brick (or sometimes stone), often marked by square corner towers and decorated with simple geometric motifs. Communal fortified granaries are another feature of local Berber architecture in southern Morocco, Algeria, and southern Tunisia, with styles and layouts differing from region to region.
Preservation
Many important examples of Moorish architecture are located in Europe, in the Iberian Peninsula (in the former territories of Al-Andalus), with an especially strong concentration in southern Spain (modern-day Andalusia). There is also a high concentration of historic Islamic architecture in Morocco, Algeria, and Tunisia. The types of monuments that have been preserved vary greatly between regions and between periods. For example, the historic palaces of North Africa have rarely been preserved, whereas Spain retains multiple major examples of Islamic palace architecture that are among the best-studied in the world. By contrast, few major mosques from later periods have been preserved in Spain, whereas many historic mosques are still standing and still being used in North Africa.
See also
References
Notes
Citations
Further reading
– Comprehensive review of palace architecture in Al-Andalus and the Maghreb; slightly more technical than an introductory text.
Marçais, Georges (1954). L'architecture musulmane d'Occident. Paris: Arts et métiers graphiques. – In French; older, but one of the major comprehensive works on Islamic architecture in the region.
Bloom, Jonathan M. (2020). Architecture of the Islamic West: North Africa and the Iberian Peninsula, 700–1800. Yale University Press. – A more recent English-language introduction to Islamic architecture in the region.
Barrucand, Marianne; Bednorz, Achim (1992). Moorish architecture in Andalusia. Taschen. . – Overview focusing on architecture in al-Andalus.
Dodds, Jerrilynn D., ed. (1992). Al-Andalus: The Art of Islamic Spain. New York: The Metropolitan Museum of Art. . – Edited volume and exhibition catalogue focusing on architecture of al-Andalus and some related topics.
Salmon, Xavier (2018). Maroc Almoravide et Almohade: Architecture et décors au temps des conquérants, 1055–1269''. Paris: LienArt. – In French; well-illustrated volume focusing on Almoravid and Almohad architecture. The same author has published other books on Saadian and Marinid architecture.
Berber architecture
Arabic architecture
Architectural styles
Islamic architecture
Medieval Spanish architecture
Architectural history
Culture of al-Andalus
Architecture in Portugal
Architecture in Spain
. | Moorish architecture | ["Engineering"] | 19,931 | ["Architectural history", "Architecture"] |
7,441,771 | https://en.wikipedia.org/wiki/Galloway%20Forest%20Park | Galloway Forest Park is a forest park operated by Forestry and Land Scotland, principally covering woodland in the historic counties of Kirkcudbrightshire and Wigtownshire in the administrative area of Dumfries and Galloway. It is claimed to be the largest forest in the UK. The park was granted Dark Sky Park status ("Galloway Forest Dark Sky Park") in November 2009, being the first area in the UK to be so designated.
The park, established in 1947, covers and receives over 800,000 visitors per year. The three visitor centres at Glen Trool, Kirroughtree, and Clatteringshaws receive around 150,000 each year. Much of the Galloway Hills lie within the boundaries of the park and there is good but rough hillwalking and also some rock climbing and ice-climbing within the park. Within or near the boundaries of the park are several well developed mountain bike tracks, forming part of the 7stanes project.
As well as catering for recreation, the park includes economically valuable woodland, producing 500,000 tons of timber per year.
Galloway Forest Park and the people who visit it and work in it were the subject of a six-part BBC One documentary series aired in early 2018 entitled "The Forest".
Dark sky
In November 2009 the International Dark-Sky Association conferred Dark Sky Park status on the Galloway Forest Park, the first area in the UK to be so designated.
The Scottish Dark Sky Observatory, near Dalmellington, is located within the northern edge of the Galloway Forest Dark Sky Park. The observatory was partly funded by the Scottish Government and opened in 2012. It suffered a devastating fire during the early hours of 23 June 2021, resulting in complete destruction of the observatory. The fire is currently being treated as suspicious.
Alexander Murray
The park is also home to the ruins of the birthplace of Alexander Murray, the son of a shepherd and farm labourer. Murray was self-taught in multiple languages, and eventually went on to become professor of Oriental languages at the University of Edinburgh. A short distance away, high on a hillside, is Murray's Monument, which was erected in his memory in 1835.
Typhoon crash
On 18 March 1944, 22-year-old Canadian pilot Kenneth Mitchell crashed his Hawker Typhoon aircraft in the forest. The impact killed him instantly. Mitchell was in training in preparation for his squadron's role fighting the German V-1 flying bombs in the Second World War. On 18 March 2009, 65 years to the day since the crash, a commemorative plaque was installed on a mortared cairn at the crash site, where pieces of the aircraft still remain. Mitchell was buried in Ayr Cemetery, Ayr.
See also
Loch Macaterick
References
External links
Recreation at Galloway Forest Park at the Forestry and Land Scotland website
'Activity Tourism' from the Countryside Recreation Network
Information on Hill Walking in the Galloway Hills
Rock and Ice climbing in the Galloway Hills
7 Stanes
7 Stanes - Galloway Forest Park
Forests and woodlands of Scotland
Country parks in Scotland
Dark-sky preserves in the United Kingdom
Parks in Dumfries and Galloway
Forest parks of Scotland | Galloway Forest Park | ["Astronomy"] | 628 | ["Dark-sky preserves in the United Kingdom", "Dark-sky preserves"] |
7,442,513 | https://en.wikipedia.org/wiki/Samuel%20Cate%20Prescott%20Award | The Samuel Cate Prescott Award has been awarded since 1964 by the Institute of Food Technologists (IFT) in Chicago, Illinois. It is awarded to food science or technology researchers who are under 36 years of age or who earned their highest degree within ten years before July 1 of the year the award is presented. This award is named for Samuel Cate Prescott (1872-1962), a food science professor from the Massachusetts Institute of Technology who was also the first president of IFT.
Award winners receive a plaque from IFT and a USD 3,000 honorarium.
Winners
References
List of past winners - Official site
Food technology awards
Awards established in 1964
1964 establishments in Illinois | Samuel Cate Prescott Award | ["Technology"] | 137 | ["Science and technology awards", "Food technology awards"] |
7,442,564 | https://en.wikipedia.org/wiki/Compact%20convergence | In mathematics compact convergence (or uniform convergence on compact sets) is a type of convergence that generalizes the idea of uniform convergence. It is associated with the compact-open topology.
Definition
Let $(X, \mathcal{T})$ be a topological space and $(Y, d_Y)$ be a metric space. A sequence of functions
$f_n \colon X \to Y$, $n \in \mathbb{N}$,
is said to converge compactly as $n \to \infty$ to some function $f \colon X \to Y$ if, for every compact set $K \subseteq X$, $f_n \to f$
uniformly on $K$ as $n \to \infty$. This means that for all compact $K \subseteq X$,
$$\lim_{n \to \infty} \sup_{x \in K} d_Y\left(f_n(x), f(x)\right) = 0.$$
Examples
If $X = (0,1)$ and $Y = \mathbb{R}$ with their usual topologies, with $f_n(x) = x^n$, then $f_n$ converges compactly to the constant function with value 0, but not uniformly.
If $X = (0,1]$, $Y = \mathbb{R}$, and $f_n(x) = x^n$, then $f_n$ converges pointwise to the function that is zero on $(0,1)$ and one at $x = 1$, but the sequence does not converge compactly.
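A brief verification of the first example (a standard argument added here for clarity, not part of the original text): any compact set $K \subset (0,1)$ is contained in some interval $[a, b]$ with $0 < a \le b < 1$, so
$$\sup_{x \in K} |x^n - 0| \le b^n \to 0 \quad (n \to \infty), \qquad \text{while} \qquad \sup_{x \in (0,1)} |x^n - 0| = 1 \ \text{for every } n,$$
which is why $f_n(x) = x^n$ converges uniformly on every compact subset of $(0,1)$ but not uniformly on $(0,1)$ itself.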
A very powerful tool for showing compact convergence is the Arzelà–Ascoli theorem. There are several versions of this theorem; roughly speaking, it states that every sequence of equicontinuous and uniformly bounded maps has a subsequence that converges compactly to some continuous map.
Properties
If $f_n \to f$ uniformly, then $f_n \to f$ compactly.
If $X$ is a compact space and $f_n \to f$ compactly, then $f_n \to f$ uniformly.
If $X$ is a locally compact space, then $f_n \to f$ compactly if and only if $f_n \to f$ locally uniformly.
If $X$ is a compactly generated space, $f_n \to f$ compactly, and each $f_n$ is continuous, then $f$ is continuous.
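The compact-versus-uniform distinction can also be checked numerically. The following Python sketch (illustrative only, not part of the original article) evaluates the suprema of $|x^n|$ over a compact subinterval of $(0,1)$ and over sample points approaching the open end at 1:

```python
import numpy as np

def sup_deviation(n, xs):
    """Supremum of |x**n - 0| over the sample points xs (the limit function is 0)."""
    return float(np.max(np.abs(xs ** n)))

compact_K = np.linspace(0.0, 0.9, 1000)       # a compact subset [0, 0.9] of (0, 1)
near_one = np.linspace(0.0, 0.999999, 1000)   # samples approaching the open end at 1

for n in (10, 100, 1000):
    print(n, sup_deviation(n, compact_K), sup_deviation(n, near_one))
# The suprema over [0, 0.9] tend to 0 (compact convergence),
# while those over points near 1 stay close to 1 (no uniform convergence on (0,1)).
```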
See also
Modes of convergence (annotated index)
Montel's theorem
References
Reinhold Remmert Theory of complex functions (1991 Springer) p. 95
Functional analysis
Convergence (mathematics)
Topology of function spaces
Topological spaces | Compact convergence | ["Mathematics"] | 302 | ["Sequences and series", "Functions and mappings", "Convergence (mathematics)", "Functional analysis", "Mathematical structures", "Mathematical objects", "Space (mathematics)", "Topological spaces", "Topology", "Mathematical relations"] |
7,442,709 | https://en.wikipedia.org/wiki/William%20V.%20Cruess%20Award | The William V. Cruess Award has been awarded every year since 1970. It is awarded for excellence in teaching in food science and technology and is the only award in which student members in the Institute of Food Technologists (IFT) can nominate. This award is named after William V. Cruess (1886-1968), a food science professor at the University of California, Berkeley and later at the University of California, Davis who was also the first ever IFT Award winner when he won the Nicholas Appert Award in 1942.
Award winners receive a bronze medal showing a side view of Cruess from the Northern California Section of IFT and a USD 3000 honorarium from the IFT office in Chicago, Illinois.
Winners
References
List of past winners - Official site
Food technology awards | William V. Cruess Award | ["Technology"] | 161 | ["Science and technology awards", "Food technology awards"] |
7,443,037 | https://en.wikipedia.org/wiki/List%20of%20neuroimaging%20software | Neuroimaging software is used to study the structure and function of the brain. To see an NIH Blueprint for Neuroscience Research funded clearinghouse of many of these software applications, as well as hardware, etc. go to the NITRC web site.
3D Slicer Extensible, free open source multi-purpose software for visualization and analysis.
Amira 3D visualization and analysis software
Analysis of Functional NeuroImages (AFNI)
Analyze developed by the Biomedical Imaging Resource (BIR) at Mayo Clinic.
Brain Image Analysis Package
CamBA
Caret Van Essen Lab, Washington University in St. Louis
CONN (functional connectivity toolbox)
Diffusion Imaging in Python (DIPY)
DL+DiReCT
EEGLAB
FMRIB Software Library (FSL)
FreeSurfer
Computational anatomy toolbox
Imaris Imaris for Neuroscientists
ISAS (Ictal-Interictal SPECT Analysis by SPM)
LONI Pipeline, Laboratory of Neuro Imaging, USC
Lead-DBS
Mango
NITRC The Neuroimaging Informatics Tools and Resources Clearinghouse. An NIH funded database of neuroimaging tools
NeuroKit, a Python open source toolbox for physiological signal processing
Neurophysiological Biomarker Toolbox
PyNets: A Reproducible Workflow for Structural and Functional Connectome Ensemble Learning (PyNets)
Seed-based d mapping (previously signed differential mapping, SDM): a method for conducting meta-analyses of voxel-based neuroimaging studies.
The Spinal Cord Toolbox (SCT) is the first comprehensive and open-source software for processing MR images of the spinal cord.
Statistical parametric mapping (SPM)
References
Neuroimaging | List of neuroimaging software | [
"Technology"
] | 354 | [
"Computing-related lists",
"Lists of software"
] |
7,443,163 | https://en.wikipedia.org/wiki/Carl%20R.%20Fellers%20Award | The Carl R. Fellers Award has been awarded every year since 1984. It is awarded to members of the Institute of Food Technologists (IFT) who are also members of Phi Tau Sigma, the honorary society of food science and technology, and who have brought honor and recognition to food science through achievements in areas other than research, development, education, and technology transfer. The award is named after Carl R. Fellers, a food science professor who chaired the food technology department at the University of Massachusetts Amherst when the first Phi Tau Sigma chapter was founded in 1953.
Award winners receive a plaque from IFT and a USD 3000 honorarium from Phi Tau Sigma.
Winners
References
List of past winners - Official site
Food technology awards | Carl R. Fellers Award | [
"Technology"
] | 147 | [
"Science and technology awards",
"Food technology awards"
] |
7,443,276 | https://en.wikipedia.org/wiki/Dead%20water | Dead water is the nautical term for a phenomenon which can occur when there is strong vertical density stratification due to salinity or temperature or both. It is common where a layer of fresh or brackish water rests on top of denser salt water, without the two layers mixing. The phenomenon is frequently, but not exclusively, observed in fjords where glacier runoff flows into salt water without much mixing. The phenomenon results from the vessel's energy going into producing internal waves, which in turn act on the vessel. The effect can also be found at density boundaries between subsurface layers.
In the better known surface phenomenon a ship traveling in a fresh water layer with a depth approximately equal to the vessel's draft will expend energy creating and maintaining internal waves between the layers. The vessel may be hard to maneuver or can even slow down almost to a standstill and "stick". An increase in speed by a few knots can overcome the effect. Experiments have shown the effect can be even more pronounced in the case of submersibles encountering such stratification at depth.
The phenomenon, long dismissed as sailors' yarns, was first described for science by Fridtjof Nansen, the Norwegian Arctic explorer, who recorded it from his ship Fram in August 1893 in the Nordenskiöld Archipelago near the Taymyr Peninsula.
Nansen's experience led him to ask the physicist and meteorologist Vilhelm Bjerknes to study it scientifically. Bjerknes had his student, Vagn Walfrid Ekman, investigate. Ekman, who later described the effect now bearing his name as the Ekman spiral, demonstrated that internal waves were the cause of dead water.
A modern study by two Université de Poitiers laboratories, the CNRS Institut Pprime and the Laboratoire de Mathématiques et Applications, revealed that the effect is due to internal waves moving the vessel back and forth. Two types occur. The first, as observed by Nansen, causes constant, abnormally slow progress. The second, the Ekman type, causes speed oscillations. The Ekman type may be temporary and become the Nansen type as the vessel escapes the particular regime causing the oscillating speed. An interesting historical possibility is that the effect contributed to the difficulties and loss of Cleopatra's ships at the Battle of Actium in 31 BC, a defeat that legend attributes to remora (suckerfish) attaching to the hulls.
See also
Ekman spiral
Ice drift
Iceberg
Internal wave
Nansen's Fram expedition
Nordenskiöld Archipelago
Polar ice cap
Polar ice packs
Polynya
Sea ice
Shelf ice
Vagn Walfrid Ekman – Swedish oceanographer
References
External links
Short movie demonstrating the phenomenon with a model
Description of Dead Water
Explanation of dead water
New Scientist article
'dead water' Encyclopædia Britannica Online. 3 December 2009
Nautical terminology
Physical oceanography
Waves | Dead water | [
"Physics",
"Chemistry"
] | 592 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Waves",
"Motion (physics)",
"Physical oceanography",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
7,443,433 | https://en.wikipedia.org/wiki/Calvert%20L.%20Willey%20Award | The Calvert L. Willey Award has been awarded every year since 1989. It is awarded to a member of the Institute of Food Technologists (IFT) who displayed meritorious and imaginative service to IFT. The award is named for Calvert L. Willey (1920-1994) who served as Executive Secretary and later Executive Director from 1961 until his retirement in 1987. Willey was given a distinguished service award by IFT at the 1987 Annual Meeting in Las Vegas, Nevada. This distinguished service award would be named in his honor and presented for the first time as an annual award at the 1989 Annual Meeting in Chicago, Illinois. It was the first IFT Award to be named for a living person.
Award winners receive a USD 3000 honorarium and a plaque from IFT.
Winners
References
List of past winners - Official site
Food technology awards | Calvert L. Willey Award | [
"Technology"
] | 170 | [
"Science and technology awards",
"Food technology awards"
] |
7,443,552 | https://en.wikipedia.org/wiki/Joug | The joug or Scots pint or Scottish pint was a Scottish unit of liquid volume measurement that was in use from at least 1661 – possibly as early as the 15th century – until the early 19th century, approximately equivalent to 1696 mL or roughly three imperial pints.
The standard was held at Stirling and thereby called the Stirling Jug. It went astray in 1745 and its loss was hidden by replacement by a standard pewter jug of roughly the same size. The error was discovered by Rev Alexander Bryce in 1750, who after a long search found the damaged jug in the attic of a Mr Urquhart, a coppersmith in Stirling, and restored the standard.
Bakers used the measure until the late 19th century.
One joug was sixteen Scottish gills (of approximately 106 mL each)
One joug was four mutchkins (of approximately 424 mL each)
One joug was two chopins (of approximately 848 mL each)
Eight jougs made a Scottish gallon (approximately 13.568 L)
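As a quick arithmetic check, the subdivisions above are simple halvings and quarterings of the 1696 mL joug; the following minimal Python sketch (purely illustrative; the unit names and the 1696 mL figure are taken from the list above) recomposes each subdivision into one joug and one Scottish gallon.
JOUG_ML = 1696  # approximate volume of one joug / Scots pint in millilitres
subdivisions = {
    "gill": 16,      # 16 gills of ~106 mL each
    "mutchkin": 4,   # 4 mutchkins of ~424 mL each
    "chopin": 2,     # 2 chopins of ~848 mL each
}
for name, count in subdivisions.items():
    size_ml = JOUG_ML / count
    print(f"{count} {name}s of ~{size_ml:.0f} mL = {count * size_ml:.0f} mL")
print(f"8 jougs = {8 * JOUG_ML / 1000:.3f} L (one Scottish gallon)")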
See also
Obsolete Scottish units of measurement
References
Scottish Weights and Measures: Capacity from SCAN
Obsolete Scottish units of measurement
Units of volume
Alcohol measurement | Joug | [
"Mathematics"
] | 236 | [
"Units of volume",
"Quantity",
"Units of measurement"
] |
7,443,810 | https://en.wikipedia.org/wiki/Jean%20Calvignac | Jean Calvignac is an IBM Fellow and was responsible for the architecture of PowerNP, an IBM network processor. He holds more than 220 patents.
Career
In 1998, at the IBM Laboratory in the Research Triangle Park, Calvignac and his team initiated the IBM network processor activities. He had previously been responsible for system design of the ATM switching products, which he initiated with his team in 1992 at the IBM Laboratory in La Gaude, France. Before that, he had held different management and technical leader positions for architecture and development of communication controller products at the La Gaude Laboratory. Calvignac joined IBM in 1971 as a development engineer in telephone switching products. He received an engineering degree in 1969 from the Grenoble Institute of Technology, France.
Calvignac has been awarded more than 220 patents, mostly in the field of communication and networking. He has contributed to standards and a few scientific papers.
He was named an IBM Fellow in 1997, IBM's highest technical honor. Calvignac is a Fellow of the IET (in Europe) and a Senior Member of the IEEE.
References
External links
French computer scientists
French electrical engineers
20th-century French inventors
Living people
Year of birth missing (living people)
IBM Fellows
IBM people
Fellows of the Institution of Engineering and Technology
Senior members of the IEEE
Grenoble Institute of Technology alumni
20th-century French engineers
21st-century French engineers | Jean Calvignac | [
"Engineering"
] | 286 | [
"Institution of Engineering and Technology",
"Fellows of the Institution of Engineering and Technology"
] |
7,444,590 | https://en.wikipedia.org/wiki/Ericsson%20T66 | Ericsson T66 is a discontinued mobile phone created by Ericsson Mobile Communications, their smallest ever. Released in September 2001, it surpassed the tiny Nokia 8210 in both compactness and weight at the time. At just 59 grams, it remains one of the lightest mobile phones ever released.
The T66 is compatible with GSM 900/1800/1900 mobile phone networks.
After Ericsson merged with Sony Corporation in 2001 to create Sony Ericsson, the T66's body and color were changed in a new model, the Sony Ericsson T600, released in 2002.
References
External links
Official T66 specifications on the Sony Ericsson website
Eldar Murtazin, Review of the Ericsson T66 GSM phone (in Russian)
Pavel Maryushkin, Ericsson T66 (in Russian)
Ericsson T66: phone description and technical specifications (in Russian)
T66
Mobile phones introduced in 2001 | Ericsson T66 | [
"Technology"
] | 236 | [
"Mobile technology stubs",
"Mobile phone stubs"
] |
7,444,893 | https://en.wikipedia.org/wiki/Injection%20molding%20of%20liquid%20silicone%20rubber | Injection molding of liquid silicone rubber (LSR) is a process to produce pliable, durable parts in high volume.
Liquid silicone rubber is a high-purity, platinum-cured silicone with low compression set, good stability and the ability to resist extreme heat and cold, making it ideally suited to the production of parts where high quality is required. Due to the thermosetting nature of the material, liquid silicone injection molding requires special treatment, such as intensive distributive mixing, while maintaining the material at a low temperature before it is pushed into the heated cavity and vulcanized.
Chemically, silicone rubber is a family of thermoset elastomers that have a backbone of alternating silicon and oxygen atoms and methyl or vinyl side groups. Silicone rubbers constitute about 30% of the silicone family, making them the largest group of that family. Silicone rubbers maintain their mechanical properties over a wide range of temperatures and the presence of methyl-groups in silicone rubbers makes these materials extremely hydrophobic, making them suitable for electrical surface insulations.
Typical applications for liquid silicone rubber are products that require high precision such as seals, sealing membranes, electric connectors, multi-pin connectors, infant products where smooth surfaces are desired, such as bottle nipples, medical applications as well as kitchen goods such as baking pans, spatulas, etc. Often, silicone rubber is overmolded onto other parts made of different plastics. For example, a silicone button face might be overmolded onto a Nylon 6,6 housing.
Equipment
In order for the liquid injection molding process to fully occur, several mechanical components must be in place. Typically, a molding machine requires a metered pumping device in conjunction with an injection unit—a dynamic or static mixer is attached. An integrated system can aid in precision and process efficiency. The critical components of a liquid injection molding machine include:
Injectors. An injecting device is responsible for pressurizing the liquid silicone to aid in the injection of the material into the pumping section of the machine. Pressure and injection rate can be adjusted at the operator's discretion.
Metering Units. Metering units pump the two primary liquid materials, the catalyst and the base forming silicone, ensuring that the two materials maintain a constant ratio while being simultaneously released.
Supply Drums. Supply drums, also called plungers, serve as the primary containers for mixing materials. Both the supply drums and a container of pigment connect to the main pumping system.
Mixers. A static or dynamic mixer combines materials after they exit the metering units. Once combined, pressure is used to drive the mixture into a designated mold.
Nozzle. To facilitate the deposition of the mixture into the mold, a nozzle is used. Often, the nozzle features an automatic shut-off valve to help prevent leaking and overfilling the mold.
Mold Clamp. A mold clamp secures the mold during the injection molding process, and opens the mold upon completion.
Characteristics of LSR
Biocompatibility: Under extensive testing, liquid silicone rubber has demonstrated superior compatibility with human tissue and body fluids. In comparison to other elastomers, LSR is resistant to bacteria growth and will not stain or corrode other materials. LSR is also tasteless and odorless and can be formulated to comply with stringent FDA requirements. The material can be sterilized via a variety of methods, including steam autoclaving, ethylene oxide (ETO), gamma, e-beam and numerous other techniques, meeting all required approvals such as BfR XV, FDA 21 CFR 177.2600, USP Class VI.
Durable: LSR parts can withstand extreme temperatures, which makes them an ideal choice for components under the hood of cars and in close proximity to engines. Parts fabricated via liquid silicone rubber injection molding are fire retardant and will not melt.
Chemical resistance: Liquid silicone rubber resists water, oxidation and some chemical solutions such as acids and alkali.
Temperature resistance: Compared to other elastomers, silicone can withstand a wide range of high/low temperature extremes.
Mechanical properties: LSR has good elongation, high tear and tensile strength, excellent flexibility and a hardness range of 5 to 80 Shore A.
Electrical properties: LSR has excellent insulating properties, which offer an appealing option for a host of electrical applications. Compared to conventional insulating material, silicone can perform in far higher and lower temperatures.
Transparency and pigmentation: LSR possesses a natural transparency. This attribute makes it possible to produce colorful, custom molded products.
Injection molding process
Liquid silicone rubbers are supplied in barrels. Because of their low viscosity, these rubbers can be pumped through pipelines and tubes to the vulcanization equipment. The two components are pumped through a static mixer by a metering pump. One of the components contains the catalyst, typically platinum-based. A coloring paste as well as other additives can also be added before the material enters the static mixer section. In the static mixer the components are well mixed and are transferred to the cooled metering section of the injection molding machine. The static mixer renders a very homogeneous material that results in products that are not only very consistent throughout the part, but also from part to part. This is in contrast to solid silicone rubber materials, which are purchased pre-mixed and partially vulcanized, are processed by transfer molding, and offer less material consistency and control, leading to higher part variability. Additionally, solid silicone rubber materials are processed at higher temperatures and require longer vulcanization times.
Liquid silicone has a very low viscosity and requires perfect sealing of the mold cavity in order to guarantee a burr-free finished product.
As injections are carried out at high temperature, steel dilation and natural shrinkage of materials must be considered at the design stage of the LSR injection tooling.
From the metering section of the injection molding machine, the compound is pushed through cooled sprue and runner systems into a heated cavity where the vulcanization takes place. The cold runner and general cooling results in no loss of material in the feed lines. The cooling allows production of LSR parts with nearly zero material waste, eliminating trimming operations and yielding significant savings in material cost.
Liquid silicone rubbers are supplied in a variety of containers, from tubes to 55-gallon drums. Because of their viscous nature, these liquids are pumped at high pressures (500 - 5000 psi) based on the durometer of the material. The raw materials are shipped in two separate containers (known in the industry as a kit) identified as "A" and "B" compounds, with the "B" side usually containing the catalyst, though this may vary based on the brand of silicone used. The two (A and B) compounds must be mixed in a 1 to 1 ratio, usually by way of a static mixer, adding pigment during the mixing process before the curing process begins. Once the two components come together the curing process begins immediately. A chiller supplying cold water to jacketed fittings is typically used to retard the curing process prior to the material's introduction to the mold. A color pigment can be added via a color injector used in conjunction with the material pump (closed loop metering system) before the material enters the static mixer section.
In a cold deck scenario, the 1 to 1 mixed compound is pumped through cooled sprue and runner systems into a heated cavity where the vulcanization takes place. The cold runner and general cooling result in minimal loss of material as the injection occurs directly into the part or cavity, saving on overall material costs and using high consistency rubber. The cooling allows production of LSR parts with nearly zero material valve gate waste; however, this does not guarantee a "flash free" finished part. Molds and tooling vary in design, execution and cost. A good cold runner is expensive compared to conventional hot runner tooling, and has the potential to provide a high level of performance.
Advantages of liquid silicone injection molding
Batch stability (ready-to-use material)
Process repeatability
Direct injection (no waste)
Short cycle time
‘Flashless’ technology (no burrs)
Automated process
Automated demolding systems
References
Further reading
Elastomers
Plastics industry
Rubber
Silicon
Injection molding | Injection molding of liquid silicone rubber | [
"Chemistry"
] | 1,729 | [
"Synthetic materials",
"Elastomers"
] |
7,445,070 | https://en.wikipedia.org/wiki/CSS%20image%20replacement | CSS image replacement is a Web design technique that uses Cascading Style Sheets to replace text on a Web page with an image containing that text. It is intended to keep the page accessible to users of screen readers, text-only web browsers, or other browsers where support for images or style sheets is either disabled or nonexistent, while allowing the image to differ between styles. It is also named Fahrner image replacement after Todd Fahrner, one of the people originally credited with the idea of image replacement in 2003.
With the introduction of CSS web font support in all major web browsers, CSS image replacement is now little used.
Motivation
The typical method of inserting an image in an HTML document is via the <img> tag. This method has its drawbacks with regards to accessibility and flexibility, however:
While the alt attribute is designed for providing a textual representation of the image content, this precludes the use of HTML markup in the textual representation and causes problems with some search robots.
Using the <img> tag to show text is presentational; many Web designers argue that presentational elements should be separated from HTML content by placing the former in a CSS style sheet.
Images referenced using an <img> tag cannot be easily changed via CSS, causing problems with alternative stylesheets.
Fahrner image replacement was devised to rectify these issues.
Implementations
The original Image Replacement implementation described by Douglas Bowman used a heading, inside of which was a <span> element containing the text of the heading:
<h3 id="firHeader"><span>Sample Headline</span></h3>
Through style sheets, the heading was then given a background containing the desired image, and the <span> hidden by setting its display CSS property to none:
#firHeader
{
width: 300px;
height: 50px;
background: #fff url(firHeader.gif) top left no-repeat;
}
#firHeader span
{
display: none;
}
It was soon discovered, however, that this method caused some screen readers to skip over the heading entirely, as they would not read any text that had a display property of none. The later Phark method, developed by Mike Rundle in 2003, instead used the text-indent property to push the text out of the image's area, addressing this issue:
#firHeader
{
width: 300px;
height: 50px;
text-indent: -5000px; /* ← Phark */
}
The Phark method had its own problems, however; in visual browsers where CSS was on but images off, nothing would display.
Also in 2003, Dave Shea's eponymous Shea method solved both of the issues mentioned earlier, at the cost of an extra <span>:
<h3 id="header"><span></span>Revised Image Replacement</h3>
By absolutely positioning an empty <span> over the text element, the text is effectively hidden. If the image fails to load, the text behind it is still displayed. For this reason, images with transparency cannot be used with the Shea method.
#header
{
width: 329px;
height: 25px;
position: relative;
}
#header span
{
background: url(firHeader.gif) no-repeat;
position: absolute;
width: 100%;
height: 100%;
}
Over a dozen different methods have since been developed, with varying degrees of compatibility and complexity.
References
External links
Revised Image Replacement – an overview of the various FIR techniques by Dave Shea
Ultimate Image Replacement – a comprehensive image replacement technique by Jesse Schoberg
Web design
Cascading Style Sheets
Obsolete technologies | CSS image replacement | [
"Engineering"
] | 776 | [
"Design",
"Web design"
] |
7,445,076 | https://en.wikipedia.org/wiki/Extra-pair%20copulation | Extra-pair copulation (EPC) is a mating behaviour in monogamous species. Monogamy is the practice of having only one sexual partner at any one time, forming a long-term bond and combining efforts to raise offspring together; mating outside this pairing is extra-pair copulation. Across the animal kingdom, extra-pair copulation is common in monogamous species, and only a very few pair-bonded species are thought to be exclusively sexually monogamous. EPC in the animal kingdom has mostly been studied in birds and mammals. Possible benefits of EPC can be investigated within non-human species, such as birds.
For males, a number of theories are proposed to explain extra-pair copulations. One such hypothesis is that males maximise their reproductive success by copulating with as many females as possible outside of a pair bond relationship because their parental investment is lower, meaning they can copulate and leave the female with minimum risk to themselves. Females, on the other hand, have to invest a lot more in their offspring; extra-pair copulations produce a greater cost because they put the resources that their mate can offer at risk by copulating outside the relationship. Despite this, females do seek out extra pair copulations, and, because of the risk, there is more debate about the evolutionary benefits for females.
In human males
Extra-pair copulation in men has been explained as being partly due to parental investment. Research has suggested that copulation poses more of a risk to future investment for women, as they have the potential of becoming pregnant and consequently require a large parental investment over the gestation period and the further rearing of the offspring. Contrastingly, men are able to copulate and then abandon their mate as there is no risk of pregnancy for themselves, meaning there is a smaller risk of parental investment in any possible offspring. It has been suggested that, due to having such low parental investment, it is evolutionarily adaptive for men to copulate with as many women as possible. This will allow males to spread their genes with little risk of future investment, but it does come with the increased risk of sexually transmitted infections.
Various factors can increase the probability of EPC in males. Firstly, males with low levels of fluctuating asymmetry are more likely to have EPCs. This may be due to the fact that signals of low fluctuating asymmetry suggest that the males have "good genes", making females more likely to copulate with them as it will enhance the genes of their offspring, even if they do not expect long-term commitment from the male. Psychosocial stress early on in life, including behaviours such as physical violence and substance abuse, can predict EPC in later life. This has been explained as being due to Life History Theory, which argues that individuals who are reared in environments where resources are scarce and life expectancy is low are more likely to engage in reproductive behaviours earlier in life in order to ensure the proliferation of their genes. Individuals reared in these environments are said to have short life histories. With respect to Life History Theory, these findings have been explained by suggesting that males who experienced psychosocial stress early in life have short life histories, making them more likely to try to reproduce as much as possible by engaging in EPC to avoid gene extinction.
However, men may also choose not to have EPCs for multiple reasons. One reason may be that long-term monogamous relationships can help form environments that will aid the successful rearing of offspring, as the male is present to help raise them, leading to an increased probability of the male's genes surviving to the next generation. A second reason that EPCs may be avoided by a male is that they can be costly; the EPC may be discovered, leading to the dissolution of the long-term relationship with their partner and, in some cases, to their partner assaulting or even killing them. Men may also avoid EPCs to minimize the risk of putting themselves at increased opportunity for STD transmission, which can be common in EPCs. The partners in the EPC may be promiscuous as well, leading to a higher statistical probability of contracting venereal diseases; this would counter the lower incidence of STD transmission among exclusively monogamous sexually active couples. Spousal homicide is more likely to be committed by males than by females.
In human females
From an evolutionary perspective, females have to invest a lot more in their offspring than males due to prolonged pregnancy and child rearing, and a child has a better chance of survival and development with two parents involved in child-rearing. Therefore, extra-pair copulations have a greater cost for women because they put the support and resources that their mate can offer at risk by copulating outside the relationship. There is also the increased risk of sexually transmitted infections, which is suggested as a possible evolutionary reason for the transition from polygamous to monogamous relationships in humans. Despite this, females do seek out extra-pair copulation, with some research finding that women's levels of infidelity are equal to that of men's, although this evidence is mixed. Due to the increased risk, there is more confusion about the evolutionary benefits of extra-pair copulation for females.
The most common theory is that women mate outside of the monogamous relationship to acquire better genetic material for their offspring. A female in a relationship with a male with 'poorer genetic quality' may try to enhance the fitness of her children and therefore the continuation of her own genes by engaging in extra-pair copulation with better quality males. A second theory is that a woman will engage in extra-pair copulation to seek additional resources for herself or her offspring. This is based on observations from the animal world in which females may copulate outside of their pair-bond relationship with neighbours to gain extra protection, food or nesting materials. Finally, evolutionary psychologists have theorized that extra-pair copulation is an indirect result of selection on males. The alleles in males that promote extra-pair copulation as an evolutionary strategy to increase reproductive success is shared between sexes leading to this behaviour being expressed in females.
There are also social factors involved in extra-pair copulation. Both males and females have been found to engage in more sexual behaviour outside of the monogamous relationship when experiencing sexual dissatisfaction in the relationship, although how this links to evolutionary theory is unclear. Surveys have found cultural differences in attitudes towards infidelity, though it is relatively consistent that female attitudes are less favorable toward infidelity than male attitudes.
Other animals
As well as humans, EPC has been found in many other socially monogamous species. When EPC occurs in animals which show sustained female-male social bonding, this can lead to extra-pair paternity (EPP), in which the female reproduces with an extra-pair male, and hence produces EPO (extra-pair offspring).
Due to the obvious reproductive success benefits for males, it used to be thought that males exclusively controlled EPCs. However, it is now known that females also seek EPC in some situations.
In birds
Extra-pair copulation is common in birds. For example, zebra finches, although socially monogamous, are not sexually monogamous and hence do engage in extra-pair courtship and attempts at copulation. In a laboratory study, female zebra finches copulated over several days, many times with one male and only once with another male. Results found that significantly more eggs were fertilised by the extra-pair male than expected proportionally from just one copulation versus many copulations with the other male. EPC proportion varies between different species of birds. For example, in eastern bluebirds, studies have shown that around 35% of offspring is due to EPC. Some of the highest levels of EPP are found in the New Zealand hihi/stitchbird (Notiomystis cincta), in which up to 79% of offspring are sired by EPC. EPC can have significant consequences for parental care, as shown in azure-winged magpie (Cyanopica cyanus).
In socially polygynous birds, EPC is only half as common as in socially monogamous birds. Some ethologists consider this finding to be support for the 'female choice' hypothesis of mating systems in birds.
In mammals
EPC has been shown in monogamous mammals, such as the white-handed gibbon. A study of one group found 88% in-pair copulation and 12% extra-pair copulation. However, there is much variability in rates of EPC in mammals. One study found that this disparity in EPC is better predicted by the differing social structures of different mammals, rather than differing types of pair bonding. For example, EPC was lower in species who live in pairs compared to those who live in solitary or family structures.
Reasons for evolution
Some argue that EPC is one way in which sexual selection operates for genetic benefits, which is why the extra-pair males involved in EPC seem to be a non-random subset. There is some evidence for this in birds. For example, in swallows, males with longer tails are involved in EPC more than those with shorter tails. Also, female swallows with shorter-tailed within-pair mates are more likely to conduct EPC than those whose mates have longer tails. A similar pattern has been found for black-capped chickadees, in which all extra-pair males had higher rank than the within-pair males. But some argue that genetic benefits for offspring are not the reason females participate in EPC. A meta-analysis of genetic benefits of EPC in 55 bird species found that extra-pair offspring were not more likely to survive than within-pair offspring. Also, extra-pair males did not show significantly better 'good-genes' traits than within-pair males, except for being slightly larger overall.
Another potential explanation for the occurrence of EPC in organisms where females solicit EPC is that the alleles controlling such behaviour are intersexually pleiotropic. Under the hypothesis of intersexual antagonistic pleiotropy, the benefit males get from EPC cancels out the negative effects of EPC for females. Thus, the allele that controls EPC in both organisms would persist, even if it would be detrimental to the fitness of females. Similarly, according to the hypothesis of intrasexual antagonistic pleiotropy, the allele that controls EPC in females also controls a behaviour that is under positive selection, such as receptiveness towards within-pair copulation.
References
Developmental biology
Mating
Reproduction in animals
Sexuality
Promiscuity | Extra-pair copulation | [
"Biology"
] | 2,239 | [
"Reproduction in animals",
"Behavior",
"Developmental biology",
"Reproduction",
"Sex",
"Ethology",
"Sexuality",
"Mating",
"Promiscuity"
] |
7,445,411 | https://en.wikipedia.org/wiki/Battlefield%20Airborne%20Communications%20Node | The Battlefield Airborne Communications Node (BACN) is a United States Air Force (USAF) airborne communications relay and gateway system carried by the unmanned EQ-4B and the manned Bombardier E-11A aircraft. BACN enables real-time information flow across the battlespace between similar and dissimilar tactical data link and voice systems through relay, bridging, and data translation in line-of-sight and beyond-line-of-sight situations. Its ability to translate between dissimilar communications systems allows them to interoperate without modification.
Because of its flexible deployment options and ability to operate at high altitudes, BACN can enable air and surface forces to overcome communications difficulties caused by mountains, other rough terrain, or distance. BACN provides critical information to all operational echelons and increases situational awareness by converging tactical and operational air and ground pictures. For example, an Army unit on the ground currently sees a different picture than an aircrew, but with BACN, both can see the same picture.
On 22 February 2010, the US Air Force and the Northrop Grumman BACN Team received the 2010 Network Centric Warfare Award from the Institute for Defense and Government Advancement.
On 27 January 2020, a USAF E-11A crashed in Afghanistan, killing both crew members on board.
Purpose
Individual tactical data links, such as Link 16 and EPLRS, are part of the larger tactical data link network, encompassing tactical data links, common data links, and weapon data links. Most military platforms or units are equipped with a tactical data link capability tailored to their individual missions. Those tactical data link capabilities are not necessarily interoperable with one another, preventing the digital exchange of information between military units. BACN acts as a universal translator, or gateway, that makes the tactical data links work with one another. BACN also serves as an airborne repeater, connecting tactical data link equipped military units that are not within line of sight of one another.
Background
Interoperability between airborne networking waveforms has been a persistent challenge. Multiple systems have been developed to address the challenge, including the Air Defense Systems Integrator (ADSI), Gateway Manager, and Joint Range Extension (JRE) product lines. However, those product lines were separately funded/maintained and had interoperability concerns of their own. The solution was an "Objective Gateway" which would serve as a Universal Translator to make data from one network interoperable with another.
In 2005, the USAF's AFC2ISRC and ESC created BACN as an Objective Gateway technology demonstrator to provide voice and data interoperability between aircraft in a single battle area. The four key principles were
radio agnostic - it would support a variety of communication protocols
platform agnostic - BACN could be mounted on a variety of aircraft
un-tethered - unlike previous repeaters, which were hung from floating aerostats, BACN has the ability to move within the battlespace
Knowledge-based intelligence - the ability to sense waveform characteristics of sender and receiver and automatically route traffic.
The BACN first flight was November 2005 at MCAS Miramar in San Diego, CA.
BACN was successfully demonstrated in Joint Expeditionary Force eXperiment (JEFX) 2006 and JEFX 2008 and selected for field deployment.
Joint support
Getting critical air support to troops in contact with the enemy supports both troops on the ground and in the air.
This project is not limited to combat operations. It has provided the World Food convoy commander with “comms-on-the-move.” This capability allows convoys to stay in continuous contact with air support and with command channels in complex or adverse terrain, while mitigating exposure to attacks as the node is continually moving.
Platforms
The BACN prototype was originally developed and tested in 2005–2008 on the NASA WB-57 high altitude test aircraft during Joint Expeditionary Force Experiments and other experimentation venues. The last two flying WB-57s were used for this mission in Afghanistan.
BACN was also deployed for testing on a Bombardier Global 6000 and originally designated as the RC-700A under a reconnaissance classification. The aircraft was later re-designated as the E-11A under the special electronics installation category. The Global 6000 was selected due to its high service ceiling (up to 51,000 ft) and long flight duration (up to 12 hours). These flight characteristics are critical in providing unified datalink and voice networks in the mountainous terrain encountered in the current theater of operations.
Additional E-11As have been deployed to increase availability and flexibility. These have been used in operations in Afghanistan.
BACN payloads have also been developed, installed, and operated on special variant EQ-4B Global Hawk aircraft to provide unmanned long endurance high altitude communications coverage. The combination of BACN payloads on E-11A and EQ-4 aircraft gives planners and operators flexibility to adapt to mission needs and increase coverage in the battlespace to near 24/7 operations. The effectiveness of BACN has increased demand for additional BACN-equipped EQ-4B Global Hawk aircraft in the field, and the system remains in high demand, with the Air Force likely to continue using it for many years to come.
Northrop Grumman has also developed BACN pods that can be temporarily mounted to other various aircraft.
BACN as a concept
BACN has been a controversial program within the DoD. This is caused by a number of issues including the personality clashes between the service people who conceived the project back in late 2004 and the traditional acquisition bureaucracy. This was particularly true between requirements developers at the former Air Force Command and Control Intelligence, Surveillance, Reconnaissance Center at Langley AFB, Virginia and their acquisition partners at the Electronic Systems Center (ESC) at Hanscom AFB, Massachusetts, part of Air Force Materiel Command.
BACN divides military planners and acquisition bureaucrats on two main fronts. First, how will an "Airborne Network" evolve beyond the existing tactical data links on today's platforms. Second, the BACN effort presupposes that the capability will initially be "outsourced" to commercial companies that will provide an "airborne network" as a service to the DOD for the foreseeable future.
Future
With the increasing likelihood of a contested electromagnetic spectrum (EMS) in an era of great power competition, the idea of a "BACN-mesh" was proposed by Professor Jahara Matisek (a former E-11 BACN pilot) at the US Air Force Academy, as a way of pursuing new multi-domain war-fighting options against near-peers. Specifically, Prof. Matisek suggests that smart node pods (i.e. BACN-light payloads affixed to aircraft with hardpoints) could provide layered BACN "bridging" connections and Tactical Data Link (TDL) services to war-fighters in an EMS-contested battlespace, without deploying a specific BACN aircraft. For example, in the Pacific – where infrastructure is limited – a "BACN-mesh" concept could be employed to create real-time battlespace pictures, proving useful when a near-peer adversary attempts localized jamming across the EMS. A "BACN-mesh" concept, if properly employed with numerous smart node equipped aircraft, would "create a complex, impregnable, and mutually reinforcing communication network with multiple relay nodes."
See also
Global Information Grid
Airborne radio relay
References
External links
AF C2 Integration Center
Military electronics
Networking standards
Northrop Grumman | Battlefield Airborne Communications Node | [
"Technology",
"Engineering"
] | 1,575 | [
"Networking standards",
"Computer standards",
"Computer networks engineering"
] |
7,445,882 | https://en.wikipedia.org/wiki/Protein%20turnover | In cell biology, protein turnover refers to the replacement of older proteins as they are broken down within the cell. Different types of proteins have very different turnover rates.
A balance between protein synthesis and protein degradation is required for good health and normal protein metabolism. More synthesis than breakdown indicates an anabolic state that builds lean tissues; more breakdown than synthesis indicates a catabolic state that burns lean tissues. According to D.S. Dunlop, protein turnover occurs in brain cells the same as in any other eukaryotic cells, but "knowledge of those aspects of control and regulation specific or peculiar to brain is an essential element for understanding brain function."
Protein turnover is believed to decrease with age in all senescent organisms including humans. This results in an increase in the amount of damaged protein within the body.
Protein turnover in exercise science
Four weeks of aerobic exercise has been shown to increase skeletal muscle protein turnover in previously unfit individuals. A diet high in protein increases whole body turnover in endurance athletes.
Some bodybuilding supplements claim to reduce the protein breakdown by reducing or blocking the number of catabolic hormones within the body. This is believed to increase anabolism. However, if protein breakdown falls too low then the body would not be able to remove muscle cells that have been damaged during workouts which would in turn prevent the growth of new muscle cells.
References
Protein biosynthesis | Protein turnover | [
"Chemistry"
] | 281 | [
"Protein biosynthesis",
"Protein stubs",
"Gene expression",
"Biochemistry stubs",
"Biosynthesis"
] |
7,445,926 | https://en.wikipedia.org/wiki/Phosphazene | Phosphazenes refer to various classes of organophosphorus compounds featuring phosphorus(V) with a double bond between P and N. One class of phosphazenes has the formula R3P=NR′. These phosphazenes are also known as iminophosphoranes and phosphine imides. They are superbases.
BEMP and t-Bu-P4
Well-known phosphazene bases are BEMP (2-tert-Butylimino-2-diEthylamino-1,3-diMethylperhydro-1,3,2-diazaPhosphorine), with an acetonitrile pKa of the conjugate acid of 27.6, and the phosphorimidic triamide t-Bu-P4 (pKBH+ = 42.7), also known as the Schwesinger base. BEMP and t-Bu-P4 have attracted attention because they are low-nucleophilic, which precludes their participating in competing reactions. Being non-ionic ("charge-neutral"), they are soluble in nonpolar solvents. Protonation takes place at a doubly bonded nitrogen atom. The pKa's of the t-Bu-P4 bases bearing dimethylamino (R = Me) and pyrrolidinyl substituents are 42.7 and 44, respectively. These are the highest pKa values recorded for the conjugate acids of charge-neutral molecular bases.
In one implementation, t-Bu-P4 catalyzes the conversion of pivaldehyde to the alcohol. Phosphazene bases have been used as basic titrants in non-aqueous acid–base titrations.
Other classes of phosphazenes
Also called phosphazenes are compounds represented by the formula [X2P=N]n, where X = halogen, alkoxy, amide or another organyl group. One example is hexachlorocyclotriphosphazene, (NPCl2)3. Bis(triphenylphosphine)iminium chloride, [Ph3P=N=PPh3]+Cl−, is also referred to as a phosphazene, where Ph = phenyl. The present article focuses on those phosphazenes with the formula R3P=NR′.
See also
Verkade bases feature P(III) with three amido substituents and a transannular amine
Cyclodiphosphazane
Hexachlorophosphazene
Polyphosphazene
References
Nitrogen compounds
Phosphorus compounds
Non-nucleophilic bases
Superbases | Phosphazene | [
"Chemistry"
] | 535 | [
"Non-nucleophilic bases",
"Superbases",
"Bases (chemistry)",
"Reagents for organic chemistry"
] |
7,447,607 | https://en.wikipedia.org/wiki/Ukusoma | Ukusoma is the Zulu term for simulated intercourse (outercourse), also known as "thigh sex", in which a man is allowed to put his penis between the thighs of his partner, rather than in the vagina, with the woman's legs remaining crossed to prevent penetration. Another term for ukusoma is ukuhlobonga or ukumentsha, described in the 1861 dictionary by clergyman John Colenso. The practice has been widely reported across southern Africa by young couples.
References
External links
Zulu - Growing up sexually
Zulu culture
Zulu words and phrases | Ukusoma | [
"Biology"
] | 119 | [
"Behavior",
"Sexuality stubs",
"Sexuality"
] |
7,447,683 | https://en.wikipedia.org/wiki/Optibo | Optibo is the product of a collaboration between Swedish firms to address the housing industry's problem of limited space availability caused by high land prices. Optibo's main architect was inspired when he saw the Disney cartoon Mickey's Trailer on TV.
The project has led to the construction of an apartment which is only 25 square meters (270 square feet) in area. The single-room living space has the furniture built into the floor and the room can be changed from a living room to a bedroom, to a dining room, and back. The kitchen area is affixed to the wall and does not change.
References
Building engineering | Optibo | [
"Engineering"
] | 130 | [
"Building engineering",
"Civil engineering",
"Architecture"
] |
11,123,364 | https://en.wikipedia.org/wiki/Natural%20gas%20storage | Natural gas is a commodity that can be stored for an indefinite period of time in natural gas storage facilities for later consumption.
Usage
Gas storage is principally used to meet load variations. Gas is injected into storage during periods of low demand and withdrawn from storage during periods of peak demand. It is also used for a variety of secondary purposes, including:
Balancing the flow in pipeline systems. This is performed by mainline transmission pipeline companies to maintain operational integrity of the pipelines, by ensuring that the pipeline pressures are kept within design parameters.
Maintaining contractual balance. Shippers use stored gas to maintain the volume they deliver to the pipeline system and the volume they withdraw. Without access to such storage facilities, any imbalance situation would result in a hefty penalty.
Leveling production over periods of fluctuating demand. Producers use storage to store any gas that is not immediately marketable, typically over the summer when demand is low, and deliver it in the winter months when demand is high.
Market speculation. Producers and marketers use gas storage as a speculative tool, storing gas when they believe that prices will increase in the future and then selling it when it does reach those levels.
Insuring against any unforeseen accidents. Gas storage can be used as insurance against unforeseen events that may affect either the production or delivery of natural gas. These may include natural factors such as hurricanes, or malfunctions of production or distribution systems.
Meeting regulatory obligations. Gas storage ensures to some extent the reliability of gas supply to the consumer at the lowest cost, as required by the regulatory body. This is why the regulatory body monitors storage inventory levels.
Reducing price volatility. Gas storage ensures commodity liquidity at the market centers. This helps contain natural gas price volatility and uncertainty.
Offsetting changes in natural gas demand. Gas storage facilities are gaining more importance due to changes in natural gas demand. First, traditional supplies that once met the winter peak demand are now unable to keep pace. Second, there is a growing summer peak demand for natural gas, due to electricity generation by gas-fired power plants.
Measures and definitions
A number of metrics are used to define and measure the volume of an underground storage facility:
Total gas storage capacity: It is the maximum volume of natural gas that can be stored at the storage facility. It is determined by several physical factors such as the reservoir volume, and also by the operating procedures and engineering methods used.
Total gas in storage: It is the total volume of gas in storage at the facility at a particular time.
Base gas (also referred to as cushion gas): It is the volume of gas that is intended as permanent inventory in a storage reservoir to maintain adequate pressure and deliverability rates throughout the withdrawal season.
Working gas capacity: It is the total gas storage capacity minus the base gas.
Working gas: It is the total gas in storage minus the base gas. Working gas is the volume of gas available to the market place at a particular time.
Physically unrecoverable gas: The amount of gas that becomes permanently embedded in the formation of the storage facility and that can never be extracted.
Cycling rate: It is the average number of times a reservoir’s working gas volume can be turned over during a specific period of time. Typically the period of time used is one year.
Deliverability: It is a measure of the amount of gas that can be delivered (withdrawn) from a storage facility on a daily basis. It is also referred to as the deliverability rate, withdrawal rate, or withdrawal capacity and is usually expressed in terms of millions of cubic feet of gas per day that can be delivered.
Injection capacity (or rate): It is the amount of gas that can be injected into a storage facility on a daily basis. It can be thought of as the complement of the deliverability. Injection rate is also typically measured in millions of cubic feet of gas that can be delivered per day.
The measurements above are not fixed for a given storage facility. For example, deliverability depends on several factors including the amount of gas in the reservoir and the reservoir pressure. Generally, a storage facility’s deliverability rate varies directly with the total amount of gas in the reservoir. It is at its highest when the reservoir is full and declines as gas is withdrawn. The injection capacity of a storage facility is also variable and depends on factors similar to those that affect deliverability. The injection rate varies inversely with the total amount of gas in storage. It is at its highest when the reservoir is nearly empty and declines as more gas is injected. The storage facility operator may also change operational parameters. This would allow, for example, the maximum storage capacity to be increased, base gas to be withdrawn during very high demand, or base gas to be reclassified as working gas if technological advances or engineering procedures allow.
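The bookkeeping among these quantities is simple subtraction; the following minimal Python sketch (all figures are hypothetical placeholders, not data for any real facility) shows how working gas and working gas capacity follow from the definitions above.
# Minimal illustration of the storage metrics defined above.
# All figures are hypothetical placeholders, in billion cubic feet (Bcf).
total_gas_storage_capacity = 100.0  # maximum volume the facility can hold
base_gas = 45.0                     # permanent inventory kept to maintain pressure
total_gas_in_storage = 80.0         # volume actually in the reservoir right now
working_gas_capacity = total_gas_storage_capacity - base_gas  # 55.0 Bcf
working_gas = total_gas_in_storage - base_gas                 # 35.0 Bcf available to the market
print(f"Working gas capacity: {working_gas_capacity} Bcf")
print(f"Working gas currently available: {working_gas} Bcf")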
Types
The most important type of gas storage is in underground reservoirs. There are three principal types — depleted gas reservoirs, aquifer reservoirs and salt cavern reservoirs. Each of these types has distinct physical and economic characteristics which govern the suitability of a particular type of storage type for a given application.
Depleted gas reservoir
These are the most prominent and common form of underground storage of natural gas. They are the reservoir formations of natural gas fields that have produced all or part of their economically recoverable gas. The depleted reservoir formation should be readily capable of holding sufficient volumes of injected natural gas in the pore space between grains (via high porosity), of storing and delivering natural gas at sufficient economic rates (via high permeability) and be contained so that natural gas cannot migrate into other formations and be lost. In addition the rock (both the reservoir and the seal) should be capable of withstanding the repeated cycle of an increase in pressure when natural gas is injected into the reservoir and in reverse the drop in pressure when natural gas is produced.
Using such a facility that meets the above criteria is economically attractive because it allows the re-use, with suitable modification, of the extraction and distribution infrastructure remaining from the productive life of the gas field which reduces the start-up costs. Depleted reservoirs are also attractive because their geological and physical characteristics have already been studied by geologists and petroleum engineers and are usually well known. Consequently, depleted reservoirs are generally the cheapest and easiest to develop, operate, and maintain of the three types of underground storage.
In order to maintain working pressures in depleted reservoirs, about 50 percent of the natural gas in the formation must be kept as cushion gas. However, since depleted reservoirs were previously filled with natural gas and hydrocarbons, they do not require the injection of gas that will become physically unrecoverable as this is already present in the formation. This provides a further economic boost for this type of facility, particularly when the cost of gas is high. Typically, these facilities are operated on a single annual cycle; gas is injected during the off-peak summer months and withdrawn during the winter months of peak demand.
A number of factors determine whether or not a depleted gas field will make an economically viable storage facility:
The reservoir must be of sufficient quality in terms of porosity and permeability to allow storage and production to meet demand as required;
Natural gas must be contained by effective seals otherwise there will be lost volumes that cannot be recovered;
The depleted reservoir and field infrastructure must be close to gas markets;
The existing infrastructure must be suitable for retrofitting the equipment to inject and produce gas at the necessary pressures and rates;
Aquifer reservoir
Aquifers are underground, porous and permeable rock formations that act as natural water reservoirs. In some cases they can be used for natural gas storage. Usually these facilities are operated on a single annual cycle as with depleted reservoirs. The geological and physical characteristics of aquifer formation are not known ahead of time and a significant investment has to go into investigating these and evaluating the aquifer’s suitability for natural gas storage.
If the aquifer is suitable, all of the associated infrastructure must be developed from scratch, increasing the development costs compared to depleted reservoirs. This includes installation of wells, extraction equipment, pipelines, dehydration facilities, and possibly compression equipment. Since the aquifer initially contains water, there is little or no naturally occurring gas in the formation, and some of the injected gas will be physically unrecoverable. As a result, aquifer storage typically requires significantly more cushion gas than depleted reservoirs; up to 80% of the total gas volume. Most aquifer storage facilities were developed when the price of natural gas was low, meaning this cushion gas was inexpensive to sacrifice. With rising gas prices, aquifer storage becomes more expensive to develop.
A consequence of the above factors is that developing an aquifer storage facility is usually time consuming and expensive. Aquifers are generally the least desirable and most expensive type of natural gas storage facility.
Salt formation
Underground salt formations are well suited to natural gas storage. Salt caverns allow very little of the injected natural gas to escape from storage unless specifically extracted. The walls of a salt cavern are strong and impervious to gas over the lifespan of the storage facility.
Once a salt feature is discovered and found to be suitable for the development of a gas storage facility, a cavern is created within the salt feature. This is done by the process of solution mining: fresh water is pumped down a borehole into the salt, some of the salt is dissolved leaving a void, and the water, now saline, is pumped back to the surface. The process continues until the cavern is the desired size; some are 800 m tall and 50 m in diameter, with a volume of around half a million m³. Once created, a salt cavern offers an underground natural gas storage vessel with high deliverability. Cushion gas requirements are lower, typically about 33 percent of total gas capacity.
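As an illustration of how the cushion-gas figures quoted above translate into usable capacity, the following minimal Python sketch compares the three facility types; the fractions and the 10 Bcf total capacity are indicative values used only for this comparison, not data for any specific site.

```python
# Rough illustration: cushion-gas fraction determines the working gas of a
# facility of a given total capacity. Fractions follow the indicative figures
# quoted in the text (~50% depleted reservoir, up to ~80% aquifer, ~33% salt).

CUSHION_FRACTION = {
    "depleted reservoir": 0.50,
    "aquifer": 0.80,
    "salt cavern": 0.33,
}

def working_gas(total_capacity_bcf: float, facility_type: str) -> float:
    """Working gas = total capacity minus the cushion gas that must stay in place."""
    return total_capacity_bcf * (1.0 - CUSHION_FRACTION[facility_type])

for kind in CUSHION_FRACTION:
    print(f"{kind:20s}: {working_gas(10.0, kind):.1f} Bcf working gas out of 10 Bcf total")
```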
Salt caverns are usually much smaller than depleted gas reservoir and aquifer storage facilities. A salt cavern facility may occupy only one one-hundredth of the area taken up by a depleted gas reservoir facility. Consequently, salt caverns cannot hold the large volumes of gas necessary to meet base load storage requirements. Deliverability from salt caverns is, however, much higher than for either aquifers or depleted reservoirs. This allows the gas stored in a salt cavern to be withdrawn and replenished more readily and quickly. This faster cycle-time is useful in emergency situations or during short periods of unexpected demand surges.
Although construction is more costly than depleted field conversions when measured on the basis of dollars per thousand cubic feet of working gas, the ability to perform several withdrawal and injection cycles each year reduces the effective cost.
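A rough, hypothetical calculation makes the cycling argument concrete. The capital costs per Bcf below follow the indicative ranges given later under Storage development cost, while the cycle counts and the 20-year amortization period are assumptions for illustration only.

```python
# Illustrative only: a salt cavern costs more per Bcf of working gas, but
# cycling the inventory several times a year spreads that cost over more
# delivered gas than a single-cycle depleted reservoir.

def cost_per_delivered_bcf(capex_per_bcf_musd: float, cycles_per_year: float,
                           amortization_years: float = 20.0) -> float:
    """Capital cost (M$) per Bcf actually delivered over the facility life."""
    delivered_bcf_per_bcf_working = cycles_per_year * amortization_years
    return capex_per_bcf_musd / delivered_bcf_per_bcf_working

print("Depleted reservoir (1 cycle/yr):",
      round(cost_per_delivered_bcf(5.5, cycles_per_year=1), 3), "M$/Bcf delivered")
print("Salt cavern (6 cycles/yr):      ",
      round(cost_per_delivered_bcf(17.5, cycles_per_year=6), 3), "M$/Bcf delivered")
```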
Other
There are also other types of storage such as:
Liquefied Natural Gas
Liquefied Natural Gas (LNG) facilities provide delivery capacity during peak periods when market demand exceeds pipeline deliverability. LNG storage tanks possess a number of advantages over underground storage. As a liquid at approximately −163 °C (−260 °F), LNG occupies about 1/600th of the volume of the same gas in its gaseous state, and it provides high deliverability at very short notice because LNG storage facilities are generally located close to market; it can also be trucked to some customers, avoiding pipeline tolls. There is no requirement for cushion gas, and LNG allows access to a global supply. LNG facilities are, however, more expensive to build and maintain than new underground storage facilities.
Pipeline capacity
Gas can be temporarily stored in the pipeline system itself, through a process called line packing: more gas is packed into the pipeline by increasing the pressure. During periods of high demand, greater quantities of gas can then be withdrawn from the pipeline in the market area than are injected at the production area. Line packing is usually performed during off-peak times to meet the next day's peak demands. This method provides a temporary, short-term substitute for traditional underground storage.
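The following back-of-the-envelope sketch estimates how much extra gas line packing can hold, assuming ideal-gas behaviour at constant temperature; the pipe dimensions and pressures are invented for illustration and compressibility effects are ignored.

```python
import math

# Ideal-gas approximation: the standard volume of gas held in a pipe segment is
# roughly proportional to its absolute pressure, so raising the pressure stores
# extra gas in the same geometric volume.

def extra_linepack_m3(length_m, diameter_m, p_low_bar, p_high_bar, p_std_bar=1.013):
    geometric_volume = math.pi * (diameter_m / 2) ** 2 * length_m
    return geometric_volume * (p_high_bar - p_low_bar) / p_std_bar

# e.g. 100 km of 0.9 m pipe packed from 60 bar up to 70 bar
print(f"{extra_linepack_m3(100_000, 0.9, 60, 70):,.0f} standard m3 of extra gas")
```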
Gasholders
Gas can be stored above ground in a gasholder (or gasometer), largely for balancing, not long-term storage, and this has been done since Victorian times. These store gas at district pressure, meaning that they can provide extra gas very quickly at peak times. Gasholders are perhaps most used in the United Kingdom and Germany.
There are two kinds of gasholder — column-guided, which are guided up by a large frame that is always visible, regardless of the position of the holder; and spiral-guided, which have no frame and are guided up by concentric runners in the previous lift.
Perhaps the most famous British gasholders are the large column-guided "Oval gasholders" that overlook The Oval cricket ground in London. Gasholders were built in the United Kingdom from early Victorian times; many, such as those at Kings Cross in London and St. Marks Street in Kingston upon Hull, are so old that they are entirely riveted, as their construction predates the use of welding in construction. The last to be built in the UK was in 1983.
Owners
Interstate pipeline companies
Interstate pipeline companies rely heavily on underground storage to perform load balancing and system supply management on their long-haul transmission lines. FERC regulations, however, require these companies to open up to third parties any capacity not used for that purpose. Twenty-five interstate companies currently operate 172 underground natural gas storage facilities. In 2005, their facilities accounted for about 43 percent of overall storage deliverability and 55 percent of working gas capacity in the US. These operators include the Columbia Gas Transmission Company, Dominion Gas Transmission Company, the National Fuel Gas Supply Company, Natural Gas Pipeline of America, Texas Gas Transmission Company, Southern Star Central Pipeline Company, and TransCanada Corporation.
Intrastate pipeline companies and local distribution companies
Intrastate pipeline companies use storage facilities for operational balancing and system supply as well as to meet the energy demand of end-use customers. LDCs generally use gas from storage to serve customers directly. This group operates 148 underground storage sites and accounts for 40 percent of overall storage deliverability and 32 percent of working gas capacity in the US. These operators include Consumers Energy Company and the Northern Illinois Gas Company (Nicor) in the US, and Enbridge and Union Gas in Canada.
Independent storage service providers
Deregulation of the underground gas storage sector has attracted independent storage service providers to develop storage facilities. The capacity made available is then leased to third-party customers such as marketers and electricity generators. This group is expected to take more market share in the future as more deregulation takes place. It currently accounts for 18 percent of overall storage deliverability and 13 percent of working gas capacity in the US.
Location and distribution
Europe
As of January 2011, there were 124 underground storage facilities in Europe.
Gas Infrastructure Europe (GIE) reports 254 existing facilities or planned expansions in its Gas storage database.
Most member states have a minimum storage requirement that covers at least 15% of their annual gas consumption.
Russia
Gazprom uses large seasonal stores, mostly in western Russia, to manage the large variation in domestic and export demands, filling in the summer low demand season and supplying high demand in the winter. Between 2005 and 2021 an average of about of storage was used in this way, peaking at about in 2020/2021.
United States
When it comes to gas consumption and production, the United States is typically divided into three main regions: the consuming East, the consuming West and the producing South.
Consuming East
The consuming East region, particularly the states in its northern part, relies heavily on stored gas to meet peak demand during the cold winter months. Given the prevailing cold winters, large population centers and developed infrastructure, it is not surprising that this region has the highest working gas storage capacity and the largest number of storage sites of the three regions, mainly in depleted reservoirs. In addition to underground storage, LNG is increasingly playing a crucial role in providing supplemental backup and/or peaking supply to LDCs on a short-term basis. Although the total capacity of these LNG facilities does not match that of underground storage, their short-term high deliverability makes up for it.
Consuming West
The consuming West region has the smallest share of gas storage, both in terms of the number of sites and gas capacity/deliverability. Storage in this area is mostly used to allow domestic and Albertan gas, coming from Canada, to flow at a rather constant rate. In northern California, Pacific Gas and Electric (PG&E) has underground storage capacity for about of gas across three storage facilities. PG&E stores gas when it is inexpensive in summer for use in winter, when purchased gas is expensive.
Producing South
The producing South's storage facilities are linked to the market centers and play a crucial role in the efficient export, transmission and distribution of the natural gas produced to the consuming regions. These facilities allow gas that is not immediately marketable to be stored for later use.
Canada
In Canada, the maximum working gas stored was in 2006. Alberta storage accounts for 47.5 percent of the total working gas volume, followed by Ontario (39.1 percent), British Columbia (7.6 percent), Saskatchewan (5.1 percent) and Quebec (0.9 percent).
Regulation and deregulation
United States
Interstate pipeline companies in the US are subject to the jurisdiction of the Federal Energy Regulatory Commission (FERC). Prior to 1992, these companies owned all the gas that flowed through their systems, including gas in their storage facilities, over which they had complete control. FERC Order 636 then required the companies to operate their facilities, including gas storage, on an open-access basis. For gas storage, this meant that these companies could only reserve the capacity needed to maintain system integrity; the rest of the capacity had to be made available for leasing to third parties on a nondiscriminatory basis. Open access has opened up a wide variety of applications for gas storage, particularly for marketers, who can now exploit price arbitrage opportunities. Storage capacity is priced on a cost-of-service basis unless the provider can demonstrate to FERC that it lacks market power, in which case it may be allowed to charge market-based rates. FERC defines market power as "...the ability of a seller profitably to maintain prices above competitive levels for a significant period of time".
The underlying pricing structure for storage has discouraged development in the gas storage sector; few new storage facilities have been constructed, although existing ones have been expanded. In 2005, FERC announced a new Order 678 targeted particularly at gas storage. This rule is intended to stimulate the development of new gas storage facilities, with the ultimate goal of reducing natural gas price volatility. Commission Chairman Joseph T. Kelliher observed: "Since 1988, natural gas demand in the United States has risen 24 percent. Over the same period, gas storage capacity has increased only 1.4 percent. While construction of storage capacity has lagged behind the demand for natural gas, we have seen record levels of price volatility. This suggests that current storage capacity is inadequate. Further, this year, what storage capacity exists may be full far earlier than in any previous year. According to some analysts, that raises the prospect that some domestic gas production may be shut-in. Our final rule should help reduce price volatility and expand storage capacity."
This ruling opens up two approaches by which developers of natural gas storage can charge market-based rates. The first is a redefinition of the relevant product market for storage to include alternatives to storage such as available pipeline capacity, local gas production and LNG terminals. The second approach implements section 312 of the Energy Policy Act. It would allow an applicant to request authority to charge "market-based rates even if a lack of market power has not been demonstrated, in circumstances where market-based rates are in the public interest and necessary to encourage the construction of storage capacity in the area needing storage service and that customers are adequately protected," the Commission said. It is expected that this new order will entice developers, especially independent storage operators, to develop new facilities in the near future.
Canada
In Alberta, gas storage rates are not regulated, and providers negotiate rates with their customers on a contract-by-contract basis. However, the Carbon facility, which is owned by ATCO Gas, is regulated, since ATCO is a utility company. ATCO Gas therefore has to charge its customers cost-based rates, but can market any additional capacity at market-based rates. In Ontario, gas storage is regulated by the Ontario Energy Board. Currently all the available storage is owned by vertically integrated utilities. The utility companies have to price the storage capacity sold to their customers at cost-based rates, but can market any remaining capacity at market-based rates. Storage developed by independent storage developers can be charged at market-based rates. In British Columbia, gas storage is not regulated, and all available storage capacity is marketed at market-based rates.
United Kingdom
The regulation of gas storage, transportation and sale has been overseen by Ofgem (a government regulator) since the gas industry was privatised in 1986. Most gas storage was owned by Transco (now part of National Grid plc); however, the national network has now largely been broken into regional networks owned by different companies, all of which remain answerable to Ofgem.
Storage economics
Storage development cost
As with all infrastructure investments in the energy sector, developing storage facilities is capital-intensive. Investors usually use the return on investment as a financial measure of the viability of such projects. It has been estimated that investors require a rate of return of between 12 and 15 percent for regulated projects, and close to 20 percent for unregulated projects. The higher expected return from unregulated projects is due to the higher perceived market risk. In addition, significant expenses are incurred during the planning and evaluation of potential storage sites to determine their suitability, which further increases the risk.
The capital expenditure to build the facility depends mostly on the physical characteristics of the reservoir, and in particular on the type of storage field. As a general rule of thumb, salt caverns are the most expensive to develop per unit of working gas capacity; however, because the gas in such facilities can be cycled repeatedly, they may be less costly on a deliverability basis. A salt cavern facility might cost anywhere from $10 million to $25 million per billion cubic feet of working gas capacity. The wide price range reflects regional differences in geological requirements, which dictate factors such as the amount of compression horsepower required, the surface conditions and the quality of the geologic structure. A depleted reservoir costs between $5 million and $6 million per billion cubic feet of working gas capacity. Finally, another major cost incurred when building new storage facilities is that of base gas. The amount of base gas in a reservoir can be as high as 80% for aquifers, making them very unattractive to develop when gas prices are high. On the other hand, salt caverns require the least base gas. The high cost of base gas is what drives the expansion of current sites rather than the development of new ones, because expansions require little additional base gas.
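As a hedged illustration of the base-gas cost mentioned above, the sketch below assumes a commodity price of about $3 per thousand cubic feet (roughly $3 million per Bcf); the capacities and base-gas fractions are examples only, not figures from this article.

```python
# One-off cost of the unrecoverable base (cushion) gas for a new facility,
# at an assumed gas price. Illustrative numbers only.

GAS_PRICE_MUSD_PER_BCF = 3.0   # ~$3 per Mcf is about $3 million per Bcf

def base_gas_cost_musd(total_capacity_bcf, base_gas_fraction,
                       price_musd_per_bcf=GAS_PRICE_MUSD_PER_BCF):
    return total_capacity_bcf * base_gas_fraction * price_musd_per_bcf

print("Aquifer, 50 Bcf total, 80% base gas:   ",
      base_gas_cost_musd(50, 0.80), "M$")
print("Salt cavern, 10 Bcf total, 33% base gas:",
      base_gas_cost_musd(10, 0.33), "M$")
```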
The expected cash flows from such projects depend on a number of factors. These include the services the facility provides as well as the regulatory regime under which it operates. Facilities that operate primarily to take advantage of commodity arbitrage opportunities are expected to have different cash flow benefits than ones primarily used to ensure seasonal supply reliability. Rules set by regulators can on one hand restrict the profit made by storage facility owners or on the other hand guarantee profit, depending on the market model.
Storage valuation
To understand the economics of gas storage, it is crucial to be able to value it. Several approaches have been proposed. They include:
Cost-of-service valuation
Least-cost planning
Seasonal valuation
Option-based valuation
The different valuation modes co-exist in the real world and are not mutually exclusive. Buyers and sellers typically use a combination of the different prices to arrive at the true value of storage.
Cost-of-service valuation
This valuation mode is typically used to value regulated storage, for instance storage operated by interstate pipeline companies, which are regulated by FERC. This pricing method allows the developers to recover their costs plus an agreed-upon return on investment. The regulatory body requires that the rates and tariffs be maintained and published publicly. The services provided by these companies include firm and interruptible storage as well as no-notice storage services. Usually, cost-of-service pricing is used for depleted reservoir facilities. If it were used to price, say, salt cavern facilities, the cost would be very high, due to the high cost of developing such facilities.
Least-cost planning
This valuation mode is typically used by local distribution companies (LDCs). It prices storage according to the savings from not having to resort to other, more expensive options. This pricing mode depends on the consumer and their load profile/shape.
Seasonal valuation
The seasonal valuation of storage is also referred to as the intrinsic value. It is evaluated as the difference between two forward prices, the idea being that one can lock in a forward spread, either physically or financially. Developers studying the feasibility of building a storage facility typically look at long-term price spreads.
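A minimal sketch of the intrinsic (seasonal) valuation just described; the forward prices, costs and position size are purely illustrative assumptions.

```python
# Intrinsic value of storage: buy the summer forward, inject, and sell the
# winter forward, locking in the spread net of variable costs and fuel losses.

def intrinsic_value(summer_price, winter_price, working_gas_volume,
                    injection_withdrawal_cost=0.0, fuel_loss_fraction=0.0):
    """Value ($) of a locked-in seasonal spread on a storage position."""
    spread = winter_price - summer_price
    net_spread = spread - injection_withdrawal_cost - winter_price * fuel_loss_fraction
    return net_spread * working_gas_volume

# $/MMBtu prices on a 1,000,000 MMBtu position
print(intrinsic_value(summer_price=2.80, winter_price=3.60,
                      working_gas_volume=1_000_000,
                      injection_withdrawal_cost=0.15, fuel_loss_fraction=0.01))
```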
Option-based valuation
In addition to possessing an intrinsic value, storage may also have extrinsic value. Intrinsic valuation does not take into account the cycling ability of high-deliverability storage. The extrinsic valuation reflects the fact that in such facilities, say salt cavern formations, a proportion of the space can be used more than once, thus increasing value. Such high-deliverability storage facilities allow users to respond to variations in demand and price within a season, or even within a given day, rather than only to seasonal variations as with single-cycle facilities.
Effects of natural gas prices on storage
In general, high gas prices are typically associated with low storage levels. Usually, when prices are high during the early months of the refill season (April–October), many users of storage adopt a wait-and-see attitude. They limit their gas intake in anticipation that prices will drop before the heating season begins (November–March). When that decrease does not occur, they are forced to buy natural gas at high prices. This is particularly true for local distribution companies and other operators who rely on storage to meet the seasonal demand of their customers. On the other hand, storage users who use storage as a marketing tool (hedging or speculating) will hold off on storing large amounts of gas when prices are high.
Future of storage technology
Research is being conducted on many fronts in the gas storage field to identify new, improved and more economical ways to store gas. Research by the US Department of Energy shows that salt formations can be chilled, allowing more gas to be stored. This would reduce the size of the formation that needs to be treated and have salt extracted from it, leading to cheaper development costs for salt formation storage facilities.
Other formations that may hold gas are also being examined. These include hard rock formations such as granite, in areas where such formations exist and the types currently used for gas storage do not.
In Sweden a new type of storage facility has been built, called a "lined rock cavern". It consists of a steel tank installed in a cavern excavated in the rock of a hill and surrounded with concrete. Although the development cost of such a facility is high, its ability to cycle gas multiple times compensates for this, as with salt formation facilities. Finally, another research project sponsored by the Department of Energy concerns hydrates, compounds formed when natural gas is frozen in the presence of water. The advantage is that as much as 181 standard cubic feet of natural gas could be stored in a single cubic foot of hydrate.
See also
Natural gas prices
Natural gas processing
Carbon dioxide (CO2)
Compressed natural gas (CNG)
Fuel station
Future energy development
Hydrogen storage
List of North American natural gas pipelines
Underground hydrogen storage
Steam reforming
Strategic natural gas reserve
World energy consumption
External links
Cedigaz - UGS Worldwide Database
EIA — Energy Information Administration — Topics for Natural Gas Storage
FERC — Federal Energy Regulatory Commission - Natural Gas Storage
Natural Gas Media — Natural Gas News and Analysis for Investment and Trading
References | Natural gas storage | [
"Chemistry"
] | 5,846 | [
"Natural gas storage",
"Natural gas technology"
] |
11,124,204 | https://en.wikipedia.org/wiki/NGC%20250 | NGC 250 is a lenticular galaxy in the constellation
Pisces.
References
External links
Lenticular galaxies
Pisces (constellation) | NGC 250 | [
"Astronomy"
] | 41 | [
"Pisces (constellation)",
"Constellations"
] |
11,124,301 | https://en.wikipedia.org/wiki/Mercury%20probe | The mercury probe is an electrical probing device to make rapid, non-destructive contact to a sample for electrical characterization. Its primary application is semiconductor measurements where otherwise time-consuming metallizations or photolithographic processing are required to make contact to a sample. These processing steps usually take hours and have to be avoided where possible to reduce device processing times.
The mercury probe applies mercury contacts of well-defined areas to a flat sample. The nature of the mercury-sample contacts and the instrumentation connected to the mercury probe define the application. If the mercury-sample contact is ohmic (non-rectifying) then current-voltage instrumentation can be used to measure resistance, leakage currents, or current-voltage characteristics. Resistance can be measured on bulk samples or on thin films. The thin films can be composed of any material that does not react with mercury. Metals, semiconductors, oxides, and chemical coatings have all been measured successfully.
Applications
The mercury probe is a versatile tool for investigation of parameters of conducting, insulating and semiconductor materials.
One of the first successful mercury probe applications was the characterization of epitaxial layers grown on silicon. It is critical to device performance to monitor the doping level and thickness of an epitaxial layer. Prior to the mercury probe, a sample had to undergo a metallization process, which could take hours. A mercury probe connected to capacitance-voltage doping profile instrumentation could measure an epitaxial layer as soon as it came out of the epitaxial reactor. The mercury probe formed a Schottky barrier of well-defined area that could be measured as easily as a conventional metallized contact.
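For readers who want to see what the capacitance-voltage doping-profile calculation involves, the sketch below applies the standard Schottky-diode C-V analysis; the contact area, bias range and synthetic data are assumptions for illustration, not parameters of any particular mercury probe.

```python
import numpy as np

# Standard Schottky C-V doping analysis: 1/C^2 is linear in reverse bias for a
# uniformly doped layer, and its slope gives the doping density.

Q = 1.602e-19                  # elementary charge (C)
EPS_SI = 11.7 * 8.854e-12      # permittivity of silicon (F/m)
AREA = 4.0e-7                  # assumed mercury contact area (m^2), ~0.4 mm^2

def doping_profile(v_reverse, capacitance, area=AREA):
    """Return depletion depth (m) and doping density (m^-3) from C-V data."""
    inv_c2 = 1.0 / capacitance ** 2
    slope = np.gradient(inv_c2, v_reverse)               # d(1/C^2)/dV
    n = 2.0 / (Q * EPS_SI * area ** 2 * np.abs(slope))   # dopant density
    depth = EPS_SI * area / capacitance                   # depletion width
    return depth, n

# Synthetic data for a uniformly doped n-type layer, N = 1e21 m^-3 (1e15 cm^-3)
v = np.linspace(0.0, 5.0, 51)
n_true, v_bi = 1e21, 0.6
c = AREA * np.sqrt(Q * EPS_SI * n_true / (2.0 * (v_bi + v)))
depth, n_extracted = doping_profile(v, c)
print(n_extracted[25] / 1e21)   # ~1.0, recovering the assumed doping
```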
Another mercury probe application, popular for its speed, is oxide characterization. The mercury probe forms a gate contact and enables measurement of the capacitance-voltage or current-voltage parameters of the mercury-oxide-semiconductor structure. Using this structure, material parameters such as permittivity, doping, oxide charge, and dielectric strength may be evaluated. The contact area of a mercury droplet resting on a semiconductor can be modified by electrowetting, meaning that accurate parameter extraction may need to take this effect into account.
A mercury probe with concentric dot and ring contacts as well as a back contact extends mercury probe applications to silicon on insulator (SOI) structures, where a pseudo-MOSFET device is formed. This Hg-FET can be used to study mobility, interface trap density, and transconductance.
The same mercury-sample structures can be measured with capacitance-voltage instrumentation to monitor permittivity and thickness of dielectric materials. These measurements are a convenient gauge for development of novel dielectrics of both low-k and high-k types.
If the mercury-sample contact is rectifying then a diode has formed and offers other measurement possibilities. Current-voltage measurements of the diode can reveal properties of the semiconductor such as breakdown voltage and lifetime. Capacitance-voltage measurements allow computation of the semiconductor doping level and uniformity. These measurements are successfully made on many materials including SiC, GaAs, GaN, InP, CdS, and InSb.
References
Semiconductor fabrication equipment | Mercury probe | [
"Engineering"
] | 656 | [
"Semiconductor fabrication equipment"
] |
11,124,457 | https://en.wikipedia.org/wiki/AVR%20reactor | The AVR reactor () was a prototype pebble-bed reactor, located immediately adjacent to Jülich Research Centre in West Germany, constructed in 1960, grid connected in 1967 and shut down in 1988. It was a 15 MWe, 46 MWt test reactor used to develop and test a variety of fuels and machinery.
The AVR was based on the concept of a "Daniels pile" by Farrington Daniels, the inventor of pebble bed reactors. Rudolf Schulten is commonly recognized as the intellectual father of the reactor.
A consortium of 15 community electric companies owned and operated the plant. Over its lifetime the reactor had many accidents, earning it the name "shipwreck." From 2011 to 2014, outside experts examined the historical operations and operational hazards and described serious concealed problems and wrongdoings in their final 2014 report. For example, in 1978 operators bypassed reactor shutdown controls to delay an emergency shutdown during an accident for six days. In 2014 the JRC and AVR publicly admitted to failures.
Its decommissioning has been exceptionally difficult, time-consuming and expensive. Since the original operators were overwhelmed by the effort, government agencies took over dismantling and disposal. In 2003 the reactor and its nuclear waste became government property. The temporary storage of 152 casks of spent fuel has been a controversy since 2009. The approval expired in 2013, because stress tests could not sufficiently demonstrate safety; no permanent solution has been reached. Since 2012 plans to export the casks to the United States have been considered due to the extremely high disposal expenses. In 2014, a massive concrete wall to protect against terrorist plane crashes was to be built. On July 2, 2014, the Federal Environment ministry issued an evacuation order for the temporary storage.
AVR was the basis of the technology licensed to China to build HTR-10 and the HTR-PM, which became operational in 2021.
The reactor is located next to the largest open-pit coal mine in Germany, the Tagebau Hambach.
History
In 1959, 15 municipal electric companies established the "Association of Experimental Reactor GmbH" (AVR Ltd) to demonstrate the feasibility and viability of a gas-cooled, graphite-moderated high temperature reactor. In 1961, BBC and Krupp began AVR construction, led by Rudolf Schulten and performed on an almost purely industrial basis until 1964. The federal government provided financial assistance, supported by the politician and founder of the Jülich Research Center (JRC), Leo Brandt.
In 1964, Schulten became Director of the JRC and started to devote more attention to the pebble bed reactor. In 1966, AVR first achieved criticality, and it was connected to the national power grid in 1967. Construction cost figures vary between 85 and 125 million Deutsche marks.
Since about 1970 the AVR GmbH was de facto dependent on JRC, although it remained formally independent until 2003. JRC provided generous operating grants to the AVR GmbH to ensure continued operation, since electricity generation only covered a small part of the operating costs. In the mid-1970s annual revenue was about 3 million DM, versus operating and fuel disposal costs of 11 million DM. JRC also subsidized AVR through the procurement and disposal of fuel, as JRC was the owner of the AVR fuel. In addition, the AVR operation was scientifically supervised by JRC.
Fuels tested
From 1974 to 1978, mainly carbide BISO fuel was in the core. From 1983 to 1988, oxide fuel with TRISO particles was used.
Higher temperatures
During its initial years (1967-1973) the AVR was nominally operated with cooling gas outlet temperatures of . In February 1974, the cooling gas outlet temperature was raised to 950 °C. This was a world record for nuclear facilities, though it was later exceeded by the US test reactor UHTREX. Such high temperatures were intended to demonstrate the suitability of the AVR for coal gasification, and thus to contribute to long-term plans for coal in North Rhine-Westphalia.
Because a pebble bed core cannot be equipped with instruments, the high AVR core temperatures were unknown until one year before the AVR shut-down, in 1988.
In 2000, AVR admitted that it was contaminated with strontium-90, making it the most heavily contaminated nuclear facility worldwide.
Design
The core held about 100,000 fuel element pebbles. Each contained about 1 g of uranium-235. On average each pebble would take 6 to 8 months to pass through the core.
Helium flowed up through the core of pebbles.
Contamination, internal and external
AVR's helium outlet temperature was 950 °C, but fuel temperature instabilities occurred during operation, with localised, exceedingly high temperatures. As a consequence the whole reactor vessel became heavily contaminated by strontium-90 and caesium-137. In terms of beta contamination, AVR is the most heavily contaminated nuclear installation worldwide, as AVR management confirmed in 2001.
Thus in 2008, the reactor vessel was filled with lightweight concrete to immobilize the radioactive fine-particle dust. In 2012, the 2,100-metric-ton reactor vessel was to be transported about 200 meters by air-cushion sled and seven cranes to an intermediate storage site.
During a severe water-ingress accident in 1978, water leaked into the reactor, and in 1999 soil and groundwater contamination below the reactor was discovered, as confirmed by the German government in February 2010.
Decommissioning
Fuel removal from the AVR was difficult and lasted four years. During this time it became obvious that the AVR bottom reflector was broken; about 200 fuel pebbles remain wedged in its crack. Currently no dismantling method for the AVR vessel exists. It is planned to develop a procedure during the next 60 years and to begin vessel dismantling at the end of the 21st century. After the AVR vessel is moved into intermediate storage, the reactor buildings will be dismantled, and soil and groundwater will be decontaminated. Costs from 1988 to the present amount to €700 million. The total AVR decommissioning costs are expected to be on the order of €1.5 to 2.5 billion, all public funds, far exceeding the reactor's construction costs.
Independent expert review report, 2014
From 2011 to 2014, outside experts examined the historical operations and operational hazards and in April 2014 published a report on the AVR operation. The report listed hidden or downplayed events and accidents and described serious concealed problems and wrongdoings. For example, in 1978 operators bypassed reactor shutdown controls to delay an emergency shutdown during an accident for six days. In 2014 the JRC and AVR publicly admitted to these failures and expressed regret for the failures and scientific misconduct connected with the AVR.
See also
Skyshine
Thorium High Temperature Reactor
References
External links
Jülich Research Centre.
The Pebble Bed Evolution June 2005 (PDF, 17KB).
Pebble bed reactors
Former nuclear power stations in Germany
Radioactively contaminated areas
Jülich Research Centre | AVR reactor | [
"Chemistry",
"Technology"
] | 1,397 | [
"Radioactively contaminated areas",
"Soil contamination",
"Radioactive contamination"
] |
11,125,044 | https://en.wikipedia.org/wiki/Direct%20methods%20%28crystallography%29 | In crystallography, direct methods are a family of methods for estimating the phases of the Fourier transform of the scattering density from the corresponding magnitudes. The methods generally exploit constraints or statistical correlations between the phases of different Fourier components that result from the fact that the scattering density must be a positive real number.
In two dimensions, it is relatively easy to solve the phase problem directly, but not so in three dimensions. The key step was taken by Hauptman and Karle, who developed a practical method to employ the Sayre equation, for which they were awarded the 1985 Nobel Prize in Chemistry. The Nobel Prize citation was "for their outstanding achievements in the development of direct methods for the determination of crystal structures."
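As background (a standard textbook statement rather than a detail taken from this article), the Sayre equation and the triplet phase relationship that practical direct methods exploit are commonly written as

\[
F_{\mathbf h} \;=\; \frac{\theta_{\mathbf h}}{V} \sum_{\mathbf k} F_{\mathbf k}\, F_{\mathbf h - \mathbf k},
\qquad
\varphi_{\mathbf h} \;\approx\; \varphi_{\mathbf k} + \varphi_{\mathbf h - \mathbf k},
\]

where the triplet relation holds with a probability that grows with the magnitudes of the corresponding normalized structure factors and falls as the number of atoms in the unit cell increases, which is one reason direct methods work best for small structures.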
At present, direct methods are the preferred method for phasing crystals of small molecules having up to 1000 atoms in the asymmetric unit. However, they are generally not feasible by themselves for larger molecules such as proteins.
Several software packages implement direct methods.
See also
Direct methods (electron microscopy)
Phase problem
X-ray crystallography
References
Crystallography | Direct methods (crystallography) | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 219 | [
"Materials science stubs",
"Materials science",
"Crystallography stubs",
"Crystallography",
"Condensed matter physics"
] |
11,125,142 | https://en.wikipedia.org/wiki/Extouch%20triangle | In Euclidean geometry, the extouch triangle of a triangle is formed by joining the points at which the three excircles touch the triangle.
Coordinates
The vertices of the extouch triangle are given in trilinear coordinates by:
$T_A = 0 : \csc^2\tfrac{B}{2} : \csc^2\tfrac{C}{2}$
$T_B = \csc^2\tfrac{A}{2} : 0 : \csc^2\tfrac{C}{2}$
$T_C = \csc^2\tfrac{A}{2} : \csc^2\tfrac{B}{2} : 0$
or equivalently, where $a, b, c$ are the lengths of the sides opposite angles $A, B, C$ respectively,
$T_A = 0 : \tfrac{a - b + c}{b} : \tfrac{a + b - c}{c}$
$T_B = \tfrac{-a + b + c}{a} : 0 : \tfrac{a + b - c}{c}$
$T_C = \tfrac{-a + b + c}{a} : \tfrac{a - b + c}{b} : 0$
Related figures
The triangle's splitters are lines connecting the vertices of the original triangle to the corresponding vertices of the extouch triangle; they bisect the triangle's perimeter and meet at the Nagel point.
The Mandart inellipse is tangent to the sides of the reference triangle at the three vertices of the extouch triangle.
Area
The area of the extouch triangle, $K_T$, is given by:
$K_T = K\,\dfrac{2r^2 s}{abc}$
where $K$ and $r$ are the area and radius of the incircle, $s$ is the semiperimeter of the original triangle, and $a$, $b$, $c$ are the side lengths of the original triangle.
This is the same area as that of the intouch triangle.
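A short numerical check of the area formula above for one arbitrary triangle; the coordinates are chosen only for illustration.

```python
import math

def tri_area(p, q, r):
    """Area of a triangle from its vertex coordinates (shoelace formula)."""
    return abs((q[0]-p[0])*(r[1]-p[1]) - (r[0]-p[0])*(q[1]-p[1])) / 2

A, B, C = (0.0, 0.0), (7.0, 0.0), (2.0, 5.0)
a, b, c = math.dist(B, C), math.dist(C, A), math.dist(A, B)
s = (a + b + c) / 2
K = tri_area(A, B, C)
r = K / s                        # inradius

def point_on(p, q, dist_from_p, length):
    t = dist_from_p / length
    return (p[0] + t*(q[0]-p[0]), p[1] + t*(q[1]-p[1]))

TA = point_on(B, C, s - c, a)    # A-excircle touches BC at distance s-c from B
TB = point_on(C, A, s - a, b)    # B-excircle touches CA at distance s-a from C
TC = point_on(A, B, s - b, c)    # C-excircle touches AB at distance s-b from A

print(tri_area(TA, TB, TC))              # direct computation of the extouch area
print(K * 2 * r**2 * s / (a * b * c))    # formula quoted in the text; values agree
```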
References
Circles
Objects defined for a triangle | Extouch triangle | [
"Mathematics"
] | 218 | [
"Circles",
"Pi"
] |
11,126,710 | https://en.wikipedia.org/wiki/List%20of%20AIGA%20medalists | Following is a list of AIGA medalists who have been awarded the American Institute of Graphic Arts medal.
On its website, AIGA says "The medal of the AIGA, the most distinguished in the field, is awarded to individuals in recognition of their exceptional achievements, services or other contributions to the field of graphic design and visual communication."
AIGA Medals have been awarded since 1920. Nine medals were awarded in the 1920s, seven in the 1930s, eight in the 1940s, twelve in the 1950s, ten in the 1960s, 13 in the 1970s, 13 in the 1980s, 33 in the 1990s, and 45 in the 2000s.
2020s
2022
Source:
Andrew Satake Blauvelt
Emily Oberman
Louise Sandhaus
2021
Source:
Archie Boston, Jr.
Cheryl D. Miller
Terry Irwin
Thomas Miller (honorary)
2010s
2019
Alexander Girard
Geoff McFetridge
Debbie Millman
2018
Aaron Douglas
Arem Duplessis
Karin Fong
Susan Kare
Victor Moscoso
2017
Art Chantry
Emmett McBain
Rebeca Méndez
Mark Randall
Nancy Skolos and Tom Wedell
Lance Wyman
2016
Ruth Ansel
Richard Grefé
Maira Kalman
Gere Kavanaugh
Corita Kent
2015
Paola Antonelli
Hillman Curtis
Emory Douglas
Dan Friedman
Marcia Lausen
2014
Sean Adams and Noreen Morioka
Charles S. Anderson
Dana Arnett
Kenneth Carbone and Leslie Smolan
David Carson
Kyle Cooper
Michael Patrick Cronan
Richard Danne
Michael Donovan and Nancye Green
Stephen Doyle
Louise Fili
Bob Greenberg
Sylvia Harris
Cheryl Heller
Alexander Isley
Chip Kidd
Michael Mabry
J. Abbott Miller
Bill Moggridge
Gael Towey
Ann Willoughby
2013
John Bielenberg
William Drenttel
Tobias Frere-Jones
Jessica Helfand
Jonathan Hoefler
Stefan Sagmeister
Lucille Tenazas
Wolfgang Weingart
2011
Ralph Caplan
Elaine Lustig Cohen
Armin Hofmann
Robert Vogele
2010
Steve Frykholm
John Maeda
Jennifer Morla
2000s
2009
Pablo Ferro
Carin Goldberg
Doyald Young
2008
Gail Anderson
Clement Mok
LeRoy Winbush
2007
Edward Fella
Ellen Lupton
Bruce Mau
Georg Olden
2006
Michael Bierut
Rick Valicenti
Lorraine Wild
2005
Bart Crosby
Meredith Davis
Steff Geissbuhler
2004
Joseph Binder
Charles Coiner
Richard, Jean and Patrick Coyne
James Cross
Sheila Levrant de Bretteville
Jay Doblin
Joe Duffy
Martin Fox
Caroline Warner Hightower
Kit Hinrichs
Walter Landor
Philip Meggs
James Miho
Silas Rhodes
Jack Stauffacher
Alex Steinweiss
Deborah Sussman
Edward Tufte
Fred Woodward
Richard Saul Wurman
2003
B. Martin Pedersen
Woody Pirtle
2002
Robert Brownjohn
Chris Pullman
2001
Samuel Antupit
Paula Scher
2000
P. Scott Makela and Laurie Haycock Makela
Fred Seibert
Michael Vanderbyl
1990s
1999
Tibor Kalman
Steven Heller
Katherine McCoy
1998
Louis Danziger
April Greiman
1997
Lucian Bernhard
Zuzana Licko and Rudy VanderLans
1996
Cipe Pineles
George Lois
1995
Matthew Carter
Stan Richards
Ladislav Sutnar
1994
Muriel Cooper
John Massey
1993
Alvin Lustig
Tomoko Miho
1992
Rudolph de Harak
George Nelson
Lester Beall
1991
Colin Forbes
E. McKnight Kauffer
1990
Alvin Eisenman
Frank Zachary
1980s
Paul Davis, 1989
Bea Feitler, 1989
William Golden, 1988
George Tscherny, 1988
Alexey Brodovitch, 1987
Gene Federico, 1987
Walter Herdeg, 1986
Seymour Chwast, 1985
Leo Lionni, 1984
Herbert Matter, 1983
Massimo Vignelli and Lella Vignelli, 1982
Saul Bass, 1981
Herb Lubalin, 1980
1970s
Ivan Chermayeff and Thomas Geismar, 1979
Lou Dorfsman, 1978
Charles and Ray Eames, 1977
Henry Wolf, 1976
Jerome Snyder, 1976
Bradbury Thompson, 1975
Robert Rauschenberg, 1974
Richard Avedon, 1973
Allen Hurlburt, 1973
Philip Johnson, 1973
Milton Glaser, 1972
Will Burtin, 1971
Herbert Bayer, 1970
1960s
Dr. Robert L. Leslie, 1969
Dr. Giovanni Mardersteig, 1968
Romana Javitz, 1967
Paul Rand, 1966
Leonard Baskin, 1965
Josef Albers, 1964
Saul Steinberg, 1963
William Sandberg, 1962
Paul A. Bennett, 1961
Walter Paepcke, 1960
1950s
May Massee, 1959
Ben Shahn, 1958
Dr. M. F. Agha, 1957
Ray Nash, 1956
P. J. Conkwright, 1955
Will Bradley, 1954
Jan Tschichold, 1954
George Macy, 1953
Joseph Blumenthal, 1952
Harry L. Gage, 1951
Earnest Elmo Calkins, 1950
Alfred A. Knopf, 1950
1940s
Lawrence C. Wroth, 1948
Elmer Adler, 1947
Stanley Morison, 1946
Frederic G. Melcher, 1945
Edward Epstean, 1944
Edwin and Robert Grabhorn, 1942
Carl Purington Rollins, 1941
Thomas M. Cleland, 1940
1930s
William A. Kittredge, 1939
Rudolph Ruzicka, 1935
J. Thompson Willing, 1935
Henry Lewis Bullen, 1934
Porter Garnett, 1932
Dard Hunter, 1931
Henry Watson Kent, 1930
1920s
William A. Dwiggins, 1929
Timothy Cole, 1927
Frederic W. Goudy, 1927
Burton Emmett, 1926
Bruce Rogers, 1925
John G. Agar, 1924
Stephen H. Horgan, 1924
Daniel Berkeley Updike, 1922
Norman T. A. Munder, 1920
See also
Art Directors Club Hall of Fame
Masters Series (School of Visual Arts)
References
Design awards
AIGA | List of AIGA medalists | [
"Engineering"
] | 1,117 | [
"Design",
"Design awards"
] |
11,127,125 | https://en.wikipedia.org/wiki/Georgia%20Navigator | Georgia Navigator (sometimes also as Georgia NaviGAtor) is an Advanced Traffic Management System used in the U.S. state of Georgia. It is operated by the Georgia Department of Transportation (GDOT), and was first activated in April 1996, just before the 1996 Summer Olympics in Atlanta.
Metro Atlanta
Most of the Georgia Navigator system is installed in metro Atlanta, where at least half of the state's population lives. It includes traffic cameras, changeable message signs, ramp meters, and a traffic speed sensor system. Unlike other ITS deployments around the world, Georgia Navigator almost exclusively uses video detection cameras to gather traffic flow data, as opposed to traditional sensors embedded in the pavement. Additionally, a portion of the system (Georgia 400 and parts of I-16, I-75 and I-85 outside of Atlanta) receives traffic flow information from floating car data gathered by anonymously tracking cell phones. All devices are connected by buried optical fiber, which in turn links to GDOT's command center at its Transportation Management Center (TMC) in Atlanta.
Beyond Atlanta
Outside of Atlanta, Georgia Navigator components were installed on Interstate 475 near Macon during its expansion from four lanes to six lanes. The Macon system is connected to the Atlanta TMC via fiber, allowing communication between the two centers. Georgia Navigator also has weather stations with pavement sensors mainly in the mountain and coastal areas of Georgia. Traffic sensors are installed on official evacuation routes, but are only activated during a hurricane approaching the Georgia coast or eastern Florida panhandle.
Distribution of information
Information from the system is distributed to the public through a variety of outlets. GDOT administers two of its own websites (a standard version and a customizable "My Navigator" version), and operates a 511 telephone information service. Additionally, Navigator data is used by several other companies, who typically enhance and package the data for sale to various media outlets or private websites. An example of a third-party use of Navigator data is The Weather Channel, which shows current traffic conditions (provided by Traffic Pulse) during the local forecast portion of its broadcast.
Deployment progress
Georgia Navigator is in the midst of a large expansion program. The system covers nearly all of the Perimeter (Interstate 285) highway around Atlanta, and all Interstates within and several miles beyond it. It also covers the freeway portions of Peachtree Industrial Boulevard (SR 141) and Langford Parkway (SR 166), as well as Georgia 400 from I-285 to the Alpharetta area. As of May 2009, work on I-285 is nearing completion on the south side from I-85 east to I-75. Other expansion projects underway include US 78, GA 400 inside I-285, and I-85 in the Union City / Peachtree City area. By late 2009, nearly all freeways in metro Atlanta will have full Navigator coverage.
Several ramp meters began operation in 2008 and 2009 in metro Atlanta. Some of the first corridors to be metered were I-285, I-85 in Gwinnett County, I-75 in Cobb County, and I-575. Unlike early systems, which used induction loops, the new meters employ video detection cameras to sense the density of traffic and allow an optimized rate of vehicles to proceed onto the freeway.
On local roads, Navigator includes cameras and signs that are operated by local county and city governments, though coverage is not nearly as dense as the freeway portion of the system. The local road devices also feed into the Georgia Navigator system and are controlled by a common software platform. Traffic light operation is not currently part of the system, but work to integrate the signals into Navigator is underway.
References
External links
Georgia Navigator web site
My Navigator (Customizable version of website)
Transportation in Georgia (U.S. state)
Intelligent transportation systems
Road traffic management | Georgia Navigator | [
"Technology"
] | 771 | [
"Information systems",
"Warning systems",
"Intelligent transportation systems",
"Transport systems"
] |
11,127,278 | https://en.wikipedia.org/wiki/Pair-instability%20supernova | A pair-instability supernova is a type of supernova predicted to occur when pair production, the production of free electrons and positrons in the collision between atomic nuclei and energetic gamma rays, temporarily reduces the internal radiation pressure supporting a supermassive star's core against gravitational collapse. This pressure drop leads to a partial collapse, which in turn causes greatly accelerated burning in a runaway thermonuclear explosion, resulting in the star being blown completely apart without leaving a stellar remnant behind.
Pair-instability supernovae can only happen in stars with a mass range from around 130 to 250 solar masses and low to moderate metallicity (low abundance of elements other than hydrogen and helium – a situation common in Population III stars).
Physics
Photon emission
Photons given off by a body in thermal equilibrium have a black-body spectrum with an energy density proportional to the fourth power of the temperature, as described by the Stefan–Boltzmann law. Wien's law states that the wavelength of maximum emission from a black body is inversely proportional to its temperature. Equivalently, the frequency, and hence the energy, of the peak emission is directly proportional to the temperature.
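For reference, the two laws cited above can be written compactly as

\[
u = a T^{4}, \qquad \lambda_{\text{peak}} = \frac{b}{T}, \qquad b \approx 2.9 \times 10^{-3}\ \mathrm{m\,K},
\]

where $u$ is the radiation energy density, $a$ is the radiation constant and $\lambda_{\text{peak}}$ is the wavelength of maximum emission; equivalently, the energy of the peak photons scales as $E_{\text{peak}} \propto k_{\mathrm B} T$.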
Photon pressure in stars
In very massive, hot stars with interior temperatures above roughly 3×10^8 K, photons produced in the stellar core are primarily in the form of very high-energy gamma rays. The pressure from these gamma rays fleeing outward from the core helps to hold up the upper layers of the star against the inward pull of gravity. If the level of gamma rays (the energy density) is reduced, then the outer layers of the star will begin to collapse inwards.
Gamma rays with sufficiently high energy can interact with nuclei, electrons, or one another. One of those interactions is to form pairs of particles, such as electron-positron pairs, and these pairs can also meet and annihilate each other to create gamma rays again, all in accordance with Albert Einstein's mass-energy equivalence equation E = mc^2.
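A concrete consequence of this equivalence is the threshold for pair production: a gamma ray can only create an electron-positron pair (in the field of a nucleus or another particle, which takes up the recoil momentum) if its energy satisfies

\[
E_{\gamma} \;\ge\; 2 m_{e} c^{2} \;\approx\; 1.022\ \mathrm{MeV}.
\]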
At the very high density of a large stellar core, pair production and annihilation occur rapidly. Gamma rays, electrons, and positrons are overall held in thermal equilibrium, ensuring the star's core remains stable. By random fluctuation, the sudden heating and compression of the core can generate gamma rays energetic enough to be converted into an avalanche of electron-positron pairs. This reduces the pressure. When the collapse stops, the positrons find electrons and the pressure from gamma rays is driven up again. The population of positrons provides a brief reservoir of new gamma rays as the expanding supernova's core pressure drops.
Pair-instability
As temperatures and gamma ray energies increase, more and more gamma ray energy is absorbed in creating electron–positron pairs. This reduction in gamma ray energy density reduces the radiation pressure that resists gravitational collapse and supports the outer layers of the star. The star contracts, compressing and heating the core, thereby increasing the rate of energy production. This increases the energy of the gamma rays that are produced, making them more likely to interact, and so increases the rate at which energy is absorbed in further pair production. As a result, the stellar core loses its support in a runaway process, in which gamma rays are created at an increasing rate; but more and more of the gamma rays are absorbed to produce electron–positron pairs, and the annihilation of the electron–positron pairs is insufficient to halt further contraction of the core. Finally, the thermal runaway ignites detonation fusion of oxygen and heavier elements. When the temperature reaches the level at which electrons and positrons carry the same energy fraction as gamma rays, pair production cannot increase any further; it is balanced by annihilation. Contraction no longer accelerates, but the core now produces much more energy than prior to collapse, and this results in a supernova: the outer layers of the star are blown away by the sudden large increase of power production in the core.
Stellar susceptibility
For a star to undergo a pair-instability supernova, the increased creation of positron/electron pairs by gamma-ray collisions must reduce outward pressure enough for inward gravitational pressure to overwhelm it. High rotational speed and/or metallicity can prevent this. Stars with these characteristics still contract as their outward pressure drops, but unlike their slower or less metal-rich cousins, they continue to exert enough outward pressure to prevent gravitational collapse.
Stars formed by collision mergers having a metallicity Z between 0.02 and 0.001 may end their lives as pair-instability supernovae if their mass is in the appropriate range.
Very large high-metallicity stars are probably unstable due to the Eddington limit, and would tend to shed mass during the formation process.
Stellar behavior
Several sources describe the stellar behavior for large stars in pair-instability conditions.
Below 100 solar masses
Gamma rays produced by stars of fewer than 100 or so solar masses are not energetic enough to produce electron-positron pairs. Some of these stars will undergo supernovae of a different type at the end of their lives, but the causative mechanisms do not involve pair-instability.
100 to 130 solar masses
These stars are large enough to produce gamma rays with enough energy to create electron-positron pairs, but the resulting net reduction in counter-gravitational pressure is insufficient to cause the core-overpressure required for supernova. Instead, the contraction caused by pair-creation provokes increased thermonuclear activity within the star that repulses the inward pressure and returns the star to equilibrium. It is thought that stars of this size undergo a series of these pulses until they shed sufficient mass to drop below 100 solar masses, at which point they are no longer hot enough to support pair-creation. Pulsing of this nature may have been responsible for the variations in brightness experienced by Eta Carinae in 1843, though this explanation is not universally accepted.
130 to 250 solar masses
For very high-mass stars, with mass at least 130 and up to perhaps roughly 250 solar masses, a true pair-instability supernova can occur. In these stars, the first time that conditions support pair production instability, the situation runs out of control. The collapse proceeds to efficiently compress the star's core; the overpressure is sufficient to allow runaway nuclear fusion to burn it in several seconds, creating a thermonuclear explosion. With more thermal energy released than the star's gravitational binding energy, it is completely disrupted; no black hole or other remnant is left behind. This is predicted to contribute to a "mass gap" in the mass distribution of stellar black holes. (This "upper mass gap" is to be distinguished from a suspected "lower mass gap" in the range of a few solar masses.)
In addition to the immediate energy release, a large fraction of the star's core is transformed to nickel-56, a radioactive isotope which decays with a half-life of 6.1 days into cobalt-56. Cobalt-56 has a half-life of 77 days and then further decays to the stable isotope iron-56 (see Supernova nucleosynthesis). For the hypernova SN 2006gy, studies indicate that perhaps 40 solar masses of the original star were released as Ni-56, almost the entire mass of the star's core regions. Collision between the exploding star core and gas it ejected earlier, and radioactive decay, release most of the visible light.
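The decay chain quoted above is straightforward to model. The sketch below uses the 6.1-day and 77-day half-lives from the text; the 40-solar-mass initial nickel mass is the estimate cited for SN 2006gy and is used here purely as an illustrative input.

```python
import numpy as np

# Toy model of the 56Ni -> 56Co -> 56Fe chain that powers the late light curve.

LAMBDA_NI = np.log(2) / 6.1    # decay constant of 56Ni, per day
LAMBDA_CO = np.log(2) / 77.0   # decay constant of 56Co, per day

def nickel_cobalt(t_days, m_ni0):
    """Masses of 56Ni and 56Co remaining t days after the explosion."""
    m_ni = m_ni0 * np.exp(-LAMBDA_NI * t_days)
    # Bateman solution for the intermediate (56Co) member of the chain
    m_co = m_ni0 * LAMBDA_NI / (LAMBDA_CO - LAMBDA_NI) * (
        np.exp(-LAMBDA_NI * t_days) - np.exp(-LAMBDA_CO * t_days))
    return m_ni, m_co

for t in (10, 50, 100, 200):
    ni, co = nickel_cobalt(t, m_ni0=40.0)
    print(f"day {t:3d}: Ni = {ni:5.2f} M_sun, Co = {co:5.2f} M_sun")
```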
250 solar masses or more
A different reaction mechanism, photodisintegration, follows the initial pair-instability collapse in stars of at least 250 solar masses. This endothermic (energy-absorbing) reaction absorbs the excess energy from the earlier stages before the runaway fusion can cause a hypernova explosion; the star then collapses completely into a black hole.
Appearance
Luminosity
Pair-instability supernovae are popularly thought to be highly luminous. This is only the case for the most massive progenitors, since the luminosity depends strongly on the ejected mass of radioactive 56Ni. They can have peak luminosities of over 10^37 W, brighter than type Ia supernovae, but at lower masses peak luminosities are less than 10^35 W, comparable to or less than typical type II supernovae.
Spectrum
The spectra of pair-instability supernovae depend on the nature of the progenitor star. Thus they can appear as type II or type Ib/c supernova spectra. Progenitors with a significant remaining hydrogen envelope will produce a type II supernova, those with no hydrogen but significant helium will produce a type Ib, and those with no hydrogen and virtually no helium will produce a type Ic.
Light curves
In contrast to the spectra, the light curves are quite different from the common types of supernova. The light curves are highly extended, with peak luminosity occurring months after onset. This is due to the extreme amounts of 56Ni expelled, and the optically dense ejecta, as the star is entirely disrupted.
Remnant
Pair-instability supernovae completely destroy the progenitor star and do not leave behind a neutron star or black hole. The entire mass of the star is ejected, so a nebular remnant is produced and many solar masses of heavy elements are ejected into interstellar space.
Pair-instability supernovae candidates
Some supernovae candidates for classification as pair-instability supernovae include:
SN 2006gy
SN 2007bi
SN 2213-1745
SN 1000+0216
SN 2010mb
OGLE14-073
SN 2016aps
SN 2016iet
SN 2018ibb
See also
Pair production
Pulsational pair-instability supernova
Thermal runaway
Type Ia supernova, "thermonuclear supernova"
Intermediate-mass black hole
References
External links
List of possible pair-instability supernovae at The Open Supernova Catalog.
Supernovae
Hypernovae
de:Supernova#Paarinstabilitätssupernova | Pair-instability supernova | [
"Chemistry",
"Astronomy"
] | 2,077 | [
"Supernovae",
"Astronomical events",
"Hypernovae",
"Explosions"
] |
11,127,471 | https://en.wikipedia.org/wiki/Alternaria%20brassicae | Alternaria brassicae is a plant pathogen able to infect most Brassica species including important crops such as broccoli, cabbage and oil seed rape. It causes damping off if infection occurs in younger plants and less severe leaf spot symptoms on infections of older plants.
References
External links
Index Fungorum
USDA ARS Fungal Database
Alternaria brassicae host list : Pathogens of Plants of Hawaii
brassicae
Fungal plant pathogens and diseases
Eudicot diseases
Fungi described in 1880
Fungus species | Alternaria brassicae | [
"Biology"
] | 104 | [
"Fungi",
"Fungus species"
] |
11,127,486 | https://en.wikipedia.org/wiki/Alternaria%20japonica | Alternaria japonica is a fungal plant pathogen. It is a cause of black spot disease in cruciferous plants. It is not a major source of crop loss, but is considered dangerous for plants during the seedling stage.
Symptoms
Alternaria japonica affects its hosts in all stages of life. Infection causes a black or grey sunken lesion with a characteristic yellow border. On the leaves of some plants, infection can cause dark, water-soaked spots. The lesions can be observed anywhere on the plant. In seedlings, fungal lesions on the stem are a cause of damping-off. Infected seeds appear black or grey.
Identification
The fungus can first be detected by visually observing symptoms on infected plants. When cultured on potato carrot agar, it will form a grey or brownish, cobweb-like mycelium. Upon microscopic inspection, A. japonica has septate, branched hyphae and appears colorless to greenish grey. Chlamydospores are multicellular with thick, rough walls. Conidia are solitary and beakless. Sequencing of the ribosomal DNA is commonly used for positive identification because the symptoms and microscopic appearance can resemble those of related species.
Hosts and distribution
Transmission of A. japonica occurs via infected seeds and plant debris or conidia produced by the fungus in wet conditions. The major hosts of this organism are species in Brassicaceae such as cauliflower, turnip, and cabbage. Whether it can infect species outside of this family is unclear. This fungus is not thought to be a cause of disease in humans, unlike other members of Alternaria. Occurrences of black spot caused by A. japonica have been reported worldwide.
Management
Once A. japonica has been established in an area, it can be difficult to eradicate because it can survive in a dormant state in the soil for years. Prevention of the spread of A. japonica by controlling the transportation of infected plant materials and seeds is crucial. Disinfection of seeds is an effective preventative measure. A variety of chemical fungicides can be used to protect seedlings. Integrated pest management practices such as crop rotation with non-cruciferous plants can be beneficial for farmers dealing with this fungus.
References
japonica
Fungal plant pathogens and diseases
Fungi described in 1941
Fungus species | Alternaria japonica | [
"Biology"
] | 478 | [
"Fungi",
"Fungus species"
] |
11,127,500 | https://en.wikipedia.org/wiki/Alternaria%20raphani | Alternaria raphani is a fungal plant pathogen.
References
External links
USDA ARS Fungal Database
raphani
Fungal plant pathogens and diseases
Fungi described in 1944
Fungus species | Alternaria raphani | [
"Biology"
] | 35 | [
"Fungi",
"Fungus species"
] |
11,127,506 | https://en.wikipedia.org/wiki/Alternaria%20triticina | Alternaria triticina is a fungal plant pathogen that causes leaf blight on wheat. A. triticina is responsible for the largest leaf blight issue in wheat and also causes disease in other major cereal grain crops. It was first identified in India in 1962 and still causes significant yield loss to wheat crops on the Indian subcontinent. The disease is caused by a fungal pathogen and causes necrotic leaf lesions and in severe cases shriveling of the leaves.
Hosts and symptoms
Successful inoculation of A. triticina has been repeatedly confirmed in Triticum turgidum subsp. durum (durum wheat) and Triticum aestivum (common wheat, bread wheat), with bread wheat varieties showing more severe infection. Barley, sorghum, triticale, oats, rye, and millet have all been experimentally colonized, but field-level infection is restricted to varieties of durum and bread wheat. Infection will only occur on hosts older than three weeks, with symptoms appearing at 7–8 weeks of age.
Lesions start as oval-shaped scars on the lower leaves and spread to more leaves as the plant grows. Later in the season, the lesions enlarge and coalesce, becoming darker and forming chlorotic margins around the necrotic tissue. If the infection becomes sufficiently severe and widespread, the entire field will exhibit a burnt appearance. Depending on the initial concentration of inoculum and environmental conditions, infection can spread to the leaf sheath, stem, awns, and glumes. Spike infections lead to infected seed. These seeds may exhibit no symptoms, or they may become brown and shriveled. In either case, they carry the pathogen into the next season.
In addition to symptoms derived from nutrient extraction, A. triticina releases several nonspecific toxins, often resulting in chlorotic streaks on the flag leaf.
Lesions are not easily differentiated from those of other leaf blight pathogens. However, A. triticina lesions bear a black powder of conidia rather than the pycnidia or perithecia common to some other leaf-lesion fungi, which distinguishes it from many ascomycete pathogens of wheat and other cereal grains.
Disease cycle
The fungus overwinters largely as seed-borne spores. These asexual spores multiply in the soil and transfer primary inoculum to susceptible plant leaves through direct soil contact or through soil splashed onto the lowest leaves by rainfall or irrigation. At this point the polycyclic nature of A. triticina becomes evident, as conidia, the secondary inoculum, are produced. Conidia germinate within a favorable temperature range given 10 hours of water film on the leaves or 48 hours of humidity greater than 90%, producing 2–4 germ tubes, each with an appressorium and penetration peg. Hyphae infect via direct penetration and proliferate inter- and intracellularly, reaching the deep mesophyll tissue within 72 hours of inoculation. Mycelium spreads to the epidermis and parenchyma tissue but not so deep as to infect the vasculature. Leaf tissue thickness becomes greatly reduced, and chloroplasts of infected cells grow larger and irregularly shaped. The mycelium produces conidiophores that extend out of host stomata and bear conidia either singly or in chains. These conidia serve as secondary inoculum for further infections within the season. Lesions appear 2–5 days after inoculation. Infections in the seed head produce spores for the next season. Conidia in leaf and stem tissue can survive in debris, but their viability is greatly reduced on the soil surface or in hot, wet environments; survival is limited to 2 months on the soil surface and 4 months when buried.
Management
The wide array of chemical, cultural, and biological controls for leaf blight of wheat makes both conventional and organic management reliable and economical. Infection of wheat and other cereal varieties can be prevented through the selection of resistant cultivars and the planting of clean, disease-free seed. Seeds can also be treated with chemical agents or with hot water treatments. Biological methods, such as soil treatments with Bacillus spp. or fluorescent pseudomonads, have proven effective. The fungi Trichoderma viride and T. harzianum and the bacterium Pseudomonas fluorescens all exhibit antagonistic growth against A. triticina hyphae in vitro and have led to significantly higher yields in treated versus control plants infected with the leaf blight.
Once infection is detected, foliar fungicides, such as mancozeb, ziram, copper oxychloride, and propineb, can prevent further infection from secondary inoculum. One common recommendation for control in India is two applications of copper oxychloride plus mancozeb, applied 15 days apart. If overwintering of conidia in plant debris is a concern, leaving residues on the soil surface is recommended, as burying the residue increases the likelihood that the inoculum survives to the next season. Delaying tillage for several months can also help reduce inoculum from plant debris.
Importance
Leaf blight of wheat via Alternaria triticina is “one of the most important foliar diseases of wheat in India”. As the world's second largest producer of wheat, trailing only China, India produces 8.7% of the world wheat supply and dedicates 13% of cultivated land to wheat production. With production levels so important to the agriculture sector of India, leaf blight of wheat is a major concern for growers and other stakeholders. Infection can lead to a 46-75% weight reduction of individual grains with yield losses reaching 60%. In the 1960s, India saw widespread, heavy wheat yield losses due to A. triticina with the introduction of a popular Mexican rust-resistant wheat variety. It is not uncommon to see yield losses of 20% attributed to Alternaria leaf blight of wheat.
The Australian Industry Biosecurity Plan for the Grains Industry gave A. triticina a risk rating of HIGH for 2004 and 2009, and a contingency plan for containment of the disease has been drawn up. The fungus is a quarantine pathogen and has prompted New Zealand, Brazil, and South Africa to impose regulations on the importation of wheat, requiring statements of freedom from the disease before accepting imports. A. triticina has been found in Argentina, southern Italy, parts of southwestern Asia, North Africa, Greece, the Middle East, and several eastern European countries.
References
triticina
Fungal plant pathogens and diseases
Wheat diseases
Fungi described in 1963
Fungus species | Alternaria triticina | [
"Biology"
] | 1,386 | [
"Fungi",
"Fungus species"
] |