| id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
344,140 | https://en.wikipedia.org/wiki/Baroque%20architecture | Baroque architecture is a highly decorative and theatrical style which appeared in Italy in the late 16th century and gradually spread across Europe. It was originally introduced by the Catholic Church, particularly by the Jesuits, as a means to combat the Reformation and the Protestant church with a new architecture that inspired surprise and awe. It reached its peak in the High Baroque (1625–1675), when it was used in churches and palaces in Italy, Spain, Portugal, France, Bavaria and Austria. In the Late Baroque period (1675–1750), it reached as far as Russia, the Ottoman Empire and the Spanish and Portuguese colonies in Latin America. In about 1730, an even more elaborately decorative variant called Rococo appeared and flourished in Central Europe.
Baroque architects took the basic elements of Renaissance architecture, including domes and colonnades, and made them higher, grander, more decorated, and more dramatic. The interior effects were often achieved with the use of quadratura (i.e. trompe-l'œil painting combined with sculpture): the eye is drawn upward, giving the illusion that one is looking into the heavens. Clusters of sculpted angels and painted figures crowd the ceiling. Light was also used for dramatic effect; it streamed down from cupolas, and was reflected from an abundance of gilding. Twisted columns were also often used, to give an illusion of upwards motion, and cartouches and other decorative elements occupied every available space. In Baroque palaces, grand stairways became a central element.
The Early Baroque (1584–1625) was largely dominated by the work of Roman architects, notably the Church of the Gesù by Giacomo della Porta (consecrated 1584), the façade and colonnade of St. Peter's Basilica by Carlo Maderno (completed 1612), the lavish Barberini Palace interiors by Pietro da Cortona (1633–1639), and Santa Susanna (1603), also by Maderno. In France, the Luxembourg Palace (1615–45), built by Salomon de Brosse for Marie de' Medici, was an early example of the style.
The High Baroque (1625–1675) produced major works in Rome by Pietro da Cortona, including the Church of Santi Luca e Martina (1635–50); by Francesco Borromini (San Carlo alle Quattro Fontane, 1634–1646); and by Gian Lorenzo Bernini (the colonnade of St. Peter's Square, 1656–57). In Venice, High Baroque works included Santa Maria della Salute by Baldassare Longhena. Examples in France included the Pavillon de l’Horloge of the Louvre Palace by Jacques Lemercier (1624–1645), the Chapel of the Sorbonne, also by Lemercier (1626–35), and the Château de Maisons by François Mansart (1630–1651).
The Late Baroque (1675–1750) saw the style spread to all parts of Europe, and to the colonies of Spain and Portugal in the New World. National styles became more varied and distinct. The Late Baroque in France, under Louis XIV, was more ordered and classical; examples included the Hall of Mirrors of the Palace of Versailles and the dome of Les Invalides. An especially ornate variant appeared in the early 18th century; it was first called Rocaille in France, then Rococo in Spain and Central Europe. The sculpted and painted decoration covered every space on the walls and ceiling. Its most celebrated architect was Balthasar Neumann, noted for the Basilica of the Fourteen Holy Helpers and the Würzburg Residence (1749–51).
History
Early Baroque (1584–1625)
Baroque architecture first appeared in the late 16th and early 17th century in religious architecture in Rome, as a means to counter the popular appeal of the Protestant Reformation. It was a reaction against the more severe and academic style of earlier churches; it aimed to inspire the common people with the effects of surprise, emotion and awe. To achieve this, it used a combination of contrast, movement, trompe-l'œil and other dramatic and theatrical effects, such as quadratura, the use of painted ceilings that gave the illusion that one was looking up directly at the sky. The new style was particularly favored by the new religious orders, including the Theatines and the Jesuits, who built new churches designed to attract and inspire a wide popular audience.
Rome
One of the first Baroque architects, Carlo Maderno, used Baroque effects of space and perspective in the new façade and colonnade of Saint Peter's Basilica, which was designed to contrast with and complement the gigantic dome built earlier by Michelangelo. Other influential early examples in Rome included the Church of the Gesù by Giacomo della Porta (consecrated 1584), with the first Baroque façade and a highly ornate interior, and Santa Susanna (1603), by Carlo Maderno.
Paris
The Jesuits soon imported the style to Paris. The Church of St-Gervais-et-St-Protais in Paris (1615–1621) had the first Baroque façade in France, featuring, like the Italian Baroque façades, the three superimposed classical orders. The Italian style of palaces was also imported to Paris by Marie de' Medici for her new residence, the Luxembourg Palace (1615–1624) by architect Salomon de Brosse, and for a new wing of the Château of Blois by François Mansart (1635–38). Nicolas Fouquet, the superintendent of finances for the young King Louis XIV, chose the new style for his château at Vaux-le-Vicomte (1658–1661) by Louis Le Vau. Fouquet was later imprisoned by the King because of the extravagant cost of the palace.
Southern Netherlands
In the Southern Netherlands, Baroque architecture was introduced by the Catholic Church in the context of the Counter-Reformation and the Eighty Years' War. After the separation of the Netherlands, Baroque churches were built across the country. One of the first architects was Wenceslas Cobergher (1560–1634), who built the Basilica of Our Lady of Scherpenheuvel from 1609 until 1627 and the Church of Saint Augustine, Antwerp. Other examples include the St. Charles Borromeo Church, Antwerp (1615–1621) and the St. Walburga Church, Bruges (1619–1641), both built by Pieter Huyssens. Later, secular buildings, such as the guildhalls on the Grand-Place in Brussels and several belfries, were constructed as well.
Central Europe
The first example of early Baroque in Central Europe was the Corpus Christi Church in Nieśwież (after 1945 Niasvizh, Belarus) in the Polish–Lithuanian Commonwealth, built by the Jesuits on the Roman model between 1586 and 1593. The church also holds the distinction of being the first domed basilica with a Baroque façade in the Commonwealth and in Eastern Europe.
Another early example in Poland is the Saints Peter and Paul Church, Kraków, built between 1597 and 1619 by the Italian Jesuit architect Giovanni Maria Bernardoni.
High Baroque (1625–1675)
Italy
Pope Urban VIII, who occupied the Papacy from 1623 to 1644, became the most influential patron of the Baroque style. After the death of Carlo Maderno in 1629, Urban named the architect and sculptor Gian Lorenzo Bernini as the chief Papal architect. Bernini created not only Baroque buildings, but also Baroque interiors, squares and fountains, transforming the center of Rome into an enormous theater. Bernini rebuilt the Church of Santa Bibiana and the Church of San Sebastiano al Palatino on the Palatine Hill into Baroque landmarks, planned the Fontana del Tritone in the Piazza Barberini, and created the soaring baldacchino as the centerpiece of St Peter's Basilica.
The High Baroque spread gradually across Italy, beyond Rome. The period saw the construction of Santa Maria della Salute by Baldassare Longhena in Venice (1630–31). Churches were not the only buildings to use the Baroque style. One of the finest monuments of the early Baroque is the Barberini Palace (1626–1629), the residence of the family of Urban VIII, begun by Carlo Maderno and completed and decorated by Bernini and Francesco Borromini. The outside of the Pope's family residence was relatively restrained, but the interiors, and especially the immense fresco on the ceiling of the salon, the Allegory of Divine Providence and Barberini Power painted by Pietro da Cortona, are considered masterpieces of Baroque art and decoration. Curving façades and the illusion of movement were a speciality of Francesco Borromini, most notably in San Carlo alle Quattro Fontane (1634–1646), one of the landmarks of the high Baroque. Another important monument of the period was the Church of Santi Luca e Martina in Rome by Pietro da Cortona (1635–50), in the form of a Greek cross with an elegant dome. After the death of Urban VIII and the brief reign of his successor, the Papacy of Pope Alexander VII from 1655 until 1667 saw more construction of Baroque churches, squares and fountains in Rome by Carlo Rainaldi, Bernini and Carlo Fontana.
France
King Louis XIII had sent the architect Jacques Lemercier to Rome between 1607 and 1614 to study the new style. On his return to France, he designed the Pavillon de l’Horloge of the Louvre Palace (beginning 1626), and, more importantly, the Sorbonne Chapel, the first church dome in Paris. It was designed in 1626, and construction began in 1635. The next important French Baroque project was a much larger dome for the church of Val-de-Grâce, begun in 1645 by Lemercier and François Mansart and finished in 1715. A third Baroque dome was soon added for the Collège des Quatre-Nations (now the Institut de France).
In 1661, following the death of Cardinal Mazarin, the young Louis XIV took direct charge of the government. The arts were put under the direction of his Controller-General of Finances, Jean-Baptiste Colbert. Charles Le Brun, director of the Royal Academy of Painting and Sculpture, was named Superintendent of Buildings of the King, in charge of all royal architectural projects. The Académie royale d'architecture was founded in 1671, with the mission of making Paris, not Rome, the artistic and architectural model for the world.
The first architectural project of Louis XIV was a proposed reconstruction of the façade of the east wing of the Louvre Palace. Bernini, then Europe's most famous architect, was summoned to Paris to submit a design. Beginning in 1664, Bernini proposed several Baroque variants, but in the end the King selected a design by a French architect, Claude Perrault, in a more classical variant of Baroque. This gradually became the Louis XIV style. Louis was soon engaged in an even larger project, the construction of the new Palace of Versailles. The architects chosen were Louis Le Vau and Jules Hardouin-Mansart, and the façades of the new palace were constructed around the earlier Marble Court between 1668 and 1678. The Baroque grandeur of Versailles, particularly the façade facing the garden and the Hall of Mirrors by Jules Hardouin-Mansart, became models for other palaces across Europe.
Late Baroque (1675–1750)
During the period of the Late Baroque (1675–1750), the style appeared across Europe, from England and France to Central Europe and Russia, from Spain and Portugal to Scandinavia, and in the colonies of Spain and Portugal in the New World and the Philippines. It often took different names, and the regional variations became more distinct. A particularly ornate variant appeared in the early 18th century, called Rocaille in France and Rococo in Spain and Central Europe, in which sculpted and painted decoration covered every space on the walls and ceiling. The most prominent architects of this style included Balthasar Neumann, noted for the Basilica of the Fourteen Holy Helpers and the Würzburg Residence (1749–51). These works were among the final expressions of the Rococo or the Late Baroque.
Italy
By the early 18th century, Baroque buildings could be found in all parts of Italy, often with regional variations. Notable examples included the Basilica of Superga, overlooking Turin, by Filippo Juvarra (1717–1731), which was later used as a model for the Panthéon in Paris. The Stupinigi Palace (1729–31) was a hunting lodge and one of the Residences of the Royal House of Savoy near Turin. It too was built by Filippo Juvarra.
France
The Late Baroque period in France saw the evolving decoration of the Palace of Versailles, including the Hall of Mirrors and the Chapel. Later in the period, during the reign of Louis XV, a new, more ornate variant, the Rocaille style, or French Rococo, appeared in Paris and flourished between about 1723 and 1759. The most prominent example was the salon of the Princess in the Hôtel de Soubise in Paris, designed by Germain Boffrand and Charles-Joseph Natoire (1735–40).
England
Christopher Wren was the leading figure of the late Baroque in England, with his reconstruction of St. Paul's Cathedral (1675–1711) inspired by the model of St. Peter's Basilica in Rome, his plan for Greenwich Hospital (begun 1695), and Hampton Court Palace (1690–96). Other British figures of the late Baroque included Inigo Jones, for Wilton House (1632–1647), and two pupils of Wren, John Vanbrugh and Nicholas Hawksmoor, for Castle Howard (1699–1712) and Blenheim Palace (1705–1724).
Lithuania
In the 17th century, Late Baroque buildings in Lithuania were built in an Italian Baroque style; however, in the first half of the 18th century a distinctive Vilnian Baroque style of the Late Baroque formed in the capital, Vilnius (where architecture was taught at the Vilnius Jesuit Academy, Jesuit colleges, and Dominican schools), and spread throughout Lithuania. The most distinctive features of churches built in the Vilnian Baroque style are very tall and slender towers on the main façades with differently decorated compartments, undulation of cornices and walls, decorativeness in bright colors, and multi-colored marble and stucco altars in the interiors. The Lithuanian nobility funded renovations and constructions of Late Baroque churches, monasteries (e.g. Pažaislis Monastery) and their personal palaces (e.g. Sapieha Palace, Slushko Palace, Minor Radvilos Palace).
Notable architects who built buildings in a Late Baroque style in Lithuania are Johann Christoph Glaubitz, Thomas Zebrowski, Pietro Perti (who cooperated with the painters Michelangelo Palloni and Giovanni Maria Galli), Giambattista Frediani, Pietro Puttini, Carlo Puttini, Jan Zaor, G. Lenkiewicz, Abraham Würtzner, Jan Valentinus Tobias Dyderszteyn, P. I. Hofer, etc.
Central Europe
Many of the most extraordinary buildings of the Late Baroque were constructed in Austria, Germany, and Czechia. In Austria, the leading figure was Fischer von Erlach, who built the Karlskirche, the largest church of Vienna, to glorify the Habsburg emperors. These works sometimes borrowed elements from Versailles combined with elements of the Italian Baroque to create grandiose new effects, as in the Schwarzenberg Palace (1715). Johann Lukas von Hildebrandt used grand stairways and ellipses to achieve his effects at the upper and lower Belvedere Palace in Vienna (1714–1722). At the Abbey of Melk, Jakob Prandtauer used an abundance of polychrome marble and stucco, statuary and ceiling paintings to achieve harmonious and highly theatrical effects.
Another important figure of German Baroque was Balthasar Neumann (1687–1753), whose works included the Würzburg Residence for the Prince-Bishops of Würzburg, with its famous staircase.
In Bohemia, the leading Baroque architect was Christoph Dientzenhofer, whose buildings featured complex curves, counter-curves and elliptical forms, making Prague, like Vienna, a capital of the late Baroque.
Spain
Political and economic crises in the 17th century largely delayed the arrival of the Baroque in Spain until the late period, though the Jesuits strongly promoted it. Its early characteristics were a lavish exterior contrasting with a relatively simple interior and multiple spaces. Lighting in the interior was carefully planned to give an impression of mystery. In the early 18th century, notable Spanish examples included the new west façade of Santiago de Compostela Cathedral (1738–50), with its spectacular towers, by Fernando de Casas Novoa. In Seville, Leonardo de Figueroa was the creator of the Palacio de San Telmo, with a façade inspired by the Italian Baroque. The most ornate works of the Spanish Baroque were made by José Benito de Churriguera in Madrid and Salamanca. In his work, the buildings are nearly overwhelmed by ornament of gilded wood, gigantic twisting columns, and sculpted vegetation. His two brothers, Joaquín and Alberto, also made important, if less ornamented, contributions to what became known simply as the Churrigueresque style.
Latin America and North America
The Baroque style was imported into Latin America in the 17th century by the Spanish and the Portuguese, particularly by the Jesuits for the construction of churches. The style was sometimes called Churrigueresque, after the family of Baroque architects in Salamanca. A particularly fine example is Zacatecas Cathedral in Zacatecas City, in north-central Mexico, with its lavishly sculpted façade and twin bell towers. Another important example is San Cristóbal de las Casas in Mexico. A notable example in Brazil is the São Bento Monastery in Rio de Janeiro, begun in 1617, with additional decoration after 1668. The Metropolitan Tabernacle of the Mexico City Metropolitan Cathedral, to the right of the main cathedral, was built by Lorenzo Rodríguez between 1749 and 1760 to house the archives and vestments of the archbishop and to receive visitors.
Portuguese colonial architecture was modeled on the architecture of Lisbon, and differed from the Spanish style. The most notable architect in Brazil was Aleijadinho, a native of Brazil who was half-Portuguese and self-taught. His most famous work is the Church of Saint Francis of Assisi in Ouro Preto.
Characteristics
Baroque architecture often used visual and theatrical effects, designed to surprise and awe the viewer:
domes were a common feature. Their interiors were often painted with a sky filled with angels and sculpted sunbeams, suggesting glory or a vision of heaven. Pear-shaped domes were sometimes used in the Bavarian, Czech, Polish and Ukrainian Baroque
quadratura. Paintings in trompe-l'œil of angels and saints in the dome and on the ceiling, combined with stucco frames or decoration, give the illusion of three dimensions and of looking through the ceiling to the heavens. Sometimes painted or sculpted figures of Atlantes appear to be holding up the ceiling.
grand stairways. Stairways often occupied a central place and were used for dramatic effect, winding upwards in stages, giving changing views from different levels, and serving as a setting for ceremonies.
cartouches in elaborate forms and sculpted frames break up the surfaces and add three-dimensional effects to the walls.
mirrors to give the impression of depth and greater space, particularly when combined with windows, as in the Hall of Mirrors at the Palace of Versailles.
incomplete architectural elements, such as frontons with sections missing, causing sections to merge and disorienting the eye.
chiaroscuro. Use of strong contrasts of darkness and light for dramatic effect.
overhead sculpture. Putti or figures on or just below the ceiling, made of wood (often gilded), plaster or stucco, marble or faux finishing, giving the impression of floating in the air.
Solomonic columns, which gave an illusion of motion.
elliptical or oval spaces, eliminating right angles. Sometimes an oval nave was surrounded by radiating circular chapels. This was a distinctive feature of the Basilica of the Fourteen Holy Helpers of Balthasar Neumann.
Plans
Major Baroque architects and works, by country
Italy
Carlo Maderno – Santa Susanna (1595–1603); St. Peter's Basilica and Sant'Andrea della Valle, Rome
Pietro da Cortona – Santa Maria della Pace (1656–68), Santi Luca e Martina, Rome
Gian Lorenzo Bernini – Saint Peter's Square, Palazzo Barberini, Sant'Andrea al Quirinale, Rome
Francesco Borromini – San Carlo alle Quattro Fontane, Sant'Ivo alla Sapienza, Rome
Carlo Fontana – San Marcello al Corso (1692–1697)
Francesco de Sanctis – Spanish Steps (1723)
Luigi Vanvitelli – Caserta Palace (begun 1752)
Guarino Guarini – Palazzo Carignano in Turin (1679), Chapel of the Holy Shroud, Turin
Filippo Juvarra – Basilica of Superga, Turin (1717–31)
France
Salomon de Brosse – Luxembourg Palace (1615–1645)
Louis Le Vau – Vaux-le-Vicomte (1658–1661), Collège des Quatre-Nations (1662–1688), Cour Carrée of the Louvre Palace (1668–1680)
Jules Hardouin-Mansart – domed chapel of Les Invalides (finished 1708); garden façade and Hall of Mirrors of the Palace of Versailles
Robert de Cotte – Chapel of the Palace of Versailles (completed 1710), Grand Trianon (1687)
England
Christopher Wren – St. Paul's Cathedral (1675–1711), Hampton Court Palace (1690–1696), Greenwich Hospital (begun 1695)
Nicholas Hawksmoor and John Vanbrugh – Castle Howard (1699–1712); Blenheim Palace (1705–1724)
James Gibbs – Radcliffe Camera, Oxford (1739–49)
The Netherlands
Jacob Van Campen – Royal Palace of Amsterdam (then city hall) (begun 1648), Noordeinde Palace (1640) and Mauritshuis (1641)
Lieven de Key – City Hall (Haarlem) (1620)
Pieter Post – Huis ten Bosch (1645–1652) and Maastricht City Hall (1686)
Maurits Post – Soestdijk Palace (1650)
Daniël Stalpaert – Het Scheepvaartmuseum (1655–1656) and Trompenburgh (1684)
Daniel Marot – Het Loo Palace (1684–1686)
Bartholomeus van Bassen – Nieuwe Kerk (The Hague) (1656)
Pierre Cuypers – Oudenbosch Basilica (1892)
Germany
Agostino Barelli – Nymphenburg Palace, Munich (1664–1675)
Matthäus Daniel Pöppelmann – Zwinger, Dresden (1697–1716)
George Bähr – Dresden Frauenkirche (1722–1738; destroyed in 1945, rebuilt 1994–2005)
Johann Arnold Nering – Charlottenburg Palace, Berlin (1695–1713)
Balthasar Neumann – Basilica of the Fourteen Holy Helpers (1743–1772), Würzburg Residence (1735)
Johann Dientzenhofer and Johann Lukas von Hildebrandt – Schloss Weißenstein in Pommersfelden, Bavaria (1711–1718)
Augustusburg Palace
Austria
Johann Lukas von Hildebrandt – Upper Belvedere Palace in Vienna (1721–23)
Johann Bernhard Fischer von Erlach – University Church, Salzburg (begun 1696); Karlskirche, Vienna (1716–37); Austrian National Library (begun 1722)
Johann Bernhard Fischer von Erlach and Johann Lukas von Hildebrandt – Palais Auersperg in Vienna
Jakob Prandtauer and Josef Munggenast – Abbey of Melk (1702–1738)
Santino Solari – Salzburg Cathedral (façade and interior of dome) (1614–1628)
Czech Republic
Jean-Baptiste Mathey – Troja Palace, Prague (1679–1691)
Christoph Dientzenhofer – Břevnov Monastery, Prague (1708–1721); Church of St Nicholas, Prague (1704–55)
Kilian Ignaz Dientzenhofer – Kinský Palace (Prague) (1755–1765)
Slovakia
Pietro Spozzo – Jesuit Church of Trnava (1629–37)
Hungary
András Mayerhoffer – Gödöllő Palace near Budapest (begun 1733)
Ignác Oraschek and Márton Wittwer – Esterházy Palace in Fertőd
Romania
Johann Eberhard Blaumann – Bánffy Palace in Cluj (1774–75)
Johann Lukas von Hildebrandt – Bishopric Palace in Oradea (1736–1750)
Joseph Emanuel Fischer von Erlach – St. George's Cathedral, Timișoara
Anton Erhard Martinelli – Holy Trinity Cathedral, Blaj (1738–1749)
Samuel von Brukenthal – Brukenthal Palace in Sibiu (1777–87)
Franz Burger – Brukenthal High School in Sibiu (1779–81)
Jesuit Church, Sibiu (1726–33)
Gheorghe Lazăr National College (Sibiu)
Poland
Giovanni Maria Bernardoni – Saints Peter and Paul Church, Kraków (1597–1619)
Joseph Emanuel Fischer von Erlach – Chapel of the Holy Sacrament, Wroclaw Cathedral
Karl Friedrich Pöppelmann – Blue Palace in Warsaw (1728)
Tylman van Gameren – Krasinski Palace, Warsaw (1677–1682)
Johann Lukas von Hildebrandt – Royal Castle, Warsaw (1711)
Friedrich Karcher – Enlargement of Royal Castle, Warsaw (1700)
Augustyn Wincenty Locci and Andreas Schlüter – Reconstruction of Wilanów Palace (1677–1696)
Portugal
João Antunes – Church of Santa Engrácia, Lisbon (now National Pantheon of Portugal; begun 1681)
Nicolau Nasoni – Clérigos Church in Porto (1732–1763); Mateus Palace in Vila Real (1739–1743)
Portuguese Colonial Baroque
Aleijadinho – Church of Saint Francis of Assisi (Ouro Preto), Brazil (1771–1794)
Basilica and Convent of Nossa Senhora do Carmo, Recife, Brazil (1665–1767)
Church of St. Anne, Talaulim, India (1577–1695)
Church of Saint Dominic, Macau, China (1587)
Spain
Fernando de Casas Novoa – West façade of Cathedral of Santiago de Compostela (1738–1750)
Alonso Cano – Baroque additions to Granada Cathedral (1667)
Leonardo de Figueroa – Palacio de San Telmo, Seville (1682)
José Benito de Churriguera – San Cayetano Church, Madrid; Altar of the Church of San Esteban, Salamanca (1693)
Francisco Hurtado Izquierdo – Granada Charterhouse, Granada (1727–1764)
Spanish American Baroque
Lorenzo Rodríguez – Metropolitan Tabernacle of Mexico City Metropolitan Cathedral, Mexico (1749–1760)
Cathedral Basilica of Zacatecas in Zacatecas City, Mexico (1729–1772)
Spaniard José de la Cruz, Antonio de Nava and Luigi Tomassi – Cathedral of Chihuahua, Mexico (1725–1760)
Convent of San Francisco, Madero Street, Mexico City, built around the 16th century
Flemish Jean-Baptiste Gilles and Diego Martínez de Oviedo – Iglesia de la Compañía de Jesús, Cusco, Peru (1668)
Juan Miguel de Veramendi, Juan Correa, Miguel Gutiérrez Sencio – Cusco Cathedral, in Cusco, Peru (1560–1664)
Palacio de Torre Tagle, in Lima, Peru (1715)
Cathedral Basilica of Lima, Peru (1535–1649)
Basilica and Convent of Nuestra Señora de la Merced, Lima, Peru (1535)
Basilica of San Francisco, La Paz, Bolivia (1743–1772)
Havana Cathedral in Cuba, built between 1748 and 1777
Basilica Menor de San Francisco de Asís in Havana, Cuba, built between 1580 and 1738.
Nordic Countries
Elias David Häusser (Denmark) – Christiansborg Palace (1st)
Lambert van Haven (Denmark) – Church of Our Saviour, Copenhagen (1682–1747)
Nicodemus Tessin the Elder (Sweden) – Drottningholm Palace (1662–1681) – Kalmar Cathedral in Småland, Sweden (1660–1703)
Russia
Giovanni Maria Fontana – Menshikov Palace (Saint Petersburg) (1710–1720s)
Georg Johann Mattarnovi – Kunstkamera in Petrine Baroque, Saint Petersburg, completed by 1727
Bartolomeo Francesco Rastrelli – Façade of Smolny Convent, Saint Petersburg (1748–1754); Stroganov Palace (1753–1754); Vorontsov Palace (Saint Petersburg) (1749–1757); Winter Palace in Saint Petersburg (1754–1762)
Domenico Trezzini – Peter and Paul Fortress, Saint Petersburg (1706–1740)
Mikhail Zemtsov – Transfiguration Cathedral (Saint Petersburg) (1743–54)
Ukraine
Francesco Bartolomeo Rastrelli – Mariinskyi Palace in Kyiv (1744–1752); St Andrew's Church, Kyiv (1744–1767), both in Elizabethan style
By the command of Yakiv Lyzohub – House of Lyzohub (1690s); Catherine's Church, Chernihiv (1715)
St. Nicholas Cathedral, Nizhyn (17th century), the prototype of Ukrainian baroque
Most of Vydubychi Monastery (17th–19th century)
Portions of Kyiv Pechersk Lavra (17th–18th century)
Malta
Bontadino de Bontadini – Wignacourt Aqueduct (1612–1615) and Wignacourt Arch
Francesco Buonamici – Church of the Jesuits, Valletta (1635)
Mattia Preti – Saint John's Co-Cathedral (1660s); Our Lady of Victories Church, Valletta (1752)
Lorenzo Gafà – Saint Lawrence's Church, Vittoriosa (1681–97); St. Paul's Cathedral, Mdina (1696–1705); the Cathedral of the Assumption, Gozo (1697–1711)
Andrea Belli – Auberge de Castille (1741–45)
See also
List of Baroque architecture
List of Baroque residences
Baroque music
Baroque sculpture
Ottoman Baroque architecture
References
External links
Siberian Baroque
Architectural styles
Architectural history
16th-century architecture
17th-century architecture
18th-century architectural styles
18th century in the arts
Architecture
Architecture in Italy | Baroque architecture | [
"Engineering"
] | 6,413 | [
"Architectural history",
"Architecture"
] |
344,142 | https://en.wikipedia.org/wiki/Audio%20frequency | An audio frequency or audible frequency (AF) is a periodic vibration whose frequency is audible to the average human. The SI unit of frequency is the hertz (Hz). It is the property of sound that most determines pitch.
The generally accepted standard hearing range for humans is 20 to 20,000 Hz. In air at atmospheric pressure, these represent sound waves with wavelengths of about 17 metres to 1.7 centimetres. Frequencies below 20 Hz are generally felt rather than heard, assuming the amplitude of the vibration is great enough. Sound frequencies above 20 kHz are called ultrasonic.
Sound propagates as mechanical vibration waves of pressure and displacement in air or other substances. In general, the frequency components of a sound determine its "color", or timbre. When speaking of the frequency (in the singular) of a sound, it means the property that most determines its pitch. Higher pitches have higher frequencies, and lower pitches have lower frequencies.
The frequencies an ear can hear are limited to a specific range. The audible frequency range for humans is typically given as between about 20 Hz and 20,000 Hz (20 kHz), though the high-frequency limit usually decreases with age. Other species have different hearing ranges. For example, some dog breeds can perceive vibrations up to 60,000 Hz.
In many media, such as air, the speed of sound is approximately independent of frequency, so the wavelength of the sound waves (distance between repetitions) is approximately inversely proportional to frequency.
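To make the inverse relationship concrete, the short sketch below (a Python illustration, not from the article; the constant and function name are assumed) computes the wavelengths at the limits of the nominal hearing range, taking the speed of sound in air to be roughly 343 m/s at 20 °C:

```python
SPEED_OF_SOUND_AIR = 343.0  # m/s, approximate speed of sound in air at 20 °C

def wavelength_m(frequency_hz: float) -> float:
    """Wavelength in metres for a sound wave of the given frequency in air."""
    return SPEED_OF_SOUND_AIR / frequency_hz

# Limits of the nominal human hearing range:
print(f"20 Hz     -> {wavelength_m(20):.2f} m")             # ~17.15 m
print(f"20,000 Hz -> {wavelength_m(20_000) * 100:.2f} cm")  # ~1.72 cm
```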
Frequencies and descriptions
See also
Absolute threshold of hearing
Hypersonic effect, controversial claim for human perception above 20,000 Hz
Loudspeaker
Musical acoustics
Piano key frequencies
Scientific pitch notation
Whistle register
References
Acoustics
Sound
Sound measurements
Physical quantities
Audio engineering | Audio frequency | [
"Physics",
"Mathematics",
"Engineering"
] | 343 | [
"Physical phenomena",
"Sound measurements",
"Physical quantities",
"Quantity",
"Classical mechanics",
"Acoustics",
"Electrical engineering",
"Audio engineering",
"Physical properties"
] |
344,173 | https://en.wikipedia.org/wiki/Rotational%20energy | Rotational energy or angular kinetic energy is kinetic energy due to the rotation of an object and is part of its total kinetic energy. Looking at rotational energy separately around an object's axis of rotation, the following dependence on the object's moment of inertia is observed:
where
The mechanical work required for or applied during rotation is the torque times the rotation angle. The instantaneous power of an angularly accelerating body is the torque times the angular velocity. For free-floating (unattached) objects, the axis of rotation is commonly around its center of mass.
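As a quick numerical illustration of these two relations (a sketch with arbitrary values; none of the numbers come from the article):

```python
import math

torque = 10.0        # N·m, arbitrary constant torque
angle = 2 * math.pi  # one full revolution, in radians
omega = 3.0          # rad/s, arbitrary angular velocity

work = torque * angle   # W = torque × rotation angle ≈ 62.8 J
power = torque * omega  # P = torque × angular velocity = 30.0 W
print(f"W = {work:.1f} J, P = {power:.1f} W")
```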
Note the close relationship between the result for rotational energy and the energy held by linear (or translational) motion:

$$E_\text{translational} = \tfrac{1}{2} m v^2$$

In the rotating system, the moment of inertia, $I$, takes the role of the mass, $m$, and the angular velocity, $\omega$, takes the role of the linear velocity, $v$. The rotational energy of a rolling cylinder varies from one half of the translational energy (if it is massive, i.e. solid) to the same as the translational energy (if it is hollow).
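The rolling-cylinder claim can be checked numerically. For rolling without slipping, v = ωR, so the ratio of rotational to translational energy reduces to I/(mR²); the sketch below (illustrative names and values, not from the article) evaluates it for a solid cylinder (I = ½mR²) and a thin hollow shell (I = mR²):

```python
def rot_to_trans_ratio(I: float, m: float, R: float) -> float:
    """E_rot / E_trans for a cylinder rolling without slipping (v = omega * R)."""
    return I / (m * R**2)

m, R = 2.0, 0.1  # arbitrary mass (kg) and radius (m)
print(rot_to_trans_ratio(0.5 * m * R**2, m, R))  # solid cylinder: 0.5
print(rot_to_trans_ratio(m * R**2, m, R))        # thin hollow shell: 1.0
```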
An example is the calculation of the rotational kinetic energy of the Earth. As the Earth has a sidereal rotation period of 23.93 hours, it has an angular velocity of $7.29\times10^{-5}$ rad/s. The Earth has a moment of inertia $I = 8.04\times10^{37}$ kg·m². Therefore, it has a rotational kinetic energy of $2.14\times10^{29}$ J.
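The Earth figures above can be reproduced with a few lines (a sketch; the moment-of-inertia value is the commonly quoted figure and is assumed here rather than derived):

```python
import math

sidereal_period_s = 23.93 * 3600         # sidereal day in seconds
omega = 2 * math.pi / sidereal_period_s  # angular velocity, rad/s
I_earth = 8.04e37                        # kg·m², Earth's moment of inertia

E_rot = 0.5 * I_earth * omega**2
print(f"omega = {omega:.3e} rad/s")  # ~7.293e-05 rad/s
print(f"E_rot = {E_rot:.3e} J")      # ~2.14e+29 J
```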
Part of the Earth's rotational energy can also be tapped using tidal power. Friction from the two global tidal waves dissipates this energy in a physical manner, infinitesimally slowing down Earth's angular velocity ω. Due to the conservation of angular momentum, this process transfers angular momentum to the Moon's orbital motion, increasing its distance from Earth and its orbital period (see tidal locking for a more detailed explanation of this process).
See also
Flywheel
List of energy storage projects
Rigid rotor
Rotational spectroscopy
Notes
References
Forms of energy
Rotation | Rotational energy | [
"Physics"
] | 409 | [
"Physical phenomena",
"Physical quantities",
"Classical mechanics",
"Rotation",
"Forms of energy",
"Energy (physics)",
"Motion (physics)"
] |
344,175 | https://en.wikipedia.org/wiki/Purple%20frog | The purple frog (Nasikabatrachus sahyadrensis), Indian purple frog, or pignose frog is a frog species of the genus Nasikabatrachus. It is endemic to the Western Ghats in India. Although the adult frog was formally described in October 2003, the juvenile form of the species was described earlier in 1917.
History of the discovery
The species was described from specimens collected in the Idukki district of Kerala in 2003 by S. D. Biju, from the Tropical Botanic Garden and Research Institute in Palode, India, and Franky Bossuyt, from the Vrije Universiteit Brussel (Free University of Brussels). However, it was already well known to the local people, and several earlier documented specimens and publications had been ignored by the authors of the 2003 paper that describes the genus and species.
The closest living relatives of Nasikabatrachus sahyadrensis are considered to be the Sooglossidae, known only from the Seychelles, an island chain in the Indian Ocean.
Name
The scientific name Nasikabatrachus sahyadrensis is a Latinized portmanteau of the Sanskrit nasika ("nose"), the Greek batrachos ("frog"), and Sahyadri, the native name for the Western Ghats, which form the purple frog's natural habitat.
One of its common names, the purple pig-nosed frog, also makes reference to the elongated morphology of its snout, which is well adapted to the acquisition of fossorial termites.
Description
The body of Nasikabatrachus sahyadrensis appears robust and bloated, and is relatively rounded compared to other, more dorsoventrally flattened frogs. Its flattened underside helps it cling to submerged rocks and boulders, which essentially helps it fight strong currents and remain near the stream banks where it typically resides. Its arms and legs splay out in the standard anuran body form. Compared to other frogs, N. sahyadrensis has a small head and an unusual pointed snout. Adults are typically dark purplish-grey in color. Males are about a third of the length of females. The specimen with which the species was originally described was about 7 cm long from the tip of the snout to the vent. Tadpoles of the species had been described in 1917 by Nelson Annandale and C. R. Narayan Rao as having oral suckers that allowed them to live in torrential streams. Suckers are also present in rheophilic fishes of genera such as Glyptothorax, Travancoria, Homaloptera, and Bhavania, adaptations that are the result of convergent evolution. Some of these fishes co-occur with Nasikabatrachus tadpoles in the hill streams. Its vocalization is a drawn-out harsh call that sounds similar to a chicken clucking. Males of this species exhibit the unique behavior of calling from under a thin layer of soil. Some other burrowing frogs (Myobatrachus gouldii and Arenophyrne rotunda) are known to do this, but those frogs have also been observed to call from the surface, while N. sahyadrensis has not. The frogs may switch to headfirst burrowing, a behavior made possible by their wedge-shaped skull and specially shaped limbs.
Distribution
Earlier thought to be restricted to the south of the Palghat Gap in the Western Ghats, additional records have extended its known range farther north of the gap. The species is now known to be quite widely distributed in the Western Ghats, ranging from the Camel's Hump Hill Range in the north, all the way to the northernmost portions of the Agasthyamalai Hill Range in the south.
Ecology
Like many frogs, the Indian purple frog is well-adapted to its subterranean environment. The frog spends most of its life underground and surfaces only during the monsoon, for a period of two weeks, for mating. Because few field scientists are out during the rainy season, the species was discovered and studied only in recent times. Males emerge to call beside temporary rainwater streams. They mount females and grip them (amplexus) along the vertebral column. The females then carry the male frogs on their backs to the egg-laying sites, which are usually crevices along the fast-flowing streams. Around 3000 eggs are laid in a rock pool, and the tadpoles metamorphose after around 100 days.
Unlike many other burrowing species of frogs that emerge and feed above the ground, this species has been found to forage underground, feeding mainly on termites using its tongue and a special buccal groove.
In 2015, tadpoles of the species were discovered to be traditionally consumed by tribal communities.
The major threat to these amphibians in the Western Ghats of India is the alteration of natural habitats by an ever-increasing human population, resulting in large areas being converted for settlement and agricultural use. Recent studies have also shown frog utilization to be one of the major threats: frogs are used for food, in traditional medicine (for example as a supposed cure for burns, asthma, and other lung ailments), for research purposes, and in the pet trade, all considered major contributors to the species' decline. Tadpole-harvesting was prevalent in the monsoon season, during July–September every year. The Nadukani-Moolamattom-Kulamaav tribal people have developed an indigenous method for collecting these uniquely adapted suctorial tadpoles; usually, about 2–5 individuals would participate in each harvesting event.
Purple frog tadpole numbers also depend on water velocity: in both streams studied, stretches with higher water velocity held greater numbers of tadpoles than lower-velocity stretches, and the tadpoles remained constantly active in the streams.
Due to the increasing population of India, where the purple frog is native, large open areas where purple frogs typically reside are being converted for agricultural and settlement purposes. This has left almost 40% of all amphibians in the Western Ghats threatened with extinction; due to a lack of data, the remaining amphibians are mostly unresearched, with little knowledge of their ecology, biology, defining characteristics, or the threats they face (Thomas & Biju, 2015).
The building of dams during the monsoon season contributes to the loss of microhabitat needed for the survival of the purple frog. The harvesting of tadpoles by indigenous communities also contributes to their endangerment.
References
External links
Edge of Existence page on Nasikabatrachus sahyadrensis
Continental drift and the Sooglossidae
AmphibiaWeb page on Nasikabatrachus sahyadrensis
Thomas, A., & Biju, S. D. (2015). Tadpole consumption is a direct threat to the endangered purple frog, Nasikabatrachus sahyadrensis. Salamandra, 51(3), 252–258.
Endemic fauna of the Western Ghats
Frogs of India
Amphibians described in 2003
EDGE species
Articles containing video clips
Taxa named by Sathyabhama Das Biju
Nasikabatrachidae | Purple frog | [
"Biology"
] | 1,477 | [
"EDGE species",
"Biodiversity"
] |
344,195 | https://en.wikipedia.org/wiki/Living%20fossil | A living fossil is a deprecated term for an extant taxon that phenotypically resembles related species known only from the fossil record. To be considered a living fossil, the fossil species must be old relative to the time of origin of the extant clade. Living fossils commonly are of species-poor lineages, but they need not be. While the body plan of a living fossil remains superficially similar, it is never the same species as the remote relatives it resembles, because genetic drift would inevitably change its chromosomal structure.
Living fossils exhibit stasis (also called "bradytely") over geologically long time scales. Popular literature may wrongly claim that a "living fossil" has undergone no significant evolution since fossil times, with practically no molecular evolution or morphological changes. Scientific investigations have repeatedly discredited such claims.
The minimal superficial changes to living fossils are mistakenly declared as an absence of evolution, but they are examples of stabilizing selection, which is an evolutionary process—and perhaps the dominant process of morphological evolution.
The term is currently deprecated among paleontologists and evolutionary biologists.
Characteristics
Living fossils have two main characteristics, although some have a third:
Living organisms that are members of a taxon that has remained recognizable in the fossil record over an unusually long time span.
They show little morphological divergence, whether from early members of the lineage, or among extant species.
They tend to have little taxonomic diversity.
The first two are required for recognition as a living fossil; some authors also require the third, others merely note it as a frequent trait.
Such criteria are neither well-defined nor clearly quantifiable, but modern methods for analyzing evolutionary dynamics can document the distinctive tempo of stasis. Lineages that exhibit stasis over very short time scales are not considered living fossils; what is poorly-defined is the time scale over which the morphology must persist for that lineage to be recognized as a living fossil.
The term living fossil is much misunderstood, particularly in popular media, where it is often used meaninglessly. In professional literature the expression seldom appears and must be used with far more caution, although it has been used inconsistently.
One example of a concept that could be confused with "living fossil" is that of a "Lazarus taxon", but the two are not equivalent; a Lazarus taxon (whether a single species or a group of related species) is one that suddenly reappears, either in the fossil record or in nature, as if the fossil had "come to life again". In contrast to "Lazarus taxa", a living fossil in most senses is a species or lineage that has undergone exceptionally little change throughout a long fossil record, giving the impression that the extant taxon had remained identical through the entire fossil and modern period. Because of the mathematical inevitability of genetic drift, though, the DNA of the modern species is necessarily different from that of its distant, similar-looking ancestor. They almost certainly would not be able to cross-reproduce, and are not the same species.
The average species turnover time, meaning the time between when a species first is established and when it finally disappears, varies widely among phyla, but averages about 2–3 million years. A living taxon that had long been thought to be extinct could be called a Lazarus taxon once it was discovered to be still extant. A dramatic example was the order Coelacanthiformes, of which the genus Latimeria was found to be extant in 1938. About that there is little debate – however, whether Latimeria resembles early members of its lineage sufficiently closely to be considered a living fossil as well as a Lazarus taxon has been denied by some authors in recent years.
Coelacanths disappeared from the fossil record some 80 million years ago (in the upper Cretaceous period) and, to the extent that they exhibit low rates of morphological evolution, extant species qualify as living fossils. It must be emphasised that this criterion reflects fossil evidence, and is totally independent of whether the taxa had been subject to selection at all, which all living populations continuously are, whether they remain genetically unchanged or not.
This apparent stasis, in turn, gives rise to a great deal of confusion – for one thing, the fossil record seldom preserves much more than the general morphology of a specimen. To determine much about its physiology is seldom possible; not even the most dramatic examples of living fossils can be expected to be without changes, no matter how persistently constant their fossils and the extant specimens might seem. To determine much about noncoding DNA is hardly ever possible, but even if a species were hypothetically unchanged in its physiology, it is to be expected from the very nature of the reproductive processes, that its non-functional genomic changes would continue at more-or-less standard rates. Hence, a fossil lineage with apparently constant morphology need not imply equally constant physiology, and certainly neither implies any cessation of the basic evolutionary processes such as natural selection, nor reduction in the usual rate of change of the noncoding DNA.
Some living fossils are taxa that were known from palaeontological fossils before living representatives were discovered. The most famous examples of this are:
Coelacanthiform fishes (2 species)
Metasequoia, the dawn redwood discovered in a remote Chinese valley (1 species)
Glypheoid lobsters (2 species)
Mymarommatid wasps (10 species)
Eomeropid scorpionflies (1 species)
Jurodid beetles (1 species)
Soft sea urchins (59 species)
All the above are taxa that originally were described as fossils but are now known to include still-extant species.
Other examples of living fossils are single living species that have no close living relatives, but are survivors of large and widespread groups in the fossil record. For example:
Ginkgo biloba
Syntexis libocedrii, the cedar wood wasp
Dinoflagellates (typified on coccoid dinocysts: occasionally calcareous cell remnants)
All of these were described from fossils before later being found alive.
The fact that a living fossil is a surviving representative of an archaic lineage does not imply that it must retain all the "primitive" features (plesiomorphies) of its ancestral lineage. Although it is common to say that living fossils exhibit "morphological stasis", stasis, in the scientific literature, does not mean that any species is strictly identical to its ancestor, much less remote ancestors.
Some living fossils are relicts of formerly diverse and morphologically varied lineages, but not all survivors of ancient lineages necessarily are regarded as living fossils. See for example the uniquely and highly autapomorphic oxpeckers, which appear to be the only survivors of an ancient lineage related to starlings and mockingbirds.
Evolution and living fossils
The term living fossil is usually reserved for species or larger clades that are exceptional for their lack of morphological diversity and their exceptional conservatism, and several hypotheses could explain morphological stasis on a geologically long time-scale. Early analyses of evolutionary rates emphasized the persistence of a taxon rather than rates of evolutionary change. Contemporary studies instead analyze rates and modes of phenotypic evolution, but most have focused on clades that are thought to be adaptive radiations rather than on those thought to be living fossils. Thus, very little is presently known about the evolutionary mechanisms that produce living fossils or how common they might be. Some recent studies have documented exceptionally low rates of ecological and phenotypic evolution despite rapid speciation. This has been termed a "non-adaptive radiation", referring to diversification not accompanied by adaptation into various significantly different niches. Such radiations are an explanation for groups that are morphologically conservative. Persistent adaptation within an adaptive zone is a common explanation for morphological stasis. The subject of very low evolutionary rates, however, has received much less attention in the recent literature than that of high rates.
Living fossils are not expected to exhibit exceptionally low rates of molecular evolution, and some studies have shown that they do not. For example, one article on tadpole shrimp (Triops) notes, "Our work shows that organisms with conservative body plans are constantly radiating, and presumably, adapting to novel conditions... I would favor retiring the term 'living fossil' altogether, as it is generally misleading." Some scientists instead prefer the new term stabilomorph, defined as "an effect of a specific formula of adaptative strategy among organisms whose taxonomic status does not exceed genus-level. A high effectiveness of adaptation significantly reduces the need for differentiated phenotypic variants in response to environmental changes and provides for long-term evolutionary success."
Several recent studies have pointed out that the morphological conservatism of coelacanths is not supported by paleontological data. In addition, it was shown recently that studies concluding that a slow rate of molecular evolution is linked to morphological conservatism in coelacanths are biased by the a priori hypothesis that these species are 'living fossils'. Accordingly, the genome stasis hypothesis is challenged by the recent finding that the genome of the two extant coelacanth species L. chalumnae and L. menadoensis contains multiple species-specific insertions, indicating recent transposable element activity and contribution to post-speciation genome divergence. Such studies, however, challenge only a genome stasis hypothesis, not the hypothesis of exceptionally low rates of phenotypic evolution.
History
The term was coined by Charles Darwin in his On the Origin of Species from 1859, when discussing Ornithorhynchus (the platypus) and Lepidosiren (the South American lungfish): "These anomalous forms may almost be called living fossils; they have endured to the present day, from having inhabited a confined area, and from having thus been exposed to less severe competition."
Other definitions
Long-enduring
A living taxon that lived through a large portion of geologic time.
The Australian lungfish (Neoceratodus forsteri), also known as the Queensland lungfish, is an example of an organism that meets this criterion. Fossils identical to modern specimens have been dated at over 100 million years old. Modern Queensland lungfish have existed as a species for almost 30 million years. The contemporary nurse shark has existed for more than 112 million years, making this species one of the oldest, if not actually the oldest, extant vertebrate species.
Resembles ancient species
A living taxon morphologically and/or physiologically resembling a fossil taxon through a large portion of geologic time (morphological stasis).
Retains many ancient traits
A living taxon with many characteristics believed to be primitive. This is a more neutral definition. However, it does not make it clear whether the taxon is truly old, or it simply has many plesiomorphies. Note that, as mentioned above, the converse may hold for true living fossil taxa; that is, they may possess a great many derived features (autapomorphies), and not be particularly "primitive" in appearance.
Relict population
Any one of the above three definitions, but also with a relict distribution in refuges.
Some paleontologists believe that living fossils with large distributions (such as Triops cancriformis) are not real living fossils. In the case of Triops cancriformis (living from the Triassic until now), the Triassic specimens lost most of their appendages (mostly only carapaces remain), and they have not been thoroughly examined since 1938.
Low diversity
Any of the first three definitions, but the clade also has a low taxonomic diversity (low diversity lineages).
Oxpeckers are morphologically somewhat similar to starlings due to shared plesiomorphies, but are uniquely adapted to feed on parasites and blood of large land mammals, which has always obscured their relationships. This lineage forms part of a radiation that includes Sturnidae and Mimidae, but appears to be the most ancient of these groups. Biogeography strongly suggests that oxpeckers originated in eastern Asia and only later arrived in Africa, where they now have a relict distribution.
The two living species thus seem to represent an entirely extinct and (as Passerida go) rather ancient lineage, as certainly as this can be said in the absence of actual fossils. The latter is probably due to the fact that the oxpecker lineage never occurred in areas where conditions were good for fossilization of small bird bones, but of course, fossils of ancestral oxpeckers may one day turn up enabling this theory to be tested.
Operational definition
An operational definition was proposed in 2017, under which a 'living fossil' lineage has a slow rate of evolution and occurs close to the middle of morphological variation (the centroid of morphospace) among related taxa (i.e. a species is morphologically conservative among relatives). The scientific accuracy of the morphometric analyses used to classify tuatara as a living fossil under this definition has, however, been criticised, which prompted a rebuttal from the original authors.
Examples
Some of these are informally known as "living fossils".
Bacteria
Cyanobacteria – the oldest living fossils, emerging 3.5 billion years ago. They exist as single bacteria or in the form of stromatolites, layered rocks produced by colonies of cyanobacteria.
Protists
The dinoflagellate †Calciodinellum operosum.
The dinoflagellate †Dapsilidinium pastielsii.
The dinoflagellate †Posoniella tricarinelloides.
The coccolithophore Tergestiella adriatica.
Plants
Moss
Pteridophytes
Horsetails – Equisetum
Lycopods
Tree ferns and ferns
Gymnosperms
Conifers
Agathis – kauri in New Zealand, Australia and the Pacific and almasiga in the Philippines
Araucaria araucana – the monkey puzzle tree (as well as other extant Araucaria species)
Metasequoia – dawn redwood (Cupressaceae; related to Sequoia and Sequoiadendron)
Sciadopitys – a unique conifer endemic to Japan known in the fossil record for about 230 million years.
Taiwania cryptomerioides – one of the largest tree species in Asia.
Wollemia tree (Araucariaceae – a borderline example, related to Agathis and Araucaria)
Cycads – although this has been challenged by multiple lines of evidence
Ginkgo tree (Ginkgoaceae)
Welwitschia
Angiosperms
Amborella – a plant from New Caledonia, possibly closest to base of the flowering plants
Magnolia – a genus whose form is little changed since the earliest days of flowering plant evolution in the Cretaceous and possibly earlier
Trapa – water caltrops, seeds, and leaves of numerous extinct species are known all the way back to the Cretaceous.
Nelumbo – several species of lotus flower are known exclusively from fossils dating back to the Cretaceous.
Sassafras – Many fossils of sassafras are known from the Late Cretaceous through the Late Pleistocene.
Platanus – Sycamore fossils are very abundant throughout the Northern Hemisphere, with several extinct species; sycamore leaves and fruits are quite common in plant fossils. Sycamores also exhibit many primitive features, such as their exfoliating bark, which results from a lack of elasticity. Platanus occidentalis fossils are known from the Pliocene and the Pleistocene in North America.
Nyssa – Blackgum fossils date back to the Late Cretaceous, and many extinct species are recorded.
Liriodendron – Fossils from the Cretaceous and the Tertiary are known, with many extinct species. Tulip trees were at one point present in Europe during the Cretaceous and the early Paleocene. Liriodendron tulipifera fossils dating from the Pliocene and Pleistocene were discovered at the Chowan Formation in North Carolina.
Liquidambar – Sweetgums appeared during the mid-to-late Cretaceous, and several extinct species are found throughout Asia, Europe and North America. The genus was once widespread in Europe and Asia, especially during the Miocene. The American sweetgum is itself a living fossil, since fossil specimens dating from the Miocene, Pliocene and Pleistocene have been discovered in the eastern United States.
Fungi
Neolecta
Animals
Vertebrates
Mammals
Aardvark (Orycteropus afer)
Amami rabbit (Pentalagus furnessi)
Nesolagus (Asian striped rabbits)
Chevrotain (Tragulidae)
Chousingha (Tetracerus quadricornis)
Elephant shrew (Macroscelidea)
Giant panda (Ailuropoda melanoleuca)
Baiji (Lipotes vexillifer) (One living species)
Ganges river dolphin (Platanista gangetica)
Indus river dolphin (Platanista minor)
Hawaiian monk seal (Neomonachus schauinslandi)
Koala (Phascolarctos cinereus)
Laotian rock rat (Laonastes aenigmamus)
Monito del monte (Dromiciops gliroides)
Monotremes (the platypus and echidna)
Mountain beaver (Aplodontia rufa)
Okapi (Okapia johnstoni)
Opossums (Didelphidae)
Clouded leopard (Neofelis nebulosa)
Bush dog (Speothos venaticus)
Maned wolf (Chrysocyon brachyurus)
Red panda (Ailurus fulgens)
Solenodon (Solenodon cubanus and Solenodon paradoxus)
Shrew opossum (Caenolestidae)
Spectacled bear (Tremarctos ornatus)
False killer whale (Pseudorca crassidens)
Pygmy right whale (Caperea marginata)
Pacarana (Dinomys branickii)
Rhinoceroses (Rhinocerotidae)
Tapirs (Tapiridae)
Birds
Pelicans (Pelecanus) – form has been virtually unchanged since the Eocene, and is noted to have been even more conserved across the Cenozoic than that of crocodiles.
Acanthisittidae (New Zealand "wrens") – 2 living species, a few more recently extinct. Distinct lineage of Passeriformes.
Broad-billed sapayoa (Sapayoa aenigma) – One living species. Distinct lineage of Tyranni.
Bearded reedling (Panurus biarmicus) – One living species. Distinct lineage of Passerida or Sylvioidea.
Picathartes (rockfowls)
Coliiformes (mousebirds) – 6 living species in 2 genera. Distinct lineage of Neoaves.
Hoatzin (Opisthocomus hoazin) – One living species. Distinct lineage of Neoaves.
Magpie goose (Anseranas semipalmata) – One living species. Distinct lineage of Anseriformes.
Sandhill crane (Antigone canadensis) – Oldest known living bird species.
Seriema (Cariamidae) – 2 living species. Distinct lineage of Cariamae.
Tinamiformes (tinamous) – 50 living species. Distinct lineage of Palaeognathae.
Reptiles
Crocodilia (crocodiles, gavials, caimans and alligators)
Pig-nosed turtle (Carettochelys insculpta)
Hickatee (Dermatemys mawii)
Snapping turtle (Chelydridae) family
Tuatara (Sphenodon punctatus and Sphenodon guntheri)
Asian forest tortoise (Manouria emys)
Impressed tortoise (Manouria impressa)
Sunbeam snake (Xenopeltis hainanensis and Xenopeltis unicolor)
Leatherback sea turtle (Dermochelys coriacea)
Amphibians
Giant salamanders (Cryptobranchus and Andrias)
Hula painted frog (Latonia nigriventer)
Purple frog (Nasikabatrachus sahyadrensis)
Jawless fish
Hagfish (Myxinidae) family
Lamprey (Petromyzontiformes)
Bony fish
Arowana and arapaima (Osteoglossidae)
Bowfin (Amia calva)
Coelacanth (the lobe-finned Latimeria menadoensis and Latimeria chalumnae)
Gar (Lepisosteidae)
Queensland lungfish (Neoceratodus forsteri)
African lungfish (Protopterus sp.)
Sturgeons and paddlefish (Acipenseriformes)
Bichir (family Polypteridae)
Protanguilla palau
Mudskipper (Oxudercinae)
Sharks
Blind shark (Brachaelurus waddi)
Bullhead shark (Heterodontus sp.)
Cow shark (sixgill sharks and relatives) (Hexanchidae)
Elephant shark (Callorhinchus milii)
Frilled shark (Chlamydoselachus sp.)
Goblin shark (Mitsukurina owstoni)
Gulper shark (Centrophorus sp.)
Invertebrates
Insects
Helorid wasps (1 living genus, 11 extinct genera)
Mantophasmatodea (gladiators; a few living species)
Meropeidae (3 living species, 4 extinct)
Micromalthus debilis (a beetle)
Mymarommatid wasps (10 living species in genus Palaeomymar)
Nevrorthidae (3 species-poor genera)
Nothomyrmecia (known as the 'dinosaur ant')
Notiothauma reedi (a scorpionfly relative)
Orussidae (parasitic wood wasps; about 70 living species in 16 genera)
Peloridiidae (peloridiid bugs; fewer than 30 living species in 13 genera)
Rhinorhipid beetles (1 living species, Triassic origin)
Rotoitid wasps (2 living species, 14 extinct)
Sikhotealinia zhiltzovae (a jurodid beetle)
Syntexis libocedrii (Anaxyelidae cedar wood wasp)
Cyatta abscondita (most recent common relative of Atta and Acromyrmex ant genera)
Crustaceans
Glypheidea (2 living species: Neoglyphea inopinata and Laurentaeglyphea neocaledonica)
Stomatopods (mantis shrimp)
Polychelida (deep sea blind lobster)
Triops cancriformis (also known as tadpole shrimp; a notostracan crustacean)
Molluscs
Nautilina (e.g., Nautilus pompilius)
Neopilina – Monoplacophoran
Slit snail (e.g., Entemnotrochus rumphii)
Vampyroteuthis infernalis – the vampire squid
Pleurocerid snails
Other invertebrates
Crinoids
Springtails
Horseshoe crabs (only 4 living species of the class Xiphosura, family Limulidae)
Lingula anatina (an inarticulate brachiopod)
Liphistiidae (trapdoor spiders)
Onychophorans (velvet worms)
Rhabdopleura (a hemichordate)
Valdiviathyris quenstedti (a craniforman brachiopod)
Paleodictyon nodosum (unknown)
See also
Relict (biology)
Breeding back
Lazarus taxon
Notes
The baiji is not officially classified as extinct, but as critically endangered (possibly extinct); it also has the unofficial status of being functionally extinct.
References
External links
MyTriops introduces Triops as living fossils
Evolutionary biology concepts
Extinction
Fossils | Living fossil | [
"Biology"
] | 4,837 | [
"Evolutionary biology concepts"
] |
344,269 | https://en.wikipedia.org/wiki/Erogenous%20zone | An erogenous zone (from Greek érōs "love" and English -genous "producing", from Greek -genḗs "born") is an area of the human body that has heightened sensitivity, the stimulation of which may generate a sexual response such as relaxation, sexual fantasies, sexual arousal, and orgasm.
Erogenous zones are located all over the human body, but the sensitivity of each varies, and depends on concentrations of nerve endings that can provide pleasurable sensations when stimulated. The touching of another person's erogenous zone is regarded as an act of physical intimacy. Whether a person finds stimulation in these areas to be pleasurable or objectionable depends on a range of factors, including their level of arousal, the circumstances in which it takes place, the cultural context, the nature of the relationship between the partners, and the partners' personal histories.
Erogenous zones may be classified by the type of sexual response that they generate. Many people are gently aroused when their eyelids, eyebrows, temples, shoulders, hands, arms, and hair are subtly touched. Gently touching or stroking these zones stimulates a partner during foreplay and increases the arousal level. Gentle massage or stroking of the abdominal area, along with kissing or simply touching the navel, can also be a form of stimulation.
Classification
Specific zones
Specific zones are associated with sexual response, and include the lips and nipples in addition to areas of the genitals, notably corona of the glans penis, clitoris and perianal skin. These zones have a high density of innervation, and may have an efficiency of wound healing and a capacity to stimulate generalized cerebral arousal.
Nonspecific zones
In these zones, the skin is similar to normal-haired skin and has the normal high density of nerves and hair follicles. These areas include the sides and back of the neck, the inner arms, the axillae (armpits) and sides of the thorax (chest).
Genital areas
Male
For males, erogenous zones consist of the glans and the penis itself, along with the scrotum, the perineum, and the anus.
Males may also experience sexual stimulation via the prostate, either from anal sex or massage.
Female
For females, parts of the vulva, especially the clitoris, as well as the perineum and anus, are erogenous zones.
While the vagina is not especially sensitive as a whole, its lower third (the area close to the entrance) has concentrations of nerve endings that can provide pleasurable sensations during sexual activity when stimulated. This region, the outer one-third of the vagina along its anterior wall, contains the majority of the vaginal nerve endings, making it more sensitive to touch than the inner two-thirds of the vaginal barrel.
Within the anterior wall of the vagina, there is a patch of ribbed rough tissue which has a texture that is sometimes described as similar to the palate (the roof of a mouth) or a raspberry, and may feel spongy when a woman is sexually aroused. This is the urethral sponge, which may also be the location of an area that some women report is an erogenous zone; this is sometimes called the G-spot. When stimulated, it may lead to sexual arousal, an orgasm, or female ejaculation. The existence of the G-spot and whether or not it is a distinct structure is debated among researchers, as reports of its location vary from woman to woman, it appears to be nonexistent in some women, and scientists commonly believe that it is an extension of the clitoris.
Head
Mouth
The lips and tongue are sensitive and can be stimulated by kissing and licking. Biting at the lip can also provide stimulus.
Neck
The neck, clavicle area and the back of the neck are very sensitive, and can be stimulated by licking, kissing or light caressing. Some people also like being bitten gently in these areas, often to the point that a "hickey", or "love-bite" is formed.
Ears
Some people find whispering or breathing softly in the ear to be pleasurable and relaxing, as well as licking, biting, caressing and/or kissing it, especially the area of and behind the earlobe.
Torso
Chest
The areola and nipple contain Golgi-Mazzoni, Vater-Pacini, and genital corpuscles. No Meissner's corpuscles and few organized nerve endings are present. There are concentrations of nerve tissue in the area of ducts and masses of smooth muscle. The hair surrounding the areola adds additional sensory tissue. The mass of smooth muscle and glandular-duct tissue in the nipple and areola block the development of normal dermal nerve networks which are present in other erogenous regions and the development of special end organs. The entire breast has a network of nerve endings, and it has the same number of nerve endings no matter how large the breast is, so that larger breasts may need more stimulation than smaller ones.
Intense nipple stimulation may result in a surge in the production of oxytocin and prolactin which could have a significant effect on the individual's genitals, even to the point that some people of both sexes can achieve orgasm through nipple stimulation alone. Having the chest, breasts and nipples stimulated manually (hands, fingers) or orally (mouth, lips, teeth, tongue) is a pleasurable experience for many people of both sexes.
Abdomen and navel
Many people find stimulation (kissing, biting, scratching, tickling, caressing) of the abdomen to be pleasurable, especially close to the pubic region. It can cause strong arousal in men and women, in some even stronger than stimulation of the genitals. The navel is one of the many erogenous zones that has heightened sensitivity. In a 1982 study of eroticism in dress entitled "Skin to Skin", Prudence Glynn claimed that the waist symbolized virginity and that it was the first place that a man would touch a woman "when indicating more than a formal courtesy".
The navel and the region below it, when touched by the finger or the tip of the tongue, can produce erotic sensations.
Arms
The skin of the arms, and specifically the softer skin of the inner arms and across the creased mid-arm bend covering the ventral side of the elbow, is highly sensitive to manual or oral stimulation. Caressing with fingers or tongue, more vigorous kneading, and butterfly kissing can initiate arousal and, in some cases, induce clitoral/vaginal orgasm or penile ejaculation without direct contact with the latter areas. The mid-arm bend is especially sensitive due to the thinner skin found there, which makes nerve endings more accessible. Arm sensitivity may be reduced or concentrated to a narrower range by excessive muscularity or obesity on the one hand, or transformed to uncomfortable tenderness by excessive thinness on the other.
Armpits
Some consider the armpits to be an erogenous zone, despite the similarity of the axillae (armpits) to normal-haired skin in both the density of nerves and hair follicles. Exaggerated or anticipated digital (fingers, toes) or oral (mouth, lips, tongue) stimulation is believed to be responsible for the heightened sensual response.
If pheromones exist for humans, they would likely be secreted by a mixture of liquid from the apocrine glands with other organic compounds in the body. George Preti, an organic chemist at the Monell Chemical Senses Center in Philadelphia and Winnefred Cutler of the University of Pennsylvania's psychology department, discovered that women with irregular menstrual cycles became regular when exposed to male underarm extracts. They hypothesized that the only explanation was that underarms contain pheromones, as there was no other explanation for the effects, which mirrored how pheromones affect other mammals.
Fingers
The fingertips have many nerves and are responsive to very light touches, like the brushing of a tongue, light fingernail scratching or teeth stroking. The sides of the fingers are somewhat less sensitive and more ticklish. Both light and firmer touches work well at the junction of the fingers. Human fingertips are the second-most sensitive parts of the body, after the tongue.
Legs
The thighs can be sensitive to touch.
For some people, the backs of the legs and the knees can also be sensitive to an exaggerated tickling touch.
Feet and toes
Because of the concentration of nerve endings in the sole and digits of the human foot—and possibly due to the close proximity between the area of the brain dealing with tactile sensations from the feet and the area dealing with sensations from the genitals—the sensations produced by both the licking of the feet and sucking of toes can be pleasurable to some people. Similarly, massaging the sole of the foot can also produce stimulation. Many people are extremely ticklish in the foot area, especially on the soles.
See also
Foreplay
Human sexuality
Neuroanatomy of intimacy
Partialism
References
External links
Human sexuality
Sexual arousal
Human surface anatomy | Erogenous zone | [
"Biology"
] | 1,875 | [
"Human sexuality",
"Behavior",
"Human behavior",
"Sexuality"
] |
344,374 | https://en.wikipedia.org/wiki/Ryugyong%20Hotel | The Ryugyong Hotel (; sometimes spelled as Ryu-Gyong Hotel), or Yu-Kyung Hotel, is a tall unfinished pyramid-shaped skyscraper in Pyongyang, North Korea. Its name ( "capital of willows") is also one of the historical names for Pyongyang. The building has been planned as a mixed-use development, which would include a hotel.
Construction began in 1987 but was halted in 1992 as North Korea entered a period of economic crisis after the dissolution of the Soviet Union. After 1992, the building stood topped out, but without any windows or interior fittings. In 2008, construction resumed, and the exterior was completed in 2011. The hotel was planned to open in 2012, the centenary of founding leader Kim Il Sung's birth. A partial opening was announced for 2013, but this was cancelled. In 2018, an LED display was fitted to one side, which is used to show propaganda animations and film scenes.
Architecture
The Ryugyong Hotel is tall, making it the most prominent feature of Pyongyang's skyline and the tallest building in North Korea. Construction of the Ryugyong Hotel was intended to be completed in time for the 80th birthday of General Secretary of the Workers' Party of Korea and President Kim Il Sung in 1992; if this had been achieved, it would have held the title of world's tallest hotel. Before Goldin Finance 117 in China, it was considered the tallest unoccupied building in the world.
The building consists of three wings, each measuring long and wide, lightly stepped once but otherwise sloping at 75 degrees to the ground, which converge at a common point to form a pinnacle. The building is topped by a truncated cone wide, consisting of eight floors that are intended to rotate, topped by a further six static floors. The structure was originally intended to house five revolving restaurants, and either 3,000 or 7,665 guest rooms, according to different sources. According to Orascom's Khaled Bichara in 2009, the Ryugyong will not be just a hotel, but rather a mixed-use development, including "revolving restaurant" facilities along with a "mixture of hotel accommodation, apartments and business facilities".
Construction history
Beginning
The plan for a large hotel was reportedly a Cold War response to the completion of the world's then-tallest hotel, the Westin Stamford Hotel in Singapore, in 1986 by the South Korean company SsangYong Group. North Korean leadership envisioned the project as a channel for Western investors to step into the marketplace. A firm, The Ryugyong Hotel Investment and Management, was established to attract a hoped-for $230 million in foreign investment. A representative for the North Korean government promised relaxed oversight, allowing "foreign investors [to] operate casinos, nightclubs or Japanese lounges". North Korean construction firm Baikdoosan Architects & Engineers (also known as Baekdu Mountain Architects and Engineers) began construction on a pyramid‑shaped hotel in 1987.
The hotel was originally scheduled to open in 1992 for the 80th birthday of Kim, but problems with building methods and materials delayed completion. If it had opened on schedule, it would have surpassed the Westin Stamford to become the world's tallest hotel, and would have been the seventh-tallest building in the world; instead, it became the tallest abandoned building in the world.
Halt
In 1992, after the building had reached its full architectural height, work was halted due to the economic crisis in North Korea following the collapse of the Soviet Union. Japanese newspapers estimated the cost of construction was $750 million, consuming 2 percent of North Korea's GDP. For over a decade, the unfinished building sat vacant and without windows, fixtures, or fittings, appearing as a massive concrete shell. A rusting construction crane remained at the top, which the BBC called "a reminder of the totalitarian state's thwarted ambition". According to Marcus Noland, in the late 1990s, the European Chamber of Commerce in Korea inspected the building and concluded that the structure was irreparable. Questions were raised regarding the quality of the building's concrete and the alignment of its elevator shafts, which some sources said were "crooked".
In a 2006 article, ABC News questioned whether North Korea had sufficient raw materials or energy for such a massive project. A North Korean government official told the Los Angeles Times in 2008 that construction was not completed "because [North Korea] ran out of money".
Though mocked-up images of the completed hotel had appeared on North Korean stamps during the initial construction period, the North Korean government ignored the building's existence during the construction hiatus even though it dominated the Pyongyang skyline. The government manipulated official photographs in order to remove the unfinished structure from the skyline, and excluded it from printed maps of Pyongyang.
The halt in construction, the rumours of problems and the mystery about its future led foreign media sources to dub it "the worst building in the world", "Hotel of Doom" and "Phantom Hotel".
Resumption
In April 2008, after 16 years of inactivity, work on the building was restarted by the Egyptian construction firm Orascom Group. The firm, which had entered into a US$400 million deal with the North Korean government to build and run a cellular network, said that their telecommunications deal was not directly related to the Ryugyong Hotel work. In 2008, North Korean officials stated that the hotel would be completed by 2012, coinciding with the 100th anniversary of the birth of Kim. In 2009, Orascom's chief operating officer Bichara noted that they "had not had too many problems" resolving the reported structural issues of the building, and that a revolving restaurant would be located at the top of the building.
In July 2011, it was reported that the exterior work was complete. Features that Orascom had installed include exterior glass panels and telecommunications antennas. In September 2012, photographs taken by Koryo Tours were released, showing the interior for the first time. The photographs showed no wiring, cabling, or pipes in the structure, which was bare and unfurnished.
Opening announced, then cancelled
In November 2012, international hotel operator Kempinski announced it would be running the hotel, which was expected to partially open in mid‑2013. In March 2013, plans to open the hotel were suspended. Kempinski clarified its earlier statements, saying that only "initial discussions" had ever occurred, but that no agreement had been signed because "market entry is not currently possible".
Kempinski did not elaborate on its reasons, but commentators suggested that international tensions related to the 2013 North Korean nuclear test, economic risks, and delays in construction probably played a part.
Renewal
Activity resumed in late 2016 and a representative of Orascom visited North Korea. In 2017 and early 2018, there were signs of work at the site, with access roads being constructed.
In April 2018, a large LED display featuring the North Korean flag was added to the top of the building. By May, an LED display had been added to one entire side of the structure, and there were reports that the building was being readied for occupation. By July, the LED display was showing animations and movie scenes. In June 2019, there was new signage bearing the hotel's name (in Korean and English) and its logo over the main entrance.
In 2024, the North Korean government reportedly started to look for a casino operator willing to complete the building in exchange for profits made by the casino.
Gallery
See also
Korean architecture
List of buildings with 100 floors or more
List of hotels in North Korea
List of tallest buildings in North Korea
List of tallest hotels
References
External links
Ryugyong Hotel Tower in Pyongyang
Ryugyong Hotel – Google Maps
Hotels in Pyongyang
Buildings and structures under construction
Buildings and structures with revolving restaurants
Pyramids in Asia
Skyscraper hotels
Skyscrapers in North Korea
Unfinished buildings and structures
1992 establishments in North Korea
20th-century architecture in North Korea | Ryugyong Hotel | [
"Engineering"
] | 1,631 | [
"Construction",
"Buildings and structures under construction"
] |
344,413 | https://en.wikipedia.org/wiki/World%20Solar%20Challenge | The World Solar Challenge (WSC), since 2013 named Bridgestone World Solar Challenge, is an international event for solar powered cars driving 3000 kilometres through the Australian outback.
With the exception of a four-year gap between the 2019 and 2023 events, owing to the cancellation of the 2021 event, the World Solar Challenge is typically held every two years. The course is over through the Australian Outback, from Darwin, Northern Territory, to Adelaide, South Australia. The event was created to foster the development of solar-powered vehicles.
The WSC attracts teams from around the world, most of which are fielded by universities or corporations, although some are fielded by high schools. It has a 32-year history spanning fifteen events, with the inaugural event taking place in 1987. Initially held once every three years, the event became biennial from the turn of the century.
Since 2001, the WSC has been won seven times in ten attempts by the Nuna team and cars of the Delft University of Technology from the Netherlands. The Tokai Challenger, built by Tokai University of Japan, won in 2009 and 2011. In the most recent editions (2019 and 2023), the Belgian Innoptus Solar Team (formerly known as the Agoria Solar Team) from KU Leuven won.
Starting in 2007, the WSC has had multiple classes. After the German team from Bochum University of Applied Sciences competed with a four-wheeled, multi-seat car, the BoCruiser, in 2009, a radically new "Cruiser Class" was introduced in 2013, stimulating the technological development of practically usable, and ideally road-legal, multi-seater solar vehicles. Since its inception, Solar Team Eindhoven's four- and five-seat Stella solar cars from Eindhoven University of Technology (Netherlands) have won the Cruiser Class in all four events so far.
Remarkable technological progress has been achieved since the General Motors-led, highly experimental, single-seat Sunraycer prototype first won the WSC with an average speed of . Once competing cars became steadily more capable of matching or exceeding legal maximum speeds on the Australian highway, the challenge rules were made progressively more demanding, for instance after Honda's Dream car first won with an average speed exceeding in 1996. In 2005 the Dutch Nuna team were the first to beat an average speed of .
The 2017 Cruiser class winner, the five-seat Stella Vie vehicle, was able to carry an average of 3.4 occupants at an average speed of . Like its two predecessors, the vehicle was successfully road registered by the Dutch team, further emphasizing the great progress in real-world compliance and practicality that has been achieved.
The WSC held its 30th anniversary event on 8–15 October 2017.
Objective
The objective of the challenge is to promote the innovation of solar-powered cars. It is a design competition at its core, and every team/car that successfully crosses the finish line is considered successful. Teams from universities and enterprises participate. In 2015, 43 teams from 23 countries competed in the challenge.
Challenge strategy
Efficient balancing of power resources and power consumption is the key to success during the challenge. At any moment in time, the optimal driving speed depends on the weather forecast and the remaining capacity of the batteries. Team members in the escort cars continuously retrieve data remotely from the solar car about its condition, and use these data as input for previously developed computer programs to work out the best driving strategy.
It is equally important to charge the batteries as much as possible during daylight periods when the car is not driving. To capture as much solar energy as possible, the solar panels are generally oriented so that they are perpendicular to the incident sun rays. Sometimes the whole solar array is tilted for this purpose.
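The strategy computation described above can be illustrated with a minimal energy-balance sketch in Python. The power model and every parameter value below (drag area, rolling resistance, mass, drivetrain efficiency, solar input) are illustrative assumptions for a generic solar car, not figures from any actual team or from the regulations:

```python
# Minimal sketch of a solar-car speed strategy based on energy balance.
# All parameters are assumed, round-number values for a generic car.

def battery_drain_w(speed_kmh, solar_in_w, cda=0.12, crr=0.008,
                    mass_kg=250, motor_eff=0.97, rho=1.2, g=9.81):
    """Net power (W) the battery must supply at a given speed; negative means charging."""
    v = speed_kmh / 3.6                    # km/h -> m/s
    p_aero = 0.5 * rho * cda * v ** 3      # aerodynamic drag power
    p_roll = crr * mass_kg * g * v         # rolling-resistance power
    return (p_aero + p_roll) / motor_eff - solar_in_w

def best_speed(battery_wh, hours_left, solar_in_w):
    """Fastest constant speed whose battery drain the remaining energy can cover."""
    for speed in range(130, 30, -1):       # try speeds from fast to slow
        if battery_drain_w(speed, solar_in_w) * hours_left <= battery_wh:
            return speed
    return 30

# With a full ~5 kWh battery, 9 driving hours left and 900 W of sun,
# this toy model settles on about 85 km/h.
print(best_speed(battery_wh=5000, hours_left=9, solar_in_w=900))
```

Real strategy software refines this greedy picture with weather forecasts, route gradients and stochastic optimisation, but the underlying trade-off, speed against battery drain, is the same.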
Important rules
The timed portion of the challenge stops at the outskirts of Adelaide, 2998 km from Darwin. However, for the timings recorded at that point to count, competitors must reach the official finish line in the centre of the city under solar power alone.
As the challenge utilises public roads, the cars have to adhere to the normal traffic regulations.
A minimum of 2 and a maximum of 4 drivers have to be registered. If the weight of a driver (including clothes) is less than , ballast will be added to make up the difference.
Driving time is between 8:00 and 17:00 (8 a.m. to 5 p.m.). In order to select a suitable place for the overnight stop (alongside the highway), it is possible to extend the driving period by a maximum of 10 minutes; this extra driving time is compensated by a correspondingly delayed starting time the next day.
At various points along the route there are checkpoints where every car has to pause for 30 minutes. Only limited maintenance tasks (no repairs) are allowed during these compulsory stops.
The capacity of the batteries is limited to a mass for each chemistry (such as Lithium Ion) equivalent to approximately 5 kWh maximum. At the start of the route, the batteries may be fully charged. Batteries may not be replaced during the competition, except in the situation of a breakdown. However, in that case, a penalty time will apply.
Except for the maximum outer dimensions, there are no further restrictions on the design and construction of the car.
The deceleration of the dual braking system must be at least 3.8 m/s² (149.6 in/s²).
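For a sense of scale, the minimum-deceleration rule implies a stopping distance through the standard kinematic relation d = v² / (2a); the 100 km/h figure below is only an example speed, not part of the regulations:

```python
# Stopping distance implied by the minimum required deceleration.
a = 3.8                    # minimum deceleration from the rules, m/s^2
v = 100 / 3.6              # an example speed of 100 km/h, in m/s
print(v ** 2 / (2 * a))    # ~101.5 m to come to a stop
```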
Rule evolution
By 2005, several teams were handicapped by the South Australian speed limit of , as well as the difficulties of support crews keeping up with solar vehicles. It was generally agreed that the challenge of building a solar vehicle capable of crossing Australia at vehicular speeds had been met and exceeded. A new challenge was set: to build a new generation of solar car, which, with little modification, could be the basis for a practical proposition for sustainable transport.
Entrants to the 2007 event chose between racing in the Adventure and Challenge classes. Challenge class cars were restricted to 6 square meters of Si solar collectors (a 25% reduction), and later to 3 square meters for GaAs, driver access and egress were required to be unaided, seating position upright, steering controlled with a steering wheel, and many new safety requirements were added. Competitors also had to adhere to the new speed limit across the Northern Territory portion of the Stuart Highway. The 2007 event again featured a range of supplementary classes, including the Greenfleet class, which features a range of non-solar energy-efficient vehicles exhibiting their fuel efficiency.
For the 2009 challenge class, several new rules were adopted, including the use of profiled tyres. Battery weight limits depend on secondary cell chemistries so that competitors have similar energy storage capabilities. Battery mass is now 20 kg for Li-ion and Li-polymer batteries (reduced from 25 and 21 kg, respectively, in the past).
In 2013, a new Cruiser Class was introduced. The route took place in four stages. Final placings were based on a combination of time taken (56.6%), number of passengers carried (5.7%), battery energy drawn from the grid between stages (18.9%), and a subjective assessment of practicality (18.9%).
In the 2015 Cruiser Class regulations, the scoring formula emphasized practicality less than before: elapsed time accounted for 70% of the score, passengers for 5%, grid energy use for 15%, and practicality for 10%.
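As a sketch, the 2015 weighting described above amounts to a weighted sum. How each component is normalized to a 0–1 sub-score is an assumption here; the actual regulations define the exact formulas:

```python
# Illustrative combination of the 2015 Cruiser Class weights quoted above.
WEIGHTS_2015 = {"time": 0.70, "passengers": 0.05,
                "grid_energy": 0.15, "practicality": 0.10}

def cruiser_score(subscores):
    """Weighted sum of normalized (0-1, higher is better) component scores."""
    return sum(w * subscores[k] for k, w in WEIGHTS_2015.items())

# A hypothetical entrant: fast, half the possible passenger-km,
# low grid-energy use, an average practicality judgment.
print(cruiser_score({"time": 0.9, "passengers": 0.5,
                     "grid_energy": 0.8, "practicality": 0.6}))  # 0.835
```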
In 2017, solar array areas were reduced, and the Cruiser Class was changed to a Regularity Trial, with scoring based on energy efficiency and practicality.
History
The idea for the competition originated with Danish-born adventurer Hans Tholstrup. He was the first to circumnavigate the Australian continent in an open boat. At a later stage in his life, he became involved in various competitions with fuel-saving cars and trucks. Already in the 1980s, he became aware of the necessity of exploring sustainable energy as a replacement for limited fossil fuel. Sponsored by BP, he designed the world's first solar car, called The Quiet Achiever, and traversed the between Sydney, New South Wales and Perth, Western Australia in 20 days. That was the precursor of the WSC.
After the 4th event, he sold the rights to the state of South Australia and leadership of the event was assumed by Chris Selwood.
The event was held every three years until 1999 when it was switched to every two years.
1987
The first edition of the World Solar Challenge was run in 1987, when the winning entry, GM's Sunraycer, won with an average speed of . Ford Australia's "Sunchaser" came in second. The "Solar Resource", which came in 7th overall, was first in the Private Entry category.
1990
The 1990 WSC was won by the "Spirit of Biel", built by the Biel School of Engineering and Architecture in Switzerland, followed by Honda in second place.
1993
The 1993 WSC was won by the Honda Dream, and the Biel School of Engineering and Architecture took second.
1996
In the 1996 WSC, the Honda Dream and Biel School of Engineering and Architecture once again placed first and second overall, respectively.
1999
The 1999 WSC was finally won by a "home" team, the Australian Aurora team's Aurora 101 took the prize while Queen's University was the runner-up in the most closely contested WSC so far. The SunRayce class of American teams was won by Massachusetts Institute of Technology.
2001
The 2001 WSC was won by Nuna of the Delft University of Technology from the Netherlands, participating for the first time. Aurora took second place.
2003
In the 2003 WSC Nuna 2, the successor to the winner of 2001 won again, with an average speed of , while Aurora took second place again.
2005
In the 2005 WSC the top finishers were the same for the third consecutive event as Nuon's Nuna 3 won with a record average speed of , and Aurora was the runner-up.
2007
The 2007 WSC saw the Dutch Nuon Solar team score their fourth successive victory with Nuna 4 in the Challenge Class, averaging under the new, more restrictive rules, while the Belgian Punch Powertrain Solar Team's Umicar Infinity placed second.
The Adventure Class was added this year, run under the old rules; it was won by the Japanese Ashiya team's Tiga with an average speed of .
2009
The 2009 WSC was won by the "Tokai Challenger", built by the Tokai University Solar Car Team in Japan with an average speed of . The longtime reigning champion Nuon Solar Team's Nuna 5 finished in second place.
The Sunswift IV built by students at the University of New South Wales, Australia was the winner of the Silicon-based Solar Cell Class, while Japan's Osaka Sangyo University's OSU Model S won the Adventure class.
2011
In the 2011 WSC Tokai University took their second title with an updated "Tokai Challenger" averaging , and finishing just an hour before Nuna 6 of the Delft University of Technology. The challenge was marred by delays caused by wildfires.
2013
The 2013 WSC featured the introduction of the Cruiser Class, which comprised more 'practical' solar cars with 2–4 occupants. The inaugural winner was Solar Team Eindhoven's Stella from Eindhoven University of Technology in the Netherlands with an average speed of , while second place was taken by the PowerCore SunCruiser vehicle from team Hochschule Bochum in Germany, who inspired the creation of the Cruiser Class by racing more practical solar cars in previous WSC events. The Australian team, the University of New South Wales solar racing team Sunswift was the fastest competitor to complete the route, but was awarded third place overall after points were awarded for 'practicality' and for carrying passengers.
In the Challenger Class, the Dutch team from Delft University of Technology took back the title with Nuna 7 and an average speed of , while defending champions Tokai University finished second after an exciting, close competition in which the gap had been only 10–30 minutes; they drained their battery in the final stint due to bad weather and finished some 3 hours behind, the opposite of the situation in the previous challenge in 2011.
The Adventure Class was won by Aurora's Aurora Evolution.
2015
The 2015 WSC was held on 15–25 October with the same classes as the 2013 challenge.
In the Cruiser Class, the winner was once again Solar Team Eindhoven's Stella Lux from Eindhoven University of Technology in the Netherlands with an average speed of , while the second-place team was Kogakuin University from Japan, which was the first to cross the finish line but did not receive as many points for passenger-kilometers and practicality. Bochum took third place this year with the latest in their series of cruiser cars.
In the Challenger Class, the team from Delft University of Technology retained the title with Nuna 8 and an average speed of , while their Dutch counterparts, the University of Twente, who led most of the challenge, finished just 8 minutes behind them in second place, making 2015 the closest finish in WSC history. Tokai University passed the University of Michigan on the last day of the event to take home the bronze.
The Adventure Class was won by the Houston High School solar car team from Houston, Mississippi, United States.
2017
The 2017 WSC was held on 8–15 October, featuring the same classes as 2015. The Dutch NUON team won again in the Challenger class, which concluded on 12 October, and in the Cruiser Class the winner was once again Solar Team Eindhoven, also from the Netherlands.
2019
The 2019 WSC was held from 13 to 20 October. 53 teams from 24 countries entered the competition, featuring the same three classes, Challenger (30 teams), Cruiser (23 teams) and Adventure. In the Challenger class, Agoria Solar Team (formerly Punch Powertrain) won for the first time. Tokai University Solar Car Team finished in second place.
In the Cruiser class, Solar Team Eindhoven won their fourth consecutive title. Despite multiple incidents on the road, Team Sonnenwagen Aachen managed to beat other teams and finished in 6th position.
Several teams had mishaps. Vattenfall was leading when their car Nuna X caught fire. The driver was uninjured, but the vehicle was destroyed. It was the first no-finish for that team in 20 years. Others were badly affected by strong winds.
Dutch team Twente was leading the journey at , when their car was forced off the road by winds and rolled over. The driver was taken to hospital. Within 30 minutes, team Sonnenwagen Aachen was also blown off the road north of Coober Pedy; their driver was not hurt. A speed limit was then imposed by event officials and lifted when conditions improved. The day before, wind damage to solar panels had put the team from Western Sydney University out of the challenge. The driver of Agoria from Belgium escaped injury when their vehicle was "uprooted" at 100 km/h (62 mph) by severe winds, but the team still went on to win the Challenger class.
2021
In response to the COVID-19 pandemic in Australia, the WSC closed entries three months earlier than normal, on 18 December 2020. Organisers were then to "… review all current government measures relating to social distancing, density and contact tracing, international travel restrictions and isolation requirements." On 12 February 2021, the South Australian Government confirmed the cancellation of the 2021 staging of the event. While the COVID-19 pandemic was not explicitly cited as the reason, the "complexities of international border closures" affecting Australia at the time appear to be the primary reason for the event's cancellation. The same statement also noted that the next event would take place in October 2023, at least 962 days from the date of the announcement and resulting in a four-year gap between events. Registered teams were to receive a full refund of all fees.
2023
The 2023 World Solar Challenge was held from 22 to 29 October. At the beginning of the race, 31 teams were participating, with 23 in the Challenger division and 8 in the Cruiser division. The Challenger division was won by defending champions Innoptus (formerly Agoria) with an average speed of 88.2 km/h, and the Cruiser division was won by UNSW Sunswift with a score of 91.1. Uniquely, no Cruisers were able to finish the race this year.
Many of the leading teams faced trouble during the competition. Dutch team Top Dutch raced on a perovskite-tandem solar array damaged during testing in the month leading up to the race. Michigan experienced electrical issues during qualifying and had to start last. German team Sonnenwagen was blown off the road just outside Port Augusta and had to withdraw due to new regulations. Tokai had to stop for several hours on Day 4 to repair their car after sustaining damage from crossing a cattle grid. Kogakuin had consistent problems with their MPPT charge controller, and reported in an Instagram post that their panels were generating less than half the power they should have been. On the fifth day of the competition, only 4 teams (Innoptus, Twente, Brunel, and Michigan) had finished the course, and by the official end of timing, only 12 teams had made it to the finish line successfully.
See also
Solar car racing
List of prototype solar-powered cars
List of solar car teams
Shell Eco-marathon
The Quiet Achiever, the world's first solar-powered racecar
Other solar vehicle challenges
American Solar Challenge, a biennial United States event held since 1990 that has previously included Canada
Formula Sun Grand Prix, an annual U.S. event held on race tracks.
The Solar Car Challenge, an annual event for High School students from the U.S. and (to a lesser extent) other parts of the world, first held in 1995
South African Solar Challenge, a biennial South African event that was first held in 2008
Victorian Model Solar Vehicle Challenge, an annual event in Australia for schoolchildren
European Solar Challenge, a biennial 24-hour race in Belgium
Atacama Solar Race, a biennial event held in Chile
Movie
Race the Sun, a movie loosely based on a participating team
References
External links
Images from Alice Springs, Australia – 2007
An overview of all the competing teams in the 2013 WSC.
Solar car races
Engineering competitions
Auto races in Australia
Scientific organisations based in Australia
Science competitions
Photovoltaics
Recurring sporting events established in 1987
Motorsport in the Northern Territory
Motorsport in South Australia
Australian outback | World Solar Challenge | [
"Technology"
] | 3,826 | [
"Science and technology awards",
"Engineering competitions",
"Science competitions"
] |
344,500 | https://en.wikipedia.org/wiki/Zl%C3%ADn | Zlín (in 1949–1989 Gottwaldov; ; ) is a city in the Czech Republic. It has about 74,000 inhabitants. It is the seat of the Zlín Region and it lies on the Dřevnice River. It is known as an industrial centre. The development of the modern city is closely connected to the Bata Shoes company and its social scheme, developed after World War I. A large part of Zlín is urbanistically and architecturally valuable and is protected by law as an urban monument zone.
Administrative division
Zlín consists of 16 municipal parts (in brackets population according to the 2021 census):
Zlín (48,317)
Prštné (3,345)
Louky (1,027)
Mladcová (2,525)
Příluky (2,931)
Jaroslavice (822)
Kudlov (2,195)
Malenovice (7,156)
Chlum (144)
Klečůvka (332)
Kostelec (1,909)
Lhotka (235)
Lužkovice (634)
Salaš (195)
Štípa (1,798)
Velíková (613)
Prštné, Louky, Mladcová, Příluky, Jaroslavice, Kudlov and Malenovice are urbanistically fused with Zlín. They are sometimes called Zlín II–VIII, which was part of their name at the time when they were administratively merged with Zlín.
Etymology
There are several legends about the origin of the name of the city, according to which it was derived from slín (i.e. "marl") or zlaté jablko (i.e. "golden apple"). However, the name Zlín was most likely derived from the old personal Slavic name Zla, Zlen or Zleš.
From 1949 to 1989, the city was renamed Gottwaldov after the first communist president of Czechoslovakia Klement Gottwald. On 1 January 1990, the city's name was changed back to Zlín.
Geography
Zlín is located about east of Brno. It forms an urban area together with the town of Otrokovice. The territory of the city lies in the Vizovice Highlands. The highest point is the hill Tlustá hora at above sea level. The Dřevnice River flows through the city. The Fryšták Reservoir is situated in the northern part of the municipal territory.
History
14th–16th centuries
The first written mention of Zlín is from 1322, when it was acquired by Queen Elizabeth Richeza. In that time, Zlín was already a market town and served as a craft guild centre for the surrounding area of Moravian Wallachia. From 1358, the Zlín estate was owned by Bishop Albrecht of Šternberk and soon became the seat of the Moravian branch of the Šternberk family. In 1397, the town privileges of Zlín were extended and Zlín became a town. This significantly helped the economic development of Zlín.
The Hussite Wars badly affected properties of the Sternbergs and they were forced to sell Zlín in 1437. In the second half of the 15th century, Zlín was threatened by the Bohemian–Hungarian War. The 16th century brought peace and prosperity to the town. Trade and crafts flourished, mainly drapery, pottery and shoemaking. New villages were founded in the vicinity of Zlín, which became a large town and economic centre.
17th–19th centuries
In 1605, Zlín was raided and burned by Hungarian rebels. The Thirty Years' War left the town severely damaged and half deserted. The residents of Zlín, along with people from the whole Wallachian region, led an uprising against the Habsburg monarchy. The rebellion was however bloodily suppressed in 1644. After the war, Zlín became property of the Hungarian noble family of Serényi, but they did not care much for the town, and therefore Zlín recovered only slowly.
Economic activity was restored in the 18th century. Larger industrial enterprises appeared in the mid-19th century. A small match factory was established in 1850 and a shoe factory in 1870, but both were soon closed, and the town continued to live mainly from the work of craftsmen. In 1899, the railway was built.
20th century
Zlín began to grow rapidly after Tomáš Baťa and his siblings founded a shoe factory there in 1894, known as Bata Company. Production gradually increased, as did the number of employees and the population of the town. Baťa's factory supplied the Austro-Hungarian army in World War I. Due to the remarkable economic growth of the company and the increasing prosperity of its workers, Baťa himself was elected mayor of Zlín in 1923.
Baťa became the leading manufacturer and marketer of footwear in Czechoslovakia in 1922. Besides producing footwear, the company diversified into engineering, chemistry, rubber technology and many more areas. The factory hired thousands of workers who moved to Zlín. A new large complex of modern buildings and facilities was gradually built by the Baťa's company on the outskirts of the town in 1923–1938. It included thousands of flats, schools, department stores, scientific facilities, and a hospital. The development took place in a controlled manner and was based on modern urban concepts with the contribution of important architects of the time. Zlín became a hypermodern industrial city with functionalist character unique in Europe.
After death of Tomáš Baťa in 1932, the company was managed by Jan Antonín Baťa, Hugo Vavrečka and Dominik Čipera, who also became the mayor. The Baťa company and also the city of Zlín continued growing. In 1929–1935, a strong economic agglomeration Zlín – Otrokovice – Napajedla developed. In 1935, the city became the seat of the administrative district.
During World War II, life in the city was controlled by German occupiers, and development of both the city and the company stopped. Zlín was most severely affected by the war in 1944, when it was bombed by the U.S. army and large parts of the factories were destroyed. Zlín was liberated by the Soviet and Romanian armies on 2 May 1945.
The communists took over management of Zlín and Baťa factories, and in October 1945 the Bata company in Czechoslovakia was nationalised. In the following decades, Zlín preserved its significant position thanks to its extensive industrial production. The city strengthened its position as administrative, economic, educational and cultural centre of eastern Moravia. Zlín further expanded with construction of new housing estates.
Demographics
Economy
The largest industrial employer with headquarters in Zlín is TAJMAC-ZPS, a manufacturer of machine tools with more than 500 employees. Bata Corporation (in the Czech Republic officially known as Baťa a.s.) is now primarily a trading company and shoe production takes place outside the city.
Zlín is home to many large companies and organizations of the service sector. The largest employer in the city is the Regional Hospital of T. Baťa with more than 3,000 employees. Other notable employers are HP Tronic (main activity is trade in consumer electronics under the Datart and Eta brands), Tomas Bata University in Zlín (education) and Tescoma (trade and manufacture of kitchen utensils).
The Zlín agglomeration was defined as a tool for drawing money from the European Structural and Investment Funds. It is an area that includes the city and its surroundings, linked to the city by commuting and migration. It has about 130,000 inhabitants.
Transport
In the 1920s, local passenger transportation started operating. In 1939, the town council decided to build three trolleybus routes, numbered as lines A, B and C. The new trolleybus lines were finished in 1944, construction having proceeded during the Nazi occupation. Over time, Zlín's public transport, now operated by DSZO (Zlín & Otrokovice Transportation Company), has been one of the fastest-growing public transportation networks in the Czech Republic.
The city is currently served by 14 bus routes and 14 trolleybus routes, and also railway services on line 331, which runs from Otrokovice (located on the international corridor) to Vizovice. There are nine stations on this line within the city of Zlín, the largest of which is Zlín střed.
Education
In 1969, the Faculty of Technology was founded here as a branch of the Brno University of Technology. In 2001, it was one of two faculties which formed the newly established Tomas Bata University in Zlín. With more than 9,000 students, it ranks as a medium-sized Czech university. It is formed by six faculties: Technology, Management and Economics, Multimedia Communications, Applied Informatics, Humanities, and Logistics and Crisis Management.
Culture
Zlín is located in the cultural region of Moravian Wallachia near the tripoint of the cultural regions of Moravian Wallachia, Moravian Slovakia and Hanakia.
Given Zlín's history as one of the biggest centres of filmmaking in the Czech Republic, probably the biggest cultural event is the Zlín Film Festival, subtitled the "International Film Festival for Children and Youth".
A winter version of the international music festival Masters of Rock takes place in Zlín.
Zlín is home to the Bohuslav Martinů Philharmonic Orchestra; its chief conductor is Tomáš Brauner, while its principal guest conductor is Leoš Svárovský.
Sport
Zlín's ice hockey team PSG Berani Zlín plays in the 1st Czech League (2nd tier) and has won national titles in 2004 and in 2014. The association football team FC Zlín plays in the Czech National Football League (2nd tier), but played in the top tier in 2015–2024. The city also has teams in other sports including volleyball, basketball, Czech handball, softball and rugby.
Architecture
The city's architectural development was a characteristic synthesis of two modernist urban utopian visions: the first inspired by Ebenezer Howard's Garden city movement and the second tracing its lineage to Le Corbusier's vision of urban modernity. From the very beginning Baťa pursued the goal of constructing the Garden City proposed by Ebenezer Howard. However, the shape of the city had to be 'modernized' so as to suit the needs of the company and of the expanding community. The urban plan of Zlín was the creation of František Lydie Gahura, a student at Le Corbusier's atelier in Paris.
Sights
The Villa of Tomáš Baťa was an early architectural achievement. The construction was completed in 1911. The building's design was carried out by the architect Jan Kotěra. After its confiscation in 1946, the building served as a Pionýr house. Having been returned to Tomáš J. Baťa, the son of the company's founder, the building now houses the headquarters of the Thomas Bata Foundation.
Baťa's Hospital was founded in 1927 and quickly developed into one of the most modern hospitals in Central Europe. The original architectural set up was designed by F. L. Gahura.
The Grand Cinema was designed by the architect F. L. Gahura and built in 1932. This technological marvel became the largest cinema in Central Europe in its time with a capacity of 2,270 seated viewers. Today it has 1,010 seats.
Tomas Bata Memorial was built in 1933 by F. L. Gahura. The original purpose of the building was to commemorate the achievements of Baťa. The building itself is a Constructivist masterpiece. It has served as the seat of the Bohuslav Martinů Philharmonic Orchestra since 1955.
Baťa's Skyscraper was built as the headquarters for the worldwide Baťa organization. Designed by Vladimír Karfík, the huge building was erected in 1936–1939. It included a room-sized elevator housing the office for the boss, comfortably furnished – with a sink, a telephone, and air conditioning. When it was built it was the tallest Czechoslovak building at . After a costly reconstruction in 2004, it became the seat of the Regional Office of the Zlín Region and the headquarters of the tax office.
In the village of Štípa, there is Lešná Castle. It was built in the Neogothic, Neorenaissance and Neobaroque styles in 1887–1893. It is one of the youngest aristocratic residences in Moravia. The castle was built for the Seilern-Aspang family on the site of an older castle from the 18th century. Today the castle is open to the public and there are collections of unique and historically valuable objects. The castle is located inside the Zlín-Lešná Zoo complex. It is the second most-visited zoo in the country, and as of 2022, it was overall the third most visited tourist destination in the country.
Malenovice Castle is located in Malenovice. It was founded in the second half of the 14th century. The Gothic castle was modified in the Renaissance style in the following centuries. Today part of the castle is open to the public and contains several expositions.
Notable people
Tomáš Baťa (1876–1932), industrialist, founder of Bata Corporation
Miloslav Petrusek (1936–2012), sociologist
John Tusa (born 1936), British arts administrator, and radio and television journalist
Tom Stoppard (born 1937), British playwright and screenwriter
Josef Abrhám (1939–2022), actor
Eva Jiřičná (born 1939), architect
Ivana Trump (1949–2022), Czech-American businesswoman and model
Vladimír Hučín (born 1952), dissident and political celebrity
Stanislava Nopová (born 1953), author, poet and publisher
Bohumil Brhel (born 1965), speedway rider
Roman Čechmánek (1971–2023), ice hockey player
Tomáš Dvořák (born 1972), decathlete, Olympic medalist
Daniel Málek (born 1973), breaststroke swimmer
Roman Hamrlík (born 1974), ice hockey player
Petr Čajánek (born 1975), ice hockey player
Mojmír Hampl (born 1975), economist
Petr Janda (born 1975), architect
Jiří Novák (born 1975), tennis player
Silvia Saint (born 1976), pornographic film actress
Jan Zakopal (born 1977), footballer
Karel Rachůnek (1979–2011), ice hockey player
Twin towns – sister cities
Zlín is twinned with:
Altenburg, Germany
Chorzów, Poland
Groningen, Netherlands
Izegem, Belgium
Limbach-Oberfrohna, Germany
Möhlin, Switzerland
Romans-sur-Isère, France
Sesto San Giovanni, Italy
Trenčín, Slovakia
Zlín also cooperates with Turin, Italy.
Gallery
References
Bibliography
External links
Municipal Information and Tourist Centre of Zlín
History of Zlín, old photos and postcards
Cities and towns in the Czech Republic
Populated places in Zlín District
Planned communities
Bata Corporation
Architecture related to utopias | Zlín | [
"Engineering"
] | 3,083 | [
"Architecture related to utopias",
"Architecture"
] |
11,946,500 | https://en.wikipedia.org/wiki/HD%20189733%20b | HD 189733 b is an exoplanet in the constellation of Vulpecula approximately away from the Solar System. Astronomers in France discovered the planet orbiting the star HD 189733 on October 5, 2005, by observing its transit across the star's face. With a mass 11.2% higher than that of Jupiter and a radius 11.4% greater, HD 189733 b orbits its host star once every 2.2 days at an orbital speed of , making it a hot Jupiter with poor prospects for extraterrestrial life.
The closest transiting hot Jupiter to Earth, HD 189733 b has been the subject of close atmospheric observation. Scientists have studied it with high- and low-resolution instruments, both from the ground and from space. Researchers have found that the planet's weather includes raining molten glass. HD 189733 b was also the first exoplanet to have its thermal map constructed, possibly to be detected through polarimetry, its overall color determined (deep blue), its transit viewed in the X-ray spectrum, and to have carbon dioxide confirmed as being present in its atmosphere.
In July 2014, NASA announced the discovery of very dry atmospheres on three exoplanets that orbited Sun-like stars: HD 189733 b, HD 209458 b, and WASP-12b.
Detection and discovery
Transit and Doppler spectroscopy
On October 6, 2005, a team of astronomers announced the discovery of the transiting planet HD 189733 b. The planet was then detected using Doppler spectroscopy. Real-time radial velocity measurements detected the Rossiter–McLaughlin effect caused by the planet passing in front of its star before photometric measurements confirmed that the planet was transiting. In 2006, a team led by Drake Deming announced the detection of strong infrared thermal emission from the transiting exoplanet HD 189733 b, by measuring the flux decrement (decrease of total light) during its prominent secondary eclipse (when the planet passes behind the star).
The mass of the planet is estimated to be 16% larger than Jupiter's, with the planet completing an orbit around its host star every 2.2 days and an orbital speed of .
Infrared spectrum
On February 21, 2007, NASA announced that the Spitzer Space Telescope had measured detailed spectra from both HD 189733 b and HD 209458 b. The announcement came simultaneously with the public release of a new issue of Nature containing the first publication on the spectroscopic observation of the other exoplanet, HD 209458 b. A paper on HD 189733 b was submitted and published by the Astrophysical Journal Letters. The spectroscopic observations of HD 189733 b were led by Carl Grillmair of NASA's Spitzer Science Center.
Visible color
In 2008, a team of astrophysicists appeared to have detected and monitored the planet's visible light using polarimetry, which would have been the first such success. This result seemed to be confirmed and refined by the same team in 2011. They found that the planet's albedo is significantly larger in blue light than in the red, most probably due to Rayleigh scattering and molecular absorption in the red. The blue color of the planet was subsequently confirmed in 2013, which would have made HD 189733 b the first planet to have its overall color determined by two different techniques. The measurements in polarized light have since been disputed by two separate teams using more sensitive polarimeters, with upper limits of the polarimetric signal provided therein.
The rich cobalt blue colour of HD 189733 b may be the result of Rayleigh scattering. In mid-January 2008, spectral observations during the planet's transit, interpreted with that model, found that if molecular hydrogen is present, its atmospheric pressure is 410 ± 30 mbar at a radius of 0.1564 solar radii. The Mie approximation model also found a possible condensate in its atmosphere, magnesium silicate (MgSiO3), with a particle size of approximately 10⁻² to 10⁻¹ μm. Using both models, the planet's temperature would be between 1340 and 1540 K. The Rayleigh effect is confirmed in other models, and by the apparent lack of a cooler, shaded stratosphere below its outer atmosphere. In the visible region of the spectrum, atomic sodium and potassium can be investigated thanks to their high absorption cross sections. For example, using the high-resolution UVES spectrograph on the Very Large Telescope, sodium has been detected in this atmosphere and further physical characteristics, such as its temperature, have been investigated.
X-ray spectrum
In July 2013, NASA reported the first observations of a planetary transit studied in the X-ray spectrum. It was found that the planet's atmosphere blocks three times more X-rays than visible light.
Evaporation
In March 2010, transit observations using HI Lyman-alpha found that this planet is evaporating at a rate of 1-100 gigagrams per second. This indication was found by detecting the extended exosphere of atomic hydrogen. HD 189733 b is the second planet after HD 209458 b for which atmospheric evaporation has been detected.
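For scale, the quoted loss rate is tiny compared with the planet's mass. A minimal sketch (the planet mass is an approximate literature value, an assumption rather than a figure from this article):

```python
M_JUP = 1.898e27                     # kg
SECONDS_PER_YEAR = 3.156e7

mass_kg = 1.13 * M_JUP               # approximate planet mass (assumption)
for rate_gg_per_s in (1, 100):       # quoted evaporation range, gigagrams/second
    rate_kg_per_s = rate_gg_per_s * 1e6
    lifetime_yr = mass_kg / rate_kg_per_s / SECONDS_PER_YEAR
    print(f"{rate_gg_per_s:>3} Gg/s -> ~{lifetime_yr:.0e} years to evaporate")
```

Even at the top of the quoted range, complete evaporation would take hundreds of billions of years, far longer than the star will live.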
Physical characteristics
This planet exhibits one of the largest photometric transit depths (amount of the parent star's light blocked) of the extrasolar planets so far observed, approximately 3%. The apparent longitude of the ascending node of its orbit is 16 ± 8 degrees away from north–south in our sky. It and HD 209458 b were the first two planets to be directly spectroscopically observed. The parent stars of these two planets are the brightest transiting-planet host stars, so these planets will continue to receive the most attention from astronomers. Like most hot Jupiters, this planet is thought to be tidally locked to its parent star, meaning it has a permanent day and night.
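The quoted depth follows from the planet-to-star radius ratio, since the depth is approximately (Rp/Rs)². A minimal sketch, using the planet radius given earlier in the article and an approximate literature value for the stellar radius (an assumption, not a figure from this article):

```python
R_SUN_KM, R_JUP_KM = 695_700, 71_492

def transit_depth(r_planet_km, r_star_km):
    """Fraction of starlight blocked during transit: depth ~ (Rp/Rs)**2."""
    return (r_planet_km / r_star_km) ** 2

rp = 1.114 * R_JUP_KM   # planet radius, 11.4% greater than Jupiter (from above)
rs = 0.756 * R_SUN_KM   # host-star radius: approximate literature value (assumption)
print(f"depth = {transit_depth(rp, rs):.1%}")   # ~2.3%, the few-percent depth quoted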
The planet is not oblate, and has neither satellites with greater than 0.8 the radius of Earth nor a ring system like that of Saturn.
The international team under the direction of Svetlana Berdyugina of Zurich University of Technology, using the Swedish 60-centimeter telescope KVA, which is located in Spain, was able to directly see the polarized light reflected from the planet. The polarization indicates that the scattering atmosphere is considerably larger (> 30%) than the opaque body of the planet seen during transits.
The atmosphere was at first predicted "pL class", lacking a temperature-inversion stratosphere, like L dwarfs, which lack titanium and vanadium oxides. Follow-up measurements, tested against a stratospheric model, yielded inconclusive results. Atmospheric condensates form a haze above the surface as viewed in the infrared. A sunset viewed from that surface would be red. Sodium and potassium signals were predicted by Tinetti 2007. First obscured by the haze of condensates, sodium was eventually observed at three times the concentration of HD 209458 b's sodium layer. Potassium was also detected in 2020, although in significantly smaller concentrations. HD 189733 b is also the first extrasolar planet confirmed to have carbon dioxide in its atmosphere. In 2024, hydrogen sulfide was detected in HD 189733 b's atmosphere.
Map of the planet
In 2007, the Spitzer Space Telescope was used to map the planet's temperature emissions. The planet and star system was observed for 33 consecutive hours, starting when only the night side of the planet was in view. Over the course of one-half of the planet's orbit, more and more of the dayside came into view. A temperature range of 973 ± 33 K to 1,212 ± 11 K was discovered, indicating that the absorbed energy from the parent star is distributed fairly evenly through the planet's atmosphere. The region of peak temperature was offset 30 degrees east of the substellar point, as predicted by theoretical models of hot Jupiters taking into account a parameterized day to night redistribution mechanism.
Scientists at the University of Warwick determined that HD 189733 b has winds of up to blowing from the day side to the night side. NASA released a brightness map of the surface temperature of HD 189733 b; it is the first map ever published of an extra-solar planet.
Water vapor, oxygen, and organic compounds
On July 11, 2007, a team led by Giovanna Tinetti published the results of their observations using the Spitzer Space Telescope, concluding there is solid evidence for significant amounts of water vapor in the planet's atmosphere. Follow-up observations made using the Hubble Space Telescope confirmed the presence of water vapor, neutral oxygen and also the organic compound methane. Later, Very Large Telescope observations also detected the presence of carbon monoxide on the day side of the planet. It is currently unknown how the methane originated, as the planet's high temperature should cause the water and methane to react, replacing the atmosphere with carbon monoxide. Nonetheless, a water vapour fraction of roughly 0.004% by volume in the atmosphere of HD 189733 b was confirmed with high-resolution emission spectra taken in 2021.
Evolution
While the planet transits, the system also clearly exhibits the Rossiter–McLaughlin effect, a shifting of photospheric spectral lines caused by the planet occulting a part of the rotating stellar surface. Due to the planet's high mass and close orbit, the parent star has a very large semi-amplitude (K), the "wobble" in the star's radial velocity, of 205 m/s.
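The quoted semi-amplitude can be checked against the standard circular-orbit formula K = (2πG/P)^(1/3) · Mp·sin i / (M★ + Mp)^(2/3). A minimal sketch, using approximate literature values for the stellar and planetary masses, period and inclination (assumptions, not figures from this article):

```python
import math

G     = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_JUP = 1.898e27     # kg

def rv_semi_amplitude(m_star, m_planet, period_s, incl_deg, ecc=0.0):
    """Radial-velocity semi-amplitude K (m/s) for a Keplerian orbit."""
    return ((2 * math.pi * G / period_s) ** (1 / 3)
            * m_planet * math.sin(math.radians(incl_deg))
            / (m_star + m_planet) ** (2 / 3)
            / math.sqrt(1 - ecc ** 2))

# Approximate literature values (assumptions, not figures from this article):
K = rv_semi_amplitude(0.80 * M_SUN, 1.13 * M_JUP, 2.219 * 86_400, 85.7)
print(f"K = {K:.0f} m/s")   # within a few m/s of the 205 m/s quoted above
```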
The Rossiter–McLaughlin effect allows the measurement of the angle between the planet's orbital plane and the equatorial plane of the star. These are well aligned, with a misalignment of only −0.5°. By analogy with HD 149026 b, the formation of the planet was peaceful and probably involved interactions with the protoplanetary disc. A much larger angle would have suggested a violent interplay with other protoplanets.
Star-planet interaction controversy
In 2008, a team of astronomers first described how, as the exoplanet orbiting HD 189733 A reaches a certain place in its orbit, it causes increased stellar flaring. In 2010, a different team found that every time they observed the exoplanet at a certain position in its orbit, they also detected X-ray flares. Theoretical research since 2000 suggested that an exoplanet very near to the star that it orbits may cause increased flaring due to the interaction of their magnetic fields, or because of tidal forces. In 2019, astronomers analyzed data from Arecibo Observatory, MOST, and the Automated Photoelectric Telescope, in addition to historical observations of the star at radio, optical, ultraviolet, and X-ray wavelengths to examine these claims. They found that the previous claims were exaggerated and the host star failed to display many of the brightness and spectral characteristics associated with stellar flaring and solar active regions, including sunspots. Their statistical analysis also found that many stellar flares are seen regardless of the position of the exoplanet, therefore debunking the earlier claims. The magnetic fields of the host star and exoplanet do not interact, and this system is no longer believed to have a "star-planet interaction." Some researchers had also suggested that HD 189733 accretes, or pulls, gas from its orbiting exoplanet at a rate similar to those found around young protostars in T Tauri Star systems. Later analysis demonstrated that very little, if any, gas was accreted from the "hot Jupiter" companion.
Possible exomoons
Some studies have proposed candidate exomoons around HD 189733 b. A 2014 study proposed a moon based on studying periodic increases and decreases in light given off from HD 189733 b. This moon would be outside of the planet's Hill sphere, making its existence implausible. Two studies by the same team in 2019 and 2020 proposed exo-Io candidates around a number of hot Jupiters, including HD 189733 b and WASP-49b, based on detected sodium and potassium, consistent with evaporating exomoons and/or their corresponding gas torus. A follow-up study in 2022 did not find evidence for an exomoon around HD 189733 b.
See also
Dimidium (51 Pegasi b)
HD 2039 b
HD 149026 b
Kepler-186f
Osiris (HD 209458 b)
WASP-3b
WASP-12b
WASP-189 b
References
External links
Vulpecula
Hot Jupiters
Transiting exoplanets
Exoplanets discovered in 2005
Exoplanets detected by radial velocity
Articles containing video clips | HD 189733 b | [
"Astronomy"
] | 2,561 | [
"Vulpecula",
"Constellations"
] |
11,946,749 | https://en.wikipedia.org/wiki/Respirocyte | Respirocytes are hypothetical, microscopic, artificial red blood cells that are intended to emulate the function of their organic counterparts, so as to supplement or replace the function of much of the human body's normal respiratory system. Respirocytes were proposed by Robert A. Freitas Jr in his 1998 paper "A Mechanical Artificial Red Blood Cell: Exploratory Design in Medical Nanotechnology".
Respirocytes are an example of molecular nanotechnology, a field of technology still in the very earliest, purely hypothetical phase of development. Current technology is not sufficient to build a respirocyte due to considerations of power, atomic-scale manipulation, immune reaction or toxicity, computation and communication.
Structure of a respirocyte
Freitas proposed a spherical robot made up of 18 billion atoms arranged as a tiny pressure tank, which would be filled up with oxygen and carbon dioxide.
Uses
In Freitas' proposal, each respirocyte could store and transport 236 times more oxygen than a natural red blood cell, and could release it in a more controlled manner.
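The 236× figure comes from Freitas's detailed engineering analysis. A much cruder order-of-magnitude check shows why a pressurised micron-scale tank beats hemoglobin. In the sketch below, the 1 μm diameter and 1000 atm operating pressure are taken from Freitas's proposal rather than from this article, the ideal-gas law overestimates storage at such pressures, and the per-cell hemoglobin count is a standard textbook figure, so treat the output as an upper-bound sketch only:

```python
import math

K_B = 1.381e-23                       # Boltzmann constant, J/K

def molecules_in_tank(radius_m, pressure_pa, temp_k):
    """Ideal-gas molecule count in a spherical tank: N = P*V / (k_B*T)."""
    volume = 4 / 3 * math.pi * radius_m ** 3
    return pressure_pa * volume / (K_B * temp_k)

n_tank = molecules_in_tank(0.5e-6, 1000 * 1.013e5, 310)   # 1 um sphere, 1000 atm
n_rbc = 270e6 * 4          # ~270 million hemoglobin x 4 O2 sites per red cell
print(f"tank ~{n_tank:.0e} molecules vs red cell ~{n_rbc:.0e}")
```

Even this crude bound gives an order of magnitude more oxygen in a vessel far smaller than a red blood cell (roughly 90 fL), which is why the per-volume advantage in Freitas's full analysis comes out in the hundreds.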
Freitas has also proposed "microbivore" robots that would attack pathogens in the manner of white blood cells.
See also
Artificial cell
Biotechnology
Blood substitute
Oxycyte
Synthetic biology
References
External links
Respirocytes at foresight.org
Synthetic biology
Blood cells
Hypothetical technology
Blood substitutes | Respirocyte | [
"Engineering",
"Biology"
] | 285 | [
"Synthetic biology",
"Molecular genetics",
"Biological engineering",
"Bioinformatics"
] |
11,946,947 | https://en.wikipedia.org/wiki/Centre-to-centre%20distance | Centre-to-centre distance (c.t.c. distance or ctc distance) is a concept for distances, also called on-center spacing (o.c. spacing or oc spacing), heart distance, and pitch.
It is the distance between the centre (the heart) of a column and the centre (the heart) of another column. By expressing a distance in c.t.c., one can measure distances between columns with different diameters without confusion. This concept applies to other architectural features that may have variable diameters/widths and spacings, such as pillars or ceiling beams and baffles.
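Converting between c.t.c. distance and clear spacing is simple arithmetic; a minimal sketch with hypothetical column sizes:

```python
def clear_gap(ctc, d1, d2):
    """Clear spacing between two round columns given their c.t.c. distance."""
    return ctc - (d1 + d2) / 2

# Hypothetical example: 400 mm and 600 mm columns set out at 3000 mm c.t.c.
print(clear_gap(3000, 400, 600))   # 2500.0 mm of clear space between faces
```

The c.t.c. value stays meaningful even when the two diameters differ, which is exactly why drawings are dimensioned this way.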
Architectural terminology
Columns and entablature
Technical drawing | Centre-to-centre distance | [
"Technology",
"Engineering"
] | 146 | [
"Design engineering",
"Structural system",
"Civil engineering",
"Columns and entablature",
"Architectural terminology",
"Technical drawing",
"Architecture"
] |
11,950,076 | https://en.wikipedia.org/wiki/Earth%20Institute%20Center%20for%20Environmental%20Sustainability | The Earth Institute Center for Environmental Sustainability (EICES, pronounced ), formerly known as the Center for Environmental Research and Conservation (CERC), consists of two institutions located at Columbia University. The first is an Earth Institute which started as the first Earth Institute in 1995. The second is the Secretariat of the Consortium for Environmental Research and Conservation, established in cooperation with The Earth Institute, the American Museum of Natural History, the New York Botanical Garden, the Wildlife Conservation Society and EcoHealth Alliance on biodiversity conservation.
EICES's primary goal is protecting biodiversity and ecosystems. The Earth Institute Center for Environmental Sustainability is "dedicated to the development of a rich, robust and vibrant world within which we can secure a sustainable future."
EICES is headquartered at The Earth Institute, Columbia University. This location facilitates multidisciplinary work within the university and with external collaborators. EICES also provides training and education to the non-science community through the application of understandable, robust conservation science.
Research
EICES facilitates the development of research programs among its consortium members: the American Museum of Natural History (AMNH), Columbia University, the New York Botanical Garden (NYBG), the Wildlife Conservation Society (WCS) and EcoHealth Alliance. Some activities are consortium-wide, representing all the institutions. Others involve only two or three consortium partners. Collectively, the consortium's research is global, with programs in over 60 countries.
Throughout EICES's 18-year history, consortium researchers, volunteers, interns, students, faculty and staff have been involved in:
Finding new species of plants and animals in biodiversity hotspots
Mapping the movement of wildlife and zoonotic diseases that pass from animals to humans
Studying the evolution of primate behavior
Examining how forests respond to disturbance
Studying ecosystem processes and services like carbon storage by tropical trees and grasslands
Understanding how to develop participatory conservation programs
Working on the restoration of damaged habitats
Exploring models for sustainable development through a balance of good economics, governance, and conservation
In addition to research activities and projects, EICES' adjunct faculty and research scientists teach science courses in the Department of Ecology, Evolution, and Environmental Biology (E3B), as well as in the EICES's Summer Ecosystem Experience for Undergraduates Program (SEE-U) and Certificate Program in Conservation and Environmental Sustainability. Instructors are faculty and staff at consortium institutions. The consortium often provides research opportunities for Columbia's undergraduate, master's and Ph.D. students, especially those in E3B.
Education and training
The Earth Institute Center for Environmental Sustainability brings together five renowned scientific, academic, and cultural institutions: Columbia University, The American Museum of Natural History, the New York Botanical Garden, the Wildlife Conservation Society, and the EcoHealth Alliance (formerly known as the Wildlife Trust). Since its inception in 1994, EICES's ambitious educational agenda has evolved in response to the emergent issues of environmental and ecological sustainability. Programs encompass graduate, undergraduate, and K-12 levels and for private and public sector executives and citizens interested in environmental sustainability. The overarching goal of EICES’ education programs is to ensure that research informs what we teach in the classroom.
References
Sources
The Earth Institute
EICES Research
EICES Home Page
External links
Columbia University
Environmental research institutes
Sustainability organizations | Earth Institute Center for Environmental Sustainability | [
"Environmental_science"
] | 668 | [
"Environmental research institutes",
"Environmental research"
] |
11,950,089 | https://en.wikipedia.org/wiki/Poliya | Poliya Composite Resins and Polymers, Inc. (Poliya) was founded in 1983 and specializes in developing and manufacturing polymers and composite resins. Poliya's headquarters are located in Istanbul, Turkey with other Poliya locations and manufacturing facilities in Turkey and Russia.
As of 2022, Poliya is listed in the Top 500 largest companies of Turkey. Most widely known for its flagship product, the Polijel high-performance gelcoat series, the company also manufactures UPE polyester resins, vinyl ester resins, pigment color pastes, solid surface chips, adhesives, bonding pastes, mold release agents and waxes. Poliya serves 25 countries throughout the world and is the fastest-growing composite resin manufacturer in Europe.
The company is a member of the European Chemical Industry Council (Cefic), Turkish Chemical Manufacturers Association, and the Turkish Composites Manufacturers Association.
History
Ismet Cakar, a chemical engineer, began making early contributions to polyester resin modification and gelcoat UV stabilizers. Cakar worked on polymerization and resins at Ilkester, leaving to found a start-up. In 1983, Cakar launched the company that would become Poliya. Early on, Poliya recognized that composite materials would need special functions under different usage conditions (UV resistant, chemical resistant, etc.), and these composite resins would be required in various low weight and corrosion resistant applications which would require similar modification technology.
Research
Poliya contributes to scientific research and local industrial activities in Turkey, where the composites industry has a short history dating from the 1980s. Most recently, Poliya sponsored the first TURK-KOMPOZIT 2013 composites event; other supported research and events include the Polymeric Composites Symposium, Exhibition and Workshops. To support students and colleges, the company has backed projects such as Sakarya University's Advanced Applied Technologies vehicles Saugar X7 and Sahimo, Yildiz Technical University's AE2 project, and many others. Poliya has also taken part in the TUBİTAK-TEYDEP technology and innovation support programs, and in an industrial partnership program designed by TUBITAK-MAM and supported by the World Bank. In nanocomposites research, Poliya created a joint project in partnership with Technische Universität Hamburg-Harburg and IYTE and published various scientific articles on polyester resin and carbon nanotubes. Another joint project, between Poliya and Dokuz Eylül University's Institute of Marine Sciences and Technologies, studied the use of biocides and silver ions with Polijel gelcoats in the marine environment.
Organization
Poliya's core businesses focus on composite performance materials, composite adhesives, composite coatings, solid surface materials, pigment color pastes and release agent technologies, which have been supplemented through several notable expansions. It has also divested itself of less profitable segments.
Composite performance materials
Combining Polipol polyester resins and Polives vinyl ester resins as well as gelcoat products, composite performance materials provide products for the construction, transport, marine, defense, wind energy, sports equipment and chemical containment industries. The main manufacturing plant is located in Southeastern Europe, at Cerkezkoy, Turkey.
References
Chemical companies of Turkey
Composite materials
Manufacturing companies based in Istanbul
Companies established in 1983 | Poliya | [
"Physics"
] | 695 | [
"Materials",
"Composite materials",
"Matter"
] |
11,951,096 | https://en.wikipedia.org/wiki/List%20of%20active%20Solar%20System%20probes | This is a list of active space probes which have escaped Earth orbit. It includes lunar space probes, but does not include space probes orbiting at the Sun–Earth Lagrangian points (for these, see List of objects at Lagrangian points). A craft is deemed "active" if it is still able to transmit usable data to Earth (whether or not it can receive commands).
The craft are further grouped by mission status – "en-route", "mission in progress" or "mission complete" – based on their primary mission. For example, though Voyager 1 is still contactable en-route to the Oort Cloud and has exited the Solar System, it is listed as "mission complete" because its primary task of studying Jupiter and Saturn has been accomplished. Once a probe has reached its first primary target, it is no longer listed as "en route" whether or not further travel is involved.
Missions in progress
Moon
ARTEMIS P1/P2
Mission: studying the effect of the solar wind on the Moon. Originally launched as Earth satellites, they were later repurposed and moved to lunar orbit.
Launched: February 17, 2007
Destination: Moon (in lunar orbit)
Arrival: July 2011
Institution: NASA
Lunar Reconnaissance Orbiter
Mission: Orbiter engaged in lunar mapping intended to identify safe landing sites, locate potential resources on the Moon, characterize the radiation environment, and demonstrate new technology.
Launched: 18 June 2009
Destination: Moon (in lunar orbit)
Arrival: 23 June 2009
Institution: NASA
Queqiao
Mission: Halo orbiter serving as communications satellite for Chang'e 4 lunar far-side mission; conducting joint China-Netherlands low frequency astronomy experiment.
Launched: 21:28 UT on 20 May 2018
Destination: in halo orbit about Earth-Moon L2
Arrival: 14 June 2018
Institution: CNSA
Chang'e 4 lander and rover
Mission: Lander engaging in low-frequency radio spectrometry experiment, neutron and dosimetry experiment, and biological experiment. Rover seeking to characterize lunar far-side environment (including possible lunar mantle material) using visible/near-infrared spectrometer, ground penetrating radar, cameras, and neutral particle analyzer.
Launched: 18:23 UT on 8 December 2018
Destination: Lunar far side
Arrival: 02:26 UT on 3 January 2019
Institution: CNSA
Chandrayaan-2 Orbiter
Mission: studying lunar topography and mineralogy, elemental abundance, the lunar exosphere, and signatures of hydroxyl and water.
Launched: 22 July 2019
Destination: Moon (in lunar orbit)
Arrival: 20 August 2019
Institution: ISRO
CAPSTONE
Mission: Lunar orbiting CubeSat that will test and verify the calculated orbital stability planned for the Gateway space station.
Launched: 28 June 2022
Destination: Moon (in a Near-rectilinear halo orbit (NRHO))
Arrival: 14 November 2022
Institution: NASA
Danuri (Korea Pathfinder Lunar Orbiter)
Mission: Lunar Orbiter by the Korea Aerospace Research Institute (KARI) of South Korea. The orbiter, its science payload and ground control infrastructure are technology demonstrators. The orbiter will also be tasked with surveying lunar resources such as water ice, uranium, helium-3, silicon, and aluminium, and produce a topographic map to help select future lunar landing sites.
Launched: 4 August 2022
Destination: Moon (in lunar orbit)
Arrival: 16 December 2022
Institution: collaboration between KARI and NASA
EQUULEUS
Mission: Halo orbiter to image the Earth's plasmasphere, impact craters on the Moon's far side and L2 experiments.
Launched: 16 November 2022
Destination: in halo orbit about Earth-Moon L2
Arrival: November 2022
Institution: JAXA
Queqiao-2
Mission: lunar orbiter serving as a communications satellite for the Chang'e 6, Chang'e 7 and Chang'e 8 missions and the International Lunar Research Station on the lunar far side.
Launched: 20 March 2024
Destination: Moon (in lunar orbit)
Arrival: 2024 (planned)
Institution: CNSA
Tiandu-1
Mission: Testing technologies for a future lunar satellite constellation.
Launched: 20 March 2024
Destination: Moon (in lunar orbit)
Arrival: 24 March 2024
Institution: Deep Space Exploration Laboratory
Tiandu-2
Mission: Testing technologies for a future lunar satellite constellation.
Launched: 20 March 2024
Destination: Moon (in lunar orbit)
Arrival: 24 March 2024
Institution: Deep Space Exploration Laboratory
DRO A/B
Mission: Testing technologies to establish lunar navigation and communications infrastructure to support lunar exploration.
Launched: 3 March 2024
Destination: Moon (in DRO)
Arrival: ~20 August 2024
Institution: China Academy of Sciences
ICUBE-Q
Mission: First Pakistani lunar mission piggybacking with Chang'e 6.
Launched: 3 May 2024
Destination: Moon (in lunar orbit)
Arrival: 8 May 2024
Institution: SUPARCO
Blue Ghost M1
Mission: lunar lander carrying NASA-sponsored experiments and commercial payloads to Mare Crisium as part of the Commercial Lunar Payload Services program.
Launched: 15 January 2025
Destination: Lunar surface
Arrival: 2 March 2025
Institution: NASA
Hakuto-R Mission 2 Resilience lander and Tenacious rover
Mission: Lunar landing demonstration mission.
Launched: 06:11 UT on 15 January 2025
Destination: Lunar far side
Arrival: April 2025
Institution: Ispace Inc. Ispace Europe
Mercury
BepiColombo
Mission: Spacecraft consists of the Mercury Transfer Module (MTM), Mercury Planetary Orbiter (MPO), and the Mercury Magnetospheric Orbiter (MMO or Mio). MTM and MPO are built by ESA while the MMO is mostly built by JAXA. Once the MTM delivers the MPO and MMO to Mercury orbit, the two orbiters will have the following objectives: to study Mercury's form, interior structure, geology, composition, and craters; to study the origin, structure, and dynamics of its magnetic field; to characterize the composition and dynamics of Mercury's vestigial atmosphere; to test Einstein's theory of general relativity; to search for asteroids sunward of Earth; and to generally study the origin and evolution of a planet close to a parent star.
Launched: 01:45:28 UT on 19 October 2018
Destination: Mercury
Arrival: En route (anticipated to enter Mercury polar orbit in November 2026)
Institution: ESA JAXA
Mars
2001 Mars Odyssey
Mission: Mars Odyssey was designed to map the surface of Mars and also acts as a relay for the Curiosity rover. Its name is a tribute to the novel and 1968 film 2001: A Space Odyssey.
Launched: 7 April 2001
Destination: Mars
Arrival: 24 October 2001
Institution: NASA
Mars Express
Mission: Mars orbiter designed to study the planet's atmosphere and geology and search for sub-surface water. In 2017 the mission was extended until at least the end of 2020.
Launched: 2 June 2003
Destination: Mars
Arrival: 25 December 2003
Institution: ESA
Mars Reconnaissance Orbiter
Mission: the second NASA satellite orbiting Mars. It is specifically designed to analyze the landforms, stratigraphy, minerals, and ice of the red planet.
Launched: 12 August 2005
Destination: Mars
Arrival: 10 March 2006
Institution: NASA
Curiosity rover
Mission: searching for evidence of organic material on Mars, monitoring methane levels in the atmosphere, and engaging in exploration of the landing site at Gale Crater.
Launched: 26 November 2011
Destination: Mars
Arrival: 6 August 2012
Institution: NASA
MAVEN (Mars Atmosphere and Volatile Evolution)
Mission: study the Martian upper atmosphere and its gradual loss to space
Launched: 18 November 2013
Destination: Mars
Arrival: September 2014
Institution: NASA
Trace Gas Orbiter (ExoMars 2016)
Mission: study methane and other trace gases in the Martian atmosphere
Launched: 14 March 2016
Destination: Mars
Arrived: 19 October 2016 (Mars orbit insertion), 21 April 2018 (final orbit)
Institution: ESA
Emirates Mars Mission
Mission: study weather and atmosphere.
Launched: 19 July 2020
Destination: Mars
Arrival: 9 February 2021
Institution: UAESA
Tianwen-1 orbiter
Mission: find evidence for current and past life and produce Martian surface maps. Orbital studies of Martian surface morphology, soil, and atmosphere.
Launched: 23 July 2020
Destination: Mars
Arrival: 10 February 2021
Institution: CNSA
Perseverance rover
Mission: searching for evidence of organic material on Mars, and engaging in exploration of the landing site at Jezero crater.
Launched: 30 July 2020
Destination: Jezero crater, Mars
Arrival: 18 February 2021
Institution: NASA
Asteroids and comets
Hayabusa2
Mission: asteroid study and sample-return
Launched: 3 December 2014
First Destination: 162173 Ryugu
Arrival: 27 June 2018
Left Ryugu: 12 November 2019
Second Destination:
Institution: JAXA
OSIRIS-APEX
Mission: asteroid study and sample-return
Launched: 8 September 2016
Destination: 101955 Bennu
Arrival: 3 December 2018
Left Bennu: 10 May 2021
Destination: 99942 Apophis
Arrival: April 2029
Institution: NASA
Lucy
Mission: to fly by eight Jupiter trojans and one main-belt asteroid
Launched: 16 October 2021
Destination: 52246 Donaldjohanson
Arrival: 20 April 2025
Institution: NASA
Psyche
Mission: to orbit a main belt asteroid
Launched: 13 October 2023
Destination: 16 Psyche
Arrival: August 2029
Institution: NASA
Hera
Mission: to orbit a binary asteroid and observe the asteroids, post DART impact.
Launched: 7 October 2024
Destination: 65803 Didymos system
Arrival: December 2026
Institution: ESA
Heliocentric orbit
Parker Solar Probe
Mission: observation of solar wind, magnetic fields, and coronal energy flow.
Launched: 12 August 2018
Destination: low solar orbit, perihelion 6.9 million km
Arrival: 19 January 2019
Institution: NASA
Solar Orbiter
Mission: detailed measurements of the inner heliosphere and nascent solar wind, and close observations of the polar regions of the Sun.
Launched: 10 February 2020
Destination: High inclination solar orbit
Arrival: Operational orbit in 2023
Institution: ESA
Outer Solar System
Europa Clipper
Mission: mission to study Jupiter and Europa.
Launched: 14 October 2024
Destination: Jupiter
Arrival: 11 April 2030 (en route)
Institution: NASA
Juice (Jupiter Icy Moons Explorer)
Mission: mission to study Jupiter's three icy moons Callisto, Europa and Ganymede, eventually orbiting Ganymede as the first spacecraft to orbit a satellite of another planet.
Launched: 14 April 2023
Destination: Jupiter
Arrival: July 2031 (en route)
Destination: Ganymede
Arrival: December 2034 (en route)
Institution: ESA
Juno
Mission: studying Jupiter from polar orbit. Originally intended to de-orbit into the Jovian atmosphere after 2021, now operating until 2025.
Launched: 5 August 2011
Destination: Jupiter
Arrival: 4 July 2016
Institution: NASA
New Horizons
Mission: the first spacecraft to study Pluto up close, and ultimately the Kuiper Belt. It was the fastest spacecraft when leaving Earth and will be the fifth probe to leave the Solar System.
Launched: 19 January 2006
Destination: Pluto and Charon
Arrival: 14 July 2015
Left Charon: 14 July 2015
Institution: NASA
Voyager 1
Mission: investigating Jupiter and Saturn, and the moons of these planets. Its continuing data feed offered the first direct measurements of the heliosheath and the heliopause. It is currently the furthest man-made object from Earth, and in August 2012 it became the first human-built spacecraft to leave the heliosphere and cross into interstellar space. As of November 2017 it has a distance from the Sun of about 140 astronomical units (AU) (21 billion kilometers, or 0.002 light years), and it will not be overtaken by any other current craft (see the light-time sketch after this entry). Though declining, the onboard power source should keep some of the probe's instruments running until 2025.
Launched: 5 September 1977
Destination: Jupiter and Saturn
Arrival: January 1979
Institution: NASA
Primary mission completion: November 1980
Current trajectory: entered interstellar space August 2012
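The November 2017 distance figure above translates directly into a communication delay. A minimal sketch (the 499-second light time per AU is a standard constant):

```python
LIGHT_TIME_PER_AU_S = 499.005       # seconds for light to cross 1 AU

def one_way_light_time_hours(distance_au):
    """One-way signal delay to a probe at the given distance."""
    return distance_au * LIGHT_TIME_PER_AU_S / 3600

# Voyager 1 at ~140 AU: each command or reply is in flight for ~19.4 hours.
print(f"{one_way_light_time_hours(140):.1f} h one way")
```

A round trip therefore takes nearly 39 hours, which is part of why the probe must operate largely autonomously.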
Voyager 2
Mission: studying all four giant planets. This mission was one of NASA's most successful, yielding a wealth of new information. As of November 2017 it is some 116 AU from the Sun (17.34 billion kilometers). It left the heliosphere and crossed into interstellar space in December 2018. As with Voyager 1, scientists are now using Voyager 2 to learn what the Solar System is like beyond the heliosphere.
Launched: 20 August 1977
Destination: Jupiter, Saturn, Uranus, Neptune
Arrival: 9 July 1979
Institution: NASA
Primary mission completion: August 1989
Current trajectory: entered interstellar space December 2018
See also
Lists of spacecraft
References
Probes
Solar System, Active
Probes, Active
Probes | List of active Solar System probes | [
"Astronomy"
] | 2,623 | [
"Astronomy-related lists",
"Solar System-related lists",
"History of astronomy",
"Solar System",
"Discovery and exploration of the Solar System"
] |
11,952,163 | https://en.wikipedia.org/wiki/Carbon%20monitoring | Carbon monitoring as part of greenhouse gas monitoring refers to tracking how much carbon dioxide or methane is produced by a particular activity at a particular time. For example, it may refer to tracking methane emissions from agriculture, or carbon dioxide emissions from land use changes, such as deforestation, or from burning fossil fuels, whether in a power plant, automobile, or other device. Because carbon dioxide is the greenhouse gas emitted in the largest quantities, and methane is an even more potent greenhouse gas, monitoring carbon emissions is widely seen as crucial to any effort to reduce emissions and thereby slow climate change.
Monitoring carbon emissions is key to the cap-and-trade program currently being used in Europe, as well as the one in California, and will be necessary for any such program in the future, like the Paris Agreement. The lack of reliable sources of consistent data on carbon emissions is a significant barrier to efforts to reduce emissions.
Data sources
Sources of such emissions data include:
Carbon Monitoring for Action (CARMA) – An online database provided by the Center for Global Development, that includes plant-level emissions for more than 50,000 power plants and 4,000 power companies around the world, as well as the total emissions from power generation of countries, provinces (or states), and localities. Carbon emissions from power generation account for about 25 percent of global emissions.
ETSWAP – An emissions monitoring and reporting system currently in use in the UK and Ireland, which enables relevant organizations to monitor, verify and report carbon emissions, as is required by the EU ETS (European Union Emissions Trading Scheme).
FMS – A system used in Germany to record and calculate annual emission reports for plant operators subject to the EU ETS.
Remaining global carbon budget
Carbon emissions are also monitored on a global scale (with data for countries, sectors, companies, activities, etc).
In the United States
Almost all climate change regulations in the US have stipulations to reduce carbon dioxide and methane emissions by economic sector, so accurate monitoring and assessment of these emissions is crucial to assessing compliance with these regulations. Emissions estimates at the national level have been shown to be fairly accurate, but at the state level there is still much uncertainty. Under the Paris Agreement negotiated at COP21, the US pledged to decrease its GHG emissions by 26–28% relative to 2005 levels by 2025. To comply with these regulations, it is necessary to quantify emissions from specific source sectors. A source sector is a sector of the economy that emits a particular greenhouse gas, e.g. methane emissions from the oil and gas industry, which the US has pledged to decrease by 40–45% relative to 2012 levels by 2025 as a more specific action towards achieving its Paris Agreement contribution.
Currently, most governments, including the US government, estimate carbon emissions with a "bottom-up" approach, using emission factors which give the rate of carbon emissions per unit of a certain activity, and data on how much of that activity has taken place. For example, an emission factor can be determined for the amount of carbon dioxide emitted per gallon of gasoline burned, and this can be combined with data on gasoline sales to get an estimate of carbon emissions from light duty vehicles. Other examples include determining the number of cows in various locations, or the mass of coal burned at power plants, and combining these data with the appropriate emissions factors to estimate methane or carbon dioxide emissions. Sometimes "top-down" methods are used to monitor carbon emissions. These involve measuring the concentration of a greenhouse gas in the atmosphere and using these measurements to determine the distribution of emissions which caused the resulting concentrations.
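A minimal sketch of this bottom-up calculation. The gasoline factor is approximately the published EPA value and the coal factor is a rough typical value; both should be treated as illustrative:

```python
# kg CO2 emitted per unit of activity (illustrative values).
EMISSION_FACTORS = {
    "gasoline_gallon": 8.89,   # kg CO2 per gallon of gasoline burned (approx. EPA)
    "coal_kg": 2.4,            # kg CO2 per kg of coal burned (varies by coal type)
}

def bottom_up_emissions(activity):
    """Bottom-up estimate: sum of (activity amount x emission factor)."""
    return sum(amount * EMISSION_FACTORS[source]
               for source, amount in activity.items())

# e.g. 1 million gallons of gasoline sold and 500 tonnes of coal burned:
total = bottom_up_emissions({"gasoline_gallon": 1e6, "coal_kg": 5e5})
print(f"{total:.3g} kg CO2")
```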
Accounting by sector can be complicated when there is a chance of double counting. For example, when coal is gasified to produce synthetic natural gas, which is then mixed with natural gas and burned at a natural gas powered power plant, if accounted for as part of the natural gas sector, this activity must be subtracted from the coal sector and added to the natural gas sector in order to be properly accounted for.
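The reallocation described above amounts to moving a quantity between sector totals so that it is counted exactly once. A minimal sketch with hypothetical numbers:

```python
# Sector inventory in kilotonnes of CO2 (hypothetical numbers).
inventory = {"coal": 1000.0, "natural_gas": 800.0}

def reattribute(inventory, amount, from_sector, to_sector):
    """Move emissions between sectors so each tonne is counted exactly once."""
    inventory[from_sector] -= amount
    inventory[to_sector] += amount
    return inventory

# Synthetic natural gas was made from coal but burned at a gas-fired plant:
print(reattribute(inventory, 50.0, "coal", "natural_gas"))
```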
NASA Carbon Monitoring System (CMS)
NASA Carbon Monitoring System (CMS) is a climate research program created by a congressional order in 2010 that provides grants of about $500,000 a year for climate research that measure carbon dioxide and methane emissions. Using instruments in satellites and airplanes CMS funded research projects provide data to the United States and other countries that help track progress of individual nations regarding their Paris climate emission cuts agreements. For example, CMS projects measured carbon emissions from deforestation and forest degradation. CMS "stitch[ed] together observations of sources and sinks into high-resolution models of the planet's flows of carbon." The 2019 federal budget specifically assured funding for CMS, after the Trump administration proposed to end funding.
In the European Union
As part of the European Union Emission Trading Scheme (EU-ETS), carbon monitoring is necessary in order to ensure compliance with the cap-and-trade program. This carbon monitoring program has three main components: atmospheric carbon dioxide measurements, bottom-up carbon dioxide emissions maps, and an operational data-assimilation system to synthesize the information from the first two components.
The top-down, atmospheric measurement approach involves satellite data and in-situ measurements of carbon dioxide concentrations, as well as atmospheric models that model atmospheric transport of carbon dioxide. These have limited ability to determine carbon dioxide emissions at highly resolved spatial scales and typically cannot resolve scales finer than a 1 km grid. The models also must resolve the fluxes of carbon dioxide from anthropogenic sources like fossil fuel burning, and from natural interactions like terrestrial ecosystems and the ocean. Due to the complexities and limitations of the top-down approach, the EU combines this method with a bottom-up approach.
The current bottom-up data are based on information that is self-reported by emitters in the trading scheme. However, the EU is trying to improve this information source and has proposed plans for improved bottom-up emissions maps, which will have greatly improved spatial resolution and near real-time updates.
An operational data system to combine the information gathered from the two aforementioned sources is also planned. The EU hopes that by the 2030s, this will be operational and enable a highly sophisticated carbon monitoring program across the European Union.
Satellites
Satellites can be used to monitor carbon dioxide concentrations from orbit. NASA currently operates a satellite named the Orbiting Carbon Observatory-2 (OCO-2), and Japan operates their own satellite, the Greenhouse Gases Observing Satellite (GOSAT). These satellites can provide valuable information to fill in data gaps from emission inventories. The OCO-2 measured a strong flux of carbon dioxide over the Middle East, which had not been represented in emissions inventories, indicating that important sources were being neglected in bottom-up estimates of emissions. These satellites currently have errors of about 0.5% in their measurements, but the American and Japanese teams hope to reduce the errors to 0.25%. China recently launched their own satellite to monitor greenhouse gas concentrations on Earth, the TanSat, in December 2016. It currently has a three-year mission planned and will take readings of carbon dioxide concentrations every 16 days.
See also
Top contributors to greenhouse gas emissions
List of countries by carbon dioxide emissions
Supply chain management
References
External links
Climatechange.gov.au
Edie.net
Greenhouse gas emissions
Environmental impact assessment
Environmental monitoring | Carbon monitoring | [
"Chemistry"
] | 1,497 | [
"Greenhouse gases",
"Greenhouse gas emissions"
] |
11,952,224 | https://en.wikipedia.org/wiki/1-Hexene%20%28data%20page%29 | This page provides supplementary chemical data on 1-Hexene.
Material Safety Data Sheet
The handling of this chemical may incur notable safety precautions. It is highly recommended that you seek the Material Safety Datasheet (MSDS) for this chemical from a reliable source such as SIRI, and follow its directions.
MSDS
Structure and properties
Thermodynamic properties
Spectral data
References
Lide, D. R. (Ed.) (1996). CRC Handbook of Chemistry and Physics (76th Edn.). Boca Raton (FL): CRC Press.
Spectral Database for Organic Compounds SDBS
Hexene
Chemical data pages cleanup | 1-Hexene (data page) | [
"Chemistry"
] | 135 | [
"Chemical data pages",
"nan"
] |
11,952,413 | https://en.wikipedia.org/wiki/Pedestrian%20Accessibility%20and%20Movement%20Environment%20Laboratory | The Pedestrian Accessibility and Movement Environment Laboratory (PAMELA) was a research facility located in Upper Holloway, part of the University College London in the United Kingdom.
It was designed to study human interactions in controlled conditions by replicating real-world environments such as urban streets and public parks. The laboratory had an artificial pavement platform which was used to simulate everyday scenarios, from different types of pedestrians to varying pavement conditions. Its experiments were intended to create safer streets and more user-friendly public spaces.
PAMELA in Upper Holloway was replaced by PEARL (Person-Environment-Activity Research Laboratory) in the London East Business and Technical Park, Dagenham.
See also
Safety engineering
NIST stone test wall
References
External links
Pedestrian infrastructure in the United Kingdom
University College London
Buildings and structures in the London Borough of Islington | Pedestrian Accessibility and Movement Environment Laboratory | [
"Engineering"
] | 159 | [
"Architecture stubs",
"Architecture"
] |
11,952,902 | https://en.wikipedia.org/wiki/AIM%20Multiuser%20Benchmark | The AIM Multiuser Benchmark, also called the AIM Benchmark Suite VII or AIM7, is a job throughput benchmark widely used by UNIX computer system vendors. Current research operating systems such as K42 use
the reaim
form of the benchmark for performance analysis.
The AIM7 benchmark measures some of the same things as the SDET benchmark.
The original code was developed by Gene Dronek for AIM Technology, Inc., who licensed it to others. The first AIM Benchmarks were for single user PCs. The suite was expanded and enhanced to become multi-user benchmarks by Donald Steiny. Caldera International, Inc., bought the license and released the source code for Suite VII and Suite IX under the GPL.
AIM7 is a program written in C that forks many processes called tasks, each of which concurrently runs in random order a set of subtests called jobs. There are 53 kinds of jobs, each of which exercises a different aspect of the operating system, such as disk-file operations, process creation, user virtual memory operations, pipe I/O, and compute-bound arithmetic loops.
An AIM7 benchmark run is composed of a sequence of subruns with the number of tasks incrementing by one between each subrun. Each subrun goes until each of its tasks has completed its set of jobs. Each subrun reports a metric of jobs completed per minute, with the final report for the overall benchmark being a table of that throughput metric versus number of tasks. A given system will have a peak number of tasks N at which the jobs per minute is maximized. Either N or the value of the jobs per minute at N is typically used as the metric of interest.
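The structure described above is straightforward to sketch. The following is illustrative only (AIM7 itself is written in C, and its 53 jobs are far more varied than the two toy jobs here): it forks one process per task, each task runs the job set in random order, and each subrun reports jobs per minute.

```python
import multiprocessing as mp
import random
import time

# Two toy jobs standing in for AIM7's 53 subtests (arithmetic, pipe I/O, ...).
def arithmetic_job():
    sum(i * i for i in range(10_000))

def pipe_job():
    r, w = mp.Pipe(duplex=False)
    w.send(b"x" * 4096)
    r.recv()
    r.close()
    w.close()

JOBS = [arithmetic_job, pipe_job]

def task(seed):
    """One task: run the whole job set once, in random order."""
    jobs = JOBS[:]
    random.Random(seed).shuffle(jobs)
    for job in jobs:
        job()

def subrun(n_tasks):
    """Run n_tasks concurrent tasks; return throughput in jobs per minute."""
    start = time.perf_counter()
    procs = [mp.Process(target=task, args=(i,)) for i in range(n_tasks)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    elapsed = time.perf_counter() - start
    return n_tasks * len(JOBS) / (elapsed / 60)

if __name__ == "__main__":
    # Increment the task count by one per subrun, as AIM7 does,
    # and report the jobs-per-minute metric versus load.
    for n in range(1, 9):
        print(n, round(subrun(n)))
```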
References
Benchmarks (computing) | AIM Multiuser Benchmark | [
"Technology"
] | 359 | [
"Benchmarks (computing)",
"Computing comparisons",
"Computer performance"
] |
11,953,644 | https://en.wikipedia.org/wiki/Landing%20lights | Landing lights are lights, mounted on aircraft, that illuminate the terrain and runway ahead during takeoff and landing, as well as being used as a collision avoidance measure against other aircraft and bird strikes. Landing lights must be activated when the aircraft is under 10,000 feet in altitude.
Overview
Almost all modern aircraft are equipped with landing lights if approved for nighttime operations. Landing lights are usually of very high intensity, because of the considerable distance that may separate an aircraft from terrain or obstacles. The landing lights of large aircraft can easily be seen by other aircraft over 100 miles away.
Key considerations of landing light design include intensity, reliability, weight, and power consumption. Ideal landing lights are extremely intense, require little electrical power, are lightweight, and have long and predictable service lives. Past and present technologies include ordinary incandescent lamps, halogen lamps, various forms of arc lamps and discharge lamps, and LED lamps.
Landing lights are typically only useful as visibility aids to the pilots when the aircraft is very low and close to terrain, as during take-off and landing. Landing lights are usually extinguished in cruise flight, especially if atmospheric conditions are likely to make the lights reflect or glare back into the eyes of the pilots. However, the brightness of landing lights makes them useful for increasing the visibility of an aircraft to other pilots, and so pilots are often encouraged to keep their landing lights on while below certain altitudes or in crowded airspace. Some aircraft (especially business jets) have lights that— when not needed to directly illuminate the ground—can operate in a flashing mode to enhance visibility to other aircraft. One convention is for commercial aircraft to turn on their landing lights when changing flight levels. Landing lights are sometimes used in emergencies to communicate with ground personnel or other aircraft, especially if other means of communication are not available (radio failures and the like). Additionally, landing lights have at times been installed as a vehicle high beam in the hot rod scene, although this is not legal.
Legal considerations
In many jurisdictions, landing light fixtures and the lamps they use must be certified for use in a given aircraft by a government authority. The use of the landing light may be required or forbidden by local regulations, depending on a variety of factors such as the local time, weather, or flight operations.
In the United States, for example, landing lights are not required or used for many types of aircraft, but their use is strongly encouraged, both for take-off and landing and during any operations below or within of an airport (FAA AIM 4-3-23). According to CFR 14 and FAR Part 91.205, a landing light is required for all aircraft used in commercial operations at night.
Landing lights may not be lit when taxiing or near an airport gate; this can cause flash blindness to ground crew and other pilots.
See also
Aircraft warning lights
Aviation navigation lights
Optical landing system
Precision approach path indicator
References
Federal Aviation Administration (U.S.), Aeronautical Information Manual, FAA, March 2007
Federal Aviation Administration (U.S.), Airplane Flying Handbook (FAA-H-8083-3A), FAA, 2004
Federal Aviation Administration (U.S.), Air Traffic Control (Order 7110.65R), February 16, FAA, 2006
Federal Aviation Administration (U.S.), Instrument Procedures Handbook (FAA-H-8261-1), FAA, 2004
Federal Aviation Administration (U.S.), Pilot's Handbook of Aeronautical Knowledge (FAA-H-8083-25), FAA, 2003
Murphy, Kevin D. and Bell, Leisha, "Airspace for Everyone," Safety Advisor, Regulations 1 (SA02-9/05), AOPA Air Safety Association, September 2005
Aircraft external lights
Optical communications | Landing lights | [
"Engineering"
] | 764 | [
"Optical communications",
"Telecommunications engineering"
] |
13,493,012 | https://en.wikipedia.org/wiki/Relative%20volatility | Relative volatility is a measure comparing the vapor pressures of the components in a liquid mixture of chemicals. This quantity is widely used in designing large industrial distillation processes. In effect, it indicates the ease or difficulty of using distillation to separate the more volatile components from the less volatile components in a mixture. By convention, relative volatility is usually denoted as .
Relative volatilities are used in the design of all types of distillation processes as well as other separation or absorption processes that involve the contacting of vapor and liquid phases in a series of equilibrium stages.
Relative volatilities are not used in separation or absorption processes that involve components reacting with each other (for example, the absorption of gaseous carbon dioxide in aqueous solutions of sodium hydroxide).
Definition
For a liquid mixture of two components (called a binary mixture) at a given temperature and pressure, the relative volatility is defined as
α = (y₁/x₁) / (y₂/x₂) = K₁/K₂
where y is a component's mole fraction in the vapor phase, x is its mole fraction in the liquid phase, K = y/x is that component's vapor–liquid equilibrium ratio, and subscripts 1 and 2 denote the more volatile and the less volatile component respectively.
When their liquid concentrations are equal, more volatile components have higher vapor pressures than less volatile components. Thus, a K value (= y/x) for a more volatile component is larger than the K value for a less volatile component. That means that α ≥ 1, since the larger K value of the more volatile component is in the numerator and the smaller K value of the less volatile component is in the denominator.
α is a unitless quantity. When the volatilities of both key components are equal, α = 1 and separation of the two by distillation would be impossible under the given conditions because the compositions of the liquid and the vapor phase are the same (azeotrope). As the value of α increases above 1, separation by distillation becomes progressively easier.
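A minimal sketch of the definition in code, using an illustrative equilibrium point roughly consistent with published benzene–toluene data at atmospheric pressure (the numbers are assumptions, not from this article):

```python
def relative_volatility(y1, x1, y2, x2):
    """alpha = (y1/x1) / (y2/x2) for a binary vapor-liquid equilibrium point."""
    return (y1 / x1) / (y2 / x2)

# Illustrative benzene(1)/toluene(2) point near atmospheric pressure:
x1, y1 = 0.40, 0.615            # liquid and vapor mole fractions of benzene
alpha = relative_volatility(y1, x1, 1 - y1, 1 - x1)
print(f"alpha = {alpha:.2f}")   # ~2.4, well above the 1.05 threshold noted below
```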
A liquid mixture containing two components is called a binary mixture. When a binary mixture is distilled, complete separation of the two components is rarely achieved. Typically, the overhead fraction from the distillation column consists predominantly of the more volatile component and some small amount of the less volatile component and the bottoms fraction consists predominantly of the less volatile component and some small amount of the more volatile component.
A liquid mixture containing many components is called a multi-component mixture. When a multi-component mixture is distilled, the overhead fraction and the bottoms fraction typically contain much more than one or two components. For example, some intermediate products in an oil refinery are multi-component liquid mixtures that may contain alkane, alkene and alkyne hydrocarbons—ranging from methane, having one carbon atom, to decanes having ten carbon atoms. For distilling such a mixture, the distillation column may be designed (for example) to produce:
An overhead fraction containing predominantly the more volatile components ranging from methane (having one carbon atom) to propane (having three carbon atoms)
A bottoms fraction containing predominantly the less volatile components ranging from isobutane (having four carbon atoms) to decanes (ten carbon atoms).
Such a distillation column is typically called a depropanizer.
The designer would designate the key components governing the separation design to be propane as the so-called light key (LK) and isobutane as the so-called heavy key (HK). In that context, a lighter component means a component with a lower boiling point (or a higher vapor pressure) and a heavier component means a component with a higher boiling point (or a lower vapor pressure).
Thus, for the distillation of any multi-component mixture, the relative volatility is often defined as
α = K_LK / K_HK
that is, the ratio of the K value of the light key to that of the heavy key.
Large-scale industrial distillation is rarely undertaken if the relative volatility is less than 1.05.
The values of α have been correlated empirically or theoretically in terms of temperature, pressure and phase compositions in the form of equations, tables or graphs, such as the well-known DePriester charts.
α values are widely used in the design of large-scale distillation columns for distilling multi-component mixtures in oil refineries, petrochemical and chemical plants, natural gas processing plants and other industries.
See also
References
External links
Distillation Theory by Ivar J. Halvorsen and Sigurd Skogestad, Norwegian University of Science and Technology (scroll down to: 2.2.3 K-values and Relative Volatility)
Distillation Principals by Ming T. Tham, University of Newcastle upon Tyne (scroll down to Relative Volatility)
Engineering thermodynamics
Distillation
Chemical engineering
Petroleum engineering | Relative volatility | [
"Physics",
"Chemistry",
"Engineering"
] | 905 | [
"Separation processes",
"Chemical engineering",
"Engineering thermodynamics",
"Petroleum engineering",
"Energy engineering",
"Distillation",
"Thermodynamics",
"nan",
"Mechanical engineering"
] |
13,494,252 | https://en.wikipedia.org/wiki/Lanz%20Bulldog | The Lanz Bulldog was a series of tractors manufactured by Heinrich Lanz AG in Mannheim, Baden-Württemberg, Germany. Production started in 1921 with the Lanz HL, and various versions of the Bulldog were produced up to 1960, one of them being the Lanz Bulldog D 9506. John Deere purchased Lanz in 1956 and started using the name "John Deere Lanz" for the Lanz product line. A few years after the Bulldog was discontinued the Lanz name fell into disuse. The Lanz Bulldog was one of the most popular German tractors, with over 220,000 of them produced in its long production life. The name "Bulldog" is widely used in Germany as a synonym for tractors even today, especially in Bavaria.
Engine
The Lanz Bulldog was built with a single-cylinder, two-stroke Akroyd engine – the so-called Bulldog engine – that was designed by Fritz Huber. The Bulldog engine was installed horizontally, with the ignition device – the hot bulb – facing forward. It has crankcase scavenging, and intake ports instead of valves. Due to its few moving parts – the piston, crank assembly and flywheel, the fuel injection system and oil system are the only parts that move – it was simple to manufacture, operate and maintain. In the Bulldog engine, fuel is sprayed under low pressure onto the hot-bulb ignition device, where the fuel is ignited and gradually undergoes combustion. This makes the Bulldog engine thermodynamically inefficient, but it requires neither a carburettor like an Otto engine, nor high compression like a Diesel engine. It does not require a special fuel to operate; it can burn regular fuels like diesel fuel or petrol, but also a wide variety of low grade fuel oils – even waste oils. This made the Bulldog engine reasonably economical to operate, despite its high fuel consumption. The original Bulldog had evaporative cooling. Later models use a thermosiphon cooler. For starting, the ignition device has to be heated to ignition temperature using a blow torch, then the engine is hand-cranked with the steering wheel. Late Bulldog engines have a redesigned hot-bulb with direct injection; they were offered with electric glowplugs and an electric starter motor. Lanz sold these as "Halbdiesel" (half diesel) and "Volldiesel" (full diesel) models, albeit that the engine was not a diesel engine. The Bulldog engine was made with various different displacements, with the 4.8 and 10.3 litre versions being the most common ones.
130 mm × 170 mm, cm³
140 mm × 170 mm, cm³
145 mm × 170 mm, cm³
150 mm × 210 mm, cm³
160 mm × 190 mm, cm³
160 mm × 210 mm, cm³
170 mm × 210 mm, cm³
190 mm × 220 mm, cm³
210 mm × 210 mm, cm³
190 mm × 260 mm, cm³
225 mm × 260 mm, cm³
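The displacement figures for the bore and stroke pairs listed above follow from the single-cylinder displacement formula V = (π/4) · bore² · stroke. A minimal sketch, shown for the two most common sizes mentioned above:

```python
import math

def displacement_cc(bore_mm, stroke_mm):
    """Single-cylinder displacement V = (pi/4) * bore^2 * stroke, in cm^3."""
    return math.pi / 4 * bore_mm ** 2 * stroke_mm / 1000

print(round(displacement_cc(170, 210)))   # ~4766 cm^3: the 4.8-litre engine
print(round(displacement_cc(225, 260)))   # ~10338 cm^3: the 10.3-litre engine
```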
Lanz Iberica
Bulldogs were also produced in Spain by Lanz Iberica S.A. at Getafe near Madrid. A total of 17,100 tractors were built from 1956 to 1963.
Lanz Alldog
From 1951-57, Lanz manufactured a rear-engined derivative of the Bulldog, known as the Lanz Alldog. This had a large bed over the front wheels, for transporting cargo over ground unsuitable for a trailer. Tractors of this type were popular among Israeli farmers during the 1950s, especially on hilly and mountainous terrain.
Bulldog Copies
The Bulldog design was copied in other countries by several different manufacturers. While some of these copies were legitimately produced under license from Lanz, most were built with each builder's own frame and body design, powered by unlicensed copies of the patented Bulldog hot-bulb engine. Examples include:
France
"Le Percheron" was a licensed copy of the 25 HP hot-bulb Bulldog, built by Société Nationale de Construction Aeronautic du Centre (SNCAC) at Colombe in France from about 1939. It is believed that nearly 3,700 were built before production ceased in 1956.
Australia
The KL Bulldog was produced by Kelly & Lewis of Springvale, Victoria, Australia from 1948 to December 1952. Just over 860 were built, based on the 35 HP Model N Bulldog.
Poland
Ursus produced a copy of the 45 HP Bulldog, called the C-45, at the ZM Ursus factory (Zakłady Mechaniczne Ursus) in Ursus near Warsaw, Poland, from 1947. It was replaced by the C-451 in 1957, and from 1960 production moved to Zakłady Mechaniczne in Gorzów Wielkopolski. About 55,000 Ursus C-45/C-451 tractors were built from 1947 to 1965.
Argentina
In 1951 a copy of the 55 HP Bulldog was produced by Industrias Aeronáuticas y Mecánicas del Estado in Argentina. The tractor was called "Pampa" and the badge on the front read IAME. From 1955 the tractor was produced by Dirección Nacional de Fabricaciones e Investigaciones Aeronáuticas and the badge was changed to DINFIA. A total of 3,760 Pampas were produced from 1951 to 1960.
Similar tractors
The Bulldog was similar to other European hot-bulb tractors produced around the same time, such as the SF Vierzon from France, the Landini tractor from Italy, and the HSCS from Hungary. The Field Marshall, produced in England, was a similar design to the Bulldog hot-bulb engine, except that an internally mounted vaporising plate replaced the conventional externally located hot bulb; this internal design required ignition papers in place of the external blow lamp to start the engine.
References
External links
Tractors
John Deere
1921 establishments in Germany | Lanz Bulldog | [
"Engineering"
] | 1,212 | [
"Engineering vehicles",
"Tractors"
] |
13,494,319 | https://en.wikipedia.org/wiki/Pragmatic%20mapping | Pragmatic mapping — a term in current use in linguistics, computing, cognitive psychology, and related fields — is the process by which a given abstract predicate (a symbol) comes to be associated through action (a dynamic index) with some particular logical object (an icon). The logical object may be a thing, person, relation, event, situation, or a string of these at any conceivable level of complexity. A relatively simple example is the conventional — successful, appropriate, and mundanely “true” — linking of a proper name to the person of whom it is a conventional designation.
There are three parts to this process when it succeeds. There is the abstract symbol which is used to represent something else (the name or the entire signifying predication, for instance); there is the something else that is represented by that symbol (whatever is signified); and there is the act of using the symbol in a conventional way to represent whatever it usually represents (the act of signifying). Pragmatic mapping is the process by which any material argument, or any imagined one, comes to be associated with a predicate that purports to be and succeeds in being about it. That is, the predicate must be appropriate ("true" in the most mundane sense) relative to its logical object. The predication may be as simple as a naming act or as complex as a representation consisting of many distinct propositions with many associated clauses.
For instance, if we say "Jesse James was an American outlaw" the name "Jesse James" purports to be about a certain historical person whom we may know to have been shot by another individual named Robert Ford. We may know that a movie featuring Brad Pitt as Jesse James was released in September 2007 in select theaters across America. If the pragmatic mapping of the name "Jesse James" is complete, i.e., if it succeeds, it is mapped onto that certain individual that was actually shot by Robert Ford.
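The three parts of the process lend themselves to a toy computational sketch: a symbol, the object it purports to be about, and the signifying act that links them. In the Python fragment below, the referents table is a hypothetical stand-in for linguistic convention, not anything defined in this article:

```python
# "referents" is a hypothetical stand-in for the conventions of a language.
referents = {"Jesse James": "the historical outlaw shot by Robert Ford"}

def pragmatic_map(symbol):
    """The signifying act: resolve a symbol to its conventional object.
    Returning None models a mapping that fails to succeed."""
    return referents.get(symbol)

print(pragmatic_map("Jesse James"))  # the act succeeds: symbol -> object
print(pragmatic_map("Robin Hood"))   # no convention established: None
```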
Nothing of importance changes in the pragmatic mapping process if it turns out that Jesse James and Robert Ford are figments of someone’s imagination, excepting, of course, the truth value of the propositions that include the logical object of the name, Jesse James. In ordinary conversation and human communication in general, it has been demonstrated logically and mathematically that meaning is utterly dependent on the true and appropriate pragmatic mapping of symbols to their conventional logical objects. Infants depend on exemplification of such mapping relations to acquire languages and all meaningful linguistic representations have been proved to depend on such mappings.
See also
Wörter und Sachen
References
Frege, G. (1967). Begriffsschrift, a formula language modeled upon that of arithmetic for pure thought. In J. van Heijenoort (Ed. and Trans.), Frege and Gödel: Two fundamental texts in mathematical logic (pp. 5–82). Harvard University Press, Cambridge, Massachusetts. (Original work published 1879)
Krashen, S. D. 1982. Principles and Practices in Second Language Acquisition. New York: Pergamon.
Oller, J. W., Jr. (1975). Pragmatic mappings. Lingua, 35, 333-344.
Oller, J. W., Jr. (2005). Common ground between form and content: The pragmatic solution to the bootstrapping problem. Modern Language Journal, 89, 92-114.
Oller, J. W., Jr., Oller, S. D., & Badon, L. C. (2006). Milestones: Normal speech and language development across the life span. San Diego, CA: Plural Publishing, Inc.
Pearson, L. (2007). Patterns of development in Spanish L2 pragmatic acquisition: An analysis of novice learners' production of directives. The Modern Language Journal 90 (4), 473–495.
Peirce, C. S. (1897). The logic of relatives. The Monist, 7, 161 – 217. Also in C. Hartshorne & P.Weiss (Eds), (1932), Collected Papers of C. S. Peirce, Volume 2 (pp. 288 – 345). Cambridge, MA: Harvard University Press. See https://www.cs.cmu.edu/afs/cs/project/jair/pub/volume20/fox03a-html/node16.html for the use of the term “pragmatic mapping” in modern computing.
Tarski, A. (1949). The semantic conception of truth. In H. Feigl & W. Sellars (Eds. and Trans.), Readings in philosophical analysis (pp. 341–374). New York: Appleton. (Original work published 1944)
Tarski, A. (1956). The concept of truth in formalized languages. In J. J. Woodger (Ed. and Trans.), Logic, semantics, and metamathematics (pp. 152–278). Oxford: Oxford University. (Original work published 1936)
Logic
Cognitive psychology
Pragmatics
Computational linguistics | Pragmatic mapping | [
"Technology",
"Biology"
] | 1,071 | [
"Behavior",
"Computational linguistics",
"Behavioural sciences",
"Cognitive psychology",
"Natural language and computing"
] |
13,494,983 | https://en.wikipedia.org/wiki/Lonafarnib | Lonafarnib, sold under the brand name Zokinvy, is a medication used to reduce the risk of death due to Hutchinson-Gilford progeria syndrome and for the treatment of certain processing-deficient progeroid laminopathies in people one year of age and older. It is also in clinical trials as part of a combination treatment for hepatitis D virus infection.
The most common side effects included nausea, vomiting, headache, diarrhea, infection, decreased appetite and fatigue.
Lonafarnib was approved for medical use in the United States in November 2020, and in the European Union in July 2022. The U.S. Food and Drug Administration (FDA) considers it to be a first-in-class medication.
Medical uses
Lonafarnib is indicated to be used to reduce the risk of death due to Hutchinson-Gilford progeria syndrome and for the treatment of certain other processing-deficient progeroid laminopathies in people one year of age and older.
Ongoing studies and clinical trials have found lonafarnib treatment to be associated with clearance of hepatitis D virus (HDV). To date, the trials have shown lonafarnib to be effective against HDV when combined with ritonavir, as a supportive treatment alongside pegylated interferon alpha therapy.
Contraindications
Lonafarnib is contraindicated for co-administration with strong or moderate CYP3A inhibitors and inducers, as well as midazolam and certain cholesterol-lowering medications.
History
Lonafarnib, a farnesyltransferase inhibitor, is an oral medication that helps prevent the buildup of defective progerin or progerin-like protein. The effectiveness of lonafarnib for the treatment of Hutchinson-Gilford progeria syndrome was demonstrated in 62 patients from two single-arm trials (Trial 1/NCT00425607 and Trial 2/NCT00916747) that were compared to matched, untreated patients from a separate natural history study. Compared to untreated patients, the lifespan of Hutchinson-Gilford progeria syndrome patients treated with lonafarnib increased by an average of three months through the first three years of treatment and by an average of 2.5 years through the maximum follow-up time of 11 years. Lonafarnib's approval for the treatment of certain processing-deficient progeroid laminopathies that are very rare took into account similarities in the underlying genetic mechanism of disease and other available data. The participants were from 34 countries around the world, including the United States.
The U.S. Food and Drug Administration (FDA) granted the application for lonafarnib priority review, orphan drug, and breakthrough therapy designations. In addition, the manufacturer received a rare pediatric disease priority review voucher. The FDA granted the approval of Zokinvy to Eiger BioPharmaceuticals, Inc.
Society and culture
Legal status
On 19 May 2022, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization under exceptional circumstances for the medicinal product Zokinvy, intended for the treatment of patients with progeroid syndromes. The applicant for this medicinal product is EigerBio Europe Limited. It was approved for medical use in the European Union in July 2022.
Research
Lonafarnib is a farnesyltransferase inhibitor (FTI) that has been investigated in a human clinical trial as a treatment for progeria, which is an extremely rare genetic disorder in which symptoms resembling aspects of aging are manifested at a very early age.
Lonafarnib is a synthetic tricyclic halogenated carboxamide with antineoplastic properties. As such, it is used primarily for cancer treatment. For those with progeria, research has shown that the drug reduces the prevalence of stroke and transient ischemic attack, and the prevalence and frequency of headaches while taking the medication. A phase II clinical trial was completed in 2012, which showed that a cocktail of drugs that included lonafarnib and two other drugs met clinical efficacy endpoints that improved the height and diminished the rigidity of the bones of progeria patients.
References
External links
"Experimental Drug Is First To Help Kids With Premature-Aging Disease", NPR, 24 September 2012
Benzocycloheptapyridines
Chloroarenes
Farnesyltransferase inhibitors
Bromoarenes
Orphan drugs
Piperidines
Ureas | Lonafarnib | [
"Chemistry"
] | 963 | [
"Organic compounds",
"Ureas"
] |
13,495,046 | https://en.wikipedia.org/wiki/Quantum%20spin%20Hall%20effect | The quantum spin Hall state is a state of matter proposed to exist in special, two-dimensional semiconductors that have a quantized spin-Hall conductance and a vanishing charge-Hall conductance. The quantum spin Hall state of matter is the cousin of the integer quantum Hall state, but it does not require the application of a large magnetic field. The quantum spin Hall state does not break charge conservation symmetry or spin (S_z) conservation symmetry (which are needed in order to have well-defined Hall conductances).
Description
The first proposal for the existence of a quantum spin Hall state was developed by Charles Kane and Gene Mele, who adapted an earlier model for graphene by F. Duncan M. Haldane which exhibits an integer quantum Hall effect. The Kane and Mele model is two copies of the Haldane model, such that the spin-up electron exhibits a chiral integer quantum Hall effect while the spin-down electron exhibits an anti-chiral integer quantum Hall effect. A relativistic version of the quantum spin Hall effect was introduced in the 1990s for the numerical simulation of chiral gauge theories; the simplest example consists of a parity and time reversal symmetric U(1) gauge theory with bulk fermions of opposite sign mass, a massless Dirac surface mode, and bulk currents that carry chirality but not charge (the spin Hall current analogue). Overall the Kane–Mele model has a charge-Hall conductance of exactly zero but a spin-Hall conductance of exactly 2 (in units of e/4π). Independently, a quantum spin Hall model was proposed by Andrei Bernevig and Shoucheng Zhang in an intricate strain architecture which engineers, due to spin-orbit coupling, a magnetic field pointing upwards for spin-up electrons and a magnetic field pointing downwards for spin-down electrons. The main ingredient is the existence of spin–orbit coupling, which can be understood as a momentum-dependent magnetic field coupling to the spin of the electron.
Real experimental systems, however, are far from the idealized picture presented above, in which spin-up and spin-down electrons are not coupled. A very important achievement was the realization that the quantum spin Hall state remains non-trivial even after the introduction of spin-up/spin-down scattering, which destroys the quantum spin Hall effect itself. In a separate paper, Kane and Mele introduced a topological invariant which characterizes a state as a trivial or non-trivial band insulator (regardless of whether the state exhibits a quantum spin Hall effect). Further stability studies of the edge liquid through which conduction takes place in the quantum spin Hall state proved, both analytically and numerically, that the non-trivial state is robust to both interactions and extra spin-orbit coupling terms that mix spin-up and spin-down electrons. Such a non-trivial state (whether or not it exhibits a quantum spin Hall effect) is called a topological insulator, which is an example of symmetry-protected topological order protected by charge conservation symmetry and time reversal symmetry. (Note that the quantum spin Hall state is also a symmetry-protected topological state, protected by charge conservation symmetry and spin (S_z) conservation symmetry; time reversal symmetry is not needed to protect it. The topological insulator and the quantum spin Hall state are thus different symmetry-protected topological states, and therefore different states of matter.)
In HgTe quantum wells
Since graphene has extremely weak spin-orbit coupling, it is very unlikely to support a quantum spin Hall state at temperatures achievable with today's technologies. Two-dimensional topological insulators (also known as the quantum spin Hall insulators) with one-dimensional helical edge states were predicted in 2006 by Bernevig, Hughes and Zhang to occur in quantum wells (very thin layers) of mercury telluride sandwiched between cadmium telluride, and were observed in 2007.
Different quantum wells of varying HgTe thickness can be built. When the sheet of HgTe in between the CdTe is thin, the system behaves like an ordinary insulator and does not conduct when the Fermi level resides in the band-gap. When the sheet of HgTe is varied and made thicker (this requires the fabrication of separate quantum wells), an interesting phenomenon happens. Due to the inverted band structure of HgTe, at some critical HgTe thickness, a Lifshitz transition occurs in which the system closes the bulk band gap to become a semi-metal, and then re-opens it to become a quantum spin Hall insulator.
In the gap closing and re-opening process, two edge states are brought out from the bulk and cross the bulk gap. As such, when the Fermi level resides in the bulk gap, the conduction is dominated by the edge channels that cross the gap. The two-terminal conductance is 2e²/h in the quantum spin Hall state and zero in the normal insulating state. As the conduction is dominated by the edge channels, the value of the conductance should be insensitive to how wide the sample is. A magnetic field should destroy the quantum spin Hall state by breaking time-reversal invariance and allowing spin-up spin-down electron scattering processes at the edge. All these predictions have been experimentally verified in an experiment performed in the Molenkamp labs at Universität Würzburg in Germany.
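The quoted two-terminal value corresponds to two ballistic helical edge channels, each contributing one conductance quantum e²/h. A quick numerical check in Python, using the exact SI values of the constants:

```python
e = 1.602176634e-19  # elementary charge in coulombs (exact since 2019)
h = 6.62607015e-34   # Planck constant in joule-seconds (exact since 2019)

g = 2 * e ** 2 / h                        # two helical edge channels
print(f"G = {g * 1e6:.2f} microsiemens")  # ~77.48 uS
print(f"R = {1 / g:.0f} ohm")             # ~12906 ohm, half the von Klitzing constant
```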
See also
Spin Hall effect
Quantum Hall effect
References
Further reading
Hall effect
Condensed matter physics
Quantum electronics
Spintronics | Quantum spin Hall effect | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,114 | [
"Physical phenomena",
"Quantum electronics",
"Hall effect",
"Spintronics",
"Phases of matter",
"Quantum mechanics",
"Electric and magnetic fields in matter",
"Materials science",
"Electrical phenomena",
"Condensed matter physics",
"Nanotechnology",
"Solid state engineering",
"Matter"
] |
13,495,550 | https://en.wikipedia.org/wiki/Controlled-release%20fertiliser | A controlled-release fertiliser (CRF) is a granulated fertiliser that releases nutrients gradually into the soil (i.e., with a controlled release period). Controlled-release fertilizer is also known as controlled-availability fertilizer, delayed-release fertilizer, metered-release fertilizer, or slow-acting fertilizer. Usually CRF refers to nitrogen-based fertilizers. Slow- and controlled-release fertilizers accounted for only 0.15% (562,000 tons) of the fertilizer market in 1995.
History
Controlled-nitrogen-release technologies based on polymers derived from combining urea and formaldehyde were first produced in 1936 and commercialized in 1955. The early product had 60 percent of the total nitrogen cold-water-insoluble, and the unreacted (quick-release) less than 15%. Methylene ureas, e.g. methylene diurea, were commercialized in the 1960s and 1970s, having 25% and 60% of the nitrogen as cold-water-insoluble, and unreacted urea nitrogen in the range of 15% to 30%.
In the 1960s in the U.S., the Tennessee Valley Authority National Fertilizer Development Center began developing sulfur-coated urea. Sulfur was used as the principal coating material because of its low cost and its value as a secondary nutrient. Usually wax or polymer is added to perfect the encapsulation. The slow-release properties depend on the degradation of the secondary sealant by soil microbes as well as mechanical imperfections (cracks, etc.) in the capsule. 6 to 16 weeks of delayed release in turf applications is typical. When a hard polymer is used as the secondary coating, the properties are a cross between diffusion-controlled particles and traditional sulfur-coated.
Advantages
Many factors motivate the use of CRF, including more efficient use of the fertilizer. Illustrating the problem, it is estimated that, on average, 16% of conventional nitrogen-based fertilizer is lost by evaporation (as NH3, N2O, N2) or as run-off ammonia. Another factor favoring CRF is the protection of crops from chemical damage (fertiliser burn): in addition to providing nutrition to plants, excess fertilizer can be poisonous to the same plant. Finally, important advantages are economic: fewer applications and the use of less fertiliser overall. The yield is in most cases improved by more than 10%.
Environmental considerations
CRF has the potential to decrease nitrogenous pollution, which leads to eutrophication. The efficient use of nitrogen-based fertilizers is also relevant to the amount of N2O emitted into the atmosphere each year, of which 36% is due to human activity. The anthropogenic N2O is produced by microorganisms acting on ammonia faster than the plant can take up this nutrient.
Implementation
The fertiliser is administered either by topdressing the soil, or by mixing the fertiliser into the soil before sowing. Polymer coating of fertilizer ingredients gives tablets and spikes a 'true time-release' or 'staged nutrient release' (SNR) of fertilizer nutrients. NBPT functions as an inhibitor of the enzyme urease. Urease inhibitors, at levels of 0.05 weight percent, are added to urea-based fertilizers to control its conversion to ammonia.
Mechanisms of release
The rate of release is determined by three main factors: (i) the low solubility of the compounds in the soil moisture, (ii) the breakdown of the protective coating applied to fertilizer pellets, and (iii) the conversion of the chemicals into ammonia or a similarly effective plant nutrient.
Conventional fertilisers are soluble in water, so their nutrients disperse quickly. Because controlled-release fertilisers are not water-soluble, their nutrients disperse into the soil more slowly. The fertiliser granules may have an insoluble substrate or a semi-permeable jacket that prevents dissolution while allowing nutrients to flow outward.
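Coated-granule release is often summarized with simple empirical kinetics. Purely as an illustration – the first-order form and the rate constant below are assumptions for the sketch, not values from this article – the cumulative fraction released can be modelled as 1 − e^(−kt):

```python
import math

def released_fraction(t_days, k_per_day=0.05):
    """Cumulative nutrient fraction released under assumed first-order kinetics."""
    return 1.0 - math.exp(-k_per_day * t_days)

for t in (7, 30, 90):
    print(f"day {t}: {released_fraction(t):.0%} released")  # 30%, 78%, 99%
```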
Definitions
The Association of American Plant Food Control Officials (AAPFCO) has published the following general definitions (Official Publication 57):
Slow- or controlled-release fertilizer: A fertilizer containing a plant nutrient in a form which delays its availability for plant uptake and use after application, or which extends its availability to the plant significantly longer than a reference ‘rapidly available nutrient fertilizer’ such as ammonium nitrate or urea, ammonium phosphate or potassium chloride. Such delay of initial availability or extended time of continued availability may occur by a variety of mechanisms. These include controlled water solubility of the material by semi-permeable coatings, occlusion, protein materials, or other chemical forms, by slow hydrolysis of water-soluble low molecular weight compounds, or by other unknown means.
Stabilized nitrogen fertilizer: A fertilizer to which a nitrogen stabilizer has been added. A nitrogen stabilizer is a substance added to a fertilizer which extends the time the nitrogen component of the fertilizer remains in the soil in the urea-N or ammoniacal-N form.
Nitrification inhibitor: A substance that inhibits the biological oxidation of ammoniacal-N to nitrate-N.
Urease inhibitor: A substance that inhibits hydrolytic action on urea by the enzyme urease.
Examples
Most slow-release fertilizers are derivatives of urea, a straight fertilizer providing nitrogen. Isobutylidenediurea ("IBDU") and urea-formaldehyde slowly convert in the soil to urea, which is rapidly uptaken by plants. IBDU is a single compound with the formula (CH3)2CHCH(NHC(O)NH2)2 whereas the urea-formaldehydes consist of mixtures of the approximate formula (HOCH2NHC(O)NH)nCH2.
Controlled release fertilizers are traditional fertilizers encapsulated in a shell that degrades at a specified rate. Sulfur is a typical encapsulation material. Other coated products use thermoplastics (and sometimes ethylene-vinyl acetate and surfactants, etc.) to produce diffusion-controlled release of urea or other fertilizers. "Reactive Layer Coating" can produce thinner, hence cheaper, membrane coatings by applying reactive monomers simultaneously to the soluble particles. "Multicote" is a process applying layers of low-cost fatty acid salts with a paraffin topcoat. Recently, biodegradable polymers as coatings for slow/controlled-release fertilizer have attracted interest for their potential to increase fertilizer/pesticide utilization efficiency and reduce negative environmental effects.
See also
Seed ball
Coated urea
References
Further reading
Fertilizers | Controlled-release fertiliser | [
"Chemistry"
] | 1,431 | [
"Fertilizers",
"Soil chemistry"
] |
13,495,825 | https://en.wikipedia.org/wiki/Dynorphin%20A | Dynorphin A is a dynorphin, an endogenous opioid peptide that activates the κ-opioid receptor. Its amino acid sequence is Tyr-Gly-Gly-Phe-Leu-Arg-Arg-Ile-Arg-Pro-Lys-Leu-Lys, a tridecapeptide.
Dynorphin A1–8 is a truncated form of dynorphin A with the amino acid sequence: Tyr-Gly-Gly-Phe-Leu-Arg-Arg-Ile. Dynorphin A1–8 is an agonist at the mu-, kappa-, and delta-opioid receptors; it has the highest binding affinity for the kappa-opioid receptor. Structures of dynorphin A bound to the κ-opioid receptor have been reported.
References
Neuropeptides
Kappa-opioid receptor agonists
Opioid peptides | Dynorphin A | [
"Chemistry"
] | 216 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
13,495,833 | https://en.wikipedia.org/wiki/Leu-enkephalin | Leu-enkephalin is an endogenous opioid peptide neurotransmitter with the amino acid sequence Tyr-Gly-Gly-Phe-Leu that is found naturally in the brains of many animals, including humans. It is one of the two forms of enkephalin; the other is met-enkephalin. The tyrosine residue at position 1 is thought to be analogous to the 3-hydroxyl group on morphine. Leu-enkephalin has agonistic actions at both the μ- and δ-opioid receptors, with significantly greater preference for the latter. It has little to no effect on the κ-opioid receptor.
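As a small worked example of handling such a sequence, the approximate average molecular mass of Tyr-Gly-Gly-Phe-Leu can be computed from standard residue masses; the rounded reference masses below are textbook values, not figures from this article:

```python
# Rounded average residue masses in daltons.
RESIDUE_MASS = {"Y": 163.18, "G": 57.05, "F": 147.18, "L": 113.16}
WATER = 18.02  # added once for the free N- and C-termini

def peptide_mass(sequence):
    """Approximate average mass of an unmodified linear peptide."""
    return sum(RESIDUE_MASS[aa] for aa in sequence) + WATER

print(round(peptide_mass("YGGFL"), 1))  # ~555.6 Da for leu-enkephalin
```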
A nasal spray formulation of leu-enkephalin (developmental code names NES-100, NM-0127, NM-127, PES-200; proposed brand name Envelta) is under development by Virpax Pharmaceuticals for the treatment of pain and post-traumatic stress disorder (PTSD). As of November 2023, it is at the preclinical stage of development for these indications.
See also
Met-enkephalin
References
Delta-opioid receptor agonists
Experimental drugs
Opioid peptides | Leu-enkephalin | [
"Chemistry",
"Biology"
] | 265 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
13,495,887 | https://en.wikipedia.org/wiki/Big%20dynorphin | Big dynorphin is an endogenous opioid peptide of the dynorphin family that is composed of both dynorphin A and dynorphin B. Big dynorphin has the amino acid sequence: Tyr-Gly-Gly-Phe-Leu-Arg-Arg-Ile-Arg-Pro-Lys-Leu-Lys-Trp-Asp-Asn-Gln-Lys-Arg-Tyr-Gly-Gly-Phe-Leu-Arg-Arg-Gln-Phe-Lys-Val-Val-Thr. It has nociceptive and anxiolytic-like properties, as well as effects on memory in mice.
Big dynorphin is a principal endogenous agonist at the human kappa-opioid receptor.
References
Neuropeptides
Kappa-opioid receptor agonists
Opioid peptides | Big dynorphin | [
"Chemistry",
"Biology"
] | 216 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
13,496,433 | https://en.wikipedia.org/wiki/Contryphan | The contryphans (conus + tryptophan) are a family of peptides that are active constituents of the potent venom produced by cone snails (genus Conus). The two amino acid cysteine residues in contryphans are linked by a disulfide bond. In addition, contryphans undergo an unusual degree of post-translational modification, including epimerization of leucine and tryptophan, tryptophan bromination, amidation of the C-terminus, and proline hydroxylation. In the broader scheme of genetic conotoxin classification, contryphans are members of "Conotoxin Superfamily O2."
Family members
Contryphan family member sequences are written using abbreviations, where:
O = 4-trans-hydroxyproline,
l = D-leucine, L = L-leucine,
w = D-tryptophan, W = L-tryptophan,
γ = gamma-carboxyglutamic acid,
NH2 = C-terminal amidation
and the remainder of the letters refer to the standard one letter abbreviations for amino acids.
Mechanism of toxicity
The venom of cone snails causes paralysis of their fish prey. The molecular target has not been determined for all contryphan peptides; however, it is known that contryphan-Vn is a Ca2+-dependent K+ channel modulator, while glacontryphan-M is an L-type calcium channel blocker.
References
External links
Neurotoxins
Ion channel toxins
Snail toxins | Contryphan | [
"Chemistry"
] | 334 | [
"Neurochemistry",
"Neurotoxins"
] |
13,496,530 | https://en.wikipedia.org/wiki/Category%20algebra | In category theory, a field of mathematics, a category algebra is an associative algebra, defined for any locally finite category and commutative ring with unity. Category algebras generalize the notions of group algebras and incidence algebras, just as categories generalize the notions of groups and partially ordered sets.
Definition
If the given category is finite (has finitely many objects and morphisms), then the following two definitions of the category algebra agree.
Group algebra-style definition
Given a group G and a commutative ring R, one can construct RG, known as the group algebra; it is an R-module equipped with a multiplication. A group is the same as a category with a single object in which all morphisms are isomorphisms (where the elements of the group correspond to the morphisms of the category), so the following construction generalizes the definition of the group algebra from groups to arbitrary categories.
Let C be a category and R be a commutative ring with unity. Define RC (or R[C]) to be the free R-module with the set of morphisms of C as its basis. In other words, RC consists of formal linear combinations (which are finite sums) of the form a1f1 + a2f2 + ... + anfn, where the fi are morphisms of C, and the ai are elements of the ring R. Define a multiplication operation on RC as follows, using the composition operation in the category:
fg = f ∘ g whenever the composition is defined, and fg = 0 if their composition is not defined. This defines a binary operation on RC, and moreover makes RC into an associative algebra over the ring R. This algebra is called the category algebra of C.
From a different perspective, elements of the free module RC could also be considered as functions from the morphisms of C to R which are finitely supported. Then the multiplication is described by a convolution: if a, b ∈ RC (thought of as functionals on the morphisms of C), then their product is defined as: (a · b)(h) = Σ_(fg = h) a(f)·b(g).
The latter sum is finite because the functions are finitely supported, and therefore a · b ∈ RC.
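A minimal sketch of this convolution product in Python, for the complete preorder on two objects over R = the integers (an example revisited below); elements are finitely supported coefficient dictionaries keyed by morphisms (source, target):

```python
from collections import defaultdict

def compose(f, g):
    """f after g, or None when target(g) != source(f)."""
    (gs, gt), (fs, ft) = g, f
    return (gs, ft) if gt == fs else None

def multiply(a, b):
    """(a.b)(h) = sum over fg = h of a(f)*b(g), with fg = 0 if undefined."""
    out = defaultdict(int)
    for f, af in a.items():
        for g, bg in b.items():
            h = compose(f, g)
            if h is not None:
                out[h] += af * bg
    return dict(out)

# On two objects the basis morphisms multiply like matrix units, so this
# category algebra is the ring of 2x2 matrices over the integers:
print(multiply({(0, 1): 1}, {(1, 0): 1}))  # {(1, 1): 1}
print(multiply({(0, 1): 1}, {(0, 1): 1}))  # {} i.e. zero: composite undefined
```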
Incidence algebra-style definition
The definition used for incidence algebras assumes that the category C is locally finite (see below), is dual to the above definition, and defines a different object. This isn't a useful assumption for groups, as a group that is locally finite as a category is finite.
A locally finite category is one where every morphism can be written in only finitely many ways as the composition of two non-identity morphisms (not to be confused with the "has finite Hom-sets" meaning). The category algebra (in this sense) is defined as above, but allowing all coefficients to be non-zero.
In terms of formal sums, the elements are all formal sums Σf af·f over the morphisms f of C, where there are no restrictions on the coefficients af (they can all be non-zero).
In terms of functions, the elements are any functions from the morphisms of C to R, and multiplication is defined as convolution. The sum in the convolution is always finite because of the local finiteness assumption.
Dual
The module dual of the category algebra (in the group algebra sense of the definition) is the space of all maps from the morphisms of C to R, denoted F(C), and has a natural coalgebra structure. Thus for a locally finite category, the dual of a category algebra (in the group algebra sense) is the category algebra (in the incidence algebra sense), and has both an algebra and coalgebra structure.
Examples
If C is a group (thought of as a groupoid with a single object), then RC is the group algebra.
If C is a monoid (thought of as a category with a single object), then RC is the monoid ring.
If C is a partially ordered set, then (using the appropriate definition), RC is the incidence algebra.
While partial orders only allow for viewing upper or lower triangular matrices as incidence algebras, the concept of category algebras also encompasses the ring of matrices over R. Indeed, if C is the preorder on n points where every point has a relation to every other (a complete graph), then RC is the matrix ring Mn(R).
If C is a discrete category, then RC may be seen as the ring of functions with pointwise addition and multiplication, or equivalently the direct product of copies of R indexed over C. In the case of infinite C, one needs to distinguish the "group algebra-style" and the "incidence algebra-style", because in the former, one only allows for finitely many terms in the formal linear combination, resulting in RC being instead the direct sum of copies of R.
The path algebra of a quiver Q is the category algebra of the free category on Q.
References
Haigh, John. On the Möbius Algebra and the Grothendieck Ring of a Finite Category J. London Math. Soc (2), 21 (1980) 81–92.
Further reading
http://www.math.umn.edu/~webb/Publications/CategoryAlgebras.pdf Standard text.
Category theory | Category algebra | [
"Mathematics"
] | 1,051 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Fields of abstract algebra",
"Mathematical relations",
"Category theory"
] |
13,496,987 | https://en.wikipedia.org/wiki/Nachman%20Aronszajn | Nachman Aronszajn (26 July 1907 – 5 February 1980) was a Polish American mathematician. Aronszajn's main field of study was mathematical analysis, where he systematically developed the concept of reproducing kernel Hilbert space. He also contributed to mathematical logic.
Life
An Ashkenazi Jew, Aronszajn received his Ph.D. from the University of Warsaw, in 1930, in Poland. Stefan Mazurkiewicz was his thesis advisor. He also received a Ph.D. from Paris University, in 1935; this time Maurice Fréchet was his thesis advisor. He joined the Oklahoma State University faculty, but moved to the University of Kansas in 1951 with his colleague Ainsley Diamond after Diamond, a Quaker, was fired for refusing to sign a newly instituted loyalty oath. Aronszajn retired in 1977. He was a Summerfield Distinguished Scholar from 1964 to his death.
Work
He introduced, together with Prom Panitchpakdi, injective metric spaces under the name of "hyperconvex metric spaces". Together with Kennan T. Smith, Aronszajn offered proof of the Aronszajn–Smith theorem. Also, the existence of Aronszajn trees was proven by Aronszajn; Aronszajn lines, also named after him, are the lexicographic orderings of Aronszajn trees.
He also made a contribution to the theory of reproducing kernel Hilbert space. The Moore–Aronszajn theorem is named after him.
References
External links
Nachman Aronszajn on Scientific Commons.
Guide to the Nachman Aronszajn Collection – personal papers of Nachman Aronszajn, 1951–1977
1907 births
1980 deaths
American people of Polish-Jewish descent
Mathematical analysts
Polish emigrants to the United States
Warsaw School of Mathematics
20th-century American mathematicians
University of Kansas faculty
Oklahoma State University faculty
University of Warsaw alumni
People from Warsaw
University of Paris alumni | Nachman Aronszajn | [
"Mathematics"
] | 400 | [
"Mathematical analysis",
"Mathematical analysts"
] |
13,497,493 | https://en.wikipedia.org/wiki/CDMA%20mobile%20test%20set | A CDMA Mobile Test Set is a call simulating device that is used to test CDMA cell phones. It provides a network-like environment forming a platform to test the cell phone. This reduces cost of manufacturing and testing the cell phone in a real environment. It can be used to test all major 2G, 2.5G, 3G and 3.5G wireless technologies.
In a laboratory, high-precision measurement correction over the entire frequency and dynamic range, as well as real-time compensation for temperature effects, are critical factors for achieving accuracy. A good-quality mobile test set helps achieve excellent accuracy, which is a major concern for mobile manufacturers.
Technologies supported
A mobile test set should ideally support the following technologies:
CDMA2000
WCDMA
Bluetooth
GSM
1xEVDO
Analog
TDMA
Tests that can be performed
RF (Antenna)
Audio
LC Display
DUT Camera and Keypad
Other DUT Interfaces
Companies that manufacture Mobile test set
Rohde & Schwarz
Agilent
Anritsu
Product Types
Agilent 8960
Agilent 8924C (Older model)
R&S CMU200 Universal Radio Communication Tester
Anritsu MT8820C
Anritsu MT8870A
Anritsu MD8475A
References
Agilent Technologies, http://www.home.agilent.com/agilent/product.jspx?nid=-536900143.0.00&lc=eng&cc=US
Rohde & Schwarz International, http://www2.rohde-schwarz.com/en/products/test_and_measurement/product_categories/mobile_radio/
Anritsu Corporation, http://www.anritsu.com/en-US/Products-Solutions/Test-Measurement/Mobile-Wireless-Communications/Handset-One-Box-Testers/index.aspx
Electronic test equipment | CDMA mobile test set | [
"Technology",
"Engineering"
] | 395 | [
"Electronic test equipment",
"Measuring instruments"
] |
13,498,739 | https://en.wikipedia.org/wiki/Black%20%26%20Veatch | Black & Veatch (BV) is a global engineering, procurement, consulting and construction company based in the Kansas City metropolitan area. Founded in 1915 in Kansas City, Missouri it is now headquartered in Overland Park, Kansas. It specializes in infrastructure development in power, oil and gas, water, telecommunications, government, mining, data centers and smart cities markets.
In 2022, BV was the 9th largest 100% employee-owned company in the United States. In 2022, the company reported total revenue of $4.25 billion. According to Engineering-News Record (ENR) magazine, Black & Veatch is the 14th-largest design firm in the United States based on revenue for design services performed in 2022. In its annual ENR 500 rankings, the magazine also reports that BV is the nation's 3rd largest provider of design services to the Power market, 5th largest in Telecommunications, 8th largest in Water and 11th largest in Sewer and Waste.
BV has more than 100 offices worldwide and has completed projects in more than 100 countries on six continents.
History
Black & Veatch was formed in 1915 when Ernest Bateman (E.B.) Black dissolved his partnership with J.S. Worley and created a new firm with Nathan Thomas Veatch. Black and Veatch met while attending the University of Kansas.
Company timeline
1915 Ernest Bateman Black and Nathan Thomas Veatch form a partnership called Black & Veatch with 12 employees on the payroll.
1940 The War Department requests that Black & Veatch rebuild Camp Robinson in Little Rock, Arkansas. Other camp projects include Camp Chaffee in Fort Smith, Arkansas, Camp Hale in Pando, Colorado, and other military installations in the Midwest.
1948 Work begins for the Atomic Energy Commission at Los Alamos, New Mexico.
1950 N.T. Veatch appointed by President Harry Truman to the President's Water Pollution Control Advisory Board.
1963 Black & Veatch International is formed.
1964 Black & Veatch opens its first regional office in Denver, CO to design a 100 million gallon per day water treatment plant by the Denver Water Board of Colorado.
1967 Black & Veatch wins a contract to produce a 60-megawatt power generating unit for Yanhee Electricity Authority of Thailand, now known as EGAT, Electricity Generating Authority of Thailand.
1976 Black & Veatch opens new building at 11401 Lamar Avenue in Overland Park, Kansas.
1985 Black & Veatch acquires Pritchard Corporation.
1988 Black & Veatch power division introduces a new computer-aided engineering and project management system called POWRTRAK to be more time efficient and capture new business.
1993 Black & Veatch forms UK-based partnership with UK business Tarmac following the latter's acquisition of the privatised UK government agency PSA Projects in 1992. This was initially called TBV Consult; after the partnership was discontinued, it was renamed Tarmac Professional Services in 1998, and became part of Carillion in 1999.
1995 Black & Veatch merges with Binnie & Partners.
1996 Black & Veatch acquires Paterson Candy Ltd., a UK-based water treatment process contractor and expands building at 11401 Lamar Avenue.
1999 Black & Veatch changes company structure from general partnership to an employee-owned corporation.
2005 Black & Veatch acquires RJ Rudden Associates, Lukens Energy Group, and Fortegra, a move that doubles the size of its management consulting business.
2006 Black & Veatch acquires the water business of MJ Gleeson in the UK, more than doubling the size of its existing UK water operations.
2008 Black & Veatch selected by Eskom to provide project management and engineering services for a 4,800 megawatt power generation facility in South Africa.
2009 Black & Veatch repurchases 11401 Lamar Avenue office building in Overland Park, Kansas, and establishes the location as the company's World Headquarters.
2009 Black & Veatch launched the infraManagement Group LLC (www.inframanagementgroup.com), a wholly owned subsidiary to assist asset owners with management of water, wastewater, and power-generating assets.
2010 Black & Veatch acquired Enspiria Solutions Inc. to expand its scope of smart-grid services.
2013 Steve Edwards assumes role as Black & Veatch Chairman, President, and CEO.
2015 Black & Veatch celebrates its 100th anniversary.
2018 Black & Veatch and the University of Missouri release a report on the Missouri Hyperloop
2021 The Europe and Asian water businesses of Black & Veatch were acquired by RSK Group and renamed Binnies.
Ukraine arm: BTRIC
In 2008, the Defense Threat Reduction Agency (DTRA) awarded BV the first of its Biological Threat Reduction Integrating Contracts (BTRIC). The five-year IDIQ contract has a collective ceiling of $4 billion among the five selected contractors. DTRA awarded BV, as Integrating Contractor, the first BTRIC in Ukraine in 2008, which "is a vital part" of DTRA's Cooperative Threat Reduction (CTR) and Biological Threat Reduction (BTR) programs. The Implementing (Executive) Agents were three in number: the Ukraine Ministry of Health, the Ukraine Academy of Agrarian Sciences and the Ukraine State Committee for Veterinary Medicine.
In 2010, BV commissioned Ukraine's first Bio-Safety Level 3 laboratory, the first BSL-3 laboratory commissioned for the DTRA. Black & Veatch renovated "a decades-old facility into a state-of-the-art diagnostics laboratory that will become the nexus of Ukraine's biosurveillance network". Over three years from 2010, Ukrainian personnel were trained "in molecular diagnostics, biosafety, operations and maintenance, and laboratory management techniques" to "provide Ukrainian scientists with the necessary resources to manage the BSL-3 laboratory and the Ukrainian biosurveillance system."
References
Employee-owned companies of the United States
Companies based in Overland Park, Kansas
Companies based in Kansas City, Missouri
Engineering companies of the United States
International engineering consulting firms
Engineering consulting firms of the United States
Consulting firms established in 1915
Construction and civil engineering companies established in 1915
1915 establishments in Missouri
Technology companies established in 1915 | Black & Veatch | [
"Engineering"
] | 1,288 | [
"Engineering consulting firms",
"International engineering consulting firms"
] |
13,498,931 | https://en.wikipedia.org/wiki/Laurent%20C.%20Siebenmann | Laurent Carl Siebenmann (the first name is sometimes spelled Laurence or Larry) (born 1939) is a Canadian mathematician based at the Université de Paris-Sud at Orsay, France.
After working for several years as a Professor at Orsay he became a Directeur de Recherches at the Centre national de la recherche scientifique in 1976. He is a topologist who works on manifolds and who co-discovered the Kirby–Siebenmann class.
Education
Siebenmann's undergraduate studies were at the University of Toronto. He received a Ph.D. from Princeton University under the supervision of John Milnor in 1965 with the dissertation The obstruction to finding a boundary for an open manifold of dimension greater than five. His doctoral students at Orsay included Francis Bonahon and Albert Fathi.
Recognition
In 1985 he was awarded the Jeffery–Williams Prize by the Canadian Mathematical Society. In 2012 he became a fellow of the American Mathematical Society.
Selected publications
References
External links
Kirby and the promised land of topological manifolds: memories and memorable arguments; a talk by Siebenmann
Photos at Oberwolfach
Home page
1939 births
Living people
Topologists
Princeton University alumni
University of Toronto alumni
Academic staff of Paris-Sud University
Fellows of the American Mathematical Society
Canadian mathematicians
Scientists from Toronto
20th-century French mathematicians | Laurent C. Siebenmann | [
"Mathematics"
] | 273 | [
"Topologists",
"Topology"
] |
13,499,772 | https://en.wikipedia.org/wiki/Journal%20of%20Organometallic%20Chemistry | The Journal of Organometallic Chemistry is a peer-reviewed scientific journal published by Elsevier, covering research on organometallic chemistry. According to the Journal Citation Reports, the journal has a 2021 impact factor of 2.345.
References
External links
Organic chemistry journals
Elsevier academic journals
Academic journals established in 1964
English-language journals
Monthly journals | Journal of Organometallic Chemistry | [
"Chemistry"
] | 71 | [
"Organic chemistry journals"
] |
13,500,239 | https://en.wikipedia.org/wiki/Theta%20Piscium | Theta Piscium, Latinized from θ Piscium, is a single, orange-hued star in the zodiac constellation of Pisces, the fish. The annual parallax shift of this star was measured during the Hipparcos mission as 21.96 mas, which yields a distance estimate of about 149 light years. It is a faint star but visible to the naked eye with an apparent visual magnitude of 4.27. The star is moving away from the Sun with a radial velocity of +6 km/s.
At the estimated age of 2.5 billion years, this is an aging giant star with a stellar classification of K1 III, which means it has exhausted the supply of hydrogen at its core. It is a red clump star, indicating it is on the horizontal branch of its evolution and is generating energy through helium fusion at its core. Theta Piscium has 158% of the Sun's mass and its outer atmosphere has swollen to about 11 times the girth of the Sun. It is brighter yet cooler than the Sun, radiating 51.3 times the Sun's luminosity from its enlarged photosphere at an effective temperature of about 4,684 K.
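The quoted distance and luminosity follow from the figures given in these paragraphs; a quick check in Python (5772 K is the nominal solar effective temperature, an external reference value, and the small difference from the quoted 51.3 solar luminosities reflects the rounded radius input):

```python
parallax_mas = 21.96
distance_pc = 1000.0 / parallax_mas   # ~45.5 parsecs
distance_ly = distance_pc * 3.2616    # ~148.5 light-years, i.e. "about 149"

radius, t_eff, t_sun = 11.0, 4684.0, 5772.0
luminosity = radius ** 2 * (t_eff / t_sun) ** 4  # Stefan-Boltzmann scaling
print(f"{distance_ly:.0f} ly, {luminosity:.1f} L_sun")  # ~149 ly, ~52.5 L_sun
```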
Naming
In Chinese astronomy, θ Piscium belongs to an asterism whose name means Thunderbolt, consisting of θ Piscium, β Piscium, γ Piscium, ι Piscium and ω Piscium. Consequently, the Chinese name for θ Piscium itself denotes its place in this asterism.
References
K-type giants
Horizontal-branch stars
Pisces (constellation)
Piscium, Theta
BD+05 5173
Piscium, 010
220954
115830
8916 | Theta Piscium | [
"Astronomy"
] | 356 | [
"Pisces (constellation)",
"Constellations"
] |
13,500,312 | https://en.wikipedia.org/wiki/Ideotype | In systematics, an ideotype is a specimen identified as belonging to a specific taxon by the author of that taxon, but collected from somewhere other than the type locality.
The concept of ideotype in plant breeding was introduced by Donald in 1968 to describe the idealized appearance of a plant variety. It literally means 'a form denoting an idea'. According to Donald, an ideotype is a biological model which is expected to perform or behave in a particular manner within a defined environment: "a crop ideotype is a plant model, which is expected to yield a greater quantity or quality of grain, oil or other useful product when developed as a cultivar." Donald and Hamblin (1976) proposed the concepts of isolation, competition and crop ideotypes. Market ideotype, climatic ideotype, edaphic ideotype, stress ideotype and disease/pest ideotypes are its other concepts. The term ideotype has the following synonyms: model plant type, ideal model plant type and ideal plant type.
The term is also used in cognitive science and cognitive psychology, where Ronaldo Vigo (2011, 2013, 2014) introduced it to refer to a type of concept metarepresentation that is a compound memory trace consisting of the structural information detected by humans in categorical stimuli.
Notes
Molecular biology
Botanical nomenclature | Ideotype | [
"Chemistry",
"Biology"
] | 281 | [
"Botanical nomenclature",
"Botanical terminology",
"Biological nomenclature",
"Molecular biology stubs",
"Molecular biology",
"Biochemistry"
] |
13,500,955 | https://en.wikipedia.org/wiki/2011%20Alawwa%20rail%20accident | The 2011 Alawwa rail accident occurred on the evening of Saturday 17 September 2011, when a passenger train, Sri Lanka Railways S11, ran into an observation car at the back of a stationary Intercity Express train near the Alawwa railway station, northeast of Colombo. The accident resulted in the deaths of five people, including a French national, a Thai Buddhist monk and the train driver, with over 30 injured. The Intercity Express had been pushing a Rambukkana-bound train from Colombo, which had stalled near Alawwa. The accident may have been caused by human error.
Investigation
A three-member committee – comprising Nimal Dissanayake, a retired Appeal Court judge; Sanath Panawella, Director of the Arthur C. Clarke Centre; and Sarath Perera, a retired Deputy Inspector General of Police – undertook an inquiry into the accident in October. The committee concluded that the accident was a result of high speed and the train driver's failure to observe the rail signals. The committee recommended several measures, including re-positioning the signal system, maintaining a proper coloured-light warning system and improving communication among drivers, control room staff and station masters.
The Ministry of Transport indicated the estimated damages resulting from the train accident as being over Rs.75 million. Compensation was paid to the families of those killed, and to the injured.
See also
List of rail accidents in Sri Lanka
List of rail accidents (2010–2019)
References
Railway accidents in 2011
Train collisions in Sri Lanka
2011 disasters in Sri Lanka
Alawwa | 2011 Alawwa rail accident | [
"Technology"
] | 325 | [
"Railway accidents and incidents",
"Rail accident stubs"
] |
13,500,970 | https://en.wikipedia.org/wiki/Pensky%E2%80%93Martens%20closed-cup%20test | The Pensky–Martens closed-cup flash-point test is a test for the determination of the flash point of flammable liquids. It is standardized as ASTM D93, EN ISO 2719 and IP 34. The United States Environmental Protection Agency (EPA) has also published Method 1010A: Test Methods for Flash Point by Pensky-Martens Closed Cup Tester, part of Test Methods for Evaluating Solid Waste, Physical/Chemical Methods, which references the ASTM standard series D93. The Pensky-Martens test is a closed-cup method, as opposed to the Cleveland open-cup method.
Test Procedure
A brass test cup is filled with a test specimen and closed with a lid, through which an ignition source can be introduced periodically. The sample is heated and stirred at specified rates depending on the material that is being tested. This allows the development of an equilibrium between the liquid and the air volume. The ignition source is directed into the cup at regular intervals with simultaneous interruption of stirring. The test concludes upon observation of a flash that spreads throughout the inside of the cup. The corresponding temperature is the liquid's flash point.
Critique of test method
The different flash point methods depend on the controlled conditions in the laboratory and do not determine an intrinsic property of the material tested. They are, however, useful for comparing different substances and are therefore widely used in road transportation and environmental safety regulations.
Closed-cup testers give lower values for the flash point than open-cup testers (typically by 5–10 K) and are a better approximation of the temperature at which the vapour pressure reaches the lower flammability limit (LFL).
References
Measuring instruments | Pensky–Martens closed-cup test | [
"Technology",
"Engineering"
] | 340 | [
"Measuring instruments"
] |
13,501,019 | https://en.wikipedia.org/wiki/Lower%20flammability%20limit | The lower flammability limit (LFL), usually expressed in volume per cent, is the lower end of the concentration range over which a flammable mixture of gas or vapour in air can be ignited at a given temperature and pressure. The flammability range is delineated by the upper and lower flammability limits. Outside this range of air/vapor mixtures, the mixture cannot be ignited at that temperature and pressure. The LFL decreases with increasing temperature; thus, a mixture that is below its LFL at a given temperature may be ignitable if heated sufficiently.
For liquids, the LFL is typically close to the saturated vapor concentration at the flash point; however, due to differences in the liquid properties, the relationship of LFL to flash point (which is also dependent on the test apparatus) is not fixed, and some spread in the data usually exists.
The LFL of a mixture can be evaluated using the Le Chatelier mixing rule if the LFLs of the components are known:
LFLmix = 1 / Σi (xi / LFLi),
where LFLmix is the lower flammability limit of the mixture, LFLi is the lower flammability limit of the i-th component of the mixture, and xi is the molar fraction of the i-th component of the mixture.
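A direct implementation of the rule in Python; the methane and propane limits used in the example are common textbook values, not figures from this article:

```python
def lfl_mixture(components):
    """Le Chatelier rule; components is a list of (mole_fraction, lfl_vol_percent)."""
    return 1.0 / sum(x / lfl for x, lfl in components)

# Equimolar methane (LFL ~5.0 vol%) and propane (LFL ~2.1 vol%):
print(round(lfl_mixture([(0.5, 5.0), (0.5, 2.1)]), 2))  # ~2.96 vol%
```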
See also
Flash point
Minimum ignition energy
Stoichiometry
References
Chemical properties
Fire | Lower flammability limit | [
"Chemistry"
] | 267 | [
"Chemical reaction stubs",
"Combustion",
"Fire",
"nan"
] |
13,502,050 | https://en.wikipedia.org/wiki/Cold%20filter%20plugging%20point | Cold filter plugging point (CFPP) is the lowest temperature, expressed in degrees Celsius (°C), at which a given volume of a diesel-type fuel still passes through a standardized filtration device in a specified time when cooled under certain conditions. The test gives an estimate of the lowest temperature at which a fuel will flow trouble-free in certain fuel systems. This is important because, in countries with cold winters, a fuel with a high cold filter plugging point will clog vehicle engines more easily.
The test is important in relation to the use of additives that allow extending the use of winter diesel to temperatures below the cloud point. Tests according to EN 590 show that a fuel with a cloud point of +1 °C can have a CFPP of −10 °C. Current additives allow a CFPP of −20 °C based on diesel fuel with a cloud point of −7 °C.
The trustworthiness of the EN 590 standard has been criticized as too low for modern diesel engines – the German ADAC has run a test series on commercially available winter diesel in a cold chamber. All diesel brands exceeded the legal minimum by 3 to 11 degrees in the laboratory according to the legal DIN test. One of the real diesel engines, however, stopped working even before the legal minimum was reached, presumably due to an undersized filter heater. Notably, the experiments did not show a direct correlation between the CFPP value of the mineral oil and the cold-start capability of the diesel engines – hence the automobile club suggests the creation of a new test standard.
Test method
The ASTM no. for the test method to define cold filter plugging point is ASTM D6371.
See also
Cloud point
Petroleum
Pour point
References
External links
BP information
Chemical properties
Fuel technology | Cold filter plugging point | [
"Chemistry"
] | 351 | [
"Physical chemistry stubs",
"nan"
] |
13,502,744 | https://en.wikipedia.org/wiki/Babu%C5%A1ka%E2%80%93Lax%E2%80%93Milgram%20theorem | In mathematics, the Babuška–Lax–Milgram theorem is a generalization of the famous Lax–Milgram theorem, which gives conditions under which a bilinear form can be "inverted" to show the existence and uniqueness of a weak solution to a given boundary value problem. The result is named after the mathematicians Ivo Babuška, Peter Lax and Arthur Milgram.
Background
In the modern, functional-analytic approach to the study of partial differential equations, one does not attempt to solve a given partial differential equation directly, but by using the structure of the vector space of possible solutions, e.g. a Sobolev space W k,p. Abstractly, consider two real normed spaces U and V with their continuous dual spaces U∗ and V∗ respectively. In many applications, U is the space of possible solutions; given some partial differential operator Λ : U → V∗ and a specified element f ∈ V∗, the objective is to find a u ∈ U such that Λu = f.
However, in the weak formulation, this equation is only required to hold when "tested" against all other possible elements of V. This "testing" is accomplished by means of a bilinear function B : U × V → R which encodes the differential operator Λ; a weak solution to the problem is to find a u ∈ U such that B(u, v) = ⟨f, v⟩ for all v ∈ V.
The achievement of Lax and Milgram in their 1954 result was to specify sufficient conditions for this weak formulation to have a unique solution that depends continuously upon the specified datum f ∈ V∗: it suffices that U = V is a Hilbert space, that B is continuous, and that B is strongly coercive, i.e. |B(u, u)| ≥ c‖u‖²
for some constant c > 0 and all u ∈ U.
For example, in the solution of the Poisson equation −Δu = f on a bounded, open domain Ω ⊂ Rⁿ (with u = 0 on the boundary ∂Ω),
the space U could be taken to be the Sobolev space H₀¹(Ω) with dual H⁻¹(Ω); the former is a subspace of the Lᵖ space V = L²(Ω); the bilinear form B associated to −Δ is the L²(Ω) inner product of the derivatives: B(u, v) = ∫_Ω ∇u(x) · ∇v(x) dx.
Hence, the weak formulation of the Poisson equation, given f ∈ L²(Ω), is to find u_f such that B(u_f, v) = ∫_Ω f(x) v(x) dx for all v ∈ H₀¹(Ω).
Statement of the theorem
In 1971, Babuška provided the following generalization of Lax and Milgram's earlier result, which begins by dispensing with the requirement that U and V be the same space. Let U and V be two real Hilbert spaces and let B : U × V → R be a continuous bilinear functional. Suppose also that B is weakly coercive: for some constant c > 0 and all u ∈ U, sup { |B(u, v)| : v ∈ V, ‖v‖ ≤ 1 } ≥ c‖u‖,
and, for all 0 ≠ v ∈ V, sup { |B(u, v)| : u ∈ U, ‖u‖ ≤ 1 } > 0.
Then, for all f ∈ V∗, there exists a unique solution u = u_f ∈ U to the weak problem B(u_f, v) = f(v) for all v ∈ V.
Moreover, the solution depends continuously on the given data: ‖u_f‖ ≤ (1/c)‖f‖, where the norm of f is taken in V∗.
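To see why these hypotheses generalize Lax and Milgram's, it helps to check a standard observation (sketched here for clarity, not quoted from the article): when U = V, strong coercivity implies both weak-coercivity conditions.

```latex
% Assume U = V and strong coercivity: B(u,u) >= c ||u||^2.
% Testing with v = u/||u|| (for u != 0) yields the first condition:
\sup_{\|v\| \le 1} |B(u, v)|
  \;\ge\; B\!\left(u, \frac{u}{\|u\|}\right)
  \;=\; \frac{B(u, u)}{\|u\|}
  \;\ge\; c \, \|u\| .
% Symmetrically, for v != 0, the choice u = v/||v|| yields the second:
\sup_{\|u\| \le 1} |B(u, v)| \;\ge\; \frac{B(v, v)}{\|v\|} \;\ge\; c \, \|v\| \;>\; 0 .
```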
See also
Lions–Lax–Milgram theorem
References
External links
Theorems in analysis
Partial differential equations | Babuška–Lax–Milgram theorem | [
"Mathematics"
] | 623 | [
"Mathematical analysis",
"Theorems in mathematical analysis",
"Mathematical theorems",
"Mathematical problems"
] |
13,502,784 | https://en.wikipedia.org/wiki/Rhizodermis | Rhizodermis is the root epidermis (also referred to as epiblem), the outermost primary cell layer of the root.
Specialized rhizodermal cells, trichoblasts, form long tubular outgrowths (from 5 to 17 micrometers in diameter and from 80 micrometers to 1.5 millimeters in length) almost perpendicular to the main cell axis – root hairs, which absorb water and nutrients. Root hairs of the rhizodermis are always in close contact with soil particles and, because of their high surface-to-volume ratio, form an absorbing surface much larger than the transpiring surfaces of the plant.
In some species of the family Fabaceae, the rhizodermis participates in the recognition and uptake of nitrogen-fixing Rhizobia bacteria – the first stage of nodulation, leading to the formation of root nodules. The rhizodermis plays an important role in nutrient uptake by plant roots.
In contrast with the shoot epidermis, the rhizodermis contains no stomata and is not covered by a cuticle. Its unique feature is the presence of root hairs, each of which is the outgrowth of a single rhizodermal cell. Root hairs occur in high frequency in the absorptive zone of the root. A root hair derives from a trichoblast as the result of an unequal cell division. It contains a large vacuole; its cytoplasm and nucleus are displaced toward the apical region of the outgrowth. Although the cell does not divide, its DNA replicates, so the nucleus is polyploid. Root hairs are short-lived, typically dying off within 1–2 days due to mechanical damage.
References
Plant morphology | Rhizodermis | [
"Biology"
] | 354 | [
"Plant morphology",
"Plants"
] |
13,503,440 | https://en.wikipedia.org/wiki/Doebner%20reaction | The Doebner reaction is the chemical reaction of an aniline with an aldehyde and pyruvic acid to form quinoline-4-carboxylic acids.
The reaction serves as an alternative to the Pfitzinger reaction.
Reaction mechanism
The reaction mechanism is not exactly known; two proposals are presented here. One possibility begins with an aldol condensation between the enol form of pyruvic acid (1) and the aldehyde, forming a β,γ-unsaturated α-ketocarboxylic acid (2). This is followed by a Michael addition of aniline to form an aniline derivative (3). After cyclization at the benzene ring and two proton shifts, the quinoline-4-carboxylic acid (4) is formed by water elimination:
In an alternative mechanism, the aniline and the aldehyde first form a Schiff base with elimination of water. The subsequent reaction with the enol form of pyruvic acid (1) leads to the same aniline derivative (3), which then follows the reaction path described above:
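To make the net transformation concrete (the mechanism itself is not modeled), here is a minimal Python/RDKit sketch assuming benzaldehyde as the aldehyde component; the SMILES strings and the stoichiometric note are illustrative assumptions, not taken from the article:

```python
# Net Doebner transformation with benzaldehyde -- illustrative sketch only.
# Requires RDKit (pip install rdkit).
from rdkit import Chem
from rdkit.Chem.rdMolDescriptors import CalcMolFormula

species = {
    "aniline":      "Nc1ccccc1",
    "benzaldehyde": "O=Cc1ccccc1",
    "pyruvic acid": "CC(=O)C(=O)O",
    # Expected product: 2-phenylquinoline-4-carboxylic acid
    # (phenyl at quinoline C2, carboxylic acid at C4).
    "product":      "OC(=O)c1cc(-c2ccccc2)nc2ccccc21",
}
for name, smiles in species.items():
    mol = Chem.MolFromSmiles(smiles)
    print(f"{name:13s} {CalcMolFormula(mol)}")
# Formal balance: the condensation releases two equivalents of water, and the
# final aromatization loses a further H2 (often to the Schiff base acting as
# the hydrogen acceptor).
```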
Side reactions
It is reported in the literature that the Doebner reaction fails in the case of 2-chloro-5-aminopyridine: the cyclization takes place at the amino group instead of at the aromatic ring, leading to a pyrrolidine derivative.
Alternative reactions
Alternative syntheses of quinoline derivatives are for example:
Pfitzinger reaction
Conrad-Limpach reaction
Doebner-Miller reaction
Combes quinoline synthesis
References
Carbon-carbon bond forming reactions
Condensation reactions
Quinoline forming reactions
Multiple component reactions
Name reactions | Doebner reaction | [
"Chemistry"
] | 370 | [
"Name reactions",
"Condensation reactions",
"Carbon-carbon bond forming reactions",
"Organic reactions"
] |
13,503,628 | https://en.wikipedia.org/wiki/Wii%20system%20software | The Wii system software is a discontinued set of updatable firmware versions and a software frontend on the Wii, a home video game console. Updates, which could be downloaded over the Internet or read from a game disc, allowed Nintendo to add additional features and software, as well as to patch security vulnerabilities that users exploited to load homebrew software. When a new update became available, Nintendo sent a message to the Wii Message Board of Internet-connected systems notifying them of the available update.
Most game discs, including first-party and third-party games, include system software updates so that systems that are not connected to the Internet can still receive them. The System Menu will not start such games if their updates have not been installed, effectively forcing users to install updates in order to play these games. Some games, such as the online titles Super Smash Bros. Brawl and Mario Kart Wii, contain specific extra updates, such as the ability to receive Wii Message Board posts from game-specific addresses; therefore, these games always require that an update be installed before they are first run on a given console.
Technology
IOS
The Wii's firmware has many active branches known as IOSes, thought by Wii homebrew developers to stand for "Input Output Systems" or "Internal Operating Systems". The currently active IOS, referred to simply as "IOS", runs on a separate ARM926EJ-S processor unofficially nicknamed Starlet, which resides within the Hollywood GPU. The patent for the Wii U shows a similar device which is simply named "Input/Output Processor". IOS controls I/O between the code running on the main Broadway processor and the various Wii hardware that does not also exist on the GameCube.
Except for bug fixes, new IOS versions do not replace existing IOS versions. Instead, Wii consoles have multiple IOS versions installed. All native Wii software (including games distributed on Nintendo optical discs, the System Menu itself, Virtual Console games, WiiWare, and Wii Channels), with the exception of certain homebrew applications, have the IOS version hardcoded into the software.
When a piece of software is run, the IOS hardcoded into it gets loaded by the Wii, which then loads the software itself. If that IOS does not exist on the Wii, in the case of disc-based software, it is installed automatically with a system update (after the user is prompted). With downloaded software, this should in theory never happen, as the user cannot access the shop to download software unless the console already has all the IOS versions that the software requires. However, if homebrew is used to forcefully install or run a piece of software when the required IOS does not exist, the user is brought back to the System Menu.
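To make the launch flow above concrete, here is a minimal, hypothetical sketch (all names and numbers are invented for illustration; this is not Nintendo's code) of the behavior described in this section:

```python
# Hypothetical model of IOS selection at title launch (illustrative only).
INSTALLED_IOS = {9, 36, 58}   # example IOS slots present on this console

def launch_title(required_ios: int, from_disc: bool) -> str:
    """Each title hardcodes exactly one IOS slot; the console reloads into
    that IOS before starting the title."""
    if required_ios in INSTALLED_IOS:
        return f"reload into IOS{required_ios}, then start the title"
    if from_disc:
        # Disc software carries an update partition with the missing IOS,
        # so the user is prompted to run a system update first.
        INSTALLED_IOS.add(required_ios)
        return f"prompt for system update, install IOS{required_ios}, then start"
    # Forcing a downloaded title to run without its IOS (e.g. via homebrew)
    # simply drops the user back to the System Menu.
    return "return to the System Menu"

print(launch_title(36, from_disc=False))  # already installed
print(launch_title(55, from_disc=True))   # installed via disc update partition
```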
Nintendo created this system so that new updates would not unintentionally break compatibility with older games, but it does have the side effect that it uses up space on the Wii's internal NAND Flash memory. IOSes are referred to by their number, which can theoretically be between 3 and 255, although many numbers are skipped, presumably being development versions that were never completed.
Only one IOS version can run at any given time. The only time an IOS is not running is when the Wii enters GameCube backward compatibility mode, during which the Wii runs a variant of IOS specifically for GameCube games, MIOS, which contains a modified version of the GameCube's IPL. Custom IOSes, called cIOSes, can be installed with homebrew. The main purpose of cIOS is to allow homebrew users to use other homebrew apps such as USB Loader GX (allows games stored in the WBFS file format to be run from a USB stick).
User interface
The system provides a graphical interface to the Wii's abilities. All games run directly on the Broadway processor, and either directly interface with the hardware (for the hardware common to the Wii and GameCube), or interface with IOS running on the ARM architecture processor (for Wii-specific hardware). The ARM processor does not have access to the screen, and therefore neither does IOS. This means that while a piece of software is running, everything seen on the screen (including the HOME button menu) comes from that software, and not from any operating system or firmware. Therefore, the version number reported by the Wii is actually only the version number of the System Menu. This is why some updates do not result in a change of the version number: the System Menu itself is not updated, only (for example) IOSes and channels. As a side effect, this means it is impossible for Nintendo to implement any functions that would affect the games themselves, for example an in-game system menu (similar to the Xbox 360's in-game Dashboard or the PlayStation 3's in-game XMB).
The Wii Menu (known internally as the System Menu) is the user interface of the Wii game console, and it is the first thing to be seen when the system boots up. It has four pages, each with a 4:3 grid, and each displaying the current time and date. Available applications, known as "channels", are displayed on the grid and can be navigated using the pointer capability of the Wii Remote; pressing the plus and minus buttons on the Wii Remote scrolls across the pages, including empty slots. The grid is customizable: users can move channels (except for the Disc Channel) among the menu's 48 customizable slots. Like many other video game consoles, the Wii is not limited to games; for example, it is possible to install applications such as Netflix to stream media without requiring a disc. The Wii Menu lets users access both game and non-game functions through built-in applications called Channels, which are designed to represent television channels. There are six primary channels: the Disc Channel, Mii Channel, Photo Channel, Wii Shop Channel, Forecast Channel and News Channel, although the latter two were not initially included and only became available via system updates. Some of the functions provided by these Channels were previously limited to a computer, such as a full-featured web browser and a digital photo viewer. Users can also use Channels to create and share cartoon-like digital avatars called Miis and to download new games and Channels directly from the Wii Shop Channel; additional Channels include, for example, the Everybody Votes Channel and the Internet Channel.
Network features
The Wii system supports wireless connectivity with the Nintendo DS handheld console with no additional accessories. This connectivity allows players to use the Nintendo DS microphone and touch screen as inputs for Wii games. Pokémon Battle Revolution is the first example Nintendo has given of a game using Nintendo DS-Wii connectivity. Nintendo later released the Nintendo Channel for the Wii allowing its users to download game demos or additional data to their Nintendo DS.
Like many other video game consoles, the Wii console is able to connect to the Internet, although this is not required for the Wii system itself to function. Each Wii has its own unique 16-digit Wii Code for use with the Wii's non-game features. With an Internet connection enabled, users are able to access the established Nintendo Wi-Fi Connection service. Wireless encryption by WEP, WPA (TKIP/RC4) and WPA2 (CCMP/AES) is supported. AOSS support was added in System Menu version 3.0.
As with the Nintendo DS, Nintendo does not charge for playing via the service; the 12-digit Friend Code system controls how players connect to one another. The service has a few features for the console, including the Virtual Console, WiiConnect24 and several Channels. The Wii console can also communicate and connect with other Wii systems through a self-generated wireless LAN, enabling local wireless multiplayer on different television sets. The system also implements console-based software, including the Wii Message Board. One can connect to the Internet with third-party devices as well.
The Wii console also includes a web browser known as the Internet Channel, which is a version of the Opera 9 browser with menus. It is meant to be a convenient way to access the web on the television screen, although it is far from offering a comfortable user interface compared with modern web browsers. A virtual keyboard pops up when needed for input, and the Wii Remote acts like a mouse, making it possible to click anywhere on the screen and navigate through web links. However, the browser cannot always handle all the features of normal web pages. It does support Adobe Flash, and is thus capable of playing Flash files. Some third-party services such as the online BBC iPlayer were also available on the Wii via the Internet Channel browser, although BBC iPlayer was later relaunched as the separate BBC iPlayer Channel on the Wii. In addition, Internet access, including the Internet Channel and system updates, may be restricted by the parental controls feature of the Wii.
Backward compatibility
The original design of the Nintendo Wii console – specifically, the models made before 2011 – was fully backward compatible with GameCube software and accessories, including game discs, memory cards and controllers. This was because the Wii hardware had ports for both GameCube memory cards and peripherals, and its slot-loading drive was able to accept and read the previous console's discs. GameCube games work on the Wii without any additional configuration, but a GameCube controller is required to play GameCube titles; neither the Wii Remote nor the Classic Controller functions in this capacity. The Wii supports progressive-scan output in 480p-enabled GameCube titles. Peripherals can be connected via a set of four GameCube controller sockets and two Memory Card slots (concealed by removable flip-open panels). The console retains connectivity with the Game Boy Advance and e-Reader through the Game Boy Advance Cable, which is used in the same manner as with the GameCube; however, this feature can only be accessed in select GameCube titles that previously utilized it.
There are also a few limitations to the backward compatibility. For example, online and LAN features of certain GameCube games are not available, since the Wii does not have serial ports for the GameCube Broadband Adapter and Modem Adapter. The Wii uses a proprietary port for video output and is incompatible with all GameCube audio/video cables (composite video, S-Video, component video and RGB SCART). The console also lacks the GameCube footprint and high-speed port needed for Game Boy Player support. Furthermore, only GameCube functions are available while playing a GameCube game, and only compatible memory cards and controllers can be used, because the Wii's internal memory cannot store GameCube save data.
Because of the original device's backward compatibility with earlier Nintendo products, players can play older games on the console in addition to newer Wii titles. However, South Korean units lack GameCube backward compatibility, and the redesigned Wii Family Edition and Wii Mini, launched in 2011 and 2013 respectively, had this compatibility stripped out. Nevertheless, another service, the Virtual Console, allows users to download older games from prior Nintendo platforms (namely the Nintendo Entertainment System, Super NES and Nintendo 64) onto their Wii console, as well as games from non-Nintendo platforms such as the Genesis and TurboGrafx-16.
List of additional Channels
This is a list of new Wii Channels released beyond the four initial Channels (i.e. Disc Channel, Mii Channel, Photo Channel and Wii Shop Channel) included in the original consoles. The News Channel and the Forecast Channel were released as part of system updates so separate downloads were not required. As of January 30, 2019, all channels listed below have been discontinued with the exception of the Wii Fit Channel and the Internet Channel.
Pre-installed channels
Disc Channel
The Disc Channel is the primary way to play Wii and GameCube titles from supported Nintendo optical discs inserted into the console.
Each Wii game disc includes a system update partition, which contains the latest Wii system software from the time the game was released. If an inserted disc contains newer software than is installed on the console, installing the new software is required to play the game. This allows users without an Internet connection to still receive system updates. When such a disc is loaded into the disc slot, an icon that says "Wii System Update" appears on the Disc Channel. After the channel is selected, the Wii will automatically update. If the update is not installed, the game remains unplayable: each time the channel is loaded with the game inserted, the update prompt appears, and declining it returns the player to the Wii Menu instead of starting the game.
Games requiring a system update can still be played without updating using homebrew software, such as Gecko OS or a USB loader.
Mii Channel
The Mii Channel is an avatar creator, where users can design 3D caricatures of people called Miis by selecting from a group of facial and bodily features. At the Game Developers Conference 2007, Shigeru Miyamoto explained that the look and design of the Mii characters are based on Kokeshi, a form of Japanese doll used as souvenir gifts.
A Wired interview of Katsuya Eguchi (producer of Animal Crossing and Wii Sports) held in 2006 confirmed that the custom player avatar feature shown at Nintendo's E3 Media Briefing would be included in the hardware. The feature was described as part of a "profile" system that contains the Mii and other pertinent player information. This application was officially unveiled by Nintendo in September 2006. It is incorporated into Wii's operating system interface as the "Mii Channel". Users can select from pre-made Miis or create their own by choosing custom facial shapes, colors, and positioning. In certain games, each player's Mii will serve as the character the player controls in some/all forms of gameplay. Miis can interact with other Wii users by showing up on their Wii consoles through the WiiConnect24 feature or by talking with other Miis created by Wii owners all over the world. This feature is called Mii Parade. Early-created Miis as well as those encountered in Mii Parades may show up as spectators in some games. Miis can be stored on Wii Remotes and taken to other Wii consoles. The Wii Remote can hold a maximum of 10 Miis.
In addition, Mii characters can be transferred from a user's Wii to Nintendo 3DS consoles, as well as supported Nintendo DS games via the Mii Channel. While in the channel, pressing A, followed by B, then 1, and holding 2 on the Wii Remote allows the user to unlock the feature. The Mii Channel is succeeded by the Mii Maker app for both Nintendo 3DS and Wii U, and the Mii options in Settings for Nintendo Switch.
According to Nintendo president Satoru Iwata, over 160 million Mii characters had been created using the Mii Channel as of May 2010.
Photo Channel
If a user inserts an SD card into the console, or receives photos (JPEG) or videos (MJPEG) via email, they can be viewed using the Photo Channel. The user can create a slideshow simply by inserting an SD card with photos and, optionally, MP3 or AAC files (see the note regarding the December 10, 2007 update to version 1.1). The Wii will automatically add Ken Burns Effect transitions between the photos and play either the music on the SD card or built-in music in the background. A built-in editor allows users to add markings and effects to their photos or videos (the edits float statically above the videos); mosaics can also be created with this feature. In "Doodle" mode, the user can draw on the photos. "Mood" mode offers four effects: brightening the photo, converting it to grayscale, "zapping" it, or "cooking up a hard-boiled" version. Puzzles can be created from photos or videos in varying degrees of difficulty: the first puzzle is always six pieces, with 6-, 12-, 24- and 48-piece puzzles normally available, and a 192-piece option selectable while holding down 1 on the Wii Remote. Edited photos can be saved to the Wii and sent to other Wiis via the message board. According to the system's manual, the following file extensions (i.e. formats) are supported: Photos (jpeg/jpg), Movies (mov/avi), and Music (mp3/aac).
JPEG files can be up to 8192×8192 in resolution and must be in baseline format. Video data contained within the .mov or .avi files must be OpenDML-compliant Motion JPEG, with a resolution of up to 848×480 pixels (Wide VGA). Photos, even high-resolution ones, are compressed and decreased in resolution.
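As a rough illustration of these constraints, the following hypothetical check (using the Pillow imaging library; the function and file names are invented, and the progressive-JPEG detection relies on Pillow's info dictionary) tests whether a JPEG falls within the stated Photo Channel limits:

```python
# Hypothetical Photo Channel JPEG check -- illustrative only, not Nintendo's code.
# Requires: pip install Pillow
from PIL import Image

MAX_W, MAX_H = 8192, 8192  # stated Photo Channel limit for JPEG images

def photo_channel_accepts(path: str) -> bool:
    with Image.open(path) as im:
        if im.format != "JPEG":
            return False
        # The Photo Channel requires baseline (non-progressive) JPEG;
        # Pillow flags progressive JPEGs in the image's info dict.
        if im.info.get("progressive") or im.info.get("progression"):
            return False
        w, h = im.size
        return w <= MAX_W and h <= MAX_H

# Example (with a hypothetical file path):
# print(photo_channel_accepts("holiday.jpg"))
```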
Photo Channel 1.1
Photo Channel 1.1 is an optional update to the Photo Channel that became available on the Wii Shop Channel on December 10, 2007. It allows users to customize the Photo Channel icon on the Wii Menu with photos from an SD card or the Wii Message Board. It also allows playback of songs in random order. The update replaced MP3 playback with support for AAC audio in MPEG-4 (.m4a) files.
Wii owners who updated to version 1.1 can revert to version 1.0 by deleting it from the channels menu in the data management setup. Consoles released after December 10, 2007 come with the version 1.1 update pre-installed, and cannot be downgraded to version 1.0.
Owners of systems on Japanese firmware can download a "Revert to Photo Channel 1.0" Channel from the Wii Shop Channel if they wish to do so.
Wii Shop Channel
The Wii Shop Channel allowed users to download games and other software by redeeming Wii Points, which could be obtained by purchasing Nintendo Points cards from retail outlets or directly through the Wii Shop Channel using MasterCard or Visa credit cards online. Users could browse the Virtual Console, WiiWare, or Wii Channels sections for downloads. A feature to purchase downloaded software as gifts for others became available worldwide on December 10, 2007. Additional channels that were not released at the console's launch were available in the Wii Shop Channel. These included: the Internet Channel, Everybody Votes Channel, Check Mii Out Channel, Nintendo Channel, Netflix Channel, and the Japan-only Television Friend Channel. Until the channel's shutdown on January 30, 2019, all downloadable channels were free of charge. The channel was originally going to be called the Shopping Channel.
Nintendo discontinued the Wii Shop Channel on January 30, 2019 (having announced that they planned to do so on September 29, 2017), with the purchase of Wii Points ending on March 26, 2018. The ability to redownload previously purchased content and/or transfer Wii data from the Wii to the Wii U still remains available.
Forecast Channel
The Forecast Channel allowed weather reports and forecasts provided by Weathernews to be shown on the console from the Internet via the WiiConnect24 service. The Forecast Channel displayed a view of the Earth as a globe (courtesy of NASA's The Blue Marble image), with which users can view weather in other regions. When fully zoomed out, an accurate star map was visible in the background. (The Big Dipper and the constellation Orion were easily recognizable, for example.) The Forecast Channel features included the current forecast, the UV index, today's overall forecast, tomorrow's forecast, a 5-day forecast (only for the selected country in which the user lives), a laundry check (Japan only) and pollen count (Japan only). The Forecast Channel first became available on December 19, 2006. Certain games could use the Forecast Channel to simulate weather conditions depending on the player's region.
There are slight variations of Forecast Channel versions in different regions. When viewing weather conditions in Japan, a different set of weather icons is used. Additionally, the laundry index was only featured in the Japanese version.
After the August 6, 2007 update, the Forecast Channel showed the icon for the current weather on the Wii Menu.
The Forecast Channel (along with the News Channel) was not available in South Korea.
Like the four other Wii channels (News Channel, Everybody Votes Channel, Check Mii Out Channel/Mii Contest Channel, Nintendo Channel), the Forecast Channel ended its seven-year support on June 27, 2013.
News Channel
The News Channel allowed users to access news headlines and current news events obtained from the Internet. News articles were available on a globe view, allowing users to view news from certain areas of the world (similar to the Forecast Channel), and as a slide show. The content was automatically updated and viewable via WiiConnect24 with clickable news images supported. The channel contained seven categories: National News, International News, Sports, Arts/Entertainment, Business, Technology and Oddities.
The News Channel became available in North America, Europe, and Australia on January 26, 2007. Content was in a variety of languages provided by the Associated Press, who had a two-year contract to provide news and photos to Nintendo. Canadian news was submitted by the Canadian Press for publication. Japanese news was provided by Goo. European news was provided by Agence France-Presse.
Starting with the August 6, 2007 update, the News Channel showed a news ticker on the Wii Menu and when selecting the channel. However, not visiting the channel for a period of time resulted in the ticker not appearing; instead, "You must use the News Channel regularly for news to be displayed on this screen." was displayed on the preview screen until the channel was opened up. A December 20, 2007 PAL-region update increased the number of news feeds to the channel, sourced from a larger number of news resources and agencies, making more news available per country.
The News Channel (along with the Forecast Channel) was not available in South Korea.
Like the four other Wii channels (Forecast Channel, Everybody Votes Channel, Mii Contest Channel, Nintendo Channel), the News Channel ended its seven-year support on June 27, 2013.
Get Connected Video Channel
The Get Connected Video Channel or Wii & the Internet Channel (or alternatively known as the Wii + Internet Channel or Wii: See What You Can Do On the Internet) is pre-installed onto Wii console units manufactured in October 2008 or later. It contains an informational video specifying the benefits of connecting the Wii console to the Internet, such as downloading extra channels, new software, Virtual Console titles, and playing games over Nintendo Wi-Fi Connection.
The Get Connected Video Channel is the only pre-installed channel that takes up spare internal memory, and the only channel that can be manually deleted or moved to an SD card by the user. The channel takes up over half of the Wii's internal memory space. Upon connecting to the Internet and running the channel, the user will be asked if they would like to delete it. It cannot be re-downloaded or restored upon deletion.
The same video presentation contained in the channel can also be viewed on an archived version of Nintendo's official website.
The channel is also available in multiple languages. Unlike the other channels, the video is not translated digitally but is presented in multiple dubs, which means there are multiple copies of the same video in a single channel. The language in which the video is presented corresponds to the Wii's language setting. Three languages are available in the US version: English, French and Spanish; and six in the PAL version: English, French, Spanish, German, Italian and Dutch.
Internet Channel
The Internet Channel is a version of the Opera web browser for use on the Wii by Opera Software and Nintendo. On December 22, 2006 a free demo version (promoted as "Internet Channel: Trial Version") of the browser was released. The final version (promoted as "Internet Channel: Final Version") of the browser was released on April 11, 2007 and was free to download until June 30, 2007. After this deadline had passed, the Internet Channel cost 500 Wii Points to download until September 1, 2009, though users who downloaded the browser before June 30, 2007, could continue to use it at no cost for the lifetime of the Wii system. An update (promoted as the "Internet Channel") on October 10, 2007 added USB keyboard compatibility. On September 1, 2009 the Internet Channel was made available to Wii owners for no cost of Wii Points and updated to include improved Adobe Flash Player support. A refund was issued to those who paid for the channel in the form of one free NES game download worth 500 Wii Points.
The Internet Channel uses whichever connection is chosen in the Wii settings, and utilizes the user's internet connection directly; there is no third party network that traffic is being routed through. It receives a connection from a router/modem and uses a web browser to pull up HTTP and HTTPS (secure and encrypted) web pages. Opera, the Wii's web browser, is capable of rendering most web sites in the same manner as its desktop counterpart by using Opera's Medium Screen Rendering technology.
The software is saved to the Wii's 512 MB internal flash memory (it can be copied to an SD card after it has been downloaded). The temporary Internet files (maximum of 5MB for the trial version) can only be saved to the Wii's internal memory. The application launches within a few seconds, after connecting to the Internet through a wireless LAN using the built-in interface or a wired LAN by using the USB to the Ethernet adapter.
The Opera-based Wii browser allows users full access to the Internet and supports all the same web standards that are included in the desktop versions of Opera, including CSS and JavaScript. It is also possible for the browser to use technologies such as Ajax, SVG, RSS, and Adobe Flash Player 8 and limited support for Adobe Flash Player 9. Opera Software has indicated that the functionality will allow for third parties to create web applications specifically designed for the use on the Wii Browser, and it will support widgets, standalone web-based applications using Opera as an application platform.
Third party APIs and SDKs have been released that allow developers to read the values of the Wii Remote buttons in both Flash and JavaScript. This allows for software that previously required keyboard controls to be converted for use with the Wii Remote. The browser was also used to stream BBC iPlayer videos from April 9, 2008 after an exclusive deal was made with Nintendo UK and the BBC to offer their catch-up service for the Wii. However, the September 2009 update caused the iPlayer to no longer operate. The BBC acknowledged the issue and created a dedicated channel instead. In June 2009, YouTube released YouTube XL, a TV-friendly version of the popular video-sharing website. The regular YouTube page would redirect the browser to YouTube XL, if the website detected that the Internet Channel or the PlayStation 3 browser is being used.
Everybody Votes Channel
Everybody Votes Channel allowed users to vote in simple opinion polls and compare and contrast opinions with those of friends, family, and people across the globe.
The Everybody Votes Channel was launched on February 13, 2007, and was available in the Wii Channels section of the Wii Shop Channel. The application allowed Wii owners to vote on various questions using their Mii as a registered voter. Additionally, voters were able to make predictions about which choice would be the most popular overall after casting their own vote. Each Mii's voting and prediction record was tracked, and voters could view how their opinions compared to others'. Whether a Mii was correct in its predictions was displayed on a statistics page, along with a counter of how many times that Mii had voted. Up to six Miis could be registered to vote on one console. The channel was free to download, and each player could suggest one poll question per day.
Like the other four Wii channels (Forecast Channel, News Channel, Nintendo Channel, Check Mii Out Channel/Mii Contest Channel), the Everybody Votes Channel ended its seven-year support on June 27, 2013, due to Nintendo shifting its resources to next-generation projects. Unlike the other discontinued channels, the Everybody Votes Channel remains accessible, with users able to view the latest poll data posted, although the channel will never be updated again.
Check Mii Out Channel
The Check Mii Out Channel (also known as the Mii Contest Channel in Australia, Europe and Japan and Canal Miirame in Spanish-speaking countries in Latin America) was a channel that allowed players to share their Miis and enter them into popularity contests. It was first available on November 11, 2007. It was available free to download from the Wii Channels section of the Wii Shop Channel.
Users could post their own Miis in the Posting Plaza, or import other user-submitted Miis to their own personal Mii Parade. Each submitted Mii was assigned a 12-digit entry number to aid in searching. Submitted Miis were given two initials by their creator and a notable skill/talent to aid in sorting.
In the Contests section, players submitted their own Miis to compete in contests to best fit a certain description (e.g. Mario without his cap). After the time period for sending a Mii had expired, the user had the choice of voting for three Miis featured on the judging panel, with ten random Miis being shown at a time. Once the judging period was over, the results of the contest could be viewed. The popularity of the user's selection and/or submission in comparison to others was displayed, as well as the winning Mii and user.
The Check Mii Out Channel sent messages to the Wii Message Board concerning recent contests. Participants in certain contests would add their user and submitted Mii to a photo with a background related to the contest theme. This picture would then be sent to the Wii Message Board.
This channel ended its seven-year support on June 27, 2013 like the four other channels (Forecast Channel, News Channel, Everybody Votes Channel, Nintendo Channel).
Nintendo Channel
The Nintendo Channel (known as the Everybody's Nintendo Channel in Japan) allowed Wii users to watch videos such as interviews, trailers and commercials, and to download demos for the Nintendo DS line of systems. The channel also carried information on Nintendo Entertainment System, Super NES, Nintendo 64 and GameCube games; its role was later filled on the Wii U and the Nintendo Switch by the Nintendo eShop. In its demo-distribution capacity the channel worked in a similar way to the DS Download Station. The channel provided game information pages, and users could rate games that they had played. A search feature was also available to assist users in finding new games to try or buy, and the channel could take the user directly into the Wii Shop Channel to buy a desired game immediately. The Nintendo Channel was launched in Japan on November 27, 2007, in North America on May 7, 2008, and in Europe and Australia on May 30, 2008. It was updated with different Nintendo DS demos and new videos every week; the actual day of the week varied across international regions. Nintendo DS demos could be transmitted to the handheld console.
An updated version of the Nintendo Channel was released in Japan on July 15, 2009, North America on September 14, 2009, and in Europe on December 15, 2009. The update introduced a new interface and additional features, options, and statistics for users to view. However, the European version was missing some of these new additional features, such as options for choosing video quality. In addition, a weekly show known as Nintendo Week began airing exclusively on the North American edition of the channel, while another show, Nintendo TV, was available on the UK version of the channel.
The Nintendo Channel and the other 4 channels (Forecast Channel, News Channel, Everybody Votes Channel, and Check Mii Out Channel/Mii Contest Channel) ended their seven-year support on June 27, 2013.
A few shows appeared on Nintendo Channel which were no more than 20 minutes long:
Nintendo Week: The hosts were Gary and Allison, but other co-hosts appeared as well like Dark Gary, Daniel, and others.
Ultimate Wii Challenge/New Super Mario Bros. Wii Challenge: The hosts were David and Ben. They tried to beat each other's time in Nintendo Games like New Super Mario Bros. Wii, Donkey Kong Country Returns, Super Mario Galaxy 2, and Kirby's Epic Yarn. In a few episodes, Ben and David worked together in levels of a few games.
Many Nintendo DS demos were available via the Nintendo Channel's DS Download Service.
Disconnection
The Forecast Channel, News Channel, Everybody Votes Channel and Check Mii Out Channel/Mii Contest Channel were shut down permanently on June 27, 2013, as Nintendo terminated the WiiConnect24 service which these channels required and shifted its resources to next-generation projects such as the Wii U and Nintendo 3DS.
Other channels
These channels were those that could be acquired through the usage of various games and accessories.
Wii Fit/Wii Fit Plus Channel
Wii Fit allowed users to install the Wii Fit Channel to the Wii Menu. The channel allowed them to view and compare their results, and those of others, as well as their progress in the game, without requiring the game disc to be inserted.
The channel allowed users to access some of the features of Wii Fit: viewing statistics from the game, including BMI measurements and balance test scores in the form of a line graph, and keeping track of the various activities they had undertaken with a calendar. Users could also weigh themselves and take a BMI and balance test with the channel once per day. However, if the player wished to do any exercises or play any of the aerobics games and/or balance games, the channel prompted the user to insert the Wii Fit game disc.
Mario Kart Channel
Mario Kart Wii allows players to install the Mario Kart Channel on their Wii console. The channel can work without the Mario Kart Wii disc inserted into the console, but the disc is required to compete in races and time trials. The Mario Kart Channel offers a number of options. A ranking option lets players see their best Time Trial scores for each track and compare their results to those of their friends and other players worldwide, represented by their Miis. Players have the option of racing against random or selected ghosts, or of improving their results gradually by taking on the ghosts of rivals with similar race times. Users can submit these times for others around the world to view. Players can also manage and register friends using the channel and see if any of them are currently online.
Another feature of the channel was Tournaments, in which Nintendo invited players to take on challenges similar to the missions in Mario Kart DS. Players were also able to compare their competition rankings with those of other players.
As of May 20, 2014, most features of the channel have been discontinued, such as Tournaments.
Jam with the Band Live Channel (Japan and PAL regions only)
The Nintendo DS game Jam with the Band supports the Jam with the Band Live Channel (known as the Speaker Channel in Japan) that allows players to connect their game to a Wii console and let the game's audio be played through the channel. The channel supports multiple players.
Wii Speak Channel
Users with the Wii Speak peripheral are able to access the Wii Speak Channel. Users can join one of four rooms (with no limit to the number of people in each room) to chat with others online. Each user is represented by their own Mii, which lip-syncs to their words. In addition, users can leave audio messages for other users by sending a message to their Wii Message Board, and can also view photo slideshows and comment on them. The Wii Speak Channel became available in North America and Europe on December 5, 2008, and was discontinued on May 20, 2014. The Wii Speak Channel is succeeded by Wii U Chat, which is standardized for the Wii U console.
Rabbids Channel
This is a channel created by Rabbids Go Home. When the game is started up for the first time or when the player goes to the player profile screen, the player may install the Rabbids Channel, which will appear on the Wii Menu once it is downloaded. Players can use the channel to view other people's Rabbids and enter contests.
Downloadable channels
Downloadable Channels are Channels that can be bought from the Wii Shop Channel.
Virtual Console Channels
Virtual Console channels were channels that allowed users to play their downloaded Virtual Console games obtained from the Wii Shop Channel. The Virtual Console portion of the Wii Shop Channel specialized in older software originally designed and released for home entertainment platforms that are now defunct. These games were played on the Wii through emulation of the older hardware. The prices were generally the same in almost every region and were determined primarily by the software's original platform. A single Virtual Console channel in which users could launch their Virtual Console games sorted by console was initially planned, but the idea was dropped.
WiiWare Channels
Functioning similarly to the Virtual Console channels, WiiWare channels allowed users to use their WiiWare games obtained from the Wii Shop Channel. The WiiWare section specialized in downloadable software specifically designed for the Wii. The first WiiWare games were made available on March 25, 2008 in Japan. WiiWare games launched in North America on May 12, 2008, and in Europe and Australia on May 20, 2008.
The WiiWare section was touted as a forum for developers with small budgets to release smaller-scale games without the investment and risk of creating a title to be sold at retail (somewhat similar to Xbox Live Arcade and the PlayStation Store). While games had been planned for this section since its inception, there was no official word on when any would appear until June 27, 2007, when Nintendo confirmed in a press release that the first titles would surface sometime in 2008. According to Nintendo, "The remarkable motion controls will give birth to fresh takes on established genres, as well as original ideas that currently exist only in developers' minds."
Like Virtual Console games, WiiWare games were purchased using Wii Points. Nintendo handled all pricing options for the downloadable games.
Television Friend Channel (Japan only)
The Television Friend Channel allowed Wii users to check what programs were on television. Content was provided by Guide Plus, and the channel was developed by HAL Laboratory. The channel had been described as "very fun and Nintendo-esque". A "stamp" feature allowed users to mark programs of interest with a Mii-themed stamp. If an e-mail address or mobile phone number was registered in the address book, the channel could send out an alert 30 minutes prior to the start of the selected program. The channel tracked the stamps of all Wii users and allowed users to rate programs on a five-star scale. Additionally, when the channel was active, the Wii Remote could be used to change the TV's volume and channel so that users could tune into their shows by way of the channel. The Television Friend Channel launched in Japan on March 4, 2008, and was discontinued on July 24, 2011, due to the shutdown of analog television broadcasts in Japan. It was never launched outside Japan, as most countries, unlike Japan, have a guide built into set-top boxes and/or TVs. The Television Friend Channel was succeeded by the now-defunct Nintendo TVii, which was standardized for the Wii U console. Pre-release versions also contained the Kirby 1-UP sound, a nod to developer HAL Laboratory, but it was removed before the channel's release.
Digicam Print Channel (Japan only)
The Digicam Print Channel was a channel developed in collaboration with Fujifilm that allowed users to import their digital photos from an SD card and place them into templates for printable photo books and business cards through a software wizard. The user was also able to place their Mii on a business card. The completed design would then be sent online to Fujifilm who printed and delivered the completed product to the user. The processing of individual photos was also available.
The Digicam Print Channel became available from July 23, 2008 in Japan, and ceased operation on June 26, 2013.
Today and Tomorrow Channel
The Today and Tomorrow Channel became available in Japan on December 2, 2008, and in Europe, Australia, and South Korea on September 9, 2009. The channel was developed in collaboration with Media Kobo and allows users to view fortunes for up to six Miis across five categories: love, work, study, communications, and money. The channel also features a compatibility test that compares two Miis, and gives out "lucky words" that must be interpreted by the user. The channel uses Mii birthdate data, but users must input a birth year when Miis are loaded onto the channel. This channel was never released in North America, and although it was discontinued on January 30, 2019 with the Wii Shop Channel, it can still be redownloaded if obtained before the Wii Shop Channel's closure.
Wii no Ma (Japan only)
A video on-demand service channel was released in Japan on May 1, 2009. The channel was a joint venture between Nintendo and Japanese advertising agency Dentsu. The channel's interface was built around a virtual living room, where up to 8 Miis can be registered and interact with each other. The virtual living room contained a TV which took the viewer to the video list. Celebrity "concierge" Miis occasionally introduced special programming. Nintendo ceased operations of Wii no Ma on April 30, 2012. This channel is also known as Wii Room in English.
Demae Channel (Japan only)
A food delivery service channel was released in Japan on May 26, 2009. The channel was a joint venture between Nintendo and the Japanese online food delivery portal service Demae-can, and was developed by Denyu-sha. The channel offered a wide range of foods provided by different food delivery companies, which could be ordered directly through the channel. A note was posted to the Wii Message Board containing what had been ordered and the total price, and the food was then delivered to the address the Wii user had registered on the channel. On February 22, 2017, the Demae Channel was delisted from the Wii Shop Channel; it was later discontinued alongside the Wii U version on March 31, 2017.
BBC iPlayer Channel (UK only)
Wii access to the BBC iPlayer through the Internet Channel, available since April 9, 2008, was interrupted when an update to the Opera browser turned out to be incompatible with the BBC iPlayer, and the BBC chose not to make the BBC iPlayer compatible with the upgrade. This was resolved on November 18, 2009, when the BBC released the BBC iPlayer Channel, allowing easier access to the BBC iPlayer.
The BBC offered this free, dedicated Wii channel version of the BBC iPlayer application only in the UK. By February 10, 2015, however, the channel was retired and removed from the Wii Shop Channel, since newer versions of the iPlayer are not compatible with the Wii and it is the BBC's policy to retire older versions as part of resource management. The channel has since been succeeded by the BBC iPlayer app on the UK edition of the Wii U eShop, released in May 2015.
Netflix Channel
The Netflix channel was released in the United States and Canada on October 18, 2010 and in the UK and Ireland on January 9, 2012. This channel allowed Netflix subscribers to use that service's "Watch Instantly" movie streaming service over the Wii with their regular Netflix subscription fee, and replaced the previous Wii "streaming disc" mailed to Netflix customers with Wii consoles from March 27 to October 17, 2010 due to contractual limitations involving Xbox 360 exclusivity. The channel was free to download in the Wii Channels section of the Wii Shop Channel. The channel displayed roughly 12 unique categories of videos with exactly 75 video titles in each category. The TV category had many seasons of videos (i.e. 15–100 episodes) associated with each title. There were also categories for videos just watched, new releases, and videos recommended (based on the user's Netflix subscription history). On July 31, 2018, the channel was delisted from the Wii Shop Channel; Netflix would drop support for the Wii on January 30, 2019.
LoveFilm Channel (UK and Germany only)
On 4 December 2012, the LoveFilm channel was available to download on Wii consoles in the UK and Germany; the channel was discontinued on 31 October 2017, along with the closure of LoveFilm itself.
Kirby TV Channel (PAL regions only)
The Kirby TV Channel launched on June 23, 2011 in Europe, Australia and New Zealand, and has since been discontinued. The channel allowed users to view episodes of the animated series Kirby: Right Back at Ya! for free. This channel was succeeded by the Nintendo Anime Channel, a Nintendo 3DS video-on-demand app available in Australasia and Europe, which streamed curated anime or anime-inspired shows, such as Kirby: Right Back at Ya!
Hulu Plus Channel (USA only)
The Hulu Plus Channel was a channel for the Wii, announced in Nintendo Updates on the Nintendo Channel. It included classic shows and other Hulu content. The channel launched in 2012 and was only available in the United States. On January 30, 2019, Hulu dropped support for the Wii.
The Legend of Zelda: Skyward Sword Save Data Update Channel
The Legend of Zelda: Skyward Sword Save Data Update Channel fixed an issue in the game The Legend of Zelda: Skyward Sword. This was the only Wii game ever to receive a downloadable self-patching service; for previous titles with technical issues, such as Metroid: Other M, owners experiencing those issues had to send their Wii consoles to customer service, where Nintendo fixed the problem manually.
YouTube Channel
The YouTube channel allowed the user to view YouTube videos on the television screen and had the ability to sign into an existing YouTube account. The YouTube channel, which became available without warning, was only available in the North American, UK, Japanese, and Australian versions of the Wii system, with the North American release on November 15, 2012, only three days before the Wii U was released in North America. Google planned to gradually make the channel available on Wii in other countries besides the aforementioned regions. The YouTube channel was initially categorized on the Wii Shop Channel as a WiiWare title by mistake, but this was later fixed when the Wii U Transfer Tool channel became available. On June 26, 2017, YouTube terminated legacy support for all devices that continue using the Flash-based YouTube app (typically found in most TV devices released before 2012), which includes the Wii.
Wii U Transfer Tool Channel
This application became available on the Wii Shop Channel on the day the Wii U was released in each respective region. The only purpose of this channel is to assist in transferring all eligible content from a Wii console to a Wii U console, where that content becomes available via Wii Mode on the target Wii U. The application can transfer all available listed WiiWare titles (initially with the sole exemption of LostWinds for unknown reasons, though the game has since become available for both transfer to and purchase on the Wii U since May 2014), all available listed Virtual Console titles, game save data, DLC data, Mii Channel data, Wii Shop Channel data (including Wii Points, provided the accumulated total does not exceed 10,000 Wii Points on the target Wii U), and Nintendo Wi-Fi Connection ID data (albeit now moot, since the service was discontinued in May 2014). It cannot transfer Wii settings data; pre-installed WiiWare/Virtual Console titles (such as Donkey Kong: Original Edition, which came pre-installed in the PAL version of the Super Mario Bros. 25th Anniversary Wii bundle); any game or application software delisted from the Wii Shop Channel prior to the release of the Wii U (such as the Donkey Kong Country trilogy); software that is already available in the target Wii U's Wii Mode; WiiConnect24-supported software and save data (which includes the 16-digit Wii console Friend Code); or GameCube save data, since the Wii U does not support the latter two. It is possible to move content from multiple Wii consoles to a single target Wii U console, as well as to perform multiple transfers from a single Wii console if required, although the last Wii console's content will overwrite any similar Wii data transferred to the target Wii U earlier. Due to technical limitations, the channel cannot directly transfer any eligible background data saved on the console's SD card.
The Wii U Transfer Tool Channel features an animation based on the Pikmin series, in which various Pikmin carry the eligible data and software to a space ship, likely representing the SD card used to perform the transfer, bound for the Wii U. While contextually dynamic, this animation is not interactive and exists only for entertainment purposes.
The ability to transfer content from the Wii to the Wii U remains available for the foreseeable future after the Wii Shop Channel's shutdown on January 30, 2019.
Amazon Instant Video (USA only)
Amazon Instant Video, a video on demand service provided by Amazon, was released as a downloadable Wii channel in the United States on January 17, 2013; the service was discontinued on January 30, 2019.
Crunchyroll
In late 2014, Crunchyroll released its video app for the Wii's successor, the Wii U, in North America. Believing there were still many actively connected Wii consoles in the system's twilight years, Crunchyroll then surprised users with a Crunchyroll channel for the Wii as well, launching the app, categorized under WiiWare, on October 15, 2015 in North America and the PAL regions. The Crunchyroll Wii channel only permitted Premium account holders access to the majority of the prime content. On May 5, 2017, less than 20 months after its launch, Crunchyroll ceased support for the Wii due to technical limitations after the service updated to new technology.
Wii Message Board
The Message Board allows users to leave messages for friends, family members, or other users on a calendar-based message board. Users could also use WiiConnect24 to trade messages and pictures with other Wii owners, conventional email accounts (pictures could be emailed to the console, but not from the console to an email address), and mobile phones (through text messages). Each Wii has an individual wii.com email account containing the Wii Number. Prior to trading messages it is necessary to add and approve contacts in the address book, although the person added does not get an automatic notification of the request and must be notified by other means. The service also alerts users to incoming game-related information.
The Message Board allowed users to post messages that were visible to other Wii users by means of Wii Numbers over WiiConnect24. In addition to writing text, players could include images from an SD card in the body of messages, as well as attach a Mii to the message. Announcements of software updates and video game news were posted by Nintendo. The Message Board can also be used for posting memos for oneself or for family members without going online; these messages can be placed on any day of the calendar. The Wii Message Board could also be updated automatically by a real-time game like Animal Crossing. Wii Sports, Wii Play, Mario Kart Wii, the Wii Speak Channel, Wii Sports Resort, Super Mario Galaxy and Super Mario Galaxy 2 use the Message Board to update the player on new high scores or gameplay advancements, such as medal placements in the first two titles, race completions including a photo, audio messages, and letters from the Mailtoad. Metroid Prime 3: Corruption, Super Mario Galaxy, Super Smash Bros. Brawl, Elebits, Animal Crossing: City Folk, Dewy's Adventure and the Virtual Console game Pokémon Snap allow players to take screenshots and post them to the Message Board to edit later or send to friends via messages. Except for GameCube games, the Message Board also records the play history in the form of "Today's Accomplishments". This feature automatically records details of what games or applications were played and for how long; it cannot be deleted or hidden without formatting the console itself. Prior to its closure, the Nintendo Channel was able to automatically tally all Wii game play data from the Message Board and display it in an ordered list within the channel.
Subsequent system updates added a number of minor features to the Message Board, including minor aesthetic changes, USB keyboard support and the ability to receive Internet links from friends, which can be launched in the Internet Channel.
An exploit in the Wii Message Board can be used to homebrew a Wii via a tool called LetterBomb.
Discontinuation
The WiiConnect24 service was terminated on June 27, 2013, completely ceasing the data exchange functionality of the Wii Message Board for all Wii consoles, whether for messages or game data. However, Nintendo was still able to send some notification messages after that date to Wii consoles that remained continuously powered on.
SD Card Menu
The SD Card Menu is a feature made available with the release of Wii Menu version 4.0. This menu allows the user to run Virtual Console games, WiiWare games, and Wii Channels directly from the SD card, which makes it possible to free up the Wii's internal memory. Applications can be downloaded to the SD card directly from the Wii Shop Channel as well.
When running an application from the SD Card Menu, it is temporarily copied to the internal memory of the Wii, meaning the internal memory still must contain an amount of free blocks equal to the application's size. If the internal memory does not have enough space, the Channel will run an "Automanager" program, which clears up space for the user in one of many ways (selectable by the user).
The manager can move the largest channels on the user's Wii to the SD card, move smaller channels to the SD card until enough space remains to run the channel, or clear channels from the left side of the Wii Menu toward the right (or from the right toward the left) until there are enough free blocks to run the channel.
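To illustrate the kind of selection this manager performs, here is a minimal, hypothetical Python sketch; the function and field names are invented for illustration, and Nintendo's actual implementation is not public:

```python
def free_up_blocks(channels, needed, largest_first=True):
    """Pick channels to move to the SD card until `needed` blocks are free.
    Hypothetical reconstruction of one Automanager strategy, not Nintendo code."""
    order = sorted(channels, key=lambda c: c["blocks"], reverse=largest_first)
    moved, freed = [], 0
    for ch in order:
        if freed >= needed:
            break
        moved.append(ch["name"])  # mark this channel for relocation to SD
        freed += ch["blocks"]
    return moved, freed

channels = [{"name": "A", "blocks": 120}, {"name": "B", "blocks": 40}, {"name": "C", "blocks": 300}]
print(free_up_blocks(channels, needed=150))  # (['C'], 300): largest-first strategy
```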
History of updates
System version 1.0 was released on launch day, and was designed mainly for offline use, as connecting to the internet would trigger an update prompt to install 2.0. For a while after that, the Wii received new features such as the Forecast Channel, as well as bug fixes.
Some of these updates also included fixes to block the early forms of homebrew, the first of which was an SSL issue in the Wii Shop Channel. Later in 2007, Nintendo added code to block the GameCube Action Replay, although this update was bundled with several other features in the 3.0 update.
A week after Wii Freeloader was released, Nintendo released an update containing a new IOS with the bug exploited by Freeloader fixed, although this new IOS was not used by the Wii Menu. Later that year, Nintendo released a new Wii Menu that copied this fix to the IOS used by the Wii Menu. In addition, code was added to the Wii Menu to delete the primary homebrew entrypoint on every boot, although this code was very buggy and was easily bypassed. Nintendo also patched the hole used to extract the private encryption keys of the Wii, and finally made a small change to the Mii Channel to convince people to update.
Nintendo's next few updates made similar small changes to various channels, and one of them copied the fix for the previous IOS bug to every IOS, as well as a few other exploit fixes. A few weeks later, Nintendo ported these new fixes to every IOS, made a failed attempt to block a specific homebrew IOS, and made their second attempt at fixing the main homebrew entrypoint. This attempt at stopping the homebrew entrypoint was then superseded by a successful attempt in 2009, along with other IOS fixes, and some features.
Later that year, Nintendo released another homebrew-blocking update, but unlike the previous updates, it offered no new features; instead, it updated the Wii Shop Channel to require the new version. In addition to fixing homebrew bugs, it aggressively checked for the Homebrew Channel and deleted it if present, replaced several IOSes used by homebrew with nonfunctional versions, and updated a bootloader to overwrite the one used by homebrew, unexpectedly causing many consoles to refuse to boot. Two similar updates were then released throughout 2010, although the only attempts to stop Wii homebrew past that point were in the Wii U's Wii Mode feature.
The final update delivered in PAL and American regions added support for transferring content to the Wii U. However, two updates were released in Japan past this point that only affected Dragon Quest X players, solely updating the IOS used by Dragon Quest X.
See also
Nintendo Wi-Fi Connection
WiiConnect24
Wii Shop Channel
Other gaming platforms from Nintendo:
Nintendo 3DS system software
Nintendo DSi system software
Wii U system software
Nintendo Switch system software
Other gaming platforms from the next generation:
PlayStation 4 system software
PlayStation Vita system software
Xbox One system software
Other gaming platforms from this generation:
PlayStation 3 system software
PlayStation Portable system software
Xbox 360 system software
References
External links
Wii System Menu and Feature Updates
Site documenting the changes made in each update and how they affect homebrew and other hacks
Wii
Nintendo Network
Game console operating systems
Discontinued operating systems
Proprietary operating systems
Graphical user interface elements
Video games scored by Kazumi Totaka | Wii system software | [
"Technology"
] | 11,955 | [
"Components",
"Graphical user interface elements"
] |
7,304,939 | https://en.wikipedia.org/wiki/Random%20energy%20model | In the statistical physics of disordered systems, the random energy model is a toy model of a system with quenched disorder, such as a spin glass, having a first-order phase transition. It concerns the statistics of a collection of $N$ spins (i.e. degrees of freedom $\sigma_i$ that can take one of two possible values $\pm 1$) so that the number of possible states for the system is $2^N$. The energies of such states are independent and identically distributed Gaussian random variables with zero mean and a variance of $N/2$. Many properties of this model can be computed exactly. Its simplicity makes this model suitable for pedagogical introduction of concepts like quenched disorder and replica symmetry.
Thermodynamic quantities
Critical energy per particle: $\epsilon_0 = \sqrt{\ln 2}$.
Critical inverse temperature $\beta_c = 2\sqrt{\ln 2}$.
Partition function $Z = \sum_\sigma e^{-\beta H_\sigma}$, which at large $N$ becomes $e^{N(\ln 2 + \beta^2/4)}$ when $\beta < \beta_c$, that is, condensation does not occur. When this is true, we say that it has the self-averaging property.
Free entropy per particle $\phi(\beta) = \ln 2 + \beta^2/4$ for $\beta \le \beta_c$, and $\phi(\beta) = \beta\sqrt{\ln 2}$ for $\beta > \beta_c$.
Entropy per particle $s(\beta) = \ln 2 - \beta^2/4$ for $\beta \le \beta_c$, and $s(\beta) = 0$ for $\beta > \beta_c$.
Condensation
When $\beta < \beta_c$, the Boltzmann distribution of the system is concentrated at energy-per-particle $\epsilon = -\beta/2$, of which there are $e^{N(\ln 2 - \beta^2/4)}$ states.
When $\beta > \beta_c$, the Boltzmann distribution of the system is concentrated at $\epsilon = -\sqrt{\ln 2}$, and since the entropy per particle at that point is zero, the Boltzmann distribution is concentrated on a sub-exponential number of states. This is a phase transition called condensation.
Participation
Define the participation ratio as $Y = \sum_\sigma p_\sigma^2$, where $p_\sigma = e^{-\beta H_\sigma}/Z$ is the Boltzmann weight of state $\sigma$. The participation ratio measures the amount of condensation in the Boltzmann distribution. It can be interpreted as the probability that two randomly sampled states are exactly the same state. Indeed, it is precisely the Simpson index, a commonly used diversity index.
For each $\beta$, the participation ratio is a random variable determined by the energy levels.
When $\beta < \beta_c$, the system is not in the condensed phase, and so by asymptotic equipartition, the Boltzmann distribution is asymptotically uniformly distributed over $e^{N(\ln 2 - \beta^2/4)}$ states. The participation ratio is then $Y \approx e^{-N(\ln 2 - \beta^2/4)}$, which decays exponentially to zero.
When $\beta > \beta_c$, the participation ratio satisfies $\mathbb{E}[Y] = 1 - \beta_c/\beta$, where the expectation is taken over all random energy levels.
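As a numerical check on these formulas, the participation ratio can be estimated by direct Monte Carlo sampling. The following minimal Python sketch is not part of the original article; it uses the variance-$N/2$ convention above, and for accessible system sizes it approaches $1 - \beta_c/\beta$ in the condensed phase only up to sizable finite-size corrections:

```python
import math
import random

def participation_ratio(N, beta, rng):
    # REM: 2^N i.i.d. Gaussian energies with mean 0 and variance N/2.
    energies = [rng.gauss(0, math.sqrt(N / 2)) for _ in range(2 ** N)]
    e_min = min(energies)  # shift by the ground state for numerical stability
    weights = [math.exp(-beta * (e - e_min)) for e in energies]
    Z = sum(weights)
    return sum((w / Z) ** 2 for w in weights)

rng = random.Random(1)
beta_c = 2 * math.sqrt(math.log(2))
beta = 2 * beta_c  # deep in the condensed phase
Y = sum(participation_ratio(16, beta, rng) for _ in range(50)) / 50
print(Y, 1 - beta_c / beta)  # disorder-averaged Y vs. the predicted 1 - beta_c/beta
```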
Comparison with other disordered systems
The $p$-spin infinite-range model, in which all $p$-spin sets interact with a random, independent, identically distributed interaction constant, becomes the random energy model in the limit $p \to \infty$.
More precisely, if the Hamiltonian of the model is defined by
$H(\sigma) = -\sum_{i_1 < \dots < i_p} J_{i_1 \dots i_p}\, \sigma_{i_1} \cdots \sigma_{i_p}$,
where the sum runs over all $\binom{N}{p}$ distinct sets of $p$ indices, and, for each such set, $J_{i_1 \dots i_p}$ is an independent Gaussian variable of mean 0 and variance $p!/(2N^{p-1})$, the Random-Energy model is recovered in the $p \to \infty$ limit.
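To see why the $p \to \infty$ limit yields independent energies, it helps to spell out the covariance of the Hamiltonian at two spin configurations $\sigma$ and $\tau$, a standard step that the text compresses:

```latex
\mathbb{E}\big[H(\sigma)\,H(\tau)\big]
  = \frac{p!}{2N^{p-1}} \sum_{i_1 < \dots < i_p} \sigma_{i_1}\tau_{i_1} \cdots \sigma_{i_p}\tau_{i_p}
  \approx \frac{N}{2}\, q(\sigma,\tau)^p,
\qquad
q(\sigma,\tau) = \frac{1}{N}\sum_{i=1}^{N} \sigma_i \tau_i .
```

For any two distinct configurations $|q| < 1$, so the covariance vanishes as $p \to \infty$ while each variance remains $N/2$: the energies become independent Gaussians, which is the defining property of the REM.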
Derivation of thermodynamical quantities
As its name suggests, in the REM each microscopic state has an independent distribution of energy. For a particular realization of the disorder, $H(\sigma) = E_\sigma$, where $\sigma$ refers to the individual spin configurations described by the state and $E_\sigma$ is the energy associated with it. The final extensive variables like the free energy need to be averaged over all realizations of the disorder, just as in the case of the Edwards–Anderson model. Averaging over all possible realizations, we find that the probability that a given configuration of the disordered system has an energy equal to $E$ is given by
$[P(E)] = \sqrt{\tfrac{1}{N\pi}} \exp\left(-\tfrac{E^2}{N}\right)$,
where $[\cdots]$ denotes the average over all realizations of the disorder. Moreover, the joint probability distribution of the energy values of two different microscopic configurations of the spins, $\sigma$ and $\sigma'$, factorizes:
$[P(E_\sigma, E_{\sigma'})] = [P(E_\sigma)]\,[P(E_{\sigma'})]$.
It can be seen that the probability of a given spin configuration only depends on the energy of that state and not on the individual spin configuration.
The entropy of the REM is given by
$S(E) = N\left[\ln 2 - \left(\tfrac{E}{N}\right)^2\right]$ for $|E| < N\sqrt{\ln 2}$. However, this expression only holds if the entropy per spin, $s = S/N$, is finite, i.e., when $|E| < N\sqrt{\ln 2}$. Since $\tfrac{1}{T} = \tfrac{\partial S}{\partial E} = -\tfrac{2E}{N}$, this corresponds to $T > T_c = \tfrac{1}{2\sqrt{\ln 2}}$. For $T < T_c$, the system remains "frozen" in a small number of configurations of energy $E \simeq -N\sqrt{\ln 2}$ and the entropy per spin vanishes in the thermodynamic limit.
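The entropy expression follows from a counting argument that the text compresses. Under the same Gaussian convention, the expected number of states at energy $E$ is, up to a subexponential prefactor,

```latex
\langle n(E) \rangle = 2^N\,[P(E)]
  \simeq \exp\!\Big( N \ln 2 - \frac{E^2}{N} \Big)
\quad\Longrightarrow\quad
S(E) = \ln \langle n(E) \rangle
     = N \Big[ \ln 2 - \Big( \frac{E}{N} \Big)^{2} \Big],
```

which is exponentially large, and hence self-averaging, exactly when $|E| < N\sqrt{\ln 2}$.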
See also
Random subcube model
References
Statistical mechanics | Random energy model | [
"Physics"
] | 778 | [
"Statistical mechanics"
] |
7,306,686 | https://en.wikipedia.org/wiki/WOUGNET | Women of Uganda Network (WOUGNET), also known as Women of Uganda Network Development Limited, is a Ugandan non-governmental organization that aids women and women's organisations in the use of and access to information and communication technologies (ICTs) to share information and address issues concerning them, such as gender norms, advocating for their rights and building communities and businesses through education.
History
WOUGNET was founded in May 2000 by women's organisations from Uganda. Its mailing lists are hosted by Kabissa.
Mission: To promote the use of information and communication technologies by women and girls for gender equality and sustainable development.
Aim: To improve the conditions of life for Ugandan women, by enhancing their capacities and opportunities for exchange, collaboration and information sharing.
Vision: An inclusive and just society where women and girls are enabled to use ICTs for sustainable development.
Programs: Information Sharing and Networking, Technical Support, and Gender and ICT Policy Advocacy.
WOUGNET does research and analysis on internet and ICT policies, promotes equal access to information, intersection of gender and technology, capacity building on online safety and emerging technology trends among other activities to ensure that women are catered for in them. It also implements other programs in agriculture, digital inclusion, entrepreneurship, governance and accountability among other programs.
Executive directors
Dorothy Okello (Founder)
Peace Oliver Amuge (May 2020 to February 2023).
Sandra Aceng from March 2023 to date.
Memberships
WOUGNET is a member of;
ICT4Democracy (ICT4D) network
Women's Rights Online (WRO) network spearheaded by World Wide Web Foundation.
Association for Progressive Communications (APC) since January 2005.
Girls Not Brides since 25 March 2012.
The Global Network Initiative (GNI) since 2019.
Digital Human Rights Lab since 2019.
Uganda Women's Network.
Forum for Agricultural Research in Africa (RUFORUM).
Tools
WOUGNET uses email, social media, the web, SMS (short messaging service) and "traditional means" such as radio, television and print media such as newspapers to communicate and share information about online gender based violence (OGBV), online safety among other issues.
Awards
In 2013, WOUGNET was awarded the Winner Of The Democracy Innovation at the closing ceremony of the second World Forum for Democracy held in Strasbourg. The Innovation Award recognized the efforts taken to involve citizens in democratic processes and the general public life.
Members
WOUGNET has no membership fees for its three types of membership: individual, organisation (women's organisations based in Uganda) and affiliate (organisations that are not Uganda-based women's organisations). To become a member, one is required to subscribe to the WOUGNET mailing list.
WOUGNET members include:
Reach out Wives of Soldiers’ Association (ROWOSA)
Slum Aid Project (SAP)
Ibanda Women's Guild (IWOGU)
Gabula Atudde Women Group (GABULA ATUDDE)
Tusubira Women's Group (TUWOGRO)
Warm Hearts Foundation (WHF)
Katosi Women Development Trust (KWDT)
Ntulume Village Women Development Association (NVIWODA)
Uganda Women Entrepreneurs Association (UWEAL)
Comfort Community Empowerment Network (COCENET)
Local Sustainable Communities Organizations (LOSCO)
St Bruno Doll Making Group
Hope Case Foundation (HCF)
Kigezi Women in Development (KWID)
Uganda Muslim Women Vision (UMWV)
Grassroots Women's Association for Development (GWAD)
Disabled Women in Development (DIWODE)
Karma Rural Women's Development Organization (KRUWODO)
Community action for sustainable livelihood (CASUL)
Awards and recognitions
Inclusion & Empowerment by World Summit Award (WSA) in 2003.
Democracy Innovation Award by the Council of Europe at the World Forum for Democracy in 2013.
Activities, campaigns, workshops and trainings
In 2005, WOUGNET registered Ugandans who would attend the World Summit on Information Society (WSIS) that happened in Tunis in Tunisia.
WOUGNET partnered with Womensnet, South Africa and APC-Africa-Women (AAW) and ran an SMS based 16 Days of Activism campaign where messages against violence against women were sent out by both individuals and organisations.
WOUGNET partnered with Internews and trained Civil Society organisations (CSOs) and Human Rights Defenders (HRDs) that wanted to strengthen advocacy strategies for women's rights and privacy online.
WOUGNET engaged policymakers, government agencies, CSOs and lawmakers to better understand how cybercrime legislation, data protection, access to information among other issues affected women.
Projects and reports
Reports
Some of the reports include:
Bridging the Digital Gender Gap in Uganda: An Assessment of Women Rights Online Based on the Principles of the African Declaration of Internet Rights and Freedoms (AFDEC) which addressed women's internet usage performance in Uganda.
WOUGNET's current, present and past projects include:
Civil Society in Uganda Digital Support Programme (CUSDS), supported by the Women's Peace and Humanitarian Fund, which responded to the COVID-19 emergency in Uganda in 2020 by strengthening the institutional digital capacity of its 23 member organisations so that they could remain resilient amid COVID-19 roadblocks and restrictions on the movement of staff.
Women's Rights Online Media Campaigns in Uganda supported by Association for Progressive Communications (APC) in 2020 under All Women Count Project.
Enhancing Women's Rights Online through Inclusive and effective response to online gender-based violence in Uganda. supported by Digital Human Rights Lab in 2021.
Our Voices, Our Futures (OVOF) funded by Association for Progressive Communications (APC) from 2021 to 2025.
Saving Women's Journalists from Online harassment in Uganda by Improving Legislation on Freedom of Expression in the Digital Spaces and Tackling Online harassment (SWIFT) supported by Urgent Action Funds in 2021.
Promoting Smart Policy Options in Closing Gender Digital Divide in Uganda, in partnership with CfMA supported by World Wide Web Foundation in 2020–2021.
Strengthening Uganda's Rights to Freedom of Expression through Policy Advocacy and Media (SURFACE) supported by International Centre for Not-for-Profit Law (ICNL) in 2021.
Marker-Assisted Breeding of selected Native Chickens in Mozambique and Uganda in partnership with Eduardo Mondlane Mozambique, Makerere University, Gulu University and International Rural Poultry Centre- Kyeema Foundation (Mozambique) supported by African Union from 2019 to 2022.
Strengthening use of ICTs and social media for Citizen Engagement and improved Service Delivery supported by SIDA in Eastern and Indigo Trust UK in Northern Uganda.
Strengthening use of ICTs and Social media for citizen engagement and improved service delivery, funded by Indigo Trust UK.
Increasing women's decision making and influence in Internet Governance and ICT policy for the realization of women's rights in Africa, implemented with WomensNet in Uganda and South Africa and supported by UN Women Fund for Gender Equality
See also
Association for Progressive Communications
Global Network Initiative
Dorothy Okello
References
Organizations established in 2000
Women's organisations based in Uganda
Organizations for women in science and technology
Information technology organisations based in Uganda
2000 establishments in Uganda
Women's rights in Uganda | WOUGNET | [
"Technology"
] | 1,451 | [
"Organizations for women in science and technology",
"Women in science and technology"
] |
7,307,121 | https://en.wikipedia.org/wiki/Nokia%20N95 | The Nokia N95 is a mobile phone produced by Nokia as part of their Nseries line of portable devices. Announced in September 2006, it was released to the market in March 2007. The N95 ran S60 3rd Edition, on Symbian OS v9.2. It has a two-way sliding mechanism, which can be used to access either media playback buttons or a numeric keypad. It was first released in silver and later on in black, with limited edition quantities in gold and purple. The launch price of the N95 was around (about , ).
The N95 was a high-end model that was marketed as a "multimedia computer", much like other Nseries devices. It featured a then-high-resolution 5-megapixel digital camera with Carl Zeiss optics and a flash, as well as a then-large display measuring 2.6 inches. It was also Nokia's first device with a built-in Global Positioning System (GPS) receiver, used for maps or turn-by-turn navigation, and their first with an accelerometer. It was also one of the earliest devices on the market supporting HSDPA (3.5G) signals.
After the introduction of the original model (technically named N95-1), several updated versions were released, most notably the N95 8GB with 8 gigabytes of internal storage, a larger display and improved battery. The 'classic' N95 and its upgraded variant N95 8GB are widely considered as breakthrough devices at the time of their launch. The N95 was well-regarded for its camera, GPS and mapping capabilities, and its innovative dual-slider form factor, and some have hailed it as one of the best mobile devices to have been released.
History
The phone was unveiled on 26 September 2006 at the Nokia Open Studio 2006 event in New York City. It was considered a turning point in the mobile industry due to its various capabilities; however, the device took a further six months to reach release. On 8 March 2007, Nokia began shipping the N95 in key European, Asian, and Middle Eastern markets. It went on sale in many more countries during the week of 11 March. The N95 was still only available in limited quantities at this early stage, and therefore its price was briefly raised to 800 euros.
On 7 April 2007, the N95 went on sale in the United States through Nokia's Flagship stores in New York and Chicago and through Nokia's nseries.com website. No US carriers were expected to offer this phone. The U.S. version started retailing without carrier branding or discounts in Nokia's flagship stores in New York and Chicago on 26 September 2007.
On 29 August 2007, two updated versions of the N95 were announced at a press event in London; first, the N95-2 (N95 8 GB), an updated version for the European/Asian markets with 8 gigabytes of internal storage and larger screen; secondly, the N95-3 (N95 NAM), replacing the original 2100 MHz W-CDMA air interface with support for the 850 MHz and 1900 MHz frequencies used for the 3G networks of most GSM-compatible mobile carriers in the Americas, including AT&T Mobility.
Finally, later on 7 January 2008, Nokia introduced the N95-4, which is the US 8 GB version of the N95-3. The phone got its FCC approval on 30 January and launched 18 March. The first carrier to utilise this approval was Rogers Wireless in May 2009. Also at CES 2008, a red-coloured limited edition Nokia N95 was announced and released that year.
The N95's main competitors during its lifetime were the LG Prada, Apple's iPhone (1st generation), and Sony Ericsson's W950i and K850. The N95 managed to outsell its rivals. Despite the much-hyped iPhone's multi-touch technology, thin design and advanced web capabilities, the N95 held several key advantages over it, such as its camera with flash, video recording, Bluetooth file sharing, 3G and 3.5G connectivity, GPS, third-party applications and several other features.
Even after the release of later Nseries phones, the N95's retail price was still around (about ) as of early 2010 despite its three-year-old age.
Features
Integrated GPS ability
The N95 contained an integrated GPS receiver which was located below the 0 key on the keypad. The phone shipped with Nokia Maps navigation software.
Multimedia features
Out of the box, the N95 supported audio in MP3, WMA, RealAudio, SP-MIDI, AAC+, eAAC+, MIDI, AMR, and M4A formats. Its two-way slide, when opened towards the keypad, allowed access to its media playback buttons. A standard 3.5 mm jack is located on the left side of the phone and allowed the user to connect any standard headphones to the unit. With the AD-43 headset adapter the N95 introduced support for multiple remote control buttons on the headset. Users can also use Bluetooth for audio output using A2DP, or use the built-in stereo speakers. The N95 is also capable of playing video in 3GP, MPEG4, RealVideo, and, in newer firmware, Flash Video formats. All of the phone's video output could also be played through the TV-out feature. TV-out is a feature offered by the phone's OMAP processor that allowed users to connect the smartphone, using the supplied cable, to a TV or any other composite video input. Its main purpose was to allow users to show photos and videos on a large screen. The N95's built-in UPnP and DLNA capabilities also allowed the user to share the phone's media over a WLAN network. This provides easy access to the photos, music, and videos stored on the phone from other UPnP/DLNA-capable devices on the network, enabling them to be watched or downloaded over the air.
Internet
The N95 had built-in Wi-Fi, with which it could access the Internet (through an 802.11b/g wireless network). The N95 could also connect to the Internet through a carrier packet data network such as UMTS, HSDPA, or EDGE. The webkit-based browser displayed full web pages as opposed to simplified pages as on most other phones. Web pages may be viewed in portrait or landscape mode and automatic zooming was supported. The N95 also has built-in Bluetooth and works with wireless earpieces that use Bluetooth 2.0 technology and for file transfer.
The original N95 did not support US-based versions of UMTS/HSDPA; UMTS features in these versions of the phone are disabled by default. Furthermore, the later N95 US versions support only AT&T's 850/1900 MHz UMTS/HSDPA bands; neither T-Mobile USA's 1700 MHz band nor the international 2100 MHz band is supported.
The phone could also act as a WAN access point allowing a tethered PC access to a carrier's packet data network. VoIP software and functionality is also included with the phone (though some carriers have opted to remove this feature).
Accelerometer
The N95 included a built-in accelerometer. This was originally only used for video stabilization and photo orientation (to keep landscape or portrait shots oriented as taken).
Nokia Research Center allowed an application interface directly to the accelerometer, allowing software to use the data from it. Nokia has released a step counter application to demonstrate this. Another Nokia-created application taking advantage of the accelerometer is Nokia Sports Tracker.
Third-party programs were created, including software that will automatically change the screen orientation when the phone is tilted, a program that simulates the sounds of a Star Wars lightsaber when the phone is waved through the air, a program allowing the user to mute the phone by turning it face-down, etc.
N-Gage
The N95 was compatible with the N-Gage mobile gaming service.
Reception
The N95 was much talked about after its announcement but was initially viewed as a niche, feature-packed device. However, it became a huge sales success for Nokia when released in most regions. Seven million Nokia N95 units had been sold by the end of 2007. In its Q1 2008 report, Nokia claimed that 3 million N95 (including 8GB variant) units were shipped that quarter, bringing the total to at least 10 million. It managed to outsell rivals such as the LG Viewty and the iPhone.
Its camera capabilities put it in competition with phones such as Sony Ericsson K850i.
On 6 November 2007, AllAboutSymbian declared the N95 8GB as the "best smartphone ever". Years later on 24 January 2013, PC Magazine described the Nokia N95 as "One of the best smartphones in history on any platform". Gsmarena described N95 as "the best mobile phone on the market with no adequate competitors".
A slightly improved model in a candybar form called Nokia N82 was released in late 2007. The next year saw the introduction of the Nokia N96.
The 2010 Indian Malayalam-language experimental film Jalachhayam was shot entirely using a Nokia N95 8GB.
Specification sheet
Variants
N95 8GB (N95-2)
A revision of the N95, called N95 8 GB (N95-2, internally known as RM-320), was announced on 29 August 2007, and released in October 2007. It was released in a black color, instead of silver like the N95-1.
Because of this new model, the original N95 is often referred to as N95 Classic.
The changes compared to the original N95 are:
Improvements
8 GB separate internal memory
Larger display (up from 2.6" to 2.8").
128 MB RAM (up from 64 MB), 95 MB available.
Demand paging (although the N95 supports this too, since firmware version 20.0.015)
1200 mAh battery (BL-6F), up from 950 mAh
Cosmetic changes to media and front-panel buttons
New model of handsfree/remote control, AD-54 (as opposed to AD-43 for previous N95 versions)
New multimedia menu, with Nokia's Ovi content integration
Built-in Automatic Screen Rotation (ASR) in software versions v20.0.016 onwards for the N95 8 GB version and from v30.0.015 for N95-1, respectively.
Black faceplate instead of the original silver.
Sturdier battery cover.
Negative changes
Pixel density was 142 DPI, compared to 153 DPI for the N95; this is due to the larger display but with the same resolution (QVGA)
MicroSD slot removed
Slider protecting camera lens was removed to make room for the larger battery; the camera application is now started by holding down the shutter release button
Removal of built-in video editor (later added with the firmware upgrades)
Mass: 128 g, up 8 g from 120 g
N95 NAM (N95-3)
The Nokia N95-3 was a revision of the N95, internally designated as RM-160, designed specifically for the North American market. It was also available in Australian and South American market.
The following was changed from the original version:
128 MB RAM, up from 64 MB.
WCDMA (HSDPA) 850 and 1900 MHz, instead of 2100 MHz.
1200 mAh battery, up from 950 mAh.
Talk time up to 190 min (WCDMA), up to 250 min (GSM).
Slider protecting camera lens removed to make room for the larger battery.
Camera flash moved to the vertical axis of the phone, so when the phone is used as a camera it sits to the side of the camera, instead of below as in the N95-1.
Cosmetic changes to media buttons.
Height: 2.05 cm, down from 2.10 cm.
Mass: 125 g, up from 120 g.
White keyboard light instead of blue for visibility improvement.
Current firmware version V 35.2.001, 13-10-09, RM-160
N95 8GB NAM (N95-4)
The main differences to the N95-2 were:
Camera lens was now more flush with the phone's face.
Multimedia keys were less glossy.
Both N95-3 and N95-4 also had some additional changes, such as the removal of the sliding lens cover for the camera, improved battery life, and doubling of RAM from 64 to 128 MB.
N95 CHINA (N95-5)
Featuring the internal name RM-245, the N95-5 was targeted at the Chinese market. The main difference from the regular N95 was the lack of any 3G connectivity support, as 3G had not yet been adopted in China at the time of release, and the absence of WLAN connectivity, due to Chinese regulations.
N95 8GB CHINA (N95-6)
The N95-6, internally coded RM-321 was a Chinese market-targeted version of the N95-2, lacking 3G and WLAN support just like the N95-5.
Versions comparison
This table lists only the specifications that differ between versions of the N95 models.
Cancelled revision
In late 2020, prototype videos surfaced of a planned revision of the N95 that was never put into production, which included slide-out media controls and speakers, and a kickstand.
See also
Nokia Nseries
References
External links
Official Nokia N95 8GB Technical Specifications (forums.nokia.com version)
Official Nokia N95-3 North America Technical Specifications (forums.nokia.com version)
Official Nokia N95 Product Page
Official Nokia N95 Support Page
Official Nokia Press Release
N-Gage (service) compatible devices
Mobile phones introduced in 2007
Slider phones
Nokia Nseries
Mobile phones with user-replaceable battery
Mobile phones with infrared transmitter
Discontinued flagship smartphones | Nokia N95 | [
"Technology"
] | 2,983 | [
"Discontinued flagship smartphones",
"Flagship smartphones"
] |
7,307,216 | https://en.wikipedia.org/wiki/Hammett%20acidity%20function | The Hammett acidity function (H0) is a measure of acidity that is used for very concentrated solutions of strong acids, including superacids. It was proposed by the physical organic chemist Louis Plack Hammett and is the best-known acidity function used to extend the measure of Brønsted–Lowry acidity beyond the dilute aqueous solutions for which the pH scale is useful.
In highly concentrated solutions, simple approximations such as the Henderson–Hasselbalch equation are no longer valid due to the variations of the activity coefficients. The Hammett acidity function is used in fields such as physical organic chemistry for the study of acid-catalyzed reactions, because some of these reactions use acids in very high concentrations, or even neat (pure).
Definition
The Hammett acidity function, H0, can replace the pH in concentrated solutions. It is defined using an equation analogous to the Henderson–Hasselbalch equation:
$H_0 = \mathrm{p}K_{\mathrm{BH^+}} + \log\frac{[\mathrm{B}]}{[\mathrm{BH^+}]}$
where log(x) is the common logarithm of x, and pKBH+ is −log(K) for the dissociation of BH+, which is the conjugate acid of a very weak base B, with a very negative pKBH+. In this way, it is rather as if the pH scale has been extended to very negative values. Hammett originally used a series of anilines with electron-withdrawing groups for the bases.
Hammett also pointed out the equivalent form
$H_0 = -\log\left(a_{\mathrm{H^+}}\,\frac{\gamma_{\mathrm{B}}}{\gamma_{\mathrm{BH^+}}}\right)$
where $a_{\mathrm{H^+}}$ is the activity of the hydrogen ion, and $\gamma_{\mathrm{B}}$ and $\gamma_{\mathrm{BH^+}}$ are the thermodynamic activity coefficients of the base and its conjugate acid. In dilute aqueous solution (pH 0–14) the predominant acid species is H3O+ and the activity coefficients are close to unity, so H0 is approximately equal to the pH. However, beyond this pH range, the effective hydrogen-ion activity changes much more rapidly than the concentration. This is often due to changes in the nature of the acid species; for example in concentrated sulfuric acid, the predominant acid species ("H+") is not H3O+ but rather H3SO4+, which is a much stronger acid. The value H0 = −12 for pure sulfuric acid must not be interpreted as pH = −12 (which would imply an impossibly high H3O+ concentration of 10^12 mol/L in ideal solution). Instead it means that the acid species present (H3SO4+) has a protonating ability equivalent to H3O+ at a fictitious (ideal) concentration of 10^12 mol/L, as measured by its ability to protonate weak bases.
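As a worked example of the defining equation, the following minimal Python sketch computes H0 from a measured indicator ratio; the indicator values here are hypothetical, chosen only to show the arithmetic:

```python
import math

def hammett_h0(pk_bh, ratio_b_to_bh):
    """H0 = pK(BH+) + log10([B]/[BH+]), from the spectroscopically
    measured ratio of unprotonated to protonated indicator base."""
    return pk_bh + math.log10(ratio_b_to_bh)

# Hypothetical nitroaniline-type indicator with pK(BH+) = -9.3 observed
# to be 99% protonated in the medium, i.e. [B]/[BH+] = 1/99:
print(round(hammett_h0(-9.3, 1 / 99), 2))  # -11.3
```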
Although the Hammett acidity function is the best known acidity function, other acidity functions have been developed by authors such as Arnett, Cox, Katritzky, Yates, and Stevens.
Typical values
On this scale, pure H2SO4 (18.4 M) has an H0 value of −12, and pyrosulfuric acid has H0 ~ −15. Note that the Hammett acidity function makes no reference to water in its equation. It is a generalization of the pH scale—in a dilute aqueous solution (where B is H2O), pH is very nearly equal to H0. By using a solvent-independent quantitative measure of acidity, the implications of the leveling effect are eliminated, and it becomes possible to directly compare the acidities of different substances (e.g. using pKa, HF is weaker than HCl or H2SO4 in water but stronger than HCl in glacial acetic acid).
H0 for some concentrated acids:
Fluoroantimonic acid (1990): −23 > H0 > −28
Magic acid (1974): −23
Carborane superacids: H0 < −18.0
Fluorosulfuric acid (1944): −15.1
Hydrogen fluoride: −15.1
Trifluoromethanesulfonic acid (1940): −14.9
Perchloric acid: −13
Sulfurochloridic acid: −13.8; −12.78
Sulfuric acid: −12.0
For mixtures (e.g., partly diluted acids in water), the acidity function depends on the composition of the mixture and has to be determined empirically. Graphs of H0 vs mole fraction can be found in the literature for many acids.
References
Acid–base chemistry
Physical organic chemistry | Hammett acidity function | [
"Chemistry"
] | 927 | [
"Acid–base chemistry",
"Acids",
"Equilibrium chemistry",
"Superacids",
"nan",
"Physical organic chemistry"
] |
7,308,303 | https://en.wikipedia.org/wiki/Desktop%20virtualization | Desktop virtualization is a software technology that separates the desktop environment and associated application software from the physical client device that is used to access it.
Desktop virtualization can be used in conjunction with application virtualization and user profile management systems, now termed user virtualization, to provide a comprehensive desktop environment management system. In this mode, all the components of the desktop are virtualized, which allows for a highly flexible and much more secure desktop delivery model. In addition, this approach supports a more complete desktop disaster recovery strategy as all components are essentially saved in the data center and backed up through traditional redundant maintenance systems. If a user's device or hardware is lost, the restore is straightforward and simple, because the components will be present at login from another device. In addition, because no data are saved to the user's device, if that device is lost, there is much less chance that any critical data can be retrieved and compromised.
System architectures
Desktop virtualization implementations are classified based on whether the virtual desktop runs remotely or locally, on whether the access is required to be constant or is designed to be intermittent, and on whether or not the virtual desktop persists between sessions. Typically, software products that deliver desktop virtualization solutions can combine local and remote implementations into a single product to provide the most appropriate support specific to requirements. The degree of independent functionality of the client device is necessarily interdependent with the server location and access strategy. Virtualization is not strictly required for remote control to exist; rather, virtualization is employed to present independent instances to multiple users, and it requires a strategic segmentation of the host server and presentation at some layer of the host's architecture. The enabling layer, usually application software, is called a hypervisor.
Remote desktop virtualization
Remote desktop virtualization implementations operate in a client/server computing environment. Application execution takes place on a remote operating system which communicates with the local client device over a network using a remote display protocol through which the user interacts with applications. All applications and data used remain on the remote system with only display, keyboard, and mouse information communicated with the local client device, which may be a conventional PC/laptop, a thin client device, a tablet, or even a smartphone. A common implementation of this approach involves hosting multiple desktop operating system instances on a server hardware platform running a hypervisor. Its latest iteration is generally referred to as Virtual Desktop Infrastructure, or "VDI" ("VDI" is often used incorrectly to refer to any desktop virtualization implementation).
Remote desktop virtualization is frequently used in the following scenarios:
in distributed environments with high availability requirements and where desk-side technical support is not readily available, such as branch office and retail environments.
in environments where high network latency degrades the performance of conventional client/server applications
in environments where remote access and data security requirements create conflicting requirements that can be addressed by retaining all (application) data within the data center – with only display, keyboard, and mouse information communicated with the remote client.
It is also used as a means of providing access to Windows applications on non-Windows endpoints (including tablets, smartphones, and non-Windows-based desktop PCs and laptops).
Remote desktop virtualization can also provide a means of resource sharing, to distribute low-cost desktop computing services in environments where providing every user with a dedicated desktop PC is either too expensive or otherwise unnecessary.
For IT administrators, this means a more centralized, efficient client environment that is easier to maintain and able to respond more quickly to the changing needs of the user and business.
Presentation virtualization
Remote desktop software allows a user to access applications and data on a remote computer over a network using a remote-display protocol. A VDI service provides individual desktop operating system instances (e.g., Windows XP, 7, 8.1, 10, etc.) for each user, whereas remote desktop sessions run in a single shared-server operating system. Both session collections and virtual machines support full desktop based sessions and remote application deployment.
The use of a single shared-server operating system instead of individual desktop operating system instances consumes significantly fewer resources than the same number of VDI sessions. At the same time, VDI licensing is both more expensive and less flexible than equivalent remote desktop licenses. Together, these factors can combine to make remote desktop-based remote desktop virtualization more attractive than VDI.
VDI implementations allow for delivering personalized workspace back to a user, which retains all the user's customizations. There are several methods to accomplish this.
Application virtualization
Application virtualization improves delivery and compatibility of applications by encapsulating them from the underlying operating system on which they are executed. A fully virtualized application is not installed on hardware in the traditional sense. Instead, a hypervisor layer intercepts the application, which at runtime acts as if it is interfacing with the original operating system and all the resources managed by it when in reality it is not.
User virtualization
User virtualization separates all of the software aspects that define a user's personality on a device from the operating system and applications, allowing them to be managed independently and applied to a desktop as needed without scripting, group policies, or roaming profiles. The term "user virtualization" can be misleading: this technology is not limited to virtual desktops. User virtualization can be used regardless of platform – physical, virtual, cloud, etc. The major desktop virtualization platform vendors, Citrix, Microsoft and VMware, all offer a form of basic user virtualization in their platforms.
Layering
Desktop layering is a method of desktop virtualization that divides a disk image into logical parts to be managed individually. For example, if all members of a user group use the same OS, then the core OS only needs to be backed up once for the entire group that shares this layer. Layering can be applied to local physical disk images, client-based virtual machines, or host-based desktops. Windows operating systems are not designed for layering, so each vendor must engineer its own proprietary solution.
Desktop as a service
Remote desktop virtualization can also be provided via cloud computing similar to that provided using a software as a service model. This approach is usually referred to as cloud-hosted virtual desktops. Cloud-hosted virtual desktops are divided into two technologies:
Managed VDI, which is based on VDI technology provided as an outsourced managed service, and
Desktop as a service (DaaS), which provides a higher level of automation and real multi-tenancy, reducing the cost of the technology. The DaaS provider typically takes full responsibility for hosting and maintaining the computer, storage, and access infrastructure, as well as applications and application software licenses needed to provide the desktop service in return for a fixed monthly fee.
Cloud-hosted virtual desktops can be implemented using both VDI and Remote Desktop Services-based systems and can be provided through the public cloud, private cloud infrastructure, and hybrid cloud platforms. Private cloud implementations are commonly referred to as "managed VDI". Public cloud offerings tend to be based on desktop-as-a-service technology.
Local desktop virtualization
Local desktop virtualization implementations run the desktop environment on the client device using hardware virtualization or emulation. For hardware virtualization, depending on the implementation both Type I and Type II hypervisors may be used.
Local desktop virtualization is well suited for environments where continuous network connectivity cannot be assumed and where application resource requirements can be better met by using local system resources. However, local desktop virtualization implementations do not always allow applications developed for one system architecture to run on another. For example, it is possible to use local desktop virtualization to run Windows 7 on top of OS X on an Intel-based Apple Mac, using a hypervisor, as both use the same x86 architecture.
See also
Virtual machine
Remote mobile virtualization
References
Further reading
Paul Venezia (April 13, 2011) Virtualization shoot-out: Citrix, Microsoft, Red Hat, and VMware. The leading server virtualization contenders tackle InfoWorld's ultimate virtualization challenge, InfoWorld
Keith Schultz (December 14, 2011) VDI shoot-out: Citrix XenDesktop vs. VMware View. Citrix XenDesktop 5.5 and VMware View 5 vie for the most flexible, scalable, and complete virtual desktop infrastructure, InfoWorld
Keith Schultz (December 14, 2011) VDI shoot-out: HDX vs. PCoIP. The differences between the Citrix and VMware remote desktop protocols are more than skin deep, InfoWorld
Centralized computing
Remote desktop
Thin clients | Desktop virtualization | [
"Technology"
] | 1,767 | [
"Centralized computing",
"IT infrastructure",
"Computer systems"
] |
7,308,831 | https://en.wikipedia.org/wiki/Nomen%20novum | In biological nomenclature, a nomen novum (Latin for "new name"), new replacement name (or replacement name, new substitute name, substitute name) is a scientific name that is created specifically to replace another scientific name, but only when this other name cannot be used for technical, nomenclatural reasons (for example because it is a homonym: it is spelled the same as an existing, older name). It does not apply when a name is changed for taxonomic reasons (representing a change in scientific insight). It is frequently abbreviated, e.g. as nomen nov. or nom. nov.
Zoology
In zoology establishing a new replacement name is a nomenclatural act and it must be expressly proposed to substitute a previously established and available name.
Often, the older name cannot be used because another animal was described earlier with exactly the same name. For example, Lindholm discovered in 1913 that the generic name Jelskia, established by Bourguignat in 1877 for a European freshwater snail, could not be used because another author, Taczanowski, had proposed the same name in 1871 for a spider. So Lindholm proposed the new replacement name Borysthenia. This is an objective synonym of Jelskia Bourguignat, 1877, because it has the same type species, and is used today as Borysthenia.
New replacement names are also often necessary for names of species, and have been proposed for more than 100 years. In 1859 Bourguignat saw that the name Bulimus cinereus Mortillet, 1851 for an Italian snail could not be used because Reeve had proposed exactly the same name in 1848 for a completely different Bolivian snail. Since it was understood even then that the older name always has priority, Bourguignat proposed the new replacement name Bulimus psarolenus, and also added a note why this was necessary. The Italian snail is known to this day under the name Solatopupa psarolena (Bourguignat, 1859).
A new replacement name must obey certain rules; not all of these are well known.
Not every author who proposes a name for a species that already has another name, establishes a new replacement name. An author who writes "The name of the insect species with the green wings shall be named X, this is the one that the other author has named Y", does not establish a new replacement name (but a regular new name).
The International Code of Zoological Nomenclature prescribes that for a new replacement name, an expressed statement must be given by the author, that is, an explicit statement concerning the act of replacing the previous name. It is not necessary to employ the term nomen novum, but something must be expressed concerning the act of substituting a name. Implicit evidence ("everybody knows why the author used that new name") is not allowed on this occasion. Many zoologists do not know that this expressed statement is necessary, and therefore a variety of names are regarded as having been established as new replacement names (often including names that were mentioned without any description, which is fundamentally contrary to the rules).
The author who proposes a new replacement name must state exactly which name shall be replaced. It is not possible to mention three available synonyms at once to be replaced. Usually, the author explains why the new replacement name is needed.
Sometimes we read: "the species cannot keep this old name P. brasiliensis, because it does not live in Brazil, so I propose the new name P. angolana". Even though this would not justify a new replacement name under the Code's rules, the author believed that a new name was necessary and gave an expressed statement concerning the act of replacing. So the name P. angolana was made available on this occasion, and is an objective synonym of P. brasiliensis.
A new replacement name can only be used for a taxon if the name that it replaces cannot be used, as in the example above with the snail and the spider, or in the other example with the Italian and the Bolivian snail. The animal from Angola must keep its name brasiliensis, because this is the older name.
New replacement names do not occur very frequently, but they are not extremely rare. About 1% of the currently used zoological names might be new replacement names. There are no exact statistics covering all animal groups. In a sample of 2,200 species names and 350 genus names of European non-marine molluscs, which might be a representative group of animals, 0.7% of the specific and 3.4% of the generic names were correctly established as new replacement names (and a further 0.7% of the specific and 1.7% of the generic names have incorrectly been regarded as new replacement names by some authors).
Algae, fungi and plants
For those taxa whose names are regulated by the International Code of Nomenclature for algae, fungi, and plants (ICNafp), a nomen novum or replacement name is a name published as a substitute for "a legitimate or illegitimate, previously published name, which is its replaced synonym and which, when legitimate, does not provide the final epithet, name, or stem of the replacement name". For species, replacement names may be needed because the specific epithet is not available in the genus for whatever reason. Examples:
Carl Linnaeus gave the herb rosemary the scientific name Rosmarinus officinalis in 1753. It is now regarded as one of many species in the genus Salvia. It cannot be transferred to this genus as "Salvia officinalis" because Linnaeus gave this name to the herb sage. An acceptable name in the genus Salvia was published by Fridolin Spenner in 1835. The replacement name is cited as Salvia rosmarinus Spenn.; the replaced synonym is Rosmarinus officinalis L. The author of the replaced synonym is not included in the citation of the replacement name.
The plant name Polygonum persicaria was published by Linnaeus in 1753. In 1821, Samuel Gray transferred the species to the genus Persicaria. He could not do this using the name "Persicaria persicaria" because the ICNafp does not allow tautonyms. Accordingly, he published the replacement name Persicaria maculosa. The replacement name is Persicaria maculosa Gray; the replaced synonym is Polygonum persicaria L. Again, the author of the replaced synonym is not included in the citation of the replacement name.
The fungus name Marasmius distantifolius was published by Y.S. Tan and Desjardin in 2009. Later it was discovered that this combination had already been used for a different species by William Murrill in 1915, so Tan and Desjardin's name was an illegitimate later homonym. Accordingly, in 2010 Mešić and Tkalčec published the replacement name Marasmius asiaticus for the species. The replacement name is cited as Marasmius asiaticus Mešić & Tkalčec; the replaced synonym as Marasmius distantifolius Y.S. Tan & Desjardin. In this example, the replaced synonym is illegitimate.
The plant name Lycopodium densum was published by Jacques Labillardière in 1807. However, the combination had already been used for a different species by Jean-Baptiste Lamarck in 1779, so Labillardière's name is illegitimate. Werner Rothmaler knew this when in 1944 he transferred the species to the genus Lepidotis, and so explicitly published Lepidotis densa as a new name ("", ""). The replacement name is Lepidotis densa Rothm.; the replaced synonym is Lycopodium densum Labill. Even though the specific epithet in Lepidotis appears to be the same, it is nevertheless new. So when in 1983, Josef Holub transferred the species to Pseudolycopodium, the name in that genus is cited as Pseudolycopodium densum (Rothm.) Holub., the basionym being the replacement name Lepidotis densa Rothm. not the illegitimate replaced synonym Lycopodium densum Labill.
See also
Glossary of scientific naming
Nomen dubium
Nomen conservandum
Nomen nudum
Nomen oblitum
References
External links
International Code of Zoological Nomenclature (ICZN) (only English version, the French version is not online)
Latin biological phrases
Zoological nomenclature
Botanical nomenclature
Biological classification | Nomen novum | [
"Biology"
] | 1,763 | [
"Zoological nomenclature",
"Botanical nomenclature",
"Botanical terminology",
"Biological nomenclature",
"nan",
"Latin biological phrases"
] |
7,309,022 | https://en.wikipedia.org/wiki/Nearest%20neighbor%20search | Nearest neighbor search (NNS), as a form of proximity search, is the optimization problem of finding the point in a given set that is closest (or most similar) to a given point. Closeness is typically expressed in terms of a dissimilarity function: the less similar the objects, the larger the function values.
Formally, the nearest-neighbor (NN) search problem is defined as follows: given a set S of points in a space M and a query point q ∈ M, find the closest point in S to q. Donald Knuth in vol. 3 of The Art of Computer Programming (1973) called it the post-office problem, referring to an application of assigning to a residence the nearest post office. A direct generalization of this problem is a k-NN search, where we need to find the k closest points.
Most commonly M is a metric space and dissimilarity is expressed as a distance metric, which is symmetric and satisfies the triangle inequality. Even more common, M is taken to be the d-dimensional vector space where dissimilarity is measured using the Euclidean distance, Manhattan distance or other distance metric. However, the dissimilarity function can be arbitrary. One example is asymmetric Bregman divergence, for which the triangle inequality does not hold.
Applications
The nearest neighbor search problem arises in numerous fields of application, including:
Pattern recognition – in particular for optical character recognition
Statistical classification – see k-nearest neighbor algorithm
Computer vision – for point cloud registration
Computational geometry – see Closest pair of points problem
Cryptanalysis – for lattice problem
Databases – e.g. content-based image retrieval
Coding theory – see maximum likelihood decoding
Semantic Search
Data compression – see MPEG-2 standard
Robotic sensing
Recommendation systems, e.g. see Collaborative filtering
Internet marketing – see contextual advertising and behavioral targeting
DNA sequencing
Spell checking – suggesting correct spelling
Plagiarism detection
Similarity scores for predicting career paths of professional athletes.
Cluster analysis – assignment of a set of observations into subsets (called clusters) so that observations in the same cluster are similar in some sense, usually based on Euclidean distance
Chemical similarity
Sampling-based motion planning
Methods
Various solutions to the NNS problem have been proposed. The quality and usefulness of the algorithms are determined by the time complexity of queries as well as the space complexity of any search data structures that must be maintained. The informal observation usually referred to as the curse of dimensionality states that there is no general-purpose exact solution for NNS in high-dimensional Euclidean space using polynomial preprocessing and polylogarithmic search time.
Exact methods
Linear search
The simplest solution to the NNS problem is to compute the distance from the query point to every other point in the database, keeping track of the "best so far". This algorithm, sometimes referred to as the naive approach, has a running time of O(dN), where N is the cardinality of S and d is the dimensionality of S. There are no search data structures to maintain, so the linear search has no space complexity beyond the storage of the database. Naive search can, on average, outperform space partitioning approaches on higher dimensional spaces.
The absolute distance is not required for distance comparison, only the relative distance. In geometric coordinate systems the comparison can be sped up considerably by omitting the square root from the calculation of the distance between two coordinates: comparing squared distances yields identical results.
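A minimal sketch of the naive scan (illustrative, not from the source), comparing squared distances so that the square root is taken only once at the end:

```python
import math

def nearest_neighbor(points, q):
    """Naive O(dN) linear scan over `points` for the query `q`."""
    best, best_d2 = None, math.inf
    for p in points:
        d2 = sum((pi - qi) ** 2 for pi, qi in zip(p, q))  # no sqrt needed here
        if d2 < best_d2:
            best, best_d2 = p, d2
    return best, math.sqrt(best_d2)

print(nearest_neighbor([(0, 0), (3, 4), (1, 1)], (1, 2)))  # ((1, 1), 1.0)
```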
Space partitioning
Since the 1970s, the branch and bound methodology has been applied to the problem. In the case of Euclidean space, this approach encompasses spatial index or spatial access methods. Several space-partitioning methods have been developed for solving the NNS problem. Perhaps the simplest is the k-d tree, which iteratively bisects the search space into two regions containing half of the points of the parent region. Queries are performed via traversal of the tree from the root to a leaf by evaluating the query point at each split. Depending on the distance specified in the query, neighboring branches that might contain hits may also need to be evaluated. For constant-dimension query time, average complexity is O(log N) in the case of randomly distributed points, while worst-case complexity is O(kN^(1−1/k)).
Alternatively the R-tree data structure was designed to support nearest neighbor search in dynamic context, as it has efficient algorithms for insertions and deletions such as the R* tree. R-trees can yield nearest neighbors not only for Euclidean distance, but can also be used with other distances.
In the case of general metric space, the branch-and-bound approach is known as the metric tree approach. Particular examples include vp-tree and BK-tree methods.
Using a set of points taken from a 3-dimensional space and put into a BSP tree, and given a query point taken from the same space, a possible solution to the problem of finding the nearest point-cloud point to the query point is given in the following description of an algorithm.
(Strictly speaking, no such point may exist, because it may not be unique. But in practice, usually we only care about finding any one of the subset of all point-cloud points that exist at the shortest distance to a given query point.) The idea is, for each branching of the tree, guess that the closest point in the cloud resides in the half-space containing the query point. This may not be the case, but it is a good heuristic. After having recursively gone through all the trouble of solving the problem for the guessed half-space, now compare the distance returned by this result with the shortest distance from the query point to the partitioning plane. This latter distance is that between the query point and the closest possible point that could exist in the half-space not searched. If this distance is greater than that returned in the earlier result, then clearly there is no need to search the other half-space. If there is such a need, then you must go through the trouble of solving the problem for the other half space, and then compare its result to the former result, and then return the proper result. The performance of this algorithm is nearer to logarithmic time than linear time when the query point is near the cloud, because as the distance between the query point and the closest point-cloud point nears zero, the algorithm needs only perform a look-up using the query point as a key to get the correct result.
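A compact sketch of this branch-and-bound recursion over an axis-aligned (k-d style) partitioning is shown below. The node layout is hypothetical, but the key step matches the description above: the far half-space is searched only when the splitting plane lies closer than the best distance found so far.

```python
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass
class Node:
    point: Sequence[float]           # point stored at this split
    axis: int                        # coordinate used to partition here
    left: Optional["Node"] = None    # half-space with coord <  point[axis]
    right: Optional["Node"] = None   # half-space with coord >= point[axis]

def nearest(node, query, best=None, best_d2=float("inf")):
    if node is None:
        return best, best_d2
    # Update the "best so far" with the point stored at this node.
    d2 = sum((a - b) ** 2 for a, b in zip(node.point, query))
    if d2 < best_d2:
        best, best_d2 = node.point, d2
    # Guess: the closest point lies in the half-space containing the query.
    diff = query[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
    best, best_d2 = nearest(near, query, best, best_d2)
    # Search the other half-space only if the splitting plane is closer
    # than the current best distance.
    if diff * diff < best_d2:
        best, best_d2 = nearest(far, query, best, best_d2)
    return best, best_d2
```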
Approximation methods
An approximate nearest neighbor search algorithm is allowed to return points whose distance from the query is at most c times the distance from the query to its nearest points. The appeal of this approach is that, in many cases, an approximate nearest neighbor is almost as good as the exact one. In particular, if the distance measure accurately captures the notion of user quality, then small differences in the distance should not matter.
Greedy search in proximity neighborhood graphs
Proximity graph methods (such as navigable small world graphs and HNSW) are considered the current state-of-the-art for the approximate nearest neighbors search.
The methods are based on greedy traversal of proximity neighborhood graphs G(V, E) in which every point x_i in S is uniquely associated with a vertex v_i in V. The search for the nearest neighbors to a query q in the set S takes the form of searching for the vertex nearest to q in the graph G(V, E).
The basic algorithm – greedy search – works as follows: search starts from an enter-point vertex by computing the distances from the query q to each vertex of its neighborhood , and then finds a vertex with the minimal distance value. If the distance value between the query and the selected vertex is smaller than the one between the query and the current element, then the algorithm moves to the selected vertex, and it becomes new enter-point. The algorithm stops when it reaches a local minimum: a vertex whose neighborhood does not contain a vertex that is closer to the query than the vertex itself.
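A minimal sketch of this greedy routine, assuming the graph is given as a mapping from each vertex to its neighborhood and that every vertex has at least one neighbor (the names are illustrative):

```python
def greedy_search(graph, dist, query, enter_point):
    """Greedy routing in a proximity graph; stops at a local minimum."""
    current = enter_point
    current_d = dist(query, current)
    while True:
        # Closest vertex in the neighborhood of the current vertex.
        candidate = min(graph[current], key=lambda v: dist(query, v))
        candidate_d = dist(query, candidate)
        if candidate_d >= current_d:
            return current       # no neighbor is closer: local minimum
        current, current_d = candidate, candidate_d
```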
The idea of proximity neighborhood graphs was exploited in multiple publications, including the seminal paper by Arya and Mount, in the VoroNet system for the plane, in the RayNet system for Euclidean space, and in the Navigable Small World, Metrized Small World and HNSW algorithms for the general case of spaces with a distance function. These works were preceded by a pioneering paper by Toussaint, in which he introduced the concept of a relative neighborhood graph.
Locality sensitive hashing
Locality sensitive hashing (LSH) is a technique for grouping points in space into 'buckets' based on some distance metric operating on the points. Points that are close to each other under the chosen metric are mapped to the same bucket with high probability.
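One common family of LSH functions for angular similarity uses random hyperplanes; a hedged sketch (the bucket layout and parameter values are illustrative, not canonical):

```python
import numpy as np

def lsh_buckets(points: np.ndarray, n_planes: int = 8, seed: int = 0):
    """Hash points into buckets via random-hyperplane signatures.

    Each signature is the sign pattern of the projections onto
    `n_planes` random hyperplanes; points at a small angle agree on
    most signs and therefore tend to land in the same bucket.
    """
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_planes, points.shape[1]))
    signs = points @ planes.T > 0        # (N, n_planes) boolean signatures
    buckets = {}
    for i, sig in enumerate(map(tuple, signs)):
        buckets.setdefault(sig, []).append(i)
    return buckets
```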
Nearest neighbor search in spaces with small intrinsic dimension
The cover tree has a theoretical bound that is based on the dataset's doubling constant. The bound on search time is O(c^12 log n), where c is the expansion constant of the dataset.
Projected radial search
In the special case where the data is a dense 3D map of geometric points, the projection geometry of the sensing technique can be used to dramatically simplify the search problem.
This approach requires that the 3D data is organized by a projection to a two-dimensional grid and assumes that the data is spatially smooth across neighboring grid cells with the exception of object boundaries.
These assumptions are valid when dealing with 3D sensor data in applications such as surveying, robotics and stereo vision but may not hold for unorganized data in general.
In practice this technique has an average search time of O(1) or O(K) for the k-nearest neighbor problem when applied to real world stereo vision data.
Vector approximation files
In high-dimensional spaces, tree indexing structures become useless because an increasing percentage of the nodes need to be examined anyway. To speed up linear search, a compressed version of the feature vectors stored in RAM is used to prefilter the datasets in a first run. The final candidates are determined in a second stage using the uncompressed data from the disk for distance calculation.
Compression/clustering based search
The VA-file approach is a special case of a compression based search, where each feature component is compressed uniformly and independently. The optimal compression technique in multidimensional spaces is Vector Quantization (VQ), implemented through clustering. The database is clustered and the most "promising" clusters are retrieved. Huge gains over VA-File, tree-based indexes and sequential scan have been observed. Also note the parallels between clustering and LSH.
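A hedged sketch of such a cluster-probing search, using SciPy's k-means and probing only the few clusters whose centroids are closest to the query (the parameters are illustrative, and the result is approximate because the true nearest neighbor may sit in an unprobed cluster):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def cluster_probe_nn(points, query, n_clusters=16, n_probe=3, seed=0):
    """Approximate NN: search only the most 'promising' clusters."""
    centroids, labels = kmeans2(points, n_clusters, minit='++', seed=seed)
    # Rank clusters by centroid distance and keep the n_probe closest.
    order = np.argsort(np.sum((centroids - query) ** 2, axis=1))[:n_probe]
    candidates = np.flatnonzero(np.isin(labels, order))
    sq = np.sum((points[candidates] - query) ** 2, axis=1)
    return int(candidates[np.argmin(sq)])
```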
Variants
There are numerous variants of the NNS problem and the two most well-known are the k-nearest neighbor search and the ε-approximate nearest neighbor search.
k-nearest neighbors
k-nearest neighbor search identifies the top k nearest neighbors to the query. This technique is commonly used in predictive analytics to estimate or classify a point based on the consensus of its neighbors. k-nearest neighbor graphs are graphs in which every point is connected to its k nearest neighbors.
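A brute-force top-k query can be written compactly; a sketch with NumPy, assuming k is smaller than the number of points:

```python
import numpy as np

def knn(points: np.ndarray, query: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k points nearest to `query` (exact, brute force)."""
    sq = np.sum((points - query) ** 2, axis=1)
    idx = np.argpartition(sq, k)[:k]    # the k smallest, unordered, O(N)
    return idx[np.argsort(sq[idx])]     # order just those k by distance
```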
Approximate nearest neighbor
In some applications it may be acceptable to retrieve a "good guess" of the nearest neighbor. In those cases, we can use an algorithm which doesn't guarantee to return the actual nearest neighbor in every case, in return for improved speed or memory savings. Often such an algorithm will find the nearest neighbor in a majority of cases, but this depends strongly on the dataset being queried.
Algorithms that support the approximate nearest neighbor search include locality-sensitive hashing, best bin first and balanced box-decomposition tree based search.
Nearest neighbor distance ratio
Nearest neighbor distance ratio does not apply the threshold to the direct distance from the original point to the challenger neighbor, but to a ratio of it that depends on the distance to the previous neighbor. It is used in CBIR to retrieve pictures through a "query by example" using the similarity between local features. More generally, it is involved in several matching problems.
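The criterion itself is simple; a sketch of the ratio test as commonly applied in local-feature matching (the 0.8 threshold is purely illustrative):

```python
def passes_ratio_test(d_nearest: float, d_second: float,
                      ratio: float = 0.8) -> bool:
    """Accept a match only if the nearest neighbor is sufficiently
    closer than the second-nearest; the threshold is applied to the
    ratio of the two distances, not to the distance itself."""
    return d_nearest < ratio * d_second
```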
Fixed-radius near neighbors
Fixed-radius near neighbors is the problem where one wants to efficiently find all points given in Euclidean space within a given fixed distance from a specified point. The distance is assumed to be fixed, but the query point is arbitrary.
All nearest neighbors
For some applications (e.g. entropy estimation), we may have N data-points and wish to know which is the nearest neighbor for every one of those N points. This could, of course, be achieved by running a nearest-neighbor search once for every point, but an improved strategy would be an algorithm that exploits the information redundancy between these N queries to produce a more efficient search. As a simple example: when we find the distance from point X to point Y, that also tells us the distance from point Y to point X, so the same calculation can be reused in two different queries.
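A sketch of this reuse with NumPy/SciPy: each of the N(N-1)/2 pairwise distances is evaluated once, and the symmetric matrix then answers all N queries (illustrative only; practical all-NN algorithms avoid the quadratic memory):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def all_nearest_neighbors(points: np.ndarray) -> np.ndarray:
    """For each of the N points, the index of its nearest neighbor."""
    d = squareform(pdist(points))   # each pair computed once, then mirrored
    np.fill_diagonal(d, np.inf)     # a point is not its own neighbor
    return np.argmin(d, axis=1)
```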
Given a fixed dimension, a positive semi-definite norm (thereby including every Lp norm), and n points in this space, the nearest neighbour of every point can be found in O(n log n) time and the m nearest neighbours of every point can be found in O(mn log n) time.
See also
Ball tree
Closest pair of points problem
Cluster analysis
Content-based image retrieval
Curse of dimensionality
Digital signal processing
Dimension reduction
Fixed-radius near neighbors
Fourier analysis
Instance-based learning
k-nearest neighbor algorithm
Linear least squares
Locality sensitive hashing
Maximum inner-product search
MinHash
Multidimensional analysis
Nearest-neighbor interpolation
Neighbor joining
Principal component analysis
Range search
Similarity learning
Singular value decomposition
Sparse distributed memory
Statistical distance
Time series
Voronoi diagram
Wavelet
References
Citations
Sources
Further reading
External links
Nearest Neighbors and Similarity Search – a website dedicated to educational materials, software, literature, researchers, open problems and events related to NN searching. Maintained by Yury Lifshits
Similarity Search Wiki – a collection of links, people, ideas, keywords, papers, slides, code and data sets on nearest neighbours
Approximation algorithms
Classification algorithms
Data mining
Discrete geometry
Geometric algorithms
Mathematical optimization
Search algorithms | Nearest neighbor search | [
"Mathematics"
] | 2,836 | [
"Mathematical analysis",
"Discrete mathematics",
"Discrete geometry",
"Approximation algorithms",
"Mathematical relations",
"Mathematical optimization",
"Approximations"
] |
7,309,043 | https://en.wikipedia.org/wiki/Moisture%20analysis | Moisture analysis covers a variety of methods for measuring the moisture content in solids, liquids, or gases. For example, moisture (usually measured as a percentage) is a common specification in commercial food production. There are many applications where trace moisture measurements are necessary for manufacturing and process quality assurance. Trace moisture in solids must be known in processes involving plastics, pharmaceuticals and heat treatment. Fields that require moisture measurement in gases or liquids include hydrocarbon processing, pure semiconductor gases, bulk pure or mixed gases, dielectric gases such as those in transformers and power plants, and natural gas pipeline transport. Moisture content measurements can be reported in multiple units, such as: parts per million, pounds of water per million standard cubic feet of gas, mass of water vapor per unit volume or mass of water vapor per unit mass of dry gas.
Moisture content vs. moisture dew point
Moisture dew point is the temperature at which moisture condenses out of a gas. This parameter is inherently related to the moisture content, which defines the amount of water molecules as a fraction of the total. Both can be used as a measure of the amount of moisture in a gas and one can be calculated from the other fairly accurately.
While both terms are sometimes used interchangeably, these two parameters, though related, are different measurements.
Loss on drying
The classic laboratory method of measuring high-level moisture in solid or semi-solid materials is loss on drying. In this technique, a sample of material is weighed, heated in an oven for an appropriate period, cooled in the dry atmosphere of a desiccator, and then reweighed. If the volatile content of the solid is primarily water, the loss on drying technique gives a good measure of moisture content. Because the manual laboratory method is relatively slow, automated moisture analysers have been developed that can reduce the time necessary for a test from a couple of hours to just a few minutes. These analysers incorporate an electronic balance with a sample tray and surrounding heating element. Under microprocessor control, the sample can be heated rapidly. The moisture loss rate is measured throughout the process and then plotted in the form of a drying curve.
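The moisture content itself follows directly from the two weighings. A minimal sketch, assuming moisture is reported on a wet basis (the sample masses are illustrative):

```python
def moisture_percent(mass_wet: float, mass_dry: float) -> float:
    """Loss-on-drying moisture content, wet basis, in percent."""
    return 100.0 * (mass_wet - mass_dry) / mass_wet

# Example: a 10.00 g sample that weighs 8.70 g after drying
print(moisture_percent(10.00, 8.70))   # -> 13.0 (% moisture)
```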
Karl Fischer titration
An accurate method for determining the amount of water is the Karl Fischer titration, developed in 1935 by the German chemist, whose name it bears. This method detects only water, contrary to loss on drying, which detects any volatile substances.
Techniques used for natural gas
Natural gas poses a unique problem in terms of moisture content analysis because it can contain very high levels of solid and liquid contaminants, as well as corrosives in varying concentrations.
Measurements of moisture in natural gas are typically performed with one of the following techniques:
color indicator tubes
chilled mirrors
chilled mirror combined with spectroscopy
electrolytic
piezoelectric sorption, also known as quartz crystal microbalance
aluminum oxide and silicon oxide
spectroscopy.
Other moisture measurement techniques exist but are not used in natural gas applications for various reasons. For example, the gravimetric hygrometer and the “two-pressure” system used by the National Bureau of Standards are precise, but are not suitable for use in industrial applications.
Color indicator tubes
A color indicator tube (also referred to as a gas detector tube) is a device that natural gas pipelines use for a quick and rough measurement of moisture. Each tube contains chemicals that react to a specific compound to form a stain or color when passed through the gas. The tubes are used once and then discarded. A manufacturer calibrates the tubes, but since the measurement is directly related to exposure time, the flow rate, and the extractive technique, it is susceptible to error. In practice, the error can reach up to 25 percent. The color indicator tubes are well suited for infrequent, rough estimations of moisture in natural gas.
Chilled mirrors
This type of device is considered the most popular when it comes to measuring the dew point of water in gaseous media. In this type of device, gas flows across a reflective cooling surface, the eponymous chilled mirror. When the surface is cold enough, the available moisture starts to condense onto it in tiny droplets. The exact temperature at which this condensation first occurs is registered, and the mirror is then slowly heated until the condensed water begins to evaporate. This temperature is also registered, and the average of the condensation and evaporation temperatures is reported as the dew point. All chilled-mirror devices, both manual and automatic, are based on this same basic method. It is necessary to measure the temperatures of both condensation and evaporation because the dew point is the equilibrium temperature at which water condenses and evaporates at the same rate. When the mirror is being cooled, its temperature continues to drop past the dew point before water visibly starts to condense; the measured condensation temperature is therefore lower than the actual dew point temperature. For this reason, the temperature of the mirror is then slowly increased until evaporation is observed, and the dew point is reported as the average of the two temperatures. By obtaining an accurate dew point temperature, one can calculate the moisture content of the gas. The mirror temperature can be regulated either by the flow of a refrigerant over the mirror or by a thermoelectric cooler, also known as a Peltier element.
The formation behavior of condensation on the mirror's surface can be registered by either optical or visual means. In both cases, a light source is directed onto the mirror and changes in the reflection of this light due to the formation of condensation are detected by a sensor or the human eye, respectively. The exact point at which condensation begins to occur is not discernible to the unaided eye, so modern manually operated instruments use a microscope to enhance the accuracy of measurements taken using this method.
Chilled mirror analyzers are subject to the confounding effects of some contaminants, however, at levels similar to other analyzers. With proper filtration and gas analysis preparation systems, other condensable liquids such as heavy hydrocarbons, alcohol, and glycol will not distort the results provided by these devices. It is also worth noting that in the case of natural gas, in which the aforementioned contaminants are an issue, on-line analyzers routinely measure the water dew point at line pressure, which reduces the likelihood that any heavy hydrocarbons, for example, will condense before water.
On the other hand, chilled-mirror devices are not subject to drift, and are not influenced by fluctuations in gas composition or changes in moisture content.
Chilled mirror combined with spectroscopy
This method of analysis combines some of the benefits of a chilled-mirror measurement with spectroscopy. In this method, a transparent inert material is cooled as an infrared (IR) beam is directed through it at an angle to the exterior surface. When it encounters this surface, the IR beam is reflected back through the material. A gaseous media is passed across the surface of the material at the point corresponding to the location where the IR beam is reflected. When a condensate forms on the surface of the cooling material, an analysis of the reflected IR beam will show absorption in the wavelengths that correspond to the molecular structure of the condensation formed. In this way, the device is able to distinguish between water condensation and other types of condensates, such as, for example, hydrocarbons when the gaseous media is natural gas. One advantage of this method is its relative immunity to contaminants thanks to the inert nature of the transparent material. Similar to a true chilled-mirror device, this type of analyzer can accurately measure the condensation temperature of potential liquids in a gaseous medium, but is not capable of measuring the actual water dew point as this requires the accurate measurement of the evaporation temperature as well.
Electrolytic
The electrolytic sensor uses two closely spaced, parallel windings coated with a thin film of phosphorus pentoxide (P2O5). As this coating absorbs incoming water vapor, an electrical potential is applied to the windings that electrolyze the water to hydrogen and oxygen. The current consumed by the electrolysis determines the mass of water vapor entering the sensor. The flow rate and pressure of the incoming sample must be controlled precisely to maintain a standard sample mass flow rate into the sensor.
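The underlying relation is Faraday's law: electrolyzing each water molecule requires two electrons, so at steady state the sensor current is tied to the mass flow of absorbed water. A sketch of the relation (standard constants, our notation):

```latex
% Two electrons per H2O molecule, so:
I = \frac{2F}{M_{\mathrm{H_2O}}}\,\frac{dm}{dt}
\quad\Longrightarrow\quad
\frac{dm}{dt} = \frac{I\,M_{\mathrm{H_2O}}}{2F},
\qquad M_{\mathrm{H_2O}} \approx 18.02\ \mathrm{g/mol},\quad
F \approx 96\,485\ \mathrm{C/mol}
```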
The method is fairly inexpensive and can be used effectively in pure gas streams where response rates are not critical. Contamination from oils, liquids or glycols on the windings will cause drift in the readings and damage to the sensor. The sensor cannot react to sudden changes in moisture, i.e., the reaction on the windings’ surfaces takes some time to stabilize. Large amounts of water in the pipeline (called slugs) will wet the surface and require tens of minutes or hours to “dry-down.” Effective sample conditioning and removal of liquids are essential when using an electrolytic sensor.
Piezoelectric sorption
The piezoelectric sorption instrument compares the changes in the frequency of hygroscopically coated quartz oscillators. As the mass of the crystal changes due to the adsorption of water vapor, the frequency of the oscillator changes. The sensor is a relative measurement, so an integrated calibration system with desiccant dryers, permeation tubes and sample line switching is frequently used to correlate the system.
The system has succeeded in many applications, including natural gas. It is possible to have interference from glycol and methanol, as well as damage from hydrogen sulfide, which can result in erratic readings. The sensor itself is relatively inexpensive and very precise. The required calibration system is not as precise and adds to the cost and mechanical complexity of the system. The labor for frequent replacement of desiccant dryers, permeation components, and sensor heads greatly increases the operational costs. Additionally, slugs of water render the system non-functional for long periods of time, as the sensor head has to "dry-down."
Aluminum oxide and silicon oxide
The oxide sensor is made up of an inert substrate material and two dielectric layers, one of which is sensitive to humidity. The moisture molecules pass through the pores on the surface and cause a change to the physical property of the layer beneath it.
An aluminum oxide sensor has two metal layers that form the electrodes of a capacitor. The number of water molecules adsorbed will cause a change in the dielectric constant of the sensor. The sensor impedance correlates to the water concentration. A silicon oxide sensor can be an optical device that changes its refractive index as water is absorbed into the sensitive layer or a different impedance type in which silicon replaces the aluminum.
In the first type (optical), when light is reflected through the substrate, a wavelength shift can be detected on the output, which can be precisely correlated to the moisture concentration. A fiber optic connector can be used to separate the sensor head and the electronics.
This type of sensor is not extremely expensive and can be installed at pipeline pressure (in situ). Water molecules take time to enter and exit the pores, so some wet-up and dry-down delays will be observed, especially after a slug. Contaminants and corrosives may damage and clog the pores, causing a "drift" in the calibration, but the sensor heads can be refurbished or replaced and will perform better in very clean gas streams. As with the piezoelectric and electrolytic sensors, the sensor is susceptible to interference from glycol and methanol; the calibration will drift as the sensor's surface becomes inactive due to damage or blockage, so the calibration is reliable only at the beginning of the sensor's life.
In the second type (silicon oxide sensor), the device is often temperature-controlled for improved stability; it is considered to be chemically more stable than aluminium oxide types and far faster responding, because it holds less water in equilibrium at an elevated operating temperature.
Whilst most absorption-type devices can be installed at pipeline pressures (up to 130 barg), traceability to international standards is compromised. Operation at near-atmospheric pressure does provide traceability and offers other significant benefits, such as enabling direct validation against a known moisture content.
Spectroscopy
Absorption spectroscopy is a relatively simple method of passing light through a gas sample and measuring the amount of light absorbed at a specific wavelength. Traditional spectroscopic techniques have not been successful at doing this in natural gas because methane absorbs light in the same wavelength regions as water. But if one uses a very high resolution spectrometer, it is possible to find some water peaks that are not overlapped by other gas peaks.
The tunable laser provides a narrow, tunable wavelength light source that can be used to analyze these small spectral features. According to the Beer-Lambert law, the amount of light absorbed by the gas is proportional to the amount of the gas present in the light's path; therefore, this technique is a direct measurement of moisture. In order to achieve a long enough path length of light, a mirror is used in the instrument. The mirror may become partially blocked by liquid and solid contaminations, but since the measurement is a ratio of absorbed light over the total light detected, the calibration is unaffected by the partially blocked mirror (if the mirror is totally blocked, it must be cleaned).
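The proportionality invoked here is the Beer-Lambert law, which in its common logarithmic form reads (our notation):

```latex
% I_0: incident intensity, I: transmitted intensity,
% epsilon: molar absorptivity, l: path length, c: concentration
A = \log_{10}\!\left(\frac{I_0}{I}\right) = \varepsilon\,\ell\,c
```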
A TDLAS analyzer has a higher upfront cost than most of the analyzers above. However, tunable diode laser absorption spectroscopy is superior where an analyzer is needed that will not suffer interference or damage from corrosive gases, liquids or solids, that will react very quickly to drastic moisture changes, or that will remain calibrated for very long periods of time, assuming the gas composition does not change.
See also
Karl Fischer titration
References
Desiccation
Measurement
Psychrometrics
Natural gas | Moisture analysis | [
"Physics",
"Mathematics"
] | 2,880 | [
"Quantity",
"Physical quantities",
"Measurement",
"Size"
] |
7,309,251 | https://en.wikipedia.org/wiki/Neighbourhood%20%28graph%20theory%29 | In graph theory, an adjacent vertex of a vertex v in a graph is a vertex that is connected to v by an edge. The neighbourhood of a vertex v in a graph G is the subgraph of G induced by all vertices adjacent to v, i.e., the graph composed of the vertices adjacent to v and all edges connecting vertices adjacent to v.
The neighbourhood is often denoted N_G(v) or (when the graph is unambiguous) N(v). The same neighbourhood notation may also be used to refer to sets of adjacent vertices rather than the corresponding induced subgraphs. The neighbourhood described above does not include v itself, and is more specifically the open neighbourhood of v; it is also possible to define a neighbourhood in which v itself is included, called the closed neighbourhood and denoted by N_G[v]. When stated without any qualification, a neighbourhood is assumed to be open.
Neighbourhoods may be used to represent graphs in computer algorithms, via the adjacency list and adjacency matrix representations. Neighbourhoods are also used in the clustering coefficient of a graph, which is a measure of the average density of its neighbourhoods. In addition, many important classes of graphs may be defined by properties of their neighbourhoods, or by symmetries that relate neighbourhoods to each other.
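For concreteness, a minimal sketch of these ideas over an adjacency-list representation (a dict mapping each vertex to an iterable of its neighbours; the function names are ours):

```python
def open_neighbourhood(adj, v):
    """N(v): the vertices adjacent to v, excluding v itself."""
    return set(adj[v]) - {v}

def closed_neighbourhood(adj, v):
    """N[v]: the open neighbourhood together with v."""
    return open_neighbourhood(adj, v) | {v}

def local_clustering(adj, v):
    """Fraction of pairs of neighbours of v that are themselves adjacent."""
    nbrs = open_neighbourhood(adj, v)
    k = len(nbrs)
    if k < 2:
        return 0.0
    # Each qualifying edge is seen from both endpoints, hence the halving.
    links = sum(1 for u in nbrs for w in adj[u] if w in nbrs) / 2
    return links / (k * (k - 1) / 2)
```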
An isolated vertex has no adjacent vertices. The degree of a vertex is equal to the number of adjacent vertices. A special case is a loop that connects a vertex to itself; if such an edge exists, the vertex belongs to its own neighbourhood.
Local properties in graphs
If all vertices in G have neighbourhoods that are isomorphic to the same graph H, G is said to be locally H, and if all vertices in G have neighbourhoods that belong to some graph family F, G is said to be locally F. For instance, in the octahedron graph, shown in the figure, each vertex has a neighbourhood isomorphic to a cycle of four vertices, so the octahedron is locally C4.
For example:
Any complete graph Kn is locally Kn-1. The only graphs that are locally complete are disjoint unions of complete graphs.
A Turán graph T(rs,r) is locally T((r-1)s,r-1). More generally any Turán graph is locally Turán.
Every planar graph is locally outerplanar. However, not every locally outerplanar graph is planar.
A graph is triangle-free if and only if it is locally independent.
Every k-chromatic graph is locally (k-1)-chromatic. Every locally k-chromatic graph has chromatic number .
If a graph family F is closed under the operation of taking induced subgraphs, then every graph in F is also locally F. For instance, every chordal graph is locally chordal; every perfect graph is locally perfect; every comparability graph is locally comparable; every (k)-(ultra)-homogeneous graph is locally (k)-(ultra)-homogeneous.
A graph is locally cyclic if every neighbourhood is a cycle. For instance, the octahedron is the unique connected locally C4 graph, the icosahedron is the unique connected locally C5 graph, and the Paley graph of order 13 is locally C6. Locally cyclic graphs other than K4 are exactly the underlying graphs of Whitney triangulations, embeddings of graphs on surfaces in such a way that the faces of the embedding are the cliques of the graph. Locally cyclic graphs can have as many as edges.
Claw-free graphs are the graphs that are locally co-triangle-free; that is, for all vertices, the complement graph of the neighbourhood of the vertex does not contain a triangle. A graph that is locally H is claw-free if and only if the independence number of H is at most two; for instance, the graph of the regular icosahedron is claw-free because it is locally C5 and C5 has independence number two.
The locally linear graphs are the graphs in which every neighbourhood is an induced matching.
The Johnson graphs are locally grid, meaning that each neighborhood is a rook's graph.
Neighbourhood of a set
For a set A of vertices, the neighbourhood of A is the union of the neighbourhoods of the vertices, and so it is the set of all vertices adjacent to at least one member of A.
A set A of vertices in a graph is said to be a module if every vertex in A has the same set of neighbours outside of A. Any graph has a unique recursive decomposition into modules, its modular decomposition, which can be constructed from the graph in linear time; modular decomposition algorithms have applications in other graph algorithms including the recognition of comparability graphs.
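The defining property of a module translates directly into code; a sketch over the same adjacency-list representation as above:

```python
def is_module(adj, A):
    """True if every vertex of A has the same set of neighbours outside A."""
    A = set(A)
    outside_views = [set(adj[v]) - A for v in A]
    return all(view == outside_views[0] for view in outside_views)
```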
See also
Markov blanket
Moore neighbourhood
Von Neumann neighbourhood
Second neighborhood problem
Vertex figure, a related concept in polyhedra
Link (simplicial complex), a generalization of the neighborhood to simplicial complexes
Notes
References
Graph theory objects | Neighbourhood (graph theory) | [
"Mathematics"
] | 1,016 | [
"Mathematical relations",
"Graph theory objects",
"Graph theory"
] |
7,309,643 | https://en.wikipedia.org/wiki/Deguelin | Deguelin is a derivative of rotenone. Both are compounds classified as rotenoids of the flavonoid family and are naturally occurring insecticides. They can be produced by extraction from several plant species belonging to three genera of the legume family, Fabaceae: Lonchocarpus, Derris, or Tephrosia.
Cubé resin, the root extract from cubé (Lonchocarpus utilis) and from barbasco (Lonchocarpus urucu), is used as a commercial insecticide and piscicide (fish poison). The major active ingredients are rotenone and deguelin. Although "organic" (produced by nature), cubé resin is no longer considered environmentally safe.
Rat pharmacokinetics
Mean residence time (MRT) = 6.98 h
Terminal half-life (t1/2(gamma)) = 9.26 h
Area under the curve (AUC) = 57.3 ng h/ml
Total clearance (Cl) = 4.37 L/h per kg
Apparent volume of distribution (V) = 3.421 L/kg
Volume of distribution at steady-state (Vss) = 30.46 L/kg
Tissue distributions after i.v. (intravenous) administration: heart > fat > mammary gland > colon > liver > kidney > brain > lung.
Tissue distributions after i.g. (intragastric) administration: perirenal fat > heart > mammary gland > colon > kidney > liver > lung > brain > skin.
Elimination: Within 5 days of i.g. administration, about 58.1% of the [3H]deguelin was eliminated via the feces and 14.4% via the urine. Approximately 1.7% of unchanged deguelin was found in the feces, and 0.4% in the urine.
Deguelin and anti-cancer activity
Deguelin displays anti-cancer activity by inhibiting the growth of pre-cancerous and cancerous cells - particularly for lung cancer. So far the compound has shown no toxic effects on normal cells. However, high doses of deguelin are suspected of having negative effects on the heart, lungs and nerves.
The molecular mechanisms include the induction of apoptosis, mediated through AKT/PKB signaling pathways in malignant and premalignant human bronchial epithelial (HBE) cells, with only minimal effects on normal HBE cells. Deguelin inhibits AKT by both phosphoinositide 3-kinase (PI3K)-dependent and PI3K-independent pathways.
Deguelin and Parkinson's disease
Research has shown a correlation between intravenous deguelin and Parkinson's disease in rats. The study does not suggest that deguelin exposure is responsible for Parkinson's disease in humans, but is consistent with the belief that chronic exposure to environmental toxins can increase the likelihood of the disease.
References
External links
Plant toxin insecticides
Hydroxyquinol ethers
Rotenoids | Deguelin | [
"Chemistry"
] | 634 | [
"Plant toxin insecticides",
"Chemical ecology"
] |
7,311,233 | https://en.wikipedia.org/wiki/Domain%20specificity | Domain specificity is a theoretical position in cognitive science (especially modern cognitive development) that argues that many aspects of cognition are supported by specialized, presumably evolutionarily specified, learning devices. The position is a close relative of modularity of mind, but is considered more general in that it does not necessarily entail all the assumptions of Fodorian modularity (e.g., informational encapsulation). Instead, it is properly described as a variant of psychological nativism. Other cognitive scientists also hold the mind to be modular, without the modules necessarily possessing the characteristics of Fodorian modularity.
Domain specificity emerged in the aftermath of the cognitive revolution as a theoretical alternative to empiricist theories that believed all learning can be driven by the operation of a few such general learning devices. Prominent examples of such domain-general views include Jean Piaget’s theory of cognitive development, and the views of many modern connectionists. Proponents of domain specificity argue that domain-general learning mechanisms are unable to overcome the epistemological problems facing learners in many domains, especially language. In addition, domain-specific accounts draw support from the surprising competencies of infants, who are able to reason about things like numerosity, goal-directed behavior, and the physical properties of objects all in the first months of life. Domain-specific theorists argue that these competencies are too sophisticated to have been learned via a domain-general process like associative learning, especially over such a short time and in the face of the infant’s perceptual, attentional, and motor deficits.
Current proponents of domain specificity argue that evolution equipped humans (and indeed most other species) with specific adaptations designed to overcome persistent problems in the environment. For humans, popular candidates include reasoning about objects, other intentional agents, language, and number. Researchers in this field seek evidence for domain specificity in a variety of ways. Some look for unique cognitive signatures thought to characterize a domain (e.g. differences in ways infants reason about inanimate versus animate entities). Others try to show selective impairment or competence within but not across domains (e.g. the increased ease of solving the Wason Selection Task when the content is social in nature). Still, others use learnability arguments to argue that a cognitive process or specific cognitive content could not be learned, as in Noam Chomsky’s poverty of the stimulus argument for language.
Prominent proponents of domain specificity include Jerry Fodor, Noam Chomsky, Steven Pinker, Elizabeth Spelke, Susan Carey, Lawrence A. Hirschfeld, Susan Gelman, and many others.
See also
Connectionism
Domain-specificity vs. domain-generality in evolutionary developmental psychology
Empiricism
Modularity of mind
Nature versus nurture
Neural processing for individual categories of objects
Psychological nativism
Psychology of reasoning
Notes
Further reading
Abstracts from chapters in Mapping the Mind: Domain Specificity in Cognition and Culture, a collection of essays on domain-specificity.
Developmental psychology | Domain specificity | [
"Biology"
] | 645 | [
"Behavioural sciences",
"Behavior",
"Developmental psychology"
] |
7,311,344 | https://en.wikipedia.org/wiki/Passenger%20information%20system | A passenger information system, or passenger information display system, is an automated system for supplying users of public transport with information about the nature and the state of a public transport service through visual, voice or other media. It is also known as a customer information system or an operational information system. Among the information provided by such systems, a distinction can be drawn between:
Static or schedule information, which changes only occasionally and is typically used for journey planning prior to departure.
Real-time information, derived from automatic vehicle location systems and changes continuously as a result of real-world events, which is typically used during the course of a journey (primarily how close the service is running to time and when it is due at a stop, as well as incidents that affect service operations, platform changes, etc.).
Static information has traditionally been made available in printed form though route network maps and timetable booklets at transit stations. However, most transit operators now also use integrated passenger information systems that provide either schedule-based information through a journey planner application or schedule-based information in combination with real-time information.
Real-time information is an advance on schedule-only information, which recognises the fact that public transport services do not always operate exactly according to the published timetable. By providing real-time information to travellers, they are better able to conduct their journey confidently, including taking any necessary steps in the event of delays. That helps to encourage greater use of public transport, which for many countries is a political goal.
Real-time information is provided to passengers in a number of different ways, including mobile phone applications, platform-level signage, and automated public address systems. It may include both predictions about arrival and departure times, as well as information on the nature and the cause of disruptions.
Issues with passenger information provision
There are four principal considerations for the provision of passenger information (static or real time):
Data availability. Information can be provided only if it is available, and collecting information can be resource-intensive. Also, there may be difficulties with co-ordinating data sharing between multiple organisations.
Data accuracy. Collecting information is error-prone. Also, prediction algorithms are not perfect and so real-time announcements may be in error.
Getting information to the passenger. A variety of dissemination mechanisms may be used, but it is not always easy to ensure that the correct information reaches the passenger when it is most needed. Information overload must be avoided.
Latency or response time. Information provision must react quickly to a passenger request or a real-world update. There is little point in announcing a service three minutes after it has departed.
Real-time arrival prediction systems
Current operational information on service running is collected from automatic vehicle location (AVL) systems and from control systems, including incident capture systems. The information can be compared algorithmically with the published service timetable to generate a prediction of how services will run in the next few minutes to hours. That may be informed by additional information. For instance, bus services are affected by congestion on the road network, and all services may be affected by adverse weather conditions.
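Real prediction engines are considerably more elaborate, but the core idea of combining the timetable with observed running can be sketched very simply (a deliberately naive illustration; the names and the congestion factor are our own assumptions):

```python
from datetime import datetime, timedelta

def predict_arrival(scheduled: datetime, observed_delay: timedelta,
                    congestion_factor: float = 1.0) -> datetime:
    """Naive prediction: propagate the delay seen at the last AVL
    report, optionally scaled by a current-congestion factor."""
    return scheduled + observed_delay * congestion_factor
```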
Economic rationale
The capital and revenue costs for traveller information systems can be calculated with reasonable accuracy. However, the derivation of tangible financial benefits is far more difficult to establish, and so there is very little research. That directs the business model for information systems towards "softer" merits such as traveller confidence. There must be some actual value, as individuals are willing to pay for systems that give them access to real-time data relating to their journey. The difficulty is establishing what that value is for each individual person and perhaps for each individual piece of roadside hardware. Even less is known about the long-term effects of access to these types of services; the only long-term study is from 2012.
Communication channels
Information may be delivered via any electronic media, including:
Mobile phone application
LED displays and screens inside stations
E-paper displays and screens at bus stops and shelters
Internet through a website
Telephone (either a staffed bureau service or an automated answering system)
Touch screen kiosks for self-service (e.g. in customer offices)
Additional considerations include:
How the system presents information for disabled travellers
Whether the system provides information in multiple languages
Information
The information provided by a passenger information system depends on its location and the technical scope (e.g. the size of the display screen)
At a station or stop, it is normal to provide up-to-date predictions of:
Which service is operated by the next vehicle to arrive, including its route and destination. For train services in Europe, the train type is typically also indicated
When the vehicle will arrive.
How closely it is running to timetable.
Similar information for the following few services.
General advice on current travel disruptions that may be useful to the passenger in understanding the implications for their travel plans.
On a vehicle, it is normal to provide up to date predictions of:
When the vehicle will arrive at the next station or stop (express or long-distance services).
Advice on connecting services.
Personalised channels (web, mobile device, or kiosk) are normally set up to mimic the view from a station or stop, but they may in addition be linked to journey planners. Using such systems, a passenger may (re)plan their journey to take into account current circumstances (such as cancelled services or excessive delays).
Examples
France
In Paris, France, SIEL indicator systems (abbreviated from Système d’information en ligne) are installed in the RER, the Paris Métro and on 250 bus routes on the RATP bus system.
On the RER, two types of indicators are used. The first-generation model indicates only the termini of trains stopping at a station through the use of square lights beside the words bearing the name of a terminus. The second-generation model includes an LED display above the square lights indicating the terminus and train service. The displays are used only on the RER line A, RER line B and at Gare de Châtelet – Les Halles station on RER line D. They can be inaccurate at times because of the lack of communication between SNCF and RATP, the two operators of the RER.
On the Paris Métro, there are two types of information display systems. The LED numerical display installed in all Métro lines (except line 14) has been in use since 1997. The television display is installed on all stations on line 14. The displays show the time needed for a train (and the subsequent train after it) to reach a particular station.
On the bus network in Paris, monochrome LCDs have been used since 1996 to indicate the time needed for a bus on a bus route to arrive at a bus stop, after a two-year trial period on a few bus routes.
Germany
Deutsche Bahn AG offers a Travel Information System (Reisendeninformationssystem (RIS)). It shows current train times compared to the published timetable, as well as known delays and expected arrival and departure times of the trains. The information is made available to the train conductor (via SMS) as well as to the passenger via loudspeaker in the train station or schedule boards on the internet. The corresponding VRR and VRS information systems also process RIS data. The data can also be queried in real time via mobile devices like mobile phones.
The RIS was started in 2003, and by 2007, it was planned to have 30,000 trains equipped with the necessary train describer (electronic train number). In an accompanying program, the older split-flap displays were replaced by electronic dot-matrix signage. Large stations have platform displays with multiple rows, but the Deutsche Bahn network operator developed the Dynamic Font Indicator (Dynamischer Schriftanzeiger (DSA)) standard system for smaller stations with a single row. In 2011, federal funding was granted to equip 4500 additional stations with DSA signage, accounting for most of the 6500 DSAs in place by 2015.
The federal grant came along with a Federal Railway Authority (Eisenbahn-Bundesamt (EBA)) order in 2010 to have all stations connected to the travel information system announce delays with electronic signage or loudspeakers. The Deutsche Bahn operator tried to block that order legally for stations with very low traffic but lost all lawsuits in 2015. It was given 18 months to equip the remaining stations with DSAs. The DSA system has a GSM radio module to receive a text message to be displayed in a horizontally moving news-ticker style. A loudspeaker may optionally be mounted on top. When there is no delay, the current time is shown statically on its 96×8 LED dot-matrix display.
United Kingdom
National Rail stations are equipped with visual platform displays and audio announcements, which indicate the next service or services from the platform and warn passengers to stand clear of trains that are not scheduled to stop, not in use or are about to depart. Additionally, concourses and ticket offices have large screen displays that show all of the services available at the station for the next hour or more and, at major stations, the full route of the service and any restrictions applicable (e.g. ticket types, catering services, bicycle carriage). Many smaller and less well-used railway stations have, instead of such systems, "passenger help points", which connect the user by telephone to a control room by pressing an "Information" button.
The information is available online at the National Rail website and on mobile devices.
Most London Underground stations have "countdown" displays on each platform. They are simpler than the National Rail displays, since most platforms serve only a single line and there are few or no variations in carriage restrictions and destinations served. Audio announcements are also made regularly.
Local authorities and some transport operators provide electronic versions of the bus timetables to the Traveline information service, which covers all public transport modes, and from there to other information services such as Google Transit.
The deployment of real-time bus information systems is a gradual process and currently extends to around half of the national fleet and a high proportion of town-centre stops, but relatively few suburban and rural locations. The first use of such systems was in Brighton and Hove. The Traveline NextBuses information service provides the next departures from any bus stop in the UK, and from some tram stops as well. Real-time information is given where a live feed has been connected; otherwise, the scheduled times are shown.
The government-sponsored Transport Direct project provided journey planning across all transport modes (including private car) and was increasingly linked to real-time information systems prior to its discontinuation in 2014.
United States
Real-time passenger information was brought to riders in the US by NextBus corporation, a small start-up, in 1999. The first systems were installed in Emeryville, California, and later in San Francisco, California. Both initial systems are still in operation.
The Washington Metro installed a passenger information display system (PIDS) in all of its stations in 2000. The system provides real-time information on next train arrivals, delayed trains, emergency announcements, and related information. Metro also provides current train and related information to customers with conventional web browsers, as well as users of smartphones and other mobile devices. In 2010, Metro began sharing its PIDS data with outside software developers for use in creating additional real-time applications for mobile devices. Free apps are available to the public on major mobile device software platforms (iPhone/iPad, Android, Windows Phone, Palm). The system also began providing real-time train information by phone in 2010.
The New York City Subway began installing its public address/customer information screens, commonly known as "countdown clocks", in its stations in 2007. In 2012, the system began offering SubTime, a website and iPhone app for real-time train arrival estimates for several of its subway services. The arrival data are shared with outside software developers to support creation of additional apps. PIDS have also been installed on some MTA Regional Bus Operations routes over the years, but mostly the MTA offers real-time bus tracking through a separate website/app called MTA Bus Time.
The Boston MBTA Red, Orange, and Blue Lines introduced countdown clocks in early 2014, and the Green Line introduced them the following year. The eastern end of the Green Line introduced clocks in early 2016. They reflect how many "stops away" the train is, rather than how many minutes it will take to arrive. Amtrak has deployed PIDS throughout the Northeast Corridor.
PIDS are now being deployed with unified messaging, which can include information streamed to mobile devices and phones and translated directly into voice announcements. Text-to-speech products have been designed to convert PIDS data to speech in a choice of over 20 languages.
See also
General Transit Feed Specification
Identification of Fixed Objects in Public Transport (IFOPT)
IEEE Intelligent Transportation Systems Society
Journey planner
Onboard passenger information system
Platform display
Real Time Information Group (RTIG), UK organisation
Service Interface for Real Time Information (SIRI), technical specifications and standards
Transmodel, CEN European Reference Data Model
References
Travel technology | Passenger information system | [
"Technology"
] | 2,646 | [
"Public transport information systems",
"Information systems"
] |
7,312,048 | https://en.wikipedia.org/wiki/Float-out | Float-out is the process in shipbuilding that follows the keel laying and precedes the fitting-out process. It is analogous to launching a ship, a specific process that has largely been discontinued in modern shipbuilding. Both floating-out and launching are the times when the ship leaves dry land and becomes waterborne for the first time, and often take place during ceremonies celebrating and commemorating that event.
Launching
Prior to the large-scale use of drydocks (building or graving docks) for constructing ships, most vessels were constructed on a slipway, i.e. an inclined building platform sloping toward a body of water into which the ship would be launched.
Contemporary shipbuilding
The launching of ships has been largely replaced by the "floating" process. After a ship is ordered for construction, its keel is laid in a drydock. Construction of the ship continues in the dock, usually in the form of prefabricated units that are assembled.
After the empty hull has been substantially completed, sluice gates are opened and the drydock fills with water. The dock gates are then opened and the ship is pulled out by tugboat to a berth, where the remaining construction, known as fitting out, continues. This usually includes further construction of the superstructure, attaching of masts and funnels, and the installation of equipment and furnishings.
The completed ship will usually return to drydock for installation of other equipment, propulsion parts, and the painting of its hull.
The first superliner to be constructed in this manner was , but the history of "floating" ships rather than "launching" them goes back more than one hundred years before that vessel's construction. SS Great Britain, designed by Isambard Kingdom Brunel, was constructed in drydock and floated on 19 July 1843. She is currently in Bristol, England, United Kingdom.
Naming ceremony
Ships which are launched are typically christened and formally named at their launching ceremonies, even though they are not completed until later. Some recent passenger vessels which were constructed in drydocks were not formally christened when floated out; their naming ceremonies took place after completion and delivery to their owners, in the case of Freedom of the Seas after her first transatlantic crossing.
External links
‘’Birth of a Ship’’ (the construction process of container ship MV Maunawili)
Shipbuilding
Naval architecture | Float-out | [
"Engineering"
] | 467 | [
"Naval architecture",
"Shipbuilding",
"Marine engineering"
] |
7,312,598 | https://en.wikipedia.org/wiki/Relative%20survival | Relative survival of a disease, in survival analysis, is calculated by dividing the overall survival after diagnosis by the survival as observed in a similar population not diagnosed with that disease. A similar population is composed of individuals with at least age and gender similar to those diagnosed with the disease.
When describing the survival experience of a group of people or patients typically the method of overall survival is used, and it presents estimates of the proportion of people or patients alive at a certain point in time. The problem with measuring overall survival by using the Kaplan-Meier or actuarial survival methods is that the estimates include two causes of death: deaths from the disease of interest and deaths from all other causes, which includes old age, other cancers, trauma and any other possible cause of death. In general, survival analysis is interested in the deaths by a disease rather than all causes. Thus, a "cause-specific survival analysis" is employed to measure disease-specific survival. Thus, there are two ways in performing a cause-specific survival analysis "competing risks survival analysis" and "relative survival."
Competing risks survival analysis
This form of analysis is known for its use of death certificates. In traditional overall survival analysis, the cause of death is irrelevant to the analysis. In a competing risks survival analysis, each death certificate is reviewed. If the disease of interest is cancer and the patient dies in a car accident, the patient is labelled as censored at death instead of being labelled as having died. Issues with this method arise because each hospital and/or registry may code causes of death differently.
For example, there is variability in the way a patient who has cancer and commits suicide is coded/labelled. In addition, if a patient has an eye removed from an ocular cancer and dies getting hit while crossing the road because he did not see the car, he would often be considered to be censored rather than having died from the cancer or its subsequent effects.
Hazard rate
The relative survival form of analysis is more complex than "competing risks" but is considered the gold-standard for performing a cause-specific survival analysis. It is based on two rates: the overall hazard rate observed in a diseased population and the background or expected hazard rate in the general or background population.
Deaths from the disease in a single time period are the total number of deaths (overall number of deaths) minus the expected number of deaths in the general population. If 10 deaths per hundred population occur in a population of cancer patients, but only 1 death occurs per hundred general population, the disease-specific number of deaths (excess hazard rate) is 9 deaths per hundred population. The classic equation for the excess hazard rate is as follows: excess hazard rate = observed hazard rate − expected hazard rate.
The equation does not define a survival proportion but simply describes the relationships between disease-specific death (excess hazard) rates, background mortality rates (expected death rate) and the overall observed mortality rates. The excess hazard rate is related to relative survival, just as hazard rates are related to overall survival.
Cancer survival
Relative survival is typically used in the analysis of cancer registry data. Cause-specific survival estimation using the coding of death certificates has considerable inaccuracy and inconsistency and does not permit the comparison of rates across registries.
The diagnosis of cause of death varies between practitioners. How does one code for a patient who dies of heart failure after receiving a chemotherapeutic agent with known deleterious cardiac side effects? In essence, what really matters is not why the population dies but whether the rate of death is higher than that of the general population.
If all patients are dying of car crashes, perhaps the tumour or treatment predisposes them to visual or perceptual disturbances that make them more likely to die in a car crash. In addition, it has been shown that patients coded in a large US cancer registry as suffering from a non-cancer death are 1.37 times as likely to die as a member of the general population.
If the coding was accurate, this figure should approximate 1.0 as the rate of those dying of non-cancer deaths (in a population of cancer sufferers) should approximate that of the general population. Thus, the use of relative survival provides an accurate way to measure survival rates that are associated with the cancer in question.
Epidemiology
In epidemiology, relative survival (as opposed to overall survival, and associated with excess hazard rates) is defined as the ratio of observed survival in a population to the expected or background survival rate. It can be thought of as the Kaplan–Meier survivor function for a particular year divided by the expected survival rate in that particular year. This ratio is typically known as the relative survival (RS).
If the ratios for five consecutive years are multiplied, the resulting figure is known as cumulative relative survival (CRS). It is analogous to the five-year overall survival rate, but it describes the cancer-specific risk of death over the five years after diagnosis.
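As a minimal numerical illustration (a sketch only; the survival proportions below are hypothetical, not from any registry), the yearly relative survival ratios and their five-year cumulative product can be computed directly:

# Hypothetical annual (interval-specific) survival proportions for a
# patient cohort and for the matched general population; illustrative only.
observed = [0.95, 0.92, 0.90, 0.89, 0.88]   # cohort, per year after diagnosis
expected = [0.99, 0.98, 0.98, 0.97, 0.97]   # general population, per year

# Relative survival (RS) for each year is observed divided by expected.
rs_per_year = [o / e for o, e in zip(observed, expected)]

# Cumulative relative survival (CRS): the product over five consecutive years.
crs = 1.0
for rs in rs_per_year:
    crs *= rs

print([round(rs, 3) for rs in rs_per_year])
print(round(crs, 3))   # analogous to a five-year disease-specific figure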
Software
There are several software suites available to estimate relative survival rates. Regression modelling can be performed using maximum likelihood estimation methods with Stata or R. For example, the R package cmprsk may be used for competing-risks analyses, which utilize sub-distribution or 'Fine and Gray' regression methods.
See also
Survival rate
Five-year survival rate
References
Epidemiology
Medical statistics | Relative survival | [
"Environmental_science"
] | 1,089 | [
"Epidemiology",
"Environmental social science"
] |
7,313,208 | https://en.wikipedia.org/wiki/Curve%20tracer | A curve tracer is a specialised piece of electronic test equipment used to analyze the characteristics of discrete electronic components, such as diodes, transistors, thyristors, and vacuum tubes. The device contains voltage and current sources that can be used to stimulate the device under test (DUT).
Operation
The function is to apply a swept (automatically continuously varying with time) voltage to two terminals of the device under test and measure the amount of current that the device permits to flow at each voltage. This so-called I–V (current versus voltage) data is either directly displayed on an oscilloscope screen, or recorded to a data file for later processing and graphing with a computer. Configuration includes the maximum voltage applied, the polarity of the voltage applied (including the automatic application of both positive and negative polarities), and the resistance inserted in series with the device. The main terminal voltage can often be swept up to several thousand volts, with load currents of tens of amps available at lower voltages.
For two-terminal devices (such as diodes and DIACs), this is sufficient to fully characterize the device. The curve tracer can display all of the interesting parameters such as the diode's forward voltage, reverse leakage current, reverse breakdown voltage, and so on. For triggerable devices such as DIACs, the forward and reverse trigger voltages will be clearly displayed. The discontinuity caused by negative resistance devices (such as tunnel diodes) can also be seen. This is a method for finding electrically damaged pins on integrated circuit devices.
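The data such a sweep produces is easy to mimic numerically. The sketch below generates the forward-bias portion of a diode's I–V data using the standard Shockley ideal-diode equation; the component parameters are assumed values for illustration, not those of any particular device or instrument:

import math

I_S = 1e-12    # assumed saturation current, amperes
N   = 1.8      # assumed ideality factor
V_T = 0.02585  # thermal voltage near 300 K, volts

def diode_current(v):
    """Shockley ideal-diode equation: I = Is * (exp(V / (n*Vt)) - 1)."""
    return I_S * (math.exp(v / (N * V_T)) - 1.0)

# Sweep 0..0.8 V in 10 mV steps, as a tracer's ramp generator would.
iv_data = [(v / 100.0, diode_current(v / 100.0)) for v in range(0, 81)]
for v, i in iv_data[::16]:  # print a few sample points
    print(f"{v:4.2f} V  {i:10.3e} A")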
For three-terminal devices (such as transistors) a connection to the control terminal of the device being tested is used, such as the Base or Gate terminal. For BJT transistors and other current-controlled devices, the base or other control terminal current is stepped. For FETs or other voltage-controlled devices, a stepped voltage is used instead. By sweeping the voltage through the configured range of main terminal voltages, for each voltage step of the control signal, a group of I–V curves is generated automatically. This group of curves makes it very easy to determine the gain of a transistor, or the trigger voltage of a thyristor or TRIAC.
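Stepping the control signal can be sketched the same way. The toy model below is a crude first-order BJT description (constant gain plus an Early-voltage slope, with a simple linear ramp below the saturation voltage); all parameter values are assumptions, not a real device model. It produces one collector curve per base-current step, as a real tracer would:

BETA, V_EARLY, VCE_SAT = 100.0, 80.0, 0.2  # assumed device parameters

def collector_current(vce, ib):
    """Crude BJT output characteristic: active region with Early effect,
    scaled by a linear ramp below the saturation voltage."""
    active = BETA * ib * (1.0 + vce / V_EARLY)
    return active * min(vce / VCE_SAT, 1.0)

# Step the base current (10 uA per step) and sweep Vce for each step,
# producing one I-V curve per control-signal step.
for step in range(1, 5):
    ib = step * 10e-6
    curve = [(vce / 10.0, collector_current(vce / 10.0, ib))
             for vce in range(0, 101)]  # 0..10 V sweep
    print(f"Ib = {ib*1e6:.0f} uA -> Ic at 10 V = {curve[-1][1]*1e3:.2f} mA")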
Test device connection
Curve tracers usually contain convenient connection arrangements for two- or three-terminal devices, often in the form of sockets arranged to allow the plugging-in of the various common packages used for electronic components. Most curve tracers also allow the simultaneous connection of two DUTs; in this way, two DUTs can be "matched" for optimum performance in circuits (such as differential amplifiers) which depend upon the close matching of device parameters. This can be seen in the adjacent image, where a toggle switch allows rapid switching between the DUT on the left and the DUT on the right as the operator compares the respective curve families of the two devices.
I–V curves are used to characterize devices and materials through DC source-measure testing. These applications may also require the calculation of resistance and the derivation of other parameters based on I–V measurements. For example, I–V data can be used to study anomalies, locate maximum or minimum curve slopes, and perform reliability analyses. A typical application is finding a semiconductor diode's reverse bias leakage current and doing forward and reverse bias voltage sweeps and current measurements to generate its I–V curve.
Kelvin sensing
Curve tracers, especially high-current models, are usually supplied with various semiconductor device test fixture adapters that have Kelvin sensing.
Capacitive balance control
Some analog curve tracers, especially sensitive low-current models, are equipped with a manual control for balancing a capacitive bridge circuit to compensate for ("null") the stray capacitances of the test setup. This adjustment is performed by tracing the curve of the empty test setup (with all required cables, probes, adapters, and other auxiliary devices connected, but without the DUT) and adjusting the balance control until the I curve is displayed at a constant zero level.
I–V curve tracing
I–V curve tracing is a method of analyzing the performance of a photovoltaic system, ideal for testing all the possible operating points of a PV module or string of modules.
History
Before the introduction of semiconductors, there were vacuum tube curve tracers (e.g., Tektronix 570). Early semiconductor curve tracers themselves used vacuum tube circuits, as semiconductor devices then available could not do everything required in a curve tracer.
The Tektronix model 575 curve tracer shown in the gallery was a typical early instrument.
Nowadays, curve tracers are entirely solid state and are substantially automated to ease the workload of the operator, automatically capture data, and assure the safety of the curve tracer and the DUT.
Recent developments in curve tracer systems now allow three core types of curve tracing: current–voltage (I–V), capacitance–voltage (C–V), and ultra-fast transient or pulsed current–voltage (I–V). Modern curve tracer instrument designs tend to be modular, allowing system specifiers to configure them to match the applications for which they will be used. For example, new mainframe-based curve tracer systems can be configured by specifying the number and power level of the Source Measure Units (SMUs) to be plugged into the slots in the back panel of the chassis. This modular design also provides the flexibility to incorporate other types of instrumentation to handle a wider range of applications. These mainframe-based systems typically include a self-contained PC to simplify test setup, data analysis, graphing and printing, and onboard results storage. Users of these types of systems include semiconductor researchers, device modeling engineers, reliability engineers, die-sort engineers, and process development engineers.
In addition to mainframe-based systems, other curve tracer solutions are available that allow system builders to combine one or more discrete Source-Measure Units (SMUs) with a separate PC controller running curve tracer software. Discrete SMUs offer a broader range of current, voltage, and power levels than mainframe-based systems permit and allow the system to be reconfigured as test needs change. New Wizard-based user interfaces have been developed to make it easy for students or less experienced industry users to find and run the tests they need, such as the FET curve trace test.
Safety
Some curve tracers, specifically those designed for high-voltage, high-current, or high-power devices, are capable of generating lethal voltages and currents and so pose an electrocution hazard to the operator. Modern curve tracers often contain mechanical shields and interlocks that make it more difficult for the operator to come into contact with hazardous voltages or currents. Power DUTs can become dangerously hot during testing. Inexpensive curve tracers cannot test such devices and are less likely to be lethally dangerous.
References
External links
The Museum of Tektronix Scopes
All manufacturers of curve tracers.
A homebrew Curve Tracer.
Electronic test equipment
Laboratory equipment
Electronics work tools | Curve tracer | [
"Technology",
"Engineering"
] | 1,466 | [
"Electronic test equipment",
"Measuring instruments"
] |
7,314,249 | https://en.wikipedia.org/wiki/Richtmyer%E2%80%93Meshkov%20instability | The Richtmyer–Meshkov instability (RMI) occurs when two fluids of different density are impulsively accelerated. Normally this is by the passage of a shock wave. The development of the instability begins with small amplitude perturbations which initially grow linearly with time. This is followed by a nonlinear regime with bubbles appearing in the case of a light fluid penetrating a heavy fluid, and with spikes appearing in the case of a heavy fluid penetrating a light fluid. A chaotic regime eventually is reached and the two fluids mix. This instability can be considered the impulsive-acceleration limit of the Rayleigh–Taylor instability.
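In the linear regime, this initial growth is commonly estimated with Richtmyer's impulsive model, in which the amplitude a(t) grows at the constant rate da/dt = k·a₀·A·Δu, where k is the perturbation wavenumber, a₀ the initial post-shock amplitude, A the post-shock Atwood number, and Δu the velocity jump imparted by the shock. A minimal numerical sketch, with assumed illustrative values:

import math

# Assumed, illustrative post-shock parameters.
wavelength = 0.01                        # perturbation wavelength, m
k = 2.0 * math.pi / wavelength           # wavenumber, 1/m
a0 = 1e-4                                # initial post-shock amplitude, m
rho_heavy, rho_light = 3.0, 1.0          # fluid densities, kg/m^3
atwood = (rho_heavy - rho_light) / (rho_heavy + rho_light)
delta_u = 100.0                          # velocity jump from the shock, m/s

growth_rate = k * a0 * atwood * delta_u  # impulsive-model growth rate, m/s

for t in (0.0, 1e-4, 2e-4, 3e-4):        # seconds
    a = a0 + growth_rate * t             # linear regime, valid while k*a << 1
    print(f"t = {t:.1e} s   a = {a:.3e} m   k*a = {k * a:.2f}")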
Dispersion relation
For ideal MHD
For Hall MHD
For QMHD
History
R. D. Richtmyer provided a theoretical prediction, and E. E. Meshkov (Евгений Евграфович Мешков) provided experimental verification. Materials in the cores of stars, such as cobalt-56 from Supernova 1987A, were observed earlier than expected; this was evidence of mixing due to Richtmyer–Meshkov and Rayleigh–Taylor instabilities.
Examples
During the implosion of an inertial confinement fusion target, the hot shell material surrounding the cold D–T fuel layer is shock-accelerated. This instability is also seen in magnetized target fusion (MTF). Mixing of the shell material and fuel is not desired and efforts are made to minimize any tiny imperfections or irregularities which will be magnified by RMI.
Supersonic combustion in a scramjet may benefit from RMI, as the fuel–oxidant interface is enhanced by the breakup of the fuel into finer droplets. In addition, studies of deflagration-to-detonation transition (DDT) processes show that RMI-induced flame acceleration can result in detonation.
See also
Rayleigh–Taylor instability
Mushroom cloud
Plateau–Rayleigh instability
Salt fingering
Kármán vortex street
Kelvin–Helmholtz instability
Hydrodynamics
References
External links
Wisconsin Shock Tube Laboratory
New type of interface evolution in the Richtmyer–Meshkov instability
Recent Advances in Indirect Drive ICF Target Physics at LLNL
Emergence of Detonation in the Flowfield Induced by Richtmyer–Meshkov Instability
Propagation of Fast Deflagrations and Marginal Detonations in Hydrogen-Air Mixtures
Mushrooms+Snakes: a visualization of Richtmyer–Meshkov instability
Conjugate Filter OscillationReduction (CFOR) scheme for the 2D Richtmyer–Meshkov instability
Experiments on the Richtmyer–Meshkov instability at the University of Arizona
Fluid dynamics
Plasma instabilities
Astrophysics
Fluid dynamic instabilities | Richtmyer–Meshkov instability | [
"Physics",
"Chemistry",
"Astronomy",
"Engineering"
] | 545 | [
"Physical phenomena",
"Fluid dynamic instabilities",
"Chemical engineering",
"Plasma phenomena",
"Plasma instabilities",
"Astrophysics",
"Piping",
"Astronomical sub-disciplines",
"Fluid dynamics"
] |
7,315,462 | https://en.wikipedia.org/wiki/William%20Henry%20Perkin%20Jr. | William Henry Perkin Jr., FRS FRSE (17 June 1860 – 17 September 1929) was an English organic chemist who was primarily known for his groundbreaking research work on the degradation of naturally occurring organic compounds.
Early life
He was the eldest son of Sir William Henry Perkin who had founded the aniline dye industry, and was born at Sudbury, England, close to his father's dyeworks at Greenford. His brother was Arthur George Perkin (1861–1937), Professor of Colour Chemistry and Dyeing at the University of Leeds.
Perkin was educated at the City of London School and then at the Royal College of Science, South Kensington, London, and then in Germany at the universities of Würzburg and Munich. At Munich, he was a doctoral student under Adolf von Baeyer. From 1883 to 1886, he held the position of Privatdozent at the University of Munich. He never lost contact with his friend Baeyer, and delivered the memorial lecture following Baeyer's death in 1917.
In 1887 he returned to Britain and became professor of chemistry at Heriot-Watt College, Edinburgh, Scotland, for which the Chemistry wing of the main campus is currently named The William Perkin Building.
Manchester
In 1892 he accepted the chair of organic chemistry at Owens College, Manchester, England, succeeding Carl Schorlemmer, which he held until 1912. During this period his stimulating teaching and brilliant research attracted students from all parts, and he formed at Manchester a school of organic chemistry famous throughout Europe. This was possible because he was assigned new laboratory buildings, similar to those built by Baeyer in Munich, which he planned together with the famous architect Alfred Waterhouse. The speech at the opening ceremony was given by Ludwig Mond. An additional laboratory building, together with a library and £20,300, was donated by the chemist and industrialist Edward Schunck in 1895; Schunck's own laboratory was removed brick by brick and recreated at Owens College.
Frank Lee Pyman, Robert Robinson (who later won a Nobel Prize in Chemistry), Walter Haworth and Eduard Hope graduated at Owens College while Perkin was there. A conflict with Chaim Weizmann, who held a postdoctoral position and was a friend of Perkin, over the fermentation of starch to isoamyl alcohol (the starting material for synthetic rubber, and therefore industrially relevant) led to Weizmann's dismissal. In 1912, following a planned change in university politics involving industrial co-operation, which would have resulted in a significant loss of income for Perkin, he accepted a position at Oxford.
Oxford
In 1912 he succeeded Professor William Odling as Waynflete Professor of Chemistry at Oxford University, England, a position he held until 1929. When he started, five colleges had their own laboratories, and he first had to move into the Odling laboratory, a replica of the mediaeval Abbot's Kitchen at Glastonbury. During Perkin's time there, new and more extensive laboratories were built (the Dyson Perrins Laboratory), and for the first time in England a period of research became a necessary part of the academic course in chemistry for an honours degree. But the constant rivalry with the physical chemistry department, represented for example by Frederick Soddy, led to most graduates choosing physical or inorganic chemistry as their subject, and Perkin recruited most of his postdoctoral workers from other universities.
Published work
Perkin's work was published in a series of papers in Transactions of the Chemical Society. The earlier papers dealt with the properties and modes of synthesis of closed-chain hydrocarbons and their derivatives. This work led naturally to the synthesis of many terpenes and members of the camphor group, and also to the investigation of various alkaloids and natural dyes. In addition to purely scientific work, Perkin kept in close touch with the chemical industry. Together with his brother-in-law Professor Frederick Kipping, Perkin wrote textbooks on practical chemistry, inorganic and organic chemistry; their Organic Chemistry appeared in 1899.
Honours and awards
Perkin was elected a Fellow of the Royal Society in June 1890 and was awarded their Davy Medal in 1904 and their Royal Medal in 1925. He was president of the Chemical Society from 1913 to 1916 and was awarded their Longstaff Medal in 1900. In 1910, he was made an honorary graduate of the University of Edinburgh, receiving the degree of Doctor of Laws (LL.D.).
Later life
In 1887 he married Mina Holland, one of three sisters. They had no children.
Both of his brothers-in-law were eminent scientists themselves (Arthur Lapworth and Frederick Kipping).
He died in Oxford on 17 September 1929 and is buried in Wolvercote Cemetery there.
References
Sources
1860 births
1929 deaths
Burials at Wolvercote Cemetery
British organic chemists
19th-century English chemists
20th-century English chemists
Royal Medal winners
Waynflete Professors of Chemistry
Fellows of the Royal Society
People from Wembley
Scientists from London
Members of the Royal Society of Sciences in Uppsala | William Henry Perkin Jr. | [
"Chemistry"
] | 1,018 | [
"Organic chemists",
"British organic chemists"
] |
7,315,901 | https://en.wikipedia.org/wiki/Mil%C3%BC | Milü (密率; "close ratio"), also known as Zulü (Zu's ratio), is the name given to an approximation of π (pi) found by the Chinese mathematician and astronomer Zu Chongzhi in the 5th century. Using Liu Hui's algorithm (which is based on the areas of regular polygons approximating a circle), Zu famously computed π to be between 3.1415926 and 3.1415927 and gave two rational approximations of π, 22/7 and 355/113, naming them respectively Yuelü (约率; "approximate ratio") and Milü.
355/113 is the best rational approximation of π with a denominator of four digits or fewer, being accurate to six decimal places. It is within 0.000009% of the value of π, or, in terms of common fractions, it overestimates π by less than 1/3748629. The next rational number (ordered by size of denominator) that is a better rational approximation of π is 52163/16604, though it is still only correct to six decimal places. To be accurate to seven decimal places, one needs to go as far as 86953/27678. For eight, 102928/32763 is needed.
The accuracy of Milü to the true value of π can be explained using the continued fraction expansion of π, the first few terms of which are [3; 7, 15, 1, 292, 1, 1, ...]. A property of continued fractions is that truncating the expansion of a given number at any point will give the "best rational approximation" to the number. To obtain Milü, truncate the continued fraction expansion of π immediately before the term 292; that is, π is approximated by the finite continued fraction [3; 7, 15, 1], which is equivalent to Milü. Since 292 is an unusually large term in a continued fraction expansion (corresponding to the next truncation introducing only a very small term, 1/292, to the overall fraction), this convergent will be especially close to the true value of π: 355/113 = 3.14159292..., compared with π = 3.14159265....
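These convergents can be generated directly from the continued-fraction terms; a minimal Python sketch using exact fractions (the term list is the standard expansion quoted above):

from fractions import Fraction

# First terms of the continued fraction of pi: [3; 7, 15, 1, 292, ...]
terms = [3, 7, 15, 1, 292]

def convergent(cf):
    """Evaluate a finite continued fraction [a0; a1, ..., an] exactly."""
    value = Fraction(cf[-1])
    for a in reversed(cf[:-1]):
        value = a + 1 / value
    return value

for n in range(1, len(terms) + 1):
    c = convergent(terms[:n])
    print(c, float(c))
# Prints 3, 22/7 (Yuelü), 333/106, 355/113 (Milü), 103993/33102.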
Zu's contemporary, the calendarist and mathematician He Chengtian, invented a fraction interpolation method called "harmonization of the divisor of the day" (调日法) to increase the accuracy of approximations of π by iteratively adding the numerators and denominators of fractions. Zu Chongzhi's approximation π ≈ 355/113 can be obtained with He Chengtian's method.
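As a rough illustration of the interpolation idea (a sketch only, not a reconstruction of He Chengtian's actual calendrical computation), repeatedly replacing one of two bounding fractions by their mediant, formed by adding numerators and denominators, leads from the crude bounds 3/1 and 4/1 to 355/113:

from fractions import Fraction

PI = Fraction(3141592653589793, 10**15)  # pi to 15 decimal places, enough here

lo, hi = Fraction(3, 1), Fraction(4, 1)  # crude starting bounds on pi
while True:
    mediant = Fraction(lo.numerator + hi.numerator,
                       lo.denominator + hi.denominator)
    if mediant == Fraction(355, 113):
        print("reached Milü:", mediant)
        break
    if mediant < PI:
        lo = mediant   # tighten the lower bound
    else:
        hi = mediant   # tighten the upper bound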
An easy mnemonic helps memorize this fraction: write down each of the first three odd numbers twice, 1 1 3 3 5 5, then divide the decimal number represented by the last three digits (355) by the decimal number given by the first three digits (113): 355/113 ≈ π. (In East Asia, fractions are read by stating the denominator first, followed by the numerator.) Alternatively, 1/π ≈ 113/355.
See also
Continued fraction expansion of and its convergents
Approximations of π
Pi Approximation Day
Notes
References
External links
Fractional Approximations of Pi
Pi
History of mathematics
History of science and technology in China
Chinese mathematical discoveries
Chinese words and phrases
Approximations
Rational numbers
Zu Chongzhi | Milü | [
"Mathematics"
] | 566 | [
"Mathematical relations",
"Pi",
"Approximations"
] |
7,315,935 | https://en.wikipedia.org/wiki/Unit%20Operations%20of%20Chemical%20Engineering | Unit Operations of Chemical Engineering, first published in 1956, is one of the oldest chemical engineering textbooks still in widespread use. The current seventh edition, published in 2004, continues its successful tradition of being used as a textbook in university undergraduate chemical engineering courses. It is widely used in colleges and universities throughout the world, and is often referred to simply as "McCabe–Smith–Harriott" or "MSH".
Subjects covered in the book
The book starts with an introductory chapter devoted to definitions and principles. It then follows with 28 additional chapters, each covering a principal chemical engineering unit operation. The 28 chapters are grouped into four major sections:
Fluid mechanics
Heat transfer
Mass transfer and equilibrium stages
Operations involving particulate solids.
A more detailed table of contents is available on the Internet.
See also
Chemical engineer
:Category:Unit operations
Distillation Design
Perry's Chemical Engineers' Handbook
Process design
Transport Phenomena
Unit operations
References
Chemical engineering books
Engineering textbooks
Unit operations
1956 books | Unit Operations of Chemical Engineering | [
"Chemistry",
"Engineering"
] | 192 | [
"Chemical process engineering",
"Chemical engineering books",
"Chemical engineering",
"Unit operations"
] |
7,316,200 | https://en.wikipedia.org/wiki/Explant%20culture | In biology, explant culture is a technique used to organotypically culture cells from a piece or pieces of tissue or an organ removed from a plant or animal. The term explant can be applied to samples obtained from any part of the organism. The extraction process is performed under extensively sterilized conditions, and the culture can typically be used for two to three weeks.
The major advantage of explant culture is the maintenance of near in vivo environment in the laboratory for a short duration of time. This experimental setup allows investigators to perform experiments and easily visualize the impact of tests.
This ex vivo model requires a highly maintained environment in order to recreate original cellular conditions. The composition of extracellular matrix, for example, must be precisely similar to that of in vivo conditions in order to induce naturally observed behaviors of cells. The growth medium also must be considered, as different solutions may be needed for different experiments.
The tissue must be placed and harvested in an aseptic environment, such as a sterile laminar flow tissue culture hood. The samples are often minced, and the pieces are placed in a cell culture dish containing growth media. Over time, progenitor cells migrate out of the tissue onto the surface of the dish. These primary cells can then be further expanded and transferred into fresh dishes through micropropagation.
Explant culture can also refer to the culturing of the tissue pieces themselves, where cells are left in their surrounding extracellular matrix to more accurately mimic the in vivo environment e.g. cartilage explant culture, or blastocyst implant culture.
Application
Historically, explant culture has been used in several areas of biological research. Organogenesis and morphogenesis in fetus have been studied with explant cultures. Since the explant culture is grown in the lab, the area or cells of interest can be labeled with fluorescent markers. These transgenic labels can help researchers observe growth of specific cells. For example, neural tissue development and central nervous system regeneration have been studied with organotypic explant culture.
The role of a specific gene, its expression, and its mechanism of action can all be studied with explant culture as well. Certain factors that control or contribute to growth can be identified during different stages of embryogenesis. Looking at the expression pattern allows tracking of where the gene transcripts have been, and how much of the gene has been expressed can be quantified too.
Coupled with stem cell research, researchers have successfully grown simple organs derived from autologous human pluripotent stem cells. So far, bladders and tracheas have been developed. This method attempts to address tissue rejection, and there are already cases of successful transplantation. A research team from the Wake Forest Institute for Regenerative Medicine in Winston-Salem, North Carolina, successfully transplanted stem cell-engineered bladders into seven pediatric patients with malfunctioning bladders. Another case came from a team at University College London, UK, which transplanted a windpipe derived from the patient's own stem cells.
Even with all the advantages of explant culture, there are still several caveats. The downside of explant culture is that it does not provide sufficient time to study chronic diseases: although two to three weeks may be enough time to study acute changes, it is not fit for experiments requiring long-term observation.
Current research
Retina
Many neurobiological processes have been studied with retinal explant cultures. Understanding the retina's development has paved the way for researchers to study pathological neurodegeneration and related retinal diseases more closely. Cellular grafts derived from retinal stem cells are an active area of research for treating macular degeneration, retinitis pigmentosa, and glaucoma.
References
Cell culture | Explant culture | [
"Biology"
] | 778 | [
"Model organisms",
"Cell culture"
] |
7,316,682 | https://en.wikipedia.org/wiki/Fitch%20notation | Fitch notation, also known as Fitch diagrams (named after Frederic Fitch), is a notational system for constructing formal proofs used in sentential logics and predicate logics. Fitch-style proofs arrange the sequence of sentences that make up the proof into rows. A unique feature of Fitch notation is that the degree of indentation of each row conveys which assumptions are active for that step.
Example
Each row in a Fitch-style proof is either:
an assumption or subproof assumption.
a sentence justified by the citation of (1) a rule of inference and (2) the prior line or lines of the proof that license that rule.
Introducing a new assumption increases the level of indentation, and begins a new vertical "scope" bar that continues to indent subsequent lines until the assumption is discharged. This mechanism immediately conveys which assumptions are active for any given line in the proof, without the assumptions needing to be rewritten on every line (as with sequent-style proofs).
The following example displays the main features of Fitch notation:
0 |__ [assumption, want P iff not not P]
1 | |__ P [assumption, want not not P]
2 | | |__ not P [assumption, for reduction]
3 | | | contradiction [contradiction introduction: 1, 2]
4 | | not not P [negation introduction: 2]
|
5 | |__ not not P [assumption, want P]
6 | | P [negation elimination: 5]
|
7 | P iff not not P [biconditional introduction: 1 - 4, 5 - 6]
0. The null assumption, i.e., we are proving a tautology
1. Our first subproof: we assume the l.h.s. to show the r.h.s. follows
2. A subsubproof: we are free to assume what we want. Here we aim for a reductio ad absurdum
3. We now have a contradiction
4. We are allowed to prefix the statement that "caused" the contradiction with a not
5. Our second subproof: we assume the r.h.s. to show the l.h.s. follows
6. We invoke the rule that allows us to remove an even number of nots from a statement prefix
7. From 1 to 4 we have shown if P then not not P, from 5 to 6 we have shown P if not not P; hence we are allowed to introduce the biconditional in 7, where iff stands for if and only if
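The same tautology can be checked mechanically in a proof assistant. As a rough sketch in Lean 4 (assuming core Lean only; the right-to-left direction uses Classical.byContradiction, since eliminating the double negation requires classical reasoning, just as step 6 above does):

-- P ↔ ¬¬P, mirroring the Fitch proof above.
-- Forward direction (lines 1-4): assume P, then ¬P, and derive a contradiction.
-- Backward direction (lines 5-6): assume ¬¬P and remove the double negation.
theorem p_iff_not_not (P : Prop) : P ↔ ¬¬P :=
  Iff.intro
    (fun hp hnp => hnp hp)
    (fun hnn => Classical.byContradiction (fun hnp => hnn hnp))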
See also
Natural deduction
References
External links
Fitch's Paradox of Knowability
An online Java application for proof building
A Web implementation of Fitch proof system (propositional and first-order) at proofmod.mindconnect.cc
The Jape general-purpose proof assistant (see Jape)
Resources for typesetting proofs in Fitch notation with LaTeX (see LaTeX)
FitchJS: An open source web app to construct proofs in Fitch notation (and export to LaTeX)
Natural deduction proof editor and checker in Fitch notation
Philosophical logic
Logical calculi | Fitch notation | [
"Mathematics"
] | 646 | [
"Mathematical logic",
"Logical calculi"
] |
7,318,182 | https://en.wikipedia.org/wiki/Superordinate%20goals | In social psychology, superordinate goals are goals that are worth completing but require two or more social groups to cooperatively achieve. The idea was proposed by social psychologist Muzafer Sherif in his experiments on intergroup relations, run in the 1940s and 1950s, as a way of reducing conflict between competing groups. Sherif's idea was to downplay the two separate group identities and encourage the two groups to think of themselves as one larger, superordinate group. This approach has been applied in many contexts to reduce intergroup conflict, including in classrooms and business organizations. However, it has also been critiqued by other social psychologists who have proposed competing theories of intergroup conflict, such as contact theory and social categorization theory.
In the context of goal-setting theory, the concept is seen in terms of three goal levels, classified as subordinate, intermediate and superordinate. An organization's superordinate goals are expressed through its vision and mission statement and support the strategic alignment of activities (subordinate and intermediate goals) with the overall purpose (superordinate goals).
Origin
Superordinate goals were first described and proposed as a solution to intergroup conflict by social psychologist Muzafer Sherif. He studied conflict by creating a boys' summer camp for his Robbers Cave experiments. Sherif assigned the participating campers to two separate groups, the blue and red groups. The boys had separate games and activities, lived in different cabins, ate at different tables, and only spent time with their own group. Sherif then introduced competition between the groups, setting up athletic contests between them. This created conflict between the two groups of boys that developed into hostile attitudes towards the other group, pranking, name-calling, shows of group pride, negative stereotyping, and even occasionally physical violence.
In order to reduce the conflict between the two groups of boys, Sherif had first attempted to have both groups spend time together non-competitively. He had also encouraged them to mix and eat meals and play games with boys from the other group. However, the groups remained hostile toward each other. He had also tried to unite both groups against a common enemy, an outside summer camp, in an early version of the experiment. However, this was deemed an inadequate solution as this simply created a new conflict between the new group and the common enemy.
Sherif then introduced superordinate goals as a possible solution to the conflict. These were goals that were important to the summer camp but could only be achieved with both groups working together, such as obtaining water during a water shortage or procuring a film that both groups wanted to see but did not have enough money for. Sherif found that these goals encouraged cooperation between the boys, which reduced conflict between the groups, increased positive beliefs about boys from the other group, and increased cross-group friendships.
Background
Superordinate goals are most often discussed in the context of realistic conflict theory, which proposes that most intergroup conflicts stem from a fight over scarce resources, especially in situations that are seen as zero-sum. Under realistic conflict theory, prejudice and discrimination are functional, because groups are tools used to achieve goals, including obtaining scarce resources that would be difficult to get as an individual. In this case, groups see other groups with similar goals as threats and therefore perceive them negatively. Groups that are both competing for the same limited resource are said to have a negative interdependence. On the other hand, there are groups that benefit from working together on goals that are not zero-sum. In this case, these groups are said to have a positive interdependence.
In order to remove competition between different factions under realistic group conflict theory, it is necessary to have non-zero sum goals that create a positive interdependence within groups rather than a negative interdependence. Superordinate goals can create positive interdependence if they are seen as desirable by both groups but are not achievable by each faction independently.
Psychological Mechanisms
Work in social psychology suggests that superordinate goals differ from single group goals in that they make the larger group identity more salient and increase positive beliefs about everyone in the larger superordinate group.
Cooperation and Interdependence
Superordinate goals differ from smaller group goals in that they cannot be achieved by a single small group, and thus force multiple groups to work together, encouraging cooperation and penalizing competition. This encourages each group to consider the other group positively rather than negatively, as the other group is instrumental to achieving the common goal. This fosters a sense of positive interdependence rather than negative interdependence.
Superordinate Goals and Identity
In addition to increasing positive interdependence, having two groups work together on a single superordinate goal makes the larger group identity more salient. In effect, superordinate goals make it more likely that both groups will consider themselves as part of a larger superordinate group that has a common goal rather than two independent groups who are in conflict with each other. In the case of Sherif's summer camp, both groups of boys, the red and the blue, thought of themselves simply as campers when they were working together, rather than as part of the blue or red groups.
Ingroups
Having both groups consider themselves part of one larger superordinate group is valuable to the reduction of discrimination, because evaluation of members in one's own group tends to be more positive than evaluation of members outside of one's group. However, the two groups do not need to lose their individual identities in order to become part of the superordinate group. In fact, superordinate goals work best to reduce intergroup conflict when both groups consider themselves subgroups that have a shared identity and a common fate. This allows both groups to keep the positive aspects of their individual identities while also keeping salient everything that the two subgroups have in common.
Rebuttal of Contact Theory
Sherif's work on superordinate goals is widely seen as a rebuttal of contact theory, which states that prejudice and discrimination between groups widely exists due to a lack of contact between them. This lack of contact causes both sides to develop misconceptions about those who they do not know and to act on those misconceptions in discriminatory ways. However, Sherif's work showed that contact between groups is not enough to eliminate prejudice and discrimination. If groups are competing for the same limited resources, increasing contact between the groups will not convince the groups to see each other more positively. Instead, they will continue to discriminate, as the boys in Sherif's summer camps did. This is especially true when the groups are of unequal status and one group can control the resources and power.
Caveats and Critiques
Longevity
The effects of superordinate goals have not always been shown to last beyond the completion of such goals. In Sherif's study, the separate group identities did not dissolve until the end of the camp. The two groups of boys had less hostility toward each other but still identified with their own groups rather than the larger superordinate identity.
Zero-Sum Goals
In some cases, there are no superordinate goals that can bring together two separate groups. If there really are zero-sum goals that put groups in competition with each other, groups will remain separate and will stereotype each other and discriminate against each other. In some cases, simply the perception that goals are zero-sum, whether they are or not, can increase prejudice. Therefore, not only is there a need for non-zero-sum goals, but they must be perceived as such.
Complementarity
Superordinate goals are not as effective when both groups are performing similar or the same roles within the group to achieve the goal. If this is the case, both groups may see the other as infringing on their work or getting in the way. It is considered to be more effective to have members of each group playing complementary roles in the achievement of the goal, although the evidence to support this idea is mixed.
Absence of Trust or Inequality of Power
Some also argue that with an absence of trust, the prospect of working together to achieve a mutual goal may not serve to bring groups to a superordinate identity. In some cases, when there are inequalities of power or a lack of trust among groups, the idea that they must work together and foster trust and positive interdependence may backfire and lead to more discrimination rather than less.
Competing Theories
Social categorization theory and social identity theory differ from realistic group conflict theory in that they suggest that people do not only belong to groups to gain material advantage. Therefore, these theories propose other ways of improving intergroup social relations.
Social Categorization Theory
Social categorization theory proposes that people naturally categorize themselves and others into groups, even when there is no motive to do so. Supporting this idea is Tajfel's minimal group paradigm, which has shown that there is discrimination among groups created in a laboratory that have no history, future, interaction, or motivation. Social categorization suggests that intergroup competition may be a feature of this tendency to categorize and may arise without zero-sum goals. Under Tajfel's paradigm, people will go so far as to hurt their own group in order to harm the other group even more. Thus, superordinate goals may not solve all forms of discrimination.
Social Identity Theory
Social identity theory proposes that not only do people naturally categorize themselves and others, but they derive part of their own identities from being a part of a social group. Being part of a social group is a source of positive self-esteem and motivates individuals to think of their own group as better than other groups. Under social identity theory, superordinate goals are only useful insofar as they make salient the superordinate identity. It is the superordinate identity that is important for reducing intergroup conflict, and not the goals themselves. If the superordinate identity can be made salient without the use of goals, then the goals themselves are not instrumental to reducing conflict.
Applications
Superordinate goals have been applied to multiple types of situations in order to reduce conflict between groups.
Jigsaw Classroom
Elliot Aronson applied the idea of superordinate goals in Austin, Texas during the integration of the Austin public schools. Aronson used group projects in elementary school classrooms as a way to get white and black children to work together and reduce discrimination. Aronson had teachers assign projects that could only be completed if everyone in the group participated, and had the teachers give group grades. Having children work together and rely on each other for grades fostered positive interdependence and increased liking among the black and white children as well as decreased bullying and discrimination. Additionally, it increased the performance of all the children.
Business Organizations and Negotiations
Blake and Mouton applied superordinate goals to conflicts in business organizations. They specify that in a business context, the superordinate goals must be attractive to both parties in the organization or negotiation setting. If both parties are not interested in pursuing the goal or believe that they are better off without it, then the superordinate goal will not help to reduce conflict between the groups. Blake and Mouton also suggest that superordinate goals will often be a consequence of their intergroup problem-solving model.
Israeli-Palestinian Conflict
Herbert Kelman applied superordinate goals to the Israeli-Palestinian conflict to improve relations between members of the two groups. He created problem-solving workshops where Israelis and Palestinians were encouraged to solve together the problems given to them as well as to interact in a positive atmosphere. These workshops often focused on specific problems, such as tourism, economic development, or trade, which allowed both groups to find practical, positive solutions to these problems and improve relations between the groups.
Interracial Basketball Teams
McClendon and Eitzen studied interracial basketball teams in the 1970s and found that interracial basketball teams where the interdependence of black and white team members was high and the team had a high winning percentage had lower instances of anti-black attitudes among white players and higher preference for integration. However, teams that did not have high interdependence among black and white teammates or high winning percentages did not show reduced prejudice. Additionally, black members of the winning teams did not show more positive attitudes towards their white teammates than the losing teams.
References
Motivation | Superordinate goals | [
"Biology"
] | 2,566 | [
"Ethology",
"Behavior",
"Motivation",
"Human behavior"
] |
9,484,259 | https://en.wikipedia.org/wiki/Global%20Ocean%20Observing%20System | The Global Ocean Observing System (GOOS) is a global system for sustained observations of the ocean, comprising the oceanographic component of the Global Earth Observing System of Systems (GEOSS). GOOS is administered by the Intergovernmental Oceanographic Commission (IOC) and joins the Global Climate Observing System (GCOS) and the Global Terrestrial Observing System (GTOS) as fundamental building blocks of the GEOSS.
GOOS is a platform for:
International cooperation for sustained observation of the oceans.
Generation of oceanographic products and services.
Interaction between research, operational, and user communities.
GOOS serves oceanographic researchers, coastal managers, parties to international conventions, national meteorological and oceanographic agencies, hydrographic offices, marine and coastal industries, policymakers, and the interested general public.
GOOS is sponsored by the IOC, UNEP, WMO, and ICSU. It is implemented by member states via their government agencies, navies and oceanographic research institutions working together in a wide range of thematic panels and regional alliances.
The GOOS Scientific Steering Committee provides guidance, while scientific and technical panels evaluate Essential Ocean Variable observation systems. The secretariat director from 2004 to 2011 was Keith Alverson; from 2011 to 2022 it was Albert Fischer.
Essential ocean variables
Essential Ocean Variables are a collection of ocean properties selected to provide the best, most cost-effective suite of data for quantifying key ocean processes. They are selected based on their relevance, feasibility, and cost-effectiveness. They fall into four categories: physics, biogeochemistry, ecosystems, and cross-disciplinary. Their consistent usage is promoted by agencies such as GOOS and the Southern Ocean Observing System (SOOS). The EOVs are:
Physics
Sea state
Ocean surface stress
Sea ice
Sea surface height
Sea surface temperature
Subsurface temperature
Surface currents
Subsurface currents
Sea surface salinity
Subsurface salinity
Ocean surface heat flux
Biogeochemistry
Oxygen
Nutrients
Inorganic carbon
Transient tracers
Particulate matter
Nitrous oxide
Stable carbon isotopes
Dissolved organic carbon
Ecosystems
Phytoplankton biomass and diversity
Zooplankton biomass and diversity
Fish abundance and distribution
Marine turtles, birds, mammals abundance and distribution
Hard coral cover and composition
Seagrass cover and composition
Macroalgal canopy cover and composition
Mangrove cover and composition
Microbe biomass and diversity (*emerging)
Invertebrate abundance and distribution (*emerging)
Cross-disciplinary
Ocean color
Ocean Sound
See also
Integrated Ocean Observing System
Terrestrial Ecosystem Monitoring Sites (GTOS)
References
External links
GOOS Web
Oceanography
Earth observation projects | Global Ocean Observing System | [
"Physics",
"Environmental_science"
] | 523 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
9,484,892 | https://en.wikipedia.org/wiki/Remediation%20of%20contaminated%20sites%20with%20cement | Remediation of contaminated sites with cement, also called solidification/stabilization with cement (S/S with cement), is a common method for the safe environmental remediation of contaminated land using cement. The cement solidifies the contaminated soil and prevents pollutants from moving, for example by rain leaching pollutants into the groundwater or carrying them into streams with runoff or snowmelt. Developed in the 1950s, the technology is widely used today to treat industrial hazardous waste and contaminated material at brownfield sites, i.e. abandoned or underutilized properties that are not being redeveloped because of fears that they may be contaminated with hazardous waste. S/S provides an economically viable means of treating contaminated sites. This technology treats and contains contaminated soil on site, thereby reducing the need for landfills.
Process
The solidification/stabilization method utilizes chemically reactive formulations that form stable solids that are non-hazardous or less hazardous than the original materials. Solidification refers to the physical changes in the contaminated material when a certain binding agent is added: an increase in compressive strength, a decrease in permeability, and encapsulation of the hazardous materials. Stabilization refers to the chemical changes between the stabilizing (binding) agent and the hazardous constituent, which should yield a less soluble, less toxic constituent with hindered mobility. Common binding agents include, but are not limited to, portland cement, lime, limestone, fly ash, slag, clay, and gypsum. Because of the vast range of hazardous materials, each agent should be tested on the site before a full-scale project is put under way. Most binding agents used are blends of various single binding agents, chosen according to the hazardous material to be treated. Portland cement has been used to treat more contaminated material than any other S/S binding agent because of its ability to bind free liquids, reduce permeability, encapsulate hazardous materials, and reduce the toxicity of certain contaminants. Lime can be used to adjust the pH of the material or to drive off water through its high heat of hydration. Limestone can also be used to adjust pH levels. Slag is often used for economical purposes because of its low cost.
Different methods
In situ
In situ is a Latin phrase meaning "in the place". When referring to chemistry or chemical reactions it means "in the reaction mixture". In situ S/S, accounting for 20% of S/S projects from 1982–2005, is used to mix binding agents into the contaminated material while it remains on the site. Additional benefits of in situ mixing include saving transportation costs, avoiding landfill usage, and lowering the risk that surrounding communities will be exposed to the hazardous materials in transport. In situ mixing treatments can also have the added benefit of improving soil conditions on the site.
Ex situ
Ex situ is a Latin phrase meaning "off site". In ex situ mixing, the hazardous materials are excavated and then machine-mixed with a certain binding agent. This new, less hazardous material is then deposited in a designated area or reused on the initial site. From 1982–2005, ex situ S/S technologies accounted for 80% of the 217 projects that were completed.
Limitations and concerns
Prolonged use of the treated site and environmental and weather conditions may cause the materials used to stabilize the contaminants to erode, limiting the effect of the stabilization on the hazardous materials. Because of this, continuous monitoring of the site is required in order to ensure the contaminants have not remobilized. Environmental factors such as freezing–thawing and wetting–drying have been the focus of many studies dealing with the strength of S/S; it was found that freezing and thawing had the most adverse effects on the durability of the treated materials.
When dealing with a radioactive contaminant, the solidification process may be interfered with by various other types of hazardous waste. Most S/S processes have little or limited effect on organics and pesticides; only by heating these wastes to very high temperatures can organics and pesticides be immobilized. Prior to applying the process to these types of sites, treatability studies need to be conducted in order to determine whether the solidification/stabilization process will be beneficial. These cement processes can result in major volume increases at the site, often up to double the original volume.
Projects
Sydney Tar Ponds
The governments of Canada and the province of Nova Scotia agreed in January 2007 to clean up the infamous Sydney Tar Ponds contaminated site using S/S technology. Cement was mixed into the contaminated waste to solidify and stabilize it. When the S/S process was complete, the solidified areas were covered with an engineered cap consisting of clay, followed by layers of gravel and soil. Finally, the surface was planted with grass and other vegetation.
Former wood treating facility in Port Newark, New Jersey
S/S technologies were used to treat a contaminated former wood treating facility in Port Newark, New Jersey, where soil was contaminated with arsenic, chromium, and polycyclic aromatic hydrocarbons from wood-treating operations. Portland cement at 8% by wet weight of contaminated soil was used. Both in situ and ex situ processes were utilized to treat over 35,000 cubic meters of contaminated soil. The ex situ treated soil was mixed with Portland cement in a pugmill and then placed on top of the in situ treated soil, creating an excellent base for pavement to be placed over the site. The proposed use for the treated site is a shipping container storage area.
Former electric generating station in Boston, Massachusetts
Abandoned warehouses in Boston, Massachusetts are being renovated or torn down in order to build new structures. On this site is the former Central Power System, built in 1890. When built, this power station was considered to be the biggest electric generating plant in the world. The building has been abandoned since the 1950s and has not produced electricity in over 90 years. In the early 1990s, renovations were started but were quickly shut down when free-floating oil was discovered in the sewers. Cleanup efforts were unsuccessful, as they brought more oil onto the site. In 1999, cement-based S/S treatments were utilized to treat 2,100 cubic meters of contaminated materials. Lead- and petroleum-contaminated soils were managed and treated successfully at this site.
Dockside Green in Victoria, British Columbia
A complex of mixed residential, office, retail and commercial space is being built on a parcel of former industrial land in downtown Victoria that was contaminated by lead. 10 tonnes of soil were treated with cement, which was mixed into the soil on site simply by using an excavator bucket. The soil was thus rendered completely safe, as was shown by tests on soil samples.
Former battery breaking site in Brandon, Manitoba
A 10,000 square metre lot formerly occupied by the Brandon Scrap Metal and Iron Company was chosen by the City of Brandon as the site for its new fire and police headquarters. For many years, lead cell batteries were broken up there and the lead was extracted, leaving the untreated cases on site. An environmental assessment showed that the site was contaminated due to heavy metal, lead and hydrocarbon pollution. Cement-based S/S was employed to successfully remediate 600 tonnes of contaminated soil.
Skeet shooting range near St. Catharines, Ontario
A vacant 5-hectare property near the Welland Canal in St. Catharines had surface soil containing dangerous concentrations of lead and polycyclic aromatic hydrocarbons (PAHs) to a depth up to 0.4 m due to the past operations of an adjacent skeet shooting range. About 26,000 tonnes of soil were treated using S/S to bring the contamination levels below the Ontario Land Disposal Regulations criteria.
See also
Hazardous Waste
Pondcrete
Salt-concrete
Saltcrete
Soil vapor extraction
Soil contamination
Water pollution
Environmental remediation
Radioactive waste
References
External links
A Citizen’s Guide to Solidification/Stabilization
Technical References from U.S. Environmental Protection Agency, U.S. Army Corps of Engineers, and U.K Environment Agency: https://web.archive.org/web/20110317002710/http://www.cetco.com/ccs/Literature.aspx
Waste treatment technology | Remediation of contaminated sites with cement | [
"Chemistry",
"Engineering"
] | 1,681 | [
"Water treatment",
"Waste treatment technology",
"Environmental engineering"
] |
9,485,147 | https://en.wikipedia.org/wiki/CoNTub | CoNTub is a software project written in Java which runs on Windows, Mac OS X, Linux and Unix operating systems through any Java-enabled web browser. It is the first implementation of an algorithm for generating 3D structures of arbitrary carbon nanotube connections by means of the placement of non-hexagonal (pentagonal or heptagonal) rings, also referred to as defects or disclinations.
The software is a set of tools dedicated to the construction of complex carbon nanotube structures for use in computational chemistry. CoNTub 1.0[1] was the first implementation for building these complex structures and included nanotube heterojunctions, while CoNTub 2.0[2] is mainly devoted to three-nanotube junctions. Its aim is to help in the design of, and research on, new nanotube-based devices. CoNTub is based on strip algebra and is able to find the unique structure for connecting two specific and arbitrary carbon nanotubes, as well as many of the possible three-tube junctions.
CoNTub generates the geometry of various types of nanotube junctions, i.e., nanotube heterojunctions and three-nanotube junctions, including also single-walled nanotubes (SWNTs) and multi-walled nanotubes (MWNTs).
Although the current version of CoNTub is v2.0, this version does not supersede v1.0: v2.0 is currently dedicated only to three-nanotube junctions, although the incorporation of v1.0 functionality into v2.0 is planned. Nanotube heterojunctions can be generated only with v1.0.
CoNTub v1.0 is organized in five tabbed panels[1], the first three being dedicated to structure generation, the fourth to the output in PDB format, and the fifth containing a short help section.
CoNTub v2.0 has undergone a major redesign: the panels have been removed and, instead, a conventional menu bar has been added from which the type of structure to be generated can be chosen. Although the menu item for heterojunction generation appears in the menu, it is disabled, so nanotube heterojunctions (NTHJs) can only be generated with v1.0.
Features
3D molecular viewer
Structure generation of carbon nanotube Heterojunctions from indices(i,j) and length (l) of the two nanotubes.
Structure generation of single-walled nanotubes (SWNTs) from indices(i,j) and length (l)
Structure generation of symmetric Three-nanotube junctions (TNJ) selected from a list of possibilities, given the indices of the joined nanotubes.
Plotting the electronic band structure and density of states (DOS) for single-walled nanotubes (SWNTs)
Structure generation of multi-walled nanotubes from indices(i,j) and length (l), number of shells(N) and spacing(S).
Output the xyz coordinates of the structures in a (PDB) file format
Nanotube generation
To generate a SWNT, it is only necessary to introduce the indices (i,j) of the tube, its desired length (in angstroms), and the type of atom used for termination of dangling bonds. CoNTub displays the resulting nanotube, as well as its electronic band structure and density of states (DOS), following a tight-binding model.
MWNTs (multiple tubes sharing the same axis and length) are created by providing the indices of the innermost tube (i,j), the desired length (l), the number of shells (N), and the approximate distance between shells, or spacing (S), in angstroms. The default value for the spacing corresponds to the standard distance between layers in crystalline graphite (3.4 Å). CoNTub automatically selects the indices of the remaining tubes, trying to adjust the interlayer spacing, and tries to use tubes with the same chirality as that of the inner nanotube; the geometric part of this selection is sketched below.
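The geometry underlying that shell selection is simple: an (i,j) tube has diameter d = a·√(i² + ij + j²)/π, with the graphene lattice constant a ≈ 2.46 Å, so candidate shells can be ranked by how closely they add the requested interlayer spacing. The sketch below illustrates that selection step only; the brute-force search and its chirality-penalty weighting are assumptions for illustration, not CoNTub's actual algorithm:

import math

A_LATTICE = 2.46   # graphene lattice constant, angstroms

def diameter(i, j):
    """Diameter of an (i,j) carbon nanotube, in angstroms."""
    return A_LATTICE * math.sqrt(i * i + i * j + j * j) / math.pi

def chiral_angle(i, j):
    """Chiral angle of an (i,j) tube, in radians (0 = zigzag, pi/6 = armchair)."""
    return math.atan2(math.sqrt(3.0) * j, 2.0 * i + j)

def next_shell(i, j, spacing=3.4):
    """Pick indices for the next MWNT shell by brute force: best match to the
    target diameter (inner diameter + twice the spacing), with a small
    penalty for changing chirality. The weighting is an arbitrary choice."""
    target = diameter(i, j) + 2.0 * spacing
    best, best_score = None, float("inf")
    for ii in range(i, i + 20):
        for jj in range(ii + 1):
            score = (abs(diameter(ii, jj) - target)
                     + 0.5 * abs(chiral_angle(ii, jj) - chiral_angle(i, j)))
            if score < best_score:
                best, best_score = (ii, jj), score
    return best

inner = (10, 10)                        # armchair inner tube, d ~ 13.6 angstroms
print(inner, "->", next_shell(*inner))  # picks (15, 15), also armchair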
Heterojunction generation
This is the core of the CoNTub[1] program. It implements strip algebra, which allows two perfect carbon nanotubes to be joined, independently of their geometry, radius or chirality, with the simplest geometry possible, i.e. with the lowest number of non-hexagonal rings (a pentagon and a heptagon), also called defects or disclinations. A connection between two tubes is always possible, and strip algebra ensures that the solution is unique and depends only on the indices (i,j) of the two tubes.
C3 Symmetric Three-Nanotube Junction generation
A further implementation of strip algebra was released in the second version of CoNTub, in order to elucidate the precise location of the atoms and rings that lead to a junction of three nanotubes.
Connecting three nanotubes requires the presence of at least six heptagons, instead of the single pentagon and heptagon required for a heterojunction. In this case, the set of equations that rules the geometry has more variables than restrictions, so the possible geometries constitute an infinite set. The detailed procedure for the nanotube construction has also been published.
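These minimum defect counts follow from Euler's polyhedron formula: for a trivalent sp² carbon network containing p pentagons, h heptagons and any number of hexagons, counting vertices, edges and faces gives p − h = 6χ, where χ is the Euler characteristic of the surface. Treating a junction of n open tubes as a sphere with n punctures (χ = 2 − n) reproduces the numbers in the text; the toy calculation below is an illustrative aside, not part of CoNTub:

```python
def pentagon_heptagon_balance(n_tubes: int) -> int:
    """Required value of p - h for a junction of n open nanotube stubs,
    from Euler's formula p - h = 6 * chi with chi = 2 - n_tubes."""
    return 6 * (2 - n_tubes)

print(pentagon_heptagon_balance(2))  # 0: pentagons pair with heptagons (1 + 1)
print(pentagon_heptagon_balance(3))  # -6: at least six excess heptagons
```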
Imposing additional restrictions on the geometry can ease the search for viable geometries, and this is what the current version of CoNTub does: by forcing the connected tubes to be of the same kind and imposing an additional C3 symmetry, the geometry can be constructed in an automated way. However, even with these restrictions, the possibilities are still infinite. Therefore, a way to estimate the viability of the junction, even before constructing it, had to be developed. Given that non-hexagonal rings
Image gallery
See also
Boron nitride nanotube
Silicon nanotubes
List of software for nanostructures modeling
Potential applications of carbon nanotubes
References
Molecular modelling software
Freeware
Science software
Java platform software | CoNTub | [
"Chemistry"
] | 1,230 | [
"Molecular modelling",
"Molecular modelling software",
"Computational chemistry software"
] |
9,485,457 | https://en.wikipedia.org/wiki/Wildlife%20of%20Iraq | The wildlife of Iraq includes its flora and fauna and their natural habitats. Iraq has multiple biomes, from the mountainous regions in the north to the wet marshlands along the Euphrates river. The western part of the country is mainly desert, with some semi-arid regions. As of 2001, seven of Iraq's mammal species and 12 of its bird species were endangered. The endangered species include the northern bald ibis and Persian fallow deer. The Syrian wild ass is extinct, and the Saudi Arabian dorcas gazelle was declared extinct in 2008.
Mesopotamian marshes
The marshes are home to 40 species of birds and several species of fish, and they demarcate a range limit for a number of bird species. The marshes were once home to millions of birds and a stopover for millions of migratory birds, including flamingos, pelicans and herons, as they migrated from Siberia to Africa. At risk are 40% to 60% of the world's marbled teal population that live in the marshes, along with 90% of the world's population of the Basra reed-warbler. Seven marsh species are near or fully extinct, including the Indian crested porcupine, the bandicoot rat and the marsh gray wolf. The draining of the marshes caused a significant decline in bioproductivity; following the Multi-National Force overthrow of the Saddam Hussein regime, water flow to the marshes was restored and the ecosystem has begun to recover.
Aquatic or semi-aquatic wildlife occurs in and around these lakes:
Lake Habbaniyah
Lake Milh
Lake Qadisiyah
Lake Tharthar
Water birds recorded in marshlands in southern Iraq include little grebe, great crested grebe, cormorant, darter, bittern, grey heron, night heron, purple heron, white stork, cattle egret, sacred ibis, Eurasian teal, common redshank, pied kingfisher, greater spotted eagle, marsh harrier, hooded crow, Iraq babbler, crested lark, pin-tailed sandgrouse, collared dove, Indian roller and starling.
Coral reef
Iraqi coastal waters boast a living coral reef, covering 28 km2 in the Persian Gulf, at the mouth of the Shatt al-Arab river. The coral reef was discovered by joint Iraqi–German expeditions of scientific scuba divers carried out in September 2012 and May 2013. Prior to its discovery, it was believed that Iraq lacked coral reefs, as the turbid waters prevented their detection. Iraqi corals were found to be adapted to one of the most extreme coral-bearing environments in the world, as the seawater temperature in this area ranges between 14 and 34 °C. The reef harbours several living stone corals, octocorals, ophiuroids and bivalves. There are also silica-containing demosponges.
Fauna
Due to its diversity of biomes, from the Mesopotamian Marshes along the Euphrates River to the semi-arid deserts, Iraq is home to a wide variety of endemic animals as well as animals that are widespread and well known worldwide.
Mammals
The Eurasian otter and the smooth-coated otter are carnivorous semiaquatic mammals found in the marshes and rivers; their diets consist primarily of fish, amphibians, crustaceans, insects and birds, but their populations have declined since the 1970s. The Persian leopard is a large carnivorous feline of the northern forests whose diet consists primarily of wild goats; a small population was recorded for the first time at the beginning of the 21st century in the border region between Iraq and Turkey. The sand cat, whose presence was recorded for the first time in the desert of Al-Najaf, is a small carnivorous feline of the sandy deserts whose diet consists of small rodents, cape hare, greater hoopoe lark, desert monitor, sandfish and cerastes vipers. The wildcat is a small feline found primarily in forests whose diet consists of rodents, birds, small reptiles and poultry. Rüppell's fox is a small omnivorous canid found in the deserts north of the Euphrates river whose diet consists of insects, small mammals, lizards and birds. The marbled polecat is an omnivorous weasel found in the deserts of northern Iraq whose diet consists of small rodents, birds, lizards, fish, frogs, fruit and grass. The small Indian mongoose is a small omnivore of the alluvial plains whose diet consists of insects (dragonflies, grasshoppers, mole crickets, ground beetles, earwigs), rodents, amphibians, reptiles, small birds, grasses and small fruits. The goitered gazelle is a herbivorous antelope found in mountains and areas of broken terrain. The wild boar is an omnivorous swine found in the marshes and along the numerous rivers of Iraq; its diet consists of plant matter (rhizomes, roots, bulbs, tubers, nuts, berries, seeds, leaves, bark, twigs, shoots), earthworms, insectivores, insects, rodents, bird eggs, lizards, snakes, frogs and carrion. The Bactrian camel lives in habitats varying from rocky mountains to arid deserts and has a herbivorous diet consisting of various kinds of vegetation. The European hare is a herbivorous lagomorph found on the plateaus of Iraq and along the Tigris river.
Other mammals include:
Indian crested porcupine
Caucasian squirrel
Broad-toothed field mouse
Yellow-necked mouse
House mouse
Black rat
Short-tailed bandicoot rat
Indian gerbil
Sundevall's jird
Extinct fauna
The only confirmed record of a Caspian tiger was a specimen killed near Mosul in 1887. The Asiatic cheetah occurred in the desert west of Basrah until 1926. The last known cheetah in the country was killed by a car. The last known Asiatic lion was killed on the lower Tigris in 1918. The last Arabian oryx was shot in 1914. Syrian elephants roamed Mesopotamia until around 700 BC.
See also
List of mammals of Iraq
Wildlife of Iran
References
External links
"Online Photo Galleries" on Nature and Wildlife of India at "India Nature Watch (INW)" - spreading the love of nature and wildlife in India through photography
Iraq's Unique Wildlife Pushed to Brink by War, Hunting
A lion in Iraq
Iraq | Wildlife of Iraq | [
"Biology"
] | 1,309 | [
"Biota by country",
"Wildlife by country"
] |
9,485,868 | https://en.wikipedia.org/wiki/Membrane%20lipid | Membrane lipids are a group of compounds (structurally similar to fats and oils) which form the lipid bilayer of the cell membrane. The three major classes of membrane lipids are phospholipids, glycolipids, and cholesterol. Lipids are amphiphilic: they have one end that is soluble in water ('polar') and one end that is soluble in fat ('nonpolar'). By forming a double layer with the polar ends pointing outwards and the nonpolar ends pointing inwards, membrane lipids can form a 'lipid bilayer' which keeps the watery interior of the cell separate from the watery exterior. The arrangements of lipids and various proteins, acting as receptors and channel pores in the membrane, control the entry and exit of other molecules and ions as part of the cell's metabolism. In order to perform physiological functions, membrane proteins are able to rotate and diffuse laterally within the two-dimensional expanse of the lipid bilayer, facilitated by a shell of lipids closely attached to the protein surface, called the annular lipid shell.
Biological roles
The bilayer formed by membrane lipids serves as a containment unit of a living cell. Membrane lipids also form a matrix in which membrane proteins reside. Historically, lipids were thought to merely serve a structural role; in fact, their functional roles are many: they serve as regulatory agents in cell growth and adhesion, they participate in the biosynthesis of other biomolecules, and they can increase the catalytic activity of enzymes.
A non-bilayer-forming lipid, monogalactosyl diglyceride (MGDG), predominates among the bulk lipids of thylakoid membranes; when hydrated alone, it forms a reverse hexagonal cylindrical phase. However, in combination with the other lipids and the carotenoids/chlorophylls of thylakoid membranes, it too conforms to a lipid bilayer.
Major classes
Phospholipids
Phospholipids and glycolipids consist of two long, nonpolar (hydrophobic) hydrocarbon chains linked to a hydrophilic head group.
The heads of phospholipids are phosphorylated and they consist of either:
Glycerol (and hence the name phosphoglycerides given to this group of lipids), or
Sphingosine (e.g. sphingomyelin and ceramide).
Glycerol dialkyl glycerol tetraethers (GDGTs) help in the study of ancient environmental conditions.
Glycolipids
The heads of glycolipids (glyco- stands for sugar) contain a sphingosine with one or several sugar units attached to it. The hydrophobic chains belong either to:
two fatty acids (FA) – in the case of the phosphoglycerides, or
one FA and the hydrocarbon tail of sphingosine – in the case of sphingomyelin and the glycolipids.
Galactolipids – monogalactosyl diglyceride (MGDG) and digalactosyl diglyceride (DGDG) – form the predominant lipids in higher-plant chloroplast thylakoid membranes; liposomal structures formed by the total lipid extract of thylakoid membranes have been found to be sensitive to sucrose, which turns the bilayers into micellar structures.
Fatty acids
The fatty acids in phospho- and glycolipids usually contain an even number of carbon atoms, typically between 14 and 24, with 16- and 18-carbon chains being the most common. FAs may be saturated or unsaturated, with the configuration of the double bonds nearly always cis. The length and the degree of unsaturation of FA chains have a profound effect on membrane fluidity.
Plant thylakoid membranes maintain high fluidity, even at relatively cold environmental temperatures, due to the abundance of linolenic acid, an 18-carbon fatty acyl chain with three double bonds, as revealed by 13C NMR studies.
Phosphoglycerides
In phosphoglycerides, the hydroxyl groups at C-1 and C-2 of glycerol are esterified to the carboxyl groups of the FAs. The C-3 hydroxyl group is esterified to phosphoric acid. The resulting compound, called phosphatidate, is the simplest phosphoglyceride. Only small amounts of phosphatidate are present in membranes; however, it is a key intermediate in the biosynthesis of the other phosphoglycerides.
Sphingolipids
Sphingosine is an amino alcohol that contains a long, unsaturated hydrocarbon chain. In sphingomyelin and glycolipids, the amino group of sphingosine is linked to FAs by an amide bond. In sphingomyelin the primary hydroxyl group of sphingosine is esterified to phosphoryl choline.
In glycolipids, the sugar component is attached to this group. The simplest glycolipid is cerebroside, in which there is only one sugar residue, either Glc or Gal. More complex glycolipids, such as gangliosides, contain a branched chain of as many as seven sugar residues.
Sterols
The best-known sterol is cholesterol, which is found in humans and also occurs naturally in the cell membranes of other eukaryotes. Sterols have a rigid, hydrophobic structure of four fused rings and a small polar head group.
Cholesterol is biosynthesised from mevalonate via the cyclisation of the terpenoid squalene. Cell membranes require high levels of cholesterol – typically an average of 20% cholesterol in the whole membrane, increasing locally in raft areas up to 50% cholesterol (percentages are molecular ratios). It associates preferentially with sphingolipids in cholesterol-rich lipid raft areas of the membranes of eukaryotic cells. The formation of lipid rafts promotes the aggregation of peripheral and transmembrane proteins, including the docking of SNARE and VAMP proteins. Phytosterols, such as sitosterol and stigmasterol, and hopanoids serve a similar function in plants and prokaryotes.
See also
Homeoviscous adaptation
Protein-lipid interaction
References
External links
Lipids
Membrane biology | Membrane lipid | [
"Chemistry"
] | 1,393 | [
"Biomolecules by chemical classification",
"Membrane biology",
"Organic compounds",
"Molecular biology",
"Lipids"
] |
9,486,102 | https://en.wikipedia.org/wiki/810%20Seventh%20Avenue | 810 Seventh Avenue is an office skyscraper a few blocks north of Times Square on Seventh Avenue between 52nd and 53rd streets within Midtown Manhattan in New York City, United States. It is owned by SL Green Realty Corp. after its acquisition of Reckson Associates Realty Corp., completed in January 2007. The back of the building is situated on Broadway, diagonally across Broadway and 53rd from CBS's Ed Sullivan Theater, home of The Late Show with Stephen Colbert.
The building has a large number of tenants, including AT&T Wireless, Aegis Capital Corp., CompassRock Real Estate (40th floor), Constellation Energy, EMI Entertainment, Scripps Networks - Ion Media Networks, Hearst Communications, IAC/InterActiveCorp, Insight Communications, The Raine Group, Metromedia Company, Murex, Oppenheimer & Co., TheMarkets.com (6th floor), and Pixafy.
Other details
41 stories, 26 units
Office area:
Retail area:
Garage area:
LEED Certification
References
1969 establishments in New York City
Office buildings completed in 1969
Skyscraper office buildings in Manhattan
Seventh Avenue (Manhattan)
Times Square buildings
Leadership in Energy and Environmental Design certified buildings
1960s architecture in the United States | 810 Seventh Avenue | [
"Engineering"
] | 247 | [
"Building engineering",
"Leadership in Energy and Environmental Design certified buildings"
] |
9,486,251 | https://en.wikipedia.org/wiki/Stachybotrys | Stachybotrys is a genus of molds, hyphomycetes or asexually reproducing, filamentous fungi, now placed in the family Stachybotryaceae. The genus was erected by August Carl Joseph Corda in 1837. Historically, it was considered closely related to the genus Memnoniella, because the spores are produced in slimy heads rather than in dry chains; the synonymy of the two genera is now generally accepted. Most Stachybotrys species inhabit materials rich in cellulose. The genus has a widespread distribution and contained about 50 species in 2008. There are 88 records of Stachybotrys on Species Fungorum (as of 2023), of which 33 species have DNA sequence data in GenBank. Species in the genus are commonly found in soil, plant litter (hay, straw, cereal grains, and decaying plant debris) and air, and a few species have been found on damp paper, cotton, linen, cellulose-based building materials in water-damaged indoor buildings, and air ducts, from both aquatic and terrestrial habitats (Izabel et al. 2010; Lombard et al. 2016; Hyde et al. 2020a).
The name of Stachybotrys is derived from the Greek words σταχυς stakhus (ear of grain, stalk, stick; metaphorically, progeny) and βότρυς botrus (cluster or bunch as in grapes, trusses).
The most infamous species, Stachybotrys chartarum (previously known as Stachybotrys atra) and Stachybotrys chlorohalonata, are known as black mold or toxic black mold in the U.S., and are frequently associated with poor indoor air quality that arises after fungal growth on water-damaged building materials. Stachybotrys chemotypes are toxic, with one producing trichothecene mycotoxins, including satratoxins, and another producing atranones. However, the association of Stachybotrys mold with specific health conditions is not well proven, and there is debate within the scientific community.
Conidia
Conidia are borne in slimy masses and are smooth to coarsely rough, dark olivaceous to brownish black, obovoid, becoming ellipsoid with age, 10–13 × 5–7 μm. Phialides are obovate or ellipsoidal, colorless early on and turning olivaceous with maturity, smooth, 12–14 × 5–7 μm, in clusters of 5 to 9 phialides. Conidiophores are simple, erect, smooth to rough, colorless to olivaceous, slightly enlarged apically, mostly unbranched but occasionally branched. Conidia of Stachybotrys are very characteristic and can be confidently identified in spore count samples. This genus is closely related to Memnoniella; species of Memnoniella may occasionally develop Stachybotrys-like conidia, and vice versa.
Detection
Four distinctive microbial volatile organic compounds (MVOCs) – 1-butanol, 3-methyl-1-butanol, 3-methyl-2-butanol, and thujopsene – were detected on rice cultures, and only one (1-butanol) was detected on gypsum board cultures.
Pathogenicity
Symptoms of Stachybotrys exposure in humans
A controversy began in the early 1990s after two infant deaths and multiple cases of pulmonary hemorrhage in children from poor areas of Cleveland, Ohio, United States, were initially linked to exposure to heavy amounts of Stachybotrys chartarum. Subsequent and extensive reanalysis of the cases by the United States Centers for Disease Control and Prevention failed to find any link between the deaths and the mold exposure.
Species
As accepted by Species Fungorum (as of July 2023):
Stachybotrys aksuensis
Stachybotrys aloicola L. Lombard & Crous (2014)
Stachybotrys alternans Bonord. (1851)
Stachybotrys aurantius
Stachybotrys bambusicola
Stachybotrys biformis
Stachybotrys bisbyi
Stachybotrys breviuscula McKenzie (1991)
Stachybotrys chartarum (Ehrenb.) S. Hughes (1958)
Stachybotrys chlorohalonatus B. Andersen & Thrane (2003)
Stachybotrys clitoriae
Stachybotrys cordylines
Stachybotrys cylindrospora C.N. Jensen (1912)
Stachybotrys dakotensis
Stachybotrys dolichophialis L. Lombard & Crous (2016)
Stachybotrys echinatus
Stachybotrys elasticae
Stachybotrys freycinetiae McKenzie (1991)
Stachybotrys frondicola (K.D. Hyde, Goh, Joanne E. Taylor & J. Fröhl.) Yong Wang bis, K.D. Hyde, McKenzie, Y.L. Jiang & D.W. Li (2015)
Stachybotrys gamsii (K.D. Hyde, Goh, Joanne E. Taylor & J. Fröhl.) Yong Wang bis, K.D. Hyde, McKenzie, Y.L. Jiang & D.W. Li (2015)
Stachybotrys globosus
Stachybotrys guttulisporus
Stachybotrys havanensis
Stachybotrys humilis
Stachybotrys indicoides
Stachybotrys indicus
Stachybotrys jiangziensis
Stachybotrys kampalensis Hansf. (1943)
Stachybotrys kapiti Whitton, McKenzie & K.D. Hyde (2001)
Stachybotrys klebahnii
Stachybotrys leprosus
Stachybotrys levisporus
Stachybotrys limonisporus
Stachybotrys littoralis
Stachybotrys longistipitatus (D.W. Li, Chin S. Yang, Vesper & Haugland) D.W. Li, Chin S. Yang, Vesper & Haugland (2015)
Stachybotrys lunzinensis
Stachybotrys mangiferae P.C. Misra & S.K. Srivast. (1982)
Stachybotrys mexicanus J. Mena & Heredia (2009); Stachybotryaceae
Stachybotrys microspora (B.L. Mathur & Sankhla) S.C. Jong & E.E. Davis (1976)
Stachybotrys mohanramii
Stachybotrys musae
Stachybotrys nepalensis
Stachybotrys nephrodes McKenzie (1991)
Stachybotrys nephrospora Hansf. (1943)
Stachybotrys nielamuensis Y.M. Wu & T.Y. Zhang (2009)
Stachybotrys oenanthes M.B. Ellis (1971)
Stachybotrys pallescens
Stachybotrys pallidus
Stachybotrys palmae
Stachybotrys palmicola
Stachybotrys palmijunci
Stachybotrys parvisporus S. Hughes (1952)
Stachybotrys parvus
Stachybotrys proliferatus
Stachybotrys punctatus
Stachybotrys queenslandicus
Stachybotrys ramosus
Stachybotrys reniformis
Stachybotrys renisporoides
Stachybotrys renisporus
Stachybotrys reniverrucosus
Stachybotrys ruwenzoriensis Matsush. (1985)
Stachybotrys sacchari
Stachybotrys sansevieriae G.P. Agarwal & N.D. Sharma (1974)
Stachybotrys sinuatophorus Matsush. (1971)
Stachybotrys socia
Stachybotrys sphaerosporus
Stachybotrys stilboideus
Stachybotrys subcylindrosporus
Stachybotrys subreniformis
Stachybotrys subsylvaticus
Stachybotrys suthepensis Photita, P. Lumyong, K.D. Hyde & McKenzie (2003)
Stachybotrys taiwanensis
Stachybotrys terrestris
Stachybotrys thaxteri
Stachybotrys theobromae Hansf. (1943)
Stachybotrys variabilis
Stachybotrys verrucisporus
Stachybotrys verrucosus
Stachybotrys virgatus
Stachybotrys voglinoi
Stachybotrys waitakere Whitton, McKenzie & K.D. Hyde (2001)
Stachybotrys xanthosomatis
Stachybotrys xigazenensis
Stachybotrys yunnanensis
Stachybotrys yushuensis
Stachybotrys zeae
Stachybotrys zhangmuensis
Stachybotrys zingiberis
Stachybotrys zuckii
See also
Bioaerosol
Mold growth, assessment, and remediation
Mold health issues
Sick building syndrome
References
Notes
Further reading
External links
Hypocreales genera
Environmental toxicology
Stachybotryaceae
Taxa named by August Carl Joseph Corda
Taxa described in 1837 | Stachybotrys | [
"Environmental_science"
] | 2,030 | [
"Toxicology",
"Environmental toxicology"
] |
9,487,080 | https://en.wikipedia.org/wiki/Rusi%20Taleyarkhan | Rusi P. Taleyarkhan is a nuclear engineer and has been a faculty member in the Department of Nuclear Engineering at Purdue University since 2003. Prior to that, he was on staff at the Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee. He obtained his Bachelor of Technology degree in mechanical engineering from the Indian Institute of Technology, Madras in 1977 and MS and PhD (Nuclear Engineering and Science) degrees from Rensselaer Polytechnic Institute (RPI) in 1978 and 1982 respectively. He also holds an MBA (Business Administration) from RPI.
In 2008, he was judged guilty of research misconduct for "falsification of the research record" by a Purdue review board.
Sonofusion work and controversy
In 2002, while a senior scientist at ORNL, Taleyarkhan published a paper on fusion achieved by bombarding a container of liquid solvent with strong ultrasonic vibrations, a process known as sonofusion or bubble fusion. In theory, the vibrations collapsed gas bubbles in the solvent, heating them to temperatures high enough to fuse hydrogen atoms and release energy. Following his move from Oak Ridge to Purdue in 2003, Taleyarkhan published additional papers about his research in this area.
Numerous other scientists, however, were not able to replicate Taleyarkhan's work, including groups from the University of Göttingen, UCLA, the University of Illinois and former colleagues at Oak Ridge National Laboratory, who published their results in Physical Review Letters, as well as a study at the University of California funded by the Office of Naval Research.
Taleyarkhan's results were reportedly repeated by Edward Forringer of LeTourneau University in Taleyarkhan's own labs at Purdue in November 2006. At that time, Purdue decided not to investigate further the initial, narrowly defined charges of misconduct against Taleyarkhan made by other members of the Purdue faculty.
The Chronicle of Higher Education, however, has noted some problems with the verification. "During this time, Dr. Taleyarkhan says, two more scientists came into his laboratory and independently verified bubble fusion. Dr. Taleyarkhan contends that both were experts and did their work independently of him. But in interviews, both researchers contradict aspects of that account. One of those scientists, Edward R. Forringer, a professor of physics at LeTourneau University, in Texas, says he is certainly not an expert. Nonetheless, he says he is confident that his results do support the reality of bubble fusion."
On May 10, 2007, Purdue announced that they would add at least one scientist without ties to the university to a new inquiry of Taleyarkhan and his work, at the insistence of a Congressional panel investigating the use of federal funds in attempts to reproduce Taleyarkhan's results. The panel cited concerns that Taleyarkhan's claims of independent verification were "highly doubtful", and criticized Purdue for using three of the same members of an earlier inquiry committee in their recently completed review. Taleyarkhan called the report a "one-sided, grossly exaggerated write-up" but agreed to cooperate. On September 10, 2007, Purdue reported that its internal committee had determined that "several matters merit further investigation" and that they were re-opening formal proceedings.
This board judged him guilty of "research misconduct" for "falsification of the research record" in July 2008, and on August 27, 2008, his status as a member of the Purdue University Graduate Faculty was limited to that of 'Special Graduate Faculty'. He was permitted to serve on graduate committees, but would not be able to serve as a major professor or co-major professor for graduate students for a period of three years. From September 2008 to August 2009, Taleyarkhan received a $185,000 grant from the National Science Foundation to investigate bubble fusion. In 2009 the Office of Naval Research debarred him for 28 months, until September 2011, from receiving U.S. federal funding. During that period his name was listed in the 'Excluded Parties List' to prevent him from receiving grants from any government agency.
See also
List of scientific misconduct incidents
References
Further reading
External links
Living people
Year of birth missing (living people)
American people of Indian descent
American people of Parsi descent
Cold fusion
People involved in scientific misconduct incidents
Purdue University faculty
Rensselaer Polytechnic Institute alumni
American nuclear engineers
IIT Madras alumni | Rusi Taleyarkhan | [
"Physics",
"Chemistry"
] | 871 | [
"Nuclear fusion",
"Cold fusion",
"Nuclear physics"
] |
9,487,574 | https://en.wikipedia.org/wiki/Lynch%20motor | The Lynch motor is a unique axial gap permanent magnet brushed DC electric motor. The motor has a pancake-like shape and was invented by Cedric Lynch in 1979, the relevant patent being filed on 18 December 1986.
The Lynch motor is built from ferrite blocks sandwiched between strips of metal, instead of conventional copper coil windings, and is held together purely by magnets. The motor is available in several sizes from to weight and provides a power of to within an efficiency range of 80% to 90%.
History and development
From 1985 through 1992, the Lynch motor and its tooling were gradually improved by Lynch, with much input from Richard Fletcher and his project team, including William Read, at the London Innovation Network (LIN). LIN financed the patents, in Lynch's name. LIN also financed the construction of prototypes, including a batch made by Ouroussoff Engineering, which incorporated some ideas used in subsequent motors.
From 1989, LIN sought a company to manufacture the motor, having already successfully made small batches and individual units. These included the motors used in the Countess of Arran's world electric boat speed record attempt in 1989. Those motors were assembled by Lynch, with help from William Read, who assembled armatures in his hospital bed after a car accident (Motorboats Monthly, January 1990).
Hotax, where Trevor Lees worked, was approached but, after Lynch showed Hotax how to make the motor, it lost interest. Instead, Lees left Hotax in January 1993 and joined the Lynch Motor Company (part of LIN) as factory manager, to assist Lynch in setting up larger-scale production, which used previously developed tooling along with new tooling designed by Lynch. Some of the new tooling was made by a local toolmaker, Roger Cox.
Lynch and this small team engineered a production-standard motor, which was manufactured by the Lynch Motor Company Ltd and named the Lynch motor. Following a rift between Lynch and LIN and the Lynch Motor Company in November 1996, the intellectual property rights were held by Lynch IP, with the Lynch Motor Co having 50% of the rights. A new company, LEMCO, was formed to hold the other 50%; it consisted of the J P Hansen group as main shareholder, with Lynch and Lees as small shareholders. Lynch joined the Indian company Agni Motors in 2002, where the Lynch motor is built and marketed as the Agni motor. A further licensed design was made by Briggs & Stratton as the Etek DC motor. The latest model of the Agni motor is manufactured and distributed by Saietta Group, formed through the merger of Agility Global and Agni Motors in May 2015.
Description
The traditional Lynch motor design has a spinning armature held on a spindle between two banks of eight fixed permanent magnets. Also stationary are eight brushes (four negative, four positive) on the front side which allow electric current from the power source to reach the armature.
The design of the Lynch motor armature differs significantly from that of other motors. The armature coils are formed from insulated copper strips, each in a 'U' shape (like a tuning fork). One leg is bent 45 degrees clockwise, while the other is bent 45 degrees anticlockwise. Each coil leg contains several bends before reaching the outside of the armature, allowing it to pass radially through the ferrite ring, and the ends finish 90 degrees apart. At the outer edge, each copper strip has a crimp forming an electrical connection to its companion strip 90 degrees away. The inner edges of the copper strips have the insulation removed on the front face only, forming the commutator surface where the brushes make contact.
Between each copper coil leg are placed the pieces of the sub-divided and insulated iron ferrite cores making up the ferrite ring. The ferrite ring carries the magnetic flux between the fixed permanent magnets, without needing to use the copper strips (which carry electric current). As the armature spins, current flows from one brush into the commutator, then outwards along one copper coil leg, which is sandwiched between the iron ferrite core pieces. When the current reaches the connecting crimp at the outer edge, it transfers to a new leg on the rear side of the armature and runs back to the centre, again sandwiched between ferrites, 45 degrees out of phase with the previous ferrites. The current arrives back at the centre 90 degrees later, swaps sides back to the front face, and reaches the corresponding brush (of the opposite electrical polarity) 135 degrees from the initial brush.
In the design of the Lynch motor armature, the iron laminations are made from individual thin rectangular pieces slotted together to form a full circular ring. Because magnetic flux passes sideways through the laminations along one axis only, it is possible to use grain-oriented material normally used in large transformers. This has much better magnetic properties along the grain orientation but worse properties in other directions. In a traditional radial gap electric motor it cannot be easily aligned with the field direction, but in axial gap motors like the Lynch motor it leads to higher efficiencies.
Production
Small-scale production began in 1988 with the electric vehicle conversion firm London Innovation and continued later with the Lynch Electric Motor Company (LEMCO). In 1989 four of the motors powered the boat An Stradag, driven by the Countess of Arran, to a world record speed for an electric boat of just over . The motor was adopted by the Swiss company ASMO for use in its electric go-kart drive systems; its efficiency extends the life of the batteries and so improves the economics of running an electric kart track.
The patents and license rights for the manufacturing of the Lynch motor are held by the Lynch IP company, which has sold a license to Briggs and Stratton to manufacture the ETEK motor.
LEMCO continued to manufacture motors and trades under the name LMC (Lynch Motor Company), which now owns the Lynch IP company and therefore all rights and patents pertaining to the motor.
In 2009, Cedric Lynch parted company with LMC and now works for Agni Motors, which produces similar motors.
Lynch motors are mentioned as being a unique product in the documentary "The White Diamond" about a lighter than air ship: "This is actually an interesting motor. It is designed by somebody called Lynch in England. He never went to University and doesn't know any mathematics and stuff like that, but he taught himself electrical engineering. And it turns out the motor he made is one of the world's leading motors in terms of power and mass. Lynch developed his own kind of algebra to do that but no other academic can understand what he's doing, but he seems to know more than many academics in electrical engineering departments because this motor is of very good performance ... the best that I could find."
Lynch motors powered the world's first manned electric helicopter, designed and flown by Pascal Chretien. This unique helicopter set a Guinness World Record on 12 August 2011 and received the IDTechEx Electric Vehicles Land Sea & Air award in 2012.
Recent projects
2005 - ENV fuel cell motorcycle
Orange Juice electric dragster with a unique four wheel drive unit.
30 July 2008 - Four LMC-version Lynch motors were used in a modified G-Wiz by the show Top Gear to beat a Ford Mustang in a straight-line race to 100 mph.
12 June 2009, Rob Barber won the inaugural Manx TTxGP on an Agni Motors bike powered by an Agni motor, an improved version of the original Lynch motor.
12 August 2011, Pascal Chretien flew the world's first manned electric helicopter, the Solution F/Chretien helicopter, powered by two Agni 95R-series motors.
Patents
Germany number 69419528.6
Great Britain number 0884826
Japan number 3120083
Europe number 98114008.0 (CH, DE, FR, GB, LI, IT)
References
Electric motors | Lynch motor | [
"Technology",
"Engineering"
] | 1,657 | [
"Electrical engineering",
"Engines",
"Electric motors"
] |
9,487,577 | https://en.wikipedia.org/wiki/Surgical%20staple | Surgical staples are specialized staples used in surgery in place of sutures to close skin wounds or to resect and/or connect parts of an organ (e.g. bowels, stomach or lungs). The use of staples over sutures reduces the local inflammatory response, width of the wound, and time it takes to close a defect.
A more recent development, from the 1990s, uses clips instead of staples for some applications; this does not require the staple to penetrate.
History
The technique was pioneered by the "father of surgical stapling", Hungarian surgeon Hümér Hültl. Hültl's prototype stapler of 1908 weighed , and required two hours to assemble and load.
The technology was refined in the 1950s in the Soviet Union, allowing for the first commercially produced re-usable stapling devices for the creation of bowel and vascular anastomoses. Mark M. Ravitch brought back a sample stapling device after attending a surgical conference in the USSR, and introduced it to entrepreneur Leon C. Hirsch, who founded the United States Surgical Corporation in 1964 to manufacture surgical staplers under its Auto Suture brand. Until the late 1970s USSC had the market essentially to itself, but in 1977 Johnson & Johnson's Ethicon brand entered the market, and today both are widely used, along with competitors from the Far East. USSC was bought by Tyco Healthcare in 1998, which became Covidien on June 29, 2007.
The safety and patency of mechanical (stapled) bowel anastomoses have been widely studied. In such studies, sutured anastomoses are generally either comparable to or less prone to leakage than stapled ones. This may be the result of recent advances in suture technology, along with increasingly risk-conscious surgical practice. Certainly, modern synthetic sutures are more predictable and less prone to infection than catgut, silk and linen, which were the main suture materials in use up to the 1990s.
One key feature of intestinal staplers is that the edges of the stapler act as a haemostat, compressing the edges of the wound and closing blood vessels during the stapling process. Recent studies have shown that with current suturing techniques there is no significant difference in outcome between hand sutured and mechanical anastomoses (including clips), but mechanical anastomoses are significantly quicker to perform.
In patients subjected to pulmonary resections where lung tissue is sealed with staplers, there is often postoperative air leakage. Alternative techniques to seal lung tissue are currently being investigated.
Types and applications
The first commercial staplers were made of stainless steel with titanium staples loaded into reloadable staple cartridges.
Modern surgical staplers are either disposable and made of plastic, or reusable and made of stainless steel. Both types are generally loaded using disposable cartridges.
The staple line may be straight, curved or circular. Circular staplers are used for end-to-end anastomosis after bowel resection or, somewhat more controversially, in esophagogastric surgery. The instruments may be used in either open or laparoscopic surgery; different instruments are used for each application. Laparoscopic staplers are longer and thinner, and may be articulated to allow access from a restricted number of trocar ports.
Some staplers incorporate a knife, to complete excision and anastomosis in a single operation.
Staplers are used to close both internal and skin wounds. Skin staples are usually applied using a disposable stapler, and removed with a specialized staple remover. Staplers are also used in vertical banded gastroplasty surgery (popularly known as "stomach stapling").
While devices for circular end-to-end anastomosis of the digestive tract are widely used, circular staplers for vascular anastomosis have, in spite of intensive research, not yet had a significant impact on the standard hand (Carrel) suture technique. Apart from the different modality of coupling of vascular (everted) as opposed to digestive (inverted) stumps, the main reason could be that, particularly for small vessels, the manual skill and precision required just to position and actuate any device on the vascular stumps cannot be significantly less than that required to carry out the standard hand suture, making any device of little utility. An exception, however, could be organ transplantation, where these two phases, i.e. positioning the device at the vascular stumps and actuating it, can be carried out at different times, by different surgical teams, in safe conditions where the time required does not influence donor organ preservation: at the back table, under cold ischemia, for the donor organ, and after native organ removal in the recipient. The aim is to make the dangerous warm ischemia phase of the donor organ as brief as possible; it can be contained within the couple of minutes or less necessary just to connect the device's ends and actuate the stapler.
Although most surgical staples are made of titanium, stainless steel is more often used in some skin staples and clips. Titanium produces less reaction with the immune system and, being non-ferrous, does not interfere significantly with MRI scanners, although some imaging artifacts may result. Synthetic absorbable (bioabsorbable) staples are also now becoming available, based on polyglycolic acid, as with many synthetic absorbable sutures.
Removal of skin staples
Where skin staples are used to seal a skin wound it will be necessary to remove the staples after an appropriate healing period, usually between 5 and 10 days, depending on the location of the wound and other factors. The skin staple remover is a small manual device which consists of a shoe or plate that is sufficiently narrow and thin to insert under the skin staple. The active part is a small vertical blade that, when hand-pressure is exerted, pushes the staple down through a slot in the shoe, deforming the staple open into an 'M' shape to facilitate its removal. In an emergency, it is also possible to remove staples with a pair of artery forceps.
Skin staple removers are manufactured in many shapes and forms, some disposable and some reusable.
See also
Instruments used in general surgery
References
Surgical instruments
Fasteners
Hungarian inventions
1908 establishments in Hungary
1908 in science
1900s in medicine | Surgical staple | [
"Engineering"
] | 1,309 | [
"Construction",
"Fasteners"
] |
9,487,795 | https://en.wikipedia.org/wiki/Relativistic%20electron%20beam | Relativistic electron beams are streams of electrons moving at relativistic speeds. They are the lasing medium in free electron lasers to be used in atmospheric research conducted at entities such as the Pan-oceanic Environmental and Atmospheric Research Laboratory (PEARL) at the University of Hawaii and NASA. It has been suggested that relativistic electron beams could be used to heat and accelerate the reaction mass in electrical rocket engines that Dr. Robert W. Bussard called quiet electric-discharge engines (QEDs).
References
External links
PEARL Lab @ UHawaii
Applying REBs for the development of high-powered microwaves (HPM)
Quantum mechanics
Electron beam | Relativistic electron beam | [
"Physics",
"Chemistry"
] | 138 | [
"Electron",
"Electron beam",
"Theoretical physics",
"Quantum mechanics",
"Special relativity",
"Relativity stubs",
"Theory of relativity",
"Quantum physics stubs"
] |
9,487,872 | https://en.wikipedia.org/wiki/Beatty%20sequence | In mathematics, a Beatty sequence (or homogeneous Beatty sequence) is the sequence of integers found by taking the floor of the positive multiples of a positive irrational number. Beatty sequences are named after Samuel Beatty, who wrote about them in 1926.
Rayleigh's theorem, named after Lord Rayleigh, states that the complement of a Beatty sequence, consisting of the positive integers that are not in the sequence, is itself a Beatty sequence generated by a different irrational number.
Beatty sequences can also be used to generate Sturmian words.
Definition
Any irrational number r that is greater than one generates the Beatty sequence
B_r = ⌊r⌋, ⌊2r⌋, ⌊3r⌋, …
The two irrational numbers r and s = r/(r − 1) naturally satisfy the equation 1/r + 1/s = 1.
The two Beatty sequences B_r and B_s that they generate form a pair of complementary Beatty sequences. Here, "complementary" means that every positive integer belongs to exactly one of these two sequences.
Examples
When r is the golden ratio φ = (1 + √5)/2 ≈ 1.618, the complementary Beatty sequence is generated by s = φ + 1 = φ² ≈ 2.618. In this case, the sequence (⌊nφ⌋), known as the lower Wythoff sequence, is
1, 3, 4, 6, 8, 9, 11, 12, 14, 16, 17, 19, 21, 22, 24, 25, …,
and the complementary sequence (⌊nφ²⌋), the upper Wythoff sequence, is
2, 5, 7, 10, 13, 15, 18, 20, 23, 26, 28, 31, 34, 36, 39, 41, ….
These sequences define the optimal strategy for Wythoff's game, and are used in the definition of the Wythoff array.
As another example, for the square root of 2, r = √2 ≈ 1.414 and s = 2 + √2 ≈ 3.414. In this case, the sequences are
1, 2, 4, 5, 7, 8, 9, 11, 12, 14, 15, 16, 18, 19, 21, 22, … and
3, 6, 10, 13, 17, 20, 23, 27, 30, 34, 37, 40, 44, 47, ….
For r = π ≈ 3.142 and s = π/(π − 1) ≈ 1.467, the sequences are
3, 6, 9, 12, 15, 18, 21, 25, 28, 31, 34, 37, 40, 43, 47, … and
1, 2, 4, 5, 7, 8, 10, 11, 13, 14, 16, 17, 19, 20, 22, 23, 24, 26, ….
Any number in the first sequence is absent in the second, and vice versa.
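The complementarity is easy to verify numerically. The following sketch (an illustration added here, not part of the original article) uses exact integer arithmetic, since ⌊n√2⌋ = ⌊√(2n²)⌋ can be computed with math.isqrt, avoiding floating-point rounding:

```python
import math

def beatty_sqrt2(n: int) -> int:
    """floor(n * sqrt(2)), computed exactly as isqrt(2 * n^2)."""
    return math.isqrt(2 * n * n)

def beatty_complement(n: int) -> int:
    """floor(n * (2 + sqrt(2))) = 2n + floor(n * sqrt(2))."""
    return 2 * n + beatty_sqrt2(n)

N = 10_000
first = {beatty_sqrt2(n) for n in range(1, N)}
second = {beatty_complement(n) for n in range(1, N)}

assert not (first & second)                         # disjoint, and
assert set(range(1, max(first))) <= first | second  # no gaps below max(first)
```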
History
Beatty sequences got their name from the problem posed in The American Mathematical Monthly by Samuel Beatty in 1926. It is probably one of the most often cited problems ever posed in the Monthly. However, even earlier, in 1894 such sequences were briefly mentioned by Lord Rayleigh in the second edition of his book The Theory of Sound.
Rayleigh theorem
Rayleigh's theorem (also known as Beatty's theorem) states that given an irrational number r > 1, there exists s = r/(r − 1) so that the Beatty sequences B_r and B_s partition the set of positive integers: each positive integer belongs to exactly one of the two sequences.
First proof
Given r > 1, let s = r/(r − 1). We must show that every positive integer lies in one and only one of the two sequences B_r and B_s. We shall do so by considering the ordinal positions occupied by all the fractions j/r and k/s when they are jointly listed in nondecreasing order for positive integers j and k.
To see that no two of the numbers can occupy the same position (as a single number), suppose to the contrary that j/r = k/s for some j and k. Then r/s = j/k, a rational number, but also r/s = r(1 − 1/r) = r − 1, not a rational number. Therefore, no two of the numbers occupy the same position.
For any j/r, there are j positive integers i such that i/r ≤ j/r and ⌊js/r⌋ positive integers k such that k/s ≤ j/r, so that the position of j/r in the list is j + ⌊js/r⌋. The equation 1/r + 1/s = 1 implies s/r = s − 1 and hence
j + ⌊js/r⌋ = j + ⌊j(s − 1)⌋ = ⌊js⌋.
Likewise, the position of k/s in the list is ⌊kr⌋.
Conclusion: every positive integer (that is, every position in the list) is of the form ⌊nr⌋ or of the form ⌊ns⌋, but not both. The converse statement is also true: if p and q are two real numbers such that every positive integer occurs precisely once in the above list, then p and q are irrational and the sum of their reciprocals is 1.
Second proof
Collisions: Suppose that, contrary to the theorem, there are integers j > 0 and k and m such that
j = ⌊kr⌋ = ⌊ms⌋.
This is equivalent to the inequalities
j ≤ kr < j + 1 and j ≤ ms < j + 1.
For non-zero j, the irrationality of r and s is incompatible with equality, so
j < kr < j + 1 and j < ms < j + 1,
which leads to
j/r < k < (j + 1)/r and j/s < m < (j + 1)/s.
Adding these together and using the hypothesis 1/r + 1/s = 1, we get
j < k + m < j + 1,
which is impossible (one cannot have an integer between two adjacent integers). Thus the supposition must be false.
Anti-collisions: Suppose that, contrary to the theorem, there are integers j > 0 and k and m such that
kr < j and j + 1 ≤ (k + 1)r and ms < j and j + 1 ≤ (m + 1)s.
Since j + 1 is non-zero and r and s are irrational, we can exclude equality, so
kr < j and j + 1 < (k + 1)r and ms < j and j + 1 < (m + 1)s.
Then we get
k < j/r, (j + 1)/r < k + 1, m < j/s and (j + 1)/s < m + 1.
Adding corresponding inequalities, we get
k + m < j and j + 1 < k + m + 2,
so
j < k + m + 1 < j + 1,
which is also impossible. Thus the supposition is false.
Properties
A number m belongs to the Beatty sequence B_r if and only if
1 − 1/r < {m/r},
where {x} denotes the fractional part of x, i.e., {x} = x − ⌊x⌋.
Proof: m belongs to B_r exactly when m = ⌊nr⌋ for some positive integer n. Since r is irrational, this is equivalent to nr − 1 < m < nr, i.e., n − 1/r < m/r < n, which holds precisely when {m/r} > 1 − 1/r.
Furthermore, m = ⌊(⌊m/r⌋ + 1) r⌋.
Proof: with n as above, the inequality n − 1 < n − 1/r < m/r < n shows that ⌊m/r⌋ = n − 1, so n = ⌊m/r⌋ + 1 and m = ⌊nr⌋ = ⌊(⌊m/r⌋ + 1) r⌋.
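A direct numerical check of this membership test (again an illustrative sketch; floating point suffices for small m):

```python
import math

def in_beatty(m: int, r: float) -> bool:
    """Membership test for B_r: true iff {m/r} > 1 - 1/r."""
    fractional = math.modf(m / r)[0]
    return fractional > 1.0 - 1.0 / r

print([m for m in range(1, 23) if in_beatty(m, math.sqrt(2))])
# -> [1, 2, 4, 5, 7, 8, 9, 11, 12, 14, 15, 16, 18, 19, 21, 22]
```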
Relation with Sturmian sequences
The first difference
⌊(n + 1)r⌋ − ⌊nr⌋
of the Beatty sequence associated with the irrational number r is a characteristic Sturmian word over the alphabet {⌊r⌋, ⌊r⌋ + 1}.
Generalizations
If slightly modified, Rayleigh's theorem can be generalized to positive real numbers (not necessarily irrational) and to negative integers as well: if positive real numbers r and s satisfy 1/r + 1/s = 1, the sequences (⌊mr⌋) and (⌈ns⌉ − 1), with m and n ranging over the integers, form a partition of the integers. For example, the white and black keys of a piano keyboard are distributed as such sequences for r = 12/7 and s = 12/5.
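A quick computational check of the piano-key example (an added illustration; exact rational arithmetic via fractions avoids rounding):

```python
from fractions import Fraction
import math

r, s = Fraction(12, 7), Fraction(12, 5)
white = {math.floor(m * r) for m in range(1, 50)}     # 7 keys per 12 semitones
black = {math.ceil(n * s) - 1 for n in range(1, 50)}  # 5 keys per 12 semitones

assert not (white & black)                 # disjoint
assert set(range(1, 60)) <= white | black  # together they cover 1, 2, 3, ...
print(sorted(white)[:7])  # [1, 3, 5, 6, 8, 10, 12] -- one octave of white keys
```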
The Lambek–Moser theorem generalizes the Rayleigh theorem and shows that more general pairs of sequences defined from an integer function and its inverse have the same property of partitioning the integers.
Uspensky's theorem states that, if r₁, …, rₙ are positive real numbers such that the sequences (⌊kr₁⌋), …, (⌊krₙ⌋) (for k = 1, 2, 3, …) together contain all positive integers exactly once, then n ≤ 2. That is, there is no equivalent of Rayleigh's theorem for three or more Beatty sequences.
References
Further reading
Includes many references.
External links
Alexander Bogomolny, Beatty Sequences, Cut-the-knot
Integer sequences
Theorems in number theory
Diophantine approximation
Combinatorics on words
Articles containing proofs | Beatty sequence | [
"Mathematics"
] | 1,033 | [
"Sequences and series",
"Mathematical theorems",
"Integer sequences",
"Approximations",
"Mathematical structures",
"Recreational mathematics",
"Mathematical objects",
"Combinatorics",
"Theorems in number theory",
"Numbers",
"Mathematical relations",
"Mathematical problems",
"Articles contain... |
9,487,904 | https://en.wikipedia.org/wiki/Grape%20toxicity%20in%20dogs | The consumption of grapes and raisins presents a potential health threat to dogs. Their toxicity to dogs can cause the animal to develop acute kidney injury (the sudden development of kidney failure) with anuria (a lack of urine production). The phenomenon was first identified by the Animal Poison Control Center (APCC), run by the American Society for the Prevention of Cruelty to Animals (ASPCA). Approximately 140 cases were seen by the APCC in the one year from April 2003 to April 2004, with 50 developing symptoms and seven dying.
It is not clear that the observed cases of kidney failure following ingestion are due to grapes only. Clinical findings suggest raisin and grape ingestion can be fatal, but the mechanism of toxicity is still considered unknown.
Cause and pathology
The reason some dogs develop kidney failure following ingestion of grapes and raisins is not known. Types of grapes involved include both seedless and seeded, store-bought and homegrown, and grape pressings from wineries. A mycotoxin is suspected to be involved, but none has been found in grapes or raisins ingested by affected dogs. The dose-response relationship has not been determined, but one study estimated a toxic dose of 3 g/kg of body weight or greater for grapes or raisins (for example, about 60 g of grapes for a 20 kg dog). An April 2021 letter to the editor of JAVMA hypothesized that the tartaric acid in grapes could be the cause.
The most common pathological finding is proximal renal tubular necrosis. In some cases, an accumulation of an unidentified golden-brown pigment was found within renal epithelial cells.
Clinical signs and diagnosis
Vomiting and diarrhea are often the first clinical signs of grape or raisin toxicity. They often develop within a few hours of ingestion. Pieces of grapes or raisins may be present in the vomitus or stool. Further symptoms include weakness, not eating, increased drinking, and abdominal pain. Acute kidney failure develops within 48 hours of ingestion. A blood test may reveal increases in blood urea nitrogen (BUN), creatinine, phosphorus, and calcium.
Treatment
Emesis (induction of vomiting) is the generally recommended treatment if a dog has eaten grapes or raisins within the past two hours. A veterinarian may use an emetic such as apomorphine to cause the dog to vomit. Further treatment may involve the use of activated charcoal to adsorb remaining toxins in the gastrointestinal tract and intravenous fluid therapy in the first 48 hours following ingestion to induce diuresis and help to prevent acute kidney failure. Vomiting is treated with antiemetics and the stomach is protected from uremic gastritis (damage to the stomach from increased BUN) with H2 receptor antagonists. BUN, creatinine, calcium, phosphorus, sodium, and potassium levels are closely monitored. Dialysis of the blood (hemodialysis) and peritoneal dialysis can be used to support the kidneys if anuria develops. Oliguria (decreased urine production) can be treated with dopamine or furosemide to stimulate urine production.
The prognosis is guarded in any dog developing symptoms of toxicosis. A negative prognosis has been associated with oliguria or anuria, weakness, difficulty walking, and severe hypercalcemia (increased blood calcium levels). In cases where an animal is azotaemic, the survival rate is 50%.
References
Dog health
Grape
Veterinary toxicology | Grape toxicity in dogs | [
"Environmental_science"
] | 717 | [
"Veterinary toxicology",
"Toxicology"
] |
9,488,174 | https://en.wikipedia.org/wiki/QPPB | QoS Policy Propagation via BGP (QPPB) is a mechanism that allows propagation of quality of service (QoS) policy and classification by the sending party, based on access lists, community lists, and autonomous system paths in the Border Gateway Protocol (BGP), thus helping to classify traffic based on destination instead of source address.
See also
Computer network
Traffic engineering (telecommunications)
External links
ASR9000/XR: Implementing QOS policy propagation for BGP (QPPB)
Internet architecture | QPPB | [
"Technology"
] | 104 | [
"Computing stubs",
"Internet architecture",
"IT infrastructure",
"Computer network stubs"
] |
9,488,407 | https://en.wikipedia.org/wiki/Home%20server | A home server is a computing server located in a private residence providing services to other devices inside or outside the household through a home network or the Internet. Such services may include file and printer serving, media center serving, home automation control, web serving (on the network or Internet), web caching, file sharing and synchronization, video surveillance and digital video recording, calendar and contact sharing and synchronization, account authentication, and backup services. In recent times, it has become common to run many applications as containers, isolated from the host operating system.
Because of the relatively low number of computers on a typical home network, a home server commonly does not require significant computing power. Home servers can be implemented do-it-yourself style with a re-purposed older computer or a plug computer; pre-configured commercial home server appliances are also available. An uninterruptible power supply is sometimes used to protect against power outages that could corrupt data.
Services provided by home servers
Administration and configuration
Home servers often run headless, and can be administered remotely through a command shell, or graphically through a remote desktop system such as RDP, VNC, Webmin, Apple Remote Desktop, or many others.
Some home server operating systems (such as Windows Home Server) include a consumer-focused graphical user interface (GUI) for setup and configuration that is available on home computers on the home network (and remotely over the Internet via remote access). Others simply enable users to use native operating system tools for configuration.
Centralized storage
Home servers often act as network-attached storage (NAS) providing the major benefit that all users' files can be centrally and securely stored, with flexible permissions applied to them. Such files can be easily accessed from any other system on the network, provided the correct credentials are supplied. This also applies to shared printers.
Such files can also be shared over the Internet to be accessible from anywhere in the world using remote access.
Servers running Unix or Linux with the free Samba suite (or certain Windows Server products - Windows Home Server excluded) can provide domain control, custom logon scripts, and roaming profiles to users of certain versions of Windows. This allows a user to log on from any machine in the domain and have access to their "Documents" folder and personalized Windows and application preferences - multiple accounts on each computer in the home are not needed.
Media serving
Home servers are often used to serve multi-media content, including photos, music, and video to other devices in the household (and even to the Internet; see Space shifting, Tonido and Orb). Using standard protocols such as DLNA or proprietary systems such as iTunes, users can access their media stored on the home server from any room in the house. Windows XP Media Center Edition, Windows Vista, and Windows 7 can act as a home server, supporting a particular type of media serving that streams the interactive user experience to Media Center Extenders including the Xbox 360.
Windows Home Server supports media streaming to Xbox 360 and other DLNA-based media receivers via the built-in Windows Media Connect technology. Some Windows Home Server device manufacturers, such as HP, extend this functionality with a full DLNA implementation such as PacketVideo TwonkyMedia server.
There are many open-source and fully functional programs for media serving available for Linux. LinuxMCE is one example, which allows other devices to boot off a hard drive image on the server, allowing them to become appliances such as set-top boxes. Asterisk, Xine, MythTV (another media serving solution), VideoLAN, SlimServer, DLNA, and many other open-source projects are fully integrated for a seamless home theater/automation/telephony experience.
On an Apple Macintosh server, options include iTunes, PS3 Media Server, and Elgato. Additionally, for Macs directly connected to TVs, Boxee can act as a full-featured media center interface.
Servers are typically always on so the addition of a TV or radio tuner allows recording to be scheduled at any time.
These services such as Windows Home Server have become significantly less popular in favour of services such as Plex and Jellyfin. These services allow users to store their media on a NAS and stream and sometimes download it to devices within the network and optionally to devices outside the network. These services automatically sort users media and find metadata and sometimes subtitles. They also track and remember users progress within a movie or series so they can continue from where they left off.
Such services have been criticised for catering to piracy, as they make illegally obtained media easy to manage and view.
Remote access
A home server can be used to provide remote access into the home from devices on the Internet, using remote desktop software and other remote administration software. For example, Windows Home Server provides remote access to files stored on the home server via a web interface as well as remote access to Remote Desktop sessions on PCs in the house. Similarly, Tonido provides direct access via a web browser from the Internet without requiring any port forwarding or other setup. Some enthusiasts often use VPN technologies as well.
On a Linux server, two popular tools are (among many) VNC and Webmin. VNC allows clients to remotely view a server GUI desktop as if the user was physically sitting in front of the server. A GUI need not be running on the server console for this to occur; there can be multiple 'virtual' desktop environments open at the same time. Webmin allows users to control many aspects of server configuration and maintenance all from a simple web interface. Both can be configured to be accessed from anywhere on the Internet.
Servers can also be accessed remotely using the command line-based Telnet and SSH protocols.
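As an illustrative sketch of scripted SSH administration, the following uses the third-party Python library paramiko (installed with pip install paramiko); the hostname, username, and password are placeholders, not real defaults:

    # Minimal SSH administration sketch using the third-party "paramiko" library.
    # Hostname and credentials below are placeholders.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # trust-on-first-use; fine for a sketch
    client.connect("homeserver.local", username="admin", password="secret")

    # Run a command on the headless server and print its output.
    stdin, stdout, stderr = client.exec_command("uptime")
    print(stdout.read().decode())
    client.close()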
Web serving
Some users choose to run a web server in order to share files easily and publicly (or privately, on the home network). Others set up web pages and serve them straight from their home, although this may violate some ISPs' terms of service. Sometimes these web servers are run on a nonstandard port in order to avoid the ISP's port blocking. Example web servers used on home servers include Apache and IIS.
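As a minimal sketch, Python's standard library can serve the current directory over HTTP on a nonstandard port; 8080 here is an arbitrary choice, and the same effect is available from a shell as python -m http.server 8080:

    # Serve files from the current directory on a nonstandard port (8080 is arbitrary).
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    server = HTTPServer(("", 8080), SimpleHTTPRequestHandler)
    server.serve_forever()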
Web proxy
Some networks have an HTTP proxy, which can be used to speed up web access when multiple users visit the same websites, and to get past blocking software while the owner is using the network of an institution that blocks certain sites. Because public proxies are often slow and unreliable, it can be worth the trouble of setting up a private one.
A proxy can also be configured to block websites on the local network if it is set up as a transparent proxy.
E-mail
Many home servers also run e-mail servers that handle e-mail for the owner's domain name. The advantages include much larger mailboxes and a larger maximum message size than most commercial e-mail services offer. Because the server is on the local network, access to it is much faster than to an external service. This can also increase security, as e-mails do not reside on an off-site server.
BitTorrent
Home servers are well suited to the BitTorrent protocol for downloading and seeding files, as some torrents can take days or even weeks to complete and perform better on an uninterrupted connection. There are many text-based clients, such as rTorrent, and web-based ones, such as TorrentFlux and Tonido, available for this purpose. BitTorrent also makes it easier for those with limited bandwidth to distribute large files over the Internet.
Gopher
An unusual service is the Gopher protocol, a hypertext document retrieval protocol which pre-dated the World Wide Web and was popular in the early 1990s. Many of the remaining gopher servers are run on home servers using PyGopherd or the Bucktooth gopher server.
Home automation
Home automation frequently relies on continuously operational devices for effective control and management. While traditional home servers have been instrumental in this area, the Raspberry Pi and other single-board computers (SBCs) have become prominent, offering a flexible platform for running home automation software such as Gladys and Home Assistant. This shift towards SBC-based solutions has made home automation more accessible and cost-efficient, allowing a broader range of users to control and integrate smart home devices.
Security monitoring
Relatively low cost CCTV DVR solutions are available that allow recording of video cameras to a home server for security purposes. The video can then be viewed on PCs or other devices in the house.
A series of cheap USB-based webcams can be connected to a home server as a makeshift CCTV system. Optionally these images and video streams can be made available over the Internet using standard protocols.
Family applications
Home servers can act as a host to family-oriented applications such as a family calendar, to-do lists, and message boards.
IRC and instant messaging
Because a server is always on, an IRC client or IM client running on it will be highly available to the Internet. This way, the chat client can record activity that occurs even while the user is not at the computer, e.g. asleep or at work or school. Textual clients such as Irssi and tmsnc can be detached using, for example, GNU Screen, and graphical clients such as Pidgin can be detached using xmove. Quassel provides a specific version for this kind of use. Home servers can also be used to run personal XMPP servers and IRC servers, as these protocols can support a large number of users on very little bandwidth.
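As an always-on chat sketch, the minimal Python bot below logs channel messages over the raw IRC protocol. The server name, nick, and channel are placeholders; real IRC servers may require waiting for registration to complete before joining, and a production bot would also need reconnect logic:

    # Minimal always-on IRC logging sketch speaking the raw IRC protocol.
    # Server, nick, and channel below are placeholders, not real services.
    import socket

    sock = socket.create_connection(("irc.example.net", 6667))
    sock.sendall(b"NICK homelog\r\n")
    sock.sendall(b"USER homelog 0 * :home server logger\r\n")
    sock.sendall(b"JOIN #family\r\n")

    with open("irc.log", "a") as log:
        buffer = b""
        while True:
            data = sock.recv(4096)
            if not data:          # connection closed by the server
                break
            buffer += data
            while b"\r\n" in buffer:
                line, buffer = buffer.split(b"\r\n", 1)
                text = line.decode(errors="replace")
                if text.startswith("PING"):      # answer keepalives or get dropped
                    sock.sendall(("PONG" + text[4:] + "\r\n").encode())
                elif "PRIVMSG" in text:          # record chat for later reading
                    log.write(text + "\n")
                    log.flush()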
Online gaming
Some multiplayer games such as Continuum, Tremulous, Minecraft, and Doom have server software available which users may download and use to run their own private game server. Some of these servers are password protected, so only a selected group of people such as clan members or whitelisted players can gain access to the server. Others are open for public use and may move to colocation or other forms of paid hosting if they gain a large number of players.
Federated social networks
Home servers can be used to host distributed federated social networks like Diaspora and GNU Social. Federation protocols like ActivityPub allow many small home servers to interact in a meaningful way and give the perception of being on a large traditional social network. Federation is not just limited to social networks. Many innovative new free software web services are being developed that can allow people to host their own videos, photos, blogs etc. and still participate in the larger federated networks.
Third-party platform
Home servers often are platforms that enable third-party products to be built and added over time. For example, Windows Home Server provides a Software Development Kit. Similarly, Tonido provides an application platform that can be extended by writing new applications using their SDK.
Operating systems
Home servers run many different operating systems. Enthusiasts who build their own home servers can use whatever OS is conveniently available or familiar to them, such as Linux, Microsoft Windows, BSD, Solaris or Plan 9 from Bell Labs.
Hardware
Single-board computers are increasingly being used to power home servers, with many of them being ARM devices. Old desktop and laptop computers can also be re-purposed to be used as home servers.
Mobile phones are typically as powerful as ARM-based single-board computers. If mobile phones become able to run conventional Linux server software, self-hosting might move to mobile devices, with each person's data and services served from their own phone.
See also
Server definitions
Server (computing)
Network-attached storage (NAS)
File server
Print server
Media server
Operating systems
BSD UNIX
Hypervisor illumos distributions
Various Linux distributions
macOS Server
Solaris
Windows Home Server
Windows Server Essentials
Plan 9 from Bell Labs - a research successor to Unix
Products
HP MediaSmart Server
Technologies
Client–server model
Dynamic DNS
Home network
Residential gateway
Media serving software
Front Row - for Mac OS X
LinuxMCE
MythTV
Plex Media Server
Kodi
Jellyfin
Server software
Comparison of web servers
List of mail server software
List of FTP server software
Samba (software)
RealVNC
Tonido
Home networking
DOCSIS
G.hn
HomePNA
Power line communication, HomePlug Powerline Alliance
VDSL, VDSL2
Wireless LAN, IEEE 802.11
References
Server
Servers (computing) | Home server | [
"Technology"
] | 2,538 | [
"Computing and society",
"Personal computing"
] |
9,488,412 | https://en.wikipedia.org/wiki/Acyl-CoA | Acyl-CoA is a group of CoA-based coenzymes that metabolize carboxylic acids. Fatty acyl-CoAs are susceptible to beta oxidation, ultimately forming acetyl-CoA. The acetyl-CoA enters the citric acid cycle, eventually forming several equivalents of ATP. In this way, fats are converted to ATP, the common biochemical energy carrier.
Functions
Fatty acid activation
Fats are broken down by conversion to acyl-CoA. This conversion is one response to high energy demands such as exercise.
The oxidative degradation of fatty acids is a two-step process, catalyzed by acyl-CoA synthetase. Fatty acids are converted to their acyl phosphate, the precursor to acyl-CoA. The latter conversion is mediated by acyl-CoA synthetase:
acyl-P + HS-CoA → acyl-S-CoA + Pi + H+
Three types of acyl-CoA synthetase are employed, depending on the chain length of the fatty acid. For example, the substrates for medium-chain acyl-CoA synthetase are 4-11 carbon fatty acids. The enzyme acyl-CoA thioesterase hydrolyzes acyl-CoA to form a free fatty acid and coenzyme A.
Beta oxidation of acyl-CoA
The second step of fatty acid degradation is beta oxidation, which occurs in mitochondria. After formation in the cytosol, acyl-CoA is transported into the mitochondria, the location of beta oxidation. Transport of acyl-CoA into the mitochondria requires carnitine palmitoyltransferase 1 (CPT1), which converts acyl-CoA into acylcarnitine, which is transported into the mitochondrial matrix. Once in the matrix, acylcarnitine is converted back to acyl-CoA by CPT2. Beta oxidation can then begin.
Beta oxidation of acyl-CoA occurs in four steps.
1. Acyl-CoA dehydrogenase catalyzes dehydrogenation of the acyl-CoA, creating a double bond between the alpha and beta carbons. FAD is the hydrogen acceptor, yielding FADH2.
2. Enoyl-CoA hydrase catalyzes the addition of water across the newly formed double bond to make an alcohol.
3. 3-hydroxyacyl-CoA dehydrogenase oxidizes the alcohol group to a ketone. NADH is produced from NAD+.
4. Thiolase cleaves between the alpha carbon and ketone to release one molecule of Acetyl-CoA and the Acyl-CoA which is now 2 carbons shorter.
This four-step process repeats until the entire chain has been cleaved into acetyl-CoA units. Each cycle of beta oxidation produces one molecule of acetyl-CoA, one FADH2, and one NADH. Acetyl-CoA is then used in the citric acid cycle, while FADH2 and NADH are sent to the electron transport chain. These intermediates all end up providing energy for the body, as they are ultimately converted to ATP.
Beta oxidation, as well as alpha-oxidation, also occurs in the peroxisome. The peroxisome handles beta oxidation of fatty acids that have more than 20 carbons in their chain because the peroxisome contains very-long-chain Acyl-CoA synthetases. These enzymes are better equipped to oxidize Acyl-CoA with long chains that the mitochondria cannot handle.
Example using stearic acid
Beta oxidation removes 2 carbons at a time, so the oxidation of an 18-carbon fatty acid such as stearic acid requires 8 cycles to completely break down the acyl-CoA. This produces 9 acetyl-CoA (2 carbons each), 8 FADH2, and 8 NADH.
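The same counting works for any even-numbered saturated chain; here is a small sketch that reproduces the figures above (it deliberately ignores odd-chain and unsaturated special cases):

    # Beta-oxidation yields for an even-numbered, saturated fatty acyl-CoA chain,
    # following the counts in the text (cycles = carbons/2 - 1).
    def beta_oxidation_yields(carbons: int):
        assert carbons >= 4 and carbons % 2 == 0, "sketch covers even chains only"
        cycles = carbons // 2 - 1          # each cycle removes 2 carbons
        return {
            "cycles": cycles,
            "acetyl_CoA": carbons // 2,    # one per cycle, plus the final 2-carbon stub
            "FADH2": cycles,
            "NADH": cycles,
        }

    print(beta_oxidation_yields(18))   # stearic acid: 8 cycles, 9 acetyl-CoA, 8 FADH2, 8 NADH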
Clinical significance
Heart muscle primarily metabolizes fat for energy and Acyl-CoA metabolism has been identified as a critical molecule in early stage heart muscle pump failure.
Cellular acyl-CoA content correlates with insulin resistance, suggesting that it can mediate lipotoxicity in non-adipose tissues. Acyl-CoA:diacylglycerol acyltransferase (DGAT) plays an important role in energy metabolism because it is a key enzyme in triglyceride biosynthesis. The synthetic role of DGAT is comparatively clear in tissues such as the liver and the intestine, sites where endogenous levels of its activity and of triglyceride synthesis are high. Changes in its activity levels might also cause changes in systemic insulin sensitivity and energy homeostasis.
A rare disease called multiple acyl-CoA dehydrogenase deficiency (MADD) is a fatty acid metabolism disorder. Acyl-CoA is central here because it is the activated form in which fatty acids are metabolized, and in MADD the dehydrogenases that act on acyl-CoA are deficient. The compromised fatty acid oxidation leads to many different symptoms, including severe ones such as cardiomyopathy and liver disease and milder ones such as episodic metabolic decompensation, muscle weakness, and respiratory failure. MADD is a genetic disorder caused by mutations in the ETFA, ETFB, and ETFDH genes. It is an autosomal recessive disorder: to have it, one must receive the recessive gene from both parents.
See also
Acetyl-CoA
Beta oxidation
Coenzyme A
Acyl CoA dehydrogenase
Fatty acid metabolism
Fatty acyl-CoA esters
References
External links
Metabolism
Thioesters of coenzyme A | Acyl-CoA | [
"Chemistry",
"Biology"
] | 1,206 | [
"Biochemistry",
"Metabolism",
"Cellular processes"
] |
9,488,462 | https://en.wikipedia.org/wiki/Informal%20organization | The informal organization is the interlocking social structure that governs how people work together in practice. It is the aggregate of norms, personal and professional connections through which work gets done and relationships are built among people who share a common organizational affiliation or cluster of affiliations. It consists of a dynamic set of personal relationships, social networks, communities of common interest, and emotional sources of motivation. The informal organization evolves, and so do the complex social dynamics of its members.
Tended effectively, the informal organization complements the more explicit structures, plans, and processes of the formal organization: it can accelerate and enhance responses to unanticipated events, foster innovation, enable people to solve problems that require collaboration across boundaries, and create footpaths showing where the formal organization may someday need to pave a way.
The informal organization and the formal organization
The nature of the informal organization becomes more distinct when its key characteristics are juxtaposed with those of the formal organization.
Key characteristics of the informal organization:
evolving constantly
grass roots
dynamic and responsive
excellent at motivation
requires insider knowledge to be seen
treats people as individuals
flat and fluid
cohered by trust and reciprocity
difficult to pin down
collective decision making
essential for situations that change quickly or are not yet fully understood
Key characteristics of the formal organization:
enduring, unless deliberately altered
top-down
missionary
static
excellent at alignment
plain to see
equates "person" with "role"
hierarchical
bound together by codified rules and order
easily understood and explained
critical for dealing with situations that are known and consistent
Historically, some have regarded the informal organization as the byproduct of insufficient formal organization—arguing, for example, that "it can hardly be questioned that the ideal situation in the business organization would be one where no informal organization existed." However, the contemporary approach—one suggested as early as 1925 by Mary Parker Follett, the pioneer of community centers and author of influential works on management philosophy—is to integrate the informal organization and the formal organization, recognizing the strengths and limitations of each. Integration, as Follett defined it, means breaking down apparent sources of conflict into their basic elements and then building new solutions that neither allow domination nor require compromise. In other words, integrating the informal organization with the formal organization replaces competition with coherence.
At a societal level, the importance of the relationship between formal and informal structures can be seen in the relationship between civil society and state authority. The power of integrating the formal organization and the informal organization can also be seen in many successful businesses.
Functions
Keith Davis suggests that informal groups serve at least four major functions within the formal organizational structure.
Perpetuate the cultural and social values
They perpetuate the cultural and social values that the group holds dear. Certain values are usually already held in common among informal group members. Day-to-day interaction reinforces these values that perpetuate a particular lifestyle and preserve group unity and integrity. For example, a college management class of 50 students may contain several informal groups that constitute the informal organization within the formal structure of the class. These groups may develop out of fraternity or sorority relationships, dorm residency, project work teams, or seating arrangements. Dress codes, hairstyles, and political party involvement are reinforced among the group members.
Provide social status and satisfaction
They provide social status and satisfaction that may not be obtained from the formal organization. In a large organization (or classroom), a worker (or student) may feel like an anonymous number rather than a unique individual. Members of informal groups, however, share jokes and gripes, eat together, play and work together, and are friends, which contributes to personal esteem, satisfaction, and a feeling of worth.
Promote communication among members
The informal group develops a communication channel or system (i.e., grapevine) to keep its members informed about what management actions will affect them in various ways. Many astute managers use the grapevine to "informally" convey certain information about company actions and rumors.
Provide social control
They provide social control by influencing and regulating behavior inside and outside the group. Internal control persuades members of the group to conform to its lifestyle. For example, if a student starts to wear a coat and tie to class, informal group members may razz and convince the student that such attire is not acceptable and therefore to return to sandals, jeans, and T-shirts. External control is directed to such groups as management, union leadership, and other informal groups.
Disadvantages
Informal organizations also possess the following potential disadvantages and problems that require astute and careful management attention.
Resistance to change
Perpetuation of values and lifestyle causes informal groups to become overly protective of their "culture" and therefore resist change. For example, if restriction of output was the norm in an autocratic management group, it must continue to be so, even though management changes have brought about a more participative administration. This culture makes employees more rigid.
Role conflict
The quest for informal group satisfaction may lead members away from formal organizational objectives. What is good for and desired by informal group members is not always good for the organization. Doubling the number of coffee breaks and the length of the lunch period may be desirable for group members but costly and unprofitable for the firm. Employees' desire to fulfill the requirements and services of both the informal group and management results in role conflict. Role conflict can be reduced by carefully attempting to integrate interests, goals, methods, and evaluation systems of both the informal and formal organizations, resulting in greater productivity and satisfaction on everyone's behalf.
Rumor
The grapevine dispenses truth and rumor with equal vengeance. Ill-informed employees communicate unverified and untrue information that can have a devastating effect on employees. This can undermine morale, establish bad attitudes, and often result in deviant or even violent behavior. For example, a student who flunks an exam can start a rumor that a professor is making sexually harassing advances toward one of the students in class. This can create all sorts of ill feelings toward the professor and even result in vengeful acts like "egging" the residence or knocking over the mailbox.
Conformity
Social control promotes and encourages conformity among informal group members, thereby making them reluctant to act too aggressively or perform at too high a level. This can harm the formal organization by stifling initiative, creativity, and diversity of performance. In some British factories, if a group member gets "out of line", tools may be hidden, air may be let out of tires, and other group members may refuse to talk to the deviant for days or weeks. These types of actions can force a good worker to leave the organization.
Benefits
Although informal organizations create unique challenges and potential problems for management, they also provide a number of benefits for the formal organization.
Blend with formal system
Formal plans, policies, procedures, and standards cannot solve every problem in a dynamic organization; therefore, informal systems must blend with formal ones to get work done. As early as 1951, Robert Dubin recognized that "informal relations in the organization serve to preserve the organization from the self-destruction that would result from literal obedience to the formal policies, rules, regulations, and procedures". No college or university could function merely by everyone following the "letter of the law" with respect to written policies and procedures. Faculty, staff, and student informal groups must cooperate in fulfilling the "spirit of the law" to effectuate an organized, sensibly run enterprise.
Lighten management workload
Managers are less inclined to check up on workers when they know the informal organization is cooperating with them. This encourages delegation, decentralization, and greater worker support of the manager, which suggests a probable improvement in performance and overall productivity. When a professor perceives that students are conscientiously working on their term papers and group projects, there are likely to be fewer "pop tests" or important progress reports. This eases the professor's load and that of the students and promotes a better relationship between both parties.
Fill gaps in management abilities
For instance, if a manager is weak in financial planning and analysis, a subordinate may informally assist in preparing reports through either suggestions or direct involvement.
Act as a safety valve
Employees experience frustration, tension, and emotional problems with management and other employees. The informal group provides a means for relieving these emotional and psychological pressures by allowing a person to discuss them among friends openly and candidly. In faculty lounge conversations, frustrations with the dean, department head, or students are "blown off" among empathetic colleagues.
Encourage improved management practice
Perhaps a subtle benefit of informal groups is that they encourage managers to prepare, plan, organize, and control in a more professional fashion. Managers who comprehend the power of the informal organization recognize that it is a "check and balance" on their use of authority. Changes and projects are introduced with more careful thought and consideration, knowing that the informal organization can easily kill a poorly planned project.
Understanding and dealing with the environmental crisis
The IRG Solution: hierarchical incompetence and how to overcome it (1984) argued that central media and government-type hierarchical organizations could not adequately understand the environmental crisis we were manufacturing, or how to initiate adequate solutions. It argued that what was required was the widespread introduction of informal networks, or Information Routing Groups, which were essentially a description of social networking services prior to the Internet.
Business approaches
Rapid growth. Starbucks, which grew from 100 employees to over 100,000 in just over a decade, provides structures to support improvisation. In a July 1998 Fast Company article on rapid growth, Starbucks chairman Howard Schultz said, "You can't grow if you're driven only by process, or only by the creative spirit. You've got to achieve a fragile balance between the two sides of the corporate brain."
Learning organization. Following a four-year study of the Toyota Production System, Steven J. Spear and H. Kent Bowen concluded in Harvard Business Review that the legendary flexibility of Toyota's operations is due to the way the scientific method is ingrained in its workers – not through formal training or manuals (the production system has never been written down) but through unwritten principles that govern how workers work, interact, construct, and learn.
Idea generation. Texas Instruments credits its "Lunatic Fringe"—"an informal and amorphous group of TI engineers (and their peers and contacts outside the company)," according to Fortune Magazine—for its recent successes. "There's this continuum between total chaos and total order," Gene Frantz, the hub of this informal network, explained to Fortune. "About 95% of the people in TI are total order, and I thank God for them every day, because they create the products that allow me to spend money. I'm down here in total chaos, that total chaos of innovation. As a company we recognize the difference between those two and encourage both to occur."
Related concepts
Organizational behavior; organizational structure; organizational communication
Community; community of practice; knowledge management
social network; value network; social Web
social network analysis; social network
References
Further reading
Reingold, Jennifer and Yang, Jia Lynn. "Hidden Workplace" Fortune, July 23, 2007
Creating an Informal Learning Organization." Harvard Management Update, (July 1, 2000).
Myths About Informal Networks—and How to Overcome Them." SMR (MIT Sloan Management Review), April 1, 2002
Cross, Rob and Laurence Prusak, "The People Who Make Organizations Go—or Stop." Harvard Business Review, June 1, 2002.
Goldsmith, Marshall and Jon Katzenbach, "Navigating the 'Informal' Organization." BusinessWeek, February 14, 2007
Krackhardt, David and Jeffry R. Hanson, "Informal Networks: The Company Behind the Chart." Harvard Business Review, July 1, 1993.
Follett, Mary Parker, "The Psychological Foundations of Business Administration." Paper presented before a Bureau of Personnel Administration conference group, January 1925. Reprinted in Dynamic Administration: The Collected Papers of Mary Parker Follett, edited by Henry C. Metcalf and Lyndall Urwick, in The Early Sociology of Management and Organizations, Volume III. Kenneth Thompson, series editor. Routledge, 2003.
"The Office Chart That Really Counts." BusinessWeek, February 27, 2006
Murray, Sarah, "Putting the House In Order." The Financial Times, November 8, 2006]
Shaw, Helen, "Not So Small, Still Beautiful." CFO.com, March 3, 2006
Organizational behavior
Types of organization
Sociological terminology | Informal organization | [
"Biology"
] | 2,547 | [
"Behavior",
"Organizational behavior",
"Human behavior"
] |
9,488,727 | https://en.wikipedia.org/wiki/Cut-off%20%28electronics%29 | In electronics, cut-off is a state of negligible conduction that is a property of several types of electronic components when a control parameter is lowered or increased past a value known as the conduction threshold. The control parameter is usually a well-defined voltage or electric current, but it could also be an incident light intensity or a magnetic field. The transition from normal conduction to cut-off can be more or less sharp, depending on the type of device considered, and the speed of this transition also varies considerably.
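As a minimal illustration of the threshold behaviour just described, the sketch below checks a control voltage against rough forward-voltage figures of the kind listed in the next section; the numbers are illustrative approximations, not device specifications:

    # Toy threshold model of cut-off: the device conducts only while the
    # control parameter exceeds its conduction threshold. Values are rough
    # illustrative forward-voltage figures, not device specifications.
    APPROX_THRESHOLD_V = {
        "silicon diode": 0.7,
        "germanium diode": 0.3,
        "schottky diode": 0.3,   # midpoint of a 0.10-0.45 V spread
    }

    def is_cut_off(device: str, control_voltage: float) -> bool:
        return control_voltage < APPROX_THRESHOLD_V[device]

    print(is_cut_off("silicon diode", 0.5))    # True: negligible conduction
    print(is_cut_off("germanium diode", 0.5))  # False: conducting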
Cutoff values
Diodes
Copper oxide diode: usually between those of germanium and silicon diodes (0.2–0.5 V)
Diac: depends on configuration.
Germanium diode: approx. 0.3 V, varying with temperature.
Schottky diode: 0.10–0.45 V, varying with temperature.
Selenium diode: depends on age and current; usually higher than silicon diodes.
Silicon diode: cutoff occurs when Vf falls below approx. 0.7 V. The exact voltage varies with temperature.
Thermionic diode: cutoff voltage depends on device design. Much higher than for silicon devices.
Zener diode: reverse cutoff defined by diode voltage rating. Forward cutoff approx. 0.6 V.
Transistors
BJT: Depends on the configuration.
Germanium transistor: approx. 0.2 V, varying with temperature.
MOSFET: Depends on the configuration.
Silicon transistor: approx. 0.6 V, varying with temperature.
TRIAC: Also depends on the configuration.
Valves
Triodes: triodes cut off when applied grid bias is too low. This will be a negative voltage under ordinary conditions.
Tetrode, pentode etc.: There is some degree of interaction between the grids, and values will vary from one device to another. Anode voltage also affects cutoff voltage.
Prolonged periods in cut-off leads to cathode poisoning.
Remote cutoff
A vacuum tube (such as a pentode, but also sometimes triodes, hexodes, heptodes and so on) with its control grid given a helix with a variable pitch can be made to operate with more negative grid voltages, with reduced amplification, before it is completely cut off (i.e. yielding no significant output). This ability to vary the amplification (sometimes called mu) and also the transconductance, is useful in Automatic Gain Control (AGC) stages of radio receivers. Devices with this characteristic are called remote-cutoff or variable-mu or super-control types.
Sharp cutoff
With a normal control grid arrangement, a vacuum tube will have close to a square-law relationship between input (grid) voltage and output (anode/plate) current, with the latter falling sharply to roughly zero. This characteristic is normally required for linear RF and audio uses. Examples: EF86 and 6AK5.
Semi-remote cutoff
A semi-remote cutoff device has characteristics somewhere between a remote-cutoff device and a sharp-cutoff one.
See also
Diode
Electrical conduction
Electronic component
Field effect transistor in JFET and MOSFET form
Transistor
Vacuum tube
References
External links
Explanation of sharp-cutoff control grids in vacuum tubes.
Explanation of remote-cutoff control grids in vacuum tubes.
Electrical parameters | Cut-off (electronics) | [
"Engineering"
] | 702 | [
"Electrical engineering",
"Electrical parameters"
] |
9,488,884 | https://en.wikipedia.org/wiki/Potentiator | In clinical terms, a potentiator is a reagent that enhances sensitization of an antigen. Potentiators are used in the clinical laboratory for performing blood banking procedures that require enhancement of agglutination to detect the presence of antibodies or antigens in a patient's blood sample. Examples of potentiators include albumin, LISS (low ionic-strength saline) and PEG (polyethylene glycol). Potentiators are also known as enhancement reagents.
Albumin acts as a potentiator by reducing the zeta potential around the suspended red blood cells, thus dispersing the repulsive negative charges and enhancing agglutination. Low ionic strength saline (LISS) is a potentiator that acts by not only reducing the zeta potential, but also by increasing the amount of antibody taken up by the red blood cell during sensitization. LISS is a solution of glycine and albumin. Polyethylene glycol (PEG) in a LISS solution removes water from the system and thus concentrates the antibodies present. PEG can cause non-specific aggregation of cells, thus eliminating the necessity for centrifugation after incubation. PEG is not appropriate for use in samples from patients with increased plasma protein, such as patients with multiple myeloma. False-positive results may occur more frequently with the use of polyethylene glycol due to its strong agglutination capabilities.
Pharmacology
In clinical pharmacology, a potentiator is a drug, herb, or chemical that intensifies the effects of a given drug. For example, hydroxyzine or dextromethorphan is used to get more pain relief and anxiolysis out of an equal dose of an opioid medication. The potentiation can take place at any part of the liberation, absorption, distribution, metabolism and elimination of the drug.
References
Reagents for organic chemistry
Chemical reactions | Potentiator | [
"Chemistry"
] | 407 | [
"nan",
"Reagents for organic chemistry"
] |
9,489,565 | https://en.wikipedia.org/wiki/Cardington%20test | The Cardington Fire Tests were a series of large-scale fire tests conducted in real structures (wood, steel-concrete composite, and concrete) at the BRE Cardington facility near Cardington, Bedfordshire, England, during the mid-1990s. After the tests, extensive computational and analytical studies of the behaviour of steel-framed composite structures in fire conditions were carried out by, among others, the University of Edinburgh, Sheffield University, and Imperial College London.
The results were presented in the form of a main report, which identified the main findings, together with numerous supplementary reports exploring various phenomena in detail.
References
Fire protection
Building engineering
Firefighting
Fire prevention | Cardington test | [
"Engineering"
] | 132 | [
"Building engineering",
"Fire protection",
"Civil engineering",
"Architecture"
] |
9,489,914 | https://en.wikipedia.org/wiki/Nagel%20point | In geometry, the Nagel point (named for Christian Heinrich von Nagel) is a triangle center, one of the points associated with a given triangle whose definition does not depend on the placement or scale of the triangle. It is the point of concurrency of all three of the triangle's splitters.
Construction
Given a triangle ABC, let T_A, T_B, T_C be the extouch points in which the A-excircle meets line BC, the B-excircle meets line CA, and the C-excircle meets line AB, respectively. The lines AT_A, BT_B, CT_C concur in the Nagel point N of triangle ABC.
Another construction of the point T_A is to start at A and trace around triangle ABC half its perimeter, and similarly for T_B and T_C. Because of this construction, the Nagel point is sometimes also called the bisected perimeter point, and the segments AT_A, BT_B, CT_C are called the triangle's splitters.
There exists an easy construction of the Nagel point. Starting from each vertex of a triangle, it suffices to carry twice the length of the opposite edge. We obtain three lines which concur at the Nagel point.
Relation to other triangle centers
The Nagel point is the isotomic conjugate of the Gergonne point. The Nagel point, the centroid, and the incenter are collinear on a line called the Nagel line. The incenter is the Nagel point of the medial triangle; equivalently, the Nagel point is the incenter of the anticomplementary triangle. The isogonal conjugate of the Nagel point is the point of concurrency of the lines joining the mixtilinear touchpoint and the opposite vertex.
Barycentric coordinates
The un-normalized barycentric coordinates of the Nagel point are (s − a : s − b : s − c), where s = (a + b + c)/2 is the semi-perimeter of the reference triangle ABC.
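Because the barycentric weights (s − a), (s − b), (s − c) sum to s, the Nagel point can be computed directly from vertex coordinates; a small sketch:

    # Compute the Nagel point from vertex coordinates via its barycentric
    # coordinates (s - a : s - b : s - c); the weights sum to the semiperimeter s.
    from math import dist

    def nagel_point(A, B, C):
        a, b, c = dist(B, C), dist(C, A), dist(A, B)  # side lengths opposite each vertex
        s = (a + b + c) / 2                           # semiperimeter
        wA, wB, wC = s - a, s - b, s - c              # barycentric weights; wA + wB + wC = s
        x = (wA * A[0] + wB * B[0] + wC * C[0]) / s
        y = (wA * A[1] + wB * B[1] + wC * C[1]) / s
        return (x, y)

    print(nagel_point((0, 0), (4, 0), (0, 3)))  # 3-4-5 right triangle: (2.0, 1.0)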
Trilinear coordinates
The trilinear coordinates of the Nagel point are csc²(A/2) : csc²(B/2) : csc²(C/2), or, equivalently, in terms of the side lengths, (b + c − a)/a : (c + a − b)/b : (a + b − c)/c.
History
The Nagel point is named after Christian Heinrich von Nagel, a nineteenth-century German mathematician, who wrote about it in 1836.
Early contributions to the study of this point were also made by August Leopold Crelle and Carl Gustav Jacob Jacobi.
See also
Mandart inellipse
Trisected perimeter point
References
External links
Nagel Point from Cut-the-knot
Nagel Point, Clark Kimberling
Spieker Conic and generalization of Nagel line at Dynamic Geometry Sketches Generalizes Spieker circle and associated Nagel line.
Triangle centers
fr:Cercles inscrit et exinscrits d'un triangle#Point de Nagel | Nagel point | [
"Physics",
"Mathematics"
] | 527 | [
"Point (geometry)",
"Triangle centers",
"Points defined for a triangle",
"Geometric centers",
"Symmetry"
] |
9,490,726 | https://en.wikipedia.org/wiki/EtherSound | EtherSound is an audio-over-Ethernet technology for audio engineering and broadcast engineering applications. EtherSound is developed and licensed by Digigram.
EtherSound is intended by the developer to be compliant with IEEE 802.3 Ethernet standards. Just as the IEEE defines rates such as 100 Megabit and Gigabit Ethernet standards, EtherSound has been developed as both ES-100 (for use on dedicated 100 Megabit Ethernet networks or within a Gigabit network as a VLAN) and ES-Giga (for use on dedicated Gigabit Ethernet networks). The two versions of EtherSound are not compatible.
Network technology
While EtherSound is compliant with the IEEE 802.3 physical layer standards, logically it uses a token-passing scheme for transporting audio data, which prevents all of its features from being used on a standard Ethernet network. On a standard network, it is only able to distribute audio and control data one way. It is not designed to share Ethernet LANs with typical office data or Internet traffic such as e-mail. It supports two-way communications only when wired in a daisy-chain topology. For this reason EtherSound is best used in applications suited to a daisy-chain network topology or in live sound applications that benefit from its low point-to-point latency.
Low latency
Low latency is important for many users of audio-over-Ethernet technologies. EtherSound can deliver up to 64 channels of 48 kHz, 24-bit PCM audio data with a network latency of 125 microseconds. If A/D and D/A conversions are included, the total latency is about 1.5 milliseconds, most of it caused by the converters. Each device in a daisy-chain network adds 1.4 microseconds of latency.
EtherSound's network latency is stable and deterministic; the delay between any two devices on an EtherSound network can be calculated.
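Using only the figures quoted in this section (125 microseconds of network latency plus 1.4 microseconds per daisy-chained device, and roughly 1.5 milliseconds total once converters are included), the end-to-end delay is easy to estimate; a quick sketch:

    # Estimate EtherSound daisy-chain latency from the figures quoted above:
    # 125 microseconds base network latency plus 1.4 microseconds per device.
    def ethersound_latency_us(devices: int, include_converters: bool = False) -> float:
        latency = 125.0 + 1.4 * devices
        if include_converters:
            latency += 1375.0   # rough converter overhead implied by the ~1.5 ms total figure
        return latency

    print(ethersound_latency_us(10))                            # 139.0 microseconds
    print(ethersound_latency_us(10, include_converters=True))   # about 1.5 milliseconds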
EtherSound Licensees
The following companies have licensed the EtherSound technology.
Allen & Heath
Amadeus
Apex Audio
Archean Technologies
Audio Performance
AuviTran
AuxTran
Barix
Bittner Audio
Bouyer
CAMCO Audio
Crest
DiGiCo
Digigram
Focusrite
Fostex
Innovason
Klein + Hummel
LabX technologies
Martin Audio
Mediachip
Nexo
Peavey Electronics
Pinanson
QSC
Richmond Sound Design
Studer
VTG Audio
Whirlwind
Yamaha Corporation
Notes
References
External links
EtherSound website
Audio engineering
Audio network protocols
Ethernet | EtherSound | [
"Engineering"
] | 515 | [
"Electrical engineering",
"Audio engineering"
] |
9,491,290 | https://en.wikipedia.org/wiki/Poshlib | Posh is a software framework used in cross-platform software development. It was created by Brian Hook. It is BSD licensed; the most recent release is version 1.3.002.
The Posh software framework provides a header file and an optional C source file.
Posh does not provide alternatives where a host platform does not offer a feature, but instead reports through preprocessor macros what is supported and what is not. It sets macros to assist in compiling with various compilers (such as GCC, MSVC and OpenWatcom) and with different host endiannesses. In its simplest form, only a single header file is required. The optional C source file provides functions for byte swapping and in-memory serialisation/deserialisation.
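The portability problem those helpers address can be illustrated in any language; the Python sketch below shows fixed-endianness serialisation with the standard struct module, and is an analogy for the idea, not POSH's actual API:

    # The problem POSH-style helpers solve: the same integer has a different
    # in-memory byte order on little- vs big-endian hosts, so portable code
    # serialises to an explicit, fixed byte order. Illustrative only; this is
    # Python's struct module, not POSH's API.
    import struct
    import sys

    value = 0x12345678
    print(sys.byteorder)                      # host endianness: 'little' or 'big'
    print(struct.pack("<I", value).hex())     # little-endian wire format: 78563412
    print(struct.pack(">I", value).hex())     # big-endian wire format:    12345678

    # Deserialise from the agreed wire format regardless of host endianness.
    (restored,) = struct.unpack("<I", struct.pack("<I", value))
    assert restored == value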
Brian Hook also created SAL (Simple Audio Library) that utilises Posh. Both are featured in his book "Write Portable Code". Posh is also used in Ferret and Vega Strike.
See also
libslack
Simple DirectMedia Layer (SDL)
References
External links
Poshlib - Official website (username: guest, password: guest123)
POSH: The Portable Open Source Harness - Doxygen documentation
Simple Audio Library
poshlib - A GitHub repository
Computer libraries | Poshlib | [
"Technology"
] | 264 | [
"IT infrastructure",
"Computer libraries"
] |
9,491,409 | https://en.wikipedia.org/wiki/Department%20of%20Biochemistry%2C%20University%20of%20Oxford | The Department of Biochemistry of Oxford University is located in the Science Area in Oxford, England. It is one of the largest biochemistry departments in Europe. The Biochemistry Department is part of the University of Oxford's Medical Sciences Division, the largest of the university's four academic divisions, which has been ranked first in the world for biomedicine.
History
The Department of Biochemistry at Oxford University began as the physiological chemistry section of the Physiology Department, and acquired its own separate department and building in the 1920s. In 1920, Benjamin Moore was elected to the position of the Whitley Professor of Biochemistry, the newly established Chair of Biochemistry at Oxford University. He was followed by Rudolph Peters in 1923, and an endowment of £75,000 was soon granted by the Rockefeller Foundation for the construction of a new departmental building, purchase of equipment, and its maintenance. The Biochemistry Department building opened in 1927.
In 1954, Hans Krebs was appointed the Whitley Chair of Biochemistry, and his appointment brought greater prominence to the department. He brought with him the Medical Research Council unit established to conduct research on cell metabolism. In 1955, a second professorship in the department, the Iveagh Chair of Microbiology, was established with funding from Guinness and the sub-department of Microbiology created, with Donald Woods its first holder. The eight-storey Hans Krebs Building was constructed in 1964 with funds from the Rockefeller Foundation. Krebs was succeeded by Rodney Porter in 1967. Genetics was brought into the Biochemistry Department when Walter Bodmer was appointed the first Professor of Genetics in 1970. The Laboratory of Molecular Biophysics, first established in the Zoology Department with support from Krebs and also linked to the Physical Chemistry Laboratory of the Chemistry Department, became part of the Biochemistry Department. It moved into the Rex Richards building built in 1984, with David Phillips the Professor in Molecular Biophysics. The Oxford Glycobiology Institute, headed by Raymond Dwek and housed in the Rodney Porter Building, opened in 1991.
The department is now part of the Medical Sciences Division of Oxford University, under the Divisional Boards formed in 2000. In 2006, two older biochemistry buildings were demolished, followed by two more, including the Hans Krebs Tower, in 2014, to make way for the two-phase construction of the New Biochemistry Building. Francis Barr, the EP Abraham Professor of Mechanistic Cell Biology, is the head of the Biochemistry Department, replacing Mark Sansom, the David Phillips Professor in Molecular Biophysics, in January 2019.
Research
The department is sub-divided into the following research areas:
Cell Biology, Development and Genetics
Chromosomal and RNA Biology
Infection and Disease Processes
Microbiology and Systems Biology
Structural Biology and Molecular Biophysics
Academic staff
There are around 400 research staff, with about 50 independent principal investigators who lead research groups that may range from a few people to forty or more. Members of other departments also contribute to teaching, including lecturers in physiology, pathology, pharmacology, clinical biochemistry and zoology. The department hosts the Oxford University Biochemical Society, a graduate student association that invites speakers to the University of Oxford. The head of department is Professor Francis Barr. Other members of the academic staff include Judy Armitage, Elspeth Garman, Jonathan Hodgkin, Kim Nasmyth, Neil Brockdorff, Rob Klose and Alison Woollard.
Buildings
The department currently has two main buildings:
The Dorothy Crowfoot Hodgkin building
The Rex Richards building (housing the NMR facility in the basement)
Until 2006, two older buildings housing genetics (the Walter Bodmer building) and biochemistry (the Rudolph Peters building) were also part of the department. However, these were demolished in 2006 to make way for the first phase of the construction of the New Biochemistry building, completed in October 2008. Until 2008 biochemistry also occupied the Donald Woods building and the Hans Krebs Tower, which were demolished in 2014 for the second phase of the construction. The New Biochemistry building was renamed Dorothy Crowfoot Hodgkin building in 2022. Until 2022 biochemistry also occupied the Rodney Porter building (Oxford Glycobiology Institute).
The New Biochemistry building houses interdisciplinary research in the biosciences, including physiology, chemistry, biochemistry, and clinical neurosciences. The department moved into the purpose-built new biochemistry building during the autumn of 2008 which was designed to promote interaction and collaboration as well as provide facilities for all staff. The New Biochemistry building houses a substantial amount of contemporary art.
Former departmental buildings
References
External links
Department of Biochemistry website
Map of the Science Area (see buildings 2–7)
Saltbridges website
Medical Sciences Division website
University of Oxford website
Oxford University Biochemical Society
Biochemistry research institutes
Biological research institutes in the United Kingdom
Biology education in the United Kingdom
Biochemistry
Research institutes in Oxford | Department of Biochemistry, University of Oxford | [
"Chemistry"
] | 964 | [
"Biochemistry research institutes",
"Biochemistry organizations"
] |
9,491,889 | https://en.wikipedia.org/wiki/Neurotensin%20receptor | Neurotensin receptors are transmembrane receptors that bind the neurotransmitter neurotensin. Two of the receptors, encoded by the NTSR1 and NTSR2 genes, contain seven transmembrane helices and are G protein coupled. Numerous crystal structures have been reported for the neurotensin receptor 1 (NTS1). The third receptor has a single transmembrane domain and is encoded by the SORT1 gene.
Ligands
Agonists
Peptide
Beta-lactotensin (NTS2)
JMV-449
Neurotensin
Neuromedin N (NTS1 selective)
PD-149,163 (NTS1 selective, reduced amide bond 8-13 fragment of neurotensin)
Non-peptide
NTS1 full agonist SRI-9829
Partial agonists derived from SR-48692
Antagonists
Levocabastine (NTS2 selective, also H1 histamine antagonist)
SR-48692 (NTS1 selective)
SR-142948 (unselective, CAS# 184162-64-9)
Biophysical Investigation
Unusually for GPCRs, NTS1 can be expressed in an active form in the bacterium Escherichia coli. It can be purified and analysed in vitro, and has been studied by a number of biophysical techniques such as surface plasmon resonance, FRET and cryo-electron microscopy.
Furthermore, high-resolution crystal structures of NTS1 have been determined in complex with the peptide full agonist NT8-13, the non-peptide full agonist SRI-9829, the partial agonist RTI-3a, and the antagonists/inverse agonists SR-48692 and SR-142948, as well as in the ligand-free apo state.
References
External links
G protein-coupled receptors | Neurotensin receptor | [
"Chemistry"
] | 385 | [
"G protein-coupled receptors",
"Signal transduction"
] |
9,492,012 | https://en.wikipedia.org/wiki/Galanin%20receptor | The galanin receptor is a G protein-coupled receptor, or metabotropic receptor which binds galanin.
Galanin receptors can be found throughout the peripheral and central nervous systems and the endocrine system. So far three subtypes are known to exist: GAL-R1, GAL-R2, and GAL-R3. The specific function of each subtype remains to be fully elucidated, although as of 2009 progress was being made in this respect with the generation of receptor subtype-specific knockout mice and the first selective ligands for galanin receptor subtypes. Selective galanin agonists are anticonvulsant, while antagonists produce antidepressant and anxiolytic effects in animals, so either agonist or antagonist ligands for the galanin receptors may be potentially therapeutic compounds in humans.
Ligands
Agonists
Non-selective
Galanin
Galanin 1-15 fragment
Galanin-like peptide - agonist at GAL1 and GAL2 but not GAL3
Galmic
Galnon
NAX 5055
D-Gal(7-Ahp)-B2
GAL1 selective
M617
GAL1/2 selective
M1154 - has no GalR3 interaction
GAL2 selective
Galanin 2-11 amide - also called AR-M 1896, anticonvulsant in mice, CAS# 367518-31-8
M1145 - selective compared to both GalR1 and GalR3
M1153 - selective compared to both GalR1 and GalR3
CYM 2503 (positive allosteric modulator)
Antagonists
Non-selective
M35 peptide
GAL1 selective
SCH-202,596
GAL2 selective
M871 peptide
GAL3 selective
SNAP-37889
SNAP-398,299
References
External links
G protein-coupled receptors | Galanin receptor | [
"Chemistry"
] | 379 | [
"G protein-coupled receptors",
"Signal transduction"
] |
9,492,439 | https://en.wikipedia.org/wiki/Differential%20graded%20category | In mathematics, especially homological algebra, a differential graded category, often shortened to dg-category or DG category, is a category whose morphism sets are endowed with the additional structure of a differential graded ℤ-module.
In detail, this means that Hom(A, B), the morphisms from any object A to another object B of the category, is a direct sum
Hom(A, B) = ⊕_n Hom_n(A, B),
and there is a differential d on this graded group, i.e., for each n there is a linear map
d : Hom_n(A, B) → Hom_{n+1}(A, B),
which has to satisfy d ∘ d = 0. This is equivalent to saying that Hom(A, B) is a cochain complex. Furthermore, the composition of morphisms
Hom(A, B) ⊗ Hom(B, C) → Hom(A, C)
is required to be a map of complexes, and for all objects A of the category, one requires d(id_A) = 0.
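Spelled out on homogeneous components, the requirement that composition be a map of complexes is a graded Leibniz rule; as a sketch, for composable morphisms f and g with f homogeneous of degree |f| (sign conventions vary between references):

    d(f \circ g) \;=\; d(f) \circ g \;+\; (-1)^{|f|}\, f \circ d(g).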
Examples
Any additive category may be considered to be a DG-category by imposing the trivial grading (i.e. all Hom_n(A, B) vanish for n ≠ 0) and the trivial differential (d = 0).
A little bit more sophisticated is the category of complexes C(𝒜) over an additive category 𝒜. By definition, Hom_n(A, B) is the group of maps of degree n which do not need to respect the differentials of the complexes A and B, i.e.,
Hom_n(A, B) = ∏_m Hom(A_m, B_{m+n}).
The differential of such a morphism f of degree n is defined to be
d(f) = d_B ∘ f − (−1)^n f ∘ d_A,
where d_A and d_B are the differentials of A and B, respectively. This applies to the category of complexes of quasi-coherent sheaves on a scheme over a ring.
A DG-category with one object is the same as a DG-ring. A DG-ring over a field is called DG-algebra, or differential graded algebra.
Further properties
The category of small dg-categories can be endowed with a model category structure such that weak equivalences are those functors that induce an equivalence of derived categories.
Given a dg-category C over some ring R, there is a notion of smoothness and properness of C that reduces to the usual notions of smooth and proper morphisms in case C is the category of quasi-coherent sheaves on some scheme X over R.
Relation to triangulated categories
A DG category C is called pre-triangulated if it has a suspension functor Σ and a class of distinguished triangles compatible with the suspension, such that its homotopy category Ho(C) is a triangulated category.
A triangulated category T is said to have a dg enhancement C if C is a pretriangulated dg category whose homotopy category is equivalent to T. dg enhancements of an exact functor between triangulated categories are defined similarly. In general, there need not exist dg enhancements of triangulated categories or of functors between them; for example, the stable homotopy category can be shown not to arise from a dg category in this way. However, various positive results do exist; for example, the derived category D(A) of a Grothendieck abelian category A admits a unique dg enhancement.
See also
Differential algebra
Graded (mathematics)
Graded category
Derivator
References
External links
dg-category in nLab
Homological algebra
Categories in category theory | Differential graded category | [
"Mathematics"
] | 619 | [
"Mathematical structures",
"Fields of abstract algebra",
"Category theory",
"Categories in category theory",
"Homological algebra"
] |
9,493,560 | https://en.wikipedia.org/wiki/G%C3%B6mb%C3%B6c | A gömböc is any member of a class of convex, three-dimensional and homogeneous bodies that are mono-monostatic, meaning that they have just one stable and one unstable point of equilibrium when resting on a flat surface. The existence of this class was conjectured by the Russian mathematician Vladimir Arnold in 1995 and proven in 2006 by the Hungarian scientists Gábor Domokos and Péter Várkonyi by constructing at first a mathematical example and subsequently a physical example.
The gömböc's shape helped to explain the body structure of some tortoises and their ability to return to an equilibrium position after being placed upside down. Copies of the first physically constructed example of a gömböc have been donated to institutions and museums, and the largest one was presented at the World Expo 2010 in Shanghai, China.
Name
If analyzed quantitatively in terms of flatness and thickness, the discovered mono-monostatic bodies are the most sphere-like, apart from the sphere itself. Because of this, they were given the name gömböc, a diminutive form of gömb ("sphere" in Hungarian).
History
In geometry, a body with a single stable resting position is called monostatic, and the term mono-monostatic has been coined to describe a body which additionally has only one unstable point of balance (the previously known monostatic polyhedron does not qualify, as it has several unstable equilibria). A sphere weighted so that its center of mass is shifted from the geometrical center is mono-monostatic. However, it is inhomogeneous; its material density varies across its body. Another example of an inhomogeneous mono-monostatic body is the Comeback Kid, Weeble or roly-poly toy. At equilibrium, the center of mass and the contact point are on the line perpendicular to the ground. When the toy is pushed, its center of mass rises and shifts away from that line. This produces a righting moment, which returns the toy to its equilibrium position.
The above examples of mono-monostatic objects are inhomogeneous. The question of whether it is possible to construct a three-dimensional body which is mono-monostatic but also homogeneous and convex was raised by Russian mathematician Vladimir Arnold in 1995. Being convex is essential as it is trivial to construct a mono-monostatic non-convex body: an example would be a ball with a cavity inside it. It was already well known, from a geometrical and topological generalization of the classical four-vertex theorem, that a plane curve has at least four extrema of curvature, specifically, at least two local maxima and at least two local minima, meaning that a (convex) mono-monostatic object does not exist in two dimensions. Whereas a common expectation was that a three-dimensional body should have at least four extrema, Arnold conjectured that this number could be smaller.
Mathematical solution
The problem was solved in 2006 by Gábor Domokos and Péter Várkonyi. Domokos met Arnold in 1995 at a major mathematics conference in Hamburg, where Arnold presented a plenary talk illustrating that most geometrical problems have four solutions or extremal points. In a personal discussion, however, Arnold questioned whether four is a requirement for mono-monostatic bodies and encouraged Domokos to seek examples with fewer equilibria.
The rigorous proof of the solution can be found in the references of their work. The summary of the results is that the three-dimensional homogeneous convex (mono-monostatic) body, which has one stable and one unstable equilibrium point, does exist and is not unique. Its form is dissimilar to any typical representative of the other equilibrium geometrical classes. It must have minimal "flatness" and, to avoid having two unstable equilibria, must also have minimal "thinness". Such bodies are the only non-degenerate objects having simultaneously minimal flatness and thinness. Their shape tolerates only small variation, outside which the body is no longer mono-monostatic. For example, the first solution of Domokos and Várkonyi closely resembled a sphere, with a shape deviation of only 10⁻⁵; it was dismissed because it would have been very hard to test experimentally. The first physically produced example is less sensitive, yet it still has a shape tolerance of 10⁻³, that is, 0.1 mm for a 10 cm size.
Domokos developed a classification system for shapes based on their points of equilibrium by analyzing pebbles and noting their equilibrium points. In one experiment, Domokos and his wife tested 2000 pebbles collected on the beaches of the Greek island of Rhodes and found not a single mono-monostatic body among them, illustrating the difficulty of finding or constructing such a body.
A gömböc's unstable equilibrium position is obtained by rotating the figure 180° about a horizontal axis. Theoretically, it will rest there, but the smallest perturbation will bring it back to the stable point. All gömböcs have sphere-like properties. In particular, their flatness and thinness are minimal, and they are the only type of nondegenerate object with this property. Domokos and Várkonyi are interested in finding a polyhedral solution with a surface consisting of a minimal number of flat planes. There is a prize for anyone who finds the respective minimal numbers F, E, and V of faces, edges and vertices for such a polyhedron, which amounts to $10,000 divided by the number C = F + E + V − 2, called the mechanical complexity of mono-monostatic polyhedra. It has been proved that one can approximate a curvilinear mono-monostatic shape with a finite number of discrete surfaces; however, the authors estimate that it would take thousands of planes to achieve that. By offering this prize, they hope to stimulate finding a radically different solution from their own.
Relation to animals
The balancing properties of gömböcs are associated with the "righting response" — the ability to turn back when placed upside down — of shelled animals such as tortoises and beetles. These animals may become flipped over in a fight or predator attack, so the righting response is crucial for survival. To right themselves, relatively flat animals (such as beetles) heavily rely on momentum and thrust developed by moving their limbs and wings. However, the limbs of many dome-shaped tortoises are too short to be used for righting.
Domokos and Várkonyi spent a year measuring tortoises in the Budapest Zoo, the Hungarian Museum of Natural History and various pet shops in Budapest, digitizing and analyzing their shells, and attempting to "explain" their body shapes and functions from their geometry. Their work was published by the biology journal Proceedings of the Royal Society and was immediately popularized in several science news reports, including those of the science journals Nature and Science. The reported model can be summarized as follows: flat shells in tortoises are advantageous for swimming and digging, but their sharp edges hinder rolling. Such tortoises usually have long legs and necks and actively use them to push against the ground to return to the normal position if placed upside down. By contrast, "rounder" tortoises easily roll on their own; these have shorter limbs and use them little when recovering from lost balance (some limb movement is always needed because of imperfect shell shape, ground conditions, etc.). Round shells also resist the crushing jaws of a predator better and are better for thermal regulation.
Art
On June 7, 2012, RocketJump released "Video Game High School (VGHS) - S1: Ep. 4" on YouTube. It features a gömböc at 7:40 into the video.
In the fall of 2020, the Korzo Theatre in The Hague and the Theatre Municipal in Biarritz presented the solo dance production "Gömböc" by French choreographer Antonin Comestaz.
A 2021 solo exhibition by conceptual artist Ryan Gander revolved around the theme of self-righting and featured seven large gömböc shapes gradually covered by black volcanic sand.
Media
For their discovery, Domokos and Várkonyi were decorated with the Knight's Cross of the Republic of Hungary. The New York Times Magazine selected the gömböc as one of the 70 most interesting ideas of the year 2007.
The Stamp News website shows Hungary's new stamps issued on 30 April 2010, illustrating a gömböc in different positions. The stamp booklets are arranged so that the gömböc appears to come to life when the booklet is flipped. The stamps were issued in association with the gömböc on display at the World Expo 2010 (1 May to 31 October). This was also covered by Linn's Stamp News magazine.
See also
Flatness measures
Instability
Monostatic polytope
Self-righting watercraft
References
External links
Non-technical description of development, with short video
Expo 2010 presentation of a gömböc shape, with photos
2006 in science
2006 introductions
2006 in Hungary
Euclidean solid geometry
Science and technology in Hungary
Statics
Hungarian inventions
Volume | Gömböc | [
"Physics",
"Mathematics"
] | 1,855 | [
"Scalar physical quantities",
"Statics",
"Physical quantities",
"Euclidean solid geometry",
"Quantity",
"Classical mechanics",
"Size",
"Extensive quantities",
"Spacetime",
"Space",
"Volume",
"Wikipedia categories named after physical quantities"
] |
9,493,613 | https://en.wikipedia.org/wiki/Sieving%20coefficient | In mass transfer, the sieving coefficient is a measure of equilibration between the concentrations of two mass transfer streams. It is defined as the mean pre- and post-contact concentration of the mass-receiving stream divided by the mean pre- and post-contact concentration of the mass-donating stream:

S = Cr / Cd

where
S is the sieving coefficient,
Cr is the mean concentration of the mass-receiving stream, and
Cd is the mean concentration of the mass-donating stream.
A sieving coefficient of unity implies that the concentrations of the receiving and donating streams equilibrate, i.e. the out-flow concentrations (post-mass transfer) of the mass-donating and mass-receiving streams are equal to one another. Systems with sieving coefficients greater than one require an external energy source, as they would otherwise violate the laws of thermodynamics.
Sieving coefficients less than one represent a mass transfer process where the concentrations have not equilibrated.
Contact time between mass streams is important to consider in mass transfer, and it affects the sieving coefficient.
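A minimal sketch of the computation, with hypothetical stream concentrations and function names of our choosing (all concentrations in consistent units):

```python
def sieving_coefficient(cr_in: float, cr_out: float,
                        cd_in: float, cd_out: float) -> float:
    """S = Cr / Cd, using the mean of the pre- and post-contact
    concentrations of the receiving (Cr) and donating (Cd) streams."""
    c_r = (cr_in + cr_out) / 2.0
    c_d = (cd_in + cd_out) / 2.0
    return c_r / c_d

# Hypothetical example: the donating stream falls from 10 to 6 mg/L while
# the receiving stream rises from 0 to 4 mg/L.
print(sieving_coefficient(0.0, 4.0, 10.0, 6.0))  # 0.25, far from equilibration
```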
In kidney
In renal physiology, the glomerular sieving coefficient (GSC) can be expressed as:
sieving coefficient = clearance / ultrafiltration rate
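As a quick worked example with purely hypothetical numbers: a solute cleared at 90 mL/min under an ultrafiltration rate of 100 mL/min would have a glomerular sieving coefficient of 90/100 = 0.9.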
See also
Heat exchanger
Condenser pinch point
Sieve
References
Transport phenomena
Chemical engineering
Mechanical engineering | Sieving coefficient | [
"Physics",
"Chemistry",
"Engineering"
] | 258 | [
"Transport phenomena",
"Physical phenomena",
"Applied and interdisciplinary physics",
"Chemical engineering",
"nan",
"Mechanical engineering"
] |
9,493,857 | https://en.wikipedia.org/wiki/Snake%20Eater%20%28identification%20system%29 | Snake Eater is a military identification system and database developed by Computer Deductions, Inc. for the United States Army. The system allows military personnel to track and identify terrorists and insurgents in much the same way that mobile data terminals are used by police officers for criminals.
Development began in late 2006 after being suggested by Major Owen West, a Marine Corps officer serving in the Anbar Province of Iraq.
Snake Eater gives military personnel access to a database including names, addresses, known associates, and other pieces of information. This information, previously collected in homemade spreadsheets or on pieces of paper, can now be accessed and expanded through hand-held devices.
Funding for the project was originally provided by the Spirit of America project, a civilian organization that advocates support of U.S. troops abroad. Spirit of America provided $30,000 for the development of a prototype model, with Goldman Sachs contributing another $14,000. The system uses technology previously released by Cross Match Technologies and Knowledge Computing Corporation of Arizona.
External links
Snake Eater (A Wall Street Journal opinion piece that explains the origin of the system)
Cross Match Technologies
United States in the Iraq War
Anbar campaign (2003–2011)
United States Army projects
Automatic identification and data capture
Terrorism databases
2000s software | Snake Eater (identification system) | [
"Technology"
] | 252 | [
"Data",
"Automatic identification and data capture"
] |
9,493,992 | https://en.wikipedia.org/wiki/List%20of%20ATSC%20standards | Below are the published ATSC standards for ATSC digital television service, issued by the Advanced Television Systems Committee.
A/49: Ghost Canceling Reference Signal for NTSC (for adjacent-channel interference or co-channel interference with analog NTSC stations nearby)
A/52B: audio data compression (Dolby AC-3 and E-AC-3)
A/53E: "ATSC Digital Television Standard" (the primary document governing the standard)
A/55: "Program Guide for Digital Television" (now deprecated in favor of A/65 PSIP)
A/56: "System Information for Digital Television" (now deprecated in favor of A/65 PSIP)
A/57A: "Content Identification and Labeling for ATSC Transport" (for assigning a unique digital number to each episode of each TV show, to assist DVRs)
A/63: "Standard for Coding 25/50 Hz Video" (for use with PAL and SECAM-originated programming)
A/64A "Transmission Measurement and Compliance for Digital Television"
A/65C: "Program and System Information Protocol for Terrestrial Broadcast and Cable" (PSIP includes virtual channels, electronic program guides, and content ratings; a short illustrative sketch of the virtual channel concept follows this list)
A/68: "PSIP Standard for Taiwan" (defines use of Chinese characters via Unicode 3.0)
A/69: recommended practices for implementing PSIP at a TV station
A/70A: "Conditional Access System for Terrestrial Broadcast"
A/71: "ATSC Parameterized Services Standard"
A/72: "Video System Characteristics of AVC in the ATSC Digital Television System" (implementing H.264/MPEG-4 as well as MVC for 3D television)
A/76: "Programming Metadata Communication Protocol" (XML-based PMCP maintains PSIP metadata though a TV station's airchain)
A/79: "Conversion of ATSC Signals for Distribution to NTSC Viewers" (recommended practice, issued February 2009)
A/80: "Modulation and Coding Requirements for Digital TV (DTV) Applications Over Satellite" (ATSC-S)
A/81: "Direct-to-Home Satellite Broadcast Standard" (not yet implemented by any services)
A/82: "Automatic Transmitter Power Control (ATPC) Data Return Link (DRL) Standard"
A/85: "Techniques for Establishing and Maintaining Audio Loudness for Digital Television"
A/90: "Data Broadcast Standard" (for datacasting)
A/92: "Delivery of IP Multicast Sessions over Data Broadcast Standard" (for IP multicasting)
A/93: "Synchronized/Asynchronous Trigger Standard"
A/94: "ATSC Data Application Reference Model"
A/95: "Transport Stream File System Standard" (TSFS is a special file system for downloading computer files)
A/96: "ATSC Interaction Channel Protocols" (interactive TV)
A/97: "Software Data Download Service" (used by UpdateTV for upgrades and software patches in ATSC tuners)
A/98: "System Renewability Message Transport"
A/99: "Carriage Of Legacy TV Data Services" (for former analog supplemental services that used the vertical blanking interval lines, such as closed captioning and teletext)
A/100: "DTV Application Software Environment - Level 1" (DASE-1)
A/101: "Advanced Common Application Platform" (ACAP)
A/103:2014: "Non-Real-Time Delivery"
A/104: "ATSC 3D-TV Terrestrial Broadcasting"
A/105:2015: "Interactive Services Standard"
A/106:2015: "ATSC Security and Service Protection Standard"
A/107:2015: "ATSC 2.0 Standard"
A/110A: "Synchronization Standard for Distributed Transmission" (single-frequency networks)
A/112: E-VSB (Enhanced Vestigial Sideband)
A/153: ATSC-M/H
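The A/65 PSIP standard listed above introduces virtual channels, which decouple the channel number viewers see from the physical RF channel. Below is a simplified, illustrative Python model of one virtual-channel entry; the field names are ours for readability and do not reproduce the exact A/65 table syntax:

```python
from dataclasses import dataclass

@dataclass
class VirtualChannel:
    short_name: str      # station short name, e.g. "WXYZ" (hypothetical)
    major_number: int    # channel number viewers see, e.g. 8
    minor_number: int    # subchannel number, e.g. 1
    program_number: int  # MPEG-2 program this channel maps to

    def display_number(self) -> str:
        # PSIP lets a station keep legacy branding such as "8.1" even when
        # its physical RF channel is different (e.g. 36 after the DTV move).
        return f"{self.major_number}.{self.minor_number}"

# Two subchannels carried in one physical transport stream:
channels = [VirtualChannel("WXYZ", 8, 1, 3), VirtualChannel("WXYZ", 8, 2, 4)]
print([c.display_number() for c in channels])  # ['8.1', '8.2']
```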
In 2004, the main ATSC standard was amended to support Enhanced ATSC (A/112); this transmission mode is backward-compatible with the original 8VSB (eight-level vestigial sideband) modulation scheme, but provides much better error correction.
ATSC-M/H for mobile TV has been approved and added by some stations. It uses MPEG-4 instead of MPEG-2 for encoding and behaves as an MPEG-4-encoded subchannel, inheriting 8VSB from the remainder of the channel.
ATSC 3.0
ATSC 3.0 is a non-backward-compatible version of ATSC, under development as of May 18, 2016, that uses OFDM instead of 8VSB and a much newer video codec in place of the MPEG-2 used by ATSC 1.0 and 2.0.
On March 28, 2016, the Bootstrap component of ATSC 3.0 (System Discovery and Signalling) was upgraded from candidate standard to finalized standard.
On May 4, 2016, the Audio Codec component of ATSC 3.0 was elevated to candidate standard, with two finalists remaining: Dolby AC-4 and MPEG-H Audio Alliance format from Fraunhofer IIS, Qualcomm and Technicolor SA. A third entry from DTS named DTS:X (a successor to DTS-HD) was withdrawn before the standard was upgraded to candidate status.
On September 8, 2016, the Physical Layer Protocol (OFDM) component of ATSC 3.0 was upgraded from candidate standard to finalized standard.
On October 5, 2016, the Link Layer Protocol Standard (A/330) was elevated from Candidate to final standard, along with the Audio Watermark Emission Standard (A/334) and Video Watermark Emission Standard (A/335). ATSC Technology Group 3 (TG3) members have also begun voting on elevating the following Candidate Standards to Proposed Standard status (the final step before becoming an approved standard): Service Announcement (A/322), Service Usage Reporting (A/333) and Captions and Subtitles (A/343). TG3 members also are voting to elevate Security (A/360) to Candidate Standard status, joining Schedule and Studio-to-Transmitter Link Standard (A/324), which was recently elevated. On March 30, 2016, A/324 (Schedule and Studio-to-Transmitter Link) was upgraded from Proposed to Candidate Standard.
On January 3, 2017, ATSC announced the updated status of its standards, in time for its debut at the Consumer Electronics Show in Las Vegas. As a result, this update, Captions and Subtitles (A/343) was upgraded from Candidate to Finalized Standard; Security (A-360), Lab Performance Test Plan (A-325) and Field Test Plan (A-326) were upgraded to Candidate Standard from "Under Consideration".
By March 7, 2017, ATSC announced a further update to the status of its standards, with the following as Finalized: A/321 (System Discovery and Signaling); A/322 Physical Layer Protocol (COFDM); A/326 (Field Test Plan [Recommended Practice]); A/330 (Link Layer Protocol); A/333 (Service Usage Reporting); A/334 (Audio Watermark Emission); A/335 (Video Watermark Emission); A/336 (Content Recovery in Redistribution Scenarios [ATSC 3.0 over Cable and Satellite]); A/342 Part 1 (Audio Common Elements); A/342 Part 2 (Audio: Dolby AC-4 System); A/342 Part 3 (Audio MPEG-H System); and A/343 (Captions and Subtitles). The following are Proposed Standards: A/325 (Lab Performance Test Plan [Recommended Practice]); A/332 (Service Announcement); A/338 (Companion Device); A/341 (Video - H.265/HEVC). The following are Candidate Standards: A/300 (ATSC 3.0 System); A/324 (Scheduler/Studio-to-Transmitter Link); A/331 (Signalling, Delivery, Sync Error Protection); A/337 (Application Signalling); A/344 (Interactive Content); A/360 (Security and Service Protection). The following is a Draft Standard: A/323 (Physical Layer Uplink/Downlink).
Structure/ATSC 3.0 System Layers
Bootstrap: System Discovery and Signalling
Physical Layer: Transmission (OFDM)
Link Layer Protocols: IP, MMT
Presentation: Audio and Video standards (to be determined), Ultra HD with High Definition and standard-definition multicast, Immersive Audio
Applications: Screen is a web page
Finalized Standards
A/200: Regional Service Availability (finalized on July 8, 2020)
A/300: ATSC 3.0
A/321: System Discovery and Signalling
A/322: Physical Layer Protocol
A/323: Dedicated Return Channel for ATSC 3.0 (Physical Layer Upload/Download (Uplink/Downlink)) (accepted on 2 November 2017, finalized on December 7, 2018)
A/324: Schedule and Studio-to-Transmitter Link (finalized on 5 January 2018)
A/325: Recommended Practice: TG3/S32 Lab Performance Test Plan
A/326: Field Test Plan (Recommended Practice)
A/330: Link Layer Protocol
A/331: Signaling, Delivery, Synchronization, and Error Protection (finalized on 6 December 2017)
A/332: Service Announcement
A/333: Service Usage Reporting
A/334: Audio Watermark Emission
A/335: Video Watermark Emission
A/336: Content Recovery in Redistribution Scenarios
A/337: Application Signaling (finalized on January 2, 2018)
A/338: Companion Device
A/339:2017, "ATSC Recommended Practice: Audio Watermark Modification and Erasure"
A/341: Video Standard (H.265, Scalable HEVC with HDR)
A/341:2018, "Video – HEVC, With Amendments No. 1 and No. 2" (Approved January 24, 2018. Amendment No. 1 approved March 9, 2018. Amendment No. 2 approved March 12, 2018, finalized on February 14, 2019)
A/342: Audio Standard (composed of the following three parts)
A/341 Amendment – 2094-40 (HEVC Video Codec, Finalized on September 19, 2021)
A/342 Part 1: Audio Common Elements
A/342 Part 2: AC-4 System
A/342 Part 3: MPEG-H AA System (declared the audio standard for ATSC 3.0 in South Korea)
A/343: Captions and Subtitles
A/344: Application Runtime Environment Standard (apps for advanced televisions) (finalized December 18, 2017)
A/360: Security and Service Protection (encryption for broadcasters) (finalized January 9, 2018)
Proposed Standards
Candidate Standards
A/331:2021: Signaling, Delivery, Synchronization, and Error Protection (Most Recent Document, approved on December 9, 2021)
A/344:2021: Interactive Content (Most Recent Document, approved November 24, 2021)
A/345: Personalization
Draft Standards
Under Consideration - Working Drafts and Recommended Practices
A/327: Guidelines for the Physical Layer Protocol (Most Recent Document Approved January 25, 2021)
A/350: Guide to the Link-Layer Protocol (Most Recent Document Approved: July 19, 2019)
A/351: Techniques for Signaling, Delivery and Synchronization (Most Recent Document Approved: February 15, 2021)
A/361: Security and Content Protection (Most Recent Document Approved: December 10, 2019)
A/362: Digital Rights Management (DRM) (Most Recent Document Approved: January 17, 2020)
A/370: Conversion of ATSC 3.0 Services for Redistribution (Most Recent Document Approved: December 11, 2019)
A/380: Haptics for ATSC 3.0 (Most Recent Document Approved: February 3, 2021)
References
External links
ATSC standards
Standards
Digital television
High-definition television
MPEG
Television lists
Television technology | List of ATSC standards | [
"Technology"
] | 2,516 | [
"Information and communications technology",
"Multimedia",
"MPEG",
"Television technology"
] |
9,494,074 | https://en.wikipedia.org/wiki/KIT%20%28gene%29 | Proto-oncogene c-KIT is the gene encoding the receptor tyrosine kinase protein known as tyrosine-protein kinase KIT, CD117 (cluster of differentiation 117) or mast/stem cell growth factor receptor (SCFR). Multiple transcript variants encoding different isoforms have been found for this gene.
KIT was first described by the German biochemist Axel Ullrich in 1987 as the cellular homolog of the feline sarcoma viral oncogene v-kit.
Function
KIT is a cytokine receptor expressed on the surface of hematopoietic stem cells as well as other cell types. Altered forms of this receptor may be associated with some types of cancer. KIT is a receptor tyrosine kinase type III, which binds to stem cell factor, also known as "steel factor" or "c-kit ligand". When this receptor binds to stem cell factor (SCF), it forms a dimer that activates its intrinsic tyrosine kinase activity, which in turn phosphorylates and activates signal transduction molecules that propagate the signal in the cell. After activation, the receptor is ubiquitinated to mark it for transport to a lysosome and eventual destruction. Signaling through KIT plays a role in cell survival, proliferation, and differentiation. For instance, KIT signaling is required for melanocyte survival, and it is also involved in haematopoiesis and gametogenesis.
Structure
Like other members of the receptor tyrosine kinase III family, KIT consists of an extracellular domain, a transmembrane domain, a juxtamembrane domain, and an intracellular tyrosine kinase domain. The extracellular domain is composed of five immunoglobulin-like domains, and the protein kinase domain is interrupted by a hydrophilic insert sequence of about 80 amino acids. The ligand stem cell factor binds via the second and third immunoglobulin domains.
Cell surface marker
Cluster of differentiation (CD) molecules are markers on the cell surface, as recognized by specific sets of antibodies, used to identify the cell type, stage of differentiation and activity of a cell. KIT is an important cell surface marker used to identify certain types of hematopoietic (blood) progenitors in the bone marrow. To be specific, hematopoietic stem cells (HSC), multipotent progenitors (MPP), and common myeloid progenitors (CMP) express high levels of KIT. Common lymphoid progenitors (CLP) express low surface levels of KIT. KIT also identifies the earliest thymocyte progenitors in the thymus—early T lineage progenitors (ETP/DN1) and DN2 thymocytes express high levels of c-Kit. It is also a marker for mouse prostate stem cells. In addition, mast cells, melanocytes in the skin, and interstitial cells of Cajal in the digestive tract express KIT. In humans, expression of c-kit in helper-like innate lymphoid cells (ILCs) which lack the expression of CRTH2 (CD294) is used to mark the ILC3 population.
CD117/c-KIT is expressed not only by bone marrow-derived stem cells, but also by those found in other adult organs, such as the prostate, liver, and heart, suggesting that SCF/c-KIT signaling pathways may contribute to stemness in some organs. Additionally, c-KIT has been associated with numerous biological processes in other cell types. For example, c-KIT signaling, has been shown to regulate oogenesis, folliculogenesis, and spermatogenesis, playing important roles in female and male fertility.
Mobilization
Hematopoietic progenitor cells are normally present in the blood at low levels. Mobilization is the process by which progenitors are made to migrate from the bone marrow into the bloodstream, thus increasing their numbers in the blood. Mobilization is used clinically as a source of hematopoietic stem cells for hematopoietic stem cell transplantation (HSCT). Signaling through KIT has been implicated in mobilization. At the current time, G-CSF is the main drug used for mobilization; it indirectly activates KIT. Plerixafor (an antagonist of CXCR4-SDF1) in combination with G-CSF, is also being used for mobilization of hematopoietic progenitor cells. Direct KIT agonists are currently being developed as mobilization agents.
Role in cancer
Activating mutations in this gene are associated with gastrointestinal stromal tumors, testicular seminoma, mast cell disease, melanoma, acute myeloid leukemia, while inactivating mutations are associated with the genetic defect piebaldism.
c-KIT plays an important role in regulating many mechanisms leading to tumor formation and progression of carcinomas. c-KIT has been proposed as a regulator of stemness in several cancers. Its expression has been linked to cancer stemness in ovarian cancer cells, colon cancer cells, non-small cell lung cancer cells, and prostate cancer cells. c-KIT has also been linked to the epithelial-mesenchymal transition (EMT), which is important for tumor aggressiveness and metastatic potential. Ectopic expression of c-KIT and EMT have been linked in adenoid cystic carcinoma of the salivary gland, thymic carcinomas, ovarian cancer cells, and prostate cancer cells. Several lines of evidence suggest that SCF/c-KIT signaling plays an important role in the tumor microenvironment. For example, in mice, high levels of c-KIT in mast cells, as well as its presence in the tumor microenvironment, promote angiogenesis, leading to increased tumor growth and metastasis.
Anti-KIT therapies
KIT is a proto-oncogene, meaning that overexpression or mutations of this protein can lead to cancer. Seminomas, a subtype of testicular germ cell tumors, frequently have activating mutations in exon 17 of KIT. In addition, the gene encoding KIT is frequently overexpressed and amplified in this tumor type, most commonly occurring as a single gene amplicon. Mutations of KIT have also been implicated in leukemia, a cancer of hematopoietic progenitors, melanoma, mast cell disease, and gastrointestinal stromal tumors (GISTs). The efficacy of imatinib (trade name Gleevec), a KIT inhibitor, is determined by the mutation status of KIT:
When the mutation has occurred in exon 11 (as is often the case in GISTs), the tumors are responsive to imatinib. However, if the mutation occurs in exon 17 (as is often the case in seminomas and leukemias), the receptor is not inhibited by imatinib; in those cases other inhibitors such as dasatinib, avapritinib or nilotinib can be used. Researchers have investigated the dynamic behavior of the wild-type and mutant D816H KIT receptors through computational analysis, with emphasis on the extended A-loop (EAL) region (residues 805–850). Their atomic-level investigation of the mutant KIT receptor, focused on the EAL region, provided better insight into the sunitinib-resistance mechanism of the KIT receptor and could help in discovering new therapeutics for KIT-based resistant tumor cells in GIST therapy.
The preclinical agent, KTN0182A, is an anti-KIT, pyrrolobenzodiazepine (PBD)-containing antibody-drug conjugate which shows anti-tumor activity in vitro and in vivo against a range of tumor types.
Diagnostic relevance
Antibodies to KIT are widely used in immunohistochemistry to help distinguish particular types of tumour in histological tissue sections. It is used primarily in the diagnosis of GISTs, which are positive for KIT, but negative for markers such as desmin and S-100, which are positive in smooth muscle and neural tumors, which have a similar appearance. In GISTs, KIT staining is typically cytoplasmic, with stronger accentuation along the cell membranes. KIT antibodies can also be used in the diagnosis of mast cell tumours and in distinguishing seminomas from embryonal carcinomas.
Interactions
KIT has been shown to interact with:
APS,
BCR,
CD63,
CD81,
CD9,
CRK,
CRKL,
DOK1,
FES,
GRB10,
Grb2,
KITLG,
LNK,
LYN,
MATK,
MPDZ,
PIK3R1,
PTPN11,
PTPN6,
STAT1,
SOCS1,
SOCS6,
SRC, and
TEC.
See also
Cytokine receptor
List of genes mutated in pigmented cutaneous lesions
References
Further reading
External links
C-kit receptor entry in the public domain NCI Dictionary of Cancer Terms
Immunoglobulin superfamily cytokine receptors
EC 2.7.10
Tyrosine kinase receptors | KIT (gene) | [
"Chemistry"
] | 1,915 | [
"Tyrosine kinase receptors",
"Signal transduction"
] |