1,028,429
https://en.wikipedia.org/wiki/Cud
Cud is a portion of food that returns from a ruminant's stomach to the mouth to be chewed a second time. More precisely, it is a bolus of semi-degraded food regurgitated from the reticulorumen of a ruminant. Cud is produced during the physical digestive process of rumination. Rumination The alimentary canal of ruminants, such as cattle, giraffes, goats, sheep, alpacas, and antelope, is unable to produce the enzymes required to break down the cellulose and hemicellulose of plant matter. Accordingly, these animals rely on a symbiotic relationship with a wide range of microbes, which largely reside in the reticulorumen and are able to synthesize the requisite enzymes. The reticulorumen thus hosts a microbial fermentation whose products (mainly volatile fatty acids and microbial protein) the ruminant is able to digest and absorb. This allows the animals to extract nutritional value from cellulose, which is otherwise indigestible to them. The process of rumination is stimulated by the presence of roughage in the upper part of the reticulorumen. The chest cavity is stretched, forming a vacuum in the gullet that sucks the semi-liquid stomach content into the esophagus. From the esophagus it is taken back to the mouth with retroperistaltic movements. When the stomach content, or cud, arrives in the mouth of the ruminant, it is pushed against the palate with the tongue to remove excess liquid; the liquid is swallowed, and the solid material is chewed thoroughly so that the animal can extract the minerals present in the cud brought to the surface during rumination. Rumination physically refines the food, exposing more surface area to the bacteria working in the reticulorumen, and stimulates saliva secretion, which buffers the rumen pH. When food has been degraded sufficiently it passes from the reticulorumen through the reticulo-omasal orifice to the omasum, followed by the abomasum, to continue the digestion process in the lower parts of the alimentary canal. No enzymes are secreted in the rumen. Enzymes and hydrochloric acid are only secreted from the abomasum (fourth stomach) onwards, and ruminants function from that point onwards much like monogastric animals, such as pigs and humans. Chemistry The reticulorumen has an optimum pH of 6.5 for the microbe population to thrive. Consumption by ruminants of an insufficiently fibrous diet leads to little cud formation and therefore reduced saliva production. This in turn is associated with rumen acidosis, where the rumen pH can fall to 5 or lower. Rumen acidosis is associated with a lowered appetite, which leads to still lower rates of saliva secretion. Eventually, a collapse of the microbial ecosystem in the rumen will occur because of the low pH. Acute rumen acidosis can lead to death of the animal, and will occur if the animal is allowed to eat a diet with no roughage but high levels of highly digestible starchy concentrate. Some dairy cows in intensive systems of milk production may have sub-acute acidosis because of the high proportion of cereals in their diets relative to an insufficient amount of forage. However, most producers provide adequate fodder in the form of hay to prevent this. Jewish dietary laws Jewish dietary laws state that an animal that chews the cud and has a cloven hoof is acceptable for consumption. Any animal that does not both chew the cud and have a cloven hoof is unclean. Cuisine Goat cud is sometimes an ingredient in pinapaitan. 
See also Chymus Cecotrope References Digestive system Ruminants
Cud
[ "Biology" ]
815
[ "Digestive system", "Organ systems" ]
1,028,435
https://en.wikipedia.org/wiki/Smart%20Tag
Smart Tag is the former name of a transponder-based electronic toll collection system implemented by the Virginia Department of Transportation (VDOT). It was launched as Fastoll on April 15, 1996. Fastoll was rebranded as Smart Tag in 1998, and was placed under the umbrella of Smart Travel. In November 2007, the Smart Tag brand name was retired in favor of E-ZPass Virginia, several years after the Smart Tag system became a part of the E-ZPass network. Originally, Smart Tag only operated at certain toll roads and crossings in Virginia. The system became interoperable with the E-ZPass toll collection system on October 27, 2004, although Richmond Metropolitan Authority-owned toll roads—Boulevard Bridge, the Downtown Expressway, and the Powhite Parkway (excluding the extension)—did not begin accepting E-ZPass until August 3, 2005; E-ZPass integration had been delayed due to damage from Tropical Storm Gaston. Smart Tag branded transponders operate throughout the E-ZPass network, and E-ZPass branded transponders operate at all E-ZPass Virginia (formerly Smart Tag) toll collection points. Roads and crossings that accept Smart Tag/E-ZPass Virginia/E-ZPass: Dulles Toll Road from Falls Church to Dulles International Airport (DC suburbs). Dulles Greenway, a privately owned highway from Dulles to Leesburg. Downtown Expressway in Richmond. Powhite Parkway and Powhite Parkway Extension in Richmond and Chesterfield County. Boulevard Bridge (the "Nickel Bridge", though it costs 45 cents now) in Richmond. Pocahontas Parkway in Chesterfield and Henrico Counties. Chesapeake Expressway in Chesapeake. George P. Coleman Bridge, across the York River near Newport News (U.S. Highway 17). Chesapeake Bay Bridge-Tunnel. Any road or crossing in the E-ZPass network. References External links E-ZPass Virginia Smart Travel Virginia Smart devices Electronic toll collection Transportation in Virginia
Smart Tag
[ "Technology" ]
401
[ "Home automation", "Smart devices" ]
1,028,515
https://en.wikipedia.org/wiki/De%20revolutionibus%20orbium%20coelestium
De revolutionibus orbium coelestium (English translation: On the Revolutions of the Heavenly Spheres) is the seminal work on the heliocentric theory of the astronomer Nicolaus Copernicus (1473–1543) of the Polish Renaissance. The book, first printed in 1543 in Nuremberg, Holy Roman Empire, offered an alternative model of the universe to Ptolemy's geocentric system, which had been widely accepted since ancient times. History Copernicus initially outlined his system in a short, untitled, anonymous manuscript that he distributed to several friends, referred to as the Commentariolus. A physician's library list dating to 1514 includes a manuscript whose description matches the Commentariolus, so Copernicus must have begun work on his new system by that time. Most historians believe that he wrote the Commentariolus after his return from Italy, possibly only after 1510. At this time, Copernicus anticipated that he could reconcile the motion of the Earth with the perceived motions of the planets easily, with fewer motions than were necessary in the version of the Ptolemaic system current at the time. Among other techniques, the heliocentric Copernican model made use of the Urdi Lemma developed in the 13th century by the Arab astronomer Mu'ayyad al-Din al-'Urdi, the first of the Maragha astronomers to develop a geocentric but non-Ptolemaic model of planetary motion. Observations of Mercury by Bernhard Walther (1430–1504) of Nuremberg, a pupil of Regiomontanus, were made available to Copernicus by Johannes Schöner, 45 observations in total, 14 of them with longitude and latitude. Copernicus used three of them in De revolutionibus, giving only longitudes, and erroneously attributing them to Schöner. Copernicus' values differed slightly from the ones published by Schöner in 1544 in Observationes XXX annorum a I. Regiomontano et B. Walthero Norimbergae habitae, [4°, Norimb. 1544]. A manuscript of De revolutionibus in Copernicus' own hand has survived. After his death, it was given to his pupil, Rheticus, who for publication had only been given a copy without annotations. Via Heidelberg, it ended up in Prague, where it was rediscovered and studied in the 19th century. Close examination of the manuscript, including the different types of paper used, helped scholars construct an approximate timetable for its composition. Apparently Copernicus began by making a few astronomical observations to provide new data to perfect his models. He may have begun writing the book while still engaged in observations. By the 1530s a substantial part of the book was complete, but Copernicus hesitated to publish. In 1536, Cardinal Nikolaus von Schönberg wrote to Copernicus and urged him to publish his manuscript. In 1539, Georg Joachim Rheticus, a young mathematician from Wittenberg, arrived in Frauenburg (Frombork) to study with him. Rheticus read Copernicus' manuscript and immediately wrote a non-technical summary of its main theories in the form of an open letter addressed to Schöner, his astrology teacher in Nürnberg; he published this letter as the Narratio Prima in Danzig in 1540. Rheticus' friend and mentor Achilles Gasser published a second edition of the Narratio in Basel in 1541. Due to its friendly reception, Copernicus finally agreed to publication of more of his main work—in 1542, a treatise on trigonometry, which was taken from the second book of the still unpublished De revolutionibus. Rheticus published it in Copernicus' name. 
Under strong pressure from Rheticus, and having seen that the first general reception of his work had not been unfavorable, Copernicus finally agreed to give the book to his close friend, Bishop Tiedemann Giese, to be delivered to Rheticus in Wittenberg for printing by Johannes Petreius at Nürnberg (Nuremberg). It was published just before Copernicus' death, in 1543. Copernicus kept a copy of his manuscript which, sometime after his death, was sent to Rheticus in the attempt to produce an authentic, unaltered version of the book. The plan failed but the copy was found during the 18th century and was published later. It is kept at the Jagiellonian University Library in Kraków, where it remains, bearing the library number BJ 10 000. Contents From the first edition, Copernicus' book was prefixed with an anonymous preface arguing that what follows is merely a calculational device consistent with the observations, and makes no claim to philosophical truth. Only later was this revealed to be the unauthorized interjection of the Lutheran preacher Andreas Osiander, who lived in Nuremberg when the first edition was printed there. This is followed by Copernicus' own preface, where he dedicates his work to Pope Paul III and appeals to the latter's skill as a mathematician to recognize the truth of Copernicus' hypothesis. De revolutionibus is divided into six "books" (sections or parts), following closely the layout of Ptolemy's Almagest which it updated and replaced: Book I chapters 1–11 are a general vision of the heliocentric theory, and a summarized exposition of his cosmology. The world (heavens) is spherical, as is the Earth, and the land and water make a single globe. The celestial bodies, including the Earth, have regular circular and everlasting movements. The Earth rotates on its axis and around the Sun. Answers to why the ancients thought the Earth was central. The order of the planets around the Sun and their periodicity. Chapters 12–14 give theorems for chord geometry as well as a table of chords. Book II describes the principles of spherical astronomy as a basis for the arguments developed in the following books and gives a comprehensive catalogue of the fixed stars. Book III describes his work on the precession of the equinoxes and treats the apparent movements of the Sun and related phenomena. Book IV is a similar description of the Moon and its orbital movements. Book V explains how to calculate the positions of the wandering stars based on the heliocentric model and gives tables for the five planets. Book VI deals with the digression in latitude from the ecliptic of the five planets. Copernicus argued that the universe comprised eight spheres. The outermost consisted of motionless, fixed stars, with the Sun motionless at the center. The known planets revolved about the Sun, each in its own sphere, in the order: Mercury, Venus, Earth, Mars, Jupiter, Saturn. The Moon, however, revolved in its sphere around the Earth. What appeared to be the daily revolution of the Sun and fixed stars around the Earth was actually the Earth's daily rotation on its own axis. Copernicus adhered to one of the standard beliefs of his time, namely that the motions of celestial bodies must be composed of uniform circular motions. For this reason, he was unable to account for the observed apparent motion of the planets without retaining a complex system of epicycles similar to those of the Ptolemaic system. 
Despite Copernicus' adherence to this aspect of ancient astronomy, his radical shift from a geocentric to a heliocentric cosmology was a serious blow to Aristotle's science—and helped usher in the Scientific Revolution. Ad lectorem Rheticus left Nürnberg to take up his post as professor in Leipzig. Andreas Osiander had taken over the task of supervising the printing and publication. In an effort to reduce the controversial impact of the book Osiander added his own unsigned letter Ad lectorem de hypothesibus huius operis (To the reader concerning the hypotheses of this work) printed in front of Copernicus' preface which was a dedicatory letter to Pope Paul III and which kept the title "Praefatio authoris" (to acknowledge that the unsigned letter was not by the book's author). Osiander's letter stated that Copernicus' system was mathematics intended to aid computation and not an attempt to declare literal truth: As even Osiander's defenders point out, the Ad lectorem "expresses views on the aim and nature of scientific theories at variance with Copernicus' claims for his own theory". Many view Osiander's letter as a betrayal of science and Copernicus, and an attempt to pass his own thoughts off as those of the book's author. An example of this type of claim can be seen in the Catholic Encyclopedia, which states "Fortunately for him [the dying Copernicus], he could not see what Osiander had done. This reformer, knowing the attitude of Luther and Melanchthon against the heliocentric system ... without adding his own name, replaced the preface of Copernicus by another strongly contrasting in spirit with that of Copernicus." While Osiander's motives behind the letter have been questioned by many, he has been defended by historian Bruce Wrightsman, who points out he was not an enemy of science. Osiander had many scientific connections including "Johannes Schoner, Rheticus's teacher, whom Osiander recommended for his post at the Nurnberg Gymnasium; Peter Apian of Ingolstadt University; Hieronymous Schreiber...Joachim Camerarius...Erasmus Reinhold...Joachim Rheticus...and finally, Hieronymous Cardan." The historian Wrightsman put forward that Osiander did not sign the letter because he "was such a notorious [Protestant] reformer whose name was well-known and infamous among Catholics", so that signing would have likely caused negative scrutiny of the work of Copernicus (a loyal Catholic canon and scholar). Copernicus himself had communicated to Osiander his "own fears that his work would be scrutinized and criticized by the 'peripatetics and theologians'," and he had already been in trouble with his bishop, Johannes Dantiscus, on account of his former relationship with his mistress and friendship with Dantiscus's enemy and suspected heretic, Alexander Scultetus. It was also possible that Protestant Nurnberg could fall to the forces of the Holy Roman Emperor and since "the books of hostile theologians could be burned...why not scientific works with the names of hated theologians affixed to them?" Wrightsman also holds that this is why Copernicus did not mention his top student, Rheticus (a Lutheran) in the book's dedication to the Pope. Osiander's interest in astronomy was theological; he hoped for "improving the chronology of historical events and thus providing more accurate apocalyptic interpretations of the Bible... 
[he shared in] the general awareness that the calendar was not in agreement with astronomical movement and therefore, needed to be corrected by devising better models on which to base calculations." In an era before the telescope, Osiander (like most of the era's mathematical astronomers) attempted to bridge the "fundamental incompatibility between Ptolemaic astronomy and Aristotelian physics, and the need to preserve both", by taking an 'instrumentalist' position. Only the handful of "Philosophical purists like the Averroists... demanded physical consistency and thus sought for realist models." Copernicus was hampered by his insistence on preserving the idea that celestial bodies had to travel in perfect circles — he "was still attached to classical ideas of circular motion around deferents and epicycles, and spheres." This was particularly troubling concerning the Earth because he "attached the Earth's axis rigidly to a Sun-centered sphere. The unfortunate consequence was that the terrestrial rotation axis then maintained the same inclination with respect to the Sun as the sphere turned, eliminating the seasons." To explain the seasons, he had to propose a third motion, "an annual contrary conical sweep of the terrestrial axis". It was not until the Great Comet of 1577, which moved as if there were no spheres to crash through, that the idea was challenged. In 1609, Johannes Kepler fixed Copernicus' theory by stating that the planets orbit the Sun not in circles, but ellipses. Only after Kepler's refinement of Copernicus' theory was the need for deferents and epicycles abolished. In his work, Copernicus "used conventional, hypothetical devices like epicycles...as all astronomers had done since antiquity. ...hypothetical constructs solely designed to 'save the phenomena' and aid computation". Ptolemy's theory contained a hypothesis about the epicycle of Venus that was viewed as absurd if seen as anything other than a geometrical device (its brightness and distance should have varied greatly, but they don't). "In spite of this defect in Ptolemy's theory, Copernicus' hypothesis predicts approximately the same variations." Because of the use of similar terms and similar deficiencies, Osiander could see "little technical or physical truth-gain" between one system and the other. It was this attitude towards technical astronomy that had allowed it to "function since antiquity, despite its inconsistencies with the principles of physics and the philosophical objections of Averroists." Writing Ad lectorem, Osiander was influenced by Pico della Mirandola's idea that humanity "orders [an intellectual] cosmos out of the chaos of opinions." From Pico's writings, Osiander "learned to extract and synthesize insights from many sources without becoming the slavish follower of any of them." The effect of Pico on Osiander was tempered by the influence of Nicholas of Cusa and his idea of coincidentia oppositorum. Rather than having Pico's focus on human effort, Osiander followed Cusa's idea that understanding the Universe and its Creator only came from divine inspiration rather than intellectual organization. From these influences, Osiander held that in the area of philosophical speculation and scientific hypothesis there are "no heretics of the intellect", but when one gets past speculation into truth-claims the Bible is the ultimate measure. By holding that Copernicanism was mathematical speculation, Osiander held that it would be silly to hold it up against the accounts of the Bible. 
Pico's influence on Osiander did not escape Rheticus, who reacted strongly against the Ad lectorem. As historian Robert S. Westman puts it, "The more profound source of Rheticus's ire however, was Osiander's view of astronomy as a discipline fundamentally incapable of knowing anything with certainty. For Rheticus, this extreme position surely must have resonated uncomfortably with Pico della Mirandola's attack on the foundations of divinatory astrology." In his Disputations, Pico had made a devastating attack on astrology. Because those who were making astrological predictions relied on astronomers to tell them where the planets were, they also became a target. Pico held that since astronomers who calculate planetary positions could not agree among themselves, how were they to be held as reliable? While Pico could bring into concordance writers like Aristotle, Plato, Plotinus, Averroes, Avicenna, and Aquinas, the lack of consensus he saw in astronomy was a proof to him of its fallibility alongside astrology. Pico pointed out that the astronomers' instruments were imprecise and any imperfection of even a degree made them worthless for astrology; people should not trust astrologers because they should not trust the numbers from astronomers. Pico pointed out that astronomers couldn't even tell where the Sun appeared in the order of the planets as they orbited the Earth (some put it close to the Moon, others among the planets). How, Pico asked, could astrologers possibly claim they could read what was going on when the astronomers they relied on could offer no precision on even basic questions? As Westman points out, to Rheticus "it would seem that Osiander now offered new grounds for endorsing Pico's conclusions: not merely was the disagreement among astronomers grounds for mistrusting the sort of knowledge that they produced, but now Osiander proclaimed that astronomers might construct a world deduced from (possibly) false premises. Thus the conflict between Piconian skepticism and secure principles for the science of the stars was built right into the complex dedicatory apparatus of De Revolutionibus itself." According to the notes of Michael Maestlin, "Rheticus...became embroiled in a very bitter wrangle with the printer [over the Ad lectorem]. Rheticus...suspected Osiander had prefaced the work; if he knew this for certain, he declared, he would rough up the fellow so violently that in future he would mind his own business." Objecting to the Ad lectorem, Tiedemann Giese urged the Nuremberg city council to issue a correction, but this was not done, and the matter was forgotten. Jan Broscius, a supporter of Copernicus, also despaired of the Ad lectorem, writing "Ptolemy's hypothesis is the earth rests. Copernicus' hypothesis is that the earth is in motion. Can either, therefore, be true? ... Indeed, Osiander deceives much with that preface of his ... Hence, someone may well ask: How is one to know which hypothesis is truer, the Ptolemaic or the Copernican?" Petreius had sent a copy to Hieronymus Schreiber, an astronomer from Nürnberg who had substituted for Rheticus as professor of mathematics in Wittenberg while Rheticus was in Nürnberg supervising the printing. Schreiber, who died in 1547, left in his copy of the book a note about Osiander's authorship. Via Michael Mästlin, this copy came to Johannes Kepler, who discovered what Osiander had done and methodically demonstrated that Osiander had indeed added the foreword. 
The most knowledgeable astronomers of the time had realized that the foreword was Osiander's doing. Owen Gingerich gives a slightly different version: Kepler knew of Osiander's authorship since he had read about it in one of Schreiber's annotations in his copy of De Revolutionibus; Maestlin learned of the fact from Kepler. Indeed, Maestlin perused Kepler's book, even leaving a few annotations in it. However, Maestlin already suspected Osiander, because he had bought his De revolutionibus from the widow of Philipp Apian; examining his books, he had found a note attributing the introduction to Osiander. Johannes Praetorius (1537–1616), who learned of Osiander's authorship from Rheticus during a visit to him in Kraków, wrote Osiander's name in the margin of the foreword in his copy of De revolutionibus. All three early editions of De revolutionibus included Osiander's foreword. Reception Even before the 1543 publication of De revolutionibus, rumors circulated about its central theses. In one of his Tischreden (Table Talks), Martin Luther is quoted as saying in 1539: People gave ear to an upstart astrologer who strove to show that the earth revolves, not the heavens or the firmament, the sun and the moon ... This fool wishes to reverse the entire science of astronomy; but sacred Scripture tells us [Joshua 10:13] that Joshua commanded the sun to stand still, and not the earth. When the book was finally published, demand was low, with an initial print run of 400 failing to sell out. Copernicus had made the book extremely technical, unreadable to all but the most advanced astronomers of the day, allowing it to disseminate into their ranks before stirring great controversy. And, like Osiander, contemporary mathematicians and astronomers encouraged its audience to view it as a useful mathematical model without necessarily being true about causes, thereby somewhat shielding it from accusations of blasphemy. Among some astronomers, the book "at once took its place as a worthy successor to the Almagest of Ptolemy, which had hitherto been the Alpha and Omega of astronomers". Erasmus Reinhold hailed the work in 1542 and by 1551 had developed the Prutenic Tables ("Prussian Tables") using Copernicus' methods. The Prutenic Tables, published in 1551, were used as a basis for the calendar reform instituted in 1582 by Pope Gregory XIII. They were also used by sailors and maritime explorers, whose 15th-century predecessors had used Regiomontanus' Table of the Stars. In England, Robert Recorde, John Dee, Thomas Digges and William Gilbert were among those who adopted his position; in Germany, Christian Wurstisen, Christoph Rothmann and Michael Mästlin, the teacher of Johannes Kepler; in Italy, Giambattista Benedetti and Giordano Bruno, whilst Franciscus Patricius accepted the rotation of the Earth. In Spain, rules published in 1561 for the curriculum of the University of Salamanca gave students the choice between studying Ptolemy or Copernicus. One of those students, Diego de Zúñiga, published an acceptance of Copernican theory in 1584. Very soon, nevertheless, Copernicus' theory was attacked with Scripture and with the common Aristotelian proofs. In 1549, Melanchthon, Luther's principal lieutenant, wrote against Copernicus, pointing to the theory's apparent conflict with Scripture and advocating that "severe measures" be taken to restrain the impiety of Copernicans. 
The works of Copernicus and Zúñiga—the latter for asserting that De revolutionibus was compatible with Catholic faith—were placed on the Index of Forbidden Books by a decree of the Sacred Congregation of March 5, 1616 (more than 70 years after Copernicus' publication): This Holy Congregation has also learned about the spreading and acceptance by many of the false Pythagorean doctrine, altogether contrary to the Holy Scripture, that the earth moves and the sun is motionless, which is also taught by Nicholaus Copernicus' De revolutionibus orbium coelestium and by Diego de Zúñiga's In Job ... Therefore, in order that this opinion may not creep any further to the prejudice of Catholic truth, the Congregation has decided that the books by Nicolaus Copernicus [De revolutionibus] and Diego de Zúñiga [In Job] be suspended until corrected. De revolutionibus was not formally banned but merely withdrawn from circulation, pending "corrections" that would clarify the theory's status as hypothesis. Nine sentences that represented the heliocentric system as certain were to be omitted or changed. After these corrections were prepared and formally approved in 1620 the reading of the book was permitted. But the book was never reprinted with the changes and was available in Catholic jurisdictions only to suitably qualified scholars, by special request. It remained on the Index until 1758, when Pope Benedict XIV (1740–58) removed the uncorrected book from his revised Index. Census of copies Arthur Koestler described De revolutionibus as "The Book That Nobody Read", saying the book "was and is an all-time worst seller", despite the fact that it was reprinted four times. Owen Gingerich, an eminent astronomer and historian of science who has written on both Nicolaus Copernicus and Johannes Kepler, disproved this after a 35-year project to examine every surviving copy of the first two editions. Gingerich showed that nearly all the leading mathematicians and astronomers of the time owned and read the book; however, his analysis of the marginalia shows that almost all of them ignored the cosmology at the beginning of the book and were only interested in Copernicus' new equant-free models of planetary motion in the later chapters. Also, Nicolaus Reimers in 1587 translated the book into German. Gingerich's efforts and conclusions are recounted in The Book Nobody Read, published in 2004 by Walker & Co. His census included 276 copies of the first edition (by comparison, there are 228 extant copies of Shakespeare's First Folio) and 325 copies of the second. The research behind this book earned its author the Polish government's Order of Merit in 1981. Due largely to Gingerich's scholarship, De revolutionibus has been researched and catalogued better than any other first-edition historic text except for the original Gutenberg Bible. One of the copies now resides at the Archives of the University of Santo Tomas in the Miguel de Benavides Library. In January 2017, a second-edition copy was stolen as part of a heist of rare books from Heathrow Airport and remains unrecovered. Editions 1543, Nuremberg, by Johannes Petreius. A copy of this is held by the University of Edinburgh; it had been owned by an astronomer, who filled the pages with scholarly annotations, and subsequently by the Scottish economist Adam Smith. 
Another copy is held by the Cary Graphic Arts Collection in New York, alongside astronomer Johannes de Sacrobosco's manuscript "De sphaera mundi" (On the Sphere of the World), which supports the earlier Ptolemaic model of the universe. Another 1543 copy is present in the Special Collections of Leiden University Libraries. 1566, Basel, by Henricus Petrus. A copy of this is held by the University of Sydney; previously owned by Owen Gingerich. 1617, Amsterdam, by Nicolaus Mulerius. 1854, Warsaw, with Polish translation and the authentic preface by Copernicus. 1873, Thorn; German translation sponsored by the local Coppernicus Society, with all of Copernicus' textual corrections given as footnotes. Latin texts available 1543, Nuremberg, by Johannes Petreius; online from Harvard University. Translations English translations of De revolutionibus have included: On the Revolutions of the Heavenly Spheres, translated by C. G. Wallis, Annapolis, St John's College Bookstore, 1939. Republished in volume 16 of the Great Books of the Western World, Chicago, Encyclopædia Britannica, 1952; in the series of the same name, published by the Franklin Library, Franklin Center, Philadelphia, 1985; in volume 15 of the second edition of the Great Books, Encyclopædia Britannica, 1990; and Amherst, NY: Prometheus Books, 1995, Great Minds Series – Science. On the Revolutions of the Heavenly Spheres, translated with an introduction and notes by A. M. Duncan, Newton Abbot: David & Charles; New York: Barnes and Noble, 1976. On the Revolutions; translation and commentary by Edward Rosen, Baltimore: Johns Hopkins University Press, 1992. (Foundations of Natural History. Originally published in Warsaw, Poland, 1978.) See also List of most expensive books and manuscripts Wittenberg interpretation of Copernicus Notes References Gassendi, Pierre: The Life of Copernicus, biography (1654), with notes by Olivier Thill (2002). Analyses the varieties of argument used by Copernicus. Heilbron, J.L.: The Sun in the Church: Cathedrals as Solar Observatories. Cambridge, Massachusetts: Harvard University Press, 1999. Sobel, D.: A More Perfect Heaven - How Copernicus Revolutionised the Cosmos. Bloomsbury, 2011. Swerdlow, N.M., O. Neugebauer: Mathematical astronomy in Copernicus' De revolutionibus. New York: Springer, 1984 (Studies in the history of mathematics and physical sciences; 10). Vermij, R.H.: The Calvinist Copernicans: The Reception of the New Astronomy in the Dutch Republic, 1575–1750. Amsterdam: Koninklijke Nederlandse Akademie van Wetenschappen, 2002. Westman, R.S., ed.: The Copernican achievement. Berkeley: University of California Press, 1975. Zinner, E.: Entstehung und Ausbreitung der coppernicanischen Lehre. 2. Aufl. durchgesehen und erg. von Heribert M. Nobis und Felix Schmeidler. München: C.H. Beck, 1988. External links Manuscript of De Revolutionibus by Nicolaus Copernicus, from Jagiellonian Library, Poland. De revolutionibus orbium coelestium, from Harvard University. De revolutionibus orbium coelestium, from Jagiellon University, Poland. De Revolutionibus Orbium Coelestium, from Rare Book Room. On the Revolutions, from WebExhibits. English translation of part of Book I. On the Revolutions, Warsaw-Cracow 1978. Full English translation. 
River Campus Libraries, Book of the Month December 2005: De revolutionibus orbium coelestium A facsimile of De Revolutionibus Orbium Coelestium (1543) from the Rare Book and Special Collection Division at the Library of Congress De Revolutionibus Orbium Coelestium (1566) From the Rare Book and Special Collection Division at the Library of Congress De Revolutionibus Orbium Coelestium (1566) Previously owned by Owen Gingerich. Includes the third printing (previous editions 1540 and 1541) of De libris revolutionum Nicolai Copernici narratio prima. From the University of Sydney Library. A facsimile of De Revolutionibus Orbium Coelestium (1543) with annotations by Michael Maestlin from Stadtbibliothek Schaffhausen (Schaffhausen City Library) 1543 books 1543 in science History of astronomy Astronomy books 16th-century books in Latin Memory of the World Register Works by Nicolaus Copernicus Copernican Revolution
De revolutionibus orbium coelestium
[ "Astronomy" ]
6,275
[ "Astronomy books", "Copernican Revolution", "Works about astronomy", "History of astronomy" ]
1,028,589
https://en.wikipedia.org/wiki/Normal%20basis
In mathematics, specifically the algebraic theory of fields, a normal basis is a special kind of basis for Galois extensions of finite degree, characterised as forming a single orbit for the Galois group. The normal basis theorem states that any finite Galois extension of fields has a normal basis. In algebraic number theory, the study of the more refined question of the existence of a normal integral basis is part of Galois module theory. Normal basis theorem Let $K/F$ be a Galois extension with Galois group $G$. The classical normal basis theorem states that there is an element $\beta \in K$ such that $\{g(\beta) : g \in G\}$ forms a basis of K, considered as a vector space over F. That is, any element $\alpha \in K$ can be written uniquely as $\alpha = \sum_{g \in G} a_g\, g(\beta)$ for some elements $a_g \in F$. A normal basis contrasts with a primitive element basis of the form $\{1, \beta, \beta^2, \ldots, \beta^{n-1}\}$, where $\beta \in K$ is an element whose minimal polynomial has degree $n = [K : F]$. Group representation point of view A field extension $K/F$ with Galois group G can be naturally viewed as a representation of the group G over the field F in which each automorphism is represented by itself. Representations of G over the field F can be viewed as left modules for the group algebra F[G]. Every homomorphism of left F[G]-modules $\phi : F[G] \to K$ is of the form $\phi\left(\sum_{g \in G} a_g g\right) = \sum_{g \in G} a_g\, g(\beta)$ for some $\beta = \phi(1) \in K$. Since $\{g : g \in G\}$ is a linear basis of F[G] over F, it follows easily that $\phi$ is bijective iff $\beta$ generates a normal basis of K over F. The normal basis theorem therefore amounts to the statement saying that if $K/F$ is a finite Galois extension, then $K \cong F[G]$ as left $F[G]$-modules. In terms of representations of G over F, this means that K is isomorphic to the regular representation. Case of finite fields For finite fields this can be stated as follows: Let $F = GF(q)$ denote the field of q elements, where $q = p^m$ is a prime power, and let $K = GF(q^n)$ denote its extension field of degree $n \geq 1$. Here the Galois group is $G = \mathrm{Gal}(K/F) = \{1, \Phi, \Phi^2, \ldots, \Phi^{n-1}\}$, a cyclic group generated by the q-power Frobenius automorphism $\Phi(\alpha) = \alpha^q$, with $\Phi^n = \mathrm{Id}$. Then there exists an element $\beta \in K$ such that $\{\beta, \Phi(\beta), \Phi^2(\beta), \ldots, \Phi^{n-1}(\beta)\} = \{\beta, \beta^q, \beta^{q^2}, \ldots, \beta^{q^{n-1}}\}$ is a basis of K over F. Proof for finite fields In case the Galois group is cyclic as above, generated by $\Phi$ with $\Phi^n = \mathrm{Id}$, the normal basis theorem follows from two basic facts. The first is the linear independence of characters: a multiplicative character is a mapping χ from a group H to a field K satisfying $\chi(h_1 h_2) = \chi(h_1)\,\chi(h_2)$; then any distinct characters $\chi_1, \ldots, \chi_n$ are linearly independent in the K-vector space of mappings $H \to K$. We apply this to the Galois group automorphisms $\chi_i = \Phi^i$ ($i = 0, 1, \ldots, n-1$), thought of as mappings from the multiplicative group $H = K^\times$ to K. Now $K \cong F^n$ as an F-vector space, so we may consider $\Phi$ as an element of the matrix algebra $M_n(F)$; since its powers $1, \Phi, \ldots, \Phi^{n-1}$ are linearly independent (over K and a fortiori over F), its minimal polynomial must have degree at least n, i.e. it must be $X^n - 1$. The second basic fact is the classification of finitely generated modules over a PID such as $F[X]$. Every such module M can be represented as $M \cong \bigoplus_{i=1}^{k} F[X]/(f_i(X))$, where the $f_i(X)$ may be chosen so that they are monic polynomials or zero and $f_{i+1}(X)$ is a multiple of $f_i(X)$; $f_k(X)$ is the monic polynomial of smallest degree annihilating the module, or zero if no such non-zero polynomial exists. In the first case $\dim_F M = \sum_{i=1}^{k} \deg f_i$, in the second case $\dim_F M = \infty$. In our case of cyclic G of size n generated by $\Phi$ we have an F-algebra isomorphism $F[G] \cong F[X]/(X^n - 1)$ where X corresponds to $\Phi$, so every $F[G]$-module may be viewed as an $F[X]$-module with multiplication by X being multiplication by $\Phi$. In case of K this means $X \cdot \alpha = \Phi(\alpha)$, so the monic polynomial of smallest degree annihilating K is the minimal polynomial of $\Phi$, namely $X^n - 1$. Since K is a finite dimensional F-space with $\dim_F K = n = \deg(X^n - 1)$, we can only have $k = 1$, and $K \cong F[X]/(X^n - 1)$ as F[X]-modules. (Note this is an isomorphism of F-linear spaces, but not of rings or F-algebras.) 
This gives the isomorphism of $F[G]$-modules $K \cong F[G]$ that we talked about above, and under it the basis $\{1, X, X^2, \ldots, X^{n-1}\}$ on the right side corresponds to a normal basis $\{\beta, \Phi(\beta), \ldots, \Phi^{n-1}(\beta)\}$ of K on the left. Note that this proof would also apply in the case of a cyclic Kummer extension. Example Consider the field $K = GF(2^3)$ over $F = GF(2)$, with Frobenius automorphism $\Phi(\alpha) = \alpha^2$. The proof above clarifies the choice of normal bases in terms of the structure of K as a representation of G (or F[G]-module). The irreducible factorization $X^n - 1 = X^3 - 1 = (X + 1)(X^2 + X + 1) \in F[X]$ means we have a direct sum of F[G]-modules (by the Chinese remainder theorem): $K \cong F[X]/(X + 1) \oplus F[X]/(X^2 + X + 1)$. The first component is just $F \subset K$, while the second is isomorphic as an F[G]-module to the field $GF(2^2)$ under the action $\Phi(\alpha) = X \cdot \alpha$. (Thus $K \cong F \oplus GF(2^2)$ as F[G]-modules, but not as F-algebras.) The elements $\beta$ which can be used for a normal basis are precisely those outside either of the submodules, so that $(\Phi + 1)(\beta) \neq 0$ and $(\Phi^2 + \Phi + 1)(\beta) \neq 0$. In terms of the G-orbits of K, which correspond to the irreducible factors of $t^{2^3} - t = t\,(t + 1)\,(t^3 + t + 1)\,(t^3 + t^2 + 1) \in F[t]$: the elements of $F = GF(2)$ are the roots of $t(t + 1)$, the nonzero elements of the submodule are the roots of $t^3 + t + 1$, while the normal basis, which in this case is unique, is given by the roots of the remaining factor $t^3 + t^2 + 1$. By contrast, for the extension field $L = GF(2^4)$ in which $n = 4$ is divisible by $p = 2$, we have the F[G]-module isomorphism $L \cong F[X]/(X^4 - 1) = F[X]/(X + 1)^4$. Here the operator $\Phi \cong X$ is not diagonalizable, the module L has nested submodules given by generalized eigenspaces of $\Phi$, and the normal basis elements β are those outside the largest proper generalized eigenspace, the elements with $(\Phi + 1)^3(\beta) \neq 0$. Application to cryptography The normal basis is frequently used in cryptographic applications based on the discrete logarithm problem, such as elliptic curve cryptography, since arithmetic using a normal basis is typically more computationally efficient than using other bases. For example, in the field $K = GF(2^3)$ above, we may represent elements as bit-strings: $\alpha = (a_2, a_1, a_0) = a_2 \Phi^2(\beta) + a_1 \Phi(\beta) + a_0 \beta = a_2 \beta^4 + a_1 \beta^2 + a_0 \beta$, where the coefficients $a_i$ are bits. Now we can square elements by doing a left circular shift, $\alpha^2 = \Phi(\alpha) = (a_1, a_0, a_2)$, since squaring $\beta^4$ gives $\beta^8 = \beta$. This makes the normal basis especially attractive for cryptosystems that utilize frequent squaring. Proof for the case of infinite fields Suppose $K/F$ is a finite Galois extension of the infinite field F. Let $[K : F] = n$, $\mathrm{Gal}(K/F) = G = \{\sigma_1, \ldots, \sigma_n\}$, where $\sigma_1 = \mathrm{Id}$. By the primitive element theorem there exists $\alpha \in K$ such that $K = F[\alpha]$. Let us write $\alpha_i = \sigma_i(\alpha)$. α's (monic) minimal polynomial f over F is the irreducible degree n polynomial given by the formula $f(X) = \prod_{i=1}^{n} (X - \alpha_i)$. Since f is separable (it has simple roots) we may define $g(X) = \frac{f(X)}{(X - \alpha)\, f'(\alpha)}$ and $g_i(X) = \frac{f(X)}{(X - \alpha_i)\, f'(\alpha_i)}$. In other words, $g_i(X) = \sigma_i(g(X))$. Note that $g(\alpha) = 1$ and $g_i(\alpha_j) = 0$ for $i \neq j$. Next, define an $n \times n$ matrix A of polynomials over K and a polynomial D by $A_{ij}(X) = \sigma_i(\sigma_j(g(X)))$, $D(X) = \det A(X)$. Observe that $A_{ij}(\alpha) = g_k(\alpha)$, where k is determined by $\sigma_k = \sigma_i \sigma_j$; in particular $A_{ij}(\alpha) = 1$ iff $\sigma_i \sigma_j = \mathrm{Id}$. It follows that $A(\alpha)$ is the permutation matrix corresponding to the permutation of G which sends each $\sigma$ to $\sigma^{-1}$. (We denote by $A(\alpha)$ the matrix obtained by evaluating $A(X)$ at $X = \alpha$.) Therefore, $D(\alpha) = \det A(\alpha) = \pm 1$. We see that D is a non-zero polynomial, and therefore it has only a finite number of roots. Since we assumed F is infinite, we can find $a \in F$ such that $D(a) \neq 0$. Define $\beta = g(a)$ and $\beta_i = \sigma_i(\beta) = g_i(a)$. We claim that $\{\beta_1, \ldots, \beta_n\}$ is a normal basis. We only have to show that $\beta_1, \ldots, \beta_n$ are linearly independent over F, so suppose $\sum_{i=1}^{n} x_i \beta_i = 0$ for some $x_1, \ldots, x_n \in F$. Applying the automorphism $\sigma_j$ yields $\sum_{i=1}^{n} x_i\, \sigma_j(g_i(a)) = 0$ for all j. In other words, $A(a) \cdot \bar{x} = 0$. Since $\det A(a) = D(a) \neq 0$, we conclude that $\bar{x} = 0$, which completes the proof. It is tempting to take $a = \alpha$ because $D(\alpha) \neq 0$. But this is impermissible because we used the fact that $a \in F$ to conclude that for any F-automorphism $\sigma$ and polynomial $h(X)$ over K the value of the polynomial $\sigma(h)(X)$ at a equals $\sigma(h(a))$. Primitive normal basis A primitive normal basis of an extension of finite fields $E/F$ is a normal basis for $E/F$ that is generated by a primitive element of E, that is a generator of the multiplicative group $E^\times$. 
(Note that this is a more restrictive definition of primitive element than that mentioned above after the general normal basis theorem: one requires powers of the element to produce every non-zero element of E, not merely a basis.) Lenstra and Schoof (1987) proved that every extension of finite fields possesses a primitive normal basis, the case when F is a prime field having been settled by Harold Davenport. Free elements If $K/F$ is a Galois extension and x in K generates a normal basis over F, then x is free in $K/F$. If x has the property that for every subgroup H of the Galois group G, with fixed field $K^H$, x is free for $K/K^H$, then x is said to be completely free in $K/F$. Every Galois extension has a completely free element. See also Dual basis in a field extension Polynomial basis Zech's logarithm References Linear algebra Field (mathematics) Abstract algebra Cryptography
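To make the squaring-by-shift observation in the cryptography section concrete, here is a minimal, self-contained sketch (an illustration under stated assumptions, not code from the article or any reference implementation). It models $GF(8)$ as $GF(2)[x]/(x^3 + x + 1)$ with elements stored as 3-bit integers, takes $\beta = x + 1$ (a root of $t^3 + t^2 + 1$, hence a generator of the unique normal basis computed in the example above), and checks that squaring is exactly a cyclic shift of normal-basis coordinates.

```python
# Minimal sketch: squaring as a cyclic shift in a normal-basis representation
# of GF(8) over GF(2). Assumption (not fixed by the text): GF(8) is modeled as
# GF(2)[x]/(x^3 + x + 1), and beta = x + 1 generates the normal basis.

MOD = 0b1011  # the reduction polynomial x^3 + x + 1

def gf8_mul(a: int, b: int) -> int:
    """Carry-less multiplication in GF(8), reduced modulo x^3 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:  # degree reached 3: subtract (xor) the modulus
            a ^= MOD
    return r

BETA = 0b011                 # beta = x + 1, a root of t^3 + t^2 + 1
B2 = gf8_mul(BETA, BETA)     # beta^2
B4 = gf8_mul(B2, B2)         # beta^4
# The normal basis orbit {beta, beta^2, beta^4} under Frobenius.

def to_coords(e: int) -> tuple:
    """Coordinates (a0, a1, a2) with e = a0*beta + a1*beta^2 + a2*beta^4."""
    for a0 in (0, 1):
        for a1 in (0, 1):
            for a2 in (0, 1):
                if (a0 * BETA) ^ (a1 * B2) ^ (a2 * B4) == e:
                    return (a0, a1, a2)
    raise ValueError("element not representable: chosen basis is not a basis")

# Squaring sends beta -> beta^2 -> beta^4 -> beta^8 = beta, so the coordinate
# vector rotates. Listed low-to-high as (a0, a1, a2) this is a right rotation;
# written high-to-low as the bit-string (a2, a1, a0), it is exactly the left
# circular shift described in the text.
for e in range(8):
    a0, a1, a2 = to_coords(e)
    assert to_coords(gf8_mul(e, e)) == (a2, a0, a1)
print("verified: squaring in GF(8) is a cyclic shift of normal-basis coordinates")
```

Because the shift is only a re-wiring of bits, squaring in a normal-basis hardware representation costs essentially nothing, which is the efficiency advantage the text attributes to cryptosystems that square frequently.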
Normal basis
[ "Mathematics", "Engineering" ]
1,705
[ "Cybersecurity engineering", "Cryptography", "Applied mathematics", "Linear algebra", "Abstract algebra", "Algebra" ]
1,028,624
https://en.wikipedia.org/wiki/Chronograph
A chronograph is a specific type of watch that is used as a stopwatch combined with a display watch. A basic chronograph has hour and minute hands on the main dial to tell the time, a small seconds hand to show that the watch is running, and a seconds hand on the main dial usually equipped with a sweeping movement for precision, accompanied by a minutes sub-dial for the stopwatch. Another sub-dial to measure the hours of the stopwatch may also be included on a chronograph. The stopwatch can be started, stopped, and reset to zero at any time by the user by operating pushers usually placed adjacent to the crown. More complex chronographs often use additional complications and can have multiple sub-dials to measure more aspects of the stopwatch such as fractions of a second, as well as other helpful things such as the moon phase and the local 24-hour time. In addition, many modern chronographs include tachymeters on the bezels for rapid calculations of speed or distance. Louis Moinet invented the chronograph in 1816 for use in tracking astronomical objects. Chronographs soon found widespread use in artillery fire in the mid to late 1800s. Over time, the chronograph found its use to be in several different fields, such as aircraft piloting, auto racing, diving and submarine maneuvering. Since the 1980s, the term chronograph has also been applied to all digital watches that incorporate a stopwatch function. History The term chronograph comes from the Greek χρονογράφος (khronográphos, 'time recording'), from χρόνος (khrónos, 'time') and γράφω (gráphō, 'to write'). Early versions of the chronograph are the only ones that actually used any "writing": marking the dial with a small pen attached to the index so that the length of the pen mark would indicate how much time had elapsed. The first modern chronograph was invented by Louis Moinet in 1816, solely for working with astronomical equipment. It was Nicolas Mathieu Rieussec who developed the first marketed chronograph at the behest of King Louis XVIII in 1821. The King greatly enjoyed watching horse races, but wanted to know exactly how long each race lasted, so Rieussec was commissioned to invent a contraption that would do the job: as a result he developed the first ever commercialized chronograph. Rieussec was considered the inventor of the chronograph until the discovery of the Louis Moinet pocket chronograph in 2013 forced that history to be revised. In addition to inventing the chronograph, Louis Moinet is also regarded as the father of high-frequency timekeeping. In 1816, his Compteur de Tierces timepiece beat at a rhythm of 216,000 vibrations per hour (30 Hz). This frequency record stood for exactly one century, before eventually being broken in 1916, after which standard chronometer frequencies returned to present-day levels (generally 2.5–5 Hz, or 18,000 to 36,000 vibrations per hour). Still in perfect working order, the Compteur de Tierces is preserved at Ateliers Louis Moinet. In 1913, Longines created the 13.33Z, one of the first chronograph movements ever developed for a wristwatch, featuring 18 jewels, a diameter of 29 mm and height of 6 mm, and a beat rate of 18,000 vph. Its crown was used both for winding the watch and as a pusher for the chronograph. In 1915, Gaston Breitling produced the first chronograph with a central seconds hand and a 30-minute counter. Later, in 1923, Gaston Breitling introduced the first chronograph with a separate pusher at 2 o'clock. In 1934 Willy Breitling further developed the concept of the chronograph with the addition of the second pusher at 4 o'clock. 
Since then the two-pusher chronograph design has been adopted by the entire industry. In 1844 Adolphe Nicole's updated version of the chronograph was the first to include a re-setting feature, which allowed successive measurements, unlike the constantly moving needle in the original chronograph. In the early part of the 20th century, many chronographs were sold with fixed bezels marked in order to function as a tachymeter. In 1958 the watch company Heuer introduced a model with a rotating bezel tachymeter for more complex calculations. Chronographs were very popular with aviators as they allowed them to make rapid calculations and conduct precise timing. The demand for chronographs grew along with the aviation industry in the early part of the 20th century. As the US exploration of outer space initially involved only test pilots, by order of President Dwight D. Eisenhower, chronographs were on the wrists of many early astronauts. Chronograph usage followed a similar trajectory for many fields that involve very precise and/or repeated timing around increasingly more complicated high performance machinery, automobile racing and naval submarine navigation being two examples. As different uses for the chronograph were discovered, the industry responded with different models introducing such features as the flyback (where the second hand could be rapidly reset to zero), minute and hour timers, the rattrapante (multiple second hands, one of which can be stopped and started independently) and waterproof models for divers and swimmers. Although self-winding watches and clockwork have been around since the late 1700s, the automatic (self-winding) chronograph was not invented until the late 1960s. In 1969, the watch companies Heuer, Breitling, Hamilton, and movement specialist Dubois Dépraz, developed the first automatic chronograph in partnership. They developed this technology secretly in an effort to prevent other watchmaking houses from releasing an automatic chronograph first, namely their competitors Zenith and Seiko. It was in Geneva and in New York that this partnership shared the first automatic chronograph with the world on March 3, 1969. These first automatic chronographs were labelled "Chrono-matic". Many companies sell their own styles of chronographs. While today most chronographs are in the form of wristwatches, in the early 20th century pocket chronographs were very popular. Uses The term chronograph is often confused with the term chronometer. Where "chronograph" refers to the function of a watch, chronometer is a measure of how well a given mechanical timepiece performs: in order to be labeled a chronometer the timepiece must be certified by the COSC, the official Swiss chronometer testing institute, after undergoing a series of rigorous tests for robustness, accuracy and precision under adverse conditions (though these requirements fall far short of the accuracy achieved by even the cheapest modern quartz watch). A simple mechanical watch, without the stopwatch functionality, can be certified a chronometer, as can a clock, for example a ship's clock, used for navigation. The terms are not mutually exclusive either; for instance the Omega Seamaster 300M Chronograph GMT Co-Axial is also a COSC-certified chronometer. Originally the term chronograph was mainly used in connection with artillery and the velocity of missiles. 
The chronograph's main function is to allow a comparison of an observation against a time base and, before the electronic stopwatch was invented, to provide a permanent recording of the observer's findings. For example, one of the first applications of the chronograph was to record the time elapsed during horse races. Some more important uses of the chronograph include the Langley Chronograph, which is used by the US Navy to record, calculate, and analyse data given off by aeroplane launching catapults. Another famous usage of the chronograph was during NASA's Apollo missions to the moon, when each astronaut was equipped with a fully functioning chronograph, the Omega Speedmaster; in one instance, a Bulova chronograph was used. Chronographs are routinely used to record heartbeats in hospitals, calculate speed and/or distance on athletic fields, or even as simple timers in kitchens. Function Chronographs can be extremely complicated devices, but they all have the basic function of telling time, as they are watches, and of displaying elapsed time. Rieussec's chronograph was fairly simple. It was composed of two faces, a top and bottom face. The bottom face held a pool of ink, while the upper had a pen-like needle attached to it. When activated, the upper face pushed down on the lower face, while revolving around a central axis, which pulled the needle. This dragged the ink, in a circular fashion, recording the time elapsed by the line of ink that the motion created. There was room left for improvement, because Rieussec's chronograph could not easily be reset for repeated measurements. This paved the way for the hundreds of patents that have been handed out to people for updating and upgrading this device. Automatic, non-digital chronographs do not require a battery, because the motion of the wearer's arm or wrist supplies the kinetic energy that powers the device. Throughout the day, while the wearer of the watch is walking, the swinging motion of his arm forces a semicircular rotor to turn on a pivot within the watch. The rotor is attached to a ratchet that winds the mainspring in the watch, so that it is ready for use at all times. The modern day chronograph works by pushing a start button, normally located at the two o'clock position, to begin recording time, and by pushing the same button to stop the recording. When the button is pushed to start the recording, a series of three train wheels (more, in more complicated and more precise chronographs) starts turning. The smallest has a revolution time of one second, the next sixty seconds, and the final one has a revolution time of sixty minutes. The three train wheels interact with one another and record how long it has been since the start button was activated. In addition to the start button, it also features a reset button normally located at the four o'clock position. When the reset button is pushed the chronograph hand will reset back to zero. Tachymeter bezels are a complication that allows rapid calculations of speed or distance. Rotating bezels allow for more complex calculations or repeated calculations without requiring a reset of the timer. Types The original chronographs that Rieussec invented were called tape chronographs. They consisted of a tape that was constantly being dragged along at a controlled speed. When activated, a pen would be pushed onto the tape and begin recording until deactivated. Specialized chronographs are used by deep sea and scuba divers. 
While basic functionality is the same as other chronographs, diving models have longer and more practical straps to wear over equipment, are made to be waterproof to deeper depths, have more rounded corners to prevent catching, and luminous dials for reading in the murky depths. Metered bezels: Many chronographs have a bezel around the outside of the dial, either fixed or rotating, that is marked with specific scales to allow rapid calculations. While any wristwatch can have a bezel, the chronograph's stop/start feature, as well as the rotation of the bezel, allows more complex calculations or repeated measurements for a series of calculations. The most popular meter is for tachymeter readings: a simple scale that allows rapid calculations of speed. Other bezels feature a telemeter scale, for distance. The watchmaking company Breitling offers a model with a rotating bezel, in conjunction with another, fixed, meter on the dial, scaled for use as a slide rule for more complex calculations. Flyback chronographs have a timing hand that can be rapidly reset, or flyback, to zero. Ordinarily the sweep second hand is stopped to record the time and started again at that spot on the dial, or reset by spinning the second hand all the way to zero again, clockwise. The flyback allows a reading and a quick reset—a counterclockwise flyback—for the next measurement to start at zero. A rattrapante, sometimes called a double chronograph, has multiple second hands, at least one of which can be stopped and started independently. When not activated, the second hands travel together, one under the other, to appear as just one second hand. A tourbillon, although not strictly limited to chronographs, is an escapement and balance set in a rotating cage in order to minimize the effects of gravity on the escapement and increase precision. Because chronograph escapements are generally larger and connect with more complications, a tourbillon in a chronograph will differ from a tourbillon in a simpler timepiece. Other types of modern-day chronographs are the automatic chronograph and the digital chronograph. The automatic chronograph depends solely on kinetic energy as its power source, while the digital chronograph is much like the common stopwatch and uses a battery to gain power, as well as quartz for timing. Other, more specific, types of chronographs include split second chronographs, tide chronographs, and asthmometer chronographs. Each of these chronographs has an added feature that sets them apart. Telemeter The telemeter chronograph allows the user to approximately measure the distance to an event that can be both seen and heard (e.g. a lightning bolt or a torpedo strike) using the speed of sound. The user starts the chronograph (stopwatch) at the instant the event is seen, and stops timing at the instant the event is heard. The seconds hand will point to the distance measured on a scale, usually around the edge of the face. The scale can be defined in any unit of distance, but miles or kilometers are most practical and commonplace. See also Gun chronograph Marine chronometer Poljot Strela References External links Chronograph watches Keulen, Robert. 1996. Accessed 25 MAR 2012 Patek Philippe Chronograph Comparison Gray & Sons NOV 2014 A technical perspective, the chronograph, Monochrome-watches, Xavier Markl, February 2016 TAG Heuer's 01 chronograph watch movement explained with videos WatchTime June 2016 Cronosurf - The online interactive chronograph Measuring instruments Watches Timers
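As a worked illustration of the telemeter arithmetic above (generic numbers, not tied to any particular watch model): the scale simply encodes distance = speed of sound × elapsed time. Taking the speed of sound in air as roughly 343 m/s, an observer who starts the chronograph on seeing a lightning flash and stops it 3 seconds later, when the thunder arrives, obtains d ≈ 343 m/s × 3 s ≈ 1,029 m. This is why a kilometer-graduated telemeter scale places its "1 km" mark at approximately the 3-second position on the dial.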
Chronograph
[ "Technology", "Engineering" ]
3,103
[ "Measuring instruments" ]
1,028,755
https://en.wikipedia.org/wiki/Fuzzy%20associative%20matrix
A fuzzy associative matrix expresses fuzzy logic rules in tabular form. These rules usually take two variables as input, mapping cleanly to a two-dimensional matrix, although theoretically a matrix of any number of dimensions is possible. From the perspective of neuro-fuzzy systems, the mathematical matrix is called a "fuzzy associative memory" because it stores the weights of the perceptron. Applications In the context of game AI programming, a fuzzy associative matrix helps to develop the rules for non-player characters. Suppose a professional is tasked with writing fuzzy logic rules for a video game monster. In the game being built, entities have two variables: hit points (HP) and firepower (FP). The matrix maps fuzzy sets over these two variables to actions; its entries translate to rules such as:
IF MonsterHP IS VeryLowHP AND MonsterFP IS VeryWeakFP THEN Retreat
IF MonsterHP IS LowHP AND MonsterFP IS VeryWeakFP THEN Retreat
IF MonsterHP IS MediumHP AND MonsterFP IS VeryWeakFP THEN Defend
Multiple rules can fire at once, and often will, because the distinction between "very low" and "low" is fuzzy. If it is more "very low" than it is "low", then the "very low" rule will generate a stronger response. The program will evaluate all the rules that fire and use an appropriate defuzzification method to generate its actual response. An implementation of this system might use either the matrix or the explicit IF/THEN form. The matrix makes it easy to visualize the system, but it also makes it impossible to add a third variable just for one rule, so it is less flexible. Identify a rule set There is no inherent pattern in the matrix. It appears as if the rules were just made up, and indeed they were. This is both a strength and a weakness of fuzzy logic in general. It is often impractical or impossible to find an exact set of rules or formulae for dealing with a specific situation. For a sufficiently complex game, a mathematician would not be able to study the system and figure out a mathematically accurate set of rules. However, this weakness is intrinsic to the realities of the situation, not of fuzzy logic itself. The strength of the system is that even if one of the rules is wrong, even greatly wrong, other rules that are correct are likely to fire as well and they may compensate for the error. This does not mean a fuzzy system should be sloppy. Depending on the system, it might get away with being sloppy, but it will underperform. While the rules are fairly arbitrary, they should be chosen carefully. If possible, an expert should decide on the rules, and the sets and rules should be tested rigorously and refined as needed. In this way, a fuzzy system is like an expert system. (Fuzzy logic is used in many true expert systems, as well.) References Fuzzy logic Matrices
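Since the article describes rule firing and defuzzification only in prose, here is a minimal sketch in Python of how a fuzzy associative matrix like the monster example can be evaluated. Everything concrete in it is an illustrative assumption: the membership breakpoints, the extra "WeakFP" column, the "Attack" action, and the max-of-min aggregation used as a simple stand-in for defuzzification are not from the original article.

```python
# Minimal sketch of evaluating a fuzzy associative matrix (FAM) for the
# monster example. Breakpoints, the WeakFP column, and the Attack action
# are hypothetical filler; only the VeryWeakFP column mirrors the text.

def tri(x, lo, mid, hi):
    """Triangular membership function supported on [lo, hi], peaking at mid."""
    if x < lo or x > hi:
        return 0.0
    if x < mid:
        return (x - lo) / (mid - lo)
    return (hi - x) / (hi - mid) if hi > mid else 1.0

# Fuzzy sets over HP and FP, both normalized to [0, 1].
HP_SETS = {"VeryLowHP": (0.0, 0.0, 0.25),
           "LowHP":     (0.0, 0.25, 0.5),
           "MediumHP":  (0.25, 0.5, 0.75)}
FP_SETS = {"VeryWeakFP": (0.0, 0.0, 0.3),
           "WeakFP":     (0.0, 0.3, 0.6)}

# The FAM itself: (HP set, FP set) -> action.
FAM = {("VeryLowHP", "VeryWeakFP"): "Retreat",
       ("LowHP",     "VeryWeakFP"): "Retreat",
       ("MediumHP",  "VeryWeakFP"): "Defend",
       ("VeryLowHP", "WeakFP"):     "Retreat",
       ("LowHP",     "WeakFP"):     "Defend",
       ("MediumHP",  "WeakFP"):     "Attack"}

def decide(hp, fp):
    """Fire every rule with strength min(grades); the strongest action wins."""
    hp_m = {name: tri(hp, *abc) for name, abc in HP_SETS.items()}
    fp_m = {name: tri(fp, *abc) for name, abc in FP_SETS.items()}
    strength = {}
    for (h, f), action in FAM.items():
        s = min(hp_m[h], fp_m[f])                        # rule firing strength
        strength[action] = max(strength.get(action, 0.0), s)
    return max(strength, key=strength.get)

# HP 0.2 is mostly "low" but partly "very low"; both Retreat rules fire, the
# stronger one dominates, and the monster retreats -- the overlapping-rule
# behavior described in the text.
print(decide(hp=0.2, fp=0.1))  # -> Retreat
```

The max-of-min pattern here is the standard Mamdani-style inference core; a production system would typically replace the final argmax with a proper defuzzifier such as a centroid method.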
Fuzzy associative matrix
[ "Mathematics" ]
586
[ "Matrices (mathematics)", "Mathematical objects" ]
1,028,836
https://en.wikipedia.org/wiki/Root%20nodule
Root nodules are found on the roots of plants, primarily legumes, that form a symbiosis with nitrogen-fixing bacteria. Under nitrogen-limiting conditions, capable plants form a symbiotic relationship with a host-specific strain of bacteria known as rhizobia. This process has evolved multiple times within the legumes, as well as in other species found within the Rosid clade. Legume crops include beans, peas, and soybeans.

Within legume root nodules, nitrogen gas (N2) from the atmosphere is converted into ammonia (NH3), which is then assimilated into amino acids (the building blocks of proteins), nucleotides (the building blocks of DNA and RNA as well as the important energy molecule ATP), and other cellular constituents such as vitamins, flavones, and hormones. Their ability to fix gaseous nitrogen makes legumes ideal agricultural organisms, as their requirement for nitrogen fertilizer is reduced. Indeed, a high nitrogen content blocks nodule development, as there is no benefit to the plant in forming the symbiosis. The energy for splitting the nitrogen gas in the nodule comes from sugar that is translocated from the leaf (a product of photosynthesis). Malate, as a breakdown product of sucrose, is the direct carbon source for the bacteroid. Nitrogen fixation in the nodule is very oxygen sensitive. Legume nodules harbor an iron-containing protein called leghaemoglobin, closely related to animal myoglobin, to facilitate the diffusion of oxygen gas used in respiration.

Symbiosis

Leguminous family
Plants that contribute to N2 fixation include the legume family – Fabaceae – with taxa such as kudzu, clovers, soybeans, alfalfa, lupines, peanuts, and rooibos. They contain symbiotic bacteria called rhizobia within the nodules, producing nitrogen compounds that help the plant to grow and compete with other plants. When the plant dies, the fixed nitrogen is released, making it available to other plants, and this helps to fertilize the soil. The great majority of legumes have this association, but a few genera (e.g., Styphnolobium) do not. In many traditional farming practices, fields are rotated through various types of crops, which usually include one consisting mainly or entirely of a leguminous crop such as clover, in order to take advantage of this.

Non-leguminous
Although by far the majority of plants able to form nitrogen-fixing root nodules are in the legume family Fabaceae, there are a few exceptions:

Actinorhizal plants such as alder and bayberry can form (less complex) nitrogen-fixing nodules, thanks to a symbiotic association with Frankia bacteria. These plants belong to 25 genera distributed among 8 plant families. According to a count in 1998, this group includes about 200 species and accounts for roughly the same amount of nitrogen fixation as rhizobial symbioses. An important structural difference is that in these symbioses the bacteria are never released from the infection thread.

Parasponia, a tropical genus in the Cannabaceae, is also able to interact with rhizobia and form nitrogen-fixing nodules. As related plants are actinorhizal, it is believed that the plant "switched partner" in its evolution.

The ability to fix nitrogen is far from universally present in these families. For instance, of 122 genera in the Rosaceae, only 4 genera are capable of fixing nitrogen. All these families belong to the orders Cucurbitales, Fagales, and Rosales, which together with the Fabales form a nitrogen-fixing clade (NFC) of eurosids.
In this clade, Fabales were the first lineage to branch off; thus, the ability to fix nitrogen may be plesiomorphic and subsequently lost in most descendants of the original nitrogen-fixing plant. However, it may be that the basic genetic and physiological requirements were present in an incipient state in the last common ancestor of all these plants, but only evolved to full function in some of them.

Classification
Two main types of nodule have been described in legumes: determinate and indeterminate.

Determinate nodules are found on certain tribes of tropical legumes, such as those of the genera Glycine (soybean), Phaseolus (common bean), and Vigna, and on some temperate legumes such as Lotus. These determinate nodules lose meristematic activity shortly after initiation, thus growth is due to cell expansion, resulting in mature nodules which are spherical in shape. Another type of determinate nodule is found in a wide range of herbs, shrubs and trees, such as Arachis (peanut). These are always associated with the axils of lateral or adventitious roots and are formed following infection via cracks where these roots emerge, not via root hairs. Their internal structure is quite different from that of the soybean type of nodule.

Indeterminate nodules are found in the majority of legumes from all three sub-families, whether in temperate regions or in the tropics. They can be seen in Faboideae legumes such as Pisum (pea), Medicago (alfalfa), Trifolium (clover), and Vicia (vetch), all mimosoid legumes such as acacias, and the few nodulated caesalpinioid legumes such as partridge pea. They earned the name "indeterminate" because they maintain an active apical meristem that produces new cells for growth over the life of the nodule. This results in the nodule having a generally cylindrical shape, which may be extensively branched. Because they are actively growing, indeterminate nodules manifest zones which demarcate different stages of development/symbiosis:

Zone I—the active meristem. This is where new nodule tissue is formed which will later differentiate into the other zones of the nodule.
Zone II—the infection zone. This zone is permeated with infection threads full of bacteria. The plant cells are larger than in the previous zone and cell division is halted.
Interzone II–III—Here the bacteria have entered the plant cells, which contain amyloplasts. They elongate and begin terminally differentiating into symbiotic, nitrogen-fixing bacteroids.
Zone III—the nitrogen fixation zone. Each cell in this zone contains a large, central vacuole and the cytoplasm is filled with fully differentiated bacteroids which are actively fixing nitrogen. The plant provides these cells with leghemoglobin, resulting in a distinct pink color.
Zone IV—the senescent zone. Here plant cells and their bacteroid contents are being degraded. The breakdown of the heme component of leghemoglobin results in a visible greening at the base of the nodule.

This is the most widely studied type of nodule, but the details are quite different in nodules of peanut and relatives and in some other important crops such as lupins, where the nodule is formed following direct infection of rhizobia through the epidermis and where infection threads are never formed. Nodules grow around the root, forming a collar-like structure. In these nodules and in the peanut type, the central infected tissue is uniform, lacking the uninfected cells seen in nodules of soybean and many indeterminate types such as peas and clovers.
Actinorhizal-type nodules are markedly different structures found in non-legumes. In this type, cells derived from the root cortex form the infected tissue, and the prenodule becomes part of the mature nodule. Despite this seemingly major difference, it is possible to produce such nodules in legumes by a single homeotic mutation.

Nodulation
Legumes release organic compounds as secondary metabolites called flavonoids from their roots, which attract the rhizobia to them and which also activate nod genes in the bacteria to produce nod factors and initiate nodule formation. These nod factors initiate root hair curling. The curling begins with the very tip of the root hair curling around the Rhizobium. Within the root tip, a small tube called the infection thread forms, which provides a pathway for the Rhizobium to travel into the root epidermal cells as the root hair continues to curl.

Partial curling can even be achieved by nod factor alone. This was demonstrated by the isolation of nod factors and their application to parts of the root hair. The root hairs curled in the direction of the application, demonstrating the action of a root hair attempting to curl around a bacterium. Even application on lateral roots caused curling. This demonstrated that it is the nod factor itself, not the bacterium, that causes the stimulation of the curling.

When the nod factor is sensed by the root, a number of biochemical and morphological changes happen: cell division is triggered in the root to create the nodule, and the root hair growth is redirected to curl around the bacteria multiple times until it fully encapsulates one or more bacteria. The encapsulated bacteria divide multiple times, forming a microcolony. From this microcolony, the bacteria enter the developing nodule through the infection thread, which grows through the root hair into the basal part of the epidermal cell, and onwards into the root cortex; they are then surrounded by a plant-derived symbiosome membrane and differentiate into bacteroids that fix nitrogen.

Effective nodulation takes place approximately four weeks after crop planting, with the size and shape of the nodules dependent on the crop. Crops such as soybeans or peanuts will have larger nodules than forage legumes such as red clover or alfalfa, since their nitrogen needs are higher. The number of nodules, and their internal color, will indicate the status of nitrogen fixation in the plant.

Nodulation is controlled by a variety of processes, both external (heat, acidic soils, drought, nitrate) and internal (autoregulation of nodulation, ethylene). Autoregulation of nodulation controls nodule numbers per plant through a systemic process involving the leaf. Leaf tissue senses the early nodulation events in the root through an unknown chemical signal, then restricts further nodule development in newly developing root tissue. The leucine-rich repeat (LRR) receptor kinases (NARK in soybean (Glycine max); HAR1 in Lotus japonicus, SUNN in Medicago truncatula) are essential for autoregulation of nodulation (AON). Mutation leading to loss of function in these AON receptor kinases leads to supernodulation or hypernodulation. Often, root growth abnormalities accompany the loss of AON receptor kinase activity, suggesting that nodule growth and root development are functionally linked. Investigations into the mechanisms of nodule formation showed that the ENOD40 gene, coding for a 12–13 amino acid protein, is up-regulated during nodule formation.
Connection to root structure
Root nodules apparently have evolved three times within the Fabaceae but are rare outside that family. The propensity of these plants to develop root nodules seems to relate to their root structure. In particular, a tendency to develop lateral roots in response to abscisic acid may enable the later evolution of root nodules.

Nodule-like structures
Some fungi produce nodular structures known as tuberculate ectomycorrhizae on the roots of their plant hosts. Suillus tomentosus, for example, produces these structures with its plant host lodgepole pine (Pinus contorta var. latifolia). These structures have, in turn, been shown to host nitrogen-fixing bacteria, which contribute a significant amount of nitrogen and allow the pines to colonize nutrient-poor sites.

Gallery

See also
Root gall nematode
Rhizobium
Sinorhizobium
Bradyrhizobium
Neorhizobium
Pararhizobium
Common Symbiotic Signaling Pathway

References

External links
Legume root nodules at the Tree of Life Web project
Video and commentary on root nodules of White Clover

Plant organogenesis Fabaceae Nitrogen cycle Plant roots Symbiosis Oligotrophs
Root nodule
[ "Chemistry", "Biology" ]
2,590
[ "Behavior", "Symbiosis", "Biological interactions", "Nitrogen cycle", "Metabolism" ]
1,028,841
https://en.wikipedia.org/wiki/Simplex%20category
In mathematics, the simplex category (or simplicial category or nonempty finite ordinal category) is the category of non-empty finite ordinals and order-preserving maps. It is used to define simplicial and cosimplicial objects.

Formal definition
The simplex category is usually denoted by Δ. There are several equivalent descriptions of this category. Δ can be described as the category of non-empty finite ordinals as objects, thought of as totally ordered sets, and (non-strictly) order-preserving functions as morphisms. The objects are commonly denoted [n] = {0, 1, ..., n} (so that [n] is the ordinal n + 1). The category is generated by coface and codegeneracy maps, which amount to inserting or deleting elements of the orderings. (See simplicial set for relations of these maps.)

A simplicial object is a presheaf on Δ, that is, a contravariant functor from Δ to another category. For instance, simplicial sets are contravariant with the codomain category being the category of sets. A cosimplicial object is defined similarly as a covariant functor originating from Δ.

Augmented simplex category
The augmented simplex category, denoted by Δ₊, is the category of all finite ordinals and order-preserving maps, thus Δ₊ = Δ ∪ {[−1]}, where [−1] denotes the empty ordinal. Accordingly, this category might also be denoted FinOrd. The augmented simplex category is occasionally referred to as the algebraists' simplex category, and the above version is called the topologists' simplex category.

A contravariant functor defined on Δ₊ is called an augmented simplicial object and a covariant functor out of Δ₊ is called an augmented cosimplicial object; when the codomain category is the category of sets, for example, these are called augmented simplicial sets and augmented cosimplicial sets respectively.

The augmented simplex category, unlike the simplex category, admits a natural monoidal structure. The monoidal product is given by concatenation of linear orders, and the unit is the empty ordinal [−1] (the lack of this unit prevents concatenation from qualifying as a monoidal structure on Δ). In fact, Δ₊ is the monoidal category freely generated by a single monoid object, given by [0] with the unique possible unit and multiplication. This description is useful for understanding how any comonoid object in a monoidal category gives rise to a simplicial object, since it can then be viewed as the image of a functor from the opposite category of Δ₊ to the monoidal category containing the comonoid; by forgetting the augmentation we obtain a simplicial object. Similarly, this also illuminates the construction of simplicial objects from monads (and hence adjoint functors), since monads can be viewed as monoid objects in endofunctor categories.

See also
Simplicial category
PROP (category theory)
Abstract simplicial complex

References

External links
What's special about the Simplex category?

Algebraic topology Homotopy theory Categories in category theory Free algebraic structures
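The coface and codegeneracy maps mentioned above satisfy a fixed set of relations. The block below records them in the usual notation; these are the standard cosimplicial identities as found in textbook treatments of simplicial sets, reproduced here for reference rather than taken from this article.

```latex
% Coface maps d^i : [n-1] -> [n] (skip i) and codegeneracy maps
% s^j : [n+1] -> [n] (repeat j) generate the simplex category,
% subject to the cosimplicial identities:
\begin{aligned}
  d^j d^i &= d^i d^{j-1} && \text{if } i < j,\\
  s^j s^i &= s^i s^{j+1} && \text{if } i \le j,\\
  s^j d^i &=
    \begin{cases}
      d^i s^{j-1} & \text{if } i < j,\\
      \mathrm{id} & \text{if } i = j \text{ or } i = j+1,\\
      d^{i-1} s^j & \text{if } i > j+1.
    \end{cases}
\end{aligned}
```

Applying a contravariant functor reverses these relations, yielding the face and degeneracy identities of a simplicial object.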
Simplex category
[ "Mathematics" ]
637
[ "Mathematical structures", "Algebraic topology", "Basic concepts in set theory", "Families of sets", "Category theory", "Algebraic structures", "Simplicial sets", "Categories in category theory", "Topology", "Fields of abstract algebra", "Free algebraic structures" ]
1,028,926
https://en.wikipedia.org/wiki/Architectural%20acoustics
Architectural acoustics (also known as building acoustics) is the science and engineering of achieving a good sound within a building and is a branch of acoustical engineering. The first application of modern scientific methods to architectural acoustics was carried out by the American physicist Wallace Sabine in the Fogg Museum lecture room. He applied his newfound knowledge to the design of Symphony Hall, Boston. Architectural acoustics can be about achieving good speech intelligibility in a theatre, restaurant or railway station, enhancing the quality of music in a concert hall or recording studio, or suppressing noise to make offices and homes more productive and pleasant places to work and live in. Architectural acoustic design is usually done by acoustic consultants.

Building skin envelope
This science analyzes noise transmission from the building's exterior envelope to the interior and vice versa. The main noise paths are roofs, eaves, walls, windows, doors and penetrations. Sufficient control ensures space functionality and is often required based on building use and local municipal codes. An example would be providing a suitable design for a home which is to be constructed close to a high-volume roadway, or under the flight path of a major airport, or of the airport itself.

Inter-space noise control
The science of limiting and/or controlling noise transmission from one building space to another to ensure space functionality and speech privacy. The typical sound paths are ceilings, room partitions, acoustic ceiling panels (such as wood dropped ceiling panels), doors, windows, flanking, ducting and other penetrations. Technical solutions depend on the source of the noise and the path of acoustic transmission, for example noise by steps or noise by (air, water) flow vibrations. An example would be providing suitable party wall design in an apartment complex to minimize the mutual disturbance due to noise by residents in adjacent apartments.

Inter-space noise control can take a different form in the acoustics of European football stadiums. One goal in stadium acoustics is to make the crowd as loud as possible; here inter-space noise control becomes a factor in helping to reflect noise so as to create more reverberation and a louder decibel level throughout the stadium. Many outdoor soccer stadiums, for example, have roofs over the fan sections which create more reverberation and echoing, which helps raise the general volume in the stadium.

Interior space acoustics
This is the science of controlling a room's surfaces based on sound absorbing and reflecting properties. Excessive reverberation time, which can be calculated, can lead to poor speech intelligibility. Sound reflections create standing waves that produce natural resonances that can be heard as a pleasant sensation or an annoying one. Reflective surfaces can be angled and coordinated to provide good coverage of sound for a listener in a concert hall or music recital space. To illustrate this concept, consider the difference between a modern large office meeting room or lecture theater and a traditional classroom with all hard surfaces.

Interior building surfaces can be constructed of many different materials and finishes. Ideal acoustical panels are those without a face or finish material that interferes with the acoustical infill or substrate. Fabric-covered panels are one way to heighten acoustical absorption. Perforated metal also shows sound absorbing qualities. Finish material is used to cover over the acoustical substrate.
Mineral fiber board, or Micore, is a commonly used acoustical substrate. Finish materials often consist of fabric, wood or acoustical tile. Fabric can be wrapped around substrates to create what is referred to as a "pre-fabricated panel" and often provides good noise absorption if laid onto a wall. Prefabricated panels are limited to the size of the substrate, ranging from to . Fabric retained in a wall-mounted perimeter track system is referred to as "on-site acoustical wall panels". This is constructed by framing the perimeter track into shape, infilling the acoustical substrate and then stretching and tucking the fabric into the perimeter frame system. On-site wall panels can be constructed to accommodate door frames, baseboard, or any other intrusion. Large panels (generally, greater than ) can be created on walls and ceilings with this method. Wood finishes can consist of punched or routed slots and provide a natural look to the interior space, although acoustical absorption may not be great.

There are four ways to improve workplace acoustics and solve workplace sound problems – the ABCDs:

A = Absorb (via drapes, carpets, ceiling tiles, etc.)
B = Block (via panels, walls, floors, ceilings and layout)
C = Cover-up, or Control (background sound levels and spectra) (via masking sound)
D = Diffuse (cause the sound energy to spread by radiating in many directions)

Mechanical equipment noise
Building services noise control is the science of controlling noise produced by:

HVAC (heating, ventilation, air conditioning) systems
Elevators
Electrical generators positioned within or attached to a building
Any other building service infrastructure component that emits sound

Inadequate control may lead to elevated sound levels within the space, which can be annoying and reduce speech intelligibility. Typical improvements are vibration isolation of mechanical equipment and sound attenuators in ductwork. Sound masking can also be created by adjusting HVAC noise to a predetermined level.

See also
Noise health effects
Noise mitigation
Noise Reduction Coefficient
Noise regulation
Noise, vibration, and harshness
Sound transmission class

References

Further reading
Thompson, Emily (2002). The Soundscape of Modernity: Architectural Acoustics and the Culture of Listening in America, 1900–1933. Cambridge, Mass.: MIT Press.

Acoustics Building engineering Acoustic problems Sound
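Since the text notes that reverberation time "can be calculated", a short sketch of the classic calculation may help. It uses Sabine's metric equation RT60 = 0.161 · V / A, where V is the room volume in cubic metres and A is the total absorption (the sum of each surface area times its absorption coefficient); the room dimensions and coefficients below are invented for illustration, not measured values.

```python
# Reverberation time via Sabine's equation, RT60 = 0.161 * V / A (metric),
# named for Wallace Sabine. The example room and absorption coefficients
# are assumptions for illustration only.

SABINE_CONSTANT = 0.161  # s/m, metric form of Sabine's constant

def rt60(volume_m3: float, surfaces) -> float:
    """surfaces: iterable of (area_m2, absorption_coefficient) pairs."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return SABINE_CONSTANT * volume_m3 / total_absorption

# A hypothetical 10 m x 8 m x 3 m classroom (volume 240 m^3):
room = [
    (80.0, 0.02),   # hard concrete floor absorbs little
    (80.0, 0.70),   # absorptive acoustical ceiling tile
    (108.0, 0.03),  # painted walls, 2 * (10 + 8) * 3 m^2
]
print(f"RT60 ~ {rt60(240.0, room):.2f} s")  # about 0.64 s here
```

Swapping the absorptive ceiling for a hard one drops A sharply and lengthens RT60, which is exactly the hard-surfaced-classroom problem the text describes.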
Architectural acoustics
[ "Physics", "Materials_science", "Engineering" ]
1,120
[ "Building engineering", "Classical mechanics", "Acoustics", "Civil engineering", "Building defects", "Mechanical failure", "Architecture" ]
1,028,978
https://en.wikipedia.org/wiki/Combs%20method
The Combs method is a rule base reduction method of writing fuzzy logic rules described by William E. Combs in 1997. It is designed to prevent combinatorial explosion in fuzzy logic rules. The Combs method takes advantage of the logical equality ((p AND q) IMPLIES r) = ((p IMPLIES r) OR (q IMPLIES r)).

Equality proof
The simplest proof of the given equality uses truth tables: enumerating all eight assignments of p, q and r shows that the two sides always take the same truth value.

Combinatorial explosion
Suppose we have a fuzzy system that considers N variables at a time, each of which can fit into at least one of S sets. The number of rules necessary to cover all the cases in a traditional fuzzy system is S^N, whereas the Combs method would need only S × N rules. For example, if we have five sets and five variables to consider to produce one output, covering all the cases would require 3125 rules in a traditional system, while the Combs method would require only 25 rules, taming the combinatorial explosion that occurs when more inputs or more sets are added to the system. This article will focus on the Combs method itself. To learn more about the way rules are traditionally formed, see fuzzy logic and fuzzy associative matrix.

Example
Suppose we were designing an artificial personality system that determined how friendly the personality is supposed to be towards a person in a strategic video game. The personality would consider its own fear, trust, and love in the other person. A set of rules in the Combs system, written out in IF/THEN form, translates to:

[IF Fear IS Unafraid THEN Friendship IS Enemies OR
IF Fear IS ModerateFear THEN Friendship IS Neutral OR
IF Fear IS Afraid THEN Friendship IS GoodFriends]
OR
[IF Trust IS Distrusting THEN Friendship IS Enemies OR
IF Trust IS ModerateTrust THEN Friendship IS Neutral OR
IF Trust IS Trusting THEN Friendship IS GoodFriends]
OR
[IF Love IS Unloving THEN Friendship IS Enemies OR
IF Love IS ModerateLove THEN Friendship IS Neutral OR
IF Love IS Loving THEN Friendship IS GoodFriends]

In this case, because the rules follow a straightforward pattern in the output, they can be rewritten in a compact tabular form in which each column maps to the output given in the last row. To obtain the output of the system, we just average the outputs of each rule for that output. For example, to calculate how much the computer is Enemies with the player, we take the average of how much the computer is Unafraid, Distrusting, and Unloving of the player. When all three averages are obtained, the result can then be defuzzified by any of the traditional means.

References
The Combs Method for Rapid Inference (the original paper by William E. Combs)
The Combs Method for Rapid Inference (Archive of the original paper by William E. Combs)

Fuzzy logic Logic in computer science Non-classical logic
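Here is a minimal sketch tying the pieces above together: a mechanical truth-table check of the classical equality, the rule-count comparison, and the averaging step from the friendship example. The membership values assigned to fear, trust, and love are invented inputs, and keying each input directly by its output set is a simplification that works here only because every input set maps one-to-one onto an output set.

```python
# Two small checks for the Combs method. The fuzzified input values
# below are assumptions invented for this example.

from itertools import product

def implies(a: bool, b: bool) -> bool:
    return (not a) or b

# Truth-table proof of the equality the Combs method relies on:
# ((p AND q) -> r) is equivalent to ((p -> r) OR (q -> r)).
assert all(
    implies(p and q, r) == (implies(p, r) or implies(q, r))
    for p, q, r in product([False, True], repeat=3)
)

# Rule-count comparison for S sets and N variables.
S, N = 5, 5
print(S ** N, "traditional rules vs", S * N, "Combs rules")  # 3125 vs 25

# Hypothetical fuzzified inputs, keyed by the output set each input
# set feeds (Unafraid/Distrusting/Unloving all feed "Enemies", etc.).
fear  = {"Enemies": 0.7, "Neutral": 0.2, "GoodFriends": 0.1}  # mostly Unafraid
trust = {"Enemies": 0.5, "Neutral": 0.4, "GoodFriends": 0.1}  # fairly Distrusting
love  = {"Enemies": 0.8, "Neutral": 0.1, "GoodFriends": 0.1}  # Unloving

# The system output is the average of the contributing rule strengths
# per output set, exactly as described in the example above.
friendship = {
    out: (fear[out] + trust[out] + love[out]) / 3
    for out in ("Enemies", "Neutral", "GoodFriends")
}
print(friendship)  # ready for any standard defuzzification step
```

The assert passing over all eight rows is the truth-table proof in executable form; the averaging loop is the per-output step the example walks through by hand.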
Combs method
[ "Mathematics" ]
559
[ "Mathematical logic", "Logic in computer science" ]
1,029,022
https://en.wikipedia.org/wiki/Embryonic%20stem%20cell
Embryonic stem cells (ESCs) are pluripotent stem cells derived from the inner cell mass of a blastocyst, an early-stage pre-implantation embryo. Human embryos reach the blastocyst stage 4–5 days post fertilization, at which time they consist of 50–150 cells. Isolating the inner cell mass (embryoblast) using immunosurgery results in destruction of the blastocyst, a process which raises ethical issues, including whether or not embryos at the pre-implantation stage have the same moral considerations as embryos in the post-implantation stage of development.

Researchers are currently focusing heavily on the therapeutic potential of embryonic stem cells, with clinical use being the goal for many laboratories. Potential uses include the treatment of diabetes and heart disease. The cells are being studied to be used as clinical therapies, models of genetic disorders, and cellular/DNA repair. However, adverse effects in the research and clinical processes, such as tumors and unwanted immune responses, have also been reported.

Properties
Embryonic stem cells (ESCs), derived from the blastocyst stage of early mammalian embryos, are distinguished by their ability to differentiate into any embryonic cell type and by their ability to self-renew. It is these traits that make them valuable in the scientific and medical fields. ESCs have a normal karyotype, maintain high telomerase activity, and exhibit remarkable long-term proliferative potential.

Pluripotent
Embryonic stem cells of the inner cell mass are pluripotent, meaning they are able to differentiate to generate primitive ectoderm, which ultimately differentiates during gastrulation into all derivatives of the three primary germ layers: ectoderm, endoderm, and mesoderm. These germ layers generate each of the more than 220 cell types in the adult human body. When provided with the appropriate signals, ESCs initially form precursor cells that subsequently differentiate into the desired cell types. Pluripotency distinguishes embryonic stem cells from adult stem cells, which are multipotent and can only produce a limited number of cell types.

Self-renewal and repair of structure
Under defined conditions, embryonic stem cells are capable of self-renewing indefinitely in an undifferentiated state. Self-renewal conditions must prevent the cells from clumping and maintain an environment that supports an unspecialized state. Typically this is done in the lab with media containing serum and leukemia inhibitory factor, or with serum-free media supplemented with two inhibitory drugs ("2i"): the MEK inhibitor PD0325901 and the GSK-3 inhibitor CHIR99021.

Growth
ESCs divide very frequently due to a shortened G1 phase in their cell cycle. Rapid cell division allows the cells to quickly grow in number, but not size, which is important for early embryo development. In ESCs, the cyclin A and cyclin E proteins involved in the G1/S transition are always expressed at high levels. Cyclin-dependent kinases such as CDK2 that promote cell cycle progression are overactive, in part due to downregulation of their inhibitors. Retinoblastoma proteins that inhibit the transcription factor E2F until the cell is ready to enter S phase are hyperphosphorylated and inactivated in ESCs, leading to continual expression of proliferation genes. These changes result in accelerated cycles of cell division.
Although high expression levels of pro-proliferative proteins and a shortened G1 phase have been linked to maintenance of pluripotency, ESCs grown in serum-free 2i conditions do express hypo-phosphorylated, active Retinoblastoma proteins and have an elongated G1 phase. Despite this difference in the cell cycle compared to ESCs grown in media containing serum, these cells have similar pluripotent characteristics. The pluripotency factors Oct4 and Nanog play a role in transcriptionally regulating the embryonic stem cell cycle.

Uses
Due to their plasticity and potentially unlimited capacity for self-renewal, embryonic stem cell therapies have been proposed for regenerative medicine and tissue replacement after injury or disease. Pluripotent stem cells have shown promise in treating a number of varying conditions, including but not limited to: spinal cord injuries, age-related macular degeneration, diabetes, neurodegenerative disorders (such as Parkinson's disease), and AIDS. In addition to their potential in regenerative medicine, embryonic stem cells provide a possible alternative source of tissue/organs, which serves as a possible solution to the donor shortage dilemma. There are some ethical controversies surrounding this, though (see Ethical debate section below). Aside from these uses, ESCs can also be used for research on early human development, certain genetic diseases, and in vitro toxicology testing.

Utilizations
According to a 2002 article in PNAS, "Human embryonic stem cells have the potential to differentiate into various cell types, and, thus, may be useful as a source of cells for transplantation or tissue engineering."

Tissue engineering
In tissue engineering, the use of stem cells is known to be of importance. In order to successfully engineer a tissue, the cells used must be able to perform specific biological functions such as secretion of cytokines, signaling molecules, interacting with neighboring cells, and producing an extracellular matrix in the correct organization. Stem cells demonstrate these specific biological functions, along with being able to self-renew and differentiate into one or more types of specialized cells. Embryonic stem cells are one of the sources being considered for use in tissue engineering. The use of human embryonic stem cells has opened many new possibilities for tissue engineering; however, there are many hurdles that must be overcome before human embryonic stem cells can even be utilized. It is theorized that if embryonic stem cells can be altered so as not to evoke an immune response when implanted into the patient, then this would be a revolutionary step in tissue engineering. Embryonic stem cells are not limited to tissue engineering.

Cell replacement therapies
Research has focused on differentiating ESCs into a variety of cell types for eventual use as cell replacement therapies. Some of the cell types that have or are currently being developed include cardiomyocytes, neurons, hepatocytes, bone marrow cells, islet cells and endothelial cells. However, the derivation of such cell types from ESCs is not without obstacles; therefore, research has focused on overcoming these barriers. For example, studies are underway to differentiate ESCs into tissue-specific cardiomyocytes and to eradicate their immature properties that distinguish them from adult cardiomyocytes.

Clinical potential
Researchers have differentiated ESCs into dopamine-producing cells with the hope that these neurons could be used in the treatment of Parkinson's disease.
ESCs have been differentiated to natural killer cells and bone tissue. Studies involving ESCs are underway to provide an alternative treatment for diabetes. For example, ESCs have been differentiated into insulin-producing cells, and researchers at Harvard University were able to produce large quantities of pancreatic beta cells from ESCs. An article published in the European Heart Journal describes a translational process of generating human embryonic stem cell-derived cardiac progenitor cells to be used in clinical trials of patients with severe heart failure.

Drug discovery
Besides becoming an important alternative to organ transplants, ESCs are also being used in the field of toxicology and as cellular screens to uncover new chemical entities that can be developed as small-molecule drugs. Studies have shown that cardiomyocytes derived from ESCs are validated in vitro models to test drug responses and predict toxicity profiles. ESC-derived cardiomyocytes have been shown to respond to pharmacological stimuli and hence can be used to assess cardiotoxicity such as torsades de pointes. ESC-derived hepatocytes are also useful models that could be used in the preclinical stages of drug discovery. However, the development of hepatocytes from ESCs has proven to be challenging and this hinders the ability to test drug metabolism. Therefore, research has focused on establishing fully functional ESC-derived hepatocytes with stable phase I and II enzyme activity.

Models of genetic disorder
Several new studies have started to address the concept of modeling genetic disorders with embryonic stem cells. Either by genetically manipulating the cells, or more recently, by deriving diseased cell lines identified by preimplantation genetic diagnosis (PGD), modeling genetic disorders is something that has been accomplished with stem cells. This approach may very well prove valuable for studying disorders such as Fragile-X syndrome, cystic fibrosis, and other genetic maladies that have no reliable model system.

Yury Verlinsky, a Russian-American medical researcher who specialized in embryo and cellular genetics (genetic cytology), developed prenatal diagnosis testing methods to determine genetic and chromosomal disorders a month and a half earlier than standard amniocentesis. The techniques are now used by many pregnant women and prospective parents, especially couples who have a history of genetic abnormalities or where the woman is over the age of 35 (when the risk of genetically related disorders is higher). In addition, by allowing parents to select an embryo without genetic disorders, they have the potential of saving the lives of siblings that already had similar disorders and diseases using cells from the disease-free offspring.

Repair of DNA damage
Differentiated somatic cells and ES cells use different strategies for dealing with DNA damage. For instance, human foreskin fibroblasts, one type of somatic cell, use non-homologous end joining (NHEJ), an error-prone DNA repair process, as the primary pathway for repairing double-strand breaks (DSBs) during all cell cycle stages. Because of its error-prone nature, NHEJ tends to produce mutations in a cell's clonal descendants. ES cells use a different strategy to deal with DSBs. Because ES cells give rise to all of the cell types of an organism, including the cells of the germ line, mutations arising in ES cells due to faulty DNA repair are a more serious problem than in differentiated somatic cells.
Consequently, robust mechanisms are needed in ES cells to repair DNA damage accurately and, if repair fails, to remove those cells with unrepaired DNA damage. Thus, mouse ES cells predominantly use high-fidelity homologous recombinational repair (HRR) to repair DSBs. This type of repair depends on the interaction of the two sister chromosomes formed during S phase and present together during the G2 phase of the cell cycle. HRR can accurately repair DSBs in one sister chromosome by using intact information from the other sister chromosome. Cells in the G1 phase of the cell cycle (i.e. after metaphase/cell division but prior to the next round of replication) have only one copy of each chromosome (i.e. sister chromosomes aren't present). Mouse ES cells lack a G1 checkpoint and do not undergo cell cycle arrest upon acquiring DNA damage. Rather, they undergo programmed cell death (apoptosis) in response to DNA damage. Apoptosis can be used as a fail-safe strategy to remove cells with unrepaired DNA damage in order to avoid mutation and progression to cancer. Consistent with this strategy, mouse ES cells have a mutation frequency about 100-fold lower than that of isogenic mouse somatic cells.

Clinical trial
On January 23, 2009, Phase I clinical trials for transplantation of oligodendrocytes (a cell type of the brain and spinal cord) derived from human ESCs into spinal cord-injured individuals received approval from the U.S. Food and Drug Administration (FDA), marking the world's first human ESC trial. The study leading to this scientific advancement was conducted by Hans Keirstead and colleagues at the University of California, Irvine and supported by Geron Corporation of Menlo Park, CA, founded by Michael D. West, PhD. A previous experiment had shown an improvement in locomotor recovery in spinal cord-injured rats after a 7-day delayed transplantation of human ESCs that had been pushed into an oligodendrocytic lineage.

The phase I clinical study was designed to enroll about eight to ten paraplegics who had had their injuries no longer than two weeks before the trial began, since the cells must be injected before scar tissue is able to form. The researchers emphasized that the injections were not expected to fully cure the patients and restore all mobility. Based on the results of the rodent trials, researchers speculated that restoration of myelin sheaths and an increase in mobility might occur. This first trial was primarily designed to test the safety of these procedures and, if everything went well, it was hoped that it would lead to future studies involving people with more severe disabilities.

The trial was put on hold in August 2009 due to FDA concerns regarding a small number of microscopic cysts found in several treated rat models, but the hold was lifted on July 30, 2010. In October 2010 researchers enrolled and administered ESCs to the first patient at Shepherd Center in Atlanta. The makers of the stem cell therapy, Geron Corporation, estimated that it would take several months for the stem cells to replicate and for the GRNOPC1 therapy to be evaluated for success or failure. In November 2011 Geron announced it was halting the trial and dropping out of stem cell research for financial reasons, but would continue to monitor existing patients, and was attempting to find a partner that could continue their research. In 2013 BioTime, led by CEO Dr. Michael D.
West, acquired all of Geron's stem cell assets, with the stated intention of restarting Geron's embryonic stem cell-based clinical trial for spinal cord injury research. The BioTime company Asterias Biotherapeutics (NYSE MKT: AST) was granted a $14.3 million Strategic Partnership Award by the California Institute for Regenerative Medicine (CIRM) to re-initiate the world's first embryonic stem cell-based human clinical trial, for spinal cord injury. Supported by California public funds, CIRM is the largest funder of stem cell-related research and development in the world. The award provides funding for Asterias to reinitiate clinical development of AST-OPC1 in subjects with spinal cord injury and to expand clinical testing of escalating doses in the target population intended for future pivotal trials.

AST-OPC1 is a population of cells derived from human embryonic stem cells (hESCs) that contains oligodendrocyte progenitor cells (OPCs). OPCs and their mature derivatives, called oligodendrocytes, provide critical functional support for nerve cells in the spinal cord and brain. Asterias recently presented the results from phase 1 clinical trial testing of a low dose of AST-OPC1 in patients with neurologically complete thoracic spinal cord injury. The results showed that AST-OPC1 was successfully delivered to the injured spinal cord site. Patients followed 2–3 years after AST-OPC1 administration showed no evidence of serious adverse events associated with the cells in detailed follow-up assessments, including frequent neurological exams and MRIs. Immune monitoring of subjects through one year post-transplantation showed no evidence of antibody-based or cellular immune responses to AST-OPC1. In four of the five subjects, serial MRI scans performed throughout the 2–3 year follow-up period indicated that reduced spinal cord cavitation may have occurred and that AST-OPC1 may have had some positive effects in reducing spinal cord tissue deterioration. There was no unexpected neurological degeneration or improvement in the five subjects in the trial as evaluated by the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) exam.

The Strategic Partnership III grant from CIRM will provide funding to Asterias to support the next clinical trial of AST-OPC1 in subjects with spinal cord injury, and for Asterias' product development efforts to refine and scale manufacturing methods to support later-stage trials and eventually commercialization. CIRM funding will be conditional on FDA approval for the trial, completion of a definitive agreement between Asterias and CIRM, and Asterias' continued progress toward the achievement of certain pre-defined project milestones.

Concern and controversy

Adverse effects
The major concern with the possible transplantation of ESCs into patients as therapies is their ability to form tumors, including teratomas. Safety issues prompted the FDA to place a hold on the first ESC clinical trial; however, no tumors were observed. The main strategy to enhance the safety of ESCs for potential clinical use is to differentiate the ESCs into specific cell types (e.g. neurons, muscle, liver cells) that have a reduced or eliminated ability to cause tumors. Following differentiation, the cells are subjected to sorting by flow cytometry for further purification. ESCs are predicted to be inherently safer than iPS cells created with genetically integrating viral vectors because they are not genetically modified with genes such as c-Myc that are linked to cancer.
Nonetheless, ESCs express very high levels of the iPS-inducing genes, and these genes, including Myc, are essential for ESC self-renewal and pluripotency; potential strategies to improve safety by eliminating c-Myc expression are unlikely to preserve the cells' "stemness". However, N-myc and L-myc have been identified to induce iPS cells instead of c-myc with similar efficiency. Later protocols to induce pluripotency bypass these problems completely by using non-integrating RNA viral vectors such as Sendai virus or mRNA transfection.

Ethical debate
Due to the nature of embryonic stem cell research, there are many controversial opinions on the topic. Since harvesting embryonic stem cells usually necessitates destroying the embryo from which those cells are obtained, the moral status of the embryo comes into question. Some people claim that the embryo is too young to achieve personhood, or that the embryo, if donated from an IVF clinic (where labs typically acquire embryos), would otherwise go to medical waste anyway. Opponents of ESC research claim that an embryo is a human life, therefore destroying it is murder and the embryo must be protected under the same ethical view as a more developed human being.

History
1964: Lewis Kleinsmith and G. Barry Pierce Jr. isolated a single type of cell from a teratocarcinoma, a tumor now known to arise from a germ cell. These cells, isolated from the teratocarcinoma, replicated and grew in cell culture as stem cells and are now known as embryonal carcinoma (EC) cells. Although similarities in morphology and differentiating potential (pluripotency) led to the use of EC cells as the in vitro model for early mouse development, EC cells harbor genetic mutations and often abnormal karyotypes that accumulated during the development of the teratocarcinoma. These genetic aberrations further emphasized the need to be able to culture pluripotent cells directly from the inner cell mass.

1981: Embryonic stem cells (ES cells) were first derived, independently, from mouse embryos by two groups. Martin Evans and Matthew Kaufman from the Department of Genetics, University of Cambridge published first in July, revealing a new technique for culturing the mouse embryos in the uterus to allow for an increase in cell number, allowing for the derivation of ES cells from these embryos. Gail R. Martin, from the Department of Anatomy, University of California, San Francisco, published her paper in December and coined the term "embryonic stem cell". She showed that embryos could be cultured in vitro and that ES cells could be derived from these embryos.

1989: Mario R. Capecchi, Martin J. Evans, and Oliver Smithies published research detailing their isolation and genetic modification of embryonic stem cells, creating the first "knockout mice". In creating knockout mice, this publication provided scientists with an entirely new way to study disease.

1996: Dolly the sheep was the first mammal cloned from an adult cell, by the Roslin Institute of the University of Edinburgh. This experiment established that specialized adult cells retain the genetic makeup needed to produce an entire organism, which provided a basis for further research within a variety of cloning techniques. The Dolly experiment was performed by obtaining udder cells from a donor sheep and differentiating these cells in culture until division was concluded. An egg cell was then procured from a different sheep host and the nucleus was removed.
An udder cell was placed next to the egg cell and the two were fused with an electrical pulse, causing the egg to take up the udder cell's DNA. This egg cell developed into an embryo, and the embryo was inserted into a third sheep, which gave birth to the clone, Dolly.

1998: A team from the University of Wisconsin, Madison (James A. Thomson, Joseph Itskovitz-Eldor, Sander S. Shapiro, Michelle A. Waknitz, Jennifer J. Swiergiel, Vivienne S. Marshall, and Jeffrey M. Jones) published a paper titled "Embryonic Stem Cell Lines Derived From Human Blastocysts". The researchers behind this study not only created the first human embryonic stem cell lines, but recognized their pluripotency, as well as their capacity for self-renewal. The abstract of the paper notes the significance of the discovery with regards to the fields of developmental biology and drug discovery.

2001: President George W. Bush allowed federal funding to support research on roughly 60 already-existing lines of embryonic stem cells. Seeing as the limited lines that Bush allowed research on had already been established, this policy supported embryonic stem cell research without raising the ethical questions that could arise with the creation of new lines under the federal budget.

2006: Japanese scientists Shinya Yamanaka and Kazutoshi Takahashi published a paper describing the induction of pluripotent stem cells from cultures of adult mouse fibroblasts. Induced pluripotent stem cells (iPSCs) are a major discovery, as they are seemingly identical to embryonic stem cells and could be used without sparking the same moral controversy.

January, 2009: The US Food and Drug Administration (FDA) provided approval for Geron Corporation's phase I trial of its human embryonic stem cell-derived treatment for spinal cord injuries. The announcement was met with excitement from the scientific community, but also with wariness from stem cell opposers. The treatment cells were, however, derived from the cell lines approved under George W. Bush's ESC policy.

March, 2009: Executive Order 13505 was signed by President Barack Obama, removing the restrictions put in place on federal funding for human stem cells by the previous presidential administration. This would allow the National Institutes of Health (NIH) to provide funding for hESC research. The document also states that the NIH must provide revised federal funding guidelines within 120 days of the order's signing.

Techniques and conditions for derivation and culture

Derivation from humans
In vitro fertilization generates multiple embryos. The surplus of embryos is not clinically used or is unsuitable for implantation into the patient, and therefore may be donated by the donor with consent. Human embryonic stem cells can be derived from these donated embryos, or additionally they can be extracted from cloned embryos created using a cell from a patient and a donated egg through the process of somatic cell nuclear transfer. The inner cell mass (the cells of interest), from the blastocyst stage of the embryo, is separated from the trophectoderm, the cells that would differentiate into extra-embryonic tissue. Immunosurgery, the process in which antibodies are bound to the trophectoderm and removed by another solution, and mechanical dissection are performed to achieve separation. The resulting inner cell mass cells are plated onto cells that will supply support. The inner cell mass cells attach and expand further to form a human embryonic cell line, which remains undifferentiated.
These cells are fed daily and are enzymatically or mechanically separated every four to seven days. For differentiation to occur, the human embryonic stem cell line is removed from the supporting cells to form embryoid bodies, is co-cultured with a serum containing necessary signals, or is grafted into a three-dimensional scaffold.

Derivation from other animals
Embryonic stem cells are derived from the inner cell mass of the early embryo, which is harvested from the donor mother animal. Martin Evans and Matthew Kaufman reported a technique that delays embryo implantation, allowing the inner cell mass to increase. This process includes removing the donor mother's ovaries and dosing her with progesterone, changing the hormone environment, which causes the embryos to remain free in the uterus. After 4–6 days of this intrauterine culture, the embryos are harvested and grown in in vitro culture until the inner cell mass forms "egg cylinder-like structures," which are dissociated into single cells and plated on fibroblasts treated with mitomycin-c (to prevent fibroblast mitosis). Clonal cell lines are created by growing up a single cell. Evans and Kaufman showed that the cells grown out from these cultures could form teratomas and embryoid bodies, and differentiate in vitro, all of which indicates that the cells are pluripotent.

Gail Martin derived and cultured her ES cells differently. She removed the embryos from the donor mother at approximately 76 hours after copulation and cultured them overnight in a medium containing serum. The following day, she removed the inner cell mass from the late blastocyst using microsurgery. The extracted inner cell mass was cultured on fibroblasts treated with mitomycin-c in a medium containing serum and conditioned by ES cells. After approximately one week, colonies of cells grew out. These cells grew in culture and demonstrated pluripotent characteristics, as demonstrated by the ability to form teratomas, differentiate in vitro, and form embryoid bodies. Martin referred to these cells as ES cells.

It is now known that the feeder cells provide leukemia inhibitory factor (LIF) and serum provides bone morphogenetic proteins (BMPs) that are necessary to prevent ES cells from differentiating. These factors are extremely important for the efficiency of deriving ES cells. Furthermore, it has been demonstrated that different mouse strains have different efficiencies for isolating ES cells. Current uses for mouse ES cells include the generation of transgenic mice, including knockout mice. For human treatment, there is a need for patient-specific pluripotent cells. Generation of human ES cells is more difficult and faces ethical issues. So, in addition to human ES cell research, many groups are focused on the generation of induced pluripotent stem cells (iPS cells).

Potential methods for new cell line derivation
On August 23, 2006, the online edition of Nature scientific journal published a letter by Dr. Robert Lanza (medical director of Advanced Cell Technology in Worcester, MA) stating that his team had found a way to extract embryonic stem cells without destroying the actual embryo. This technical achievement would potentially enable scientists to work with new lines of embryonic stem cells derived using public funding in the US, where federal funding was at the time limited to research using embryonic stem cell lines derived prior to August 2001. In March, 2009, the limitation was lifted.
Human embryonic stem cells have also been derived by somatic cell nuclear transfer (SCNT). This approach has also sometimes been referred to as "therapeutic cloning" because SCNT bears similarity to other kinds of cloning in that nuclei are transferred from a somatic cell into an enucleated egg cell. However, in this case SCNT was used to produce embryonic stem cell lines in a lab, not living organisms via a pregnancy. The "therapeutic" part of the name is included because of the hope that SCNT-produced embryonic stem cells could have clinical utility.

Induced pluripotent stem cells
The iPS cell technology was pioneered by Shinya Yamanaka's lab in Kyoto, Japan, which showed in 2006 that the introduction of four specific genes encoding transcription factors could convert adult cells into pluripotent stem cells. He was awarded the 2012 Nobel Prize along with Sir John Gurdon "for the discovery that mature cells can be reprogrammed to become pluripotent."

In 2007, it was shown that pluripotent stem cells highly similar to embryonic stem cells can be induced by the delivery of four factors (Oct3/4, Sox2, c-Myc, and Klf4) to differentiated cells. Utilizing the four genes previously listed, the differentiated cells are "reprogrammed" into pluripotent stem cells, allowing for the generation of pluripotent/embryonic stem cells without the embryo. Because the morphology and growth factors of these lab-induced pluripotent cells are equivalent to those of embryonic stem cells, these cells have come to be known as induced pluripotent stem cells (iPS cells). This was originally observed in mouse pluripotent stem cells, but the procedure can now be performed in human adult fibroblasts using the same four genes.

Because ethical concerns regarding embryonic stem cells typically are about their derivation from terminated embryos, it is believed that reprogramming to these iPS cells may be less controversial. This may enable the generation of patient-specific ES cell lines that could potentially be used for cell replacement therapies. In addition, this will allow the generation of ES cell lines from patients with a variety of genetic diseases and will provide invaluable models to study those diseases. However, as a first indication that the iPS cell technology can in rapid succession lead to new cures, it was used by a research team headed by Rudolf Jaenisch of the Whitehead Institute for Biomedical Research in Cambridge, Massachusetts, to cure mice of sickle cell anemia, as reported by Science journal's online edition on December 6, 2007.

On January 16, 2008, a California-based company, Stemagen, announced that they had created the first mature cloned human embryos from single skin cells taken from adults. These embryos can be harvested for patient-matched embryonic stem cells.

Contamination by reagents used in cell culture
The online edition of Nature Medicine published a study on January 24, 2005, which stated that the human embryonic stem cells available for federally funded research were contaminated with non-human molecules from the culture medium used to grow the cells. It is a common technique to use mouse cells and other animal cells to maintain the pluripotency of actively dividing stem cells. The problem was discovered when non-human sialic acid in the growth medium was found to compromise the potential uses of the embryonic stem cells in humans, according to scientists at the University of California, San Diego.
However, a study published in the online edition of Lancet Medical Journal on March 8, 2005, detailed information about a new stem cell line that was derived from human embryos under completely cell- and serum-free conditions. After more than 6 months of undifferentiated proliferation, these cells demonstrated the potential to form derivatives of all three embryonic germ layers both in vitro and in teratomas. These properties were also successfully maintained (for more than 30 passages) with the established stem cell lines.

Muse cells
Muse cells (multi-lineage differentiating stress enduring cells) are non-cancerous pluripotent stem cells found in adults. They were discovered in 2010 by Mari Dezawa and her research group. Muse cells reside in the connective tissue of nearly every organ, including the umbilical cord, bone marrow and peripheral blood. They are collectable from commercially obtainable mesenchymal cells such as human fibroblasts, bone marrow-mesenchymal stem cells and adipose-derived stem cells. Muse cells are able to generate cells representative of all three germ layers from a single cell, both spontaneously and under cytokine induction. Expression of pluripotency genes and triploblastic differentiation are maintained over generations of self-renewal. Muse cells do not undergo teratoma formation when transplanted into a host environment in vivo, eliminating the risk of tumorigenesis through unbridled cell proliferation.

See also
Embryoid body
Embryonic Stem Cell Research Oversight Committees
Fetal tissue implant
Induced stem cells
KOSR (KnockOut Serum Replacement)
Stem cell controversy

References

External links
Understanding Stem Cells: A View of the Science and Issues from the National Academies
National Institutes of Health
University of Oxford practical workshop on pluripotent stem cell technology
Fact sheet on embryonic stem cells
Fact sheet on ethical issues in embryonic stem cell research
Information & Alternatives to Embryonic Stem Cell Research
A blog focusing specifically on ES cells and iPS cells including research, biotech, and patient-oriented issues

Stem cells Biotechnology Embryology 1981 in biotechnology Sociobiology
Embryonic stem cell
[ "Biology" ]
6,783
[ "Behavior", "Biotechnology", "Behavioural sciences", "Sociobiology", "nan" ]
1,029,051
https://en.wikipedia.org/wiki/Worst-case%20execution%20time
The worst-case execution time (WCET) of a computational task is the maximum length of time the task could take to execute on a specific hardware platform. What it is used for Worst-case execution time is typically used in reliable real-time systems, where understanding the worst-case timing behaviour of software is important for reliability or correct functional behaviour. As an example, a computer system that controls the behaviour of an engine in a vehicle might need to respond to inputs within a specific amount of time. One component that makes up the response time is the time spent executing the software – hence if the software worst-case execution time can be determined, then the designer of the system can use this with other techniques such as schedulability analysis to ensure that the system responds fast enough. While WCET is potentially applicable to many real-time systems, in practice an assurance of WCET is mainly used by real-time systems that are related to high reliability or safety. For example, in airborne software some attention to software timing is required by DO-178C section 6.3.4. The increasing use of software in automotive systems is also driving the need for WCET analysis of software. In the design of some systems, WCET is often used as an input to schedulability analysis, although a much more common use of WCET in critical systems is to ensure that the pre-allocated timing budgets in a partition-scheduled system such as ARINC 653 are not violated. Calculation Since the early days of embedded computing, embedded software developers have used either: end-to-end measurements of code, for example performed by setting an I/O pin on the device to high at the start of the task and to low at the end of the task and using a logic analyzer to measure the longest pulse width, or by measuring within the software itself using the processor clock or an instruction count; or manual static analysis techniques, such as counting the assembler instructions for each function, loop, etc. and then combining them. Both of these techniques have limitations. End-to-end measurements place a high burden on software testing to achieve the longest path; counting instructions is only applicable to simple software and hardware. In both cases, a margin for error is often used to account for untested code, hardware performance approximations or mistakes. A margin of 20% is often used, although there is very little justification for this figure, save for historical confidence ("it worked last time"). As software and hardware have increased in complexity, they have driven the need for tool support. Complexity is increasingly becoming an issue in both static analysis and measurements. It is difficult to judge how wide the error margin should be and how well tested the software system is. System safety arguments based on a high-water mark achieved during testing are widely used, but become harder to justify as the software and hardware become less predictable. In the future, it is likely that a requirement for safety-critical systems is that they are analyzed using both static and measurement-based approaches. Considerations The problem of finding WCET by analysis is equivalent to the halting problem and is therefore not solvable in the general case. Fortunately, for the kind of systems that engineers typically want to find WCET for, the software is usually well structured, will always terminate, and is analyzable. 
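To make the measurement idea above concrete, the following is a minimal sketch (not from the article, and in Python rather than the C or assembler an embedded target would use) of the high-water-mark approach: run the code under test over a set of inputs, record the longest observed execution time, and apply the customary safety margin. The task function and its inputs here are hypothetical placeholders.

```python
import time

def measure_high_water_mark(task, inputs, margin=1.20):
    """Run `task` over a set of test inputs, recording the longest observed
    execution time, then apply a safety margin (20% here, as discussed
    above). The result is only as good as the coverage of `inputs`."""
    worst_ns = 0
    for x in inputs:
        start = time.perf_counter_ns()
        task(x)                                  # code under test
        worst_ns = max(worst_ns, time.perf_counter_ns() - start)
    return worst_ns, worst_ns * margin

# Hypothetical task whose running time depends on its input.
def task(n):
    return sum(i * i for i in range(n))

observed, budget = measure_high_water_mark(task, inputs=[10, 1000, 50000])
print(f"high-water mark: {observed} ns; with 20% margin: {budget:.0f} ns")
```

On real hardware this measurement would typically use a cycle counter or an I/O pin with a logic analyzer, as described above, and the observed high-water mark remains only a lower bound on the true WCET unless the longest path is actually exercised.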
Most methods for finding a WCET involve approximations (usually a rounding upwards when there are uncertainties) and hence in practice the exact WCET itself is often regarded as unobtainable. Instead, different techniques for finding the WCET produce estimates for the WCET. Those estimates are typically pessimistic, meaning that the estimated WCET is known to be higher than the real WCET (which is usually what is desired). Much work on WCET analysis is on reducing the pessimism in analysis so that the estimated value is low enough to be valuable to the system designer. WCET analysis usually refers to the execution time of a single thread, task or process. However, on modern hardware, especially multi-core, other tasks in the system will impact the WCET of a given task if they share cache, memory and other hardware features. Further, task-scheduling events such as blocking or interruption should be considered in WCET analysis if they can occur in a particular system. Therefore, it is important to consider the context in which WCET analysis is applied. Automated approaches There are many automated approaches to calculating WCET beyond the manual techniques above. These include: analytical techniques to improve test cases and increase confidence in end-to-end measurements; static analysis of the software ("static" meaning without executing the software); and combined approaches, often referred to as "hybrid" analysis, being a combination of measurements and structural analysis. Static analysis techniques A static WCET tool attempts to estimate WCET by examining the computer software without executing it directly on the hardware. Static analysis techniques have dominated research in the area since the late 1980s, although in an industrial setting, end-to-end measurement approaches were the standard practice. Static analysis tools work at a high level to determine the structure of a program's task, working either on a piece of source code or a disassembled binary executable. They also work at a low level, using timing information about the real hardware that the task will execute on, with all its specific features. By combining those two kinds of analysis, the tool attempts to give an upper bound on the time required to execute a given task on a given hardware platform. At the low level, static WCET analysis is complicated by the presence of architectural features that improve the average-case performance of the processor: instruction/data caches, branch prediction and instruction pipelines, for example. It is possible, but increasingly difficult, to determine tight WCET bounds if these modern architectural features are taken into account in the timing model used by the analysis. Certification authorities such as the European Aviation Safety Agency therefore rely on model validation suites. Static analysis has resulted in good results for simpler hardware; however, a possible limitation of static analysis is that the hardware (the CPU in particular) has reached a complexity which is extremely hard to model. In particular, the modelling process can introduce errors from several sources: errors in chip design, lack of documentation, errors in documentation, errors in model creation; all leading to cases where the model predicts a different behavior to that observed on real hardware. Typically, where it is not possible to accurately predict a behavior, a pessimistic result is used, which can lead to the WCET estimate being much larger than anything achieved at run-time. 
Obtaining tight static WCET estimates is particularly difficult on multi-core processors. There are a number of commercial and academic tools that implement various forms of static analysis. Measurement and hybrid techniques Measurement-based and hybrid approaches usually try to measure the execution times of short code segments on the real hardware, which are then combined in a higher-level analysis. Tools take into account the structure of the software (e.g. loops, branches) to produce an estimate of the WCET of the larger program. The rationale is that it is hard to test the longest path in complex software, but it is easier to test the longest path in many smaller components of it. A worst-case effect needs to be seen only once during testing for the analysis to be able to combine it with other worst-case events in its analysis. Typically, the small sections of software can be measured automatically using techniques such as instrumentation (adding markers to the software) or with hardware support such as debuggers and CPU hardware tracing modules. These markers result in a trace of execution, which includes both the path taken through the program and the time at which different points were executed. The trace is then analyzed to determine the maximum time that each part of the program has ever taken to execute, what the maximum observed iteration time of each loop is, and whether there are any parts of the software that are untested (code coverage). Measurement-based WCET analysis has resulted in good results for both simple and complex hardware, although like static analysis it can suffer excessive pessimism in multi-core situations, where the impact of one core on another is hard to define. A limitation of measurement is that it relies on observing the worst-case effects during testing (although not necessarily at the same time). It can be hard to determine if the worst-case effects have necessarily been tested. There are a number of commercial and academic tools that implement various forms of measurement-based analysis. Research The most active research groups are in the USA (University of Michigan), Sweden (Mälardalen, Linköping), Germany (Saarbrücken, Dortmund, Braunschweig), France (Toulouse, Saclay, Rennes), Austria (Vienna), UK (University of York and Rapita Systems Ltd), Italy (Bologna), Spain (Cantabria, Valencia), and Switzerland (Zurich). Recently, the topic of code-level timing analysis has found more attention outside of Europe by research groups in the US (North Carolina, Florida), Canada, Australia, Bangladesh (MBI LAB and RDS), the Kingdom of Saudi Arabia – UQU (HISE LAB), Singapore and India (IIT Madras, IISc Bangalore). WCET Tool Challenge The first international WCET Tool Challenge took place during the autumn of 2006. It was organized by Mälardalen University and sponsored by the ARTIST2 Network of Excellence on Embedded Systems Design. The aim of the Challenge was to inspect and compare different approaches to analyzing the worst-case execution time. All available tools and prototypes able to determine safe upper bounds for the WCET of tasks participated. The final results were presented in November 2006 at the ISoLA 2006 International Symposium in Paphos, Cyprus. A second Challenge took place in 2008. 
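As a rough illustration of how a hybrid analysis can combine per-block measurements structurally, here is a sketch under simplifying assumptions: a loop-free control-flow graph, hypothetical block names and times, and no modeling of cache or pipeline effects between blocks.

```python
from functools import lru_cache

# Hypothetical control-flow graph: each basic block carries the maximum
# execution time observed for it during testing (arbitrary time units);
# edges give the possible successor blocks. On a loop-free graph, a WCET
# estimate is the longest entry-to-exit path, computable in linear time.
block_max_time = {"entry": 5, "A": 40, "B": 25, "C": 60, "exit": 5}
successors = {"entry": ["A", "B"], "A": ["C"], "B": ["C"], "C": ["exit"], "exit": []}

@lru_cache(maxsize=None)
def wcet_estimate(node):
    """Most pessimistic path cost from `node` to the program exit."""
    tails = [wcet_estimate(s) for s in successors[node]]
    return block_max_time[node] + (max(tails) if tails else 0)

print(wcet_estimate("entry"))  # 5 + 40 + 60 + 5 = 110 time units
```

Note that the estimate combines worst-case observations that may never co-occur on a single run, which is one source of the pessimism discussed above.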
See also Best and worst cases Amortized analysis Big O notation Optimization (computer science) References Articles and white papers Data-Flow Frameworks for Worst-Case Execution Time Analysis Worst-Case Execution Time Prediction by Static Program Analysis (PDF) OTAWA, a Framework for Experimenting WCET Computations (PDF) WCET Tool Challenge 2006 extended test results analysis of final report (Journal article in Springer) WCET Tool Challenge 2006 final report (PDF) A compiler framework for the reduction of worst-case execution times (PDF) External links CerCo ("Certified Complexity") WCET-aware Compilation / The WCET-aware C Compiler WCC Real-time computing
Worst-case execution time
[ "Technology" ]
2,139
[ "Real-time computing" ]
1,029,060
https://en.wikipedia.org/wiki/Aquatic%20toxicology
Aquatic toxicology is the study of the effects of manufactured chemicals and other anthropogenic and natural materials and activities on aquatic organisms at various levels of organization, from subcellular through individual organisms to communities and ecosystems. Aquatic toxicology is a multidisciplinary field which integrates toxicology, aquatic ecology and aquatic chemistry. This field of study includes freshwater, marine water and sediment environments. Common tests include standardized acute and chronic toxicity tests lasting from 24–96 hours (acute tests) to 7 days or more (chronic tests). These tests measure endpoints such as survival, growth and reproduction at each concentration in a gradient, along with a control test. They typically use selected organisms with ecologically relevant sensitivity to toxicants and a well-established literature background. These organisms can be easily acquired or cultured in the lab and are easy to handle. History While basic research in toxicology began in multiple countries in the 1800s, it was not until around the 1930s that the use of acute toxicity testing, especially on fish, was established. Due to the wide use of the organochlorine pesticide DDT [1,1,1-trichloro-2,2-bis(p-chlorophenyl)ethane] and its linkage to fish death, the field of aquatic toxicology grew. At first, studies focused mainly on oysters and mussels, as they could not move away from the toxic environment. The results of these studies eventually led to the implementation of programs that monitor concentrations of aquatic pollutants in oysters and mussels, such as the Mussel Watch program of the National Oceanic and Atmospheric Administration (NOAA). Over the next two decades, the effects of chemicals and wastes on non-human species became more of a public issue and the era of the pickle-jar bioassays began as efforts increased to standardize toxicity testing techniques. In the United States, the passage of the Federal Water Pollution Control Act of 1948 marked the first comprehensive legislation for the control of water pollution and was followed by major amendments in 1956. In 1962, public and governmental interest was renewed, in large part due to the publication of Rachel Carson's Silent Spring, and three years later the Water Quality Act of 1965 was passed, which directed states to develop water quality standards. Public awareness, as well as scientific and governmental concern, continued to grow throughout the 1970s and by the end of the decade research had expanded to include hazard evaluation and risk analysis. In the subsequent decades, aquatic toxicology has continued to expand and internationalize, so that there is now a strong application of toxicity testing for environmental protection. Aquatic toxicology continues to evolve as risk assessment becomes more practiced in the field. The field is gaining attention as it has begun to link the effects of pollutants on marine animals to humans who eat fish and other marine life. Aquatic toxicity tests Aquatic toxicology tests (assays): toxicity tests are used to provide qualitative and quantitative data on adverse (deleterious) effects on aquatic organisms from a toxicant. Toxicity tests can be used to assess the potential for damage to an aquatic environment and provide a database that can be used to assess the risk associated with a situation for a specific toxicant. Aquatic toxicology tests can be performed in the field or in the laboratory. 
Field experiments generally involve multiple-species exposures, though single species can be caged for a set duration, while laboratory experiments generally involve single-species exposures. A dose–response relationship, most commonly a sigmoidal curve, is used to quantify the toxic effects at a selected endpoint or criterion for effect (i.e. death or another adverse effect to the organism). Concentration is on the x-axis and percent inhibition or response is on the y-axis. The criteria for effects, or endpoints tested for, can include lethal and sublethal effects (see Toxicological effects). There are different types of toxicity tests that can be performed on various test species. Different species differ in their susceptibility to chemicals, most likely due to differences in accessibility, metabolic rate, excretion rate, genetic factors, dietary factors, age, sex, health and stress level of the organism. Common standard test species are the fathead minnow (Pimephales promelas), daphnids (Daphnia magna, D. pulex, D. pulicaria, Ceriodaphnia dubia), midges (Chironomus tentans, C. riparius), rainbow trout (Oncorhynchus mykiss), sheepshead minnow (Cyprinodon variegatus), zebrafish (Danio rerio), mysids (Mysidopsis), oysters (Crassostrea), scud (Hyalella azteca), grass shrimp (Palaemonetes pugio) and mussels (Mytilus galloprovincialis). As defined by ASTM International, these species are routinely selected on the basis of availability; commercial, recreational, and ecological importance; past successful use; and regulatory use. A variety of acceptable standardized test methods have been published. Some of the more widely accepted agencies to publish methods are: the American Public Health Association, the US Environmental Protection Agency (EPA), ASTM International, the International Organization for Standardization, Environment and Climate Change Canada, and the Organisation for Economic Co-operation and Development. Standardized tests offer the ability to compare results between laboratories. There are many kinds of toxicity tests widely accepted in the scientific literature and by regulatory agencies. The type of test used depends on many factors: the specific regulatory agency conducting the test, resources available, physical and chemical characteristics of the environment, type of toxicant, test species available, laboratory vs. field testing, endpoint selection, and the time and resources available to conduct the assays are some of the most common influences on test design. Exposure systems Exposure systems are the four general techniques by which control and test organisms are exposed to treated and diluted water or test solutions. Static. A static test exposes the organism in still water. The toxicant is added to the water in order to obtain the correct concentrations to be tested. The control and test organisms are placed in the test solutions and the water is not changed for the entirety of the test. Recirculation. A recirculation test exposes the organism to the toxicant in a similar manner as the static test, except that the test solutions are pumped through an apparatus (i.e. a filter) to maintain water quality but not reduce the concentration of the toxicant in the water. The water is circulated through the test chamber continuously, similar to an aerated fish tank. This type of test is expensive and it is unclear whether or not the filter or aerator has an effect on the toxicant. Renewal. 
A renewal test also exposes the organism to the toxicant in a similar manner as the static test, because it is in still water. However, in a renewal test the test solution is renewed periodically (at constant intervals) by transferring the organism to a fresh test chamber with the same concentration of toxicant. Flow-through. A flow-through test exposes the organism to the toxicant with a flow into the test chambers and then out of the test chambers. The once-through flow can be either intermittent or continuous. A stock solution of the correct concentration of contaminant must be prepared in advance. Metering pumps or diluters control the flow and the volume of the test solution, and the proper proportions of water and contaminant are mixed. Types of tests Acute tests are short-term exposure tests (14 days or less) and generally use lethality as an endpoint. In acute exposures, organisms come into contact with higher doses of the toxicant in a single event or in multiple events over a short period of time, usually producing immediate effects, depending on the absorption time of the toxicant. These tests are generally conducted on organisms during a specific time period of the organism's life cycle, and are considered partial life cycle tests. Acute tests are not valid if mortality in the control sample is greater than 10%. However, this control acceptability criterion is dependent upon the species and the duration of the test. Results are reported as an EC50, the concentration that will affect fifty percent of the sample. Chronic tests are long-term tests (weeks, months or years), relative to the test organism's life span (>10% of the life span), and generally use sublethal endpoints. In chronic exposures, organisms come into contact with low, continuous doses of a toxicant. Chronic exposures may induce effects similar to those of acute exposures, but can also result in effects that develop slowly. Chronic tests are generally considered full life cycle tests and cover an entire generation time or reproductive life cycle ("egg to egg"). Chronic tests are not considered valid if mortality in the control sample is greater than 20%. The results have generally been reported as NOECs (no observed effect concentrations) and LOECs (lowest observed effect concentrations). However, NOECs and LOECs are becoming less common because these endpoints depend on the concentration series chosen for the test, and such reports are becoming a topic of debate in the field because of the way they may alter the results of the tests. For example, if the test concentration series is 100, 50, 25, 12.5 and 6.25, and the response observed at 6.25 is only 2%, the NOEC would be reported as 6.25 even though the true no-effect concentration may lie anywhere below the next tested concentration. Early life stage tests are considered subchronic exposures that are less than a complete reproductive life cycle and include exposure during early, sensitive life stages of an organism. These exposures are also called critical life stage, embryo-larval, or egg-fry tests. Early life stage tests are not considered valid if mortality in the control sample is greater than 30%. Short-term sublethal tests are used to evaluate the toxicity of effluents to aquatic organisms. These methods are developed by the EPA and focus only on the most sensitive life stages. Endpoints for these tests include changes in growth, reproduction and survival. NOECs, LOECs and EC50s are reported in these tests. Bioaccumulation tests are toxicity tests that can be used for hydrophobic chemicals that may accumulate in the fatty tissue of aquatic organisms. 
Toxicants with low solubility in water can generally be stored in fatty tissue due to the high lipid content of this tissue. The storage of these toxicants within the organism may lead to cumulative toxicity. Bioaccumulation tests use bioconcentration factors (BCF) to predict concentrations of hydrophobic contaminants in organisms. The BCF is the ratio of the average concentration of test chemical accumulated in the tissue of the test organism (under steady-state conditions) to the average measured concentration in the water. Freshwater tests and saltwater tests have different standard methods, especially as set by the regulatory agencies. However, these tests generally include a control (negative and/or positive), a geometric dilution series or other appropriate logarithmic dilution series, test chambers and equal numbers of replicates, and a test organism. Exact exposure time and test duration will depend on the type of test (acute vs. chronic) and the type of organism. Temperature, water quality parameters and light will depend on regulator requirements and organism type. In the US, many wastewater dischargers (e.g., factories, power plants, refineries, mines, municipal sewage treatment plants) are required to conduct periodic whole effluent toxicity (WET) tests under the National Pollutant Discharge Elimination System (NPDES) permit program, pursuant to the Clean Water Act. For facilities discharging to freshwater, effluent is used to perform static-acute multi-concentration toxicity tests with Ceriodaphnia dubia (water flea) and Pimephales promelas (fathead minnow), among other species. The test organisms are exposed for 48 hours under static conditions to five concentrations of the effluent. The major difference between the short-term chronic effluent toxicity test and the acute effluent toxicity test is that the short-term chronic test lasts for seven days and the acute test lasts for 48 hours. For discharges to marine and estuarine waters, the test species used are sheepshead minnow (Cyprinodon variegatus), inland silverside (Menidia beryllina), Americamysis bahia, and purple sea urchin (Strongylocentrotus purpuratus). Sediment tests At some point most chemicals originating from both anthropogenic and natural sources accumulate in sediment. For this reason, sediment toxicity can play a major role in the adverse biological effects seen in aquatic organisms, especially those inhabiting benthic habitats. A recommended approach for sediment testing is to apply the sediment quality triad (SQT), which involves simultaneously examining sediment chemistry, toxicity, field alterations, bioaccumulation, and bioavailability assessments that can be used in a laboratory or in the field. Due to the expansion of SQTs, the approach is now more commonly referred to as the "Sediment Assessment Framework." Collection, handling, and storage of sediment can have an effect on bioavailability, and for this reason standard methods have been developed for this purpose. Toxicological effects Toxicity can be broken down into two broad categories: direct and indirect toxicity. Direct toxicity results from a toxicant acting at the site of action in or on the organism. Indirect toxicity occurs with a change in the physical, chemical, or biological environment. Lethality is the most common effect used in toxicology and is used as the endpoint for acute toxicity tests. Chronic toxicity tests instead examine sublethal endpoints. 
These endpoints include behavioral, physiological, biochemical, and histological changes. There are a number of effects that occur when an organism is simultaneously exposed to two or more toxicants. These effects include additive effects, synergistic effects, potentiation effects, and antagonistic effects. An additive effect occurs when the combined effect is equal to the sum of the individual effects. A synergistic effect occurs when the combined effect is much greater than the two individual effects added together. Potentiation occurs when a chemical that has no effect of its own is added to a toxicant and the combination has a greater effect than the toxicant alone. Finally, an antagonistic effect occurs when a combination of chemicals has less of an effect than the sum of their individual effects. Important aquatic toxicology resources ASTM International (formerly the American Society for Testing and Materials). A consensus-based organization, representing over 140 participating countries, that develops and delivers international voluntary standard methods for aquatic toxicity testing. Standard Methods for the Examination of Water and Wastewater. A compilation of techniques for water analysis, jointly published by the American Public Health Association (APHA), the American Water Works Association (AWWA), and the Water Environment Federation. "Ecotox." A database maintained by the EPA that offers single-chemical toxicity information for both aquatic and terrestrial organisms. Society of Environmental Toxicology and Chemistry (SETAC). A nonprofit, worldwide society working to promote scientific research to further our understanding of environmental stressors, environmental education, and the use of science in environmental policy. EPA publishes guidance manuals outlining aquatic toxicity test procedures. Organisation for Economic Co-operation and Development (OECD). A forum for governments to work together to promote policies for the betterment of people's social and economic well-being around the world. One way in which they accomplish this is through the development of aquatic toxicity test guidelines. Environment and Climate Change Canada. Canada's lead federal agency for environmental protection. Terminology Median Lethal Concentration (LC50) – The chemical concentration that is expected to kill 50% of a group of organisms. Median Effective Concentration (EC50) – The chemical concentration that is expected to have one or more specified effects in 50% of a group of organisms. Critical Body Residue (CBR) – An approach that routinely examines the whole-body chemical concentration in an exposed organism that is associated with an adverse biological response. Baseline toxicity – Refers to narcosis, a depression in biological activity due to toxicants being present in the organism. Biomagnification – The process by which the concentration of a chemical in the tissues of an organism increases as it passes through several levels in the food web. Lowest Observed Effect Concentration (LOEC) – The lowest test concentration that has a statistically significant effect over a specified exposure time. No Observed Effect Concentration (NOEC) – The highest test concentration for which no effect is observed relative to a control over a specified exposure time. Maximum Acceptable Toxicant Concentration (MATC) – An estimated value that represents the highest "no-effect" concentration of a specific substance within the range bounded by the NOEC and LOEC. 
Application Factor (AF) – An empirically derived "safe" concentration of a chemical. Biomonitoring – The consistent use of living organisms to analyze environmental changes over time. Effluent – Liquid industrial discharge that usually contains varying chemical toxicants. Quantitative Structure-Activity Relationship (QSAR) – A method of modeling the relationship between biological activity and the structure of organic chemicals. Mode of Action – A set of common behavioral or physiological signs that represent a type of adverse response. Mechanism of Action – The detailed events that take place at the molecular level during an adverse biological response. KOW – The octanol–water partition coefficient, which represents the ratio of the concentration of a chemical in octanol to its concentration in water. Bioconcentration Factor (BCF) – The ratio of the average chemical concentration in the tissues of the organism under steady-state conditions to the average chemical concentration measured in the water to which the organisms are exposed. All terms were derived from Rand. Significance in regulatory context In the United States, aquatic toxicology plays an important role in the NPDES wastewater permit program. While most wastewater dischargers typically conduct analytical chemistry testing for known pollutants, whole effluent toxicity tests have been standardized and are performed routinely as a tool for evaluating the potential harmful effects of other pollutants not specifically regulated in the discharge permits. EPA's water quality program has published water quality criteria (for individual pollutants) and water quality standards (for water bodies) that were derived from aquatic toxicity tests. Sediment quality guidelines While sediment quality guidelines are not meant for regulation, they provide a way to rank and compare sediment quality. These guidelines were developed by the National Oceanic and Atmospheric Administration (NOAA) and are summarized in NOAA's Screening Quick Reference Tables (SQuiRT) for many different chemicals. See also Biotic Ligand Model Clean Water Act (in the US) Ecotoxicology Cyanotoxin Freshwater biology Hydrobiology Marine pollution Oil pollution toxicity to marine fish Toxicology Poisonous fish Water management Water pollution Water purification Water quality References Aquatic ecology Environmental toxicology Water pollution
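To illustrate how a dose–response curve and a median effective concentration of the kind defined above might be estimated from test data, here is a minimal sketch using a two-parameter log-logistic model. The data are invented for illustration, and regulatory analyses use standardized statistical procedures rather than this simple fit.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical 48-h acute test data: toxicant concentration (mg/L) and the
# fraction of organisms affected at each concentration (control excluded).
conc = np.array([6.25, 12.5, 25.0, 50.0, 100.0])
resp = np.array([0.05, 0.20, 0.55, 0.80, 0.95])

def log_logistic(c, ec50, slope):
    """Two-parameter sigmoidal dose-response curve: response is 0.5 at c = EC50."""
    return 1.0 / (1.0 + (ec50 / c) ** slope)

(ec50, slope), _ = curve_fit(log_logistic, conc, resp, p0=[25.0, 1.0])
print(f"estimated EC50 ≈ {ec50:.1f} mg/L, slope ≈ {slope:.2f}")
```

The sigmoidal shape matches the concentration-versus-percent-response curve described earlier, with the EC50 falling at the curve's midpoint.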
Aquatic toxicology
[ "Chemistry", "Biology", "Environmental_science" ]
3,933
[ "Toxicology", "Environmental toxicology", "Water pollution", "Ecosystems", "Aquatic ecology" ]
1,029,137
https://en.wikipedia.org/wiki/Eigenplane
In mathematics, an eigenplane is a two-dimensional invariant subspace in a given vector space. By analogy with the term eigenvector for a vector which, when operated on by a linear operator, is another vector which is a scalar multiple of itself, the term eigenplane can be used to describe a two-dimensional plane (a 2-plane), such that the operation of a linear operator on a vector in the 2-plane always yields another vector in the same 2-plane. A particular case that has been studied is that in which the linear operator is an isometry M of the hypersphere (written S3) represented within four-dimensional Euclidean space: M [s t] = [s t] Λθ, where s and t are four-dimensional column vectors spanning the eigenplane and Λθ is a two-dimensional eigenrotation within the eigenplane. In the usual eigenvector problem, there is freedom to multiply an eigenvector by an arbitrary scalar; in this case there is freedom to multiply by an arbitrary non-zero rotation. This case is potentially physically interesting in the case that the shape of the universe is a multiply connected 3-manifold, since finding the angles of the eigenrotations of a candidate isometry for topological lensing is a way to falsify such hypotheses. See also Bivector Plane of rotation External links possible relevance of eigenplanes in cosmology GNU GPL software for calculating eigenplanes Proof constructed by J M Shelley 2017 Linear algebra
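A small numerical illustration (not part of the article) of finding eigenplanes: for a 4x4 rotation matrix, complex eigenvalues occur in conjugate pairs exp(±iθ), and the real and imaginary parts of a complex eigenvector span the corresponding invariant 2-plane. A sketch using NumPy:

```python
import numpy as np

# Build a 4x4 isometry from two independent plane rotations, then hide the
# eigenplanes behind a random orthogonal change of basis.
theta, phi = 0.7, 1.9
def rot2(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

M = np.zeros((4, 4))
M[:2, :2] = rot2(theta)
M[2:, 2:] = rot2(phi)
Q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(4, 4)))
M = Q @ M @ Q.T

# Complex eigenvalues come in conjugate pairs exp(+/- i*angle); the real
# and imaginary parts of an eigenvector span the corresponding eigenplane.
vals, vecs = np.linalg.eig(M)
for k in np.where(vals.imag > 1e-9)[0]:
    angle = np.angle(vals[k])
    s, t = vecs[:, k].real, vecs[:, k].imag  # spanning vectors of the 2-plane
    # Invariance check: M maps s back into span{s, t}.
    assert np.allclose(M @ s, np.cos(angle) * s - np.sin(angle) * t)
    print(f"eigenrotation angle ≈ {angle:.3f} rad")
```

The recovered angles are 0.7 and 1.9 regardless of the change of basis, mirroring the freedom, noted above, to multiply spanning vectors by an arbitrary rotation within each eigenplane.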
Eigenplane
[ "Mathematics" ]
306
[ "Linear algebra", "Algebra" ]
1,029,175
https://en.wikipedia.org/wiki/Dammar%20gum
Dammar, also called dammar gum or damar gum, is a resin obtained from trees of the family Dipterocarpaceae in India and Southeast Asia, principally those of the genera Shorea or Hopea (synonym Balanocarpus). The resin of some species of Canarium may also be called dammar. Most is produced by tapping trees; however, some is collected in fossilised form on the ground. The gum varies in colour from clear to pale yellow, while the fossilised form is grey-brown. Dammar gum is a triterpenoid resin, containing many triterpenes and their oxidation products. Many of them are low molecular weight compounds (dammarane, dammarenolic acid, oleanane, oleanonic acid, etc.), which easily oxidize and photo-oxidize. Types Damar mata kucing ('cat's eye damar') is a crystalline resin, usually in the form of round balls. Shorea javanica is an important source in Indonesia. Damar batu ('stone damar') is stone- or pebble-shaped, opaque dammar collected from the ground. Damar hitam ('black damar') Uses Dammar varnish, made from dammar gum dissolved in turpentine, was introduced as a picture varnish in 1826; it is commonly used in oil painting, both during the painting process and after the painting is finished. Dammar varnish and similar gum varnishes auto-oxidize and yellow over a relatively short time regardless of storage method; this effect is more pronounced on paintings stored in darkness than on works on display in light, due to the bleaching effects of sunlight on the colorants involved. In batik, dammar crystals are dissolved in molten paraffin wax to prevent the wax from cracking when it is drawn onto silk or rayon. Encaustic paints are made from dammar crystals in beeswax with pigment added; the dammar crystals serve as a hardening agent. In the past it was used as caulk for ships, frequently with pitch or bitumen. It is a common mounting material, along with Canada balsam, for preparing biological samples for light microscopy. It is used in Ayurvedic medicine for various conditions. Constituent compounds Fresh dammar gum consists of a mixture of compounds, primarily hydroxydammarenone, dammarenolic acid, and oleanonic aldehyde. Material safety Physical data Appearance: white powder Melting point: around 120 °C Density: 1.04 to 1.12 g/ml Refractive index: around 1.5 CAS number: 9000-16-2 EINECS: 232-528-4 Harmonised Tariff: 1301-90 Stability and toxicity The gum is stable, probably combustible and incompatible with strong oxidising agents. Its toxicity is low, but inhalation of dust may cause allergies. See also Agathis (Araucariaceae), synonym Dammara Canarium strictum (Burseraceae), source of black dammar in South Asia Kauri gum, from Agathis australis Shorea hypochra (Dipterocarpaceae), source of dammar temak Shorea robusta (Dipterocarpaceae), source of sal dammar Vateria indica (Dipterocarpaceae), source of white dammar in South Asia References Further reading Incense material Natural gums Painting materials Resins
Dammar gum
[ "Physics" ]
712
[ "Resins", "Unsolved problems in physics", "Incense material", "Materials", "Amorphous solids", "Matter" ]
1,029,177
https://en.wikipedia.org/wiki/Primitive%20polynomial%20%28field%20theory%29
In finite field theory, a branch of mathematics, a primitive polynomial is the minimal polynomial of a primitive element of the finite field GF(p^m). This means that a polynomial F(X) of degree m with coefficients in GF(p) is a primitive polynomial if it is monic and has a root α in GF(p^m) such that {0, 1, α, α^2, α^3, …, α^(p^m − 2)} is the entire field GF(p^m). This implies that α is a primitive (p^m − 1)-root of unity in GF(p^m). Properties Because all minimal polynomials are irreducible, all primitive polynomials are also irreducible. A primitive polynomial must have a non-zero constant term, for otherwise it will be divisible by x. Over GF(2), x + 1 is a primitive polynomial and all other primitive polynomials have an odd number of terms, since any polynomial mod 2 with an even number of terms is divisible by x + 1 (it has 1 as a root). An irreducible polynomial F(x) of degree m over GF(p), where p is prime, is a primitive polynomial if the smallest positive integer n such that F(x) divides x^n − 1 is n = p^m − 1. A primitive polynomial of degree m has m different roots in GF(p^m), which all have order p^m − 1, meaning that any of them generates the multiplicative group of the field. Over GF(p) there are exactly φ(p^m − 1) primitive elements and φ(p^m − 1)/m primitive polynomials, each of degree m, where φ is Euler's totient function. The algebraic conjugates of a primitive element α in GF(p^m) are α, α^p, α^(p^2), …, α^(p^(m−1)), and so the primitive polynomial has explicit form F(x) = (x − α)(x − α^p)(x − α^(p^2))⋯(x − α^(p^(m−1))). That the coefficients of a polynomial of this form, for any α in GF(p^m), not necessarily primitive, lie in GF(p) follows from the property that the polynomial is invariant under application of the Frobenius automorphism to its coefficients (using α^(p^m) = α) and from the fact that the fixed field of the Frobenius automorphism is GF(p). Examples Over GF(3) the polynomial x^2 + 1 is irreducible but not primitive because it divides x^4 − 1: its roots generate a cyclic group of order 4, while the multiplicative group of GF(9) is a cyclic group of order 8. The polynomial x^2 + x + 2, on the other hand, is primitive. Denote one of its roots by α. Then, because the natural numbers less than 8 and relatively prime to 8 are 1, 3, 5, and 7, the four primitive roots in GF(9) are α, α^3, α^5, and α^7. The primitive roots α and α^3 are algebraically conjugate. Indeed α + α^3 = 2 = −1 and α·α^3 = α^4 = 2, recovering the coefficients of x^2 + x + 2. The remaining primitive roots α^5 and α^7 = (α^5)^3 are also algebraically conjugate and produce the second primitive polynomial: x^2 + 2x + 2. For degree 3, GF(27) has φ(26) = 12 primitive elements. As each primitive polynomial of degree 3 has three roots, all necessarily primitive, there are 12/3 = 4 primitive polynomials of degree 3. One primitive polynomial is x^3 + 2x + 1. Denoting one of its roots by γ, the algebraically conjugate elements are γ^3 and γ^9. The other primitive polynomials are associated with algebraically conjugate sets built on other primitive elements γ^r with r relatively prime to 26. Applications Field element representation Primitive polynomials can be used to represent the elements of a finite field. If α in GF(p^m) is a root of a primitive polynomial F(x), then the nonzero elements of GF(p^m) are represented as successive powers of α: 1 = α^0, α, α^2, …, α^(p^m − 2). This allows an economical representation in a computer of the nonzero elements of the finite field, by representing an element by the corresponding exponent of α. This representation makes multiplication easy, as it corresponds to addition of exponents modulo p^m − 1. Pseudo-random bit generation Primitive polynomials over GF(2), the field with two elements, can be used for pseudorandom bit generation. In fact, every linear-feedback shift register with maximum cycle length (which is 2^n − 1, where n is the length of the linear-feedback shift register) may be built from a primitive polynomial. 
In general, for a primitive polynomial of degree m over GF(2), this process will generate 2^m − 1 pseudo-random bits before repeating the same sequence. CRC codes The cyclic redundancy check (CRC) is an error-detection code that operates by interpreting the message bitstring as the coefficients of a polynomial over GF(2) and dividing it by a fixed generator polynomial also over GF(2); see Mathematics of CRC. Primitive polynomials, or multiples of them, are sometimes a good choice for generator polynomials because they can reliably detect two bit errors that occur far apart in the message bitstring, up to a distance of 2^n − 1 for a degree n primitive polynomial. Primitive trinomials A useful class of primitive polynomials is the primitive trinomials, those having only three nonzero terms: x^r + x^k + 1. Their simplicity makes for particularly small and fast linear-feedback shift registers. A number of results give techniques for locating and testing primitiveness of trinomials. For polynomials over GF(2), where 2^r − 1 is a Mersenne prime, a polynomial of degree r is primitive if and only if it is irreducible. (Given an irreducible polynomial, it is not primitive only if the period of x is a non-trivial factor of 2^r − 1. Primes have no non-trivial factors.) Although the Mersenne Twister pseudo-random number generator does not use a trinomial, it does take advantage of this. Richard Brent has been tabulating primitive trinomials of this form. Such a trinomial can be used to create a pseudo-random number generator of huge period 2^r − 1. References External links Field (mathematics) Polynomials
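As an illustrative sketch (not from the article), the following implements a maximal-length Fibonacci LFSR for the degree-4 primitive polynomial x^4 + x + 1 over GF(2). The tap positions encode the recurrence s_n = s_(n−3) XOR s_(n−4), and any nonzero seed cycles through all 2^4 − 1 = 15 nonzero states:

```python
def lfsr_stream(count, state=0b0001):
    """Fibonacci LFSR whose feedback implements s_n = s_(n-3) XOR s_(n-4),
    i.e. the primitive polynomial x^4 + x + 1 over GF(2). Any nonzero
    4-bit seed visits all 2^4 - 1 = 15 nonzero states, producing a
    maximal-length bit sequence of period 15."""
    bits = []
    for _ in range(count):
        bits.append(state & 1)                    # output the low bit
        fb = ((state >> 3) ^ (state >> 2)) & 1    # feedback taps from the polynomial
        state = ((state << 1) | fb) & 0b1111      # shift and insert the feedback bit
    return bits

seq = lfsr_stream(30)
print(seq[:15] == seq[15:])   # True: the stream repeats with period exactly 15
```

Using a merely irreducible (non-primitive) polynomial here would split the nonzero states into several shorter cycles, which is why primitiveness, not just irreducibility, is what guarantees the maximal period.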
Primitive polynomial (field theory)
[ "Mathematics" ]
1,073
[ "Polynomials", "Algebra" ]
1,029,211
https://en.wikipedia.org/wiki/Metabolomics
Metabolomics is the scientific study of chemical processes involving metabolites, the small molecule substrates, intermediates, and products of cell metabolism. Specifically, metabolomics is the "systematic study of the unique chemical fingerprints that specific cellular processes leave behind", the study of their small-molecule metabolite profiles. The metabolome represents the complete set of metabolites in a biological cell, tissue, organ, or organism, which are the end products of cellular processes. Messenger RNA (mRNA), gene expression data, and proteomic analyses reveal the set of gene products being produced in the cell, data that represents one aspect of cellular function. Conversely, metabolic profiling can give an instantaneous snapshot of the physiology of that cell, and thus, metabolomics provides a direct "functional readout of the physiological state" of an organism. There are indeed quantifiable correlations between the metabolome and the other cellular ensembles (genome, transcriptome, proteome, and lipidome), which can be used to predict metabolite abundances in biological samples from, for example, mRNA abundances. One of the ultimate challenges of systems biology is to integrate metabolomics with all other -omics information to provide a better understanding of cellular biology. History The concept that individuals might have a "metabolic profile" that could be reflected in the makeup of their biological fluids was introduced in the late 1940s by Roger Williams, who used paper chromatography to suggest that characteristic metabolic patterns in urine and saliva were associated with diseases such as schizophrenia. However, it was only through technological advancements in the 1960s and 1970s that it became feasible to quantitatively (as opposed to qualitatively) measure metabolic profiles. The term "metabolic profile" was introduced by Horning, et al. in 1971 after they demonstrated that gas chromatography-mass spectrometry (GC-MS) could be used to measure compounds present in human urine and tissue extracts. The Horning group, along with that of Linus Pauling and Arthur B. Robinson, led the development of GC-MS methods to monitor the metabolites present in urine through the 1970s. Concurrently, NMR spectroscopy, which was discovered in the 1940s, was also undergoing rapid advances. In 1974, Seeley et al. demonstrated the utility of using NMR to detect metabolites in unmodified biological samples. This first study on muscle highlighted the value of NMR in that it was determined that 90% of cellular ATP is complexed with magnesium. As sensitivity has improved with the evolution of higher magnetic field strengths and magic angle spinning, NMR continues to be a leading analytical tool to investigate metabolism. Recent efforts to utilize NMR for metabolomics have been largely driven by the laboratory of Jeremy K. Nicholson at Birkbeck College, University of London and later at Imperial College London. In 1984, Nicholson showed 1H NMR spectroscopy could potentially be used to diagnose diabetes mellitus, and later pioneered the application of pattern recognition methods to NMR spectroscopic data. In 1994 and 1996, liquid chromatography mass spectrometry metabolomics experiments were performed by Gary Siuzdak while working with Richard Lerner (then president of the Scripps Research Institute) and Benjamin Cravatt, to analyze the cerebral spinal fluid from sleep-deprived animals. 
One molecule of particular interest, oleamide, was observed and later shown to have sleep-inducing properties. This work is one of the earliest such experiments combining liquid chromatography and mass spectrometry in metabolomics. In 2005, the first metabolomics tandem mass spectrometry database, METLIN, for characterizing human metabolites was developed in the Siuzdak laboratory at the Scripps Research Institute. METLIN has since grown, and as of December 2023 it contains MS/MS experimental data on over 930,000 molecular standards and other chemical entities, each compound having experimental tandem mass spectrometry data generated from molecular standards at multiple collision energies and in positive and negative ionization modes. METLIN is the largest repository of tandem mass spectrometry data of its kind. The dedicated academic journal Metabolomics first appeared in 2005, founded by its current editor-in-chief Roy Goodacre. In 2005, the Siuzdak lab was engaged in identifying metabolites associated with sepsis; in an effort to address the issue of statistically identifying the most relevant dysregulated metabolites across hundreds of LC/MS datasets, the first algorithm was developed to allow for the nonlinear alignment of mass spectrometry metabolomics data. Called XCMS, it has since (2012) been developed as an online tool, and as of 2019 (with METLIN) it has over 30,000 registered users. On 23 January 2007, the Human Metabolome Project, led by David S. Wishart, completed the first draft of the human metabolome, consisting of a database of approximately 2,500 metabolites, 1,200 drugs and 3,500 food components. Similar projects have been underway in several plant species, most notably Medicago truncatula and Arabidopsis thaliana, for several years. As late as mid-2010, metabolomics was still considered an "emerging field". It was noted that further progress in the field depended in large part on addressing otherwise "irresolvable technical challenges" through the technical evolution of mass spectrometry instrumentation. In 2015, real-time metabolome profiling was demonstrated for the first time. Metabolome The metabolome refers to the complete set of small-molecule (<1.5 kDa) metabolites (such as metabolic intermediates, hormones and other signaling molecules, and secondary metabolites) to be found within a biological sample, such as a single organism. The word was coined in analogy with transcriptomics and proteomics; like the transcriptome and the proteome, the metabolome is dynamic, changing from second to second. Although the metabolome can be defined readily enough, it is not currently possible to analyse the entire range of metabolites by a single analytical method. In January 2007, scientists at the University of Alberta and the University of Calgary completed the first draft of the human metabolome. The Human Metabolome Database (HMDB) is perhaps the most extensive public metabolomic spectral database to date and is a freely available electronic database (www.hmdb.ca) containing detailed information about small molecule metabolites found in the human body. It is intended to be used for applications in metabolomics, clinical chemistry, biomarker discovery and general education. The database is designed to contain or link three kinds of data: chemical data, clinical data, and molecular biology/biochemistry data. The database contains 220,945 metabolite entries including both water-soluble and lipid-soluble metabolites. 
Additionally, 8,610 protein sequences (enzymes and transporters) are linked to these metabolite entries. Each MetaboCard entry contains 130 data fields, with two-thirds of the information devoted to chemical/clinical data and the other third devoted to enzymatic or biochemical data. Version 3.5 of the HMDB contains >16,000 endogenous metabolites, >1,500 drugs and >22,000 food constituents or food metabolites. This information, available at the Human Metabolome Database and based on analysis of information available in the current scientific literature, is far from complete. In contrast, much more is known about the metabolomes of other organisms. For example, over 50,000 metabolites have been characterized from the plant kingdom, and many thousands of metabolites have been identified and/or characterized from single plants. Each type of cell and tissue has a unique metabolic 'fingerprint' that can elucidate organ- or tissue-specific information. Bio-specimens used for metabolomics analysis include, but are not limited to, plasma, serum, urine, saliva, feces, muscle, sweat, exhaled breath and gastrointestinal fluid. The ease of collection facilitates high temporal resolution, and because these fluids are always at dynamic equilibrium with the body, they can describe the host as a whole. The genome can tell what could happen, the transcriptome what appears to be happening, the proteome what makes it happen, and the metabolome what has happened and what is happening. Metabolites Metabolites are the substrates, intermediates and products of metabolism. Within the context of metabolomics, a metabolite is usually defined as any molecule less than 1.5 kDa in size. However, there are exceptions to this depending on the sample and detection method. For example, macromolecules such as lipoproteins and albumin are reliably detected in NMR-based metabolomics studies of blood plasma. In plant-based metabolomics, it is common to refer to "primary" and "secondary" metabolites. A primary metabolite is directly involved in normal growth, development, and reproduction. A secondary metabolite is not directly involved in those processes, but usually has an important ecological function. Examples include antibiotics and pigments. By contrast, in human-based metabolomics, it is more common to describe metabolites as being either endogenous (produced by the host organism) or exogenous. Metabolites of foreign substances such as drugs are termed xenometabolites. The metabolome derives from a large network of metabolic reactions, where outputs from one enzymatic chemical reaction are inputs to other chemical reactions. Such systems have been described as hypercycles. Metabonomics Metabonomics is defined as "the quantitative measurement of the dynamic multiparametric metabolic response of living systems to pathophysiological stimuli or genetic modification". The word origin is from the Greek μεταβολή meaning change and nomos meaning a rule set or set of laws. This approach was pioneered by Jeremy Nicholson at Murdoch University and has been used in toxicology, disease diagnosis and a number of other fields. Historically, the metabonomics approach was one of the first methods to apply the scope of systems biology to studies of metabolism. There has been some disagreement over the exact differences between 'metabolomics' and 'metabonomics'. 
The difference between the two terms is not related to the choice of analytical platform: although metabonomics is more associated with NMR spectroscopy and metabolomics with mass spectrometry-based techniques, this is simply because of usages amongst different groups that have popularized the different terms. While there is still no absolute agreement, there is a growing consensus that 'metabolomics' places a greater emphasis on metabolic profiling at a cellular or organ level and is primarily concerned with normal endogenous metabolism. 'Metabonomics' extends metabolic profiling to include information about perturbations of metabolism caused by environmental factors (including diet and toxins), disease processes, and the involvement of extragenomic influences, such as gut microflora. This is not a trivial difference; metabolomic studies should, by definition, exclude metabolic contributions from extragenomic sources, because these are external to the system being studied. However, in practice, within the field of human disease research there is still a large degree of overlap in the way both terms are used, and they are often in effect synonymous. Exometabolomics Exometabolomics, or "metabolic footprinting", is the study of extracellular metabolites. It uses many techniques from other subfields of metabolomics, and has applications in biofuel development, bioprocessing, determining drugs' mechanism of action, and studying intercellular interactions. Analytical technologies The typical workflow of metabolomics studies is shown in the figure. First, samples are collected from tissue, plasma, urine, saliva, cells, etc. Next, metabolites are extracted, often with the addition of internal standards, and derivatized. During sample analysis, metabolites are quantified (by liquid chromatography or gas chromatography coupled with MS, and/or by NMR spectroscopy). The raw output data can be used for metabolite feature extraction and further processed before statistical analysis (such as principal component analysis, PCA). Many bioinformatic tools and software packages are available to identify associations with disease states and outcomes, determine significant correlations, and characterize metabolic signatures with existing biological knowledge. Separation methods Initially, analytes in a metabolomic sample comprise a highly complex mixture. This complex mixture can be simplified prior to detection by separating some analytes from others. Separation achieves various goals: analytes which cannot be resolved by the detector may be separated in this step; in MS analysis, ion suppression is reduced; and the retention time of the analyte serves as information regarding its identity. This separation step is not mandatory and is often omitted in NMR and "shotgun" based approaches such as shotgun lipidomics. Gas chromatography (GC), especially when interfaced with mass spectrometry (GC-MS), is a widely used separation technique for metabolomic analysis. GC offers very high chromatographic resolution, and can be used in conjunction with a flame ionization detector (GC/FID) or a mass spectrometer (GC-MS). The method is especially useful for identification and quantification of small and volatile molecules. However, a practical limitation of GC is the requirement of chemical derivatization for many biomolecules, as only volatile chemicals can be analysed without derivatization. In cases where greater resolving power is required, two-dimensional chromatography (GCxGC) can be applied. 
High performance liquid chromatography (HPLC) has emerged as the most common separation technique for metabolomic analysis. With the advent of electrospray ionization, HPLC was coupled to MS. In contrast with GC, HPLC has lower chromatographic resolution, but requires no derivatization for polar molecules and separates molecules in the liquid phase. Additionally, HPLC has the advantage that a much wider range of analytes can be measured with a higher sensitivity than with GC methods. Capillary electrophoresis (CE) has a higher theoretical separation efficiency than HPLC (although requiring much more time per separation), and is suitable for use with a wider range of metabolite classes than is GC. As for all electrophoretic techniques, it is most appropriate for charged analytes. In direct-infusion mass spectrometry (DI-MS), the sample is introduced directly into the spectrometer and separation steps are skipped. DI-MS can be employed to perform single-cell metabolic analysis of human cells. Detection methods Mass spectrometry (MS) is used to identify and quantify metabolites after optional separation by GC, HPLC, or CE. GC-MS was the first hyphenated technique to be developed. Identification leverages the distinct patterns in which analytes fragment; these patterns can be thought of as a mass spectral fingerprint. Libraries exist that allow identification of a metabolite according to this fragmentation pattern. MS is sensitive and can be very specific. There are also a number of techniques which use MS as a stand-alone technology: the sample is infused directly into the mass spectrometer with no prior separation, and the MS provides sufficient selectivity to both separate and detect metabolites. For analysis by mass spectrometry, the analytes must be imparted with a charge and transferred to the gas phase. Electron ionization (EI) is the most common ionization technique applied to GC separations, as it is amenable to low pressures. EI also produces fragmentation of the analyte, both providing structural information and increasing the complexity of the data, possibly obscuring the molecular ion. Atmospheric-pressure chemical ionization (APCI) is an atmospheric pressure technique that can be applied to all the above separation techniques. APCI is a gas-phase ionization method which provides slightly more aggressive ionization than ESI and is suitable for less polar compounds. Electrospray ionization (ESI) is the most common ionization technique applied in LC/MS. This soft ionization is most successful for polar molecules with ionizable functional groups. Another commonly used soft ionization technique is secondary electrospray ionization (SESI). In the 2000s, surface-based mass analysis has seen a resurgence, with new MS technologies focused on increasing sensitivity, minimizing background, and reducing sample preparation. The ability to analyze metabolites directly from biofluids and tissues continues to challenge current MS technology, largely because of the limits imposed by the complexity of these samples, which contain thousands to tens of thousands of metabolites. Among the technologies being developed to address this challenge is Nanostructure-Initiator MS (NIMS), a desorption/ionization approach that does not require the application of a matrix and thereby facilitates small-molecule (i.e., metabolite) identification. 
Among desorption/ionization techniques, MALDI is also used; however, the application of a MALDI matrix can add significant background that complicates analysis of the low-mass range (i.e., metabolites). In addition, the size of the resulting matrix crystals limits the spatial resolution that can be achieved in tissue imaging. Because of these limitations, several other matrix-free desorption/ionization approaches have been applied to the analysis of biofluids and tissues. Secondary ion mass spectrometry (SIMS) was one of the first matrix-free desorption/ionization approaches used to analyze metabolites from biological samples. SIMS uses a high-energy primary ion beam to desorb and generate secondary ions from a surface. The primary advantage of SIMS is its high spatial resolution (as small as 50 nm), a powerful characteristic for tissue imaging with MS. However, SIMS has yet to be readily applied to the analysis of biofluids and tissues because of its limited sensitivity and the analyte fragmentation generated by the high-energy primary ion beam. Desorption electrospray ionization (DESI) is a matrix-free technique for analyzing biological samples that uses a charged solvent spray to desorb ions from a surface. Advantages of DESI are that no special surface is required and the analysis is performed at ambient pressure with full access to the sample during acquisition. A limitation of DESI is spatial resolution, because "focusing" the charged solvent spray is difficult. However, a recent development termed laser ablation ESI (LAESI) is a promising approach to circumvent this limitation. More recently, ion-trap techniques such as Orbitrap mass spectrometry have also been applied to metabolomics research. Nuclear magnetic resonance (NMR) spectroscopy is the only detection technique which does not rely on separation of the analytes, and the sample can thus be recovered for further analyses. All kinds of small-molecule metabolites can be measured simultaneously; in this sense, NMR is close to being a universal detector. The main advantages of NMR are high analytical reproducibility and simplicity of sample preparation. Practically, however, it is relatively insensitive compared to mass spectrometry-based techniques. Although NMR and MS are the most widely used modern-day techniques for detection, there are other methods in use. These include Fourier-transform ion cyclotron resonance, ion-mobility spectrometry, electrochemical detection (coupled to HPLC), Raman spectroscopy and radiolabel detection (when combined with thin-layer chromatography). Statistical methods The data generated in metabolomics usually consist of measurements performed on subjects under various conditions. These measurements may be digitized spectra or a list of metabolite features. In its simplest form, this generates a matrix with rows corresponding to subjects and columns corresponding to metabolite features (or vice versa). Several statistical programs are currently available for the analysis of both NMR and mass spectrometry data. A great number of free software packages are already available for the analysis of metabolomics data, as shown in the table. Some statistical tools listed in the table that were designed for NMR data analysis are also useful for MS data. For mass spectrometry data, software is available that identifies molecules that vary between subject groups on the basis of mass-to-charge value and, depending on the experimental design, retention time. Once the metabolite data matrix is determined, unsupervised data reduction techniques (e.g.
PCA) can be used to elucidate patterns and connections. In many studies, including those evaluating drug toxicity and some disease models, the metabolites of interest are not known a priori. This makes unsupervised methods, those with no prior assumptions of class membership, a popular first choice. The most common of these methods is principal component analysis (PCA), which can efficiently reduce the dimensionality of a dataset to the few components that explain the greatest variation. When analyzed in the lower-dimensional PCA space, clustering of samples with similar metabolic fingerprints can be detected. PCA algorithms aim to replace all correlated variables with a much smaller number of uncorrelated variables (referred to as principal components, PCs) while retaining most of the information in the original dataset. This clustering can elucidate patterns and assist in the determination of disease biomarkers – metabolites that correlate most with class membership. Linear models are commonly used for metabolomics data but are affected by multicollinearity. On the other hand, multivariate statistics are well suited to high-dimensional, correlated metabolomics data; the most popular of these methods is Projection to Latent Structures (PLS) regression, together with its classification variant PLS-DA. Other data mining methods, such as random forests and support-vector machines, have received increasing attention for untargeted metabolomics data analysis. In the case of univariate methods, variables are analyzed one by one using classical statistical tools (such as Student's t-test, ANOVA or mixed models), and only those with sufficiently small p-values are considered relevant. However, since there is no standard method for measuring the total number of metabolites directly in untargeted metabolomics, correction strategies should be used to reduce false discoveries when multiple comparisons are conducted. For multivariate analysis, models should always be validated to ensure that the results can be generalized; a minimal code sketch of the unsupervised and univariate steps described here appears below. Machine learning and data mining Machine learning is a powerful tool that can be used in metabolomics analysis. Recently, scientists have developed retention-time prediction software. These tools allow researchers to apply artificial intelligence to the retention-time prediction of small molecules in complex mixtures, such as human plasma, plant extracts, foods, or microbial cultures. Retention-time prediction increases the identification rate in liquid chromatography and can lead to an improved biological interpretation of metabolomics data. Key applications Toxicity assessment/toxicology by metabolic profiling (especially of urine or blood plasma samples) detects the physiological changes caused by toxic insult of a chemical (or mixture of chemicals). In many cases, the observed changes can be related to specific syndromes, e.g. a specific lesion in the liver or kidney. This is of particular relevance to pharmaceutical companies wanting to test the toxicity of potential drug candidates: if a compound can be eliminated before it reaches clinical trials on the grounds of adverse toxicity, it saves the enormous expense of the trials. For functional genomics, metabolomics can be an excellent tool for determining the phenotype caused by a genetic manipulation, such as gene deletion or insertion. Sometimes this can be a sufficient goal in itself – for instance, to detect any phenotypic changes in a genetically modified plant intended for human or animal consumption.
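Returning to the statistical methods above, here is the promised minimal sketch of the two analysis styles just described: an unsupervised PCA projection of a subjects-by-features matrix, followed by per-feature t-tests with Benjamini-Hochberg correction of the p-values. Everything here is an assumption made for demonstration: the data are synthetic, and the group sizes, feature count, and planted five-feature "signature" are arbitrary.

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical feature matrix: 20 subjects (10 control, 10 case) x 50
# metabolite features, with a shift planted in the first 5 case features.
n_ctrl, n_case, n_feat = 10, 10, 50
X = rng.normal(size=(n_ctrl + n_case, n_feat))
X[n_ctrl:, :5] += 1.5                      # the planted "metabolic signature"
y = np.array([0] * n_ctrl + [1] * n_case)

# Unsupervised step: autoscale, then look for clustering in PCA space.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
scores = PCA(n_components=2).fit_transform(Xs)
print("PC1 group means:", scores[y == 0, 0].mean(), scores[y == 1, 0].mean())

# Univariate step: one t-test per feature, then Benjamini-Hochberg
# adjustment to control the false discovery rate across 50 comparisons.
pvals = np.array([stats.ttest_ind(X[y == 0, i], X[y == 1, i]).pvalue
                  for i in range(n_feat)])
order = np.argsort(pvals)
raw = pvals[order] * n_feat / np.arange(1, n_feat + 1)   # BH: p * m / rank
adjusted = np.minimum.accumulate(raw[::-1])[::-1]        # enforce monotonicity
print("Features surviving FDR < 0.05:", sorted(order[adjusted < 0.05]))
```

On this toy matrix the case and control groups separate along the first principal component, and only the planted features should survive the false-discovery threshold; with real data, any such findings would still need the validation step noted above.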
A more exciting prospect in functional genomics is predicting the function of unknown genes by comparison with the metabolic perturbations caused by the deletion/insertion of known genes. Such advances are most likely to come from model organisms such as Saccharomyces cerevisiae and Arabidopsis thaliana. The Cravatt laboratory at the Scripps Research Institute has recently applied this technology to mammalian systems, identifying the N-acyltaurines as previously uncharacterized endogenous substrates for the enzyme fatty acid amide hydrolase (FAAH) and the monoalkylglycerol ethers (MAGEs) as endogenous substrates for the uncharacterized hydrolase KIAA1363. Metabologenomics is a novel approach to integrating metabolomics and genomics data by correlating microbial-exported metabolites with predicted biosynthetic genes. This bioinformatics-based pairing method enables natural product discovery at a larger scale by refining non-targeted metabolomic analyses to identify small molecules with related biosynthesis and to focus on those whose structures may not have been previously characterized. Fluxomics is a further development of metabolomics. The disadvantage of metabolomics is that it only provides the user with abundances or concentrations of metabolites, while fluxomics determines the reaction rates of metabolic reactions and can trace metabolites in a biological system over time. Nutrigenomics is a generalised term which links genomics, transcriptomics, proteomics and metabolomics to human nutrition. In general, in a given body fluid, a metabolome is influenced by endogenous factors such as age, sex, body composition and genetics, as well as underlying pathologies. The large bowel microflora are also a very significant potential confounder of metabolic profiles and could be classified as either an endogenous or an exogenous factor. The main exogenous factors are diet and drugs. Diet can then be broken down into nutrients and non-nutrients. Metabolomics is one means to determine a biological endpoint, or metabolic fingerprint, which reflects the balance of all these forces on an individual's metabolism. Thanks to recent cost reductions, metabolomics has now become accessible for companion animals, such as pregnant dogs. Plant metabolomics is designed to study the overall changes in the metabolites of plant samples and then to conduct deep data mining and chemometric analysis. Specialized metabolites are considered components of plant defense systems, biosynthesized in response to biotic and abiotic stresses. Metabolomics approaches have recently been used to assess the natural variance in metabolite content between individual plants, an approach with great potential for the improvement of the compositional quality of crops. See also Epigenomics Fluxomics Genomics Lipidomics Molecular epidemiology Molecular medicine Molecular pathology Precision medicine Proteomics Transcriptomics XCMS Online, a bioinformatics software platform designed for statistical analysis of mass spectrometry data References Further reading External links Human Metabolome Database (HMDB) METLIN XCMS LCMStats Metabolights NIH Common Fund Metabolomics Consortium Metabolomics Workbench Golm Metabolome Database Metabolon Metabolism Systems biology Omics
Metabolomics
[ "Chemistry", "Biology" ]
5,643
[ "Bioinformatics", "Omics", "Cellular processes", "Biochemistry", "Metabolism", "Systems biology" ]
1,029,272
https://en.wikipedia.org/wiki/Volcano%20warning%20schemes%20of%20the%20United%20States
In October 2006, the United States Geological Survey (USGS) adopted a nationwide alert system for characterizing the level of unrest and eruptive activity at volcanoes. The system is now used by the Alaska Volcano Observatory, the California Volcano Observatory (California and Nevada), the Cascades Volcano Observatory (Washington, Oregon and Idaho), the Hawaiian Volcano Observatory and the Yellowstone Volcano Observatory (Montana, Wyoming, Colorado, Utah, New Mexico and Arizona). Under this system, the USGS ranks the level of activity at a U.S. volcano using the terms "normal", for typical volcanic activity in a non-eruptive phase; "advisory", for elevated unrest; "watch", for escalating unrest or an eruption underway that poses limited hazards; and "warning", if a highly hazardous eruption is underway or imminent. These levels reflect conditions at a volcano and the expected or ongoing hazardous volcanic phenomena. When an alert level is assigned by an observatory, accompanying text will give a fuller explanation of the observed phenomena and clarify hazard implications to affected groups. Summary of Volcanic Activity Alert Notification System Aviation color codes Earlier volcano warning schemes for the United States Prior to October 2006, three parallel volcano warning schemes were used by the United States Geological Survey and the volcano observatories for different volcano ranges in the United States. Each had a base level for dormant-quiescent states and three grades of alert. Color Code Conditions, Long Valley Caldera and Mono-Inyo Craters Region, California Developed in 1997 to replace a previous 5-level system devised in 1991. Level of Concern Color Codes for volcanoes in Alaska The Alaska Volcano Observatory (AVO) used the following color-coded system to rate volcanic activity. It was originally established during the 1989-90 eruption of Redoubt Volcano. All five classifications are spelled as proper nouns, i.e., Level of Concern Color Code Orange, not Level of concern color code Orange or any other variation. On its website the AVO spells the alert color in all capitals, but this is not otherwise necessary outside their system. Warning system for Cascade Range volcanoes in Washington and Oregon Introduced following the May 18, 1980, eruption of Mount St. Helens. References USGS Volcanic Activity Alert-Notification System page AVO information release about new warning scheme Volcanology United States warning systems Color codes
Volcano warning schemes of the United States
[ "Technology" ]
474
[ "Warning systems", "United States warning systems" ]
1,029,281
https://en.wikipedia.org/wiki/Monochrom
Monochrom (stylised as monochrom) is an international art-technology-philosophy group, publishing house and film production company. It was founded in 1993, and defines itself as "an unpeculiar mixture of proto-aesthetic fringe work, pop attitude, subcultural science and political activism". Its main office is located at Museumsquartier/Vienna (at 'Q21'). The group's members are: Johannes Grenzfurthner, Evelyn Fürlinger, Harald Homolka-List, Anika Kronberger, Franz Ablinger, Frank Apunkt Schneider, Daniel Fabry, Günther Friesinger and Roland Gratzer. The group is known for working with different media and entertainment formats, although many projects are performative and have a strong focus on a critical and educational narrative. Johannes Grenzfurthner calls this "looking for the best weapon of mass distribution of an idea". Monochrom is openly left-wing and tries to encourage public debate, sometimes using subversive affirmation or over-affirmation as a tactic. The group popularized the concept of "context hacking". On the occasion of Monochrom's 20th birthday in 2013, several high-profile Austrian media outlets paid tribute to the group's pioneering contributions within the field of contemporary art and discourse. History and philosophy In the early 1990s, Johannes Grenzfurthner was an active member of several BBS message boards. He used his online connections to create a zine or alternative magazine that dealt with art, technology and subversive cultures, and was influenced by US magazines like Mondo 2000. Grenzfurthner's motivations were to react to the emerging conservatism in cyber-cultures of the early 1990s and to combine his political background in the Austrian punk and antifa movement with discussion of new technologies and the cultures they create. Franz Ablinger joined Grenzfurthner and they became the publication's core team. The first issue was released in 1993. Over the years the publication featured many interviews and essays, for example by Bruce Sterling, HR Giger, Richard Kadrey, Arthur Kroker, Negativland, Kathy Acker, Michael Marrak, DJ Spooky, Geert Lovink, Lars Gustafsson, Tony Serra, Friedrich Kittler, Jörg Buttgereit, Eric Drexler, Terry Pratchett, Jack Sargeant and Bob Black, in its specific experimental layout style. In 1995 the group decided to cover new artistic practices and started experimenting with different media: performances, computer games, robots, puppet theater, musicals, short films, pranks, conferences, and online activism. In 1995 we decided that we didn't want to constrain ourselves to just one media format (the "fanzine"). We knew that we wanted to create statements, create viral information. So a quest for the best "Weapon of Mass Distribution" started, a search for the best transportation mode for a certain politics of philosophical ideas. This was the Cambrian Explosion of monochrom. We wanted to experiment, try stuff, find new forms of telling our stories. But, to be clear, it was (and still is) not about keeping the pace, of staying up-to-date, or (even worse) staying "fresh". The emergence of new media (and therefore artistic) formats is certainly interesting. But etching information into copper plates is just as exciting. We think that the perpetual return of 'the new', to cite Walter Benjamin, is nothing to write home about – except perhaps for the slave-drivers in the fashion industry. We've never been interested in the new just in itself, but in the accidental occurrence.
In the moment where things don't tally, where productive confusion arises. All the other core team members joined between 1995 and 2006. Grenzfurthner is the group's artistic director. He defines Monochrom's artistic and activist approach as 'context hacking' or 'urban hacking'. The group monochrom refers to its working method as "Context Hacking", thus referencing the hacker culture, which propagates a creative and emancipatory approach to the technologies of the digital age, and in this way turns against the continuation into the digital age of centuries-old technological enslavement perpetrated through knowledge and hierarchies of experts. ... Context hacking transfers the hackers' objectives and methods to the network of social relationships in which artistic production occurs, and upon which it is dependent. ... One of context hackers' central ambitions is to bring the factions of counterculture, which have veered off along widely diverging trajectories, back together again. Community and network From its very foundation, the group defined itself as a movement, culture (referring to Iain M. Banks's sci-fi series) and "open field of experimentation". Monochrom supported and supports various artists, activists, researchers and communities with an online publishing platform and a print publishing service (edition mono), and organizes in-person meetings, screenings, radio shows, debate circles, conferences and online platforms. It is fundamental for the group's core members to combine artistic and educational endeavors with community work (cf. social practice). Some collaborations have been rather short-lived (for example the publication of a 1993 fringe science paper by Jakob Segal, projects with the Billboard Liberation Front and Ubermorgen, or the administration of Dorkbot Vienna), while some have been going for many years and decades (for example with Michael Marrak, Cory Doctorow, Jon Lebkowsky, Fritz Ostermayer, V. Vale, eSeL, Scott Beale/Laughing Squid, Machine Project, Emmanuel Goldstein, Jason Scott, Jonathan Mann, Jasmin Hagendorfer and the Porn Film Festival Vienna, Michael Zeltner, Anouk Wipprecht, and VSL Lindabrunn). Monochrom supports initiatives like the Radius Festival, Play:Vienna, the Buckminster Fuller Institute Austria, RE/Search, the Semantic Web Company and the Vienna hackerspace Metalab. For a couple of years, Monochrom ran the DIY project "Hackbus" in cooperation with David "Daddy D" Dempsey (of FM4). Since 2007, Monochrom has been the European correspondent for Boing Boing Video. Art residency Monochrom offers a collaborative art residency in Vienna. Since 2003 the group has invited and created projects with artists, researchers, and activists like Suhrkamp's Johannes Ullmaier, pop theorist Stefan Tiron, performance artist Angela Dorrer, DIY blogger (and later: entrepreneur) Bre Pettis, photographer and activist Audrey Penven, digital artist Eddie Codel, sex work activist Maggie Mayhem, glitch artist Phil Stearns, illustrator Josh Ellingson, DIY artist Ryan Finnigan, digital artist Jane Tingley, digital rights activist Jacob Appelbaum, sex tech expert Kyle Machulis, hacker Nick Farr, filmmakers Sophia Cacciola and Michael J. Epstein, writer Jack Sargeant, and others. All former resident artists are considered ambassadors. Johannes Grenzfurthner sees Monochrom as a community and social incubator of critical and subversive thinkers. An example is Bre Pettis of MakerBot Industries, who was inspired to create 3D printers during his art residency with Monochrom in 2007.
Pettis wanted to create a robot that could print shot glasses for Monochrom's cocktail-robot event Roboexotica and did research about the RepRap project at Metalab. Shot glasses remained a theme throughout the history of MakerBot. Main projects (in chronological order) Mackerel Fiddlers (1996-) A radical anti-representation/anti-recording music movement that partially refers to Hakim Bey's Temporary Autonomous Zone. To quote the manifesto: "We set value on developing a form of viral resistance by systematic infiltration of symphonic orchestras. A New Year's Concert of the Vienna Philharmonic Orchestra (1984) could have been transformed by at least one Mackerel Fiddler and Austria's image would have been ruined worldwide. ... These days, self-production and 'embarrassment sells' have become the golden rules of media, be it radio, TV, or telegraph. Thus it is not only legitimate to be ashamed of one's activity as a Mackerel Fiddler, it is also thankworthy. Failure is beautiful! Disgrace is sunshine!" Schubumkehr (1995–1996) A manifesto propagating 'internet demarketing' and dealing with negative aspects of early net culture. Paz Sastre reprinted and contextualized the manifesto in the 2021 publication "Manifiestos sobre el arte y la red 1990-1999", published by Exit Media. Der Exot (1997–2012) A telerobot remotely controlled via a web interface/chat forum. The robot was supported and operated by a large community. The robot's basic structure was built out of remodeled Lego bricks and equipped with a fisheye-lens camera. The project was one of the first telerobot/tele-community projects of its kind. It was presented at art festivals and technology presentations. Monochrom relaunched the project in 2011, calling it a "resurrection", and specifying the social aspect of the project: "A mobile robot with a mounted camera that can be controlled via web interface. But that's tricky. If too many people try to control the robot at the same time it is counter-productive. ... Der Exot is the anti-crowd source robot. The users have to discuss and cooperate via a chat interface to communicate where they want to go, what corners they want to explore, what to crush." The project was presented at the 'Robotville' Exhibition of the Science Museum London. Wir kaufen Seelen (We Buy Souls) (1998) A "spirituo-capitalist" booth where project members tried to buy the souls of passers-by for US$5 per soul. A total of fifteen were purchased and registered. These souls are still being offered for sale to third parties with a power of disposal. Roboexotica (1999-) An annual festival where scientists, researchers, computer geeks and artists from all over the world build cocktail robots and discuss technological innovation, futurology and science fiction. Roboexotica is also an ironic attempt to criticize techno-triumphalism and to dissect technological hypes. In 2002 Monochrom teamed up with Shifz in the organization of the events. Roboexotica has been featured on Slashdot, Wired News, Reuters, the New York Times and blogs like Boing Boing and New Scientist. Minus 24x (2001) Monochrom's pro-failure/pro-error/pro-inability manifesto, hailing the "Luddites of inability". Quote: "Turning an object against the use inscribed in it (as sociolect of the world of things) means probing its possibilities. ... The information age is an age of permanently getting stuck. Greater and greater speed is demanded. New software, new hardware, new structures, new cultural techniques. Lifelong learning? Yes.
But the company can't fire the secretary every six months, just because she can't cope with the new version of Excel. They can count their keystrokes, measure their productivity ... but! They will never be able to sanction their inability! Because that is immanent." Scrotum gegen votum (Scrotum for a vote) (2000-) A form of political commentary for "about fifty percent of the population". Masculine individuals (whether in sex or gender) are seated nude in a special chair attached to a flatbed scanner. The scans then may or may not be sent to various politicians. The project won the NEBAPOMIC 2000 (Network-based Political Minimalism Counteraction Award) in the category of small country with political tendencies towards the conservative right. Soviet Unterzoegersdorf (1999-) The project presents the fake history of the "last existing appanage republic of the USSR", Soviet Unterzoegersdorf, created to discuss topics such as the theoretical problems of historiography, the concept of the "socialist utopia" and the political struggles of postwar Europe. The theoretical concept was transformed into an improvisational theatre/performance/LARP that lasted two days. In 2005 Monochrom presented the first part of a computer game trilogy: "Soviet Unterzoegersdorf - The Adventure Game" (using AGS). To Monochrom it was clear that the adventure game, an almost extinct form of computer game, would provide the perfect media platform to communicate the idea of "Soviet Unterzoegersdorf". Edge chose the game as their 'internet game of the month' for November 2005. In March 2009 Monochrom presented 'Soviet Unterzoegersdorf: Sector II'. The game features special guest appearances by Cory Doctorow, Bruce Sterling, Jello Biafra, Jason Scott, Bre Pettis and MC Frontalot. In 2011 Monochrom and the Austrian production company Golden Girls Filmproduktion announced that they were working on the feature film Sierra Zulu, a movie dealing with Soviet Unterzoegersdorf. In 2012 Monochrom presented the 16-minute short film "Earthmoving". It is a prequel to the feature film Sierra Zulu and features actors Jeff Ricketts, Martin Auer, Lynsey Thurgar, Adrienne Ferguson and Alexander Fennon. Georg Paul Thomann (2002–2005) Monochrom was chosen to represent the Republic of Austria at the São Paulo Art Biennial, São Paulo (Brazil) in 2002. However, the political climate in Austria (at that time, the center-right People's Party had recently formed a coalition with Jörg Haider's radical-right Austrian Freedom Party) gave the left-wing art group concerns about acting as wholehearted representatives of their nation. Monochrom dealt with the conundrum by creating the persona of Georg P. Thomann, an irascible, controversial (and completely fictitious) artist of longstanding fame and renown. Through the implementation of this ironic mechanism – even the catalogue included the biography of the non-existent artist – the group solved, through pure fiction, the philosophical and bureaucratic dilemma attached to the system of representation presented to them by the Biennial. An interesting story related to the Thomann project took place once the São Paulo Art Biennial was underway. The artist Chien-Chi Chang was invited as the representative of Taiwan, but the country's name was removed by the administration from his cube overnight and replaced by the label, "Museum of Fine Arts, Taipei."
As the members of Monochrom discovered, China had threatened to withdraw from the Biennial (and create massive diplomatic problems) if the organizers of the Biennial were thought to be challenging the "One-China policy." Chang's open letter remained unanswered. Under the guise of Thomann, Monochrom invited artists from several countries to show their solidarity with Chang by taking the adhesive letters from their countries' name tags and giving them to Chang so that he could remount "Taiwan" outside his room. Monochrom wanted to show that artists do not necessarily have to internalize the fragmentation and isolation imposed by the rat-race of art markets and exhibitions as society-controlling imperatives. Several Asian newspapers reported about the performance. One Taiwanese newspaper headlined: "Austrian artist Georg Paul Thomann saves 'Taiwan'". In 2005 Monochrom released press info that "Austrian artist and writer Prof. Georg Paul Thomann died in a tragic accident at the tender age of 60". On 29 July 2005 they staged his funeral in Hall in Tirol. Thomann's gravesite remains in Hall. Georg Paul Thomann's tombstone shows an engraved URL of the Thomann project page. Georg Paul Thomann is featured in RE/Search's "Pranks 2" book. 452 x 157 cm^2 global durability (2002-) A project together with Patrick Hoenninger: milk packages are collected in many countries. The standardized format of the Tetra Pak offers a worldwide frame for creative variation, which becomes visible on the 9.5 by 16.5 cm front of the packaging. According to the group, the relation to pop art not only exists in an aesthetic but also in a social dimension, reminiscent of Walter Benjamin's "The Work of Art in the Age of Its Technological Reproducibility." The Absent Quintessence (2002) Feature films were drastically cut and thereby wrenched out of their genres (hardcore porn, splatter, eastern/kung fu, zombie, etc.). These genre films – all of which are characterized by a certain anonymity and a mass-produced look – have been stripped of their "essential" scenes (for example, all sex scenes in pornography, all fight scenes in the kung fu films). Thus, the material has been reduced to a bare-bones plot that had actually been conceived only as filler, but its aesthetics and stereotypical narrative patterns now make it easy to contextualize. The project tried to analyze these "re-released" shorts and to filter out interesting subtexts. Towers of Hanoi (2002) Members of the group entered a bank and exchanged 50 euros for dollars, then back again to euros – and so on – until the money was gone. Afterward the group calculated how many times one would have to exchange the global amount of cash (20 trillion euros) from euros to dollars until it vanishes completely. It was calculated that if this process was completed a total of 849 times using the global amount of cash, 18 cents would remain – implying a loss of roughly 3.7 percent of the remaining sum on each exchange. Blattoptera (2003–2005) Artists were invited to design a gallery-space for their tribe of South American cockroaches. Each month a different international artist, or arts group, was invited to design an environment in which the cockroaches are placed, to act as audience for, and as aesthetic judges of, the work. Brandmarker (2003-) How well do people remember the logos of large corporations that sell consumer goods? An attempt to evaluate the actual power of commercial brands by making people draw famous logos from memory. Eignblunzn (2003) Members of the group prepared blood sausage out of their own blood and ate it ('auto blood sausage').
The performance was accompanied by political essays about the 'autocannibalistic' tendencies of the global economy. The event can also be interpreted as a critical statement about art, art history, and the art market (Viennese Actionism). Instant Blitz Copy Fight (2004-) People from all over the world are asked to take flash pictures of copyright warnings in movie theaters. Monochrom (in cooperation with Cory Doctorow) collects and exhibits those pictures as a copyleft/Free Culture statement. The Flower Currency (2005) A project to explore a value exchange system, created and owned by children, to enable artists to collaborate on the creation of interdisciplinary artworks. Udo 77 (2004) A musical about Udo Proksch, a criminal figure in recent Austrian history. Born to a poor family, he rose to become the darling of Austrian high society before landing in jail on a life sentence for sinking a ship and its crew in order to cash in on insurance of nonexistent goods. His perfectly tuned network of sponsors, friends, and political functionaries could not hush up the scandal, and many of his associates joined him in his fall from grace. 1 Baud (2005) Monochrom held workshops in San Francisco to teach people semaphore communication techniques ('International Code of Signals'). After a few days set aside for study and practice, they started a citywide performance to send messages through town at a speed of 1 baud. ("1 Baud" was part of the "Experience The Experience" tour.) Brick Of Coke (2005) Monochrom created a 'Brick Of Coke': they put twenty gallons of Coca-Cola into a pot and boiled it down for a week until the residue left behind could be molded into a brick. The performance and talk dealt with the sugar industry, the policies of multinational corporations, and Coca-Cola as a symbol of corporate power. ("Brick Of Coke" was part of the "Experience The Experience" tour.) Buried Alive/Six Feet Under Club (2005-) In 2010 Monochrom created the "Six Feet Under Club". Couples could volunteer to be buried together in a casket beneath the ground to perform sexual acts. In a press release they explained that the space they occupy is "extremely private and intimate". The coffin "is a reminder of the social norm of exclusive pair bonding 'till death do us part'." However, this intimate scene was corrupted by the presence of a night-vision webcam which projected the scene onto an outside wall. The scenario kept the intimacy of a sexual moment intact while moving the private act into public space. Monochrom's performance can be seen as an absurd parody of pornographic cinema or an examination of the high value placed on sexual privacy. "Six Feet Under Club" performances took place in San Francisco in 2010 and Vienna in 2013 and 2014. People in Los Angeles, San Francisco, Vancouver and Toronto had the opportunity to be buried alive in a real coffin for fifteen minutes. As an accompanying program, Monochrom members held lectures about the history of the science of determining death and the medical cultural history of "buried alive". ("Buried Alive" was launched as part of the "Experience The Experience" tour in 2005, but was extended beyond the tour and became a permanent coffin installation at VSL Lindabrunn in Lower Austria in 2013.) Catapulting Wireless Devices (2005) The catapult is one of the oldest machines in the history of technology. Monochrom created an ironic statement about progress.
The group built a small medieval trebuchet and used a couple of issues of the techno-utopist magazine Wired as a counterweight to catapult wireless devices (e.g. cell phones or PDAs) over the greatest possible distance. ("Catapulting Wireless Devices" was part of the "Experience The Experience" tour.) Farewell to Overhead (2005) The group created a melancholic electro pop song about the "dead medium" overhead projector and adolescence/socialisation. Growing Money (2005) To quote Monochrom's press statement: "Money is frozen desire. Thus it governs the world. Money is used for all forms of trade, from daily shopping at the supermarket to trafficking in human beings and drugs. In the course of all these transactions, our money wears out quickly, especially the smaller banknotes that are changing hands constantly. ... Money is dirty, and thus it is a living entity. This is something we take literally: money is an ideal environment for microscopic organisms and bacteria. We want to make your money grow. In a potent nutrient fluid under heat lamps we want to get as much life as we can out of your dollar bills." ("Growing Money" was part of the "Experience The Experience" tour.) Illegal Space Race (2005) Monochrom placed the planets true to scale (sun, 4 meters in diameter at Machine Gallery, Alvarado Street, near Echo Park) throughout the Los Angeles cityscape. Then they conducted an 'illegal space car race' through the solar system. ("Illegal Space Race" was part of the "Experience The Experience" tour.) Magnetism Party (2005) In the form of a staged college party, Monochrom deleted all the electromagnetic storage media that they could find with a couple of heavy-duty neodymium magnets. Monochrom stated that the Magnetism Party was an attempt to actively come to terms with one aspect of the information society that is almost completely ignored by our epistemological machinery: forgetting. The slogan was "Delete is just another word for nothing left to lose". ("Magnetism Party" was part of the "Experience The Experience" tour.) Arad-II (2005): The members of Monochrom staged a fake deadly virus outbreak (as a public theatre performance) at 'Art Basel Miami Beach', one of the biggest art fairs in North America. Monochrom dealt with the networking/business aspect of the art market, the hysteria about biological warfare that followed the September 11, 2001 attacks, and the media coverage of avian influenza (bird flu). Press release quote: "In mid-November 2005, Günther Friesinger visited the Ulaangom Biennial in the Republic of Mongolia. ... He directly departed to Miami to attend some meetings at Art Basel Miami Beach. ... There is acute evidence that he is carrying a rare, but highly contagious sub-form of the Arad-II Virus (family Onoviridae), of which Freiburg virus is also a member. ... Friesinger is walking around the different art fairs in Miami Beach and is spreading the pathogen. The situation is critical. A worldwide outbreak – due to the many visitors from all over the world – is imminent. ... We want to find all the people that Günther Friesinger small talked to and handshaked with. We want to retrieve and destroy the business cards he has spread. Additionally, we must take him into custody and in the event of his death cremation is absolutely necessary." Café King Soccer (Café König Fußball) (2006) In June 2006, Monochrom created the art installation 'Café King Soccer' at NGBK Gallery in Berlin. The installation deals with the soccer corruption case centred on referee Robert Hoyzer.
Monochrom reflect on the fact that soccer has at all times mirrored the dialectics between the culture of subjectivity of the working class and the assertion of objectivity of middle-class culture. The former is represented by the collectives that meet in the game, the latter by the referee, an exemplary civil subject conducting the game by acting as its objective opponent. The Hoyzer case violated this agreement. In it, Hoyzer is – especially in the run-up to the 2006 FIFA World Cup in Germany – also a tragic character, because he acted out his inner self-contradiction as an exemplary civil subject in a publicly effective way. At the same time, the Hoyzer case is itself an integral part of the game – his exemplary immolation as a scapegoat seems to correspond exactly to his role on the field – and a conditio sine qua non of its perpetuation. Campaign for the Abolition Of Personal Pronouns (2006): Monochrom propagates the creation of gender-neutral personal pronouns. In an activist way the group states that there is a relationship between the structure of language and the way people think and act (see Constructivism). Waiting for GOTO (2006): The reference point Monochrom chose for their theatre project 'Waiting for GOTO' (Volkstheater Wien) is the theatre classic 'Waiting for Godot', which is projected into the future by modernistic references to science fiction. In 'Waiting for GOTO' we meet 'ideological delinquents' in a distant interstellar future who are separated from their bodies and locked up in the bodies of two female students, who are able to earn their college fees and make ends meet thanks to this job. The play presents us with Monochrom's portrayal of everyday work in a neo-liberal society, double consciousness, the endurance of incorporated contradictions by fragmented subjects, the exploitation of the living body, and self-alienation. Lord Jim Lodge powered by monochrom (2006-): The Lord Jim Lodge was founded during the 1980s by the artists Jörg Schlick, Martin Kippenberger, Albert Oehlen and Wolfgang Bauer. Every member was obliged to use the lodge logo and/or the "Sun Breasts Hammer" symbol and the slogan "No one helps nobody" in his work. The group's declared goal was to make the logo "more well known than that of Coca-Cola". Thanks to the international recognition received by the oeuvres of Kippenberger, Oehlen, and Schlick, the Lord Jim Lodge has already attained a relatively high degree of notoriety. Still, the logo's dissemination has remained – despite the international reputation that these artists have achieved – within the framework of the art system and its peripheral importance. As an intentional addition to works of visual art, it was in the end limited by their material form of existence. In March 2006 it was announced that Monochrom had assumed ownership of all trademark and usage rights of the artist Jörg Schlick's Lord Jim Lodge. Monochrom took part in a contest by 'Coca-Cola Light' ('Coca-Cola Light Art Edition 2006'). To quote Monochrom: "This puts us in a position to set in motion long overdue synergy effects between Coca-Cola and the Lord Jim Lodge. The only possibility for realizing the challenge formulated in the lodge logo is to use habitat in the merchandise world as a vehicle of transmission for guiding the message through that world's channels of distribution and into public consciousness. ... Thus we would like to use the prize as a trial run for such a form of cooperation/competition. Coca-Cola and Lord Jim Lodge – together at last!
The symbolic-economic capital of the Lord Jim Lodge and the economic-symbolic capital of Coca-Cola will be brought together, paving the way for a better future. For a world of radical beauty and exclusive bottles in small editions! In the end, we are all individuals – at least as long as nobody comes along and proves the contrary." Monochrom won the prize. The logo of "Lord Jim Lodge powered by monochrom" was printed on 50,000 Coca-Cola Light bottles. Taugshow (2006-): Monochrom produces a regular TV talk show for a Viennese community TV station and puts it online on their page under a Creative Commons license. The name refers to the Viennese slang term 'taugen' (to dig something, to adore something). Quote: "Our guests are geeks, heretics, and other coevals. Taugshow is a tour-de-farce, condensed into the well-known cultural technique of a prime time TV show." Guests are people like underground publisher V. Vale, sex activist and author Violet Blue, Chaos Computer Club spokesman Andy Müller-Maguhn, RepRap designer Vik Olliver, fashion researcher Adia Martin, media activist Eddie Codel, blog researcher Klaus Schönberger, computer crime lawyer Jennifer Granick, bondage instructor J. D. Lenzen, science researcher Karin Harrasser, blogger Regine Debatty, IT expert Emmanuel Goldstein, DEF CON founder Jeff Moss, Tim Pritlove and blogger/writer Cory Doctorow. Arse Elektronika (2007-): Monochrom organizes a series of conferences about sex and technology. The first conference was held in October 2007 in San Francisco; it dealt with pr0nnovation (the history of pornography and technological innovation) and featured speakers such as Mark Dery, Violet Blue and Eon McKai. Arse Elektronika 2008 dealt with Sex and Science Fiction ('Do Androids Sleep With Electric Sheep?') and was held in San Francisco in October 2008. It featured speakers like Rudy Rucker and Constance Penley. The general theme of Arse Elektronika 2009 was 'Of Intercourse and Intracourse' (genetics, biotechnology, wetware, body modifications), and the conference took place in October 2009 in San Francisco. Featured guests: R. U. Sirius, Annalee Newitz, Allen Stein. In 2010 the first Arse Elektronika exhibition was presented in the city of Hong Kong. The theme of Arse Elektronika 2010 in San Francisco was "Space Racy" (Sex, Tech and Spaces). The theme of Arse Elektronika 2011 in San Francisco was "Screw the System" (Sex, Tech, class, and culture). The theme of Arse Elektronika 2012 in San Francisco was "4PLAY: Gamifuckation and Its Discontents" (Sex, Tech and Games). The theme of Arse Elektronika 2013 in San Francisco was "id/entity" (Sex, Tech and Identity). The theme of Arse Elektronika 2014 in San Francisco was "trans*.*" (Sex, Tech and Transformations). The theme of Arse Elektronika 2015 in San Francisco was "Shoot Your Workload" (Sex, Tech and Work). Arse Elektronika compilations There are currently four compilations or proceedings of essays presented at, or relevant to, the themes of Arse Elektronika. Sculpture Mobs (2008-): Monochrom promotes a concept called Sculpture Mobs. At the 2008 Maker Faire in San Mateo, California, Monochrom trained attendees to erect public sculptures in a simulated Wal-Mart parking lot in just 5 minutes before "security" was called. Quote: "No one is safe from public sculptures, those endless atrocities! All of them are labeled 'art in public space'. Unchallenging hunks of aesthetic metal in business parks, roundabouts, in shopping malls! It is time to create DIY public art! Get your hammers! Get your welding equipment!"
Monochrom teamed up with the Billboard Liberation Front to create an illegal political public sculpture called "The Great Firewall of China" at the Google Campus in Mountain View, California. Monochrom created additional Sculpture Mobs and Sculpture Mob Training Camps in various cities: Graz (2008), Ljubljana (2008) and Barcelona (2010). Der Streichelnazi / Nazi Petting Zoo (2008): The group staged a public "Nazi petting" or "hugging" on a heavily frequented Viennese shopping street. The piece is a political and ironic statement about Austria's Nazi past and how Austria deals with it. Quote from their video documentation: "In 1938 Austria joined the Third Reich. Millions cheered Hitler and in the referendum, 99.75% said 'yes' to 'Greater Germany'. But after World War II, many Austrians sought comfort in the idea of Austria as "the Nazis' first victim". Factions of Austrian society tried for a long time to advance the view that it was only annexation at the point of a bayonet. But it's time to embrace history. It's time to remember the feel-good days of 1938. It's time to let our real feelings out! It's time to hug the Nazi, Austria! Finally!" Carefully Selected Moments (2008): Monochrom released a best-of CD featuring re-recorded versions of some of the group's favorite songs. Hacking the Spaces (2009-): Monochrom published a much-debated pamphlet by Johannes Grenzfurthner who (in collaboration with Frank Apunkt Schneider) makes a critical study of hackerspaces. Tracing the historical context of hackerspaces, which originally grew out of the counterculture movement and were conceived as niches against bourgeois society, Grenzfurthner and Schneider argue that hackerspaces today function quite differently than they initially did. Back in the seventies, these open spaces were imagined as tiny worlds to escape from capitalism or authoritarian regimes. The idea was much more based on micro-political tactics than on hippie spirit: Instead of trying to transfer the old world into a new one, people started to build up tiny new worlds with the old world. They made up open space where people could come together and try out different forms of living, working, maybe loving and whatever people do when they want to do something. In a capitalist society, alternative concepts always end up being commodified, such as "indie music" becoming mainstream. According to Grenzfurthner and Schneider, the same happened to hackerspaces when "the political approach faded away en route into tiny geeky workshop paradises". Kiki and Bubu (2008-): Invited by Boing Boing's Xeni Jardin, Monochrom created a sock puppet show focussing on the characters of Kiki and Bubu, an orange-red bird and a brown bear. Kiki is the well-read one, while Bubu is portrayed as a little slow, but often surprises with deep insights. Kiki and Bubu are fond of the ideology of Neo-Marxism, and the series is based on the idea of explaining leftist terms (like commodification, neoliberalism, alienation, planned economy) in an entertaining yet surreal way. The first installments were short films (2008), but Monochrom also created live puppet shows (2008, 2010, 2014) and a 50-minute feature video called Kiki and Bubu: Rated R Us (2011): "Kiki and Bubu have some feelings, so they sign up for an online dating site. When the People of China want to become their friends, they are excited. However, sending the People of China a video of themselves proves to be difficult: Their content gets flagged as inappropriate and taken down from YouTube.
On the long quest for knowledge that follows, Kiki and Bubu learn all about Internet censorship. And love." Antidev - God Hates Game Designers (2012): Monochrom member Johannes Grenzfurthner staged a fundamentalist Christian protest, holding signs like "God Hates Game Designers" and "Thou Shalt Not Monetize Thy Neighbor" at the Game Developers Conference 2012 in San Francisco, attacking the focus on marketing and monetization. The images went viral and provoked much controversy. Die Gstettensaga: The Rise of Echsenfriedl (2014): A sci-fi fantasy comedy about the post-apocalyptic world after the so-called "Google Wars". The movie was produced for Austria's TV station ORF and deals with the politics and hype behind media technology and nerd culture. The film was directed by Johannes Grenzfurthner. Hedonistika (2014-): Monochrom's "smorgastic Festival for Gastrobots, Culinatronics, Advanced Snackhacks and Nutritional Mayhem", an event dedicated to approaches in gastronomical robots, cooking machines, molecular cuisine and experimental food performances. The first installment was presented in Montréal at the 'Biennale internationale d'art numérique'. The second installment was presented in Holon, near Tel Aviv, at 'Print Screen Festival', and in Linz at Ars Electronica 2022. monochrom's ISS (2011): Monochrom created an improv reality sitcom for theater stages portraying the first year of operation of the International Space Station. The show depicts day-to-day working life in outer space and asks questions about work under the special conditions (and impairments) of a space station, about coming to terms with weightlessness, and about the dictatorship of the functional. The production features actor Jeff Ricketts. Creative Class Escort Service (Kreativlaufhaus) (2015): Monochrom offered an escort service for creative workers (like writers, sculptors, curators, art theorists, filmmakers, designers). The basic concept was to run a Laufhaus, a specific form of German/Austrian brothel where sex workers rent a room and offer services. Monochrom also transported creative workers to clients off-site. Monochrom wanted to start a public debate about the working conditions in art and the sex work field. Occupy East India Trading Company (2015): At the annual TEDxVienna conference, members of Monochrom entered the Volkstheater in 17th-century costumes, carrying a sign and pamphlets protesting the East India Company. The group wanted to address the history of global corporations, especially at a corporate-sponsored event such as TEDx: "The East India Company – the first great multinational corporation, and the first to run amok – was the ultimate model for many of today's joint-stock corporations." Shingal, where are you? (2016): Set in an abandoned coal mine at the Turkish border, the documentary Shingal, where are you? weaves together the stories of Yezidi refugees following ISIS attacks and the kidnapping of more than 3000 women and children. The story is told in raw cinematography from the parallel perspective of three generations of Yezidis. The film was directed by Angelos Rallis and Hans Ulrich Goessl. Monochrom functioned as the co-production company. Traceroute (2016): A documentary about the history, politics, and impact of nerd culture. It was written and directed by Johannes Grenzfurthner. Anima Ex Machina (2020): The novel Anima Ex Machina is a good example of Monochrom's history as a publisher.
The German science fiction and fantasy writer Michael Marrak was invited to Vienna as an artist-in-residence in September and October 2020. He created the sci-fi novel Anima Ex Machina, which was then published by Monochrom. The novel was nominated for the Kurd Laßwitz Award, possibly the best-known science fiction award from Germany. Glossary of Broken Dreams (2018): An essayistic feature film by Johannes Grenzfurthner that tries to present an overview of political concepts such as freedom, privacy, identity, resistance, etc. The film features performances by Amber Benson, Max Grodenchik, Jason Scott, Maschek, Jeff Ricketts and others. Masking Threshold (2021): A horror drama film directed by Johannes Grenzfurthner, written by Grenzfurthner and Samantha Lienhard. The synopsis: "Conducting a series of experiments in his makeshift home-lab, a skeptical IT worker tries to cure his harrowing hearing impairment. But where will his research lead him? Masking Threshold combines a chamber play, a scientific procedural, an unpacking video and a DIY YouTube channel while suggesting endless vistas of existential pain and decay." Razzennest (2022): A horror comedy film written and directed by Johannes Grenzfurthner. South African filmmaker and enfant terrible Manus Oosthuizen meets with film critic Babette Cruickshank in a Los Angeles sound studio. With key members of Manus's crew joining, they record an audio commentary track for his new "elegiac feature documentary Razzennest." Strange incidents occur during the recording session. Je Suis Auto (2024): A science fiction comedy film directed by Juliana Neuhuber. The film is a farce that deals with issues such as artificial intelligence, the politics of labor, and tech culture. Hacking at Leaves (2024): A documentary film written and directed by Johannes Grenzfurthner. It explores various themes including the United States' colonial past, Navajo tribal history, and the hacker movement, through the lens of the story of a hackerspace in Durango, Colorado, during the early phase of the COVID-19 pandemic. Solvent (2024): A supernatural mystery horror film directed by Johannes Grenzfurthner. In an Austrian farmhouse, a team of experts discovers a hidden secret while searching for Nazi documents. Among the team is Gunner S. Holbrook, an American expatriate who becomes increasingly obsessed with unraveling the mystery. Publications (incomplete) monochrom / magazine and yearbook series. Published in 1993, 1994, 1995, 1996, 1997, 1998, 2000, 2004, 2006, 2007, 2010 Stadt der Klage (Michael Marrak, 1997) Weg der Engel (Michael Marrak and Agus Chuadar, 1998) Who shot Immanence? (edited together with Thomas Edlinger and Fritz Ostermayer, 2002) Leutezeichnungen (edited together with Elffriede, 2003) Das Wesen der Tonalität (Othmar Steinbauer; edited by Guenther Friesinger, Helmut Neumann, Ursula Petrik, Dominik Sedivy, 2006) Quo Vadis, Logo?!
(edited by Günther Friesinger and Johannes Grenzfurthner, 2006) Sonne Busen Hammer 16 (edited by Johannes Grenzfurthner, Günther Friesinger and Franz Ablinger, 2006) Als die Welt noch unterging (Frank Apunkt Schneider, 2007) VIPA (edited by Orhan Kipcak, 2007) Spektakel - Kunst - Gesellschaft (edited by Stephan Grigat, Johannes Grenzfurthner and Günther Friesinger, 2006) Sonne Busen Hammer 17 (edited by Johannes Grenzfurthner, Günther Friesinger and Franz Ablinger, 2007) Roboexotica (edited by Günther Friesinger, Magnus Wurzer, Johannes Grenzfurthner, Franz Ablinger and Chris Veigl, 2008) Die Leiden der Neuen Musik (Ursula Petrik; edited by Guenther Friesinger, Helmut Neumann, Ursula Petrik, Dominik Sedivy, 2009) Do Androids Sleep with Electric Sheep? (edited by Johannes Grenzfurthner, Günther Friesinger, Daniel Fabry and Thomas Ballhausen, 2009) Of Intercourse and Intracourse – Sexuality, Biomodification and the Techno-Social Sphere (edited by Johannes Grenzfurthner, Günther Friesinger, Daniel Fabry, 2011) pr0nnovation? Pornography and Technological Innovation (edited by Johannes Grenzfurthner, Günther Friesinger and Daniel Fabry, 2008) Screw the System (edited by Johannes Grenzfurthner, Günther Friesinger, Daniel Fabry) Subvert Subversion. Politischer Widerstand als kulturelle Praxis (edited by Johannes Grenzfurthner, Günther Friesinger, 2020) The Wonderful World of Absence (edited by Günther Friesinger, Johannes Grenzfurthner, Daniel Fabry, 2011) Anima Ex Machina by Michael Marrak (edited by Günther Friesinger, Johannes Grenzfurthner, 2021) Femi und die Fische by Tommy Schmidt (edited by Günther Friesinger, Johannes Grenzfurthner, 2022) Filmography (feature-length films) Solvent (2024) – directed by Johannes Grenzfurthner Hacking at Leaves (2024) – directed by Johannes Grenzfurthner Razzennest (2022) – directed by Johannes Grenzfurthner Masking Threshold (2021) – directed by Johannes Grenzfurthner Glossary of Broken Dreams (2018) – directed by Johannes Grenzfurthner Traceroute (2016) – directed by Johannes Grenzfurthner Die Gstettensaga: The Rise of Echsenfriedl (2014) – directed by Johannes Grenzfurthner Kiki and Bubu: Rated R Us (2011) – directed by Johannes Grenzfurthner Exhibitions and festivals (examples) Arad-II, Art Basel Miami Beach / USA (2005) Die waren früher auch mal besser: monochrom (1993-2013) / Austria (2013) Dilettanten. Forum Stadtpark, Graz / Austria - Steiermärkisches Landesmuseum Joanneum, Graz / Austria - Steirischer Herbst 2002, Graz / Austria (2002) Junge Szene 98. Vereinigung Bildender Künstler, Wiener Secession, Vienna / Austria (1998) MEDIA FORUM/Moscow International Film Festival / Moscow / Russia (2008) Neoist World Congress. Kunsthalle Exnergasse, Vienna / Austria (1997) Roboexotica (Festival for Cocktail Robotics, Vienna, 1999-) Robotronika. Public Netbase t0 Media~Space!, Institut für neue Kulturtechnologien, Vienna / Austria (1998) Seriell Produziertes. Diagonale (Austrian Film Festival), Graz / Austria (2000) techno(sexual) bodies / videotage / Hong Kong / China (2010) The Influencers, Center for Contemporary Culture / Barcelona / Spain (2008) The Thomann Project. São Paulo Art Biennial, São Paulo / Brazil (2002) Unterspiel, Contemporary Art Gallery, Vancouver / Canada (2005) world-information.org. Museum of Contemporary Art, Brussels / Belgium (2000) and Belgrade / Serbia (2003) Awards (examples) 1st prize of 'E55' (Vienna/Berlin) 1999.
aniMotion Award Honorary Mention (Sibiu, Romania) in the Interactive Tales category for Monochrom's "Soviet Unterzoegersdorf/Sector 1/The Adventure Game" (2007). Art Award of FWF Austrian Science Fund (2013). Coca-Cola Light Art Edition (2006). Media Forum/Moscow International Film Festival, Jury Special Mention (Moscow, Russia) for Monochrom's "The Void's Foaming Ebb" (2008). Nestroy Theatre Prize (Vienna) 2005 (together with 'The Great Television Swindle' by maschek and 'Freundschaft' by Steinhauer and Henning) for Udo 77 (2004). Official Honoree for NetArt and Personal Blog/Culture in The 2009 Webby Awards, International Academy of Digital Arts and Sciences (2009). Videomedeja Awards Special Mention (Novi Sad, Serbia) in the Net/Software category for Monochrom's "Soviet Unterzoegersdorf/Sector 1/The Adventure Game" (2006). See also Notes References External links Detailed Interview with Johannes Grenzfurthner of Monochrom (by Marc Da Costa/Furtherfield): part 1, part 2 and part 3 Monochrom in English Monochrom in German (with different blogs, information, etc. than the English site) TEDx talk by Johannes Grenzfurthner on Monochrom, art and subversion Organizations established in 1993 1993 establishments in Austria Cultural organisations based in Austria Austrian artist groups and collectives Austrian activists Austrian bloggers Austrian contemporary artists Postmodern artists Culture jamming Hoaxes Hoaxes in Germany Hoaxes in the United States Political art Politics and technology Underground publishers Anti-consumerist groups Anti-corporate activism Internet-based activism Robotic art Performance artist collectives Culture jamming techniques Impostors Hacker culture Nerd culture Net.artists Film production companies of Austria Webby Award winners Creative Commons-licensed authors Artist residencies
Monochrom
[ "Technology" ]
10,476
[ "Multimedia", "Net.artists" ]
1,029,423
https://en.wikipedia.org/wiki/Megastructure
A megastructure is a very large artificial object, although the limits of precisely how large vary considerably. Some apply the term to any especially large or tall building. Some sources define a megastructure as an enormous self-supporting artificial construct. The products of megascale engineering or astroengineering are megastructures. Most megastructure designs could not be constructed with today's level of industrial technology, which makes them examples of speculative (or exploratory) engineering. Those that could be constructed would readily qualify as megaprojects. Megastructures are also an architectural concept popularized in the 1960s, in which a city is encased in a single building or a relatively small number of interconnected buildings. Such arcology concepts are popular in science fiction. Megastructures often play a part in the plot or setting of science fiction movies and books, such as Rendezvous with Rama by Arthur C. Clarke. In 1968, Ralph Wilcoxen defined a megastructure as any structural framework into which rooms, houses, or other small buildings can later be installed, uninstalled, and replaced; and which is capable of "unlimited" extension. This type of framework allows the structure to adapt to the individual wishes of its residents, even as those wishes change with time. Other sources define a megastructure as "any development in which residential densities are able to support services and facilities essential for the development to become a self-contained community". Many architects have designed such megastructures; some of the more notable architects and architectural groups include the Metabolist Movement, Archigram, Cedric Price, Frei Otto, Constant Nieuwenhuys, Yona Friedman, and Buckminster Fuller. Proposed Atlantropa, a hydroelectric dam to be built across the Strait of Gibraltar, lowering the surface of the Mediterranean Sea by as much as 200 meters. Trans-Global Highway, a proposed highway system that would link all six of the inhabited continents on Earth. The highway would network new and existing bridges and tunnels, not only improving ground transportation but also potentially offering a conduit for utility pipelines. Cloud Nine is Buckminster Fuller's proposal for a tensegrity sphere a mile in radius, large enough that heating its internal air only one degree above ambient temperature would make it float in the sky, creating habitats for mini-cities of thousands of people in each "Cloud Nine" (a rough numerical check appears at the end of this list). Fuller also proposed a marine analog consisting of a hollow terraced floating tetrahedron of reinforced concrete measuring one mile from vertex to vertex, supporting a population of one million living in air-deployed residential modules on the exterior, with the requisite infrastructure providing utilities (water, power, sewerage, etc.) inside. The modules would have standardized utility ports so as to be completely livable within minutes of arrival, and could be subsequently detached and moved to other such cities. The Line, a 170-kilometer-long linear settlement in Saudi Arabia, a smart city currently in the early stages of construction.
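A rough numerical check of the Cloud Nine buoyancy claim above (a sketch under stated assumptions, not drawn from Fuller's own figures): buoyant lift grows with the cube of the radius while the mass of the envelope grows only with its square, so a sufficiently large sphere can float on a one-degree temperature difference. In Python, assuming sea-level air and ideal-gas behavior:

import math

R = 1609.0         # sphere radius in metres (one mile)
T_ambient = 288.0  # ambient air temperature in kelvin (about 15 C)
dT = 1.0           # interior air heated one kelvin above ambient
rho_air = 1.225    # sea-level air density in kg/m^3

volume = (4.0 / 3.0) * math.pi * R ** 3
area = 4.0 * math.pi * R ** 2

# At constant pressure, warm air is less dense by roughly the fractional
# temperature rise (ideal gas law), so the net lift available is:
lift_mass = volume * rho_air * (dT / T_ambient)

print(round(lift_mass / 1000))     # about 74,000 tonnes of lift
print(round(lift_mass / area, 1))  # about 2.3 kg per m^2 of surface

Under these assumptions the envelope, structure, and payload together must average no more than roughly 2.3 kilograms per square metre of surface, which is why the concept is only claimed to work at very large radii.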
Theoretical A number of theoretical structures have been proposed which may be considered megastructures. Stellar scale Most stellar-scale megastructure proposals are designs to make use of the energy from a Sun-like star while possibly still providing gravity or other attributes that would make them attractive to an advanced civilization. The Alderson disk is a theoretical structure in the shape of a disk, whose outer radius is equivalent to the orbit of Mars or Jupiter and whose thickness is several thousand kilometers. A civilization could live on either side, held by the gravity of the disk, and still receive sunlight from a star bobbing up and down in the middle of the disk. A Dyson sphere (also known as a Dyson shell) refers to a structure or mass of orbiting objects that completely surrounds a star to make full use of its solar energy. A Matrioshka brain is a collection of multiple concentric Dyson spheres which make use of a star's energy for computing. A stellar engine either uses the temperature difference between a star and interstellar space to extract energy or serves as a Shkadov thruster. A Shkadov thruster accelerates an entire star through space by selectively reflecting or absorbing light on one side of it. Topopolis (also known as Cosmic Spaghetti) is a large tube that rotates to provide artificial gravity. A Ringworld (or Niven Ring) is an artificial ring encircling a star, rotating faster than orbital velocity to create artificial gravity on its inner surface. A non-rotating variant is a transparent ring of breathable gas, creating a continuous microgravity environment around the star, as in the eponymous Smoke Ring. Related structures which might not be classified as individual stellar megastructures, but occur on a similar scale: A Dyson swarm is a Dyson sphere made up of separately orbiting elements (including large habitats) rather than a single continuous shell. A Dyson bubble is a Dyson sphere in which the individual elements are statites, non-orbital objects held aloft by the pressure of sunlight. Planetary scale A Bishop Ring, Halo or Orbital is a space habitat similar to but much smaller than a Niven Ring. Instead of being centered on a star, it is in orbit around the star, and its diameter is typically on the order of a planet's. By tilting the ring relative to its orbit, the inner surface would experience a nearly conventional day and night cycle. Due to its enormous scale, the habitat would not need to be fully enclosed like the Stanford torus; instead its atmosphere would be retained solely by centripetal gravity and side walls, allowing an open sky. Globus Cassus is a hypothetical project for the transformation of planet Earth into a much bigger, hollow, artificial world with the ecosphere on its inner surface; the model is intended as a tool for understanding how the real world functions. Shellworlds or paraterraforming are inflated shells holding high-pressure air around an otherwise airless world to create a breathable atmosphere. The pressure of the contained air supports the weight of the shell. Completely hollow shell worlds, also called gravitational balloons, can also be created on a planetary or larger scale by contained gas alone, as long as the outward pressure from the contained gas balances the gravitational contraction of the entire structure, resulting in no net force on the shell. The scale is limited only by the mass of gas enclosed; the shell can be made of any mundane material. The shell can have an additional atmosphere on the outside. The term can also refer to terraformed or artificial planets with multiple concentric layers.
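Several of the habitats above and below (the Bishop Ring and ringworld above, the wheel stations and cylinders in the next section) rely on spin for simulated gravity, and the required rotation rate follows from the centripetal relation a = ω²r. A minimal Python sketch; the radii are rough, commonly cited figures rather than canonical values:

import math

def rotation_period(radius_m, accel=9.81):
    # a = omega^2 * r, so omega = sqrt(a / r); the period is 2*pi / omega.
    omega = math.sqrt(accel / radius_m)
    return 2.0 * math.pi / omega

for name, radius_m in [("Stanford torus", 9.0e2),
                       ("O'Neill cylinder", 4.0e3),
                       ("Bishop Ring", 1.0e6),
                       ("ringworld (1 AU)", 1.5e11)]:
    minutes = rotation_period(radius_m) / 60.0
    print(name, round(minutes, 1), "minutes per revolution")

The scaling explains why small stations must spin quickly (about one revolution per minute for a Stanford torus) while a 1-AU ringworld takes more than a week per revolution to produce the same one gravity on its inner surface.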
Orbital structures An orbital ring is a dynamically elevated ring placed around the Earth that rotates at an angular rate faster than orbital velocity at that altitude; stationary platforms can be supported by the excess centripetal acceleration of the super-orbiting ring (similar in principle to a launch loop), and ground tethers can be supported from the stationary platforms. The Bernal sphere is a proposal for a spherical space colony with a maximum diameter of 16 kilometers. It would have simulated gravity at the equator, falling gradually to zero g at the poles. Rotating wheel space stations, such as the Stanford torus, are wheel-like space stations which produce artificial gravity by rotation. Typical designs include transport spokes to a central hub used for docking and/or micro-gravity research. The related concepts, O'Neill and McKendree cylinders, are both pairs of counter-rotating cylinders containing habitable areas inside and creating 1 g on their inner surfaces via centripetal acceleration. The scale of each concept came from estimating the largest 1 g cylinder that could be built from steel (O'Neill) or carbon fiber (McKendree). Hollowed asteroids (or bubble worlds or terraria) are spun on their axes for simulated gravity and filled with air, allowing them to be inhabited on the inside. In some concepts, the asteroid is heated to molten rock and inflated into its final form. A stellaser is a star-powered laser or maser. Trans-orbital structures A skyhook is a very long tether that hangs down from orbit. A space elevator is a tether that is fixed to the ground, extending beyond geostationary orbital altitude, such that centripetal force exceeds gravitational force, leaving the structure under slight outward tension. A space fountain is a dynamically supported structure held up by the momentum of masses which are shot up to the top at high speeds from the ground. A launch loop (or Lofstrom loop) is a dynamically supported, 2000-km-long iron loop that projects up in an arc to an altitude of 80 km and is ridden by maglev cars that accelerate to orbital velocity. StarTram Generation 2 is a maglev launch track extending from the ground to above 96% of the atmosphere's mass, supported by magnetic levitation. A rotovator is a rotating tether where the lower tip is moving in the opposite direction to the tether's orbital velocity, reducing the difference in velocity relative to the ground, and hence reducing the velocity of rendezvous; the upper tip is likewise moving at greater than orbital velocity, allowing propellantless transfer between orbits. Around an airless world, such as the Moon, the lower tip can actually touch the ground with zero horizontal velocity. As with any momentum exchange tether, orbital energy is gained or lost in the transfer.
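The space elevator entry above rests on a simple balance: a point co-rotating with the Earth feels gravity GM/r² pulling inward and needs ω²r of centripetal acceleration to stay on its circle, and the two cancel exactly at geostationary radius. An illustrative Python sketch using standard constants, not taken from any particular elevator design:

import math

GM = 3.986004418e14              # Earth's gravitational parameter, m^3/s^2
omega = 2.0 * math.pi / 86164.1  # Earth's sidereal rotation rate, rad/s

def net_outward_accel(r):
    # Positive values push a co-rotating mass outward; negative values pull it inward.
    return omega ** 2 * r - GM / r ** 2

r_geo = (GM / omega ** 2) ** (1.0 / 3.0)
print(round(r_geo / 1000))                 # about 42,164 km from Earth's centre
print(round(net_outward_accel(1.0e7), 2))  # about -3.93 m/s^2 at 10,000 km (net inward)
print(round(net_outward_accel(6.0e7), 2))  # about +0.21 m/s^2 at 60,000 km (net outward)

This is why the tether must extend well past geostationary altitude: everything below that radius weighs the cable down, while the portion above it provides the outward pull that keeps the structure taut.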
Fictional A number of structures have appeared in fiction which may be considered megastructures. Stellar scale The Dyson sphere has appeared in many works of fiction, including the Star Trek universe. Larry Niven's series of novels beginning with Ringworld centered on, and originated the concept of, a ringworld or Niven ring: an artificial ring with a radius roughly equal to the radius of the Earth's orbit (1 AU). A star is present in the center, and the ring spins to create g-forces, with inner walls to hold in the atmosphere. The structure is unstable, and required the author to include workarounds in subsequent novels set on it. In the manga Blame! the megastructure is a vast and chaotic complex of metal, concrete, stone, etc., that covers the Earth and assimilates the Moon, and eventually expands to encompass a volume greater than the orbit of Jupiter. In White Light by William Barton and Michael Capobianco, a topopolis is presented as taking over the entire universe. In the Heechee Saga series by Frederik Pohl, a race of pure energy beings called The Foe have constructed the Kugelblitz, a black hole made of energy rather than matter. In the Xeelee series of books by Stephen Baxter, the eponymous alien race constructed the Ring, a megastructure made of cosmic strings spanning over 10 million light years. In the video game Freelancer, the Dom'Kavosh built a Dyson shell inhabited by a drone race of their own creation, the Nomads; it is reached via a hypergate built by the same creators. The Saga of Cuckoo series novel Wall Around a Star mentions a proposal to build a super Dyson sphere completely enclosing the Galactic Center. The title of the novel Helix by Eric Brown directly references a stellar-scale helical megastructure. Different types of environments and habitats are interspersed along the structure, while their varying distance from the central star affects the climate. The player's central quest in the computer game Dyson Sphere Program is to construct a Dyson sphere. Gameplay focuses on constructing planetary-scale factories as a means towards this end. The Quarg in the game Endless Sky are shown building a massive ring around one of their stars, most likely around one astronomical unit in diameter. A completed version of this can also be found in another location. In the computer games Space Empires IV and Space Empires V, the player can construct sphereworlds and ringworlds around stars. Dennis E. Taylor's 2020 novel Heaven's River features a topopolis built around an alien star. Different segments of the structure are built with artificial climate and weather. Planetary and orbital scale Several structures from the fictional Halo universe: The original twelve Halos, seen in Halo: Cryptum, were 30,000 kilometers in diameter; a separate array of six Halos are 10,000 kilometers in diameter, with one of the original twelve later being reduced to this size in Halo: Primordium. The Lesser Ark is a 127,530 km diameter structure from which the Halo Array can be activated and which is capable of building 10,000 km Halos. The "greater" Ark, seen in Cryptum and Primordium, is capable of producing 30,000 km Halos. Onyx is an artificial planet made entirely out of Forerunner Sentinels (advanced replicating robots). At its core is a "shield world", contained within slipstream space, that is approximately one astronomical unit in diameter. The much smaller Shield World 0459 (approximately 1,400 km in diameter) is the setting for the latter half of Halo Wars. A third shield world, Requiem, is the primary setting for Halo 4. Requiem is an artificial hollow planet encased in a kind of Dyson sphere. Halo 5: Guardians introduces a fourth shield world, Genesis. High Charity, the Covenant's mobile planetoid station. In the Doctor Who episodes The Stolen Earth and Journey's End, a planet-sized space station known as The Crucible is built by the Daleks, a genocidal alien race, and facilitates the reality bomb, a weapon meant to erase the entire multiverse from existence. The Crucible also held enough Daleks to devastate the universe they were in even without the bomb, according to the Doctor.
In Sonic Adventure 2 and Shadow the Hedgehog, the Eclipse Cannon is a planet-destroying weapon of mass destruction built into the Space Colony Ark. Buster Machine III from Gunbuster. The Culture Orbital from The Culture. In the 2013 CGI anime film Space Pirate Captain Harlock, the Jovian Accelerator is an ancient, Death Star-like weapon of mass destruction that uses energy from Jupiter's atmosphere to create a large beam of intense light strong enough to destroy an entire planet. Sidonia, the main ship and home to millions of humans 1,000 years in the future in the Knights of Sidonia manga and anime series, was created after the destruction of the Earth, along with other unnamed seed ships. Trantor, the capital of an interstellar empire in Isaac Asimov's Foundation series, is an ecumenopolis, a planet entirely covered in one huge metal-clad building, with only one small green space: the Emperor's palace grounds. The Ori Supergate, seen in a number of episodes of Stargate SG-1, could be classed as a megastructure. In The Hitchhiker's Guide to the Galaxy series, Earth, as well as other planets, was an artificial megastructure. Earth was intended to function as a gigantic computer and was built by a race of beings who made their living by manufacturing other planets. Mata Nui in the BIONICLE franchise is classifiable as a megastructure. In the story, he is a massive robot as tall as a planet, and inside his body every inhabitant of the BIONICLE universe (Matoran, Toa, etc.) lives, unaware that they live inside a massive, space-traveling entity. In the Robotech Sentinels novels, Haydon IV is an artificially constructed cyber-planet with android citizens. In the Invader Zim episode "Planet Jackers", two aliens surround the Earth with a fake sky in order to throw it into their sun. In the 2017 video game Destiny 2, the fleet of Dominus Ghaul, the ruler of the Cabal Empire, features a massive super-weapon named the Almighty, whose wingspan is said to be as wide as the planet Mercury. The Almighty links itself to a solar system's star on the quantum level via an energy beam and breaks it down into usable fuel, warping to another system before the star goes supernova. Nightmare's fortress from Kirby: Right Back at Ya! can be classified as a megastructure because it is the size of a small planet. In several works, Arthur C. Clarke writes about a colossal hollow cylinder, first described in Rendezvous with Rama (1973), and inhabited by different races. The Citadel in the Mass Effect universe is an enormous space station constructed by an ancient race of machines called the Reapers millions of years before the games in the series. At the time of Mass Effect 2, its population is 13.2 million. In the game Airforce Delta Strike, a large space elevator called the Chiron Lift is used to send supplies into outer space. In the game Half-Life 2, an alien empire, the Combine, invaded Earth through the border world Xen; after the invasion, an event named the Seven Hour War, they erected a large tower 2.5 miles tall, the Combine Citadel. In the Warhammer 40,000 series, the Imperial Palace (site of the Golden Throne wherein the Emperor of Mankind is kept alive indefinitely) could be considered a megastructure. The palace is a complex of continent-wide structures, with the Golden Throne located in an area stretching across the whole of the Himalayan mountains. In the film Elysium, a luxury space station (a Bishop Ring) called Elysium houses the wealthy population of the human species.
Large rotating space stations are a staple of science fiction, including Arthur C. Clarke's novel 2001: A Space Odyssey, the battle school from Ender's Game, and the eponymous Babylon 5. Hollowed asteroids feature in various fiction, such as Kim Stanley Robinson's novel 2312, Larry Niven's Known Space, and the works of Golden Age SF writers like Clarke and Asimov. In the 2022 film Moonfall, Earth's moon is knocked from its orbit and begins to circle closer to Earth. A conspiracy theorist believes the Moon is a Dyson sphere megastructure and turns out to be correct. Star Wars (1977 – present, American sci-fi franchise) The Death Star from Star Wars is 160 km in diameter, followed by a second Death Star 200 km in diameter. Starkiller Base was constructed from the dwarf planet Ilum; depending on the source, its diameter is 660 to 830 km. The Centerpoint Station was a 350 km spherical space station at the Lagrangian point between the planets Talus and Tralus in the Corellia system. It was a gigantic and ancient hyperspace tractor beam with which a vanished race, known as the Celestials, created the Corellia star system. With the help of the tractor beam, whole planets could be moved through hyperspace and arranged into their actual orbits around the central star. On the other hand, the same technology could be used as a weapon to destroy even stars. On the inside of the main sphere, a huge living space called Hollowtown was home to many people, in a similar fashion as on the inside of a Dyson sphere. A second, smaller megastructure of near-identical design, called Sinkhole Station, was also built shortly after the construction of Centerpoint Station. Its purpose was to maintain the stability of The Maw, a black hole cluster constructed using Centerpoint Station. Coruscant is an ecumenopolis: the planet is entirely covered by, and essentially is, a single city. It serves as the capital first of the Republic and later of the First Galactic Empire. The Galaxy Gun, a large space station designed to destroy entire planets from across the galaxy, could be considered a megastructure, as it is more than seven kilometres long. The Star Forge from Star Wars: Knights of the Old Republic. Glavis Ringworld is a ring-shaped space station around a star in The Book of Boba Fett. The Core World Kuat was circled by an orbital ring used primarily as a shipyard. There are multiple instances of hollowed asteroids, such as Hammer Station and the Eye of Palpatine. Stellaris (2016 video game) Stellar-Scale Megastructures A Dyson Sphere is a megastructure added in the Utopia expansion, capable of producing massive amounts of energy at the cost of rendering the solar system uninhabitable, except for Habitats. A Ring World is a megastructure added in the Utopia expansion, offering a solar-system-sized habitat equivalent to four massive habitable planets. A Matter Decompressor is a megastructure added in the MegaCorp expansion, allowing the owner to harvest massive amounts of minerals from the cores of black holes. A Mega-Shipyard is a massive shipyard in orbit of a star, capable of producing ships much faster than average shipyards.
The Aetherophasic Engine is a megastructure built by crisis aspirants, capable of destroying the entire galaxy as a side effect of allowing the race which constructed it to ascend to the "Shroud", an alternate dimension in the game composed of nearly pure energy. A Quantum Catapult is a large megastructure built around a pulsar or neutron star that is capable of sending fleets instantly across the galaxy. It is not completely accurate, however, and can thus send fleets away from their intended destinations. Orbital/Planetary Scale Megastructures A Science Nexus is a massive orbital science laboratory which greatly expands the empire's science production. A Sentry Array is a massive orbital station that gives the player sight over the entire in-game galaxy. Habitats are orbital structures which serve the purpose of a small planet. A Mega Art Installation is a megastructure that improves overall amenities and happiness in the player's empire. A Strategic Coordination Center is a megastructure that increases naval capacity and ship speed, as well as adding several other bonuses. An Interstellar Assembly acts as a hub for the game's Galactic Community, increases the player's diplomatic weight, adds more envoys, and improves other empires' opinion of the player. Gateways allow for near-instantaneous travel across the galaxy. In addition, there is a unique form of Gateway called "L-Gates" which link up to an extragalactic cluster of stars. Orbital Rings are massive ring structures built around planets that afford extra protection and increase the output of the planet. Hyper Relays are large structures that allow ships to jump to identical Hyper Relays in adjacent systems instead of using the existing hyperlane connections, thereby avoiding having to traverse systems at sublight speeds. In addition, many megastructures can also generate in "ruined" versions, which the player can later repair. See also Arcology Skyscraper References External links National Geographic Channel Megastructure.org Megastructure Art Stellaris Wiki Megastructures in Stellaris Exploratory engineering Science fiction themes
Megastructure
[ "Technology" ]
4,705
[ "Exploratory engineering", "Megastructures" ]
1,029,427
https://en.wikipedia.org/wiki/Alebrije
Alebrijes are brightly colored Mexican folk art sculptures of fantastical (fantasy/mythical) creatures. Description The first alebrijes originated in Mexico City, created by 'cartonero' artist Pedro Linares. Linares often said that in 1943, he fell very ill. While he was in bed unconscious, he dreamt of a strange place resembling a forest. There, he saw trees, rocks, and clouds that suddenly transformed into strange, unknown animals. He saw "a donkey with butterfly wings, a rooster with bull horns, and a lion with an eagle head," and all of them were shouting one word: "Alebrijes! Alebrijes! Alebrijes!" Upon recovery, he began recreating these chimera-like creatures that he had seen in cartonería, the making of three-dimensional sculptures with different types of paper, strips of paper, and "engrudo" (glue made out of wheat flour and water). His work caught the attention of the artists Diego Rivera and Frida Kahlo, who had been purchasing Judas figures from him. In the 1980s, British filmmaker Judith Bronowski arranged an itinerant Mexican art craft demonstration workshop in the United States featuring Pedro Linares, Manuel Jiménez, and Maria Sabina, a textile artisan from Oaxaca. Although the Oaxaca Valley area already had a history of carving animals and other types of figures from wood, artisans from Oaxaca learned of the alebrijes papier-mâché sculptures when Bronowski's workshop took place. Linares demonstrated his designs on family visits. These were adapted to the carving of a local wood called copal, a wood traditionally said to be magical. In the 1990s, the artisans of Oaxaca began to use the word 'alebrije' to designate their figures carved in wood. The papier-mâché-to-wood carving adaptation was pioneered by Arrazola native Manuel Jiménez. This version of the craft has since spread to several other towns, most notably San Martín Tilcajete and La Unión Tejalapan, and has become a significant source of income for the area, especially for Tilcajete. The success of the craft, however, has led to the depletion of the native copal trees. Attempts to remedy this with reforestation efforts and management of wild copal trees have had limited success. The three towns most closely associated with alebrije production in Oaxaca have produced a number of notable artisans such as Manuel Jiménez, Jacobo Angeles, Julia Fuentes, and Miguel Sandiego. Original papercraft alebrijes Alebrijes originated in Mexico City in 1936. The first alebrijes, as well as the name itself, are attributed to Pedro Linares, an artisan from Mexico City (Distrito Federal), who specialized in making piñatas, carnival masks and "Judas" figures from cartonería, an ancient and widespread papercraft often confused with papier-mâché. He sold his work in markets such as the one in La Merced. In 1936, when he was 30 years old, Linares fell ill with a high fever, which caused him to hallucinate. In his fever dreams, he was in a forest with rocks and clouds, many of which turned into wild, unnaturally colored creatures, frequently featuring wings, horns, tails, fierce teeth and bulging eyes. He heard a crowd of voices repeating the nonsense word "alebrije". After he recovered, he began to re-create the creatures he'd seen, using papier-mâché and cardboard. Eventually, a Cuernavaca gallery owner discovered his work. This brought him to the attention of Diego Rivera and Frida Kahlo, who began commissioning more alebrijes.
The tradition grew considerably after British filmmaker Judith Bronowski's 1975 documentary on Linares. Linares received Mexico's National Arts and Sciences Award in the Popular Arts and Traditions category in 1990, two years before he died. This inspired other alebrije artists, and Linares' work became prized both in Mexico and abroad. Rivera said that no one else could have fashioned the strange figures he requested; work done by Linares for Rivera is now displayed at the Anahuacalli Museum in Mexico City. The descendants of Pedro Linares, such as his son Miguel Linares, his granddaughters Blanca and Elsa Linares, and his grandson Ricardo Linares, live in Mexico City near the Sonora Market and carry on the tradition of making alebrijes and other figures from cardboard and papier-mâché. Their customers have included the Rolling Stones, David Copperfield, and filmmaker Guillermo del Toro. The Stones gave the family tickets to their show. Various branches of the family occupy a row of houses on the same street. Each family works in its own workshop in its own house, but they will lend each other a hand with big orders. Demand rises and falls; sometimes there is no work, and sometimes families work 18 hours a day. The original designs for Pedro Linares' alebrijes have fallen into the public domain. However, according to Chapter Three of the 1996 Mexican federal copyright law, it is illegal to sell crafts made in Mexico without acknowledging the community and region they are from, or to alter the crafts in a way that could be interpreted as damaging to the culture's reputation or image. The law applies to the commercialization of the crafts as well as to their public exhibition and the use of their images. This law is rarely enforced, however; most craft sellers in Mexico rarely disclose the origin of their products. The name "alebrijes" is used for a wide variety of crafts, even though the Linares family has sought to gain control over the name. The family says that pieces which are not made by them and do not come from Mexico City should state so. The Linares family continues to export their work to the most important galleries showing Mexican art worldwide. For example, "Beasts and Bones: The Cartonería of the Linares Family" in Carlsbad, California, featured about seventy alebrijes and was so popular that it was extended by several weeks. Because a variety of artists and artisans have been creating alebrijes in their own styles, the craft has become part of Mexico's folk art repertoire. No two alebrijes are exactly alike. Outside of the Linares family, one of the most noted alebrije artists is Susana Buyo, who learned to work with cardboard and papier-mâché at one of the Linares family workshops. Known as the "Señora de los Monstruos" (Lady of the Monsters) by the local children in Condesa, an upscale neighborhood of Mexico City, she is a native Argentine and naturalized Mexican citizen. Her work can be found across Mexico City and elsewhere, including Europe. Her work differs from that of the Linares in that many of her designs include human contours, and many have expressions more tender than terrifying. She also uses nontraditional materials such as feathers, fantasy stones, and modern resins, both for novelty and for durability. While Pedro Linares dreamed up the creatures, they did not surface in a vacuum. Similarities and parallels can be drawn between alebrijes and various supernatural creatures from Mexico's indigenous and European past.
In pre-Hispanic art, brightly colored images were often fantastic and macabre. Influences can also be seen from Mexico City's Chinatown, especially in the dragons, and from Gothic art, such as gargoyles. Red cardboard demons called judas, which Linares made, are still made to be burned in Mexico during Holy Week in purification rituals. In more recent Mexican culture, the artist Julio Ruelas and the graphic artist and commentator José Guadalupe Posada created fantastic and sometimes terrifying images. Alebrijes, especially the monsters, have gained a reputation for "scaring away bad spirits" and protecting the home. Some, like master craftsman Christian David Mendez, claim that there is a certain mysticism involved in the making and owning of alebrijes, with parts of certain animals representing human characteristics. The annual Monumental Alebrije Parade in Mexico City A more recent phenomenon, the annual Monumental Alebrije Parade, has been sponsored by the Museo de Arte Popular in Mexico City since 2007. The 2009 parade featured more than 130 giant alebrijes made of wood, cardboard, paper, wire, and other materials, and marched from the Zocalo in the historic center of the city to the Angel of Independence monument on Paseo de la Reforma. Entries by artisans, artists, families and groups each year have gotten bigger, more creative and more numerous, with names like: "Devora Stein" by Uriel López Baltazar "Alebrhijos" by Santiago Goncen "Totolina", by Arte Lado C "AH1N1" by Taller Don Guajo "Volador", by Taller de Plástica El Volador "La mula del 6" by Daniel Martínez Bartelt "La gárgola de la Atlántida" by Juan Carlos Islas "Alebrije luchador" by Ricardo Rosales They are accompanied by bands playing popular Mexican music. At the end of the parade, the pieces are lined up on Paseo de la Reforma for judging and displayed for two weeks. The 2010 alebrije parade had themes related to the Bicentennial of the Independence of Mexico and the Centennial of the Mexican Revolution, although Walter Boelsterly, head of the Museo de Artes Populares, concedes that the theme may require a bit of tolerance, as it can result in revered figures such as Miguel Hidalgo and Ignacio Allende being depicted with animal parts. He states that the aim is to celebrate and not to mock. In addition to the annual parade, the Museum has sponsored alebrije shows such as the three-meter-tall alebrije which captured attention at the Feria Internacional del Libro in Bogotá. The word "alebrije" was not known in Colombia, so the locals dubbed it a "dragoncito" (little dragon). Along with the "dragoncito", 150 other, smaller pieces of Mexican crafts were shown. Carved wood alebrijes Development of the craft in Oaxaca Many rural households in the Mexican state of Oaxaca have prospered over the past three decades through the sale of brightly painted, whimsical wood carvings they call alebrijes to international tourists and the owners of ethnic arts shops in the United States, Canada, and Europe. What are called "alebrijes" in Oaxaca are a marriage of native woodcarving traditions and influence from Pedro Linares' work in Mexico City. Pedro Linares was originally from Mexico City (Distrito Federal). In the 1980s, British filmmaker Judith Bronowski arranged an itinerant demonstration workshop in the United States featuring Pedro Linares, Manuel Jiménez and Maria Sabina, a textile artisan from Oaxaca.
Although the Oaxaca valley area already had a history of carving animals and other types of figures from wood, it was when Bronowski's workshop took place that artisans from Oaxaca came to know the alebrije papier-mâché sculptures, and Linares' designs were then adapted to the carving of a local wood called copal. This adaptation was pioneered by Arrazola native Manuel Jiménez. This version of the craft has since spread to a number of other towns, most notably San Martín Tilcajete and La Unión Tejalapan, and has become a major source of income for the area, especially for Tilcajete. Given the scale of success with alebrijes, populations of native copal trees have decreased over the years. Efforts through reforestation and the management of the trees have yet to create any significant growth in population. The three towns most closely associated with alebrije production in Oaxaca have produced a number of notable artisans such as Manuel Jiménez, Jacobo Angeles, Martin Sandiego, Julia Fuentes and Miguel Sandiego. One of the most important things about the fantastical creatures carved of wood is that every piece is removable; this is how one can tell a genuine piece carved by one of the original great carvers. The later carvers did not learn the technique of making each piece fit so well that it could be removed and put back in again and again. Those pieces have more than tripled in value. The painting on these figures is also more intense and varied. The first to copy the fantastic forms and bright colors was Manuel Jiménez, who carved the figures in local copal wood rather than using paper. Animal figures had been carved in the central valleys area of Oaxaca by the Zapotecs since the pre-Hispanic period. Totems of local animals were carved for luck or religious purposes, as well as hunting decoys. Figures were also carved for children as toys, a tradition that continued well into the 20th century. After the craft became popular in Arrazola, it spread to Tilcajete and from there to a number of other communities, and now the three main communities are San Antonino Arrazola, San Martin Tilcajete and La Union Tejalapan, each of which has developed its own style. The carving of wood figures did not have a name, so the name "alebrije" eventually became adopted for any carved, brightly colored figure of copal wood, whether it is of a real animal or not. To make the distinction, the carvings of fantastic creatures, closer to Linares' alebrijes, are now sometimes called "marcianos" (lit. Martians). Oaxacan alebrijes have eclipsed the Mexico City version, with a large number of stores in and around the city of Oaxaca selling the pieces, and it is estimated that more than 150 families in the same area make a living making the figures. Woodcarving, along with other crafts in Oaxaca, grew in importance as the state opened up to tourism. This started in the 1940s with the Pan-American Highway and has continued to this day, as the construction of more roads, airports and other transportation infrastructure coincided with rising prosperity in the U.S. and Canada, making Mexico an affordable exotic vacation destination. Hippies began buying Oaxacan woodcarvings in the 1960s. Prior to the 1980s, most of the woodcarvings depicted the natural and spiritual world of the communities, featuring farm animals, farmers, angels and the like.
These pieces, now referred to as "rustic" (nistico), were carved and painted in a simple manner. Later known for their alebrijes, carvers such as Manuel Jimenez of Arrazola, Isadoro Cruz of Tilcajete and Martin Sandiego of La Union began by carving animals as youths, often while doing other chores such as tending sheep. By the 1960s and 1970s, these carvers had enough of a reputation to sell their work in the city of Oaxaca. As more dealers shipping to other parts of Mexico and abroad visited the rural villages, more exotic animals such as lions, elephants and the like were added, and eventually came to dominate the trade. Eventually, traditional paints gave way to acrylics as well. Another development that encouraged woodcarving was the artisans' contests held by the state of Oaxaca in the 1970s, which encouraged carvers to try new ideas in order to win prizes and sell their pieces to state museums. In the 1970s and early 1980s, carvers in the three villages sold pieces mostly to store owners in Oaxaca, with only one carver, Manuel Jimenez, carving full-time. Most other carvers used the craft to supplement incomes from farming and wage labor. It was also considered to be a male occupation. In the mid-1980s, the influence of the Linares alebrijes was growing in popularity, and wholesalers and store owners from the United States began to deal with artisans in Oaxaca directly. The desire of the foreign merchants for non-indigenous animals and the newly popular alebrijes affected the market. By 1990, woodcarving had begun to boom, with most households in Arrazola and Tilcajete earning at least part of their income from the craft. La Union was less successful in attracting dealers and tourists. The boom had a dramatic economic effect, shifting the economies of Arrazola and Tilcajete away from farming and towards carving. It also affected the carvings that were being produced. Carvings became more complicated and paintings more ornate as families competed against each other. Specialization also occurred, with neophyte carvers looking for a niche to compete with already established carvers. The craft continued to become established in the 1990s as more families carved and more tourists came to Oaxaca with the building of new roads. Some of these new Oaxacan crafters have extended the design to smooth, abstractly painted yet realistic animals, especially the Mendoza family (Luis Pablo, David Pablo and Moises Pablo a.k.a. Ariel Playas), creating a new generation of alebrijes. While the sales trend has been mostly positive for Oaxacan alebrijes, it is dependent on global market fluctuations and on tourism to Oaxaca. There was a decline in sales in the late 1980s, possibly due to global market saturation and the dominance of repetitive, unimaginative designs. Sales rose again in the 1990s. Sales fell again in 2001, when tourism from the U.S. declined, and fell precipitously in 2006 due to statewide social unrest; the market has not fully recovered since. The alebrije market is divided into two levels: the production of unique, high-quality, labor-intensive pieces, and the production of repetitive, average-quality, inexpensive pieces. Those who have produced exceptionally fine pieces have gained reputations as artists, commanding high prices. Larger pieces are generally made only by the better carving families. While pieces can be bought and ordered from the artisans directly, most sell to middlemen who in turn sell them to outlets in Mexico and abroad.
The most successful carving families sell almost exclusively to dealers and may have only a few pieces available for the drop-in visitor. Within Mexico, Oaxacan alebrijes are often sold in tourist locations such as Oaxaca city, La Paz, Cancún, Cozumel and Puerto Escondido. Most pieces sold internationally go to the United States, Canada, Europe and Japan, where the most expensive pieces end up in ethnic craft stores in urban areas, university towns and upscale resorts. Cheaper pieces tend to be sold at trade shows and gift shops. Tourists who buy pieces directly from carvers pay about twice what wholesalers do. The price of each piece depends on the quality, coloring, size, originality and sometimes the reputation of the carver. The most expensive pieces are most often sent abroad. Pieces sold retail in Oaxaca generally range from US$1 to $200. The most commercialized figures are those of dogs, armadillos, iguanas, giraffes, cats, elephants, zebras, deer, dolphins, sharks, and fish. Animals are often painted with bright colors and designs and carved with exaggerated features that bear little resemblance to what occurs in the natural world. Anthropomorphism is common, and carvings of animals playing musical instruments, golfing, fishing, and engaging in other human pursuits are very popular. Fantastic creatures such as dragons and chimeras are also carved, as are figures of Benito Juárez, Subcomandante Marcos, chupacabras (imaginary beings that eat goats), "Martians", mermaids, and hippocampi. The diversity of the figures is due to a segmented market, both in Mexico and abroad, which rewards novelty and specialization. In a number of cases, carvings return to images from Mexican culture such as angels, saints, and Virgins, which will have somber faces even if they are painted in very bright colors. Devils and skeletons are often part of more festive scenes, depicted, for example, riding dogs and drinking. Foreign customers demand more creative figures with little repetition. Prices abroad range between three and five times the retail price in Oaxaca, with a median of US$100; the lowest are usually around $10 and the highest around $2,000. One of the most expensive sales from a carving village occurred in 1995, when a doctor from Mexico City paid Isidro Cruz of Tilcajete the equivalent of US$3000 for a piece entitled "Carousel of the Americas." This piece took Cruz three months to complete. Typical household income of families from Arrazola and Tilcajete averages about US$2000 per year, but exceptional artists can earn up to $20,000 per year. Two thousand a year is substantially more than average in Oaxaca and allows families to build or expand housing and send children to secondary school. Most families carve as a sideline, with agriculture providing basic staples. In some towns, especially in Tilcajete, the economy has shifted from agriculture to the making of wood carvings, with a number of families abandoning farming altogether. For most households in Oaxaca, the success of alebrijes has not replaced the need to farm or alleviated the need to send family members to Mexico City or the United States to work and send remittances back home. Despite Oaxaca's reputation for the production of crafts by indigenous peoples, alebrije makers are monolingual Spanish speakers who generally do not identify themselves as members of an indigenous group, though almost all have Zapotec ancestors.
The alebrijes are considered to be novelty items for the makers rather than expressions of a cultural heritage. More traditional woodcarvings, such as utensils, toys, religious figures and the like, are still made by older residents, but these crafts are overshadowed by alebrijes. Approximately 150 families now devote themselves at least part-time to the making of alebrijes, with carving techniques being passed down from generation to generation and many children growing up around fantastic figures both finished and in process. Due to copies from other places, a certification scheme is being considered to ensure the viability of crafts from this area. That would include educating consumers and working with reputable stores. The carving process The carving of a piece, which is done while the wood is still wet, can last anywhere from hours to a month, depending on the size and fineness of the piece. Often the copal wood that is used will influence what is made, both because of the shapes the branches can take and because male and female trees differ in hardness and shape. Carving is done with non-mechanical hand tools such as machetes, chisels and knives. The only time a more sophisticated tool is used is when a chain saw is employed to cut off a branch or level a base for the proposed figure. The basic shape of the creature is usually hacked out with a machete, and then a series of smaller knives is used to achieve the final shape. Certain details such as ears, tails and wings are usually made from pieces separate from the one for the main body. After the carving, the figure is then left to dry for up to ten months, depending on its overall size and thickness. Semi-tropical wood such as copal is susceptible to insect infestations, and for this reason drying pieces are often soaked in gasoline and sometimes baked to ensure that all insect eggs have been destroyed. As the figure dries, it is also susceptible to cracking. The cracks are filled with small pieces of copal wood and a sawdust-resin mixture before painting. Oaxaca woodcarvings were all originally painted with aniline paints made with natural ingredients such as bark of the copal tree, baking soda, lime juice, pomegranate seeds, zinc, indigo, huitlacoche and cochineal. These colorings were also used for dyeing clothing, ceremonial paints and other uses. Since 1985, most carvers have switched to acrylics, which resist fading and withstand repeated cleanings better. Some still use aniline paints, as they have a more rustic look that some customers prefer. Either way, the painting is generally done in two layers, with a solid undercoat and a multicolored design superimposed. Originally, woodcarving was a solitary activity with all aspects done by one person, usually a male. As sales soared in the 1980s, the work began to be shared among family members. Women and children help mostly with sanding and painting, leaving men to contribute less than half of the work that goes into the figures. Despite this, pieces are still referred to as the work of one person, usually the male carver. There are exceptions to this. There are men who paint better than they carve, and in the community of San Pedro Taviche, women collect and carve wood about as often as the men. In most cases, all the work on pieces is done by family members. Families may hire other relatives or strangers if faced with a large order. However, only the most established of carving families can have any permanent outside help, and a number of these refuse to hire outsiders.
Copal wood Almost all alebrije carvers in Oaxaca use the wood of trees from the genus Bursera (family Burseraceae), with a preference for the species B. glabrifolia, which is locally called copal or copalillo. This tree is typically found in dry tropical forests in Oaxaca and neighboring states. The exceptions are Isidro Cruz of Tilcajete, who uses "zompantle" (Erythrina coralloides), and the Manuel Jimenez family, which carves in tropical cedar (Cedrela odorata) imported from Guatemala. Originally, carvers obtained wood from the local forests on their own. Copal trees are short and squat and do not yield much wood; every piece is used. Despite this, the success of woodcarving caused an unsustainable drain on local wild copal, and nearly all of the trees near Tilcajete and Arrazola have disappeared. This localized depletion soon gave rise to a copal wood market in Oaxaca, even though many of the copal trees in other parts are of a different subspecies, which has more knots. Obtaining wood is a complex exercise because negotiating with other municipalities requires navigating complex social, legal and economic norms, and in many cases, state and federal environmental authorities have stepped in to try to preserve wild copal trees in a number of areas. Some communities have simply refused to sell their wood. These difficulties have led to a black market in copal wood, with carvers purchasing most of their supplies from vendors called "copaleros". Harvesting copalillo is not a complex task; trees are relatively small and the wood is soft. Trees are felled using an axe or chainsaw. Branches are cut with machetes. Most harvesting occurs on ejidal (communal) lands. Legal or not, the purchase of copal wood from other parts of Oaxaca is putting unsustainable pressure on wild populations in a wider area, forcing copaleros to go further to obtain wood and often to deal with angry locals and police who alternately seek bribes and enforce the law. Eventually, only about six copaleros came to control most of the wood being sold, and even their supplies are unreliable. The federal government states that most of the figures are made with illegally obtained wood. Securing supplies of copal wood is a major concern for woodcarvers. Although the cost of the wood is not particularly high, the main issue is reliability. Another issue for carvers is quality. Artisans will pay more for their wood only if they are sure they can pass the added cost on to their customers. A number of attempts to grow the trees for woodcarving purposes have been undertaken. Copal is native to the area, so it grows readily without much care. It takes anywhere from five to ten years for a tree to grow big enough to be harvested (branches or entire tree). The efforts include reforestation projects sponsored by groups such as the Rodolfo Morales Foundation in Ocotlan, and a number of families spend time planting trees during the rainy season. Some have begun copal plantations. Various artisans have also joined the reforestation efforts through associations of their own, creating alebrijes while attempting to restore what they take from nature. Current needs for the wood far outweigh what these efforts have been able to produce. Another effort involves a program designed to manage wild copal supplies in a municipality called San Juan Bautista Jayacatlán. This arrangement has economic advantages for both the alebrije-makers and the owners of the forests where the wood is produced.
It has not been developed sufficiently yet to affect the illegal harvest of wood, but its organizers hope that, in time, it will become the more economical and preferred method. The difference between this program and others is that it works within the broader ethnobotanical context by promoting the management of the species within its native habitat. Jayacatlán is located next to the recently established biosphere reserve of Tehuacán-Cuicatlán. The benefit to Jayacatlán is to give the municipality a way to exploit its copal supplies and preserve its biodiversity at the same time. The benefit to carvers is to promote a reliable source of wood, as well as a trademark called "ecoalebrijes" to help them sell more alebrijes at a higher price. The wood from Jayacatlán is only sold to Arrazola and not to the other major center of Tilcajete. The enthusiasm of Arrazola's woodcarvers stems more from having a supply of good wood than from notions of ecology. San Martin Tilcajete Of the three major carving towns, San Martin Tilcajete has experienced the most success. This success is mostly due to carver Isidro Cruz, who learned to carve when he was thirteen, during a long illness in the late 1940s. His work was sold locally and eventually noticed by Tonatiúh Gutierrez, the director of expositions for the Mexican National Tourist Council and later of a government agency in charge of promoting crafts. He encouraged Cruz to carve masks and later put him in charge of a state craft-buying center. Cruz worked at this for four years, learning much about craft selling and getting others from Tilcajete connected to the market. Unlike other carvers, Cruz was open about his techniques, and by the late 1970s, about ten men were carving and selling in Tilcajete. Cruz not only taught his methods to others but also purchased many of his neighbors' works. Cruz's efforts stimulated new styles of carving, such as alebrijes, and their sale in the city of Oaxaca. By the 1980s, there were four families devoted to carving full-time, with the rest splitting their time between crafts and agriculture. From the 1960s through the 1980s, embroidered shirts, blouses and dresses were still a well-received craft from Tilcajete, but by the end of the 1980s, most families were involved in carving alebrijes. Today, the carving of alebrijes is the economic base of Tilcajete. Every Friday on the main square is the "tianguis del alebrije", a weekly market selling wooden figures. The event allows visitors to purchase items from local craftsmen directly. There are usually also vendors selling other local products such as ice cream. Annually, the municipality holds its Feria del Alebrije (Alebrije Festival), which features alebrije sales and exhibitions, music, dance and theatre. There are also offerings of local and regional cuisine. More than 100 vendors attend, selling alebrijes, textiles, local dishes, artwork and locally made alcoholic beverages. It is sponsored by the Master Craftsmen Group of Tilcajete (Grupo de Maestros Talladores de Tilcajete), which includes Hedilberto Olivera, Emilia Calvo, Roberta Ángeles, Juventino Melchor, Martin Melchor, Margarito Melchor Fuentes, Margarito Melchor Santiago, José Olivera Pérez, Jesús Melchor García, Vásquez, María Jiménez, Cira Ojeda, Jacobo and María Ángeles, Justo Xuana, Victor Xuana, Rene Xuana, Abad Xuana, Flor and Ana Xuana, Rogelio Alonso, who works in papier-mâché, and Doris Arellano, who is a painter.
Some of the better-known artisans in Tilcajete include Delfino Gutierrez, sisters Ana and Marta Bricia Hernandez, the family of Efrain and Silvia Fuentes, Coindo Melchor, Margarito Melchor and Maria Jimenez. Delfino Gutierrez specializes in free-form elephants, frogs, turtles, armadillos and more, which are sold in stores in Chicago, California, New York and Israel. The Hernandez sisters sell primarily from their home and are known for their painting style. The Fuentes family gained fame from Efrain's carving talents. He was featured in an exhibit in Santa Fe, NM when he was only 13, and his work has been featured in at least one book. Margarito Melchor specializes in cats, and Coindo Melchor carves elaborate ox teams with bulls, driver, and a cart filled with animals and crops, as well as creatures that have been described as "bird-headed women." Maria Jimenez and her brothers specialize in saints and angels as well as some animals. Maria is the best-known painter in the Oaxacan community. She says that she has about thirty designs that she has developed for carvings, many of which are related to when she made embroidered dresses. The most successful artisan is Jacobo Angeles, whose work has been prominently displayed at the Smithsonian and the National Museum of Mexican Art in Chicago. It can also be found in numerous museums, art colleges and galleries around the world. Jacobo learned to carve from his father when he was twelve, and later was mentored by elders in his and other communities. While alebrije designs have been innovative, incorporating modern elements, the Angeles family's designs focus on representations of Zapotec culture. This can be seen in the painted designs, based on influences such as the friezes of Mitla and other ancient symbols, as well as the continued use of aniline paints made from natural ingredients such as the bark of the copal tree, baking soda, lime juice, pomegranate seeds, zinc, indigo, huitlacoche and cochineal. Each year, Jacobo travels the United States promoting Oaxacan folk art in general at educational institutions and speaking at art institutions. Arrazola The making of alebrijes in Oaxaca was initially established in Arrazola by Manuel Jimenez. Jimenez began carving wooden figures as a boy tending animals in the 1920s. By the late 1950s and early 1960s, Jimenez's work was being sold in the city of Oaxaca, which led to it being shown to folk art collectors such as Nelson Rockefeller. By the late 1960s, he was giving exhibitions in museums in Mexico City and the United States, and tourists began visiting his workshop in the 1970s. He kept his carving techniques strictly within the family, with only his sons and a son-in-law carving with him. For this reason, only six families were carving alebrijes in Arrazola as late as 1985. Jimenez died in 2005. Today, Jimenez's works fetch a minimum of US$100. Many carvers and carving communities engage in specialties in order to have niches in the more competitive alebrije market in Oaxaca. In Arrazola, one of the community's specialties is the carving of complex animal bodies, especially iguanas, out of one single piece of wood. Another way the community competes is through its annual festival "Cuna de los Alebrijes" (Cradle of the Alebrijes), which is held each year to promote its figures. This fair is cosponsored by the Secretary of Tourism for the state of Oaxaca. It occurs in the second half of December, during the Christmas season, with more than sixty artisans who make the figures.
The goals are to draw more tourists to the town at this time and to make connections with stores, galleries and museums. Like Tilcajete, Arrazola has a number of well-known artisans. Marcelo Hernandez Vasquez and his sisters have been making alebrijes for eighteen years, and Juan Carlos Santiago is sought out for his penguins. Antonio Aragon makes small, finely carved, realistic deer, dogs, lions and cats, and Sergio Aragon specializes in miniatures. One of the best known is Miguel Santiago, who sells about forty pieces a year. Some of these sales are individual pieces and others are multiple-piece sets, such as Frida Kahlo surrounded by monkeys. Sets are usually sold to foreign buyers for between US$300 and $800 and have been sent to Europe, Japan and the United States. Sets often take more than a month to make, and his work is considered to be at the high end of the market. Santiago's orders extend more than two years in advance. Santiago used to work with a brother and later with a nephew, but today he works mostly alone, with his father helping. Another of the best known is one of the few female entrepreneurs in the market, Olga Santiago. She does not carve or paint; rather, she hires others to do the work while she administrates. However, she signs all the pieces. Many of her carvers and painters are young men who leave quickly to form workshops of their own. While her workshop is not the only one run in this manner, hers is the newest and most successful. Olga's client base is tourists, who are often brought to her by tour guides, taxi drivers and the like for a commission, and wholesalers. La Unión Tejalapan La Union Tejalapan has not had the same success as Arrazola and Tilcajete because it has not been able to attract as many dealers or tourists. A significant market remains for simple rustic pieces (pre-alebrije) and pieces painted with traditional aniline paints, in which La Union specializes. These are popular with those seeking non-alebrije pieces such as saints, angels, devils, skeletons and motifs related to Day of the Dead. Alebrije pieces are also made, but are painted simply, with one or two colors and few decorations. La Union artisans make multipiece rodeos, fiestas, and nativity scenes. Another rustic aspect of La Union pieces is that legs can be nailed onto the torsos. The first alebrije carver from La Union was Martin Santiago. In the 1950s and 1960s, Santiago worked in the United States for various periods as an agricultural laborer in the Bracero Program. When this program ended, Santiago found that he could not support his family by farming and began selling woodcarvings to a shop owner in Oaxaca. This arrangement ended after a complex dispute. Santiago then began carving and selling on his own with his four brothers, and for many years the Santiagos were the only carvers in the community. Today there are a number of others involved in the craft. Aguilino Garcia sells fairly expensive skunks, crocodiles, armadillos, and palm trees. He has a reputation for working slowly but makes pieces that were selling for between 100 and 400 pesos in 1998. Better known is the husband-and-wife team of Reynaldo Santiago and Elodia Reyes, who have been carving since their marriage in the mid-1970s. Reynaldo is a nephew of Martin Santiago. As in many other carving families, he carves while she paints. Their children are not involved in the business. 
While the couple make some large and medium-sized pieces, they specialize in miniatures (around seven cm), such as dogs, cats, giraffes, rabbits and goats, which sell for around 30 pesos each. Because La Union gets few tourists, the couple is mostly reliant on the store owners and wholesalers who buy from them. Today their major buyers are a wholesaler in California and a store owner in Texas. Other parts of Mexico Outside of Mexico City and Oaxaca, alebrijes are known and made, but mostly as a hobby rather than as a significant source of income. Most of these alebrijes are made with papier-mâché, wire, cardboard and sometimes other materials such as cloth. Alebrije workshops and exhibitions have been held in Cancún. Workshops on the making of alebrijes with the purpose of selling them have been held in Cuautla, Morelos. In Tampico, workshops are given by Omar Villanueva. He has also given workshops in Nuevo Laredo, Campeche, Cancun, Playa del Carmen, Chetumal, Querétaro and other places. One alebrije craftsman in Cuautla is Marcos Zenteno, who has taught the craft to his daughter. He also gives workshops on the making of the craft to others. One of the major attractions at the Primer Festival Internacional de las Artes in Saltillo in 2000 was the alebrijes, which came from workshops in Monclova, Sabinas, Parras de la Fuente and Saltillo. Illuminated alebrijes An innovation in alebrijes is illuminated versions, generally designed to be carried by a single person on the shoulders. Instead of cartonería, these alebrijes are made on movable metal frames, with LED lights, and with cloth or plastic skin. Preferred materials for the structure include ecological fabrics and micro-papers, assembled before the alebrijes are fully painted and varnished for exhibition. This style of alebrije was first presented at a short parade dedicated to them in 2014 in Colonia Roma. These versions have been made in Mexico City by various artists, especially in workshops such as the Fábrica de Artes y Oficios Oriente. Exhibitions dedicated to the variation have attracted up to 6,000 people to the Museo de Arte Popular in Mexico City, and the figures have been displayed at the Mexico International Festival of Lights. See also Mexico City Alebrije Parade El Tigre: The Adventures of Manny Rivera - One of the villains is an Alebrije Monster. Coco - Countless alebrijes inhabit the land of the dead, some as spirit guides El Alebrije, Mexican luchador enmascarado based on an Alebrije entity Leyendas (franchise) Alebrije and Evaristo are the main characters. Guacamelee!, a gigantic hostile alebrije is encountered in this video game Dead Man's Party, an album by Oingo Boingo with a tableau of alebrijes at a wedding party featured on the cover. References Linares Family website "En Calavera: The Papier-mâché art of the Linares family" by Susan N. Masuoka (casebound) / (softcover) UCLA Fowler Museum of Cultural History External links Amo Alebrijes Mexican folk art Fictional hybrids Woodcarving Articles containing video clips Culture of Mexico
Alebrije
[ "Biology" ]
8,921
[ "Fictional hybrids", "Hybrid organisms" ]
1,029,440
https://en.wikipedia.org/wiki/Translation%20unit
In the field of translation, a translation unit is a segment of a text which the translator treats as a single cognitive unit for the purposes of establishing an equivalence. It may be a single word, a phrase, one or more sentences, or even a larger unit. When a translator segments a text into translation units, the larger these units are, the better the chance of obtaining an idiomatic translation. This is true not only of human translation, but also of cases where human translators use computer-assisted translation, such as translation memories, and where translations are performed by machine translation systems. Perceptions of the concept of unit Vinay and Darbelnet drew on Saussure's original concepts of the linguistic sign when beginning to discuss the idea of a single word as a translation unit. According to Saussure, the sign is naturally arbitrary, so it can only derive meaning from contrast with other signs in that same system. However, the Russian scholar Leonid Barkhudarov stated that, in poetry, for instance, a translation unit can take the form of a complete text. This relates to his conception of a translation unit as the smallest unit in the source language with an equivalent in the target one, one whose parts, taken individually, become untranslatable; these parts can be as small as phonemes or morphemes, or as large as entire texts. Susan Bassnett widened Barkhudarov's poetry-based perception to include prose, adding that in this type of translation the text is the prime unit, including the idea that sentence-by-sentence translation could cause the loss of important structural features. The Swiss linguist Werner Koller connected Barkhudarov's idea of unit size to the difference between the two languages involved, stating that the more different or unrelated the languages are, the larger the unit would be. One final perception of the idea of unit came from the linguist Eugene Nida. To him, translation units tend to be small groups of language building up into sentences, thus forming what he called meaningful mouthfuls of language. Points of view towards translation units Process-oriented POV According to this point of view, a translation unit is a stretch of text on which attention is focused so that it can be represented as a whole in the target language. In this point of view we can consider the concept of the think-aloud protocol, supported by the German linguist Wolfgang Lörscher: isolating units using self-reports by translating subjects. It also relates to how experienced the translator in question is: language learners take a word as a translation unit, whereas experienced translators isolate and translate units of meaning in the form of phrases, clauses or sentences. Since 1996 and 2005, respectively, keylogging and eyetracking technologies have been used in Translation Process Research. These more advanced and non-invasive research methods make it possible to develop a finer-grained assessment of translation units as loops of (source or target text) reading and target text typing. Loops of translation units are thought to be the basic units by which translations are produced. Thus, Malmkjaer, for instance, defines process-oriented translation units as a “stretch of the source text that the translator keeps in mind at any one time, in order to produce translation equivalents in the text he or she is creating” (p. 286). 
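In keylogging studies, such production units are often operationalized by splitting the typing stream at long pauses. The following minimal Python sketch illustrates that heuristic only; the two-second threshold, the function name, and the toy log are illustrative assumptions, not a procedure prescribed by the researchers cited here.

```python
def production_units(keystrokes, pause_threshold=2.0):
    """Split a keystroke log into candidate production/translation units.

    keystrokes: list of (timestamp_seconds, character) tuples in typing order.
    A new unit starts whenever the pause before a keystroke exceeds the
    threshold (2 seconds here, purely illustrative).
    """
    units, current, last_t = [], [], None
    for t, ch in keystrokes:
        if last_t is not None and t - last_t > pause_threshold:
            units.append("".join(current))
            current = []
        current.append(ch)
        last_t = t
    if current:
        units.append("".join(current))
    return units

# A toy log: two bursts of typing separated by a long pause.
log = [(0.0, "D"), (0.2, "e"), (0.4, "r"), (3.1, "H"), (3.3, "u"), (3.5, "t")]
print(production_units(log))  # ['Der', 'Hut']
```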
Records of keystrokes and eye movements allow researchers to investigate these mental constructs through their physical (observable) behavioral traces in translation process data. Empirical Translation Process Research has deployed numerous theories to explain and model the behavioral traces of these assumed mental units. Product-oriented POV Here, the target-text unit can be mapped onto an equivalent source-text unit. A case study on this matter was reported by Gideon Toury, in which 27 English-Hebrew student-produced translations were mapped onto a source text. Those students who were less experienced had larger numbers of small units at word and morpheme level in their translations, while one student with translation experience had approximately half of those units, mostly at phrase or clause level. References Computer-assisted translation Machine translation
Translation unit
[ "Technology" ]
827
[ "Machine translation", "Natural language and computing", "Computer-assisted translation" ]
1,029,554
https://en.wikipedia.org/wiki/Hu%20%28mythology%29
Hu (), in ancient Egypt, was "the personification of a religious term, the 'creative utterance'" and closely connected to Sia. Hu was the deification of the first word, the word of creation, which Atum was said to have exclaimed upon ejaculating in his masturbatory act of creating the Ennead. Hu is mentioned as early as the Old Kingdom Pyramid Texts (PT 251, PT 697) as a companion of the deceased pharaoh. Together with Sia, he was depicted in the retinue of Thoth. In the Middle Kingdom, all gods participated in Hu and Sia, and were associated with Ptah, who created the universe by uttering the word of creation. Hu was rarely depicted visually; when he was depicted, it was as an anthropomorphic deity. In the New Kingdom, both Hu and Sia, together with Heka, Irer and Sedjem, were members of the creative powers of Amun-Ra. By the time of Ptolemaic Egypt, Hu had merged with Shu (air). References Further reading Wilkinson, R. H., Die Welt der Götter im Alten Ägypten. Glaube - Macht - Mythologie, Stuttgart 2003 See also Logos Egyptian gods Language and mysticism Masturbation Creation myths Ptah Epithets of Ptah Epithets of Amun-Ra Falcon deities
Hu (mythology)
[ "Astronomy" ]
308
[ "Cosmogony", "Creation myths" ]
1,029,697
https://en.wikipedia.org/wiki/Brood%20%28comics%29
The Brood are a fictional race of insectoid, parasitic, extraterrestrial beings appearing in American comic books published by Marvel Comics, especially Uncanny X-Men. Created by writer Chris Claremont and artist Dave Cockrum, they first appeared in The Uncanny X-Men #155 (March 1982). Concept and creation According to Dave Cockrum, the Brood were originally conceived to serve as generic subordinates for the main villain of The Uncanny X-Men #155: "We had Deathbird in this particular story and Chris [Claremont] had written into the plot 'miscellaneous alien henchmen.' So I had drawn Deathbird standing in this building under construction and I just drew the most horrible looking thing I could think of next to her." Biology Physical characteristics The Brood are an alien race of insectoid beings. They are a specialized race, one that has evolved to reproduce and consume any available resource. They are sadistic creatures that enjoy the suffering they cause others, especially the terror their infection causes their hosts. Despite their resemblance to insects, the Brood have endoskeletons as well as exoskeletons. Also unlike insects, they have fanged jaws instead of mandibles. Their skulls are triangular and flat, with a birthmark between their eyes. Their two front legs are tentacles they can use to manipulate objects. Due to their natural body armor and fangs, the Brood are very dangerous in combat. In addition, they have stingers that can deliver either paralyzing or killing poison. Reproduction The Brood reproduce asexually and have no clear gender. They reproduce by forcibly implanting their eggs into other sentient organisms. Each host can only support one egg. Upon hatching, the host dies as the Brood egg releases mutagenic enzymes into the bloodstream. At the same time, the Broodling mentally attacks and assimilates its host. The Brood use a hive mind to share memory: an individual's knowledge, given to a broodling, passes to the hive and back to the queen, meaning newborn Brood know what any member of the race knows. Until the embryo takes over the host's body, it can only gain temporary control of the host, often without the host noticing, as the host is unaware when it loses control. If the host possesses any powers, the resultant Brood will inherit them. The persona of the host once the Brood is "born" appears to be extinguished, but in some cases, the host's will may be strong enough to survive and coexist with the Brood's. However, it is implied that hosts with advanced healing abilities are unable to turn; for example, when an egg was implanted in Deadpool, instead of his turning into a Brood, a small Brood burst out of Deadpool's body. Civilization The Brood have a civilization based on the typical communal insect societies, such as those of the bees and ants. The Empress is the absolute ruler, while the Queens lead individual Brood colonies and the "sleazoids" do all the work; despite their evil, they never rebel against their Queens, perhaps due to the latter's telepathic abilities. However, the Queens have no allegiance to each other. Some roles have proven to be flexible. The Empress is the ruler of the Brood and contains the species' hive mind. She exercises almost total control over her progeny, including determining which Brood become Queens and which remain Warriors-Prime. The Empress is larger than other Brood and has horns, whiskers, and telepathic powers. The Firstborn are the children and servants of the Empress. 
Because they are not born from hosts, they do not possess the Warrior-Prime ability to conceal their appearance by shifting into their host-forms. The Firstborn are larger than other Brood and possess biological armor and teleportation, but lack wings. The Brood Queens fulfill the mental commands of the Empress and can communicate with their spawn via telepathy. Additionally, they lead Brood colonies and have venomous stingers. The Broodlings are Brood workers and warriors who are organized into several different roles, among them Weaponeers, Clan-Masters, Hunt-Masters, Huntsmen, Tech Handlers, and Scholars. Elite Broodlings are known as Warriors-Prime. The Brood King is a mutant Brood created when a King egg is implanted in its host. Unlike those infected with Queen or Drone eggs, the Brood King cannot infect others. It is later revealed that the King-type egg was created from Kree experimentation. Understanding the Brood's volatile nature, the Kree created an egg-like device that can disrupt the species' matriarchy, control them, and use them as weapons to disrupt rival advanced civilizations. Broo, a Brood drone who developed sentience, later eats the device, temporarily giving him its properties. Technology The Brood, like most alien species, possess advanced technology; however, due to their nature, it is unknown whether they developed it themselves or assimilated it for their own benefit. It includes: Interstellar warships: despite using the Acanti, the Brood also use actual ships, built from a mixture of organic and inorganic material. Energy-based weapons Psi-scream weapons: gun-like devices that attack the minds of targets with subconscious fears and hatreds. Inhibitor fields that block telepathy. Nanotechnology Teleportation Fictional species biography The Brood are the Main Universe's first natural predators, spawned in a dark galaxy prior to the emergence of Galactus from his incubator. Their planet of origin is unknown, but it is rumored that the Brood originated from another dimension. They were eventually found and captured by the Kree Empire, along with other hive species, so that the Kree could weaponize them and use them against rival empires. The Supreme Intelligence approved of the idea, stating that they could be used against the Shi'ar Empire, although he foresaw that it would take millions of years to create a large enough army to fully unleash as a weapon against their enemies. After eight million years of experimentation, the Black Judges deemed the Brood a major success, and they were unleashed on the Shi'ar Galaxy, where the Brood found certain large space-dwelling creatures that they decided to prey upon and use as living starships to infest neighboring star systems, initiating an intergalactic campaign to build a fearsome empire. These space-dwelling creatures included the whale-like Acanti and the shark-like Starsharks. Years later, the Kree warrior Mar-Vell was ordered to make contact with the stranded Grand Admiral Devros on a planet in the Absolom Sector, a region known to be infested with Brood. Mar-Vell's team, which included the medic Una and Colonel Yon-Rogg, was ambushed by Brood warriors after landing on the planet and taken prisoner by the Brood-infected Devros. The colony's Brood Queen impregnates each captive with Brood embryos, but Mar-Vell and Una manage to escape, destroy both leaders of the Brood colony, and rid themselves of their infections using Una's modified omni-wave projector, which had been designed to eliminate Brood embryos. 
After rescuing Colonel Yon-Rogg, the trio escape the planet and are rescued by the Shi'ar royal Deathbird. Deathbird later allies with the Brood to gain their help deposing her sister Lilandra Neramani as ruler of their empire. As a reward for their help, Deathbird gives the Brood Lilandra, the X-Men, and Carol Danvers, along with Fang of the Imperial Guard, to use as hosts. The Brood infect the entire party, except for Danvers, on whom they perform experiments because of her half-human/half-Kree genes. Wolverine's adamantium skeleton allows his healing ability to purge him of the embryo, and he helps the others escape. He is unable to save Fang, who becomes a Brood warrior before they leave. The Brood Queen orders her forces to find them, until she is contacted by the Queen embryo that is implanted in Cyclops. It explains that the X-Men are returning to Broodworld. Resigned to their dooms, the heroes help the Acanti recover the racial Soul, a supernatural force that must be passed from one Acanti leader ("The Prophet-Singer") to the next. The Soul is located in a crystalline part of the dead Acanti Prophet-Singer's brain. Afterwards, the Prophet-Singer leads the Acanti to safety in deep space. Returning to Earth with the Starjammers, the X-Men defeat and detain the Brood Queen infecting Charles Xavier. The advanced medical facilities at the Starjammers' disposal are able to transfer the consciousness of Xavier from the Brood Queen's body to a new cloned body, enabling Xavier to walk again. A Brood-filled starshark later crashes on Earth, leading to the infection of several nearby humans by the Brood. One of the victims is allowed to live as a human assistant, but when he leads the aliens to some mutants, the Brood infect him and the mutants as well. It is revealed that the Brood can morph into the host's form or a hybrid of the two forms. In the course of the battle, an Earth woman named Hannah Connover is infected with a queen, though this problem would not develop until later. Another branch of the Brood manages to land on Earth and infect more mutants, along with the Louisiana Assassins Guild, of which X-Man Gambit is a member. The X-Men kill most of the infected people. They and Ghost Rider manage to rescue many of the Brood's other uninfected prisoners, only to have the "Spirit of Vengeance" become infected himself. Psylocke manages to separate Ghost Rider from the Brood host before it can kill Danny Ketch, the current host of the Ghost Rider, and he and the X-Men save New Orleans. Hannah Connover, previously infected with a Queen, soon begins to demonstrate attributes of the Brood. She uses her new-found "healing" powers to become a faith healer and cure many people alongside her reverend husband, but secretly her Brood nature causes her to infect many people with embryos. Across the galaxy, on the "true" Brood Homeworld, the Brood Empress sends her "firstborn" Imperial Assassins to kill Hannah for going against the Empress' wishes. Unable to stop future waves of Assassins from coming, the X-Man Iceman freezes Connover, putting her in suspended animation and causing the current firstborn to kill themselves, as in their minds the mission was accomplished. Connover is assumed to still be in suspended animation with her Queen host in the custody of the X-Men. In Contest of Champions II, the Brood and the Badoon abduct several heroes and pose as a benevolent species willing to give the heroes access to advanced technology after competing against each other in a series of contests. 
However, in reality, the Brood intend to use Rogue, infested with a Brood Queen, to absorb the powers of the contest winners and become unstoppable. Fortunately, Iron Man realizes that the Brood are drugging food to amplify aggression (relying on his armor's own life-support systems to prevent him succumbing to the 'infection') and is able to uncover the plot. Although the Queen had already absorbed the powers and skills of the various contest winners (in the form of Captain America, Thor, the Hulk, Spider-Man, Jean Grey and the Scarlet Witch), the remaining heroes managed to defeat her. The Brood Queen was extracted from Rogue with the aid of Carol Danvers, who forced the Brood Queen to flee by threatening to kill Rogue. After confirming that Rogue was cured, the heroes returned home. A mixed team of X-Men and Fantastic Four members was formed to investigate what happened to the NASA space station Simulacra, only to discover that it had been taken over by a Brood scouting party, leading the way to Earth for the Brood armies. After battling them, they left the station, leaving the infected crew members alive, despite the desires of Wolverine and Emma Frost to kill them, due to the interference of the Invisible Woman. Soon a Brood invasion arrived at New York City. The X-Men and Fantastic Four defended the city from the Brood despite facing overwhelming odds. Using an enhanced Cerebro, Emma Frost projected a telepathic hallucination of the Phoenix and Galactus appearing in the city, which caused the Brood to panic and recall their forces to the dozens of Acanti ships, after which they fled Earth. It was also revealed that at the dawn of civilization, in the year 2610 BC, a spaceship filled with Brood crash-landed in Egypt, marking the end of the second great dynasty. They went as far as turning a Pharaoh into one of their own, and it would have been the end of days if not for Imhotep and a group of soldiers, among them En Sabah Nur, who were able to successfully fend off the invasion. Imhotep himself killed the Queen. The Brood return to Earth in the Ms. Marvel series and battle Carol Danvers, who as Binary played a key role in their earlier defeat. Strangely enough, none of the Brood present recognize who she is, possibly because of her inability to fully access her cosmic powers, which also changed her physical appearance. The Brood are also stalked and summarily exterminated by the alien hunter called Cru, with whom Ms. Marvel also came into violent contact. It later turned out that there had been escape pods from the Acanti, and another one carried the Brood Queen, who landed on Monster Island. Cru itself was back on Earth, having regenerated, and was searching for the Brood Queen. Ms. Marvel, seeing her as a threat, fought Cru again, and in the process the two merged part of their minds temporarily, making them unable to use their powers and therefore vulnerable to the Brood. The Brood Queen had established a nest on the island and infected the Moloids with her eggs. Ms. Marvel then discovered that the Brood Queen who ruled over the Brood of Sleazeworld had survived and now had a crystalline form. Upon arriving on the island, the Operation: Lightning Storm strike team and Wonder Man battled the Brood, while Cru and Ms. Marvel, having regained the ability to use their powers, fought the Brood Queen. In the process Cru was killed, after which the Brood Queen was taken into space by Ms. Marvel, who destroyed her with a nuclear weapon. 
During the invasion of Annihilus and his Annihilation Wave, the Brood were decimated, and the species is now on the brink of extinction. Some Brood appear in the arena of the planet Sakaar in the Planet Hulk storyline of The Incredible Hulk, one of them even becoming a main character. A Brood referred to as "No-Name", who becomes a genetic queen because their race is becoming rarer, becomes the lover of the insect king Miek and also appears in World War Hulk. When it is discovered that Miek was the one who let the Hulk's shuttle explode, No-Name and Hulk attack Miek. Near the end of the war the "Earth Hive", the shared consciousness of every insect on Earth, uses Humbug as a Trojan horse to deal a crippling blow to No-Name, rendering her infertile and poisoning the last generation of hivelings growing in Humbug's body. No-Name is a rarity among the Brood, as she learned to feel compassion for other living beings. The Brood reappeared once again in the pages of Astonishing X-Men; however, these Brood are revealed to be genetically grown hybrids created by a geneticist known only as Kaga, who started growing and redesigning them using data about post-M-Day work taken from Henry McCoy's research computers. In the 2011 "Meanwhile" storyline of Astonishing X-Men, S.W.O.R.D. scientists successfully find a way to remove a Brood embryo from a human host, but not before the Brood they are studying escape and attack, prompting a botched rescue mission led by Abigail Brand and another rescue mission led by the X-Men. Given the chance to lower the Brood's numbers further, they discovered that the Annihilation event had caused the interstellar ecosystem to destabilize, since the Brood, dangerous as they are, served as natural predators for even worse species. These remaining species are now breeding out of control and present a greater threat than the Brood ever did. With no other choice, the X-Men act to prevent the Brood's extinction. According to Bishop, there would be a race of benevolent Brood in the future, prompting the X-Men to willingly serve as Brood hosts so that they could instill them with the same compassion felt by No-Name. After being connected with the hive-mind, the X-Men learned of a nearby Brood who was born with the ability to feel compassion, making him the Brood equivalent of a mutant. While such Brood are typically destroyed upon hatching by their kind, this one was permitted to live due to the Brood's dwindling numbers. After rescuing the Brood mutant, defeating the Brood in battle and allowing them to escape, the X-Men had their Brood embryos removed, to be raised aboard the Peak, with the Brood mutant acting as their mentor. The 2012 X-Men subseries Wolverine and the X-Men featured a Broodling as a student at Wolverine's Jean Grey School for Higher Learning. Nicknamed "Broo" by Oya, the Broodling was a mutant, both intelligent and non-violent, able to wear clothing and glasses (which he felt made him look less frightening). Broo expressed a desire to join the Nova Corps. In a possible future timeline seen by Deathlok, Broo joined the X-Men. During the Age of Ultron storyline, it is revealed that while in a hidden S.H.I.E.L.D. substation decades in the past, the future-Wolverine released and was infected by a less menacing Brood. When he cut the embryo out of his body, the Brood Collective responded to the attack by altering the physical structure of all future Brood to the form it is now known for. 
During the Infinity storyline, a Brood Queen appeared as a member of the Galactic Council, where she represents the Brood race, which indicates that the Brood Empress was apparently one of the casualties of the Annihilation Wave. She later made a deal with J'son, the former Emperor of the Spartoi Empire, under which J'son surrendered the planet Spartax to the Brood and, in exchange, would acquire one planet for every ten worlds they conquered thereafter. In Spider-Man and the X-Men, the Brood made a pact with the Symbiotes but ended up being betrayed and possessed until Spider-Man, with the help of the X-Men and S.W.O.R.D., managed to defeat them. During The Black Vortex storyline, the deal between the Brood Queen and J'son is discovered as the Brood begin their takeover of Spartax, intending to use the entire planet as hosts. The plot is foiled once Kitty Pryde is cosmically powered by the Black Vortex and banishes the Brood from the planet. Later, the Galactic Council manipulated Thanos into attacking the Earth so he could make way for them to raze the planet; the Queen of the Brood was killed by Angela before she and the other leaders of the Galactic Council could begin their attack. Dario Agger and the Roxxon Energy Corporation managed to obtain some Brood. Using parasites on wolves, Agger and one of his scientists sent them to track down Weapon H. When Weapon H slew them, Dario Agger sent Brood Drones, Brood-infected Space Sharks, and a Brood-infected human riding an Acanti to attack Weapon H. After the Brood Drones and Brood-Space Sharks are slain and the Acanti is knocked out, the Brood-infected human states to Weapon H that Roxxon wants to hire him. Weapon H stated that those who claim to help people will kill them anyway and had the Brood-infected human carry a message to Roxxon to leave him alone. Later, a Brood Queen came to Earth to find the perfect host so she could initiate the process of becoming the new Brood Empress. Her attempts at targeting astronauts were thwarted by the discovery that J. Jonah Jameson was the perfect host, but Frank Castle was able to interfere. When the New Mutants returned from a space adventure with a mysterious egg, it turned out to be extremely valuable to the rival alien nation-states of the Kree and the Shi'ar, as well as the Brood, as shown by the Kree and Shi'ar fighting over its location and the Brood invading Earth to get it back. The X-Men beat them back while Broo and his colleagues studied the egg and learned that it actually was the King Egg, a superweapon developed by Kree scientists on the Kree capital world of Hala thousands of years before the modern Marvel Universe, to foster the Brood race and instill a patriarchal element that, when activated, could give one member a supercharged version of the Empress' pheromonal control to turn the entire Brood species into a controllable army. This would allow the Kree to set the deadly predators against rival intergalactic powers and consume them. This leads every Brood Queen to send their swarms in pursuit of it, to prevent any loss of their power in the Brood hive-mind. The Brood initially attacked Earth, but a small team of X-Men including Cyclops, Jean Grey, Havok, Vulcan and Broo was able to get the King Egg off of Earth and lead the aliens off-planet. The Brood followed after them, with the X-Men eventually ending up alongside the Starjammers and members of the Shi'ar Imperial Guard. 
After crash-landing on an abandoned planet, it initially looks like the assembled heroes are going to be overwhelmed and wiped out by the sheer magnitude of Brood attacking them. But just as the various factions of the Brood descend upon them in a massive battle, to everyone's surprise, the Brood halt their attack when Broo eats the King Egg. This enhances Broo's biology, increasing his pheromone output to the point where even the Brood Queens become subservient to him. For all intents and purposes, eating the egg turned Broo into the Brood King. Some time later, the X-Men got a distress call from deep space and found that the galaxy's Brood problem was not as solved as they had thought. When Broo became the Brood King, he gained the ability to control the savage alien race he was both a part of and so different from; now he experiences his own nightmare scenario as rogue Brood factions begin running wild, killing his friends while he is powerless to stop them. It was soon revealed that Nightmare is the force behind the recent Brood expansion, using his abilities to usurp control of the race whenever Broo is asleep, doing so specifically to take revenge on the X-Men for Jean Grey's earlier victory over him, while the new Brood Empress, unhappy with the fact that Broo had control of the entire Brood, used this opportunity to break free of him. The Empress also blames the Kree for creating the King Egg that gave Broo control over her race, and has a convoluted scheme to convert superheroes into Brood to use them as an army against the Kree; the part-Kree Captain Marvel is particularly key to this plot. However, the plan backfired when the Empress killed Binary, enraging Carol to the point that she created a black hole which killed not only the Empress but her loyal Brood too. Known Brood The following characters are either Brood or were turned into Brood: Assassin – A Brood that was spawned from an Assassin's Guild member. Blake – A servant of the Roxxon Energy Corporation who was infected by the Brood parasite to help apprehend Weapon H. Blindside – A Brood Mutant that can teleport. He was killed by Storm. Brickbat – A Brood Mutant with super-strength. He was killed when Havok collapsed a building where a support beam impaled him. Broo – A Brood born a mutant when held in the Pandora's Box Space Station. Broodskrulls – A group of Brood and Skrull hybrids. Buchanan Mitty – Former entomologist turned Brood. Deadpal – A small Brood born from Deadpool's body after a failed transformation. Devros – A former Kree turned Brood. Dive-Bomber – A Brood Mutant that can fly with the wings on its back. He was killed by Havok. Dzilòs – A Brood killed by Wolverine. Empress Brood – Fang – An Imperial Guard member turned Brood. Haeg'Rill – One of the Brood who allied with Deathbird. Hannah Connover – A known Brood Queen who is married to William Connover. Harry Palmer – A human paramedic-turned-Brood who is the leader of the Brood Mutants, having infected various mutants. He was killed by Wolverine. Josey Thomas – A human paramedic-turned-Brood who is Harry Palmer's partner in the Brood Mutants. She is later killed by the Empress Brood. Kam'N'Ehar – One of the Brood who allied with Deathbird. Karl Lykos Brood clone – A clone with a mixture of both Sauron and Brood DNA, created by Kaga to join his army to annihilate the X-Men. 
Khasekhemwy (Khasekhemui) – A Pharaoh and ruler of Egypt during the Second Dynasty who was infected by the Brood. He and the Brood with him were killed by a coalition led by Imhotep. Krakoa Brood clone – A clone with a mixture of both Krakoa and Brood DNA, created by Kaga to join his army to annihilate the X-Men. Lockup – A Brood Mutant with a paralyzing touch. He was killed when Havok collapsed a stage on him and Spitball. Nassis – A former Shi'ar student turned Brood. No-Name – A Brood Queen who is a member of the Warbound. Queen of the Brood – An unnamed Brood Queen who is a member of the Galactic Council. Skur'kll – One of the Brood who allied with Deathbird. Spitball – Robert Delgado was a lawyer from Denver whose mutant powers allow him to spit plasma. He was among the mutants who were turned into Brood by Harry Palmer. During the fight with the X-Men, Spitball is killed when Havok collapsed a stage on him and Lockup. T'Crilēē – A hunt-master who contacted a Shi'ar vessel. Temptress – A Brood Mutant with pheromones that enable her to enslave anyone to her control. After ensnaring Psylocke and Rogue, Temptress was killed by Wolverine. Tension – A Brood Mutant who can extend his arms to constrict anyone. After attacking Reverend William Connover, Tension was killed by Havok. Tuurgid – A former Frost Giant turned Brood. Whiphand – A Brood Mutant who can transform his arms into long bands of energy that can disrupt the neuro-functions of anyone. He was killed by Colossus, who snapped his neck. Xzax – A Brood mercenary who is a member of Dracula's New Frightful Four. He was killed when Deadpool slammed him into a moving truck. Zen-Pram – A former Kree turned Brood. Other versions Age of Apocalypse In the Age of Apocalypse timeline, without the X-Men to aid them, part of the Shi'ar Imperium was consumed by the Brood, who infected its populace with Brood implants, including the still-captive Christopher Summers. Escaping to Earth, Summers fought to control his Brood implant, but was captured by Mister Sinister. Sinister turned him over to the Dark Beast, who then proceeded to experiment on him for years. Summers eventually escaped and began infecting other humans (including the AoA version of Joseph "Robbie" Robertson, as well as friends of Misty Knight and Colleen Wing). Ultimately, Corsair transformed into a Brood Queen and attempted to kill Alex, but was killed by his son Cyclops. The Summers brothers cremated their father and indirectly deprived Sinister of the chance to carry out further tests on Brood DNA. Amalgam Comics In Amalgam Comics, the Brood is combined with Brother Blood to form Brother Brood, and with the Cult of Blood to form the Cult of Brood. The Brood appear alongside Brother Brood, but are presented as supernatural rather than extraterrestrial. Bishop's timeline According to the time-traveling X-Man Bishop, there are benign factions of Brood in the future. It is speculated that these "good" Brood originated from Hannah Connover. JLA/Avengers In JLA/Avengers, the Brood have a brief cameo scene, where they are seen attacking Mongul and apparently invading Warworld as the two universes begin to come together. WildC.A.T.s/X-Men In WildC.A.T.s/X-Men: The Silver Age, alien hybrids of the Brood and Daemonites are created by Mister Sinister. Ultimate Marvel In the Ultimate Marvel universe, the Brood appeared as a Danger Room training exercise during the Tempest arc of Ultimate X-Men. 
The Brood are later revealed to be creatures native to the mindscape, where the Shadow King dwells. X-Men: The End In X-Men: The End, taking place in a possible future, the Brood hatch a plan with Lilandra (possessed by Cassandra Nova). Nova plans to solidify her rule over Shi'ar space by smuggling in an other-dimensional pure-Brood queen from an alternate universe. This realm is one where the X-Men never fought the Brood, so they are described as 'pure'. This Brood Queen is implanted in Lilandra's sister, Deathbird. Marvel 2099 During the attack of insanity brought on by Psiclone, the androgynous harlot Cash imagined the Brood among the races of extraterrestrials who swarmed the streets of Transverse City. X-Men '92 In the comic book series X-Men '92, which is set in the X-Men animated series' universe, a cadre of Mutant Brood called the X-Brood (composed of Hardside, Fastskin, Phader, Sharpwing and Openmind) were tracked down by the Shi'ar until they were saved by the X-Men. Earth X In Earth X, while telling Isaac Christians of the Dire Wraiths exiled by the Spaceknights into Limbo, Kyle Richmond mentions the Brood when wondering why invasion attempts were always made by shapeshifting races such as the Skrulls, the Impossible Men or the Brood. Marvel Zombies: Resurrection In Marvel Zombies: Resurrection, the infection that has transformed most of Earth's heroes into zombie-like beings is revealed to be the result of a Brood infesting Galactus, which allowed the Brood to achieve a new state of being and expand their resources even further. Heroes Reborn (2021) In the 2021 "Heroes Reborn" comic, the Brood were responsible for infecting the Imperial Guard members who were allied with Hyperion. In other media Television A heavily altered version of the Brood called the Colony appears in X-Men: The Animated Series. These versions are reptilian and possess metallic armor. The actual Brood make a cameo appearance in an episode featuring Mojo. The Brood make a cameo appearance in the Avengers Assemble episode "Mojoworld". A member of the Brood makes a cameo appearance in the M.O.D.O.K. episode "Beware What from Portal Comes!". Video games A Brood Queen appears as a boss in X-Men (1994). The Brood appear in X-Men: Mutant Apocalypse. A species based on the Brood called the Cerci appear in X-Men Legends II: Rise of Apocalypse. They are genetically engineered, insectoid creatures with animal-like intelligence. The Brood appear in Marvel Heroes. Brood appears as a card in Marvel Snap. Collectibles One of the Marvel Milestone statues features Marc Silvestri's Brood-infected Wolverine cover for Uncanny X-Men #234. Brood Queen is one of the "build a figure" toys in the Marvel Legends series. Broodling toys have been produced by Toy Biz (winged, for their X-Men line) and Marvel Select Toys (unwinged and based on Fang's transformation, in a two-pack with a Skrull warrior). References External links The Brood at Marvel.com The Brood at UncannyXmen.net Brood at Comic Vine Brood at Comic Book DB Fictional species and races Hive minds in fiction
Brood (comics)
[ "Biology" ]
6,812
[ "Superorganisms", "Fictional superorganisms" ]
1,029,711
https://en.wikipedia.org/wiki/Solar%20zenith%20angle
The solar zenith angle is the zenith angle of the sun, i.e., the angle between the sun’s rays and the vertical direction. It is the complement to the solar altitude or solar elevation, which is the altitude angle or elevation angle between the sun’s rays and a horizontal plane. At solar noon, the zenith angle is at a minimum and is equal to latitude minus solar declination angle. This is the basis by which ancient mariners navigated the oceans. Solar zenith angle is normally used in combination with the solar azimuth angle to determine the position of the Sun as observed from a given location on the surface of the Earth. Formula cos θs = sin αs = sin Φ sin δ + cos Φ cos δ cos h, where θs is the solar zenith angle, αs is the solar altitude angle (αs = 90° − θs), h is the hour angle in the local solar time, δ is the current declination of the Sun, and Φ is the local latitude. Derivation of the formula using the subsolar point and vector analysis While the formula can be derived by applying the cosine law to the zenith-pole-Sun spherical triangle, spherical trigonometry is a relatively esoteric subject. By introducing the coordinates of the subsolar point and using vector analysis, the formula can be obtained straightforwardly without the use of spherical trigonometry. In the Earth-Centered Earth-Fixed (ECEF) geocentric Cartesian coordinate system, let (φs, λs) and (φo, λo) be the latitude and longitude coordinates of the subsolar point and the observer's point, respectively; then the upward-pointing unit vectors at the two points, S and Vo, are S = cos φs cos λs i + cos φs sin λs j + sin φs k and Vo = cos φo cos λo i + cos φo sin λo j + sin φo k, where i, j and k are the basis vectors in the ECEF coordinate system. Now the cosine of the solar zenith angle, θs, is simply the dot product of the above two vectors: cos θs = S · Vo = sin φo sin φs + cos φo cos φs cos(λs − λo). Note that φs is the same as δ, the declination of the Sun, and λs − λo is equivalent to −h, where h is the hour angle defined earlier. So the above form is mathematically identical to the one given earlier. Additionally, Ref. also derived the formula for solar azimuth angle in a similar fashion without using spherical trigonometry. Minimum and Maximum At any given location on any given day, the solar zenith angle, θs, reaches its minimum, θmin, at local solar noon when the hour angle h = 0, or λs − λo = 0, namely, cos θmin = cos(φo − δ), or θmin = |φo − δ|. If θmin > 90°, it is polar night. And at any given location on any given day, the solar zenith angle, θs, reaches its maximum, θmax, at local midnight when the hour angle h = ±180°, or λs − λo = ±180°, namely, cos θmax = −cos(φo + δ), or θmax = 180° − |φo + δ|. If θmax < 90°, it is polar day. Caveats The calculated values are approximations due to the distinction between common/geodetic latitude and geocentric latitude. However, the two values differ by less than 12 minutes of arc, which is less than the apparent angular radius of the sun. The formula also neglects the effect of atmospheric refraction. Applications Sunrise/Sunset Sunset and sunrise occur (approximately) when the zenith angle is 90°, where the hour angle h0 satisfies cos h0 = −tan Φ tan δ. Precise times of sunset and sunrise occur when the upper limb of the Sun appears, as refracted by the atmosphere, to be on the horizon. Albedo A weighted daily average zenith angle, used in computing the local albedo of the Earth, is given by cos θ̄ = (∫ Q cos θ dt) / (∫ Q dt), where Q is the instantaneous irradiance. Summary of special angles For example, the solar elevation angle is: 90° at the subsolar point, which occurs, for example, at the equator on a day of equinox at solar noon near 0° at the sunset or at the sunrise between −90° and 0° during the night (midnight) An exact calculation is given in position of the Sun. Other approximations exist elsewhere. 
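To make the formulas above concrete, here is a minimal Python sketch (not part of the original article) that evaluates the zenith-angle formula and the sunrise/sunset hour angle; the function names and the clamping of floating-point rounding error are illustrative choices.

```python
import math

def solar_zenith_deg(lat_deg, decl_deg, hour_angle_deg):
    """cos(zenith) = sin(lat) sin(decl) + cos(lat) cos(decl) cos(h)."""
    phi, delta, h = map(math.radians, (lat_deg, decl_deg, hour_angle_deg))
    cos_z = math.sin(phi) * math.sin(delta) + \
            math.cos(phi) * math.cos(delta) * math.cos(h)
    # Clamp against tiny rounding excursions outside [-1, 1]
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_z))))

def sunrise_hour_angle_deg(lat_deg, decl_deg):
    """Solve cos(h0) = -tan(lat) tan(decl); None during polar day/night."""
    x = -math.tan(math.radians(lat_deg)) * math.tan(math.radians(decl_deg))
    if x < -1.0 or x > 1.0:
        return None  # the sun never crosses the horizon that day
    return math.degrees(math.acos(x))

# Noon zenith at 52 N on an equinox (declination 0): expect 52 degrees
print(solar_zenith_deg(52.0, 0.0, 0.0))
# Sunrise hour angle at 40 N at the June solstice: about 111 degrees
print(sunrise_hour_angle_deg(40.0, 23.44))
```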
See also Azimuth Solar azimuth angle Horizontal coordinate system List of orbits Position of the Sun Sun path Sunrise Sunset Sun transit time References Horizontal coordinate system Sun Solar energy
Solar zenith angle
[ "Astronomy" ]
776
[ "Astronomical coordinate systems", "Horizontal coordinate system" ]
1,029,748
https://en.wikipedia.org/wiki/Aleksandr%20Khinchin
Aleksandr Yakovlevich Khinchin (, ), July 19, 1894 – November 18, 1959, was a Soviet mathematician and one of the most significant contributors to the Soviet school of probability theory. Due to romanization conventions, his name is sometimes written as "Khinchin" and other times as "Khintchine". Life and career He was born in the village of Kondrovo, Kaluga Governorate, Russian Empire. While studying at Moscow State University, he became one of the first followers of the famous Luzin school. Khinchin graduated from the university in 1916, and six years later he became a full professor there, retaining that position until his death. Khinchin's early works focused on real analysis. Later he applied methods from the metric theory of functions to problems in probability theory and number theory. He became one of the founders of modern probability theory, discovering the law of the iterated logarithm in 1924, achieving important results in the field of limit theorems, giving a definition of a stationary process and laying a foundation for the theory of such processes. Khinchin made significant contributions to the metric theory of Diophantine approximations and established an important result for simple real continued fractions, discovering a property of such numbers that leads to what is now known as Khinchin's constant. He also published several important works on statistical physics, where he used the methods of probability theory, and on information theory, queuing theory and mathematical analysis. In 1939 Khinchin was elected a Corresponding Member of the Academy of Sciences of the USSR. He was awarded the USSR State Prize (1941) and the Order of Lenin. See also Pollaczek–Khinchine formula Wiener–Khinchin theorem Khinchin inequality Equidistribution theorem Khinchin's constant Khinchin–Lévy constant Khinchin's theorem on Diophantine approximations Law of the iterated logarithm Palm–Khintchine theorem Weak law of large numbers (Khinchin's law) Lévy–Khintchine formula of the characteristic function of a Lévy process Bibliography Sur la loi des grands nombres, in Comptes Rendus de l'Académie des Sciences, Paris, 1929 Asymptotische Gesetze der Wahrscheinlichkeitsrechnung, Berlin: Julius Springer, 1933 Continued Fractions, Mineola, N.Y.: Dover Publications, 1997, (first published in Moscow, 1935) Three Pearls of Number Theory, Mineola, N.Y.: Dover Publications, 1998, (first published in Moscow and Leningrad, 1947) Mathematical Foundations of Quantum Statistics, Mineola, N.Y.: Dover Publications, 1998, (first published in Moscow and Leningrad, 1951; trans. in 1960 by Irwin Shapiro) Mathematical Foundations of Information Theory, Dover Publications, 1957, References External links List of books by Khinchin provided by National Library of Australia A.Ya. Khinchin at Math-Net.Ru. 20th-century Russian mathematicians Soviet mathematicians Number theorists Probability theorists Queueing theorists Recipients of the Stalin Prize Moscow State University alumni Academic staff of Moscow State University Corresponding Members of the USSR Academy of Sciences 1894 births 1959 deaths Burials at Donskoye Cemetery Russian scientists
Aleksandr Khinchin
[ "Mathematics" ]
672
[ "Number theorists", "Number theory" ]
1,029,755
https://en.wikipedia.org/wiki/Solar%20azimuth%20angle
The solar azimuth angle is the azimuth (horizontal angle with respect to north) of the Sun's position. This horizontal coordinate defines the Sun's relative direction along the local horizon, whereas the solar zenith angle (or its complementary angle solar elevation) defines the Sun's apparent altitude. Conventional sign and origin There are several conventions for the solar azimuth; however, it is traditionally defined as the angle between a line due south and the shadow cast by a vertical rod on Earth. This convention states the angle is positive if the shadow is east of south and negative if it is west of south. For example, due east would be 90° and due west would be -90°. Another convention is the reverse; it also has the origin at due south, but measures angles clockwise, so that due east is now negative and west now positive. However, despite tradition, the most commonly accepted convention for analyzing solar irradiation, e.g. for solar energy applications, is clockwise from due north, so east is 90°, south is 180°, and west is 270°. This is the definition used by NREL in their solar position calculators and is also the convention used in the formulas presented here. However, Landsat photos and other USGS products, while also defining azimuthal angles relative to due north, take counterclockwise angles as negative. Conventional Trigonometric Formulas The following formulas assume the north-clockwise convention. The solar azimuth angle can be calculated to a good approximation with the formula sin φs = −sin h cos δ / sin θs; however, angles should be interpreted with care because the inverse sine, i.e. φs = arcsin(−sin h cos δ / sin θs) or φs = 180° − arcsin(−sin h cos δ / sin θs), has multiple solutions, only one of which will be correct. The following formulas can also be used to approximate the solar azimuth angle, but these formulas use cosine, so the azimuth angle as shown by a calculator will always be positive, and should be interpreted as the angle between zero and 180 degrees when the hour angle, h, is negative (morning) and the angle between 180 and 360 degrees when the hour angle, h, is positive (afternoon): cos φs = (sin δ cos Φ − cos h cos δ sin Φ) / sin θs and cos φs = (sin δ − cos θs sin Φ) / (sin θs cos Φ). (These two formulas are equivalent if one assumes the "solar elevation angle" approximation formula). So practically speaking, the compass azimuth, which is the practical value used everywhere (for example, in aviation as the so-called course) on a compass (where North is 0 degrees, East is 90 degrees, South is 180 degrees and West is 270 degrees), can be calculated as φcompass = φs for h ≤ 0 (morning) and φcompass = 360° − φs for h > 0 (afternoon), with φs taken from the inverse cosine above. The formulas use the following terminology: φs is the solar azimuth angle, θs is the solar zenith angle, h is the hour angle in the local solar time, δ is the current sun declination, and Φ is the local latitude. In addition, dividing the above sine formula by the first cosine formula gives one the tangent formula as is used in The Nautical Almanac. 
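As a worked illustration of the disambiguation just described, the following minimal Python sketch (an illustrative assumption of this edit, not code from any of the calculators cited) evaluates the second cosine formula and reflects the result according to the sign of the hour angle; the subsolar-point method described next removes the need for this manual step by using atan2.

```python
import math

def solar_zenith_azimuth(lat_deg, decl_deg, hour_angle_deg):
    """Return (zenith, compass azimuth) in degrees, north-clockwise.

    Implements the conventional formulas: the inverse cosine gives an
    angle in [0, 180] that must be reflected into [180, 360] in the
    afternoon (positive hour angle). Assumes the sun is not exactly at
    the zenith (sin(zenith) != 0).
    """
    phi = math.radians(lat_deg)       # observer latitude
    delta = math.radians(decl_deg)    # solar declination
    h = math.radians(hour_angle_deg)  # hour angle (negative = morning)

    cos_zen = math.sin(phi) * math.sin(delta) + \
              math.cos(phi) * math.cos(delta) * math.cos(h)
    zen = math.acos(max(-1.0, min(1.0, cos_zen)))

    # Second cosine formula: cos(az) = (sin(decl) - cos(zen) sin(lat))
    #                                  / (sin(zen) cos(lat))
    cos_az = (math.sin(delta) - cos_zen * math.sin(phi)) / \
             (math.sin(zen) * math.cos(phi))
    az = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))

    if hour_angle_deg > 0:  # afternoon: reflect to the western half
        az = 360.0 - az
    return math.degrees(zen), az

# Example: latitude 40 N, declination +23.44, 3 pm local solar time (h = +45):
# expect a westerly azimuth somewhat past 250 degrees.
print(solar_zenith_azimuth(40.0, 23.44, 45.0))
```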
The formula based on the subsolar point and the atan2 function A 2021 publication presents a method that uses a solar azimuth formula based on the subsolar point and the atan2 function, as defined in Fortran 90, that gives an unambiguous solution without the need for circumstantial treatment. The subsolar point is the point on the surface of the Earth where the Sun is overhead. The method first calculates the declination of the Sun and equation of time using equations from The Astronomical Almanac, then it gives the x-, y- and z-components of the unit vector pointing toward the Sun, through vector analysis rather than spherical trigonometry, as follows: φs = δ, λs = −15°(T_GMT − 12 + E_min/60), Sx = cos φs sin(λs − λo), Sy = cos φo sin φs − sin φo cos φs cos(λs − λo), Sz = sin φo sin φs + cos φo cos φs cos(λs − λo), where δ is the declination of the Sun, φs is the latitude of the subsolar point, λs is the longitude of the subsolar point, T_GMT is the Greenwich Mean Time or UTC, E_min is the equation of time in minutes, φo is the latitude of the observer, λo is the longitude of the observer, and Sx, Sy and Sz are the x-, y- and z-components, respectively, of the unit vector pointing toward the Sun. The x-, y- and z-axes of the coordinate system point to East, North and upward, respectively. It can be shown that Sx² + Sy² + Sz² = 1. With the above mathematical setup, the solar zenith angle and solar azimuth angle are simply θs = arccos(Sz) and γs = atan2(−Sx, −Sy) (South-Clockwise Convention), where θs is the solar zenith angle and γs is the solar azimuth angle following the South-Clockwise Convention. If one prefers the North-Clockwise Convention or the East-Counterclockwise Convention, the formulas are γs = atan2(Sx, Sy) (North-Clockwise Convention) and γs = atan2(Sy, Sx) (East-Counterclockwise Convention). Finally, the values of Sx, Sy and Sz at a 1-hour step for an entire year can be presented in a 3D plot of a "wreath of analemmas" as a graphic depiction of all possible positions of the Sun in terms of solar zenith angle and solar azimuth angle for any given location. Refer to sun path for similar plots for other locations. See also Equation of time Horizontal coordinate system Hour angle Position of the Sun Solar time Solar tracker Sun path Sunrise Sunset Zenith References External links Solar Position Calculators by National Renewable Energy Laboratory (NREL) Solar Position Algorithm for Solar Radiation Applications (NREL) An Excel workbook with VBA functions for solar azimuth, solar elevation, dawn, sunrise, solar noon, sunset, and dusk, by Greg Pelletier, translated from NOAA's online calculators for solar position and sunrise/sunset An Excel workbook with a solar position and solar radiation time-series calculator, by Greg Pelletier Sun Position Calculator Free on-line tool to estimate the position of the sun with three different algorithms. PVCDROM Azimuth Angle - online material regarding Photovoltaics by UNSW, ASU, NSF et al. Horizontal coordinate system Sun Solar energy
Solar azimuth angle
[ "Astronomy" ]
1,178
[ "Astronomical coordinate systems", "Horizontal coordinate system" ]
1,029,759
https://en.wikipedia.org/wiki/Haig%E2%80%93Simons%20income
Haig–Simons income or Schanz–Haig–Simons income is an income measure used by public finance economists to analyze economic well-being which defines income as consumption plus the change in net worth. It is represented by the mathematical formula: I = C + ΔNW, where C = consumption and ΔNW = change in net worth. Consumption refers to the money spent on goods and services of any kind. From a purely theoretical standpoint, consumption does not include capital expenditures, and such spending would be amortized in full. History The measure of the income tax base equal to the sum of consumption and change in net worth was first advocated by the German legal scholar Georg von Schanz. His concept was further developed by the American economists Robert M. Haig and Henry C. Simons in the 1920s and 1930s. Haig defined personal income as "the money value of the net accretion to one's economic power between two points of time," a formulation that was intended to include the taxpayer's consumption. That was thought by Simons to be interchangeable with his own formulation: "Personal income may be defined as the algebraic sum of (1) the market value of rights exercised in consumption and (2) the change in the value of the store of property rights between the beginning and end of the period in question." In this concept, all inflows and outflows of resources are considered taxable income in a broad sense, including donations and windfall gains. Schanz–Haig–Simons income tax vs. cash-flow consumption tax A cash-flow consumption tax is intended to confine the cash-flow tax burden to an individual's annual consumption and to remove nonconsumption expenses and current savings from the tax base. The base is calculated by combining the year's gross receipts and savings withdrawals, and then subtracting the year's business and investment expenses and the year's additions to savings. Progressive rates are applied to the resulting sum. By contrast, the base for a theoretically correct Schanz–Haig–Simons (SHS) income tax is each individual's annual consumption plus current additions to savings. Thus current receipts that are otherwise taxable remain in the tax base, even if they are saved, and withdrawals from earlier savings are not currently taxed, since they were assessed in a prior year. Stated differently, the SHS tax base has two components—current consumption and current savings (including current appreciation accruing to earlier investments)—whereas a cash-flow consumption tax has only a single component—current consumption. In spite of their differences, however, both a cash-flow consumption tax and an SHS tax require that dollars paid out as business or investment expenses be eliminated from the base. This is necessary under a cash-flow consumption tax because business and investment expenses are not consumption, and it is necessary under an SHS tax because these expenditures are neither consumption nor additions to savings. Since business and investment outlays have no place in the base of either tax, intuition suggests that business and investment interest expenses would be treated identically under a cash-flow consumption tax and an SHS tax. But they are not. The SHS tax and the cash-flow consumption tax take different structural approaches to the treatment of business and investment interest outlays, although both systems share the general objective of removing current business and investment costs from the tax base. 
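To make the structural contrast concrete, here is a small illustrative Python sketch (hypothetical numbers and function names, not drawn from the source) computing the two bases for one taxpayer-year.

```python
def shs_income(consumption, change_in_net_worth):
    """Haig-Simons base: I = C + change in net worth (savings plus accrued
    appreciation on earlier investments)."""
    return consumption + change_in_net_worth

def cash_flow_consumption_base(gross_receipts, savings_withdrawals,
                               business_expenses, additions_to_savings):
    """Cash-flow consumption tax base: receipts plus dissaving, minus
    business/investment costs and new saving."""
    return (gross_receipts + savings_withdrawals
            - business_expenses - additions_to_savings)

# Hypothetical taxpayer: earns 60,000, consumes 50,000, adds 10,000 to
# savings, and existing investments appreciate by 5,000 during the year.
print(shs_income(50_000, 10_000 + 5_000))                 # 65,000 (SHS base)
print(cash_flow_consumption_base(60_000, 0, 0, 10_000))   # 50,000 (consumption)
```

The saved 10,000 and the 5,000 of accrued appreciation stay in the SHS base but drop out of the cash-flow consumption base, which is exactly the two-component versus one-component distinction described above.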
Tax on Haig–Simons income Tax on change in wealth The Haig–Simons equation differs from the income tax base used in calculating the USA's individual income tax. For example, employer contributions to employee health insurance are not included in taxable employee income. Under the Haig–Simons definition of income, such contributions would be included. Such contributions might nevertheless be excluded from a Haig–Simons income tax base if their exclusion reflected "an appropriate adjustment in measuring ability to pay." Tax on consumption The European Union and most states in the USA tax the consumption component of Haig–Simons income through a consumption tax. In the European Union, a value added tax applies to purchases of goods and services at each level of exchange until they reach the ultimate consumer. In the US, most states tax purchases of goods with a sales tax. Criticisms of the definition Some argue that the definition is tautological: it is "little more than an accounting identity, a tautology: it tells us only that all income is either spent [consumption] or not [savings], which is obvious enough". Others observe that it is "only a surrogate utility measure." Some fault it for its lack of neutrality between savings and consumption. Some scholars resist these criticisms to the extent they conceive of Haig–Simons as dependent on utility; Simons rejected utility as the basis of the ability-to-pay standard. Indeed, Simons rejected both the notion that humans are "equally efficient pleasure machines," and the idea that taxation can take account of interpersonal utilities. Simons sought a measurable definition of income, but his solution is open to criticism for reifying troubling dichotomies; for example, the Haig–Simons definition depends on the distinction between market and non-market values. See also Flat tax on consumption References Further reading Income Equations
Haig–Simons income
[ "Mathematics" ]
1,084
[ "Mathematical objects", "Equations" ]
1,029,786
https://en.wikipedia.org/wiki/EO%20Personal%20Communicator
The EO is an early commercial tablet computer that was created by Eo Inc. (later acquired by AT&T Corporation) and released in April 1993. Eo (Latin for "I go") was the hardware spin-out of GO Corporation. Officially named the AT&T EO Personal Communicator, it is similar to a large personal digital assistant with wireless communications, and competed against the Apple Newton. The unit was produced in conjunction with David Kelley Design, frog design, and the Matsushita, Olivetti and Marubeni corporations. Among the customers AT&T claimed for the EO were the New York Stock Exchange, Andersen Consulting, Lawrence Livermore Laboratories, FD Titus & Sons, and Woolworths. Eo, Inc., 52 percent owned by AT&T, shut down operations on July 29, 1994, after failing to meet its revenue targets and to secure the funding to continue. It was reported that 10,000 of the computers had been sold. In 2012, PC Magazine called the AT&T EO 440 "the first true phablet". Product specifics Two models, the Communicator 440 and 880, were produced; each is about the size of a small clipboard. Both are powered by the AT&T Hobbit chip, created by AT&T specifically for running code written in the C programming language. They feature I/O ports including modem, parallel, serial, VGA out, and SCSI. The devices come with a wireless cellular network modem, a built-in microphone with speaker, and a free subscription to AT&T EasyLink Mail for both fax and e-mail messages. The operating system, PenPoint OS, was created by GO Corporation. Although widely praised for its simplicity and ease of use, the OS never gained widespread adoption. The applications suite, Perspective, was licensed to EO by Pensoft. See also Pen computing History of tablet computers Celeste Baranski Notes External links The EO 440 And EO 880 (subscription required) EO 440 receives one of 1993 Byte Awards Personal retrospective about working for EO Computer-related introductions in 1993 AT&T computers Personal digital assistants Tablet computers
EO Personal Communicator
[ "Technology" ]
450
[ "Mobile computer stubs", "Mobile technology stubs" ]
1,029,949
https://en.wikipedia.org/wiki/Printed%20circuit%20board%20milling
Printed circuit board milling (also: isolation milling) is the milling process used for removing areas of copper from a sheet of printed circuit board (PCB) material to recreate the pads, signal traces and structures according to patterns from a digital circuit board plan known as a layout file. Like the more common and well-known chemical PCB etch process, the PCB milling process is subtractive: material is removed to create the electrical isolation and ground planes required. However, unlike the chemical etch process, PCB milling is typically a non-chemical process, and as such it can be completed in a typical office or lab environment without exposure to hazardous chemicals. High-quality circuit boards can be produced using either process. In the case of PCB milling, the quality of a circuit board is chiefly determined by the system's true, or weighted, milling accuracy and control, as well as the condition (sharpness, temper) of the milling bits and their respective feed and rotational speeds. By contrast, in the chemical etch process, the quality of a circuit board depends on the accuracy and quality of the mask used to protect the copper from the chemicals and on the state of the etching chemicals. Advantages PCB milling has advantages for both prototyping and some special PCB designs. The biggest benefit is that no chemicals are required to produce PCBs. When creating a prototype, outsourcing a board takes time; the alternative is to make the PCB in-house. Using the wet process, in-house production raises problems with chemicals and their disposal. High-resolution boards are hard to achieve with the wet process, and even when it succeeds, one still has to drill and eventually cut the PCB out of the base material. CNC machine prototyping can provide a fast-turnaround board production process without the need for wet processing, and if a CNC machine is already used for drilling, that single machine can carry out the drilling, milling, and cutting. Many boards that are simple to mill would be very difficult to process by wet etching and subsequent manual drilling in a laboratory environment without using top-of-the-line systems that usually cost many times more than CNC milling machines. In mass production, milling is unlikely to replace etching, although the use of CNC is already standard practice for drilling the boards. Hardware A PCB milling system is a single machine that can perform all of the required actions to create a prototype board, with the exception of inserting vias and through-hole plating. Most of these machines require only a standard AC mains outlet and a shop-type vacuum cleaner for operation. Software Software for milling PCBs is usually delivered by the CNC machine manufacturer. Most of the packages can be split into two main categories, raster and vector. Software that calculates tool paths from raster data tends to offer lower processing resolution than vector-based software, since its precision is limited by the resolution of the raster information it receives. Mechanical system The mechanics behind a PCB milling machine are fairly straightforward and have their roots in CNC milling technology. A PCB milling system is similar to a miniature and highly accurate NC milling table. For machine control, positioning information and machine control commands are sent from the controlling software via a serial port or parallel port connection to the milling machine's on-board controller.
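As an illustration of this software-to-controller link, the sketch below streams a G-code file one command at a time over a serial port, waiting for an acknowledgement before sending the next line. It is a minimal sketch assuming a GRBL-style controller that answers each command with "ok" and the pyserial library; the port name, baud rate, and acknowledgement protocol are illustrative assumptions, not details taken from this article.

```python
import serial  # pyserial

def stream_gcode(path, port="/dev/ttyUSB0", baud=115200):
    """Send a G-code file to a milling controller line by line,
    waiting for the controller's acknowledgement after each command."""
    with serial.Serial(port, baud, timeout=10) as conn, open(path) as gcode:
        for line in gcode:
            cmd = line.split(";")[0].strip()   # drop comments and blank lines
            if not cmd:
                continue
            conn.write((cmd + "\n").encode("ascii"))
            reply = conn.readline().decode("ascii").strip()
            if reply != "ok":                  # GRBL-style per-line handshake
                raise RuntimeError(f"controller rejected {cmd!r}: {reply}")

stream_gcode("isolation_top.nc")  # hypothetical layout file name
```

Waiting for each acknowledgement is a deliberate design choice: simple on-board controllers have small input buffers, and sending commands faster than they are acknowledged risks overflowing the buffer and silently losing moves.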
The controller is then responsible for driving and monitoring the various positioning components which move the milling head and gantry, and for controlling the spindle speed. Spindle speeds can range from 30,000 RPM to 100,000 RPM depending on the milling system, with higher spindle speeds equating to better accuracy; in short, the smaller the tool diameter, the higher the RPM required. Typically this drive system comprises non-monitored stepper motors for the X/Y axes, an on-off non-monitored solenoid, pneumatic piston or lead screw for the Z-axis, and a DC motor control circuit for spindle speed, none of which provide positional feedback. More advanced systems provide a monitored stepper-motor Z-axis drive for greater control during milling and drilling, as well as more advanced RF spindle motor control circuits that provide better control over a wider range of speeds. X and Y-axis control For the X and Y-axis drive systems, most PCB milling machines use stepper motors that drive a precision lead screw. The lead screw is in turn linked to the gantry or milling head by a special precision-machined connection assembly. To maintain correct alignment during milling, the gantry or milling head's direction of travel is guided along linear or dovetailed bearings. Most X/Y drive systems provide user control, via software, of the milling speed, which determines how fast the stepper motors drive their respective axes. Z-axis control Z-axis drive and control are handled in several ways. The first and most common is a simple solenoid that pushes against a spring. When the solenoid is energized, it pushes the milling head down against a spring stop that limits the downward travel. The rate of descent, as well as the amount of force exerted on the spring stop, must be set manually by mechanically adjusting the position of the solenoid's plunger. The second type of Z-axis control uses a pneumatic cylinder and a software-driven gate valve. Due to the small cylinder size and the amount of air pressure used to drive it, there is little range of control between the up and down stops. Neither the solenoid nor the pneumatic system can position the head anywhere other than at its endpoints, so both are useful only for simple up/down milling tasks. The final type of Z-axis control uses a stepper motor that allows the milling head to be moved in small, accurate steps up or down. Further, the speed of these steps can be adjusted so that tool bits are eased into the board material rather than hammered into it. The depth (number of steps required) as well as the downward and upward speed is under user control via the controlling software. One of the major challenges with milling PCBs is handling variations in flatness. Since conventional etching techniques rely on optical masks that sit directly on the copper layer, they conform to any slight bends in the material, so all features are replicated faithfully. When milling PCBs, however, any minute height variation will cause a conical bit either to sink deeper (creating a wider cut) or to rise off the surface (leaving an uncut section). Before cutting, some systems therefore probe a grid of points across the board to build a height map and adjust the Z values in the G-code accordingly; a sketch of this correction is given after the following paragraph. Tooling PCBs may be machined with conventional endmills, conical d-bit cutters, and spade mills. D-bits and spade mills are cheap, and because they come to a small point they allow traces to be placed close together.
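Returning to the height-mapping step described above, the following Python sketch offsets each Z coordinate in the G-code by a surface height interpolated bilinearly from a grid of probe readings. It is a minimal illustration only: the probe-grid format, the G-code subset handled (plain G0/G1 moves within the probed area), and all names are assumptions, not features of any particular milling package.

```python
import re

def surface_height(probe, x, y):
    """Bilinearly interpolate the probed surface height at (x, y).
    `probe` maps (xi, yi) grid points to measured z offsets; the
    requested point is assumed to lie within the probed area."""
    xs = sorted({p[0] for p in probe})
    ys = sorted({p[1] for p in probe})
    x0 = max(v for v in xs if v <= x)
    x1 = min(v for v in xs if v >= x)
    y0 = max(v for v in ys if v <= y)
    y1 = min(v for v in ys if v >= y)
    tx = 0.0 if x1 == x0 else (x - x0) / (x1 - x0)
    ty = 0.0 if y1 == y0 else (y - y0) / (y1 - y0)
    return (probe[(x0, y0)] * (1 - tx) * (1 - ty)
            + probe[(x1, y0)] * tx * (1 - ty)
            + probe[(x0, y1)] * (1 - tx) * ty
            + probe[(x1, y1)] * tx * ty)

def level_gcode(lines, probe):
    """Rewrite G0/G1 moves so every Z word is offset by the local
    surface height; X/Y positions are tracked across lines."""
    x = y = 0.0
    out = []
    for line in lines:
        words = dict(re.findall(r"([XYZ])(-?\d+\.?\d*)", line))
        x = float(words.get("X", x))
        y = float(words.get("Y", y))
        if line.startswith(("G0", "G1")) and "Z" in words:
            z = float(words["Z"]) + surface_height(probe, x, y)
            line = re.sub(r"Z-?\d+\.?\d*", f"Z{z:.4f}", line)
        out.append(line)
    return out

# Example: a 2x2 probe grid (z offsets in mm) over a 10 mm square area.
probe = {(0, 0): 0.00, (10, 0): 0.05, (0, 10): -0.02, (10, 10): 0.03}
print(level_gcode(["G0 X5 Y5 Z1.0", "G1 X5 Y5 Z-0.1 F60"], probe))
```

The same idea scales to denser probe grids; the only requirement is that toolpath moves stay inside the probed area so that the interpolation is always bracketed by measured points.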
Taylor's equation, Vc T^n = C (where Vc is the cutting speed, T the tool life, and n and C empirically determined constants), can predict tool life for a given surface speed; because n is typically well below 1, even a modest increase in cutting speed shortens tool life considerably. References External links Software review and how-to's on RepRap wiki Printed circuit board manufacturing
Printed circuit board milling
[ "Engineering" ]
1,467
[ "Electrical engineering", "Electronic engineering", "Printed circuit board manufacturing" ]
1,029,967
https://en.wikipedia.org/wiki/Hypernucleus
A hypernucleus is similar to a conventional atomic nucleus, but contains at least one hyperon in addition to the normal protons and neutrons. Hyperons are a category of baryon particles that carry a non-zero strangeness quantum number, which is conserved by the strong and electromagnetic interactions. A variety of reactions give access to depositing one or more units of strangeness in a nucleus. Hypernuclei containing the lightest hyperon, the lambda (Λ), tend to be more tightly bound than normal nuclei, though they can decay via the weak force with a mean lifetime of a few hundred picoseconds. Sigma (Σ) hypernuclei have been sought, as have doubly-strange nuclei containing xi baryons (Ξ) or two Λ's. Nomenclature Hypernuclei are named in terms of their atomic number and baryon number, as in normal nuclei, plus the hyperon(s), which are listed in a left subscript of the symbol, with the caveat that atomic number is interpreted as the total charge of the hypernucleus, including charged hyperons such as the xi minus (Ξ−) as well as protons. For example, a Λ hypernucleus of oxygen-16 contains 8 protons, 7 neutrons, and one Λ (which carries no charge). History The first hypernucleus was discovered by Marian Danysz and Jerzy Pniewski in 1952, using a nuclear emulsion plate exposed to cosmic rays, based on its energetic but delayed decay. This event was inferred to be due to a nuclear fragment containing a Λ baryon. Experiments until the 1970s continued to study hypernuclei produced in emulsions using cosmic rays, and later using pion (π) and kaon (K) beams from particle accelerators. Since the 1980s, more efficient production methods using pion and kaon beams have allowed further investigation at various accelerator facilities, including CERN, Brookhaven National Laboratory, KEK, DAΦNE, and J-PARC. In the 2010s, heavy-ion experiments such as ALICE and STAR first allowed the production and measurement of light hypernuclei formed through hadronization from quark–gluon plasma. Properties Hypernuclear physics differs from that of normal nuclei because a hyperon is distinguishable from the four nucleon spin–isospin states. That is, a single hyperon is not restricted by the Pauli exclusion principle and can sink to the lowest energy level. As such, hypernuclei are often smaller and more tightly bound than normal nuclei; for example, the lithium hypernucleus (6Li plus a Λ) is 19% smaller than the normal nucleus 6Li. However, the hyperons can decay via the weak force; the mean lifetime of a free Λ is about 263 picoseconds, and that of a Λ hypernucleus is usually slightly shorter. A generalized mass formula developed for both non-strange normal nuclei and strange hypernuclei can estimate the masses of hypernuclei containing Λ, ΛΛ, Σ, and Ξ hyperon(s). The neutron and proton driplines for hypernuclei have been predicted, and the existence of some exotic hypernuclei beyond the normal neutron and proton driplines has been suggested. This generalized mass formula was named the "Samanta formula" by Botvina and Pochodzalla and used to predict relative yields of hypernuclei in heavy-ion collisions. Types Λ hypernuclei The simplest, and most well understood, type of hypernucleus includes only the lightest hyperon, the Λ. While two nucleons can interact through the nuclear force mediated by a virtual pion, the Λ becomes a Σ baryon upon emitting a pion, so the Λ–nucleon interaction is mediated solely by more massive mesons such as the η and ω mesons, or through the simultaneous exchange of two or more mesons.
This means that the Λ–nucleon interaction is weaker and has a shorter range than the standard nuclear force, and the potential well of a Λ in the nucleus is shallower than that of a nucleon; in hypernuclei, the depth of the Λ potential is approximately 30 MeV. However, one-pion exchange in the Λ–nucleon interaction does cause quantum-mechanical mixing of the Λ and Σ baryons in hypernuclei (which does not happen in free space), especially in neutron-rich hypernuclei. Additionally, the three-body force between a Λ and two nucleons is expected to be more important than the three-body interaction in nuclei, since the Λ can exchange two pions with a virtual Σ intermediate, while the equivalent process in nucleons requires a relatively heavy delta baryon (Δ) intermediate. Like all hyperons, the Λ in a hypernucleus can decay through the weak interaction, which changes it into a lighter baryon and emits a meson or a lepton–antilepton pair. In free space, the Λ usually decays via the weak force to a proton and a π− meson, or a neutron and a π0, with a mean lifetime of about 263 picoseconds. A nucleon in the hypernucleus can cause the Λ to decay via the weak force without emitting a pion; this process becomes dominant in heavy hypernuclei, due to suppression of the pion-emitting decay mode. The lifetime of the Λ in a hypernucleus is considerably shorter, plateauing at a roughly constant value in heavier hypernuclei, but some empirical measurements substantially disagree with each other or with theoretical predictions. Hypertriton The simplest hypernucleus is the hypertriton, which consists of one proton, one neutron, and one Λ hyperon. The Λ in this system is very loosely bound, having a separation energy of 130 keV and a large radius of 10.6 fm, compared to about 2 fm for the deuteron. This loose binding would imply a lifetime similar to that of a free Λ. However, the measured hypertriton lifetime averaged across all experiments is substantially shorter than predicted by theory, as the non-mesonic decay mode is expected to be relatively minor; some experimental results are substantially shorter or longer than this average. Σ hypernuclei The existence of hypernuclei containing a Σ baryon is less clear. Several experiments in the early 1980s reported bound hypernuclear states above the Λ separation energy, presumed to contain one of the slightly heavier Σ baryons, but experiments later in the decade ruled out the existence of such states. Results from exotic atoms containing a Σ− bound to a nucleus by the electromagnetic force have found a net repulsive Σ–nucleon interaction in medium-sized and large hypernuclei, which means that no Σ hypernuclei exist in that mass range. However, an experiment in 1998 definitively observed the light Σ hypernucleus 4ΣHe. ΛΛ and Ξ hypernuclei Hypernuclei containing two Λ baryons have been made. However, such hypernuclei are much harder to produce because they contain two strange quarks, and to date only seven candidate ΛΛ hypernuclei have been observed. As with the Λ–nucleon interaction, empirical and theoretical models predict that the Λ–Λ interaction is mildly attractive. Hypernuclei containing a Ξ baryon are also known. Empirical studies and theoretical models indicate that the Ξ−–proton interaction is attractive, but weaker than the Λ–nucleon interaction. Like the Σ− and other negatively charged particles, the Ξ− can also form an exotic atom.
When a Ξ− is bound in an exotic atom or a hypernucleus, it quickly decays to a ΛΛ hypernucleus or to two Λ hypernuclei by exchanging a strange quark with a proton, a conversion which releases about 29 MeV of energy in free space: Ξ− + p → Λ + Λ Ω hypernuclei Hypernuclei containing the omega baryon (Ω) were predicted using lattice QCD in 2018; in particular, the proton–Ω and Ω–Ω dibaryons (bound systems containing two baryons) are expected to be stable. To date, no such hypernuclei have been observed under any conditions, but the lightest such species could be produced in heavy-ion collisions, and measurements by the STAR experiment are consistent with the existence of the proton–Ω dibaryon. Hypernuclei with higher strangeness Since the Λ is electrically neutral and its nuclear force interactions are attractive, there are predicted to be arbitrarily large hypernuclei with high strangeness and small net charge, including species with no nucleons. Binding energy per baryon in multi-strange hypernuclei can reach up to 21 MeV/A under certain conditions, compared to 8.80 MeV/A for the ordinary nucleus 62Ni. Additionally, the formation of Ξ baryons in such systems should quickly become energetically favorable, because once the low-lying Λ levels are filled, the conversion of a Ξ into Λ's by strangeness exchange with a nucleon would be blocked by the Pauli exclusion principle. Production Several modes of production have been devised to make hypernuclei through bombardment of normal nuclei. Strangeness exchange and production In one production method, an incident K− meson exchanges a strange quark with a nucleon, changing it into a Λ: p + K− → Λ + π0 n + K− → Λ + π− The cross section for the formation of a hypernucleus is maximized when the momentum of the kaon beam is approximately 500 MeV/c. Several variants of this setup exist, including ones where the incident kaons are first brought to rest before colliding with a nucleus. In rare cases, the incoming K− can instead produce a Ξ hypernucleus via the reaction: p + K− → Ξ− + K+ The equivalent strangeness production reaction involves a π+ meson reacting with a neutron to change it into a Λ: n + π+ → Λ + K+ This reaction has a maximum cross section at a beam momentum of 1.05 GeV/c, and is the most efficient production route for Λ hypernuclei, but requires larger targets than strangeness exchange methods. Elastic scattering Electron scattering off a proton can change it to a Λ and produce a K+: p + e− → Λ + e−′ + K+ where the prime symbol denotes a scattered electron. The energy of an electron beam can be tuned more easily than that of pion or kaon beams, making it easier to measure and calibrate hypernuclear energy levels. First predicted theoretically in the 1980s, this method was first used experimentally in the early 2000s. Hyperon capture The capture of a Ξ− baryon by a nucleus can make a Ξ− exotic atom or hypernucleus. Upon capture, it changes to a ΛΛ hypernucleus or two Λ hypernuclei. The disadvantage is that the Ξ− baryon is harder to make into a beam than singly strange hadrons. However, an experiment at J-PARC that began in 2020 is compiling data on Ξ and ΛΛ hypernuclei using a similar, non-beam setup in which scattered Ξ− baryons rain onto an emulsion target.
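The roughly 29 MeV released in the Ξ− + p → Λ + Λ conversion discussed above follows directly from the rest masses of the particles involved. A small Python check, using approximate Particle Data Group mass values:

```python
# Rest masses in MeV/c^2 (approximate PDG values)
m_xi_minus = 1321.71
m_proton   = 938.27
m_lambda   = 1115.68

# Q-value of  Xi- + p -> Lambda + Lambda  in free space
q = (m_xi_minus + m_proton) - 2 * m_lambda
print(f"Q = {q:.1f} MeV")  # prints Q = 28.6 MeV, i.e. about 29 MeV
```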
Heavy-ion collisions Similar species Kaonic nuclei The K− meson can orbit a nucleus in an exotic atom, such as in kaonic hydrogen. Although the K−–proton strong interaction in kaonic hydrogen is repulsive, the K−–nucleus interaction is attractive for larger systems, so this meson can enter a strongly bound state closely related to a hypernucleus; in particular, the K−–proton–proton system is experimentally known and more tightly bound than a normal nucleus. Charmed hypernuclei Nuclei containing a charm quark have been predicted theoretically since 1977 and are described as charmed hypernuclei, despite the possible absence of strange quarks. In particular, the lightest charmed baryons, the Λc and Σc baryons, are predicted to exist in bound states in charmed hypernuclei, and could be created in processes analogous to those used to make hypernuclei. The depth of the Λc potential in nuclear matter is predicted to be 58 MeV, but unlike Λ hypernuclei, larger hypernuclei containing the positively charged Λc would be less stable than the corresponding Λ hypernuclei due to Coulomb repulsion. The mass difference between the Λc and the Σc is too large for appreciable mixing of these baryons to occur in hypernuclei. Weak decays of charmed hypernuclei have strong relativistic corrections compared to those in ordinary hypernuclei, as the energy released in the decay process is comparable to the mass of the Λ baryon. Antihypernuclei In August 2024, the STAR Collaboration reported the observation of the heaviest antimatter nucleus known, antihyperhydrogen-4, consisting of one antiproton, two antineutrons, and one anti-lambda hyperon. The anti-lambda hyperon and the antihypertriton had both been observed previously. See also Strangelet, a hypothetical form of matter that also contains strange quarks Notes References Exotic matter Nuclear physics Strange quark
Hypernucleus
[ "Physics" ]
2,733
[ "Nuclear physics", "Matter", "Exotic matter" ]
1,029,985
https://en.wikipedia.org/wiki/Prisoner-of-war%20camp
A prisoner-of-war camp (often abbreviated as POW camp) is a site for the containment of enemy fighters captured as prisoners of war by a belligerent power in time of war. There are significant differences among POW camps, internment camps, and military prisons. Purpose-built prisoner-of-war camps first appeared at Norman Cross in England in 1797, during the French Revolutionary Wars, followed by HM Prison Dartmoor, constructed during the Napoleonic Wars; such camps have been in use in all the main conflicts of the last 200 years. The main camps are used for marines, sailors, soldiers, and, more recently, airmen of an enemy power who have been captured by a belligerent power during or immediately after an armed conflict. Civilians, such as merchant mariners and war correspondents, have also been imprisoned in some conflicts. Under the 1929 Geneva Convention on Prisoners of War, later superseded by the Third Geneva Convention, such camps have been required to be open to inspection by representatives of a neutral power, but this requirement has not always been observed. Detention of prisoners of war before the development of camps Before the Peace of Westphalia, enemy fighters captured by belligerent forces were usually executed, enslaved, or held for ransom. This, coupled with the relatively small size of armies, meant there was little need for any form of camp to hold prisoners of war. The Peace of Westphalia, a series of treaties signed between May and October 1648 that ended the Thirty Years' War and the Eighty Years' War, contained a provision that all prisoners should be released without ransom. This is generally considered to mark the point from which captured enemy fighters would be treated reasonably before being released at the end of the conflict, or under a parole not to take up arms. The practice of paroling enemy fighters had begun thousands of years earlier, at least as early as the time of Carthage, but became normal practice in Europe from 1648 onwards. The consequent increase in the number of prisoners eventually led to the development of prisoner-of-war camps. Development of temporary camps Following General John Burgoyne's surrender at the Battle of Saratoga in 1777, several thousand British and German (Hessian and Brunswick) troops were marched to Cambridge, Massachusetts. For various reasons, the Continental Congress desired to move them south. For this purpose, one of the congressmen offered his land outside Charlottesville, Virginia. The remaining soldiers (some 2,000 British, upwards of 1,900 German, and roughly 300 women and children) marched south in late 1778, arriving at the site (near Ivy Creek) in January 1779. Since the barracks were barely adequate in construction, the officers were paroled to live as far away as Richmond and Staunton. The camp was never adequately provisioned, but the prisoners built a theater on the site. Hundreds escaped Albemarle Barracks because of the shortage of guards. As the British Army moved northward from the Carolinas in late 1780, the remaining prisoners were moved to Frederick, Maryland; Winchester, Virginia; and perhaps elsewhere. No remains of the encampment site are left. First purpose-built camp The earliest known purpose-built prisoner-of-war camp was established by the Kingdom of Great Britain at Norman Cross in 1797 to house the increasing number of prisoners from the French Revolutionary Wars and the Napoleonic Wars. The prison operated until 1814 and held between 3,300 and 6,272 men.
American Civil War camps Lacking a means of dealing with large numbers of captured troops early in the American Civil War, the Union and Confederate governments relied on the traditional European system of parole and exchange of prisoners. While awaiting exchange, prisoners were confined to permanent camps. Neither Union nor Confederate prison camps were always well run, and it was common for prisoners to die of starvation or disease. It is estimated that about 56,000 soldiers died in prisons during the war, almost 10% of all Civil War fatalities. During a period of 14 months at Camp Sumter, located near Andersonville, Georgia, 13,000 (28%) of the 45,000 Union soldiers confined there died. At Camp Douglas in Chicago, Illinois, 10% of its Confederate prisoners died during one cold winter month, and the 25% death rate at Elmira Prison in New York State very nearly equaled that of Andersonville. Boer War During the Second Boer War, the British government established prisoner-of-war camps (to hold captured Boer belligerents or fighters) and concentration camps (to hold Boer civilians). In total, six prisoner-of-war camps were erected in South Africa and around 31 in overseas British colonies to hold Boer prisoners of war. The majority of Boer prisoners of war were sent overseas (25,630 out of the 28,000 Boer men captured during the fighting). After an initial settling-in period, these prisoner-of-war camps were generally well administered. The number of concentration camps, all located in South Africa, was much higher: a total of 109 had been constructed by the end of the war, 45 camps for Boer civilians and 64 for black Africans. The vast majority of Boers held in the concentration camps were women and children. The concentration camps were generally poorly administered, the food rations were insufficient to maintain health, standards of hygiene were low, and overcrowding was chronic. Due to these conditions, thousands perished in the 109 concentration camps. Of the Boer women and children held in captivity, over 26,000 died during the war. Boer War camps World War I The first international convention on prisoners of war was signed at the Hague Peace Conference of 1899. It was widened by the Hague Convention of 1907. The main combatant nations engaged in World War I abided by the convention, and treatment of prisoners was generally good. The situation on the eastern front was significantly worse than on the western front, with prisoners in Russia at risk from starvation and disease. In total, about eight million men were held in prisoner-of-war camps during the war, with 2.5 million prisoners in German custody, 2.9 million held by the Russian Empire, and about 720,000 held by Britain and France. Permanent camps did not exist at the beginning of the war. The unexpectedly large number of prisoners captured in the first days of the war by the German army created an immediate problem. By September 1914, the German army had captured over 200,000 enemy combatants. These first prisoners were held in temporary camps until 1915, by which time the prisoner population had increased to 652,000 living in unsatisfactory conditions. In response, the government began constructing permanent camps both in Germany and in the occupied territories. The number of prisoners increased significantly during the war, exceeding one million by August 1915 and 1,625,000 by August 1916, and reaching 2,415,000 by the end of the war.
Geneva Conference The International Committee of the Red Cross held a conference in Geneva, Switzerland, in September 1917. The conference addressed the war, and the Red Cross addressed the conditions that civilians were living under, which resembled those of soldiers in prisoner-of-war camps, as well as the "barbed wire disease" (symptoms of mental illness) suffered by prisoners in France and Germany. It was agreed at the conference that the Red Cross would provide prisoners of war with mail, food parcels, clothes, and medical supplies, and that prisoners in France and Germany suffering from "barbed wire disease" should be interned in Switzerland, a neutral country. A few countries were not on the same terms as Germany and Austria; Hungary, for example, believed that harsh conditions would reduce the number of traitors. The countries in the east continued their efforts to help the Red Cross provide support to POWs. At the end of the war, a Franco-German agreement was made that both countries would exchange their prisoners, but the French kept a small number while the Germans released all French prisoners. Krasnoyarsk Krasnoyarsk in Siberia, Russia, was used after the Russian defeat by Japan in the Russo-Japanese War as a base for military camps to train for future wars. Conditions there were dire, and the detainees could be conscripted for war while they lived in concentration camps and prisons. Over 50,000 camp inmates were used for transportation, agriculture, mining, and machinery production. Throughout World War I, captured prisoners of war were sent to various camps, including the one in Krasnoyarsk. At one point a large mix of nationalities was held together in Krasnoyarsk, including Bulgarians, Czechs, Germans, and Poles. Many prisoners were nationalists, which led to violence within the camp, and the instigators had to be put down by force to keep the camp running. Polish–Soviet War From autumn 1920, thousands of captured Red Army soldiers and guards had been placed in the Tuchola internment camp in Pomerania. These prisoners lived in dugouts, and many died of hunger, cold, and infectious diseases. According to the historians Zbigniew Karpus and Waldemar Rezmer, up to 2,000 prisoners died in the camp during its operation. In a joint work by Polish and Russian historians, Karpus and Rezmer estimate the total death toll in all Polish POW camps during the war at 16,000–17,000, while the Russian historian Matvejev estimates it at 18,000–20,000. On the other side of the frontline, about 20,000 out of about 51,000 Polish POWs died in Soviet and Lithuanian camps. While the conditions for Soviet prisoners were clearly exposed by the free press in Poland, no corresponding fact-finding about Soviet camps for Polish POWs could be expected from the tightly controlled Soviet press of the time. Available data show many cases of mistreatment of Polish prisoners. There were also cases of Polish POWs being executed by the Soviet army when no POW facilities were available. World War II The 1929 Geneva Convention on Prisoners of War established certain provisions relative to the treatment of prisoners of war. One requirement was that POW camps were to be open to inspection by authorised representatives of a neutral power. Article 10 required that POWs be lodged in adequately heated and lighted buildings where conditions were the same as for their own troops. Articles 27–32 detailed the conditions of labour.
Enlisted ranks were required to perform whatever labour they were asked and able to do, so long as it was not dangerous and did not support the captor's war effort. Senior non-commissioned officers (sergeants and above) were required to work only in a supervisory role. Commissioned officers were not required to work, although they could volunteer. The work performed was largely agricultural or industrial, ranging from coal or potash mining and stone quarrying to work in saw mills, breweries, factories, railway yards, and forests. POWs were hired out to military and civilian contractors and were paid $0.80 per day in scrip in U.S. camps. The workers were also supposed to get at least one day of rest per week. Article 76 ensured that POWs who died in captivity were to be honourably interred in marked graves. Not all combatants applied the provisions of the convention. In particular the Empire of Japan, which had signed but never ratified the convention, was notorious for its treatment of prisoners of war; this poor treatment occurred in part because the Japanese viewed surrender as dishonourable. Prisoners from all nations were subject to forced labour, beatings, torture, murder, and even medical experimentation. Rations fell short of the minimum required to sustain life, and many prisoners were forced into labour. After March 20, 1943, the Imperial Navy was under orders to execute all prisoners taken at sea. Japanese POW camps were located throughout south-east Asia and the territories conquered by Japan. Escapes The Great Escape from Stalag Luft III, on the night of March 24, 1944, involved the escape of 76 Allied servicemen, although only three were able to avoid recapture. The Cowra breakout, on August 5, 1944, is believed to be the largest escape of POWs in recorded history and possibly the largest prison breakout ever. At least 545 Japanese POWs attempted to escape from a camp near Cowra, New South Wales, Australia. Most sources say that 234 POWs were killed or committed suicide; the remainder were recaptured. The Great Papago Escape, on December 23, 1944, was the largest POW escape to occur from an American facility. Over 25 German POWs tunneled out of Camp Papago Park, near Phoenix, Arizona, and fled into the surrounding desert. Over the next few weeks all were recaptured. The escape of Felice Benuzzi, Giovanni ('Giuàn') Balletto, and Vincenzo ('Enzo') Barsotti from Camp 354 in Nanyuki, Kenya, on a lark to climb Mount Kenya, is of particular note. The account is recorded by Benuzzi in No Picnic on Mount Kenya. After their attempt to climb Mount Kenya, the trio "escaped" back into Camp 354. Role of the Red Cross After World War I, during which around 40 million civilians and prisoners could not be helped, the Red Cross was entrusted with more rights and responsibilities. In the course of World War II, it provided millions of Red Cross parcels to Allied POWs in Axis prison camps; most of these contained food and personal hygiene items, while others held medical kits. A special "release kit" parcel was also provided to some newly released POWs at the war's end. After the United States declared war on Japan, the Red Cross stepped up to provide services for the soldiers overseas. A large quantity of provisions was needed for the soldiers over the four years that the Americans were involved in World War II. The American Red Cross and its thirteen million volunteers collected donations across the country, averaging 111,000 pints of blood per week.
Nurses, doctors, and volunteer workers worked on the front lines overseas to provide for the wounded and the needy. This program saved thousands of lives, as plasma donations were delivered to the camps and bases. However, the Red Cross accepted donations only from white Americans and excluded those of Japanese, Italian, German, and African Americans. To combat this, activists fought such segregation at home with the argument that the blood of white and black Americans is the same. Allied camps Featherston prisoner of war camp, New Zealand List of POW camps in Australia List of POW camps in Britain List of POW camps in Canada List of POW camps in Kenya List of POW camps in occupied Germany List of POW camps in the United States List of POW camps in USSR Lom prisoner of war camp, Norway Skorpa prisoner of war camp, Norway Zonderwater POW camp in Cullinan, South Africa Conditions in Japanese camps In the lead-up to the Second World War, Japan had engaged in several conflicts aimed at expanding its empire, most notably the Second Sino-Japanese War. Although it maintained its neutrality at the outbreak of war in Europe, in 1941 the Japanese military launched surprise attacks on Hong Kong, Singapore, Thailand, the Philippines, and Pearl Harbor, which brought the United States into the war on the side of the Allies. In 1942, after they had captured Hong Kong from the British, the Japanese established several prisoner-of-war camps in Kowloon to house Allied prisoners of war. Believing it was shameful to be captured alive in combat, the Japanese ran their prisoner-of-war camps brutally, and many Allied prisoners of war died in them. The Japanese field army code included a "warrior spirit", which held that an individual must calmly face death. Those who disobeyed orders would be sentenced to death by decapitation, usually carried out with the katana of a Japanese officer. The sword was seen by the Japanese as a symbol of wisdom and perseverance, and they perceived it as an honor to die by it. Allied prisoners of war in Japanese camps were forced to engage in physical labour such as building bridges, erecting forts, and digging defence trenches. These prisoners received limited food, and once their military uniforms wore out, no replacements were given. Some brutal prison guards would answer requests for water with beatings or blows from rifle butts. Prisoners who were seen as useless, physically weak, or rebellious were often killed. At the end of the war, when the camp inmates were released, many had lost body parts, and many were starved to the point of extreme emaciation. Some prisoners feared execution by the Japanese in response to American bombing. The brutality of the guards left traumatized prisoners with mental illnesses that persisted for decades afterward. In many cases, survivors of the camps were traumatized or left living with a disability. Many survivors went home or to other parts of the world to build successful lives in business, or devoted themselves to helping the poor or fellow former prisoners in need of support. A former POW, Lieutenant Colonel Philip Toosey, stated that the Japanese committed brutal atrocities. These included filling a prisoner's nose with water while guards bound him with barbed wire and then stood on the prisoner, pressing on the wires. In other cases, guards would hang a prisoner from a tree by the thumbs, with his toes barely touching the ground, and leave him there for two days without food or water.
After the two days of torture, the prisoner would be jailed prior to execution, after which the corpse would be burnt. Life in the POW camps was recorded, at great risk to themselves, by artists such as Jack Bridger Chalker, Philip Meninsky, John Mennie, Ashley George Old, and Ronald Searle. Human hair was often used for brushes, plant juices and blood for paint, and toilet paper as the "canvas". Some of their works were used as evidence in the trials of Japanese war criminals. Many are now held by the Australian War Memorial, the State Library of Victoria, and the Imperial War Museum in London. The State Library of Victoria exhibited many of these works under the title The Major Arthur Moon Collection in 1995. In 2016, the war historian Antony Beevor (who had recently completed his book The Second World War) said that the UK government had recently declassified information that some British POWs in some Japanese POW camps were fattened and then cannibalised. Apparently, Winston Churchill had been aware of this atrocity but kept the information secret; families would have been too distressed to learn that their sons had been the victims of cannibalism rather than killed in action. More deaths occurred in Japanese POW camps than in any others. The Red Cross was not able to drop parcels into these camps because they were too well defended to fly over. Canadian camps The Second World War was mainly fought in Europe and western Russia, East Asia, and the Pacific; there were no invasions of Canada. The few prisoners of war sent to Canada included Japanese and German soldiers, captured U-boat crews, and prisoners from raids such as Dieppe and Normandy. The camps meant for German POWs were smaller than those meant for Japanese prisoners of war and were far less brutal. German prisoners generally benefitted from good food. However, the hardest part was surviving the Canadian winters: most camps were isolated and located in the far north, and death and sickness caused by the elements were common. Many camps were only lightly guarded, and as a result many Germans attempted escape, tunnelling being the most common method. Peter Krug, an escapee from a prison located in Bowmanville, Ontario, fled along the railroads, using forests as cover. He made his way to Toronto, and from there travelled to Texas. Fighting, sometimes to the death, was fairly common in the camps. Punishments for major infractions could include death by hanging. German POWs wore shirts with a large red dot painted on the back, an easily identifiable mark outside the camps; escapees could therefore be easily spotted and recaptured. Japanese in Canada In the wake of the Japanese attacks on Hong Kong, the Philippines, and Pearl Harbor, in which 2,000 Canadians were involved, Canadian suspicion fell heavily on Japanese Canadians, despite their innocence. Japan seemed capable of attacking along the Pacific coast, and Canada could potentially be next. The Canadian Prime Minister, William Lyon Mackenzie King, implemented the War Measures Act and the Defence of Canada Regulations, under which Japanese Canadians, like Italian and German Canadians, could not become involved with the Canadian services. The Nikkei (Canadians and immigrants of Japanese origin) were stripped of their possessions, which were later auctioned off without consent. The intensely cold winters made life hard for the Nikkei placed in the camps, whose inmates included both Japanese immigrants and Japanese Canadians.
They lived in barns and stables that had been used for animals and were therefore unsanitary. It took five years after the war for the Nikkei to regain their rights. Compensation was paid but was not enough to cover the loss of their property. Over 22,000 Nikkei were put into these camps. Axis camps List of POW camps in Germany and German-occupied countries (Stalags) List of Japanese war ships List of POW camps in Italy List of POW camps in Japan List of POW camps in Switzerland Cigarettes as currency In many POW camps, cigarettes were widely used as a currency known as 'commodity money'. They performed the function of money as a medium of exchange, because they were generally accepted among the prisoners for settling payments or debts, and the function of money as a unit of account, because prices of other goods were expressed in terms of cigarettes. Compared with other goods, the supply of cigarettes was more stable, as they were rationed in the POW camps, and cigarettes were more divisible, portable, and homogeneous. Korean War U.N. camps The International Red Cross visited United Nations-run POW camps, often unannounced, noting prisoner hygiene, quality of medical care, variety of diet, and weight gain. They talked to the prisoners and asked for their comments on conditions, as well as providing them with copies of the Geneva Convention. The IRC delegates distributed boots, soap, and other requested goods. A prison camp was established on the island of Koje-do, where over 170,000 communist and non-communist prisoners were held from December 1950 until June 1952. Throughout 1951 and early 1952, upper-level communist agents infiltrated and conquered much of Koje section by section by uniting fellow communists; bending dissenters to their will through staged trials and public executions; and exporting allegations of abuse to the international community to benefit the communist negotiation team. In May 1952, Chinese and North Korean prisoners rioted and took Brigadier General Francis T. Dodd captive. In 1952 the camp's administration was afraid that the prisoners would riot and demonstrate on May Day (a day honoring Communism), so United States Navy ships (such as the USS Gunston Hall) removed 15,000 North Korean and Chinese prisoners from the island and moved them to prison facilities at Ulsan and Cheju-do. These ships also participated in Operation Big Switch in September 1953, when prisoners were exchanged at the end of the war. Communist camps The Chinese operated three types of POW camps during the Korean War. Peace camps housed POWs who were sympathetic to communism; reform camps were intended for skilled POWs who were to be indoctrinated in communist ideology; and the third type was the ordinary POW camp. Chinese policy did not allow for the exchange of prisoners held in the first two camp types. While these POW camps were designated numerically by the communists, the POWs often gave the camps colloquial names. Camp 1 – Changsong – near Camp 3 on the Yalu River. Camp 2 – Pyoktong – on the Yalu River. Camp 3 – Changsong – near Camp 1 on the Yalu River. Camp 4 – north of Camp 2. Camp 5 – near Pyoktong. Camp 6 – P'yong-yang. Camp 7 – near Pyoktong. Camp 8 – Kangdong. Camp 9 – P'yong-yang. Camp 10 – Chon ma. Camp 11 – Pukchin. Camp 12 – P'yong-yang (Peace Camp) – located in the northwestern vicinity of the capital; nearby were several other camps, including Pak's Palace. Bean Camp – Suan. Camp DeSoto – P'yong-yang locale – near Camp 12.
Pak's Palace Camp – P'yong-yang locale – located in the northernmost area, near the capital, close to Camp 12. Pukchin Mining Camp – between Kunu-ri and Pyoktong (aka Death Valley Camp). Sunchon Tunnel (aka Caves Camp). Suan Mining Camp – P'yong-yang. Valley Camps – Teksil-li. Vietnam War South Vietnamese Army camps in South Vietnam By the end of 1965, Viet Cong suspects, prisoners of war, and even juvenile delinquents were mixed together in South Vietnamese jails and prisons. After June 1965, the prison population steadily rose, and by early 1966 there was no space to accommodate additional prisoners in the existing jails and prisons. In 1965, plans were made to construct five POW camps, each with an initial capacity of 1,000 prisoners, to be staffed by the South Vietnamese military police, with U.S. military policemen assigned to each stockade as prisoner-of-war advisers. Prisons and jails Côn Đảo National Prison Chí Hòa National Prison Tam Hiep National Prison Thu Duc National Prison plus 42 province jails Camps Bien Hoa camp – in III Corps area, opened May 1966 Pleiku camp – in II Corps area, opened August 1966 Da Nang camp (Non Nuoc) – in I Corps area, opened November 1966 Can Tho camp – in IV Corps area, opened early 1967 Qui Nhon (Phu Tai) – opened March 1968 (for female POWs) Phú Quốc camp – off the coast of Cambodia, opened in 1968 North Vietnamese Army camps "Alcatraz" – North Central Hanoi "Briarpatch" – WNW of Hanoi "Camp Faith" – West of Hanoi "Dirty Bird" – Northern Hanoi "Dogpatch" – NNE of Hanoi "Farnsworth" – SW of Hanoi "Hanoi Hilton" – Hoa Lo, Central Hanoi "Mountain Camp" – NW of Hanoi "Plantation" – Northeast Hanoi "Rockpile" – South of Hanoi Sơn Tây – West of Hanoi "Skidrow" – SW of Hanoi "The Zoo" – SW suburb of Hanoi Yugoslav wars Serb Camps Manjača camp – Banja Luka, Republika Srpska Sremska Mitrovica camp – Sremska Mitrovica, Vojvodina Stajićevo camp – Stajićevo, Vojvodina Other Camps Čelebići prison camp – Konjic, Federation of Bosnia and Herzegovina Lapušnik prison camp – Kosovo Afghanistan and Iraq wars The United States of America refused to grant prisoner-of-war status to many prisoners captured during its War in Afghanistan (2001–2021) and the 2003 invasion of Iraq. This was mainly because it classed them as insurgents or terrorists who did not meet the requirements laid down by the Third Geneva Convention of 1949, such as being part of a chain of command, wearing a "fixed distinctive marking, visible from a distance", bearing arms openly, and conducting military operations in accordance with the laws and customs of war. The legality of this refusal has been questioned, and cases are pending in the U.S. courts. In the Hamdan v. Rumsfeld court case, on June 29, 2006, the U.S. Supreme Court ruled that the captives at Guantanamo Bay detention camp were entitled to the minimal protections listed under Common Article 3 of the Geneva Conventions. Other captives, including Saddam Hussein, have been accorded POW status. The International Red Cross has been permitted to visit at least some sites. Many prisoners were held in secret locations (black sites) around the world.
The identified sites are listed below: Abu Ghraib prison – 32 km west of Baghdad, Iraq Bagram Air Base – near Charikar in Parvan, Afghanistan Camp Bucca – near Umm Qasr, Iraq Camp Delta – Guantanamo Bay, Cuba See also List of World War II prisoner-of-war camps in the United States American Civil War prison camps Finnish Civil War prison camps Internment camp List of prisoner-of-war escapes List of World War II POW camps Military prison Eden Camp Museum Notes and references Bibliography Burnham, Philip. So Far from Dixie: Confederates in Yankee Prisons (2003) Byrne, Frank L., "Libby Prison: A Study in Emotions," Journal of Southern History 1958 24(4): 430–444. in JSTOR Cloyd, Benjamin G. Haunted by Atrocity: Civil War Prisons in American Memory (Louisiana State University Press; 2010), 272 pages. Traces shifts in Americans' views of the brutal treatment of soldiers in both Confederate and Union prisons, from raw memories in the decades after the war to a position that deflected responsibility. Horigan, Michael. Elmira: Death Camp of the North (2002) Imprisonment and detention Total institutions
Prisoner-of-war camp
[ "Biology" ]
5,925
[ "Behavioural sciences", "Behavior", "Total institutions" ]
1,030,104
https://en.wikipedia.org/wiki/Darknet
A darknet or dark net is an overlay network within the Internet that can only be accessed with specific software, configurations, or authorization, and often uses a unique customized communication protocol. Two typical darknet types are social networks (usually used for file hosting with a peer-to-peer connection) and anonymity proxy networks such as Tor, reached via an anonymized series of connections. The term "darknet" was popularized by major news outlets and became associated with Tor onion services when the infamous drug bazaar Silk Road used it, despite the terminology being unofficial. Technologies such as Tor, I2P, and Freenet are intended to defend digital rights by providing security, anonymity, or censorship resistance, and are used for both illegal and legitimate purposes. Anonymous communication between whistle-blowers, activists, journalists, and news organisations is also facilitated by darknets through the use of applications such as SecureDrop. Terminology The term originally described computers on ARPANET that were hidden: programmed to receive messages but not to respond to or acknowledge anything, thus remaining invisible and in the dark. Since ARPANET, the usage of "dark net" has expanded to include friend-to-friend networks (usually used for file sharing with a peer-to-peer connection) and privacy networks such as Tor. The reciprocal term for a darknet is a clearnet or the surface web when referring to content indexable by search engines. The term "darknet" is often used interchangeably with "dark web" because of the quantity of hidden services on Tor's darknet. Additionally, the term is often inaccurately used interchangeably with the deep web because of Tor's history as a platform that could not be search-indexed. Mixing uses of both these terms has been described as inaccurate, with some commentators recommending that the terms be used in distinct fashions. Origins "Darknet" was coined in the 1970s to designate networks isolated from ARPANET (the government-founded military/academic network which evolved into the Internet) for security purposes. Darknet addresses could receive data from ARPANET but did not appear in the network lists and would not answer pings or other inquiries. The term gained public acceptance following publication of "The Darknet and the Future of Content Distribution", a 2002 paper by Peter Biddle, Paul England, Marcus Peinado, and Bryan Willman, four employees of Microsoft who argued that the presence of the darknet was the primary hindrance to the development of workable digital rights management (DRM) technologies and made copyright infringement inevitable. This paper described "darknet" more generally as any type of parallel network that is encrypted or requires a specific protocol to allow a user to connect to it. Sub-cultures Journalist J. D. Lasica, in his 2005 book Darknet: Hollywood's War Against the Digital Generation, described the darknet's reach as encompassing file sharing networks. Subsequently, in 2014, journalist Jamie Bartlett in his book The Dark Net used the term to describe a range of underground and emergent subcultures, including camgirls, cryptoanarchists, darknet drug markets, self-harm communities, social media racists, and transhumanists. Uses Darknets in general may be used for various reasons, such as: To better protect the privacy rights of citizens from targeted and mass surveillance Computer crime (cracking, file corruption, etc.)
Protecting dissidents from political reprisal File sharing (warez, personal files, pornography, confidential files, illegal or counterfeit software, etc.) Sale of restricted goods on darknet markets Whistleblowing and news leaks Purchase or sale of illicit or illegal goods or services Circumventing network censorship and content-filtering systems, or bypassing restrictive firewall policies Software All darknets require specific software to be installed or network configurations to be made in order to access them; Tor, for example, can be accessed via a customized browser from Vidalia (aka the Tor browser bundle), or alternatively via a proxy configured to perform the same function. Active Tor is the most popular instance of a darknet, and it is often mistakenly thought to be the only online tool that facilitates access to darknets. Alphabetical list: anoNet is a decentralized friend-to-friend network built using VPN and software BGP routers. Decentralized network 42 (not for anonymity but research purposes). Freenet is a popular DHT file hosting darknet platform. It supports friend-to-friend and opennet modes. GNUnet can be utilized as a darknet if the "F2F (network) topology" option is enabled. I2P (Invisible Internet Project) is an overlay proxy network that features hidden services called "Eepsites". IPFS has a browser extension that may back up popular webpages. RetroShare is a friend-to-friend messenger communication and file transfer platform. It may be used as a darknet if DHT and Discovery features are disabled. Riffle is a client-server darknet system that simultaneously provides secure anonymity (as long as at least one server remains uncompromised), efficient computation, and minimal bandwidth burden. Secure Scuttlebutt is a peer-to-peer communication protocol, mesh network, and self-hosted social media ecosystem. Syndie is software used to publish distributed forums over the anonymous networks of I2P, Tor and Freenet. Tor (The onion router) is an anonymity network that also features a darknet via its onion services. Tribler is an anonymous BitTorrent client with a built-in search engine and non-web, worldwide publishing through channels. Urbit is a federated system of personal servers in a peer-to-peer overlay network. Zeronet is a decentralized, DHT-based Web 2.0 hosting platform whose users can route traffic through Tor. No longer supported StealthNet (discontinued) WASTE Defunct AllPeers Turtle F2F See also BlackBook (social network) Crypto-anarchism Cryptocurrency Darknet market Dark web Deep web Private peer-to-peer (P2P) Sneakernet Virtual private network (VPN) References File sharing Virtual private networks Darknet markets Cyberspace Internet terminology Dark web Network architecture Distributed computing architecture 1970s neologisms Internet architecture
Darknet
[ "Technology", "Engineering" ]
1,284
[ "Computing terminology", "Internet architecture", "IT infrastructure", "Cyberspace", "Internet terminology", "Network architecture", "Computer networks engineering", "Information technology" ]
1,030,288
https://en.wikipedia.org/wiki/Henri%20Moissan
Ferdinand Frédéric Henri Moissan (28 September 1852 – 20 February 1907) was a French chemist and pharmacist who won the 1906 Nobel Prize in Chemistry for his work in isolating fluorine from its compounds. Moissan was one of the original members of the International Atomic Weights Committee.

Biography

Early life and education

Moissan was born in Paris on 28 September 1852, the son of a minor officer of the Eastern Railway Company, Francis Ferdinand Moissan, and a seamstress, Joséphine Améraldine (née Mitel). His mother was of Jewish descent; his father was not. In 1864 they moved to Meaux, where he attended the local school. During this time, Moissan became an apprentice clockmaker. However, in 1870, Moissan and his family moved back to Paris because of the war against Prussia. Moissan was unable to receive the grade universitaire necessary to attend university. After spending a year in the army, he enrolled at the École Supérieure de Pharmacie de Paris.

Scientific career

Moissan became a trainee in pharmacy in 1871, and in 1872 he began working for a chemist in Paris, where he was able to save a person poisoned with arsenic. He decided to study chemistry and began first in the laboratory of Edmond Frémy at the Musée d'Histoire Naturelle, and later in that of Pierre Paul Dehérain at the École Pratique des Hautes Études. Dehérain persuaded him to pursue an academic career. He passed the baccalauréat, which was necessary to study at university, in 1874 after an earlier failed attempt. He also became qualified as a first-class pharmacist at the École Supérieure de Pharmacie in 1879, and received his doctoral degree there in 1880. He soon climbed through the ranks of the School of Pharmacy, being appointed Assistant Lecturer, Senior Demonstrator, and finally Professor of Toxicology by 1886. He took the Chair of Inorganic Chemistry in 1899. The following year, he succeeded Louis Joseph Troost as Professor of Inorganic Chemistry at the Sorbonne. During his time in Paris he became a friend of the chemist Alexandre Léon Étard and the botanist Vasque. His marriage, to Léonie Lugan, took place in 1882. They had a son in 1885, named Louis Ferdinand Henri.

Death

Moissan died suddenly in Paris in February 1907, shortly after his return from receiving the Nobel Prize in Stockholm. His death was attributed to an acute case of appendicitis; however, there is speculation that repeated exposure to fluorine and carbon monoxide also contributed to his death.

Awards and honors

During his extensive career, Moissan authored more than three hundred publications and won the 1906 Nobel Prize in Chemistry for the first isolation of fluorine, in addition to the Prix Lacaze, the Davy Medal, the Hofmann Medal, and the Elliott Cresson Medal. He was elected a fellow of the Royal Society and the Chemical Society of London, served on the International Atomic Weights Committee, and was made a commandeur in the Légion d'honneur.

Research

Moissan published his first scientific paper, about carbon dioxide and oxygen metabolism in plants, with Dehérain in 1874. He left plant physiology and turned towards inorganic chemistry; subsequently his research on pyrophoric iron was well received by the two most prominent French inorganic chemists of that time, Henri Étienne Sainte-Claire Deville and Jules Henri Debray. After Moissan received his Ph.D. on cyanogen and its reactions to form cyanides in 1880, his friend Landrine offered him a position at an analytic laboratory.
Isolation of fluorine During the 1880s, Moissan focused on fluorine chemistry and especially the production of fluorine itself. The existence of the element had been well known for many years, but all attempts to isolate it had failed, and some experimenters had died in the attempt. He had no laboratory of his own, but borrowed lab space from others, including Charles Friedel. There he had access to a strong battery consisting of 90 Bunsen cells which made it possible to observe a gas produced by the electrolysis of molten arsenic trichloride; the gas was reabsorbed by the arsenic trichloride. Moissan eventually succeeded in isolating fluorine in 1886 by the electrolysis of a solution of potassium hydrogen difluoride (KHF2) in liquid hydrogen fluoride (HF). The mixture was necessary because hydrogen fluoride is a nonconductor. The device was built with platinum-iridium electrodes in a platinum holder and the apparatus was cooled to −50 °C. The result was the complete separation of the hydrogen produced at the negative electrode from the fluorine produced at the positive one, first achieved on 26 June 1886. This remains the current standard method for commercial fluorine production. The French Academy of Science sent three representatives, Marcellin Berthelot, Henri Debray, and Edmond Frémy, to verify the results, but Moissan was unable to reproduce them, owing to the absence from the hydrogen fluoride of traces of potassium fluoride present in the previous experiments. After resolving the problem and demonstrating the production of fluorine several times, he was awarded a prize of 10,000 francs. For the first successful isolation, he was awarded the 1906 Nobel Prize in Chemistry. Following his grand achievement, his research focused on characterizing fluorine's chemistry. He discovered numerous fluorine compounds, such as (together with Paul Lebeau) sulfur hexafluoride in 1901. Further studies Moissan contributed to the development of the electric arc furnace, which opened several paths to developing and preparing new compounds, and attempted to use pressure to produce synthetic diamonds from the more common form of carbon. He also used the furnace to synthesize the borides and carbides of numerous elements. Calcium carbide was a noticeable accomplishment as this paved the way for the development of the chemistry of acetylene. In 1893, Moissan began studying fragments of a meteorite found in Meteor Crater near Diablo Canyon in Arizona. In these fragments he discovered minute quantities of a new mineral and, after extensive research, Moissan concluded that this mineral was made of silicon carbide. In 1905, this mineral was named moissanite, in his honor. In 1903 Moissan was elected member of the International Atomic Weights Committee where he served until his death. Footnotes See also List of Jewish Nobel laureates References Further reading External links Scientific genealogy Books and letters by Henri Moissan in Europeana 1852 births 1907 deaths Nobel laureates in Chemistry French Nobel laureates Jewish Nobel laureates Scientists from Paris Members of the French Academy of Sciences 19th-century French chemists French pharmacists French people of Jewish descent Inorganic chemists Foreign members of the Royal Society Foreign associates of the National Academy of Sciences 20th-century French chemists Fluorine Deaths from appendicitis Burials at Père Lachaise Cemetery
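The electrolysis described above can be summarized with standard textbook half-reactions; the equations below are a conventional reconstruction of the chemistry, not Moissan's own notation. The dissolved potassium hydrogen difluoride supplies the conducting HF2− ions but is not consumed overall.

```latex
% Conventional half-reactions for fluorine production by electrolysis of
% KHF2 dissolved in anhydrous liquid HF (a Moissan-type cell):
\begin{align*}
\text{cathode (reduction):} \quad & 2\,\mathrm{HF_2^-} + 2e^- \longrightarrow \mathrm{H_2} + 4\,\mathrm{F^-} \\
\text{anode (oxidation):}   \quad & 2\,\mathrm{HF_2^-} \longrightarrow \mathrm{F_2} + 2\,\mathrm{HF} + 2e^- \\
\text{overall:}             \quad & 2\,\mathrm{HF} \longrightarrow \mathrm{H_2} + \mathrm{F_2}
\end{align*}
```

Keeping the two gas streams separated, as Moissan's platinum apparatus did, is essential, since hydrogen and fluorine recombine violently on contact.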
Henri Moissan
[ "Chemistry" ]
1,445
[ "Inorganic chemists" ]
1,030,290
https://en.wikipedia.org/wiki/Nikolay%20Zelinsky
Nikolay Dmitriyevich Zelinsky (6 February 1861 – 31 July 1953) was a Russian and Soviet chemist and educator, a professor at Moscow University from 1893 and an Academician of the Academy of Sciences of the Soviet Union (1929). Zelinsky studied at the University of Odessa and at the universities of Leipzig and Göttingen in Germany. Zelinsky was one of the founders of the theory of organic catalysis. He was the inventor of the world's first effective filtering activated-charcoal gas mask (1915).

Life

Nikolai Zelinsky was born on 25 January (6 February) 1861 in Tiraspol into a noble family. His father, Dmitry Osipovich Zelinsky, who came from hereditary Volyn nobles, died of rapidly developing consumption in 1863; two years later his mother died of the same disease. The orphaned boy was left in the care of his grandmother, M. P. Vasilyeva, and spent his childhood in her village. At the age of ten, Nikolai Zelinsky entered the Tiraspol district school for two-year courses to prepare for entering the gymnasium. Having completed them ahead of schedule at the age of 11, he entered the second grade of the Odessa Richelieu Gymnasium. After graduating from the gymnasium in 1880, Zelinsky entered the natural science department of the Physics and Mathematics Faculty of Novorossiysk University, graduating in 1884. He was given an appointment at the university and was sent to Germany. He did research there for two years (1885–1887), working first in the laboratory of Johannes Wislicenus in Leipzig. He then performed a study of a new reaction in the laboratory of Viktor Meyer in Göttingen, in the course of which he was severely poisoned by mustard gas, a substance little studied at that time. In 1887 he was appointed Privatdozent in the Department of Chemistry at Novorossiysk University. In 1888 he passed the master's exam, in 1889 he defended his master's thesis ("On the issue of isomerism in the thiophene series"), and in 1891 he defended his doctoral thesis ("Investigation of the phenomena of isomerism in the series of limiting carbon compounds"). He was invited to Moscow University on the initiative of Dmitri Mendeleev. He was a professor at Moscow University from 1893 until his death, with the exception of the period 1911–1917. From 1893 he was an extraordinary professor in the Department of Organic Chemistry, and from 1902 an ordinary professor. In 1911, he left the university with a group of scientists in protest against the policy of the tsarist Minister of Education, Lev Kasso. From 1911 to 1917 he worked as a professor at the St. Petersburg Polytechnic Institute. In 1917 he returned to Moscow University. There he was a professor in the Department of Chemistry (1917–1929) of the Physics and Mathematics Faculty, then head of the Department of Organic Chemistry (1929–1930 and 1933–1938), head of the Department of Petroleum Chemistry (1938–1953), and head of the Laboratory of Antibiotics and Biogenic Bases (1950–1953) of the Faculty of Chemistry. He was also head of the Department of Organic Chemistry of the Chemical Department (1932–1933). From 1935 he actively participated in the organization of the Institute of Organic Chemistry of the USSR Academy of Sciences, and later headed a number of its laboratories. On 10 July 1941 Zelinsky joined the Scientific and Technical Council for the development and testing of scientific works related to military defense, chaired by the authorized representative of the State Defense Committee, Professor Sergei Kaftanov.
During the Great Patriotic War, he worked in evacuation until the summer of 1943. Zelinsky took part in work to improve the quality of aviation gasolines and lubricating oils. A new process was developed to produce high-octane fuel, and new catalysts were found for the processes of aromatization of oil and the production of defense products. Under the leadership of Zelinsky, the process of catalytic cracking of oil was studied in detail, with the chemical nature of its products determined by spectral methods. Zelinsky also supervised work on finding ways to rationally use the products of primary processing of solid fuels – coal, shale and peat. In this regard, the problem of separating sulfur from shale resins became important. Shale accounted for about three-quarters of the fuel reserves of the USSR, but its high sulfur content devalued it as a raw material for motor fuel. During the war years Zelinsky found a solution to this problem by passing shale oils mixed with hydrogen over platinum or nickel on aluminum oxide at 300 °C. Sulfur was removed as hydrogen sulfide. The development of petrochemistry in the USSR led to a radical reconstruction of the oil refining industry for the production of artificial liquid fuel. As a result of scientific research, it became possible to use not only liquid but also solid fossil fuels as a valuable raw material for high-octane motor fuel and high-quality lubricating oils. Thus, the necessary prerequisites were created for processing the rich coal resources of Western Siberia, the coal and natural gas of Ukhta and Pechora, and other areas remote from the front into motor fuel. Nikolai Zelinsky died on July 31, 1953. He was buried in Moscow at the Novodevichy Cemetery (Division 1); his headstone was made by Nikolai Nikoghosyan.

Scientific activity

Zelinsky's scientific activity was very versatile: his works on the chemistry of thiophene and the stereochemistry of organic dibasic acids are widely known. In the summer of 1891, Zelinsky participated in an expedition to survey the waters of the Black Sea and the Odessa estuaries on the gunboat Zaporozhets, where he proved for the first time that the hydrogen sulfide contained in the water was of bacterial origin. During his period of life and work in Odessa, Nikolai Zelinsky wrote 40 scientific papers. A number of his works were also devoted to electrical conductivity in non-aqueous solutions and to the chemistry of amino acids, but his main works were related to the chemistry of hydrocarbons and organic catalysis. In 1895–1907 he was the first to synthesize a number of cyclopentane and cyclohexane hydrocarbons, which served as standards for studying the chemical composition of, and as the basis for artificial modeling of, oil and oil fractions. In 1910 he discovered the phenomenon of dehydrogenation catalysis, which consists in the exclusively selective action of platinum and palladium on cyclohexane and aromatic hydrocarbons and in the ideal reversibility of hydro- and dehydrogenation reactions depending only on temperature. In 1911 he carried out a smooth dehydrogenation of cyclohexane and its homologues into aromatic hydrocarbons in the presence of platinum and palladium catalysts; he widely used this reaction to determine the content of cyclohexane hydrocarbons in gasoline and kerosene fractions of oil (1920–1930), and also as an industrial method for obtaining aromatic hydrocarbons from oil.
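Zelinsky's reversible dehydrogenation equilibrium can be written in the standard textbook form below; this is a conventional summary, not a formula quoted from his papers, and the catalysts shown are those named in the text.

```latex
% Reversible Pt/Pd-catalyzed interconversion of cyclohexane and benzene;
% temperature alone decides which direction dominates.
\mathrm{C_6H_{12}}
  \;\underset{\text{Pt, Pd}}{\rightleftharpoons}\;
  \mathrm{C_6H_6} + 3\,\mathrm{H_2}
```

Run toward benzene, this is the aromatization step exploited in the reforming work described next; run in reverse, it is hydrogenation of the aromatic ring.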
These studies of Zelinsky's underlie the modern processes of catalytic reforming of petroleum fractions. Subsequent research led Zelinsky and his students to the discovery, in 1934, of the reaction of hydrogenolysis of cyclopentane hydrocarbons, with their transformation into alkanes in the presence of platinized coal and excess hydrogen. In 1915, Zelinsky successfully used oxide catalysts for oil cracking, which led to a decrease in the process temperature and an increase in the yield of aromatic hydrocarbons. In 1918–1919, he developed a method for producing gasoline by cracking solar oil (gas oil) and petroleum in the presence of aluminum chloride and aluminum bromide; the implementation of this method on an industrial scale played an important role in providing the Soviet state with gasoline. Zelinsky improved the reaction of catalytic conversion of acetylene into benzene by suggesting the use of activated carbon as a catalyst. Zelinsky and his students also studied the dehydrogenation of paraffins and olefins in the presence of oxide catalysts. Being a supporter of the theory of the organic origin of oil, Zelinsky conducted a series of studies connecting its genesis with sapropels, oil shale and other natural and synthetic organic substances. Zelinsky and his students proved the intermediate formation of methylene radicals in many heterogeneous catalytic reactions: in the decomposition of cyclohexane, in the synthesis of hydrocarbons from carbon monoxide and hydrogen on a cobalt catalyst, and in the reactions of hydrocondensation of olefins with carbon monoxide and hydropolymerization of olefins in the presence of small amounts of carbon monoxide, reactions which he discovered. The works of Zelinsky and his scientific team on the adsorption of gases on activated carbons were important for the country's defense: the creation of a charcoal gas mask in cooperation with Kumant (1915) and its adoption during the First World War by the Russian and allied armies.

Pedagogical activity

Zelinsky created a large scientific school whose members made fundamental contributions to various fields of chemistry. Among his students were Academicians of the Academy of Sciences of the USSR A. A. Balandin, L. F. Vereshchagin, B. A. Kazansky, K. A. Kocheshkov, S. S. Nametkin and A. N. Nesmeyanov; Corresponding Members of the Academy of Sciences of the USSR N. A. Izgaryshev, K. P. Lavrovsky, Yu. G. Mamedaliev, B. M. Mikhailov, A. V. Rakovsky, V. V. Chelintsev and N. I. Shuikin; and professors V. V. Longinov, A. E. Uspensky, L. A. Chugaev, N. A. Shilov, V. A. Nekrasova-Popova and others. Zelinsky was one of the organizers of the All-Union Chemical Society named after D. I. Mendeleev and from 1941 an honorary member of it. From 1921 he was an honorary member of the Moscow Society of Naturalists, and from 1935 its president.

Personal life

His first wife, Raisa (died in 1906) – their marriage lasted 25 years.
His second wife, Evgenia Kuzmina-Karavaeva, a pianist – their marriage lasted 25 years.
Daughter Raisa Zelinskaya-Plate (1910–2001).
His third wife, Nina Evgenievna Zhukovskaya-Bok, an artist – their marriage lasted 20 years.
Son Andrei (1933).
Son Nikolai (1940).

Interesting facts

Zelinsky did not patent the gas mask he invented, believing that one should not profit from human misfortunes, and Russia transferred the right to produce it to the Allies. The only surviving copy of the first gas mask is in Zelinsky's apartment.
During an internship in Germany before the start of the war, Zelinsky synthesized chloropicrin for the first time, and became the first person to experience its toxic effects. Later, chloropicrin, discovered by Zelinsky, was widely used as a chemical warfare agent.

Awards

Corresponding academician of the Spanish Royal Academy of Sciences (1934)
Hero of Socialist Labor (06/10/1945)
Four Orders of Lenin (05/07/1940; 06/10/1945; 02/05/1946; 02/05/1951)
Two Orders of the Red Banner of Labour (03/29/1941; 04/03/1944)
USSR State Prize for work on the chemicalization of the national economy of the USSR (1934)
USSR State Prize of the first degree (1942) – for outstanding scientific works on organic chemistry, published in the collection of the author's selected works in 1941
USSR State Prize of the second degree (1946) – for the development of a new method for obtaining aromatic hydrocarbons
USSR State Prize of the first degree (1948)
A. M. Butlerov Prize of the Russian Physical and Chemical Society (1924)

Recognition

The Zelinsky Institute of Organic Chemistry of the Russian Academy of Sciences has been named after him since 1953;
In 1961, a postage stamp was issued in honor of N. D. Zelinsky in the USSR;
One of the Moscow streets is named after him, as are streets in the cities of Voskresensk (Moscow region), Tiraspol, Chișinău, Tyumen, Yaroslavl, Veliky Novgorod, Orsk, Karaganda, Daugavpils, Alma-Ata and Mariupol;
On the occasion of the 150th anniversary of the scientist's birth, the State Unitary Enterprise "Marka Pridnestrovya" issued a series of stamps and envelopes;
The large chemical auditorium of the Faculty of Chemistry of Moscow State University is named after Zelinsky;
The crater Zelinskiy on the Moon has been named in his honor since 1970;
On June 2, 2014, the name of Nikolai Dmitrievich Zelinsky was given to an enterprise producing personal and collective protective equipment – JSC Elektrostal Chemical and Mechanical Plant;
On May 19, 2016, a commemorative plaque (sculptor V. A. Sivakov) was installed in St. Petersburg on the building of the D. I. Mendeleev Research Institute of Metrology (Moskovsky Prospekt, 19), with the text: "Here, in 1915, the outstanding scientist Nikolai Dmitrievich Zelinsky invented a coal gas mask".

Monuments

There is a monument to Zelinsky in the city of Elektrostal. It was unveiled in July 2013 in front of the entrance of the Elektrostal Chemical and Mechanical Plant OJSC.

In Transnistria

In Tiraspol, the house in which Zelinsky spent his childhood is now a memorial house-museum of the academician; a memorial plaque was mounted on the building of school No. 6 (now the humanitarian and mathematical gymnasium), where he studied, and a monument was erected in front of the building. In the Kirovsky district of Tiraspol there is a street named after Zelinsky. In Chișinău, a street in the Botanica sector is named after him.

In Ukraine

In Odessa, a memorial plaque now hangs on the house in which Zelinsky lived while working at Novorossiysk University; the building houses the Department of Organic Chemistry of the I. I. Mechnikov Odessa National University, Novorossiysk University's descendant.

Compositions

Investigation of the phenomena of stereoisomerism in the series of limiting carbonaceous compounds. – Odessa: type. A. Schulze, 1891. – 190 p.
Materials for the study of the genesis of silt deposits [Rev. ed. acad. N. D. Zelinsky]. – M.-L.: Publishing House of Acad. Sciences of the USSR, 1939. – 200 p.
Coal as a means of combating asphyxiating and poisonous gases: An experimental study of 1915–1916. / N. D. Zelinsky and V. S. Sadikov. – M.-L.: Publishing House of Acad. Sciences of the USSR, 1941. – 131 p.
Selected Works, vols. 1–2, M.-L., 1941.
The great Russian chemist A. M. Butlerov (1828–1886) / Acad. N. D. Zelinsky; with the participation of M. M. Azarin. – M.: Publishing House of the Moscow Society of Naturalists, 1949. – 241 p.
Higher fatty acids and their relationship to tubercle bacilli / Acad. N. D. Zelinsky and Assoc. L. S. Bondar. – M.: Publishing House of the Moscow Society of Naturalists, 1951. – 84 p.
Collected works, vols. 1–4, M., 1954–1960.

Literature

Academician Nikolai Dmitrievich Zelinsky: Ninetieth birthday. Collection. – M., 1952.
Zelinsky A. N. Save and Preserve: On the centenary of the "Zelinsky gas mask" // Russian Bulletin – 07/03/2015.
Zelinsky Nikolai Dmitrievich // Great Soviet Encyclopedia: [in 30 volumes] / ch. ed. A. M. Prokhorov. – 3rd ed. – M.: Soviet Encyclopedia, 1969–1978.
Kazansky B. A., Nesmeyanov A. N., Plate A. F. The works of Academician N. D. Zelinsky and his school in the field of the chemistry of hydrocarbons and organic catalysis. / Scholarly Notes of Moscow State University. Issue 175. – M., 1956.
Moscow University in the Great Patriotic War. – 4th ed., revised and supplemented. Moscow: Moscow University Press, 2020. – 1000 copies. – ISBN 978-5-19-011499-7.
Nametkin S. S. President of the Moscow Society of Naturalists, Academician Nikolai Dmitrievich Zelinsky: On the occasion of his 80th birthday. – B. m., 1941.
Nikolai Dmitrievich Zelinsky / USSR Academy of Sciences. – M.; L.: Publishing House of the Academy of Sciences of the USSR, 1946. – 88 p. – (Materials for the bio-bibliography of scientists of the USSR. Series of chemical sciences. Issue 1).
Plate A. F. Nikolai Dmitrievich Zelinsky // People of Russian science: Mathematics – Mechanics – Astronomy – Physics – Chemistry. – M., 1961.
Sysoeva E. K., Terentiev P. B. ZELINSKY Nikolai Dmitrievich // Imperial Moscow University: 1755–1917: encyclopedic dictionary / compiled by A. Yu. Andreev, D. A. Tsygankov. – M.: Russian Political Encyclopedia (ROSSPEN), 2010. – pp. 254–255. – 894 p. – 2000 copies. – ISBN 978-5-8243-1429-8.
Figurovsky N. A. Essay on the emergence and development of N. D. Zelinsky's coal gas mask. M., 1952.
Yuryev Yu. K., Levina R. Ya. / Sci. ed. Ioffe S. T.; Moscow Society of Naturalists. – M.: MOIP, 1953. – 120 p. – (Historical series; No. 48). – 7000 copies.

See also

Hell–Volhard–Zelinsky halogenation

References

Further reading

1861 births
1953 deaths
20th-century Russian chemists
People from Tiraspol
People from Kherson Governorate
Full Members of the USSR Academy of Sciences
Academic staff of Moscow State University
Academic staff of Peter the Great St. Petersburg Polytechnic University
Heroes of Socialist Labour
Recipients of the Stalin Prize
Recipients of the Order of Lenin
Recipients of the Order of the Red Banner of Labour
Chemists from the Russian Empire
Soviet organic chemists
Burials at Novodevichy Cemetery
Nikolay Zelinsky
[ "Chemistry" ]
4,086
[ "Soviet organic chemists", "Organic chemists" ]
1,030,401
https://en.wikipedia.org/wiki/Humic%20substance
Humic substances (HS) are colored, relatively recalcitrant organic compounds naturally formed during long-term decomposition and transformation of biomass residues. The color of humic substances varies from bright yellow through light or dark brown to black. The term comes from humus, which in turn comes from the Latin word humus, meaning "soil, earth". Humic substances represent the major part of organic matter in soil, peat, coal, and sediments, and are important components of dissolved natural organic matter (NOM) in lakes (especially dystrophic lakes), rivers, and sea water. Humic substances account for 50–90% of cation exchange capacity in soils.

"Humic substances" is an umbrella term covering humic acid, fulvic acid and humin, which differ in solubility. By definition, humic acid (HA) is soluble in water at neutral and alkaline pH, but insoluble at acidic pH < 2. Fulvic acid (FA) is soluble in water at any pH. Humin is not soluble in water at any pH. This definition of humic substances is largely operational. It is rooted in the history of soil science and, more precisely, in the tradition of alkaline extraction, which dates back to 1786, when Franz Karl Achard treated peat with a solution of potassium hydroxide and, after subsequent addition of an acid, obtained an amorphous dark precipitate (i.e., humic acid). Aquatic humic substances were isolated for the first time in 1806, from spring water, by Jöns Jakob Berzelius. In terms of chemistry, FA, HA, and humin share more similarities than differences and represent a continuum of humic molecules. All of them are constructed from similar aromatic, polyaromatic, aliphatic, and carbohydrate units and contain the same functional groups (mainly carboxylic, phenolic, and ester groups), albeit in varying proportions. Water solubility of humic substances is primarily governed by the interplay of two factors: the amount of ionizable (mainly carboxylic) functional groups and the molecular weight (MW). In general, fulvic acid has a higher amount of carboxylic groups and a lower average molecular weight than does humic acid. Measured average molecular weights vary with source; however, the molecular weight distributions of HA and FA overlap significantly. Age and origin of the source material determine the chemical structure of humic substances. In general, humic substances derived from soil and peat (which take hundreds to thousands of years to form) have higher molecular weight, higher amounts of O and N, more carbohydrate units, and fewer polyaromatic units than humic substances derived from coal and leonardite (which take millions of years to form). Isolation of HS is accomplished by alkaline extraction from solid sources of NOM or by adsorption of HS on a resin.

A newer view of humic substances is that they are not mostly high-molecular-weight macropolymers but rather a heterogeneous mixture of relatively small molecular components of the soil organic matter, auto-assembled in supramolecular associations and composed of a variety of compounds of biological origin, synthesized by abiotic and biotic reactions in soil and surface waters. It is the large molecular complexity of the soil humeome that confers on humic matter its bioactivity, its stability in soil ecosystems, and its role as a promoter of plant growth (in particular of plant roots).
The academic definition of humic substances is under debate: some researchers argue against the traditional concepts of humification and seek to forgo the alkali-extraction method and analyze the soil directly.

Concepts of humic substances

The formation of HS in nature is one of the least understood aspects of humus chemistry and one of the most intriguing. Historically, there have been three main theories to explain it: the lignin theory of Waksman (1932), the polyphenol theory, and the sugar-amine condensation theory of Maillard (1911). Humic substances are formed by the microbial degradation of dead biomass, such as lignin, cellulose, lignocellulose and charcoal. Humic substances in the lab are resistant to further biodegradation. The structure, elemental composition and content of functional groups of a given sample depend on the water or soil source and on the specific procedures and conditions of extraction. Nevertheless, the average properties of lab-extracted HS from different sources are remarkably similar.

Fractionation

Historically, scientists have used variations of similar methods for extracting HS from NOM and separating the extracts into HA and FA. The International Humic Substances Society advocates the use of standard laboratory methods for the preparation of humic and fulvic acids. Humic substances are extracted from soil and other solid sources using 0.1 M NaOH, under a nitrogen atmosphere, to prevent abiotic oxidation of some of the components of HS. The HA is then precipitated at pH 1, and the soluble fraction is treated on a resin column to separate fulvic acid components from other acid-soluble compounds. The fraction of NOM not extracted by 0.1 M NaOH is humin. Humic acid plus fulvic acid is extracted from natural waters using a resin column after microfiltration and acidification to pH 2. The humic materials are eluted from the column with NaOH, and humic acid is precipitated at pH 1. After adjusting the pH to 2, fulvic acid is separated from other acid-soluble compounds using a resin column, as with solid-phase sources. An analytical method for quantifying humic acid and fulvic acid in commercial ores and humic products has been developed based on the IHSS humic acid and fulvic acid preparation methods. Scientists associated with the IHSS have also isolated the entire NOM from black-water streams using reverse osmosis. The retentate from this process contains both humic and fulvic acids, predominantly humic acid. The NOM from hard-water streams has been isolated using reverse osmosis and electrodialysis in tandem.

Extracted humic acid is not a single acid; rather, it is a complex mixture of many different acids containing carboxyl and phenolate groups, so that the mixture behaves functionally as a dibasic acid or, occasionally, as a tribasic acid. Commercial humic acid used to amend soil is manufactured using these same well-established procedures. Humic acids can form complexes with ions that are commonly found in the environment, creating humic colloids.

A sequential chemical fractionation called Humeomics can be used to isolate more homogeneous humic fractions and determine their molecular structures by advanced spectroscopic and chromatographic methods. Substances identified in humic extracts and directly in soil include mono-, di-, and tri-hydroxycarboxylic acids, fatty acids, dicarboxylic acids, linear alcohols, phenolic acids, terpenoids, carbohydrates, and amino acids.
This suggests that humic molecules may form supramolecular structures held together by non-covalent forces, such as van der Waals forces, π-π, and CH-π bonds.

Chemical characteristics

Since the dawn of modern chemistry, humic substances have been among the most studied natural materials. Despite long study, their molecular structure remains debatable. The traditional view has been that humic substances are heteropolycondensates, in varying associations with clay. A more recent view is that relatively small molecules also play a major role. A typical humic substance is a mixture of many molecules, some of which are based on a motif of aromatic nuclei with phenolic and carboxylic substituents, linked together. The functional groups that contribute most to surface charge and reactivity of humic substances are phenolic and carboxylic groups. Humic substances commonly behave as mixtures of dibasic acids, with a pK1 value around 4 for protonation of carboxyl groups and around 8 for protonation of phenolate groups in HA. Fulvic acids are more acidic than HA. There is considerable overall similarity among individual humic acids. For this reason, measured pK values for a given sample are average values relating to the constituent species. The other important characteristic is charge density.

More recent determinations of the molecular weights of HS show that they are not as great as once thought. Reported number-average molecular weights of soil HA are below 6000, but the material is highly polydisperse, with some components of much higher and some of much lower measured molecular weight. Measured number-average molecular weights of aquatic HS are about 1700 or less for HA and below 900 for FA; aquatic HA and FA are also highly polydisperse. The number of individually distinct components in HS, as measured by mass spectrometry, is in the thousands. The average composition of HA and FA can be represented by model structures.

The presence of carboxylate and phenolate groups gives the humic acids the ability to form complexes with ions such as Mg2+, Ca2+, Fe2+, and Fe3+, creating humic colloids. Many humic acids have two or more of these groups arranged so as to enable the formation of chelate complexes. The formation of (chelate) complexes is an important aspect of the biological role of humic acids in regulating the bioavailability of metal ions.

Criticism

Decomposition products of dead plant materials form intimate associations with minerals, making it difficult to isolate and characterize soil organic constituents. Eighteenth-century soil chemists successfully used alkaline extraction to isolate a portion of the organic constituents in soil. This led to the theory that a 'humification' process created distinct 'humic substances' like 'humic acid', 'fulvic acid', and 'humin'. However, modern chemical analysis methods applied to unprocessed mineral soil have not directly observed large humic molecules. This suggests that the extraction and fractionation techniques used to isolate humic substances alter the original chemical composition of the organic matter. Since the definition of humic substances like humic and fulvic acids relies on their separation through these methods, it raises the question of whether the distinction between these compounds accurately reflects the natural state of organic matter in soil. Despite these concerns, the 'humification' theory persists in the field and even in textbooks, and attempts to redefine 'humic substances' in soil have resulted in a proliferation of conflicting definitions.
This lack of consensus makes it difficult to communicate scientific understanding of soil processes and properties accurately.

Determination of humic acids in water samples

The presence of humic acid in water intended for potable or industrial use can have a significant impact on the treatability of that water and the success of chemical disinfection processes. For instance, humic and fulvic acids can react with the chemicals used in the chlorination process to form disinfection byproducts such as dihaloacetonitriles, which are toxic to humans. Accurate methods of establishing humic acid concentrations are therefore essential in maintaining water supplies, especially from upland peaty catchments in temperate climates.

Because many different bio-organic molecules in very diverse physical associations are mixed together in natural environments, it is cumbersome to measure their exact concentrations in the humic superstructure. For this reason, concentrations of humic acid are traditionally estimated from concentrations of organic matter, typically from concentrations of total organic carbon (TOC) or dissolved organic carbon (DOC). Extraction procedures are bound to alter some of the chemical linkages present in the soil humic substances (mainly ester bonds in biopolyesters such as cutins and suberins). The humic extracts are composed of large numbers of different bio-organic molecules that have not yet been totally separated and identified. However, single classes of residual biomolecules have been identified by selective extractions and chemical fractionation, and are represented by alkanoic and hydroxyalkanoic acids, resins, waxes, lignin residues, sugars, and peptides.

Ecological effects

Organic matter soil amendments have been known by farmers to be beneficial to plant growth for longer than recorded history. However, the chemistry and function of the organic matter have been a subject of controversy since humans began postulating about it in the 18th century. Until the time of Liebig, it was supposed that humus was used directly by plants, but, after Liebig showed that plant growth depends upon inorganic compounds, many soil scientists held the view that organic matter was useful for fertility only as it was broken down with the release of its constituent nutrient elements into inorganic forms. At the present time, soil scientists hold a more holistic view and at least recognize that humus influences soil fertility through its effect on the water-holding capacity of the soil. Also, since plants have been shown to absorb and translocate the complex organic molecules of systemic insecticides, they can no longer discredit the idea that plants may be able to absorb the soluble forms of humus; this may in fact be an essential process for the uptake of otherwise insoluble iron oxides.

A study on the effects of humic acid on plant growth, conducted at Ohio State University, said in part that "humic acids increased plant growth" and that there were "relatively large responses at low application rates". A 1998 study by scientists at the North Carolina State University College of Agriculture and Life Sciences showed that addition of humate to soil significantly increased root mass in creeping bentgrass turf. A 2018 study by scientists at the University of Alberta showed that humic acids can reduce prion infectivity in laboratory experiments, but that this effect may be uncertain in the environment due to minerals in the soil that buffer the effect.
Anthropogenic production

Humans can affect the production of humic substances in a variety of ways: by making use of natural processes, such as composting lignin or adding biochar (see soil rehabilitation), or by industrial synthesis of artificial humic substances from organic feedstocks directly. These artificial substances may be similarly divided into artificial humic acid (A-HA) and artificial fulvic acid (A-FA).

Lignosulfonates, a by-product of the sulfite pulping of wood, are valorized in the industrial fabrication of concrete, where they serve as a water reducer, or concrete superplasticizer, to decrease the water-cement ratio (w/c) of fresh concrete while preserving its workability. The w/c ratio of concrete is one of the main parameters controlling the mechanical strength of hardened concrete and its durability. The same wood pulping process can also be applied to obtain humus-like substances by hydrolysis and oxidation. A kind of artificial "lignohumate" can be directly produced from wood in this way. Agricultural litter can be turned into an artificial humic substance by a hydrothermal reaction. The resulting mixture can increase the content of dissolved organic matter (DOM) and total organic carbon (TOC) in soil.

Lignite (brown coal) may also be oxidized to produce humic substances, reversing the natural process of coal formation under anoxic and reducing conditions. This form of "mineral-derived fulvic acid" is widely used in China. This process also occurs in nature, producing leonardite.

Economic geology

In economic geology, the term humate refers to geological materials, such as weathered coal beds (leonardite), mudrock, or pore material in sandstones, that are rich in humic acids. Humate has been mined from the Fruitland Formation of New Mexico for use as a soil amendment since the 1970s, with nearly 60,000 metric tons produced by 2016. Humate deposits may also play an important role in the genesis of uranium ore bodies.

Technological applications

The heavy-metal-binding abilities of humic acids have been exploited to develop remediation technologies for removing lead from waste water. To this end, Yurishcheva et al. coated magnetic nanoparticles with humic acids. After capturing lead ions, the nanoparticles can be retrieved using a magnet.

Ancient masonry

Archaeological evidence indicates that ancient Egyptians used mudbricks reinforced with straw and humic acids.

See also

Black water (drink)
Humin
Humus
Polycyclic aromatic hydrocarbon
Soil

References

External links

International Humic Substances Society

Composting
Organic acids
Soil chemistry
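The acid-base behavior sketched under Chemical characteristics can be made concrete with the Henderson–Hasselbalch relation. The snippet below is an illustrative sketch only: real humic samples are polydisperse mixtures, and the pK values used (about 4 for carboxyl and 8 for phenolic groups, as quoted above) are rough averages, not measured constants.

```python
# Fraction of acid groups deprotonated at a given pH, via the
# Henderson-Hasselbalch relation: fraction = 1 / (1 + 10**(pKa - pH)).
def ionized_fraction(pH: float, pKa: float) -> float:
    """Return the deprotonated fraction of an acid group at the given pH."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

for pH in (3.0, 5.0, 7.0, 9.0):
    carboxyl = ionized_fraction(pH, pKa=4.0)   # carboxylic groups, pK1 ~ 4
    phenolic = ionized_fraction(pH, pKa=8.0)   # phenolic groups, pK ~ 8
    print(f"pH {pH:.0f}: carboxylate {carboxyl:.0%}, phenolate {phenolic:.0%}")
```

This is why humic and fulvic fractions carry a substantial negative charge at neutral soil pH, which in turn drives the cation exchange capacity and the metal complexation discussed above.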
Humic substance
[ "Chemistry" ]
3,420
[ "Organic acids", "Soil chemistry", "Acids", "Organic compounds" ]
1,030,420
https://en.wikipedia.org/wiki/Equidistributed%20sequence
In mathematics, a sequence (s1, s2, s3, ...) of real numbers is said to be equidistributed, or uniformly distributed, if the proportion of terms falling in a subinterval is proportional to the length of that subinterval. Such sequences are studied in Diophantine approximation theory and have applications to Monte Carlo integration.

Definition

A sequence (s1, s2, s3, ...) of real numbers is said to be equidistributed on a non-degenerate interval [a, b] if for every subinterval [c, d] of [a, b] we have

$$\lim_{n\to\infty} \frac{\left|\{s_1,\dots,s_n\} \cap [c,d]\right|}{n} = \frac{d-c}{b-a}.$$

(Here, the notation |{s1,...,sn} ∩ [c, d]| denotes the number of elements, out of the first n elements of the sequence, that are between c and d.)

For example, if a sequence is equidistributed in [0, 2], since the interval [0.5, 0.9] occupies 1/5 of the length of the interval [0, 2], as n becomes large, the proportion of the first n members of the sequence which fall between 0.5 and 0.9 must approach 1/5. Loosely speaking, one could say that each member of the sequence is equally likely to fall anywhere in its range. However, this is not to say that (sn) is a sequence of random variables; rather, it is a determinate sequence of real numbers.

Discrepancy

We define the discrepancy DN for a sequence (s1, s2, s3, ...) with respect to the interval [a, b] as

$$D_N = \sup_{a \le c \le d \le b} \left| \frac{\left|\{s_1,\dots,s_N\} \cap [c,d]\right|}{N} - \frac{d-c}{b-a} \right|.$$

A sequence is thus equidistributed if the discrepancy DN tends to zero as N tends to infinity.

Equidistribution is a rather weak criterion to express the fact that a sequence fills the segment leaving no gaps. For example, the drawings of a random variable uniform over a segment will be equidistributed in the segment, but there will be large gaps compared to a sequence which first enumerates multiples of ε in the segment, for some small ε, in an appropriately chosen way, and then continues to do this for smaller and smaller values of ε. For stronger criteria and for constructions of sequences that are more evenly distributed, see low-discrepancy sequence.

Riemann integral criterion for equidistribution

Recall that if f is a function having a Riemann integral in the interval [a, b], then its integral is the limit of Riemann sums taken by sampling the function f in a set of points chosen from a fine partition of the interval. Therefore, if some sequence is equidistributed in [a, b], it is expected that this sequence can be used to calculate the integral of a Riemann-integrable function. This leads to the following criterion for an equidistributed sequence:

Suppose (s1, s2, s3, ...) is a sequence contained in the interval [a, b]. Then the following conditions are equivalent:
1. The sequence is equidistributed on [a, b].
2. For every Riemann-integrable (complex-valued) function f on [a, b], the following limit holds:

$$\lim_{N\to\infty} \frac{1}{N} \sum_{n=1}^{N} f(s_n) = \frac{1}{b-a} \int_a^b f(x)\,dx.$$

Proof. First note that the definition of an equidistributed sequence is equivalent to the integral criterion whenever f is the indicator function of an interval: if f = 1[c, d], then the left hand side is the proportion of points of the sequence falling in the interval [c, d], and the right hand side is exactly (d − c)/(b − a). This means 2 ⇒ 1 (since indicator functions are Riemann-integrable), and 1 ⇒ 2 for f being an indicator function of an interval. It remains to assume that the integral criterion holds for indicator functions and prove that it holds for general Riemann-integrable functions as well. Note that both sides of the integral criterion equation are linear in f, and therefore the criterion holds for linear combinations of interval indicators, that is, step functions.
To show it holds for f being a general Riemann-integrable function, first assume f is real-valued. Then by using Darboux's definition of the integral, we have for every ε > 0 two step functions f1 and f2 such that f1 ≤ f ≤ f2 and

$$\int_a^b f_2(x)\,dx - \int_a^b f_1(x)\,dx \le \varepsilon.$$

Notice that

$$\frac{1}{N} \sum_{n=1}^{N} f_1(s_n) \;\le\; \frac{1}{N} \sum_{n=1}^{N} f(s_n) \;\le\; \frac{1}{N} \sum_{n=1}^{N} f_2(s_n).$$

By subtracting, we see that the limit superior and limit inferior of $\frac{1}{N} \sum_{n=1}^{N} f(s_n)$ differ by at most ε. Since ε is arbitrary, we have the existence of the limit, and by Darboux's definition of the integral, it is the correct limit.

Finally, for complex-valued Riemann-integrable functions, the result follows again from linearity, and from the fact that every such function can be written as f = u + vi, where u, v are real-valued and Riemann-integrable. ∎

This criterion leads to the idea of Monte-Carlo integration, where integrals are computed by sampling the function over a sequence of random variables equidistributed in the interval.

It is not possible to generalize the integral criterion to a class of functions bigger than just the Riemann-integrable ones. For example, if the Lebesgue integral is considered and f is taken to be in L1, then this criterion fails. As a counterexample, take f to be the indicator function of some equidistributed sequence. Then in the criterion, the left hand side is always 1, whereas the right hand side is zero, because the sequence is countable, so f is zero almost everywhere.

In fact, the de Bruijn–Post Theorem states the converse of the above criterion: if f is a function such that the criterion above holds for any equidistributed sequence in [a, b], then f is Riemann-integrable in [a, b].

Equidistribution modulo 1

A sequence (a1, a2, a3, ...) of real numbers is said to be equidistributed modulo 1 or uniformly distributed modulo 1 if the sequence of the fractional parts of an, denoted by {an} or by an − ⌊an⌋, is equidistributed in the interval [0, 1].

Examples

The equidistribution theorem: the sequence of all multiples of an irrational α, 0, α, 2α, 3α, 4α, ..., is equidistributed modulo 1.
More generally, if p is a polynomial with at least one irrational coefficient other than the constant term, then the sequence p(n) is uniformly distributed modulo 1. This was proven by Weyl and is an application of van der Corput's difference theorem.
The sequence log(n) is not uniformly distributed modulo 1. This fact is related to Benford's law.
The sequence of all multiples of an irrational α by successive prime numbers, 2α, 3α, 5α, 7α, 11α, ..., is equidistributed modulo 1. This is a famous theorem of analytic number theory, published by I. M. Vinogradov in 1948.
The van der Corput sequence is equidistributed.

Weyl's criterion

Weyl's criterion states that the sequence an is equidistributed modulo 1 if and only if for all non-zero integers ℓ,

$$\lim_{N\to\infty} \frac{1}{N} \sum_{n=1}^{N} e^{2\pi i \ell a_n} = 0.$$

The criterion is named after, and was first formulated by, Hermann Weyl. It allows equidistribution questions to be reduced to bounds on exponential sums, a fundamental and general method.

Sketch of proof. If the sequence is equidistributed modulo 1, then we can apply the Riemann integral criterion (described above) to the function f(x) = e^{2πiℓx}, which has integral zero on the interval [0, 1]. This gives Weyl's criterion immediately.

Conversely, suppose Weyl's criterion holds. Then the Riemann integral criterion holds for functions f as above, and by linearity of the criterion, it holds for f being any trigonometric polynomial. By the Stone–Weierstrass theorem and an approximation argument, this extends to any continuous function f.
Finally, let f be the indicator function of an interval. It is possible to bound f from above and below by two continuous functions on the interval, whose integrals differ by an arbitrary ε. By an argument similar to the proof of the Riemann integral criterion, it is possible to extend the result to any interval indicator function f, thereby proving equidistribution modulo 1 of the given sequence. ∎

Generalizations

A quantitative form of Weyl's criterion is given by the Erdős–Turán inequality. Weyl's criterion extends naturally to higher dimensions, assuming the natural generalization of the definition of equidistribution modulo 1: the sequence vn of vectors in Rk is equidistributed modulo 1 if and only if for any non-zero vector ℓ ∈ Zk,

$$\lim_{N\to\infty} \frac{1}{N} \sum_{n=1}^{N} e^{2\pi i\, \ell \cdot v_n} = 0.$$

Example of usage

Weyl's criterion can be used to easily prove the equidistribution theorem, stating that the sequence of multiples 0, α, 2α, 3α, ... of some real number α is equidistributed modulo 1 if and only if α is irrational.

Suppose α is irrational and denote our sequence by aj = jα (where j starts from 0, to simplify the formula later). Let ℓ ≠ 0 be an integer. Since α is irrational, ℓα can never be an integer, so $e^{2\pi i \ell \alpha}$ can never be 1. Using the formula for the sum of a finite geometric series,

$$\left| \sum_{j=0}^{n-1} e^{2\pi i \ell j \alpha} \right| = \left| \frac{e^{2\pi i \ell n \alpha} - 1}{e^{2\pi i \ell \alpha} - 1} \right| \le \frac{2}{\left| e^{2\pi i \ell \alpha} - 1 \right|},$$

a finite bound that does not depend on n. Therefore, after dividing by n and letting n tend to infinity, the left hand side tends to zero, and Weyl's criterion is satisfied.

Conversely, notice that if α is rational then this sequence is not equidistributed modulo 1, because there are only a finite number of options for the fractional part of aj = jα.

Complete uniform distribution

A sequence (a1, a2, ...) of real numbers is said to be k-uniformly distributed mod 1 if not only the sequence of fractional parts {an} is uniformly distributed in [0, 1] but also the sequence (bn), where bn is defined as bn := ({an+1}, {an+2}, ..., {an+k}) ∈ [0, 1]^k, is uniformly distributed in [0, 1]^k.

A sequence (a1, a2, ...) of real numbers is said to be completely uniformly distributed mod 1 if it is k-uniformly distributed for each natural number k ≥ 1.

For example, the sequence (nα) is uniformly distributed mod 1 (or 1-uniformly distributed) for any irrational number α, but is never even 2-uniformly distributed. In contrast, the sequence (α^n) is completely uniformly distributed for almost all α > 1 (i.e., for all α except for a set of measure 0).

van der Corput's difference theorem

A theorem of Johannes van der Corput states that if for each h the sequence sn+h − sn is uniformly distributed modulo 1, then so is sn. A van der Corput set is a set H of integers such that if for each h in H the sequence sn+h − sn is uniformly distributed modulo 1, then so is sn.

Metric theorems

Metric theorems describe the behaviour of a parametrised sequence for almost all values of some parameter α: that is, for values of α not lying in some exceptional set of Lebesgue measure zero.

For any sequence of distinct integers bn, the sequence (bnα) is equidistributed mod 1 for almost all values of α.
The sequence (α^n) is equidistributed mod 1 for almost all values of α > 1.

It is not known whether the sequences (e^n) or (π^n) are equidistributed mod 1. However, it is known that the sequence (α^n) is not equidistributed mod 1 if α is a PV number.

Well-distributed sequence

A sequence (s1, s2, s3, ...) of real numbers is said to be well-distributed on [a, b] if for any subinterval [c, d] of [a, b] we have

$$\lim_{n\to\infty} \frac{\left|\{s_{k+1},\dots,s_{k+n}\} \cap [c,d]\right|}{n} = \frac{d-c}{b-a}$$

uniformly in k. Clearly every well-distributed sequence is uniformly distributed, but the converse does not hold. The definition of well-distributed modulo 1 is analogous.
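A quick numerical experiment illustrates both the definition and Weyl's criterion. This sketch uses arbitrary illustrative choices (α = √2, N = 100000, the test interval [0.25, 0.5)) and only the Python standard library.

```python
# Numerical check: fractional parts of n*sqrt(2) fill [0, 1) evenly, and
# the Weyl averages (1/N) * sum_n exp(2*pi*i*l*n*alpha) tend to zero.
import cmath
import math

alpha = math.sqrt(2)           # an irrational rotation number
N = 100_000
points = [(n * alpha) % 1.0 for n in range(1, N + 1)]

# Proportion in [0.25, 0.5) should approach the interval length, 0.25.
proportion = sum(0.25 <= x < 0.5 for x in points) / N
print(f"proportion in [0.25, 0.5): {proportion:.4f} (target 0.25)")

# Weyl averages for a few non-zero frequencies l: all should be small.
for l in (1, 2, 3):
    weyl = sum(cmath.exp(2j * math.pi * l * x) for x in points) / N
    print(f"l = {l}: |Weyl average| = {abs(weyl):.5f}")
```

Replacing α with a rational number shows the failure mode described above: the fractional parts cycle through finitely many values and the Weyl averages no longer shrink.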
Sequences equidistributed with respect to an arbitrary measure

For an arbitrary probability measure space (X, μ), a sequence of points (xn) is said to be equidistributed with respect to μ if the mean of the point measures converges weakly to μ:

$$\frac{\delta_{x_1} + \cdots + \delta_{x_N}}{N} \;\Rightarrow\; \mu.$$

For any Borel probability measure on a separable, metrizable space, there exists an equidistributed sequence with respect to the measure; indeed, this follows immediately from the fact that such a space is standard.

The general phenomenon of equidistribution arises frequently for dynamical systems associated with Lie groups, for example in Margulis' solution to the Oppenheim conjecture.

See also

Equidistribution theorem
Low-discrepancy sequence
Erdős–Turán inequality

Notes

References

Further reading

External links

Lecture notes by Charles Walkden with proof of Weyl's Criterion

Diophantine approximation
Dynamical systems
Ergodic theory
Equidistributed sequence
[ "Physics", "Mathematics" ]
2,832
[ "Ergodic theory", "Mechanics", "Mathematical relations", "Diophantine approximation", "Approximations", "Number theory", "Dynamical systems" ]
1,030,540
https://en.wikipedia.org/wiki/Physical%20schema
A physical data model (or database design) is a representation of a data design as implemented, or intended to be implemented, in a database management system. In the lifecycle of a project it typically derives from a logical data model, though it may be reverse-engineered from a given database implementation. A complete physical data model will include all the database artifacts required to create relationships between tables or to achieve performance goals, such as indexes, constraint definitions, linking tables, partitioned tables or clusters. Analysts can usually use a physical data model to calculate storage estimates; it may include specific storage allocation details for a given database system.

Seven main databases dominate the commercial marketplace: Informix, Oracle, Postgres, SQL Server, Sybase, IBM Db2 and MySQL. Other RDBMS systems tend either to be legacy databases or to be used within academia, such as universities or further-education colleges. Physical data models for each implementation would differ significantly, not least due to the underlying operating-system requirements that may sit underneath them. For example: SQL Server historically ran only on Microsoft Windows operating systems (starting with SQL Server 2017, it also runs on Linux, with the same database engine and many of the same features and services regardless of operating system), while Oracle and MySQL can run on Solaris, Linux and other UNIX-based operating systems as well as on Windows. This means that the disk requirements, security requirements and many other aspects of a physical data model will be influenced by the RDBMS that a database administrator (or an organization) chooses to use.

Physical schema

Physical schema is a term used in data management to describe how data is to be represented and stored (files, indices, et al.) in secondary storage using a particular database management system (DBMS) (e.g., Oracle RDBMS, Sybase SQL Server, etc.).

In the three-schema approach of the ANSI/SPARC architecture, the internal schema is the view of data that involves data management technology. This is as opposed to an external schema, which reflects an individual's view of the data, or the conceptual schema, which is the integration of a set of external schemas.

Subsequently, the internal schema was recognized to have two parts: the logical schema was the way data were represented to conform to the constraints of a particular approach to database management. At that time the choices were hierarchical and network. Describing the logical schema, however, still did not describe how data would physically be stored on disk drives; that is the domain of the physical schema. Now logical schemas describe data in terms of relational tables and columns, object-oriented classes, and XML tags. A single set of tables, for example, can be implemented in numerous ways, up to and including an architecture where table rows are maintained on computers in different countries.

See also

Database schema
Conceptual data model
Logical data model

References

External links

FEA Consolidated Reference Model Document (whitehouse.gov) Oct 2007.

Data modeling
Data management
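The distinction between a logical design and its physical artifacts can be illustrated with a small example. The sketch below uses SQLite via Python's standard library purely for convenience; the table names, columns and index are invented for illustration, and a production physical model would be tuned to the chosen DBMS and workload.

```python
# Minimal sketch: physical-schema artifacts (concrete types, a constraint,
# and a secondary index) layered onto a simple logical design.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Logical model: customers place orders.
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
        placed_at   TEXT NOT NULL
    );
    -- Physical artifact: an index chosen to speed lookups by customer.
    CREATE INDEX idx_orders_customer ON orders (customer_id);
""")

# The catalog records the physical artifact alongside the tables.
print(conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'index'").fetchall())
```

On another RDBMS the same logical model might instead call for a clustered index, a partitioned table, or explicit storage parameters, which is exactly why physical models differ per implementation.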
Physical schema
[ "Technology", "Engineering" ]
636
[ "Data management", "Data engineering", "Data modeling", "Data" ]
1,030,561
https://en.wikipedia.org/wiki/Water%20fuel%20cell
The water fuel cell is a non-functional design for a "perpetual motion machine" created by Stanley Allen Meyer (August 24, 1940 – March 20, 1998). Meyer claimed that a car retrofitted with the device could use water as fuel instead of gasoline. Meyer's claims about his "Water Fuel Cell" and the car that it powered were found to be fraudulent by an Ohio court in 1996.

Purported design

The water fuel cell purportedly split water into its component elements, hydrogen and oxygen. The hydrogen gas was then burned to generate energy, a process that reconstituted the water molecules. According to Meyer, the device required less energy to perform electrolysis than the minimum energy requirement predicted or measured by conventional science. The mechanism of action was alleged to involve "Brown's gas", a mixture of oxyhydrogen with a ratio of 2:1, the same composition as liquid water, which would then be mixed with ambient air (nitrogen, oxygen, argon, etc.). The resultant hydrogen gas was then burned to generate energy, which reconstituted the water molecules in another unit separate from the unit in which the water was split. If the device worked as specified, it would violate both the first and second laws of thermodynamics, allowing operation as a perpetual motion machine.

Throughout his patents Meyer used the terms "fuel cell" or "water fuel cell" to refer to the portion of his device in which electricity is passed through water to produce hydrogen and oxygen. Meyer's use of the term in this sense is contrary to its usual meaning in science and engineering, in which such cells are conventionally called "electrolytic cells". Furthermore, the term "fuel cell" is usually reserved for cells that produce electricity from a chemical redox reaction, whereas Meyer's fuel cell consumed electricity, as shown in his patents and in the circuit pictured on the right. Meyer describes in a 1990 patent the use of a "water fuel cell assembly" and portrays some images of his "fuel cell water capacitor". According to the patent, in this case "... the term 'fuel cell' refers to a single unit of the invention comprising a water capacitor cell ... that produces the fuel gas in accordance with the method of the invention."

Media coverage

In a news report on an Ohio TV station, Meyer showed a dune buggy he claimed was powered by his water fuel cell. He stated that only 22 US gallons (83 liters) of water were required to travel from Los Angeles to New York. Furthermore, Meyer claimed to have replaced the spark plugs with "injectors" that introduced a hydrogen/oxygen mixture into the engine cylinders. The water was subjected to an electrical resonance that dissociated it into its basic atomic make-up. The water fuel cell would split the water into hydrogen and oxygen gas, which would then be combusted back into water vapor in a conventional internal combustion engine to produce net energy.

Philip Ball, writing in the academic journal Nature, characterized Meyer's claims as pseudoscience, noting that "It's not easy to establish how Meyer's car was meant to work, except that it involved a fuel cell that was able to split water using less energy than was released by recombination of the elements ... Crusaders against pseudoscience can rant and rave as much as they like, but in the end they might as well accept that the myth of water as a fuel is never going to go away."
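The thermodynamic objection can be made explicit with standard enthalpy values; the figures below are textbook constants for 25 °C, not numbers taken from Meyer's claims.

```latex
% Energy bookkeeping for the claimed water -> H2 + O2 -> water cycle:
\begin{align*}
\mathrm{H_2O\,(l)} &\longrightarrow \mathrm{H_2} + \tfrac{1}{2}\,\mathrm{O_2},
    & \Delta H^\circ &= +285.8\ \mathrm{kJ\,mol^{-1}} \\
\mathrm{H_2} + \tfrac{1}{2}\,\mathrm{O_2} &\longrightarrow \mathrm{H_2O\,(l)},
    & \Delta H^\circ &= -285.8\ \mathrm{kJ\,mol^{-1}}
\end{align*}
```

Splitting the water costs at least as much energy as burning the hydrogen returns, before any real-world losses; a device claiming net output from this closed loop therefore contradicts the first law of thermodynamics, as noted above.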
Lawsuit Stanley Meyer's invention was found to be fraudulent after two investors, to whom he had sold dealerships offering the right to do business in Water Fuel Cell technology, sued him in 1996. His car was due to be examined by the expert witness Michael Laughton, Professor of Electrical Engineering at Queen Mary University of London and Fellow of the Royal Academy of Engineering. However, Meyer made what Professor Laughton considered a "lame excuse" on the day of the examination and did not allow the test to proceed. His "water fuel cell" was later examined by three expert witnesses in court, who found that there "was nothing revolutionary about the cell at all and that it was simply using conventional electrolysis." The court found Meyer had committed "gross and egregious fraud" and ordered him to repay the two investors their $25,000. Meyer's death Stanley Meyer died suddenly on March 20, 1998, while dining at a restaurant. His brother claimed that during a meeting with two Belgian investors, Meyer suddenly ran outside, saying "They poisoned me". After an investigation, the Grove City police agreed with the Franklin County coroner's report, which ruled that Meyer, who had high blood pressure, died of a cerebral aneurysm. Some of Meyer's supporters believe that he was assassinated to suppress his inventions. Philippe Vandemoortele, one of the Belgian investors, stated that he had been supporting Meyer financially for several years and considered him a personal friend, and that he had no idea where the rumors came from. Aftermath Meyer's patents have expired. His inventions are now in the public domain, available for all to use without restriction or royalty payment. No engine or vehicle manufacturer has incorporated Meyer's work.
Water fuel cell
[ "Technology" ]
1,140
[ "Free energy conspiracy theories", "Science and technology-related conspiracy theories" ]
344,536
https://en.wikipedia.org/wiki/Hepatitis%20A
Hepatitis A is an infectious disease of the liver caused by Hepatovirus A (HAV); it is a type of viral hepatitis. Many cases have few or no symptoms, especially in the young. The time between infection and symptoms, in those who develop them, is two to six weeks. When symptoms occur, they typically last eight weeks and may include nausea, vomiting, diarrhea, jaundice, fever, and abdominal pain. Around 10–15% of people experience a recurrence of symptoms during the 6 months after the initial infection. Acute liver failure may rarely occur, with this being more common in the elderly. It is usually spread by eating food or drinking water contaminated with infected feces. Undercooked or raw shellfish are relatively common sources. It may also be spread through close contact with an infectious person. While children often do not have symptoms when infected, they are still able to infect others. After a single infection, a person is immune for the rest of their life. Diagnosis requires blood testing, as the symptoms are similar to those of a number of other diseases. It is one of five known hepatitis viruses: A, B, C, D, and E. The hepatitis A vaccine is effective for prevention. Some countries recommend it routinely for children and those at higher risk who have not previously been vaccinated. It appears to be effective for life. Other preventive measures include hand washing and properly cooking food. No specific treatment is available, with rest and medications for nausea or diarrhea recommended on an as-needed basis. Infections usually resolve completely and without ongoing liver disease. Treatment of acute liver failure, if it occurs, is with liver transplantation. Globally, around 1.4 million symptomatic cases occur each year and about 114 million infections (symptomatic and asymptomatic). It is more common in regions of the world with poor sanitation and not enough safe water. In the developing world, about 90% of children have been infected by age 10, and thus are immune by adulthood. It often occurs in outbreaks in moderately developed countries where children are not exposed when young and vaccination is not widespread. Acute hepatitis A resulted in 11,200 deaths in 2015. World Hepatitis Day occurs each year on July 28 to bring awareness to viral hepatitis. Signs and symptoms Early symptoms of hepatitis A infection can be mistaken for influenza, but some people, especially children, exhibit no symptoms at all. Symptoms typically appear two to six weeks (the incubation period) after the initial infection. About 90% of children do not have symptoms. The time between infection and symptoms, in those who develop them, is two to six weeks, with an average of 28 days. The risk for symptomatic infection is directly related to age, with more than 80% of adults having symptoms compatible with acute viral hepatitis and the majority of children having either asymptomatic or unrecognized infections. Symptoms usually last less than 2 months, although some people can be ill for as long as 6 months: fatigue, fever, nausea, appetite loss, jaundice (a yellowing of the skin or the whites of the eyes owing to hyperbilirubinemia; bilirubin is also removed from the bloodstream and excreted in the urine, giving it a dark amber color), diarrhea, light or clay-colored feces (acholic feces), and abdominal discomfort. Extrahepatic manifestations Possible extrahepatic manifestations include joint pains, red cell aplasia, pancreatitis, and generalized lymphadenopathy. Kidney failure and pericarditis are very uncommon. 
If they occur, they show an acute onset and disappear upon resolution of the disease. Virology Taxonomy Hepatovirus A is a species of virus in the order Picornavirales, family Picornaviridae, genus Hepatovirus. Humans and other vertebrates serve as natural hosts of this genus. Nine members of Hepatovirus are recognized. These species infect bats, rodents, hedgehogs, and shrews. Phylogenetic analysis suggests a rodent origin for human hepatitis A. A member virus of Hepatovirus B (Phopivirus) has been isolated from a seal. This virus shared a common ancestor with Hepatovirus A about 1,800 years ago. Another hepatovirus – Marmota himalayana hepatovirus – has been isolated from the woodchuck Marmota himalayana. This virus appears to have had a common ancestor with the primate-infecting species around 1,000 years ago. Genotypes One serotype and six different genotypes (three human and three simian) have been described. The human genotypes are numbered I–III. Six subtypes have been described (IA, IB, IIA, IIB, IIIA, IIIB). The simian genotypes have been numbered IV–VI. A single isolate of genotype VII isolated from a human has also been described but has been reclassified as subgenotype IIB. Genotype III has been isolated from both humans and owl monkeys. Most human isolates are of genotype I. Of genotype I isolates, subtype IA accounts for the majority. The mutation rate in the genome has been estimated to be nucleotide substitutions per site per year. The human strains appear to have diverged from the simian strains about 3,600 years ago. The mean age of genotypes III and IIIA strains has been estimated to be 592 and 202 years, respectively. Structure Hepatovirus A is a picornavirus; it is not enveloped and contains a positive-sense, single strand of RNA packaged in a protein shell. Only one serotype of the virus has been found, but multiple genotypes exist. Codon use within the genome is biased and unusually distinct from that of its host. It also has a poor internal ribosome entry site. In the region that codes for the HAV capsid, highly conserved clusters of rare codons restrict antigenic variability. Replication cycle Vertebrates such as humans serve as the natural hosts. Transmission routes are fecal–oral and blood. Following ingestion, HAV enters the bloodstream through the epithelium of the oropharynx or intestine. The blood carries the virus to its target, the liver, where it multiplies within hepatocytes and Kupffer cells (liver macrophages). Viral replication is cytoplasmic. Entry into the host cell is achieved by attachment of the virus to host receptors, which mediates endocytosis. Replication follows the positive-stranded RNA virus replication model. Translation takes place by viral initiation. The virus exits the host cell by lysis and viroporins. Virions are secreted into the bile and released in stool. HAV is excreted in large numbers about 11 days prior to the appearance of symptoms or anti-HAV IgM antibodies in the blood. The incubation period is 15–50 days and the risk of death in those infected is less than 0.5%. Within the liver hepatocytes, the RNA genome is released from the protein coat and is translated by the cell's own ribosomes. Unlike other picornaviruses, this virus requires an intact eukaryotic initiation factor 4G (eIF4G) for the initiation of translation. The requirement for this factor results in an inability to shut down host protein synthesis, unlike other picornaviruses. 
The virus must then inefficiently compete for the cellular translational machinery, which may explain its poor growth in cell culture. Aragonès et al. (2010) theorize that the virus has evolved a naturally highly deoptimized codon usage with respect to that of its cellular host in order to negatively influence viral protein translation kinetics and allow time for capsid proteins to fold optimally. No apparent virus-mediated cytotoxicity occurs, presumably because of the virus' own requirement for an intact eIF4G, and liver pathology is likely immune-mediated. Transmission The virus primarily spreads by the fecal–oral route, and infections often occur in conditions of poor sanitation and overcrowding. Hepatitis A can be transmitted by the parenteral route, but very rarely by blood and blood products. Food-borne outbreaks are common, and ingestion of shellfish cultivated in polluted water is associated with a high risk of infection. HAV can also be spread through sexual contact, specifically oro–anal and digital–rectal sexual acts. Humans are the only natural reservoir and disease vector of the HAV virus; no known insect or other animal vectors can transmit the virus. A chronic HAV state has not been reported. About 40% of all acute viral hepatitis is caused by HAV. Infected individuals are infectious prior to onset of symptoms, roughly 10 days following infection. The virus is resistant to detergent, acid (pH 1), solvents (e.g., ether, chloroform), drying, and temperatures up to 60 °C. It can survive for months in fresh and salt water. Common-source (e.g., water, food) outbreaks are typical. Infection is common in children in developing countries, reaching 100% incidence, but following infection, lifelong immunity results. HAV can be inactivated by chlorine treatment (drinking water), formalin (0.35%, 37 °C, 72 hours), peracetic acid (2%, 4 hours), beta-propiolactone (0.25%, 1 hour), and UV radiation (2 μW/cm²/min). In developing countries, and in regions with poor hygiene standards, the rates of infection with this virus are high and the illness is usually contracted in early childhood. As incomes rise and access to clean water increases, the incidence of HAV decreases. In developed countries, though, the infection is contracted primarily by susceptible young adults, most of whom are infected with the virus during trips to countries with a high incidence of the disease or through contact with infectious persons. Diagnosis Although HAV is excreted in the feces towards the end of the incubation period, specific diagnosis is made by the detection of HAV-specific IgM antibodies in the blood. IgM antibody is only present in the blood following an acute hepatitis A infection. It is detectable from one to two weeks after the initial infection and persists for up to 14 weeks. The presence of IgG antibodies in the blood means the acute stage of the illness has passed and the person is immune to further infection. IgG antibodies to HAV are also found in the blood following vaccination, and tests for immunity to the virus are based on the detection of these antibodies. During the acute stage of the infection, the liver enzyme alanine transaminase (ALT) is present in the blood at levels much higher than normal. The enzyme comes from liver cells damaged by the virus. Hepatovirus A is present in the blood (viremia) and feces of infected people up to two weeks before clinical illness develops. Prevention Hepatitis A can be prevented by vaccination, good hygiene, and sanitation. 
Vaccination The two types of vaccines contain either inactivated Hepatovirus A or a live but attenuated virus. Both provide active immunity against a future infection. The vaccine protects against HAV in more than 95% of cases for longer than 25 years. In the United States, the vaccine developed by Maurice Hilleman and his team was licensed in 1995; it was first used in 1996 for children in high-risk areas, and in 1999 its use was extended to areas with elevated levels of infection. The vaccine is given by injection. An initial dose provides protection lasting one year starting 2–4 weeks after vaccination; the second booster dose, given six to 12 months later, provides protection for over 20 years. Worldwide, the vaccine was introduced in 1992 and was initially recommended for persons at high risk. Since then, Bahrain and Israel have embarked on elimination programs. In countries where widespread vaccination has been practiced, the incidence of hepatitis A has decreased dramatically. In the United States, vaccination of children is recommended at 1 and 2 years of age; hepatitis A vaccination is not recommended in those younger than 12 months of age. It is also recommended in those who have not been previously immunized and who have been exposed or are likely to be exposed due to travel. The CDC recommends vaccination against infection for men who have sex with men. Treatment No specific treatment for hepatitis A is known. Recovery from symptoms following infection may take several weeks or months. Therapy is aimed at maintaining comfort and adequate nutritional balance, including replacement of fluids lost from vomiting and diarrhea. Prognosis In the United States in 1991, the mortality rate for hepatitis A was estimated to be 0.015% for the general population, but ranged up to 1.8–2.1% for those aged 50 and over who were hospitalized with icteric hepatitis. The risk of death from acute liver failure following HAV infection increases with age and when the person has underlying chronic liver disease. Young children who are infected with hepatitis A typically have a milder form of the disease, usually lasting 1–3 weeks, whereas adults tend to experience a much more severe form of the disease. Epidemiology Globally, symptomatic HAV infections are believed to occur in around 1.4 million people a year. About 114 million infections (asymptomatic and symptomatic) occurred all together in 2015. Acute hepatitis A resulted in 11,200 deaths in 2015. Developed countries have low circulating levels of hepatovirus A, while developing countries have higher levels of circulation. Most adolescents and adults in developing countries have already had the disease, and thus are immune. Adults in middle-income countries may reach adulthood without prior exposure, and thus remain at risk of disease if exposed. Countries Over 30,000 cases of hepatitis A were reported to the CDC in the US in 1997, but the number has since dropped to fewer than 2,000 cases reported per year. The most widespread hepatitis A outbreak in the United States occurred in 2018, in the state of Kentucky. The outbreak is believed to have started in November 2017. By July 2018, 48% of the state's counties had reported at least one case of hepatitis A, and the total number of suspected cases was 969 with six deaths (482 cases in Louisville, Kentucky). By July 2019 the outbreak had reached 5,000 cases and 60 deaths, but had slowed to just a few new cases per month. 
Another widespread outbreak in the United States, the 2003 US hepatitis outbreak, affected at least 640 people (killing four) in northeastern Ohio and southwestern Pennsylvania in late 2003. The outbreak was blamed on tainted green onions at a restaurant in Monaca, Pennsylvania. In 1988, more than 300,000 people in Shanghai, China, were infected with HAV after eating clams (Anadara subcrenata) from a contaminated river. In June 2013, frozen berries sold by US retailer Costco and purchased by around 240,000 people were the subject of a recall, after at least 158 people were infected with HAV, 69 of whom were hospitalized. In April 2016, frozen berries sold by Costco were once again the subject of a recall, after at least 13 people in Canada were infected with HAV, three of whom were hospitalized. In Australia in February 2015, a recall of frozen berries was issued after at least 19 people contracted the illness following their consumption of the product. In 2017, California (particularly around San Diego), Michigan, and Utah reported outbreaks of hepatitis A that led to over 800 hospitalizations and 40 deaths. See also 2019 United States hepatitis A outbreak
Hepatitis A
[ "Biology" ]
3,312
[ "Vaccination", "Vaccine-preventable diseases" ]
344,542
https://en.wikipedia.org/wiki/Multiplicative%20order
In number theory, given a positive integer n and an integer a coprime to n, the multiplicative order of a modulo n is the smallest positive integer k such that a^k ≡ 1 (mod n). In other words, the multiplicative order of a modulo n is the order of a in the multiplicative group of the units in the ring of the integers modulo n. The order of a modulo n is sometimes written as ord_n(a). Example The powers of 4 modulo 7 are as follows: 4^1 = 4 ≡ 4 (mod 7), 4^2 = 16 ≡ 2 (mod 7), 4^3 = 64 ≡ 1 (mod 7), after which the cycle repeats. The smallest positive integer k such that 4^k ≡ 1 (mod 7) is 3, so the order of 4 (mod 7) is 3. Properties Even without knowledge that we are working in the multiplicative group of integers modulo n, we can show that a actually has an order by noting that the powers of a can only take a finite number of different values modulo n, so according to the pigeonhole principle there must be two powers, say s and t, and without loss of generality s > t, such that a^s ≡ a^t (mod n). Since a and n are coprime, a has an inverse element a^(−1) and we can multiply both sides of the congruence by a^(−t), yielding a^(s−t) ≡ 1 (mod n). The concept of multiplicative order is a special case of the order of group elements. The multiplicative order of a number a modulo n is the order of a in the multiplicative group whose elements are the residues modulo n of the numbers coprime to n, and whose group operation is multiplication modulo n. This is the group of units of the ring Z_n; it has φ(n) elements, φ being Euler's totient function, and is denoted as U(n) or U(Z_n). As a consequence of Lagrange's theorem, the order of a (mod n) always divides φ(n). If the order of a is actually equal to φ(n), and therefore as large as possible, then a is called a primitive root modulo n. This means that the group U(n) is cyclic and the residue class of a generates it. The order of a (mod n) also divides λ(n), a value of the Carmichael function, which is an even stronger statement than the divisibility of φ(n). Programming languages Maxima CAS : zn_order (a, n) Wolfram Language : MultiplicativeOrder[k, n] Rosetta Code - examples of multiplicative order in various languages See also Discrete logarithm Modular arithmetic
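The definition translates directly into a brute-force computation. Below is a minimal Python sketch (the function name is ours; it is independent of the Maxima and Wolfram Language built-ins listed above):

```python
from math import gcd

def multiplicative_order(a: int, n: int) -> int:
    """Smallest k >= 1 with a**k congruent to 1 modulo n.

    Defined only when gcd(a, n) == 1; by Lagrange's theorem the
    loop terminates after at most phi(n) <= n - 1 iterations.
    """
    if n < 1 or gcd(a, n) != 1:
        raise ValueError("requires n >= 1 and gcd(a, n) == 1")
    if n == 1:
        return 1  # the group of units modulo 1 is trivial
    k, power = 1, a % n
    while power != 1:
        power = (power * a) % n
        k += 1
    return k

assert multiplicative_order(4, 7) == 3  # the example in the text
```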
Multiplicative order
[ "Mathematics" ]
549
[ "Arithmetic", "Modular arithmetic", "Number theory" ]
344,670
https://en.wikipedia.org/wiki/Organic%20lawn%20management
Organic lawn management or organic turf management or organic land care or organic landscaping is the practice of establishing and caring for an athletic turf field or garden lawn and landscape using organic horticulture, without the use of manufactured inputs such as synthetic pesticides or artificial fertilizers. It is a component of organic land care and organic sustainable landscaping which adapt the principles and methods of sustainable gardening and organic farming to the care of lawns and gardens. Techniques A primary element of organic lawn management is the use of compost and compost tea to reduce the need for fertilization and to encourage healthy soil that enables turf to resist pests. A second element is mowing tall (3" – 4") to suppress weeds and encourage deep grass roots, and leaving grass clippings and leaves on the lawn as fertilizer. Additional techniques include fertilizing in the fall, not the spring. Organic lawns often benefit from overseeding, slice seeding and aeration more frequently due to the importance of a strong root system. Well-maintained organic lawns are often drought-tolerant. If a lawn does need watering, it should be done infrequently but deeply. Other organic techniques for caring for a lawn include irrigating only when the lawn shows signs of drought stress and then watering deeply, minimizing needless water consumption. Using low-volume sprinklers provides more penetration without runoff. Lawnmowers with a mulching function can be useful in reducing fertilizer use by cutting grass clippings and leaves so finely that they settle into the grass inconspicuously and decompose into the soil. Clover lawns Grass seed mixes used to contain white clover, which provides natural fertilizer, but this practice fell out of favor with the rise of synthetic fertilizer and of businesses profiting from its sale. In recent years, homeowners have returned to the use of clover as a natural fertilization source for lawns. In 2022, the New York Times reported on the growing popularity of clover lawns, noting that "#cloverlawn has over 65 million views on TikTok." Organic pesticides Organic land managers may use registered pesticides approved under the National Organic Program in their lawn care programs. These pesticides are generally derived from natural materials and are minimally processed. Alternatives include the use of beneficial insects and natural predators such as nematodes to prevent infestation of lawns with pests such as crane fly larvae and ants. Pesticides are allowed under the NOFA Standards for organic land care but are not always used in organic lawn care, because proper cultural practices can keep pest populations below action thresholds, such as preventing fungal infections using physical maintenance techniques such as effective mowing and raking. Organic fertilizers Synthetic (inorganic-based) fertilizers are made in chemical processes, some of which use fossil fuels and contribute to global warming. They also greatly increase the amount of nitrogen entering the global nitrogen cycle, which has a serious negative impact on the organization and functioning of the world's ecosystems, including accelerating the loss of biological diversity and the decline of coastal marine ecosystems and fisheries. Nitrogen fertilizer releases N2O, a greenhouse gas, into the atmosphere after application. Organic fertilizer nitrogen content is typically lower than that of synthetic fertilizer. 
Biodiversity Organic lawns contribute to biodiversity, by definition, when they contain more than one or two grass species. Examples of additional lawn and grasslike species that can be encouraged in organic lawns include dozens of grass species (eight for ryegrass alone), sedges, mosses, clover, vetches, trefoils, yarrow, ground cover alternatives, and other mowable plants. Biodiversity increases the functioning and stress tolerance of ecosystems. Lack of biodiversity is a significant environmental issue raised by the use of lawns, with grassroots groups emerging to promote this method of lawn care. Certain low-growing grass species can eliminate the need for mowing altogether, which is also environmentally friendly. Clover is often mixed with grasses for its ability to fix nitrogen into the soil and fertilize the lawn. No Mow May No Mow May is a campaign to encourage homeowners to not mow their lawns during the month of May to support pollinators, native plants and wildlife diversity. The campaign was started in 2019 by Plantlife, a nature conservation charity based in the United Kingdom. In 2020, the city of Appleton, Wisconsin, stopped mowing for the month of May. In 2022, cities around the United States participated in No Mow May. The campaign is supported by the Xerces Society through its Bee City USA program, which has described the movement as a "gateway" to creating better habitat for bees when adapted to local conditions and including native wildflowers. Locations with organic lawns Many small properties with lawns around the world are maintained using organic techniques. In the late 20th century, a movement to manage lawns organically began to grow. Some large properties and municipalities require organic lawn management and organic landscaping. They include the following locations: Ryton Organic Gardens, Ryton-on-Dunsmore, Warwickshire, England In 1985, the nonprofit Garden Organic, formerly known as the Henry Doubleday Research Association, relocated to its present site, where it operates organic, landscaped grounds open to the public. The property is owned by Coventry University. Highgrove House estate, Gloucestershire, England By 1996, King Charles III, then the Prince of Wales, had transitioned the Highgrove House estate's farm and gardens to organic management. Painshill, Cobham, Surrey, England In 1998, Painshill was awarded the Europa Nostra Medal for its restoration of the 18th-century gardens originally designed by Charles Hamilton between 1738 and 1773 using what today would be called organic methods. Common Ground Education Center, Unity, Maine, United States In 1998, the Maine Organic Farmers and Gardeners Association acquired the 300-acre property in Unity, Maine, and converted the land to organic demonstration fields, gardens, orchards, shade trees and low-impact forestry woodlots. It is the site of the annual Common Ground Country Fair, a fair showcasing organic food and farming. Vineyard Golf Club, Martha's Vineyard, Massachusetts, United States In 2002, the Vineyard Golf Club opened on Martha's Vineyard with the requirement that it use organic turf management. The first course superintendent, Jeff Carlson, was the recipient of the 2003 GCSAA/Golf Digest Environmental Leaders in Golf Award and is the 2008 winner of the President's Award for Environmental Stewardship. 
Harvard University, Massachusetts, United States In 2009, the New York Times reported on Harvard University's decision to use organic management on all their grounds, which was championed by President Drew Gilpin Faust and implemented by landscape director Wayne Carbone. The New York Times noted: "Thanks to these efforts, the university has reduced the use of irrigation by 30 percent, according to Mr. Carbone, thus saving two million gallons of water a year. And the 40-year-old orchards at Elmwood, which have been treated with compost tea, are recovering from leaf spot and apple scab, two ailments that had afflicted them." High Line, Manhattan, New York, United States In 2009, the first section of the organically managed 1.45-mile linear park the High Line opened on the former New York Central Railroad, an elevated train line spur, on the west side of Manhattan in New York City. The High Line's design is a collaboration between James Corner Field Operations, Diller Scofidio + Renfro, and Piet Oudolf. Takoma Park, Maryland, United States Residents Catherine Cummings and Julie Taddeo began a campaign in 2011 to restrict lawn-care pesticide use in Takoma Park, Maryland. City council members Seth Grimes and Tim Male quickly got behind the effort and drafted Takoma Park's Safe Grow Act of 2013, leading to the city council's enactment of the law, which went into effect March 1, 2014. Ogunquit, Maine, United States In 2014, Bill and Judy Baker and other residents convinced the Ogunquit Town Council to pass a strict pesticide ban requiring organic land care on both public and private property. Montgomery County, Maryland, United States In 2015, Julie Taddeo, Catherine Cummings, and their Safe Grow Montgomery colleagues campaigned to get Montgomery County, Maryland, to adopt a pesticide ban that required organic lawn management throughout the county on both public and private property. Montgomery County Council President George Leventhal (D-at-large) wrote and introduced Bill 52-14, based on Takoma Park's 2013 legislation. The county council enacted Bill 52-14 that October. The ban was challenged in court by local lawn care companies and the pesticide industry lobbying group Responsible Industry for a Sound Environment (RISE). In 2017, the ban was overturned by a Circuit Court, and the ruling was appealed. In 2019, a Maryland appeals court upheld the ban. Irvine, California, United States In 2016, Non Toxic Irvine, a group led by citizens Laurie Thompson, Ayn Craciun, Kathleen Hallal, Kim Konte and Bob Johnson, with help from City Councilor Christina Shea, convinced the City Council to adopt an organic integrated pest management program requiring organic land care on all city property. Carlsbad, California, United States In 2017, Non Toxic Carlsbad campaigned to get the city to adopt an ordinance requiring organic land care on all city property. Portland, Maine, United States In 2018, Portland Protectors, led by Avery Yale Kamila and Maggie Knowles, convinced the Portland City Council to adopt an organic ordinance requiring organic land care on all public and private property. Dover, New Hampshire, United States In 2018, the Dover City Council unanimously approved a resolution calling for a "Commitment to Organic Land Management Practices" brought to the Council by resident group Non Toxic Dover, NH and sponsored by Councilor Dennis Shanahan. The City began an all-organic turf program for City property in 2020. 
Gardens of Vatican City, Rome, Italy In 2019, Rafael Tornini, head of the Garden and Environment Service of the Vatican, announced that the 37-acre Gardens of Vatican City had been transitioning to organic management since 2017. Delta Dental Stadium, Manchester, New Hampshire, United States In 2019, the New Hampshire Fisher Cats began the transition to make Delta Dental Stadium the first professional baseball field that is organically managed. Stonyfield, as part of its #playfree campaign to convert recreational spaces to organic management, supported the field's transition. New York City public lands, United States In 2021, the New York City Council banned the use of synthetic pesticides by city agencies. The effort was started by teacher Paula Rogovin's kindergarten class at P.S. 290. Maui, Hawaii, United States In 2021, the island of Maui banned synthetic pesticides and fertilizers from all county lands. Community organizers, including Autumn Ness, director of Beyond Pesticides' Hawai'i Organic Land Management Program, worked to enact the law. Baltimore, Maryland, United States In 2022, a synthetic pesticide ban on public and private property passed by the Baltimore City Council in 2020 went into effect. It includes a fine of up to $250 for violators. Books "NOFA Standards for Organic Land Care, 6th Edition: Practices for the Design and Maintenance of Ecological Landscapes," Michael Almstead, Dr. Jamie Banks, et al., contributors. Northeast Organic Farming Association of Connecticut, Inc. 2017. "The Organic Lawn Care Manual: A Natural, Low-Maintenance System for a Beautiful, Safe Lawn," by Paul Tukey. Storey Publishing, LLC. 2007. See also Grasscycling Organic farming Organic horticulture Organic movement Natural landscaping
Organic lawn management
[ "Biology", "Environmental_science" ]
2,421
[ "Biocides", "Toxicology", "Pesticides" ]
344,731
https://en.wikipedia.org/wiki/Analog%20multiplier
In electronics, an analog multiplier is a device that takes two analog signals and produces an output which is their product. Such circuits can be used to implement related functions such as squaring (applying the same signal to both inputs) and square roots. An electronic analog multiplier can be called by several names, depending on the function it is used to serve (see analog multiplier applications). Voltage-controlled amplifier versus analog multiplier If one input of an analog multiplier is held at a steady-state voltage, a signal at the second input will be scaled in proportion to the level on the fixed input. In this case, the analog multiplier may be considered to be a voltage-controlled amplifier. Obvious applications would be for electronic volume control and automatic gain control (AGC). Although analog multipliers are often used for such applications, voltage-controlled amplifiers are not necessarily true analog multipliers. For example, an integrated circuit designed to be used as a volume control may have a signal input designed for 1 Vp-p and a control input designed for 0–5 V DC; that is, the two inputs are not symmetrical, and the control input will have a limited bandwidth. By contrast, in what is generally considered to be a true analog multiplier, the two signal inputs have identical characteristics. Applications specific to a true analog multiplier are those where both inputs are signals, for example in a frequency mixer or an analog circuit to implement a discrete Fourier transform. Due to the precision required for the device to be accurate and linear over the input range, a true analog multiplier is generally a much more expensive part than a voltage-controlled amplifier. A four-quadrant multiplier is one where inputs and outputs may swing positive and negative. Many multipliers work in only two quadrants (one input may have only one polarity) or a single quadrant (inputs and outputs have only one polarity, usually all positive). Analog multiplier devices Analog multiplication can be accomplished by using the Hall effect. The Gilbert cell is a circuit whose output current is a four-quadrant multiplication of its two differential inputs. Integrated circuit analog multipliers are incorporated into many applications, such as a true RMS converter, but a number of general-purpose analog multiplier building blocks are available, such as linear four-quadrant multipliers. General-purpose devices will usually include attenuators or amplifiers on the inputs or outputs in order to allow the signal to be scaled within the voltage limits of the circuit. Although analog multiplier circuits are very similar to operational amplifier circuits, they are far more susceptible to noise and offset voltage-related problems, as these errors may become multiplied. When dealing with high-frequency signals, phase-related problems may be quite complex. For this reason, manufacturing wide-range general-purpose analog multipliers is far more difficult than manufacturing ordinary operational amplifiers, and such devices are typically produced using specialist technologies and laser trimming, as are those used for high-performance amplifiers such as instrumentation amplifiers. This means they have a relatively high cost and so they are generally used only for circuits where they are indispensable. Commonly available analog multiplier ICs include the MPY634 from Texas Instruments; the AD534, AD632 and AD734 from Analog Devices; and the HA-2556 from Intersil. 
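The frequency-mixer application mentioned above follows from a standard trigonometric identity, not from anything specific to a particular multiplier IC: a true four-quadrant multiplication of two sinusoids produces components at the sum and difference frequencies.

```latex
\cos(\omega_1 t)\,\cos(\omega_2 t)
  = \tfrac{1}{2}\cos\!\big((\omega_1 - \omega_2)t\big)
  + \tfrac{1}{2}\cos\!\big((\omega_1 + \omega_2)t\big)
```

Filtering out one of the two components yields the up- or down-converted signal.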
Analog versus digital tradeoff in multiplication In most cases, the functions performed by an analog multiplier may be performed better and at lower cost using digital signal processing techniques. At low frequencies, a digital solution is cheaper and more effective and allows the circuit function to be modified in firmware. As frequencies rise, the cost of implementing digital solutions increases much more steeply than for analog solutions. As digital technology advances, the use of analog multipliers tends to be ever more marginalized towards higher-frequency circuits or very specialized applications. In addition, most signals are now destined to become digitized sooner or later in the signal path, and if at all possible the functions that would require a multiplier tend to be moved to the digital side. For example, in early digital multimeters, true RMS functions were provided by external analog multiplier circuits. Nowadays (with the exception of high-frequency measurements) the tendency is to increase the sampling rate of the ADC in order to digitize the input signal, allowing RMS and a whole range of other functions to be carried out by a digital processor. However, blindly digitizing the signal as early in the signal path as possible costs unreasonable amounts of power due to the need for high-speed ADCs. A much more efficient solution involves analog preprocessing to condition the signal and reduce its bandwidth so that energy is spent to digitize only the bandwidth that contains useful information. In addition, digitally controlled resistors allow microcontrollers to implement many functions such as tone control and AGC without having to process the digitized signal directly. Analog multiplier applications Variable-gain amplifier Ring modulator Product detector Frequency mixer Companding Squelch Analog computer Analog signal processing Automatic gain control True RMS converter Analog filters (especially voltage-controlled filters) PAM (pulse-amplitude modulation) See also NE612, an oscillator and Gilbert cell multiplier mixer
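As a concrete illustration of the digital true-RMS computation described above, here is a minimal Python sketch (the function name and the test signal are our own, for illustration only): square the signal, which is multiplication of the signal by itself, average, then take the square root.

```python
import math

def true_rms(samples):
    """Root-mean-square of a sampled waveform.

    This is the digital replacement for the analog
    multiplier-based RMS converters mentioned above.
    """
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# A full-scale sine wave sampled over one whole period has RMS 1/sqrt(2).
N = 10_000
sine = [math.sin(2 * math.pi * k / N) for k in range(N)]
print(round(true_rms(sine), 4))  # 0.7071
```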
Analog multiplier
[ "Engineering" ]
1,104
[ "Analog circuits", "Electronic engineering", "Radio electronics", "Frequency mixers" ]
344,775
https://en.wikipedia.org/wiki/Earthenware
Earthenware is glazed or unglazed nonvitreous pottery that has normally been fired below . Basic earthenware, often called terracotta, absorbs liquids such as water. However, earthenware can be made impervious to liquids by coating it with a ceramic glaze, and such a process is used for the great majority of modern domestic earthenware. The other main types of pottery are porcelain, bone china, and stoneware, all fired at high enough temperatures to vitrify. End applications include tableware and decorative ware such as figurines. Earthenware comprises "most building bricks, nearly all European pottery up to the seventeenth century, most of the wares of Egypt, Persia and the near East; Greek, Roman and Mediterranean, and some of the Chinese; and the fine earthenware which forms the greater part of our tableware today" ("today" being 1962). Pit-fired earthenware dates back to as early as 29,000–25,000 BC, and for millennia only earthenware pottery was made, with stoneware gradually developing some 5,000 years ago, but then apparently disappearing for a few thousand years. Outside East Asia, porcelain was manufactured at any scale only from the 18th century AD, and then initially as an expensive luxury. After it is fired, earthenware is opaque and non-vitreous, soft and capable of being scratched with a knife. The Combined Nomenclature of the European Union describes it as being made of selected clays sometimes mixed with feldspars and varying amounts of other minerals, and white or light-coloured (i.e., slightly greyish, cream, or ivory). Characteristics Generally, unfired earthenware bodies exhibit higher plasticity than most whiteware bodies and hence are easier to shape by RAM press, roller-head or potter's wheel than bone china or porcelain. Due to its porosity, fired earthenware, with a water absorption of 5–8%, must be glazed to be watertight. Earthenware has lower mechanical strength than bone china, porcelain or stoneware, and consequently articles are commonly made in thicker cross-section, although they are still more easily chipped. Darker-coloured terracotta earthenware, typically orange or red due to a comparatively high content of iron oxides, is widely used for flower pots, tiles and some decorative and oven ware. Production Materials The compositions of earthenware bodies vary considerably, and include both prepared and 'as dug'; the former being by far the dominant type for studio and industry. A general body formulation for contemporary earthenware is 25% kaolin, 25% ball clay, 35% quartz and 15% feldspar. Firing Earthenware can be produced at firing temperatures as low as and many clays will not fire successfully above about . Much historical pottery was fired somewhere around , giving a wide margin of error where there was no precise way of measuring temperature, and very variable conditions within the kiln. Modern earthenware may be biscuit (or "bisque") fired to temperatures between and glost-fired (or "glaze-fired") to between . Some studio potters follow the reverse practice, with a low-temperature biscuit firing and a high-temperature glost firing. Oxidising atmospheres are the most common. After firing, most earthenware bodies will be coloured white, buff or red. For iron-rich earthenware bodies, firing at a comparatively low temperature in an oxidising atmosphere results in a red colour, whilst higher temperatures with a reducing atmosphere result in darker colours, including black. Higher firing temperatures may cause earthenware to bloat. 
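The general body formulation quoted above is simply a set of mass fractions, so batch weights scale linearly with batch size. A trivial Python sketch (our own illustration, not a production recipe tool):

```python
# Mass fractions from the general contemporary earthenware body
# quoted above: 25% kaolin, 25% ball clay, 35% quartz, 15% feldspar.
RECIPE = {"kaolin": 0.25, "ball clay": 0.25, "quartz": 0.35, "feldspar": 0.15}

def batch_weights(total_kg: float) -> dict:
    """Split a total dry-batch mass into component masses (kg)."""
    assert abs(sum(RECIPE.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return {name: round(frac * total_kg, 2) for name, frac in RECIPE.items()}

print(batch_weights(50.0))
# {'kaolin': 12.5, 'ball clay': 12.5, 'quartz': 17.5, 'feldspar': 7.5}
```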
Examples of earthenware Although the most highly valued types of pottery often shifted to stoneware and porcelain as particular cultures developed them, there are many artistically important types of earthenware. All ancient Greek and ancient Roman pottery is earthenware, as is the Hispano-Moresque ware of the late Middle Ages, which developed into tin-glazed pottery or faience traditions in several parts of Europe, most notably the painted maiolica of the Italian Renaissance, and Dutch Delftware. With a white glaze, these were able to imitate porcelains both from East Asia and Europe. Amongst the most complicated earthenware ever made are the life-size Yixian glazed pottery luohans of the Liao dynasty (907–1125), the Saint-Porchaire ware of the mid-16th century, apparently made for the French court, and the life-size majolica peacocks by Mintons in the 1860s. In the 18th century, especially in English Staffordshire pottery, technical improvements enabled very fine wares such as Wedgwood's creamware, which competed with porcelain with considerable success, as his huge creamware Frog Service for Catherine the Great showed. The invention of transfer printing processes made highly decorated wares cheap enough for far wider sections of the population in Europe. In China, sancai glazed wares were lead-glazed earthenware, and as elsewhere, terracotta remained important for sculpture. The Etruscans had made large sculptures such as statues in it, while the Romans used it mainly for figurines and Campana reliefs. Chinese painted or Tang dynasty tomb figures were earthenware, as were the later Yixian glazed pottery luohans. After the ceramic figurine was revived in European porcelain, earthenware figures followed, such as the popular English Staffordshire figures. See also Other types of earthenware or other examples include: Terracotta Redware Victorian majolica Lusterware, which uses iridescent glazes Raku Ironstone china, on the border of earthenware and stoneware Yellowware
Earthenware
[ "Engineering" ]
1,393
[ "Ceramic engineering", "Ceramic materials" ]
344,815
https://en.wikipedia.org/wiki/Glen%20Canyon%20Dam
Glen Canyon Dam is a concrete arch-gravity dam in the southwestern United States, located on the Colorado River in northern Arizona, near the city of Page. The dam was built by the Bureau of Reclamation (USBR) from 1956 to 1966 and forms Lake Powell, one of the largest man-made reservoirs in the U.S. with a capacity of more than . The dam is named for Glen Canyon, a series of deep sandstone gorges now flooded by the reservoir; Lake Powell is named for John Wesley Powell, who in 1869 led the first expedition to traverse the Colorado River's Grand Canyon by boat. A dam in Glen Canyon was studied as early as 1924, but these plans were initially dropped in favor of the Hoover Dam (completed in 1936), which was located in Black Canyon. By the 1950s, due to rapid population growth in the seven U.S. and two Mexican states comprising the Colorado River Basin, the Bureau of Reclamation deemed the construction of additional reservoirs necessary. Glen Canyon Dam remains a central issue for modern environmentalist movements: beginning in the late 1990s, the Sierra Club and other organizations renewed the call to dismantle the dam and drain Lake Powell in lower Glen Canyon. Glen Canyon and Lake Powell are managed by the Department of the Interior within Glen Canyon National Recreation Area. Since first filling to capacity in 1980, Lake Powell water levels have fluctuated greatly depending on water demand and annual runoff. The operation of Glen Canyon Dam helps ensure an equitable distribution of water between the states of the Upper Colorado River Basin (Colorado, Wyoming, and most of New Mexico and Utah) and the Lower Basin (California, Nevada and most of Arizona). During years of drought, Glen Canyon guarantees water delivery to the Lower Basin states, without the need for rationing in the Upper Basin. In wet years, it captures extra runoff for future use. The dam is also a major source of hydroelectricity, averaging over 4 billion kilowatt-hours per year. The long and winding Lake Powell, known for its scenic beauty and recreational opportunities including houseboating, fishing and water skiing, attracts millions of tourists each year to the Glen Canyon National Recreation Area. In addition to its flooding of the scenic Glen Canyon, the dam's economic justification was questioned by some critics. It became "a catalyst for the modern environmental movement," and was one of the last dams of its size to be built in the United States. The dam has been criticized for the large evaporative losses from Lake Powell and its impact on the ecology of the Grand Canyon, which lies downstream; environmental groups continue to advocate for the dam's removal. Water managers and utilities state that the dam is a major source of renewable energy and provides a buffer for severe droughts. Background The need for a dam The Colorado River is the single largest source of water in the southwestern United States and northwest Mexico; before massive dam projects tamed the river in the 20th century, its flow was far from dependable. Annual discharge from the Colorado River and its tributaries ranges from , and 10-year averages may fluctuate as much as . Flooding, and the river's enormous silt or sediment load, created problems for settlements in the Lower Colorado River Valley and navigation on the lower portion of the river. During droughts, there was too little water available for irrigation. 
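The hydroelectric figure quoted above can be sanity-checked from first principles with the standard hydropower formula P = ρgQHη. The flow, head, and efficiency in the Python sketch below are assumed round numbers chosen for illustration, not official Bureau of Reclamation data:

```python
# Rough sanity check of the "over 4 billion kWh per year" figure using the
# standard hydropower formula P = rho * g * Q * H * eta. Q, H and eta are
# assumed illustrative values, not published operating data.
rho = 1000.0   # water density, kg/m^3
g = 9.81       # gravitational acceleration, m/s^2
Q = 400.0      # assumed average turbine flow, m^3/s
H = 150.0      # assumed effective hydraulic head, m
eta = 0.85     # assumed overall turbine/generator efficiency

power_w = rho * g * Q * H * eta            # instantaneous power, watts
annual_kwh = power_w / 1000 * 24 * 365     # energy over a year, kWh
print(f"{power_w / 1e6:.0f} MW, {annual_kwh / 1e9:.1f} billion kWh/yr")
# -> roughly 500 MW and ~4.4 billion kWh/yr, the right order of magnitude
```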
In 1904, the Colorado River was accidentally redirected after it damaged a canal gate in Mexico, causing the river to flood part of California's Imperial Valley and create the Salton Sea. After this catastrophe, California and Arizona began to call for a dam to control the tempestuous river. In 1922, six U.S. states signed the Colorado River Compact to officially allocate the flow of the Colorado River and its tributaries. Each half of the Colorado River Basin – the Upper Basin, comprising Colorado, New Mexico, Utah and Wyoming, and the Lower Basin, with California and Nevada – was allotted of water annually, and the Treaty relating to the utilization of waters of the Colorado and Tijuana Rivers and of the Rio Grande, signed in 1944, allocated to Mexico. The third Lower Basin state, Arizona, did not ratify the Compact until 1944 because it was concerned that California might seek to appropriate a portion of its share before it could be put to use. The total, , was based on only thirty years of streamflow records starting in the late 1890s. It was believed to represent the annual flow as measured at Lee's Ferry, Arizona (the official dividing point of the upper and lower basins), downstream of present-day Glen Canyon Dam. As it turned out, the early 20th century was one of the wettest periods in the last 800 years. The dependable natural flow past Lee's Ferry is now believed to be about . The general consensus among inhabitants of the Colorado River basin and government officials was that a high dam had to be built on the Colorado to control floods and provide carry-over water storage for times of drought. Possible locations for this dam were debated for years, and the Bureau of Reclamation's first study for a dam at Glen Canyon was made in 1924, in addition to studies for locations at Black and Boulder Canyons lower on the Colorado, below the Grand Canyon. These studies found that the lower Colorado sites had stronger foundation rock, which might result in less reservoir seepage. The Glen Canyon site was so remote that delivering supplies and transporting workers there was infeasible at the time. The first Glen Canyon proposal lay upstream of the Lee's Ferry dividing line, and its stored water would be counted as the Upper Basin's. With its substantial Congressional clout, California refused to allow the "virtual faucets" of a Colorado River dam "to be built in what amounted to hostile territory." With the Glen Canyon site out of the question, the initial need for a reservoir was met in 1936 with the completion of Hoover Dam in Black Canyon, storing in the mammoth reservoir of Lake Mead. It was not able to weather the worst floods or droughts, and was filling with sediment at a rate that would render it useless in a few hundred years. But most importantly, Hoover only controlled the lower portion of the river. The Upper Basin states, whose rivers remained undammed, had no way to ensure they could fulfill their delivery obligation to the Lower Basin states while retaining enough water for their own use. Without storage reservoirs of their own, the Upper Basin states risked a "call" on the Colorado River during drought years: they would be forced to use less water in order to keep the river flowing to Lake Mead and California, the state with the most senior water rights. 
Colorado River Storage Project To provide water for the Upper Basin and ensure delivery to the Lower Basin, the Bureau of Reclamation proposed the Colorado River Storage Project, which would consist of a dam on the Colorado River at Glen Canyon, several dams on the Gunnison River and San Juan River, and a pair of dams to be built on the Green River, the Colorado's major upper tributary, at Echo Park and Split Mountain. The 1956 Colorado River Storage Project Act authorized the purposes of "regulating the flow of the Colorado River, storing water for beneficial consumptive use, providing for reclamation of arid and semi-arid lands, providing flood control, and generating hydropower." The proposal for Glen Canyon Dam was most vocally supported by the state of Arizona, which wished to get Colorado River water to Phoenix and Tucson, located hundreds of miles away from the Colorado in the center of the state. Glen Canyon Dam would regulate river flow between Lee's Ferry and Lake Mead, where the Colorado drops some , allowing the future construction of two additional hydroelectric dams, at Marble Canyon and Bridge Canyon. These two dams would be partially inside Grand Canyon National Park. Glen, Marble and Bridge together would provide the power necessary to pump water to where it was needed in central Arizona. In 1963, Arizona's congressional delegation proposed these dams as part of the Central Arizona Project to accomplish these goals. The state of California opposed the project, as it would eliminate the "surplus" water in the Colorado (really the Upper Basin's yet unused supplies) it had gotten accustomed to using. The Bureau of Reclamation recognized a more serious problem. Construction of the Storage Project, and allowing the Upper Basin to develop its water supplies, would tip the whole Colorado River system toward a structural water deficit since the Colorado River's average flow is less than what was apportioned in the 1922 Compact. The USBR predicted that by 2030 the annual water supply for the Lower Basin would fall by twenty-five percent, to . To make up for this deficit, the USBR incorporated these proposals with the "Pacific Southwest Water Plan" on January 21, 1964, in which power sales from Glen, Marble and Bridge (often called "cash register dams") would be used to fund a diversion of water from the wetter Pacific Northwest to the Colorado Basin. In addition to the proposed diversion of the Trinity River in Northern California, Marc Reisner wrote in Cadillac Desert that "in the Pacific Northwest there was a lot of suspicion that the Pacific Southwest Water Plan was merely a smokescreen for a much larger plan, long a gleam in the Colorado Basin's eye, to tap the Columbia River." Environmental concerns The Echo Park dam would be inside the federally protected Dinosaur National Monument and would submerge of scenic canyons – a move that alarmed environmentalists. The environmental organization Sierra Club, led by David Brower, was the most vocal opponent of Echo Park Dam, and fought a protracted battle against the Bureau of Reclamation, on the basis that "building the dam would not only destroy a unique wilderness area, but would set a terrible precedent for exploiting resources in America's national parks and monuments". The Bureau of Reclamation favored the Echo Park site over Glen Canyon, because its narrow canyons and high elevation (more than , as compared to at Glen Canyon) would lead to less evaporation. 
It said that building Echo Park Dam and a "low" Glen Canyon Dam would save of water per year over a "high" Glen Canyon Dam (which was ultimately the version to be built). While studying the figures, Brower discovered that the difference should be no more than . Although it is unclear whether the discrepancy was due to a miscalculation or intentional manipulation, Brower said "it would be a great mistake [to rely on the Bureau's figures] when they cannot add, subtract, multiply and divide." In the face of public scrutiny, and wishing to avoid more questions about the Colorado River Storage Project as a whole, the Bureau of Reclamation dropped the Echo Park proposal in 1954. Even as construction began on the other dams, the drama of the Echo Park debate had shifted the American public's perception on big government projects and their environmental consequences. Echo Park was considered a victory for the American environmental movement, but it only happened in exchange for a dam upstream at Flaming Gorge, and increasing the size of the proposed dam at Glen Canyon to replace the storage that would have been provided by Echo Park. A common misconception is that the environmentalists were given a choice between damming Echo Park and damming Glen Canyon, but the USBR "had always planned to build a dam at Glen Canyon, regardless of the outcome of the Echo Park debate". Floyd Dominy, commissioner of the Bureau of Reclamation, was a vital figure in pushing the project through Congress and convincing politicians to take a pro-dam stance, and to assuage rising public concerns. Dominy realized that the USBR had considerable political clout in Western states, due to the economic contributions of its water projects. Reisner wrote that "Dominy cultivated Congress as if he were tending prize-winning orchids  ... If some Senator was causing him trouble, money for his project could disappear mighty fast." With the necessary political support secured, the Colorado River Storage Project was authorized in April 1956, and groundbreaking of Glen Canyon Dam began in October of the same year. David Brower visited Glen Canyon shortly after the decision to build the dam, and "realized once he arrived that this was not a place for a reservoir". Glen Canyon's springs, side canyons, and intricately sculpted rock formations were home to such features as Music Temple and Cathedral in the Desert, a giant cave-like natural amphitheater with a waterfall at its center. The Colorado River flowed gently across the bottom of the canyon, in sharp contrast to the roaring rapids upstream in Cataract Canyon and downstream in the Grand Canyon. After his groundbreaking 1869 expedition, John Wesley Powell had named Glen Canyon for its characteristics: "So we have a curious ensemble of wonderful features – carved walls, royal arches, glens, alcove gulches, mounds and monuments. From which of these features shall we select a name? We decide to call it Glen Canyon." In addition to its variegated rock formations, Glen Canyon supported a rich riparian zone habitat on the numerous low river terraces formed by the Colorado River, with as many as 316 bird species, 79 plant species and 34 kinds of mammals. In 1963, when construction on the dam was well underway, the Sierra Club published a book on Glen Canyon, The Place No One Knew, featuring photographs by Eliot Porter, and lamenting the loss of the canyon before most of the American public had a chance to visit, or were even aware of its existence. 
Though little known to most Americans before Porter's book, Glen Canyon had been visited by a handful of hikers and boaters (such as Powell's expedition), some of whom were later interviewed by Brower. As writer Wallace Stegner, who had been to the canyon in 1947, told Brower: "Echo doesn't hold a candle to Glen." Emboldened by Echo Park and desperate to prevent the Grand Canyon from suffering the same fate as Glen, Brower and the Sierra Club directed attention towards the proposed Bridge and Marble dams. The Sierra Club launched an extensive publicity campaign to sway public opinion against the plan; in response to the USBR's argument that new reservoirs would open up the Grand Canyon to recreational boaters as Lake Powell had, a full-page advertisement in the New York Times ran the slogan: "Should we also flood the Sistine Chapel so tourists can float nearer the ceiling?" Faced with public outcry, the Bureau abandoned its Grand Canyon dams in 1968, effectively terminating most of the Pacific Southwest Water Plan. The coal-fired Navajo Generating Station was built near Page to make up for the electric power that was lost with the cancellation of the dam project. The Sierra Club lost its IRS tax-exempt status a day after the advertisement was released, on the grounds of its political lobbying activities. The group's membership more than doubled in the next three years, many of the new members being citizens unhappy with the IRS's apparent overreach.
Construction
Site preparations
As early as 1947, the Bureau of Reclamation had begun investigating two potential sites, both located in the narrow lower reaches of Glen Canyon shortly upstream of Lee's Ferry. The site originally favored by the USBR was just upstream, but the final decision was to build the dam farther upstream because of stronger foundation rock and easier access to gravel deposits on Wahweap Creek. Because the dam site lay in a remote, rugged area of the Colorado Plateau – more than from the closest paved road, U.S. Route 89 – a new road had to be constructed, branching off from US 89 north of Flagstaff, Arizona, and running through the dam site to its terminus at Kanab, Utah. Because of the isolated location, acquiring the land at the dam and reservoir sites was not particularly difficult, but there were a few disputes with ranchers and miners in the area (many of them members of the Navajo Nation). Much of the land acquired for the dam came through an exchange with the Navajo, in which the tribe ceded Manson Mesa south of the dam site for a similar-sized piece of land near Aneth, Utah, which the Navajo had long coveted. In the early stages of construction, the only way to cross Glen Canyon was a suspension footbridge made of chicken wire and metal grates. Vehicles had to make a journey in order to get from one side of the canyon to the other. A road link was urgently needed in order to safely accommodate workers and heavy construction equipment. The contract for building the bridge was awarded to Peter Kiewit Sons and the Judson Pacific Murphy Co. for $4 million; construction began in late 1956 and reached completion on August 11, 1957. When finished, the steel arch Glen Canyon Bridge was itself a marvel of engineering: at long and rising above the river, it was the highest bridge of its kind in the United States and one of the highest in the world. The bridge soon became a major tourist attraction. The March 1959 issue of LIFE reported that "motorists [were] driving miles out of their way just to be thrilled by its dizzying height."
Workers moved to the dam site beginning in the mid-to-late 1950s; the construction camp started out as a haphazardly organized trailer park that grew with the workforce. During the construction of the Glen Canyon Bridge, the USBR also began planning a company town to house the workers. This resulted in the town of Page, Arizona, named for former Reclamation Commissioner John C. Page. By 1959, Page had a host of temporary buildings, electricity, and a small school serving workers' children. As the town grew, it gained additional amenities, including numerous stores, a hospital, and even a jeweler. It was intended to serve a maximum population of eight thousand, accounting for the workers' families; the peak workforce would eventually exceed 2,500 in the busiest phases of construction. The engineer in charge of the project was Lem F. Wylie, who had worked on Hoover Dam and had previously designed six other USBR dams. Prior to and during construction, three separate grants were issued by the National Park Service to document and recover artifacts of historical cultures along the river. These went to University of Utah historian C. Gregory Crampton and anthropologist Jesse Jennings, and to the Museum of Northern Arizona. Crampton subsequently wrote several books and articles on his findings. The Museum of Northern Arizona funded an expedition by William Miller and Helmut Abt, in coordination with the Navajo Nation, to investigate historical artifacts. They discovered a petroglyph in the upper part of the canyon depicting the supernova of 1054, whose remnant is the Crab Nebula.
River diversion
In 1956, work began on the two diversion tunnels that would carry the Colorado River around the dam site during construction. Each of the tunnels was in diameter, with a combined capacity of ; the right-side tunnel was long and the left . The right tunnel would be used to carry the Colorado's normal flow around the dam site, while the left tunnel, above the water, would only be used during floods. The lower reaches of the tunnels would later be used to form the lower ends of the dam's spillways. About of material would have to be excavated from the diversion tunnels. On October 15, 1956, President Dwight D. Eisenhower pressed a button on his desk in Washington, D.C., sending a telegraph signal that set off the first blast of dynamite at the portal of the right diversion tunnel. Drilling the tunnels through the porous Navajo sandstone abutting the dam site posed major problems for the excavation crews of the Mountain States Construction Company, which won the contract for the diversion tunnels in 1956. Transporting workers and equipment to the bottom of the canyon was extremely difficult. Initially, transport was done by barge from Wahweap Creek, but the fast current of the Colorado River could be dangerous. After a barge capsized, spilling tons of machinery into the river, a much safer cable-car system was installed. During excavation, the rock frequently broke apart or "slabbed" and collapsed into the tunnels, and metal bolts had to be drilled into the rock to secure it. The largest such event, on August 5, 1958, sent crashing down onto the upper portal of the left diversion tunnel. Material dug out of the tunnels and the dam abutments on the canyon walls was used to build the two cofferdams that would divert the Colorado River, which were completed in February 1960.
The upper cofferdam was high, and it alone could store several million acre-feet of water to protect the dam site from flooding in the event that inflows exceeded the capacity of the diversion tunnels. On February 11, 1959, the right diversion tunnel was completed and began to carry the flow of the Colorado. The left tunnel was finished over three months later, on May 19, 1959, slightly behind schedule.
Concrete placement and completion
With the Colorado River safely diverted around the canyon, construction could begin on the concrete arch dam itself. The contract was awarded to the Merritt-Chapman & Scott Corporation for an "astoundingly low" $107,955,552, about $30 million less than the USBR's own estimate. Then, right before construction began, about 750 workers went on strike over a wage reduction that followed the completion of public facilities at Page. In December 1959, wages were raised by $4 a day, ending the strike. Concrete placement began on June 16, 1960, at a sluggish but steadily growing pace. In 1962 the workforce topped out at nearly 2,500 employees laboring on the dam. Construction would ultimately claim eighteen lives and injure numerous other workers, but contrary to popular myth, no workers were buried alive in the concrete. Cement needed to make concrete for the dam came from the Phoenix Cement Company plant constructed for the purpose in Clarkdale, south of Flagstaff. A huge concrete plant capable of putting out 1,450 tons per hour was installed, and a pair of cableways with movable towers (with capacities of 50 and 25 tons respectively) spanned the canyon, carrying the concrete buckets to their final destinations on the steadily rising crest of the dam. The concrete was poured into modular high wooden blocks or "forms", the largest measuring up to by ; more than 3,000 of these blocks made up the main structure of the dam. Once the concrete cured, the wooden forms were removed and shifted upwards to accommodate the next load of concrete. As more efficient methods of concrete pouring were introduced, including conveyors and remotely controlled buckets, the workforce gradually decreased. By late 1962, concrete was being poured into the dam at a rate of per day even as the workforce was scaled down to about 1,500. At the beginning of 1963, the dam was high enough to begin impounding water; huge steel gates were closed over the right diversion tunnel on January 21, and Lake Powell began to rise. A minimal flow of was allowed through the dam, to prevent the Colorado River from drying up completely. On that day, David Brower confronted President John F. Kennedy in a last-ditch effort to delay Glen Canyon's inundation. Brower later said of that exchange: "On January 21, 1963, the last day on which the execution of one of the planet's greatest scenic antiquities could yet have been spared, the man who theoretically had the power to save the place did not. I was within a few feet of his desk in Washington that day and witnessed how the forces long at work had their way. So a steel gate dropped, choking off the flow of the canyon's carotid artery, and from that moment the canyon's life force ebbed quickly. A huge reservoir, absolutely not needed in this century, almost certainly not needed in the next, and conceivably never to be needed at all, began to fill." Construction continued, and on September 13, 1963, the dam was topped out. Work on the power plant and spillways began directly after the dam wall was completed.
The spillway tunnels were excavated around both abutments of the dam, dropping steeply from their control gates on Lake Powell to merge with the lower ends of the diversion tunnels. This measure saved on construction costs, but introduced a weak point where the two sets of tunnels intersected. The upper ends of the diversion tunnels were then sealed with solid concrete. The first electricity was generated on September 4, 1964, with the power sent into the regional electric grid through a pair of long-distance transmission lines reaching as far as Phoenix, Arizona, and Farmington, New Mexico. It took two more years to complete all remaining aspects of the project. On September 22, 1966, Lady Bird Johnson gave the official dedication speech for Glen Canyon Dam before a crowd of 3,000 people.
Filling Lake Powell
Because Lake Powell's capacity equals almost two years' flow of the Colorado River, engineers knew it would be difficult to fill, but more problems were encountered than expected. The original plan was to fill Lake Powell to above sea level – the minimum level necessary to generate hydroelectric power – by late 1964, after which water would be released down to Lake Mead, with only the excess stored in Lake Powell. The spring runoff in 1963 was the lowest in ten years. By the beginning of 1964, Lake Powell had barely reached half the target level, and Lake Mead had seen a sharp decline. In March, Secretary of the Interior Stewart Udall ordered the filling halted and extra releases made to Lake Mead, to the consternation of the Upper Basin states. In May, Udall reversed course and lowered the releases, gambling that the spring runoff would be enough to raise Powell to minimum power pool by autumn, by which time power releases could begin, before Lake Mead fell below its own minimum power pool. The gamble paid off, with Lake Powell barely inching over the mark on August 16, 1964. It took more than 17 years for Lake Powell to finally reach its full elevation of above sea level, which it crossed on June 22, 1980. One of the main reasons for this slow rise, in addition to the need to meet obligations to the Lower Basin, was the leakage of vast amounts of water into the porous Navajo Sandstone aquifer. Between 1963 and 1969, as much as leaked into the reservoir banks each year. Conversely, some of this "bank storage" flows back into the reservoir as springs and seeps when Lake Powell is low. Exactly how much of this water has the potential to return to the reservoir, and how much "disappears" into the ground, is subject to debate. The Bureau of Reclamation projected that once Lake Powell filled, the total bank storage would stabilize at approximately , and henceforth would fluctuate depending on water levels in the reservoir. The actual loss was , twice the initial prediction, but river flow data indicate that further leakage after 1980 has been negligible. According to a 2013 study by hydrologist Thomas Myers for the Glen Canyon Institute, the reservoir continues to lose about each year due to leakage. According to USBR data for water year 2015 (a year when Lake Powell did not experience a significant overall gain or loss in volume), Lake Powell lost a total of to evaporation and only to leakage.
Later history
The 1983 floods
During the El Niño winter of 1982–1983, the Bureau of Reclamation predicted average runoff for the Colorado River basin based on snowpack measurements in the Rocky Mountains.
Snowfall during April and May was exceptionally heavy; this combined with a sudden rise in temperatures and unusual rainstorms in June to produce major flooding across the western United States. With Lake Powell nearly full, the USBR did not have enough time to draw down the reservoir to accommodate the extra runoff. By mid-June, water was pouring into Lake Powell at over . Even with the power plant and river outlet works running at full capacity, Lake Powell continued to rise to the point where the spillways had to be opened. Other than a brief test in 1980, this was the only time the spillways had ever been used. At the beginning of June, dam operators opened the gates on the left spillway, sending , less than one-tenth of capacity, down the tunnel into the river below. After a few days, the entire dam suddenly began to shake violently. The spillway was closed down for inspections, and workers discovered that the flow of water was causing cavitation – the explosive collapse of vacuum pockets in water moving at high speed – which was damaging the concrete lining and eroding the rock of the spillway tunnels near their junctions with the upper ends of the diversion tunnels, which connect to the bottom of the reservoir. This rock was rapidly being destroyed by the cavitation, and it was feared that a connection would be opened to the bottom of Lake Powell, compromising the dam's foundation and causing the dam to fail. Meanwhile, snow continued to melt in the Rockies and Lake Powell continued to rise rapidly. To delay having to use the spillways, the USBR installed plywood flashboards (later replaced by steel) atop the spillway gates, allowing the lake to rise higher without spilling. Even this additional capacity was exhausted; discharges through the left spillway reached , and the right spillway was opened to . At Lee's Ferry, the Colorado River peaked at , which remains the highest flow recorded there since the dam was built. On July 14, Lake Powell reached elevation, a level that has not been exceeded since. Just as it seemed inevitable that the dam would fail, inflows fell and the dam was saved. Upon inspection, it was found that cavitation had caused massive gouging damage to both spillways, carrying away thousands of tons of concrete, steel rebar and huge chunks of rock. Repairs to the spillways commenced as soon as possible and continued well into 1984. Air slots were installed at the bottom of each spillway to break up and absorb the shock of the bubbles formed by cavitation. In 1984, the Colorado River basin produced even more runoff than in 1983, peaking at in early June. This time, the USBR had drawn down the reservoir enough that it absorbed most of the early high flows. Nevertheless, Lake Powell rapidly approached the top of the spillway gates, and repair efforts were subsequently focused on the left spillway in order to get it into operation in time. On August 12, the left spillway gates were opened, releasing water at a rate of . The spillway was undamaged, proving the worth of the re-engineering and suggesting that Glen Canyon Dam will be able to hold against future floods of the same magnitude as 1983.
Continuing debates
Long after Glen Canyon Dam was built and continuing to the present day, controversy has persisted between supporters of dam removal and those who believe it should be left in place. One of the earliest debates regarding the dam concerned its impact on Rainbow Bridge National Monument, whose natural arch is the highest in North America and a sacred site to the Navajo people.
The environmental lobby wanted the Bureau of Reclamation to keep Lake Powell at or below a level of , to prevent it from intruding into the monument. The Bureau of Reclamation proposed to build a barrier dam and pump system in order to keep water out of the monument. Given the potential damage such construction would cause to the remote environment, critics argued that "the cure would be far worse than the disease." The proposal was fought over and litigated for years until it was permanently shelved in 1973. Glen Canyon Dam became the subject of influential literature, including Edward Abbey's novel The Monkey Wrench Gang (1975), which tells the story of a fictional group of environmentalists fighting against industrial developers in the American Southwest, their ultimate target being Glen Canyon Dam. The novel gained a cult following after its publication and established Glen Canyon Dam as a poster child of the environmental destruction caused by dams. Abbey's book is discussed in Ecospeak: Rhetoric and Environmental Politics in America (1992) by Jimmie Killingsworth and Jacqueline Palmer, who write that Glen Canyon Dam became "the big symbol of all that blocked freedom in the interests of civilized progress." On March 21, 1981, the radical environmental group Earth First! staged an anti-dam protest by unfurling a tapered black sheet of plastic down the face of the dam, making it appear as if a gigantic crack had appeared in the structure – a direct re-enactment of a scene from Abbey's book. Authorities were unable to find the individuals responsible. In his comprehensive history of western water development, Cadillac Desert (1986), Marc Reisner criticized the political forces that resulted in Glen Canyon and hundreds of other dams being built in the 1960s and 1970s. Many of these projects had dubious economic justifications and hidden environmental costs, but the government agencies that built them – namely the Bureau of Reclamation and the U.S. Army Corps of Engineers – were more interested in maintaining their size and influence. Reisner writes that "in the West, it is said, water flows uphill towards money." In a 2011 interview, Floyd Dominy, the Reclamation commissioner who had spearheaded the Colorado River Storage Project, maintained the USBR's stance on the benefits of the dam project. Although Lake Powell loses water to evaporation and leakage, it continues to serve an important function by capturing runoff during wet years as "insurance" against droughts. During the 2000–2004 Colorado River drought, when the basin experienced its lowest five-year runoff on record, Lake Mead would likely have gone dry and the Lower Basin would have experienced massive cuts, were it not for releases from Lake Powell. Lake Powell and Lake Mead are currently operated under an "equalization" policy that governs releases from Glen Canyon Dam. In order to maintain hydropower generation at both Glen Canyon and Hoover Dams, the lakes must be kept at approximately the same level; spreading the water across two reservoirs, however, greatly increases evaporation. Since the year 2000, Lake Mead has steadily declined toward the critical level at which a shortage would be declared for the Lower Basin states. A plan called "Fill Mead First", which would drain Lake Powell in order to refill Lake Mead, has gained traction in recent years. Glen Canyon Dam would remain in place (as total removal of the structure would be prohibitively expensive), but would only store water in wet seasons when runoff exceeds the capacity of Lake Mead to hold it.
Much of the opposition to this plan falls along political lines: Lake Powell is legally considered the Upper Basin's water, and Lake Mead belongs to the Lower Basin. The Friends of Lake Powell have called the plan an attempt to steal water from the Upper Basin in order to avoid a shortage in the Lower Basin. They note that the Upper Basin has released 107% of its obligation from Lake Powell since 2000, and argue that falling levels in Lake Mead are therefore the result of water overuse and waste in the Lower Basin states – a "structural deficit". There are also arguments for storing water in Powell: Lake Mead, with its much lower elevation and hotter climate, has a considerably greater evaporation rate than Lake Powell. In addition, a 1983 study by Larry J. Paulson of the University of Nevada showed that the cold water discharged from Glen Canyon Dam has led to a significant reduction of the water temperature, and thus evaporation, of Lake Mead.
Design
Dam and spillways
Glen Canyon's overall design was based on that of Hoover Dam – a massive concrete arch-gravity structure anchored in solid bedrock – with several significant changes. The engineers wanted the dam to rely predominantly on its arch shape to carry the tremendous pressure of the impounded water into the canyon walls, instead of depending on the sheer weight of the structure to hold the reservoir back, as had been done at Hoover. The foundation rock at Glen Canyon consists of porous sandstone prone to spalling, in contrast to the stronger granite at the Hoover Dam site. This forced the Glen Canyon design to follow more conservative lines, with greatly thickened abutments that increase the surface area through which the weight of the dam and reservoir is transmitted to the rock, relieving the pressure on the fragile cliffs. The Glen Canyon Dam is high from the foundations and stands above the Colorado River. The crest of the dam is long and wide, while the maximum thickness of the base is . The elevation at the crest is , and the elevation of the Colorado River below the dam is . In total, the dam contains of concrete and of reinforcing steel. The hydroelectric power station and river outlet works are located at the foot of the dam. The outlet works consist of four diameter pipes, each controlled by a ring gate and a hollow-jet valve. The discharge capacity of the river outlet works is . The two spillway tunnels are excavated through the canyon walls on each side of the dam. Twin radial gates, each wide and high, control the flow of water into the spillways. Together, the spillways can pass up to . The tunnels required of excavation and another of concrete lining. The circular, concrete-lined spillway tunnels plunge at a 55-degree angle, reducing in diameter from , until they intersect with the old river diversion tunnels at sharp elbow joints before returning to the Colorado River. This was done as a cost-saving measure, but it resulted in the destruction of both spillways during the 1983 flood releases. The repairs, in which air slots were installed to prevent cavitation shock waves, cost about $15 million.
Water storage and distribution
With a capacity of , Lake Powell is the second-largest man-made lake in the United States by total water capacity (behind only Lake Mead), extending upstream through the canyons of Arizona and Utah. The lake covers at its full pool elevation of . The active, or useful, capacity is .
The minimum water level required for power generation is , corresponding to storage of , and the "dead pool", the lowest point at which water can be released through the dam, is , with storage of . When Glen Canyon Dam was first built, the reservoir capacity was estimated at , but some of this has since been lost to siltation. Because of the hundreds of bays and sinuous side canyons, including those formed by the San Juan, Escalante and Dirty Devil Rivers, Lake Powell has an exceptionally long shoreline for a lake of its size – about at full pool, longer than the entire west coast of the continental United States. Glen Canyon Dam's most vital purpose is to provide storage to ensure that enough water flows from the Upper Colorado River Basin to the Lower Basin, especially in drought years. The 1922 Colorado River Compact requires annual delivery of to the Lower Basin states of Arizona, California and Nevada; the 1944 treaty with Mexico obligates the U.S. to allow at least for use in the Mexican states of Baja California and Sonora. Glen Canyon Dam must supply at least of this water; the remainder comes from other tributaries of the Colorado River. The required release from Glen Canyon is averaged over a 10-year period, so releases in any given year may be higher or lower depending on the amount of runoff. In wetter years, the Bureau of Reclamation may decide to release extra water from Glen Canyon Dam if the level of Lake Powell exceeds the "equalization tier", an elevation determined by the difference in storage between Lake Powell and Lake Mead. Most of Lake Powell's inflow originates as summer snowmelt from the Rocky Mountains of Colorado, Utah and Wyoming. Releases are planned over a water year running from October 1 to September 30, since the annual snowpack begins to accumulate in late autumn. On April 1 of each year, the Bureau of Reclamation issues its official forecast of the April–July (snowmelt season) runoff, and adjusts releases from Glen Canyon Dam accordingly to maintain Lake Powell at a safe level. An accurate forecast is vital to prevent uncontrolled spilling, which would waste water that could otherwise have been used for power generation. Although the snowpack typically reaches its peak and begins to melt in April, the picture can occasionally change unexpectedly and dramatically – either because of a hot and dry spring that evaporates snow before it can melt, or because of an extremely wet spring such as occurred in May 1983. Since the near disaster in 1983, the USBR has maintained a minimum of of flood-storage space in Lake Powell at the beginning of each year, to guard against unanticipated high runoff.
21st century drought
Colorado River flows have been below average since 2000 as a result of the southwestern North American megadrought, leading to lower lake levels. In winter 2005 (before the spring run-off), the lake reached its then-lowest level since filling, an elevation of above sea level, approximately below full pool. After 2005, the lake level slowly rebounded, although it has not filled completely since. Summer 2011 saw the third-largest June and second-largest July runoff since the closure of Glen Canyon Dam, and the water level peaked at nearly , 77 percent of capacity, on July 30. Water years 2012 and 2013 were, respectively, the third- and fourth-lowest runoff years recorded on the Colorado River. By April 9, 2014, the lake level had fallen to , largely erasing the gains made in 2011.
Colorado River levels returned to normal during water years 2014 and 2015, pushing the lake to by the end of water year 2015. In 2014, the Bureau of Reclamation reduced the Lake Powell release from 8.23 to 7.48 million acre-feet, the first such reduction since the lake filled in 1980. This was done because of the "equalization" guideline, which stipulates that an approximately equal amount of water must be retained in both Lake Powell and Lake Mead in order to preserve hydropower generation capacity at both lakes. This resulted in Lake Mead declining to its lowest level since the 1930s. The long-term decline in water levels continued, forcing an emergency release of water from Flaming Gorge Reservoir in July 2021, and by April 22, 2022, Lake Powell was at in elevation – just of capacity. This marked the lowest water level for Lake Powell since it was first filled in 1963. Peer-reviewed studies indicate that storing water in Lake Mead rather than in Lake Powell would yield savings of 300,000 acre-feet of water or more per year, leading to calls by environmentalists to drain Lake Powell and restore Glen Canyon to its natural, free-flowing state.
Power generation
The other principal purpose of Glen Canyon Dam is hydroelectricity generation. It is the second-biggest producer of hydroelectric power in the southwestern United States, after Hoover Dam. Revenues derived from power sales were integral to paying off the bonds used to build the dam, and have also been used to fund other Bureau of Reclamation projects, including environmental restoration programs in the Grand Canyon and elsewhere along the Colorado River. For this reason, it has long been known as a "cash register" dam. The dam also serves as a primary peaking power plant and black start power source for the Southwest electrical grid. The power plant has a total capacity of 1,320 megawatts from eight 165,000-kilowatt generators. Each generator is driven by a 254,000-horsepower vertical-axis Francis turbine. The gross hydraulic head is . The units were installed between September 1964 and February 1966 at an original rating of 950 megawatts; an upgrade project between 1985 and 1997 brought the plant to its present capacity. Because of fluctuating demands on the electrical grid, the dam's release into the Colorado River rises and falls dramatically on a daily basis. After the dam was completed in 1964, there were few restrictions on hydropower generation. The minimum dam release was set at a meager (increased to during the summer whitewater rafting season), with a maximum of during peak times; to respond to changing power demands, river flows could double or even triple in the space of an hour. This caused severe erosion of the Colorado River's banks downstream, damaging habitat for native fish and creating danger for boaters, who could become stranded whenever the river flow dropped too quickly. In 1990, temporary restrictions were placed on dam operations pending the release of a final environmental impact statement (EIS). The EIS, completed March 21, 1995, cemented some restrictions on dam operations, limiting the maximum power release to , the maximum hourly "ramp-up" (increase in river flow) to , and the maximum "ramp-down" to . The minimum dam release was set to during the day and at night. Flood control releases are allowed to go higher, but must remain constant for the entire month.
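The dependence of generating output on both the release rate and the reservoir level can be illustrated with the standard hydropower relation. The following is a rough sketch only; the efficiency, flow, and head values are illustrative assumptions, not figures from Reclamation or from this article:

P = \eta \, \rho \, g \, Q \, H

where \eta is the combined turbine-generator efficiency, \rho the density of water, g gravitational acceleration, Q the flow through the turbines, and H the hydraulic head. With assumed values \eta = 0.85, \rho = 1000 \,\mathrm{kg/m^3}, g = 9.81 \,\mathrm{m/s^2}, Q = 400 \,\mathrm{m^3/s}, and H = 150 \,\mathrm{m}:

P \approx 0.85 \times 1000 \times 9.81 \times 400 \times 150 \approx 5.0 \times 10^8 \,\mathrm{W} = 500 \,\mathrm{MW}

Under these assumptions, power scales directly with the release Q, which is why the ramp-rate limits above constrain generation, and with the head H, which is why output also falls when the reservoir is low.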
Because these criteria limit the flexibility of Glen Canyon Dam to meet grid demands, economic losses for the period 1997–2005 were estimated at $38 million to $58 million per year. Between 1980 and 2013, Glen Canyon Dam generated an average of 4,717 gigawatt-hours (GWh) per year, enough for about 400,000 homes. The highest output was 8,703 GWh in 1984, and the lowest was 3,299 GWh in 2005. Power generation is affected not only by the volume of water passing through the dam, but also by the depth of water in the reservoir, as a higher water level means more pressure (head) on the turbines. Hydropower generated at Glen Canyon serves about 5 million people in Arizona, Colorado, Nevada, New Mexico, Utah and Wyoming, and is sold to utilities in these states under 20-year contracts. Power sales have been managed by the Western Area Power Administration since 1977. Glen Canyon Dam generates enough power to offset 6.7 billion pounds (3 billion kg) of carbon dioxide emissions each year. Drought conditions in the 21st century have reduced the amount of hydropower available from Glen Canyon Dam. An unusual feature of the Glen Canyon power plant is the Kentucky bluegrass lawn occupying the crescent between the dam and the hydroelectric plant. At the time of construction in 1964, the steel penstocks feeding water to the power plant were exposed, and they experienced severe vibration when in use. Engineers decided to bury them in soil to act as a buffer against the potentially damaging vibrations. The grass was later planted to prevent the dirt from blowing away, and it also provides a mild cooling effect through evapotranspiration, reducing temperatures inside the power plant.
Environmental issues
Because of its tremendous ecological effect on the Colorado River, Glen Canyon Dam has been subject to decades of criticism from the environmental movement. Located in a high desert climate amid porous geology, Lake Powell suffers huge evaporation and seepage losses. The Glen Canyon Institute estimates that is lost from the reservoir in an average year. This amounts to 6 percent of the Colorado River's flow, an increasingly valuable amount of water in an arid land for both humans and the animals and plants that live along the river. (This amount decreases greatly when Lake Powell is low; with the reservoir about half full in water year 2015, evaporation was .) Like all dams, Glen Canyon traps sediment (silt), but because the Colorado is an especially muddy river, the dam has had even more visible consequences for the river within the Grand Canyon. About 100 million US tons (90,700,000 metric tons) of sediment are trapped behind the dam annually, equal to about 30,000 dump truck loads per day. Because of the dam, sediment deposited by the Colorado and its tributaries is slowly filling up the canyon, and projections put the useful life of the reservoir at 300 to 700 years. If no action is taken, such as dredging or sediment sluicing, sediment deposits will in a few hundred years begin to build up at the foot of the dam and gradually block the various outlets, reducing the dam's capacity to store and release water. It would then become more difficult to maintain the required release of below the dam. The Colorado River would be reduced to a trickle in dry seasons, as it naturally was before the dam was built, potentially compromising the water supply of the Lower Basin states.
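As a rough consistency check of the two sediment figures quoted above (assuming, purely for illustration, a typical single-unit dump truck as the unit of comparison):

\frac{100 \times 10^6 \ \text{tons/yr}}{365 \ \text{days/yr}} \approx 274{,}000 \ \text{tons/day}, \qquad \frac{274{,}000 \ \text{tons/day}}{30{,}000 \ \text{loads/day}} \approx 9 \ \text{tons/load}

so the quoted rate corresponds to each truck hauling roughly nine US tons, in line with a common single-unit dump truck payload.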
The Colorado through the Grand Canyon now lacks the sediment supply it needs to build sandbars and islands, and these natural fluvial formations within the canyon have suffered severe damage from erosion. The floods that once scoured the river each year are now contained behind the dam except in extraordinary cases such as 1983–84; the lack of floods has promoted vegetation encroachment, which has not only considerably changed the riparian zone environment but has also created problems for tourism, as hikers and boaters often cannot find good spots to camp because of the overgrowth. Flood control has also left the river unable to carry away the rockslides that are common along the canyons, leading to the creation of increasingly dangerous rapids that pose a hazard to fish and boaters alike. Before damming, the Colorado commonly reached flows of more than during the spring; this has been limited to less than in most years, with few exceptions. Before the dam was built, Colorado River temperatures ranged from over in the heat of summer to just above freezing in winter. Today, water released by Glen Canyon is a consistent throughout the year because of a thermal mass effect in Lake Powell: the water released through the penstocks, typically drawn from hundreds of feet below the lake surface, is insulated from temperature fluctuations by the thick layer of water above it. Nikolai Ramsey of the Grand Canyon Trust describes the clearer, colder river as a "death zone for native fish", such as the endemic Colorado pikeminnow and humpback chub, which are adapted to survive in warm, silty water. According to biologist and river guide Michael P. Ghiglieri, many drowning deaths among boaters in the Grand Canyon have been caused or exacerbated by rapid hypothermia and hypothermic shock brought on by entering the cold water. He noted that during the record post-dam high-flow season of 1983 (mentioned above), there was only one boating fatality in the canyon, a strong challenge to the view that the dam, by reducing and moderating river flows, increases the safety of canyon river users. The river water temperature in 1983 was significantly higher than normal, because a large portion of the water had come from overflows of warmer surface water through the spillways of Glen Canyon Dam, rather than from the colder lower levels that feed the penstocks. Glen Canyon Dam has also affected the Colorado River well downstream of the Grand Canyon. When the gates of the dam were closed in 1963, the resulting reductions in river flow effectively dried up the Colorado River Delta, the large estuary formed by the Colorado River at the Gulf of California (Sea of Cortez) in Mexico. Prior to the completion of Glen Canyon Dam, about reached the delta each year, despite heavy water use in California and Arizona. Because Glen Canyon Dam made possible increased utilization of water from the Colorado River system, not enough water is left to flow to the delta in a normal year, and about of ecologically productive wetlands have disappeared. In 2014, an intentional "pulse flow" was released into the delta to restore some of these wetlands; the viability of such flows has been controversial, considering the already high demand for Colorado River water.
Restoration efforts
On March 26, 1996, the penstocks and two of the outlet works' bypass tubes at Glen Canyon Dam were opened to maximum capacity, causing a flood of to move down the Colorado River.
This was the first of the Glen Canyon Adaptive Management Program's "high flow experiments", a controlled effort to assist the recovery of the damaged riverine ecosystem by mimicking the floods that once swept through the canyons each spring. The flow appeared to have scoured clean numerous pockets of encroaching vegetation, carried away rockslides that had become dangerous to boaters, and rearranged sand and gravel bars along the river, and it was initially believed to be an environmental success. However, in the following months it was discovered that the initial results were misleading. Crews working in the Grand Canyon after the 1996 experiment found that the offending vegetation had not been carried away as previously thought – only buried – and had mostly recovered within six months. The surface area of sandbars had been increased, but much of the material had been eroded from the submerged portions of the bars and deposited on top, making them unstable, rather than being scoured from the riverbed as hoped. Subsequent releases in 2004, 2008, 2012, and 2014 were timed to take advantage of summer monsoon storms and redistribute sediment carried into the Grand Canyon by the Paria and Little Colorado Rivers. The high-flow experiments do not change the total amount of water released from Lake Powell on an annual basis, but as a consequence hydroelectric power releases during the rest of the year must be reduced. Some organizations, such as Living Rivers, continue to believe that the dam has too large and severe an effect on the river's ecology for restoration efforts to be worthwhile.
Recreation
According to the U.S. National Park Service, Lake Powell is "widely recognized by boating enthusiasts as one of the premier water-based recreation destinations in the world." Despite its remote location, the Glen Canyon National Recreation Area, which surrounds the reservoir, receives more than three million visitors annually. Activities include boating, fishing, waterskiing, jet-skiing, swimming and hiking. Prepared campgrounds can be found at each marina, but many visitors choose to rent a houseboat or bring their own camping equipment, find a private spot somewhere in the canyons, and make their own camp (there are no restrictions on where visitors can stay). About 85,000 people per year travel by boat to Rainbow Bridge in Utah, a large natural arch once very hard to access, but now easily reachable because one of the arms of the reservoir extends near it. Because most of the lake is surrounded by steep sandstone walls, access is limited to developed marinas. The heavily used Wahweap and Antelope Point Marinas are located in Arizona, close to Page. Two other marinas, at Halls Crossing and Bullfrog, are located further upstream in Utah. The Hite Marina, located at the upper end of the reservoir near the Hite Crossing Bridge, is now disused because the water level is usually too low for boats to launch there. Other facilities at Dangling Rope and Rainbow Bridge are accessible only by boat. Aside from the bridges at either end of the lake, a car-and-passenger ferry between Halls Crossing and Bullfrog is the only way for vehicles to cross Lake Powell. More than 500,000 people tour the Carl Hayden Visitor Center at Glen Canyon Dam each year. The Bureau of Reclamation provides guided tours of the dam; stringent security measures have been in place since the September 11 attacks. The base of the dam can also be reached by boat from Lee's Ferry.
Because of the cold, clear water released from Lake Powell, the stretch of the Colorado River between Glen Canyon Dam and Lee's Ferry has become an excellent rainbow trout fishery. Trout are not native to the Colorado River system; they were stocked in the river below Glen Canyon Dam after the dam was built. Other non-native fish such as smallmouth bass, striped bass, largemouth bass and black crappie were planted in Lake Powell to provide sport fishing opportunities. Like many U.S. lakes and reservoirs, Lake Powell has an ongoing problem with zebra and quagga mussels, invasive bivalve species originating in eastern Europe. Mussels are most commonly transferred from lake to lake attached to the hulls of boats or inside their bilge areas. Lake users are required by law to clean, drain and dry their vessels both before and after a trip to Lake Powell. Mussel infestations tend to clog the hydroelectric intakes at Glen Canyon Dam, as well as the propellers and exhaust pipes of boats, requiring expensive decontamination. Their impact on the lake's ecology appears to be low, or even beneficial, as they provide a food source for fish.
See also
List of tallest dams
List of dams and reservoirs in the United States
List of the tallest dams in the United States
List of largest reservoirs in the United States
List of dams in the Colorado River system
Black Mesa and Lake Powell Railroad
Colorado Plateau
Katie Lee, an activist against the dam
Navajo Generating Station
External links
Glen Canyon Dam Overlook
1995 Glen Canyon EIS
Glen Canyon Before Flooding – 1962
Glen Canyon Institute
Challenge at Glen Canyon – USBR film about the 1983 floods
Glen Canyon National Recreation Area
Glen Canyon Natural History Association
Geologic Map of the Glen Canyon Dam, 30ʹ x 60ʹ Quadrangle, Coconino County, Northern Arizona
Historical Physical and Chemical Data for Water in Lake Powell and from Glen Canyon Dam Releases, Utah-Arizona, 1964–2013
Lake Powell daily water levels
Glen Canyon Dam
[ "Engineering" ]
11,536
[ "Colorado River Storage Project", "Lake Powell" ]
344,859
https://en.wikipedia.org/wiki/New%20media
New media are communication technologies that enable or enhance interaction between users, as well as interaction between users and content. In the mid-1990s, the phrase "new media" became widely used as part of a sales pitch for the influx of interactive CD-ROMs for entertainment and education. New media technologies, sometimes known as Web 2.0, include a wide range of web-related communication tools such as blogs, wikis, online social networking, virtual worlds, and other social media platforms. The phrase "new media" refers to computational media that share material online and through computers. New media inspire new ways of thinking about older media. Media do not replace one another in a clear, linear succession, instead evolving in a more complicated network of interconnected feedback loops. What is different about new media is how they specifically refashion traditional media, and how older media refashion themselves to meet the challenges of new media. Unless they contain technologies that enable digital generative or interactive processes, broadcast television programs, feature films, magazines, and books are not considered to be new media.
History
In the 1950s, connections between computing and radical art began to grow stronger. It was not until the 1980s that Alan Kay and his co-workers at Xerox PARC began to give the computing power of a personal computer to the individual, rather than having a big organization be in charge of it. In the late 1980s and early 1990s, however, we seem to witness a different kind of parallel relationship between social changes and computer design. Although causally unrelated, conceptually it makes sense that the Cold War and the design of the Web took place at exactly the same time. Writers and philosophers such as Marshall McLuhan were instrumental in the development of media theory during this period. McLuhan's now-famous declaration in Understanding Media: The Extensions of Man that "the medium is the message" drew attention to the too-often-ignored influence that media and technology themselves, rather than their "content," have on humans' experience of the world and on society broadly. Until the 1980s, media relied primarily upon print and analog broadcast models such as television and radio. The last twenty-five years have seen the rapid transformation into media which are predicated upon the use of digital technologies such as the Internet and video games. However, these examples are only a small representation of new media. The use of digital computers has transformed the remaining "old" media, as suggested by the advent of digital television and online publications. Even traditional media forms such as the printing press have been transformed through the application of technologies such as image manipulation software like Adobe Photoshop and desktop publishing tools. Andrew L. Shapiro argues that the "emergence of new, digital technologies signals a potentially radical shift of who is in control of information, experience and resources". W. Russell Neuman suggests that whilst the "new media" have technical capabilities to pull in one direction, economic and social forces pull back in the opposite direction. According to Neuman, "We are witnessing the evolution of a universal interconnected network of audio, video, and electronic text communications that will blur the distinction between interpersonal and mass communication; and between public and private communication".
Neuman argues that new media will:
Alter the meaning of geographic distance.
Allow for a huge increase in the volume of communication.
Provide the possibility of increasing the speed of communication.
Provide opportunities for interactive communication.
Allow forms of communication that were previously separate to overlap and interconnect.
Consequently, it has been the contention of scholars such as Douglas Kellner and James Bohman that new media, and particularly the Internet, will provide the potential for a democratic postmodern public sphere, in which citizens can participate in well-informed, non-hierarchical debate pertaining to their social structures. Contradicting these positive appraisals of the potential social impacts of new media are scholars such as Edward S. Herman and Robert McChesney, who have suggested that the transition to new media has seen a handful of powerful transnational telecommunications corporations achieve a level of global influence which was hitherto unimaginable. Scholars have highlighted both the positive and negative potential and actual implications of new media technologies, suggesting that some of the early work in new media studies was guilty of technological determinism, whereby the effects of media were attributed to the technology itself, rather than traced through the complex social networks which governed the development, funding, implementation and future development of any technology. Based on the argument that people have a limited amount of time to spend on the consumption of different media, displacement theory argues that the viewership or readership of one particular outlet leads to a reduction in the amount of time spent by the individual on another. The introduction of new media, such as the internet, therefore reduces the amount of time individuals spend on existing "old" media, which could ultimately lead to the end of such traditional media.
Definition
Although there are several ways that new media may be described, Lev Manovich, in an introduction to The New Media Reader, defines new media by using eight propositions:
New media versus cyberculture: Cyberculture is the various social phenomena that are associated with the Internet and network communications (blogs, online multi-player gaming), whereas new media is concerned more with cultural objects and paradigms (digital to analog television, smartphones).
New media as computer technology used as a distribution platform: New media are the cultural objects which use digital computer technology for distribution and exhibition, e.g. (at least for now) the Internet, Web sites, computer multimedia, Blu-ray disks, etc. The problem with this definition is that it must be revised every few years; the term "new media" will not be "new" anymore once most forms of culture are distributed through computers.
New media as digital data controlled by software: The language of new media is based on the assumption that all cultural objects that rely on digital representation and computer-based delivery share a number of common qualities. New media is reduced to digital data that can be manipulated by software like any other data. Media operations can now create several versions of the same object. An example is an image stored as matrix data, which can be manipulated and altered according to the algorithms implemented, such as color inversion, gray-scaling, sharpening, rasterizing, etc.
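A minimal sketch can make the "image as matrix data" point concrete. The following Python fragment uses NumPy (an assumed toolchain; neither Manovich nor this article names one) to treat an RGB image as a numeric array and apply two of the operations listed above, color inversion and gray-scaling:

import numpy as np

# A tiny stand-in "image": height x width x RGB channels, 8-bit values.
image = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)

# Color inversion: every channel value v becomes 255 - v.
inverted = 255 - image

# Gray-scaling: weighted sum of the R, G and B channels
# (ITU-R BT.601 luma weights), collapsing each pixel to one intensity.
weights = np.array([0.299, 0.587, 0.114])
grayscale = (image @ weights).astype(np.uint8)

print(inverted.shape)   # (4, 4, 3) - still an RGB matrix
print(grayscale.shape)  # (4, 4)    - one intensity value per pixel

Both results are simply new matrices derived from the original by arithmetic, which is the sense in which new media objects are "digital data controlled by software".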
New media as the mix between existing cultural conventions and the conventions of software: New media today can be understood as the mix between older cultural conventions for data representation, access, and manipulation and newer conventions of data representation, access, and manipulation. The "old" data are representations of visual reality and human experience, and the "new" data are numerical data. The computer is kept out of the key "creative" decisions and is delegated to the position of a technician. For example, in film, software is used in some areas of production, while in others scenes are created using computer animation.
New media as the aesthetics that accompanies the early stage of every new modern media and communication technology: While ideological tropes indeed seem to reappear rather regularly, many aesthetic strategies may reappear two or three times ... In order for this approach to be truly useful it would be insufficient to simply name the strategies and tropes and to record the moments of their appearance; instead, we would have to develop a much more comprehensive analysis which would correlate the history of technology with the social, political, and economic histories of the modern period.
New media as faster execution of algorithms previously executed manually or through other technologies: Computers are a huge speed-up of what were previously manual techniques, e.g. calculators. Dramatically speeding up the execution makes possible previously non-existent representational techniques. It also makes possible many new forms of media art, such as interactive multimedia and video games. While on one level a modern digital computer is just a faster calculator, we should not ignore its other identity: that of a cybernetic control device.
New media as the encoding of modernist avant-garde; new media as metamedia: Manovich declares that the 1920s are more relevant to new media than any other time period. Metamedia coincides with postmodernism in that they both rework old work rather than create new work. New media avant-garde is about new ways of accessing and manipulating information (e.g. hypermedia, databases, search engines, etc.). Metamedia is an example of how quantity can change into quality, as new media technology and manipulation techniques can recode modernist aesthetics into a very different postmodern aesthetics.
New media as parallel articulation of similar ideas in post–World War II art and modern computing: Post-WWII art, or "combinatorics", involves creating images by systematically changing a single parameter. This leads to the creation of remarkably similar images and spatial structures. It illustrates that algorithms, an essential part of new media, do not depend on technology but can be executed by humans.
Globalization
The rise of new media has increased communication between people all over the world through the Internet. It has allowed people to express themselves through blogs, websites, videos, pictures, and other user-generated media. Terry Flew stated that as new technologies develop, the world becomes more globalized. Globalization is more than the expansion of activities throughout the world; it allows the world to be connected regardless of the distance between users, a development Frances Cairncross describes as the "death of distance". New media has made forming friendships in digital social spaces more prominent than forming them in physical places.
Globalization is generally stated as "more than expansion of activities beyond the boundaries of particular nation states". New media "radically break the connection between physical place and social place, making physical location much less significant for our social relationships". However, the changes in the new media environment create a series of tensions in the concept of the "public sphere". According to Ingrid Volkmer, the "public sphere" is defined as a process through which public communication becomes restructured and partly disembedded from national political and cultural institutions. This trend of the globalized public sphere is not only a geographical expansion from the nation to the world, but also a change in the relationship between the public, the media and the state. "Virtual communities" are being established online and transcend geographical boundaries, eliminating social restrictions. Howard Rheingold describes these globalized societies as self-defined networks which resemble what we do in real life: "People in virtual communities use words on screens to exchange pleasantries and argue, engage in intellectual discourse, conduct commerce, make plans, brainstorm, gossip, feud, fall in love, create a little high art and a lot of idle talk". For Sherry Turkle, "making the computer into a second self, finding a soul in the machine, can substitute for human relationships". New media has the ability to connect like-minded others worldwide. While this perspective suggests that technology drives, and is therefore a determining factor in, the process of globalization, arguments involving technological determinism are generally frowned upon by mainstream media studies. Instead, academics focus on the multiplicity of processes by which technology is funded, researched and produced, forming a feedback loop when the technologies are used and often transformed by their users, which then feeds into the process of guiding their future development. Commentators such as Manuel Castells espouse a "soft determinism", contending that "Technology does not determine society. Nor does society script the course of technological change, since many factors, including individual inventiveness and entrepreneurialism, intervene in the process of scientific discovery, technical innovation and social applications, so the final outcome depends on a complex pattern of interaction. Indeed the dilemma of technological determinism is probably a false problem, since technology is society and society cannot be understood without its technological tools". This, however, is still distinct from stating that societal changes are instigated by technological development, which recalls the theses of Marshall McLuhan. Manovich and Castells have argued that whereas mass media "corresponded to the logic of industrial mass society, which values conformity over individuality," new media follows the logic of the postindustrial or globalized society whereby "every citizen can construct her own custom lifestyle and select her ideology from a large number of choices. Rather than pushing the same objects to a mass audience, marketing now tries to target each individual separately". The evolution of virtual communities has highlighted many aspects of the real world. Tom Boellstorff's studies of Second Life discuss a practice known as "griefing": in Second Life, griefing means consciously upsetting another user during their experience of the game. Some users also reported situations in which their avatars were raped or sexually harassed.
The same types of actions are carried out in the real world. Virtual communities are a clear demonstration of new media through means of new technological developments. Anthropologist Daniel Miller and sociologist Don Slater used ethnographic studies to discuss Trinidadian culture on online networks. The study argues that internet culture does exist, and that this version of new media cannot eliminate people's relations to their geographic area or national identity. Their focus on Trinidadian culture specifically demonstrated the Trinidadian values and beliefs expressed within these pages, and how users represented their identities on the web. As tool for social change Social movement media has a rich and storied history (see Agitprop) that has changed at a rapid rate since new media became widely used. The Zapatista Army of National Liberation of Chiapas, Mexico was the first major movement to make widely recognized and effective use of new media for communiques and organizing, in 1994. Since then, new media has been used extensively by social movements to educate, organize, share cultural products of movements, communicate, build coalitions, and more. The WTO Ministerial Conference of 1999 protest activity was another landmark in the use of new media as a tool for social change. The WTO protests used new media to organize the original action, to communicate with and educate participants, and as an alternative media source. The Indymedia movement also developed out of this action, and has been a great tool in the democratization of information, which is another widely discussed aspect of the new media movement. Some scholars even view this democratization as an indication of the creation of a "radical, socio-technical paradigm to challenge the dominant, neoliberal and technologically determinist model of information and communication technologies." A less radical view along these same lines is that people are taking advantage of the Internet to produce a grassroots globalization, one that is anti-neoliberal and centered on people rather than the flow of capital. Chanelle Adams, a feminist blogger for the bi-weekly webpaper The Media, says that in her "commitment to anti-oppressive feminist work, it seems obligatory for her to stay in the know just to remain relevant to the struggle." For Adams and other feminists who work towards spreading their messages to the public, new media is crucial to the task, allowing people to access a movement's information instantaneously. Some are also skeptical of the role of new media in social movements. Many scholars point out unequal access to new media as a hindrance to broad-based movements, sometimes even oppressing some within a movement. Others are skeptical about how democratic or useful it really is for social movements, even for those with access. New media has also found a use with less radical social movements such as the Free Hugs Campaign, which uses websites, blogs, and online videos to demonstrate the effectiveness of the movement itself. Along with this example, the use of high-volume blogs has allowed numerous views and practices to become more widespread and gain more public attention. Another example is the ongoing Free Tibet Campaign, which has been seen on numerous websites as well as having a slight tie-in with the band Gorillaz in their Gorillaz Bitez clip featuring the lead singer 2D sitting with protesters at a Free Tibet protest. 
Another social change seen coming from new media is trends in fashion and the emergence of subcultures such as textspeak, cyberpunk, and various others. Following trends in fashion and textspeak, new media also makes way for "trendy" social change. The Ice Bucket Challenge is a recent example. To raise money for ALS (the lethal neurodegenerative disorder also known as Lou Gehrig's disease), participants were nominated by friends via social media such as Facebook and Twitter to dump a bucket of ice water on themselves, or donate to the ALS Foundation. This became a huge trend through Facebook's tagging tool, which allowed nominees to be tagged in the post. The videos appeared on more people's feeds, and the trend spread fast. The trend raised over 100 million dollars for the cause and increased donations by 3,500 percent. A meme, often seen on the internet, is an idea that has been replicated and passed along. Ryan Milner considered this concept as a possible tool for social change. The combination of pictures and text represents pop polyvocality ("the people's version"). A meme can make more serious conversations less tense while still conveying the situation at stake. In the music industry The music industry was affected by the advancement of new media. Throughout years of technological growth, the music industry faced major changes in the distribution of music, from shellac to vinyl, vinyl to 8-track, and many more changes over the decades. Beginning in the early 1900s, audio was released on a brittle material called "shellac". The quality of the sound was very distorted, and the delicacy of the physical format resulted in the change to LPs (long-playing records). The first LP was made by Columbia Records in 1948, and RCA later developed the EP (extended play), which was only seven inches across and had a longer playing time than a standard single. The desire for portable music persisted in this era, which prompted the launch of the compact cassette. The cassette was released in 1963 and flourished in the following decades as cassette players were installed in cars for entertainment while traveling. Not long after the development of the cassette, the music industry began to see forms of piracy: cassette tapes allowed people to make their own tapes without paying for the rights to the music. This caused a major loss for the music industry, but it also led to the evolution of mixtapes. As music technologies continued to develop from 8-tracks, floppy discs, and CDs to, now, MP3s, so did new media platforms. The development of the MP3 in the 1990s has since changed the world we live in today. At first, MP3 tracks threatened the industry with massive piracy on file-sharing networks such as Napster, until laws were established to prevent this. However, consumption of music is higher than ever before due to streaming platforms like Apple Music, Spotify, Pandora, and many more. National security New media has become of interest to the global espionage community as it is easily accessible electronically in database format and can therefore be quickly retrieved and reverse engineered by national governments. Particularly of interest to the espionage community are Facebook and Twitter, two sites where individuals freely divulge personal information that can then be sifted through and archived for the automatic creation of dossiers on both people of interest and the average citizen. 
New media also serves as an important tool for both institutions and nations to promote their interests and values (the contents of such promotion may vary according to different purposes). Some communities consider it an approach of "peaceful evolution" that may erode their own nation's system of values and eventually compromise national security. Interactivity Interactivity has become a term for a number of new media use options evolving from the rapid dissemination of Internet access points, the digitalization of media, and media convergence. In 1984, Ronald E. Rice defined new media as communication technologies that enable or facilitate user-to-user interactivity and interactivity between user and information. Such a definition replaces the "one-to-many" model of traditional mass communication with the possibility of a "many-to-many" web of communication. Any individual with the appropriate technology can now produce his or her own online media and include images, text, and sound about whatever he or she chooses. Thus the convergence of new methods of communication with new technologies shifts the model of mass communication, and radically reshapes the ways we interact and communicate with one another. In "What is new media?", Vin Crosbie described three different kinds of communication media. He saw interpersonal media as "one to one", mass media as "one to many", and finally new media as individuation media, or "many to many". Interactivity is present in some programming work, such as video games. It is also viable in the operation of traditional media. In the mid-1990s, filmmakers started using inexpensive digital cameras to create films. It was also the time when moving image technology had developed to the point where it could be viewed on computer desktops in full motion. This development of new media technology was a new method for artists to share their work and interact with the wider world. Other settings of interactivity include radio and television talk shows, letters to the editor, listener participation in such programs, and computer and technological programming. Interactive new media has become a true benefit to everyone because people can express their artwork in more than one way with the technology that we have today, and there is far less of a limit on what we can do with our creativity. Interactivity can be considered a central concept in understanding new media, but different media forms possess or enable different degrees of interactivity, and some forms of digitized and converged media are not in fact interactive at all. Tony Feldman considers digital satellite television as an example of a new media technology that uses digital compression to dramatically increase the number of television channels that can be delivered, and which changes the nature of what can be offered through the service, but does not transform the experience of television from the user's point of view, and thus lacks a more fully interactive dimension. It remains the case that interactivity is not an inherent characteristic of all new media technologies, unlike digitization and convergence. Terry Flew argues that "the global interactive games industry is large and growing, and is at the forefront of many of the most significant innovations in new media". Interactivity is prominent in online video games such as World of Warcraft, The Sims Online and Second Life. 
These games, which are developments of "new media," allow users to establish relationships and experience a sense of belonging that transcends traditional temporal and spatial boundaries (such as when gamers log in from different parts of the world and interact). These games can be used as an escape or to act out a desired life. New media have created virtual realities that are becoming virtual extensions of the world we live in. With the creation of Second Life and Active Worlds before it, people have even more control over this virtual world, a world where anything that a participant can think of can become a reality. Interactive games and platforms such as YouTube and Facebook have led to many viral apps that devise new ways of interacting with media. The GIF, whose development dates back to the early stages of the web, has evolved into a social media phenomenon. Miltner and Highfield refer to GIFs as being "polysemic": these small looping images carry specific meanings within cultures and can often be used to convey more than one meaning. Miltner and Highfield argue that GIFs are particularly useful in creating affective or emotional connections of meaning between people. Affect creates an emotional connection of meaning to the person and their culture. Industry The new media industry shares an open association with many market segments in areas such as software/video game design, television, radio, mobile and particularly movies, advertising and marketing, through which industry seeks to gain from the advantages of two-way dialogue with consumers, primarily through the Internet. As a device to source the ideas, concepts, and intellectual property of the general public, the television industry has used new media and the Internet to expand its resources for new programming and content. The advertising industry has also capitalized on the proliferation of new media, with large agencies running multimillion-dollar interactive advertising subsidiaries. Interactive websites and kiosks have become popular. In a number of cases advertising agencies have also set up new divisions to study new media. Public relations firms are also taking advantage of the opportunities in new media through interactive PR practices, which include the use of social media to reach a mass audience of online social network users. With the development of the Internet, many new career paths have emerged. Before its rise, many tech jobs were considered boring. The Internet led to creative work that was seen as casual and diverse across gender, race, and sexual orientation. Web design, gaming design, webcasting, blogging, and animation are all creative career paths that came with this rise. At first glance, the field of new media may seem hip, cool, creative, and relaxed; what many don't realize is that working in this field is demanding. Many of the people who work in this field don't have steady jobs: work has become project-based, and individuals work project to project for different companies. Most people are not working on one project or contract, but on multiple ones at the same time. Despite working on numerous projects, people in this industry receive low pay, which contrasts sharply with the techy-millionaire stereotype. It may seem like a carefree life from the outside, but it is not. New media workers work long hours for little pay and spend up to 20 hours a week looking for new projects to work on. 
Youth Based on nationally representative data, a study conducted by the Kaiser Family Foundation at five-year intervals in 1998–99, 2003–04, and 2008–09 found that, with technology allowing nearly 24-hour media access, the amount of time young people spend with entertainment media has risen dramatically, especially among Black and Hispanic youth. Today, 8- to 18-year-olds devote an average of 7 hours and 38 minutes (7:38) to using entertainment media in a typical day (more than 53 hours a week), about the same amount of time most adults spend at work each week. Since much of that time is spent 'media multitasking' (using more than one medium at a time), they actually manage to pack a total of 10 hours and 45 minutes worth of media content into those 7½ hours per day. According to the Pew Internet & American Life Project, 96% of 18- to 29-year-olds and three-quarters (75%) of teens now own a cell phone, 88% of whom text, with 73% of wired American teens using social networking websites, a significant increase from previous years. A survey of over 25,000 9- to 16-year-olds from 25 European countries found that many underage children use social media sites despite the sites' stated age requirements, and that many youth lack the digital skills to use social networking sites safely. The development of the new digital media demands a new educational model from parents and educators. Parental mediation has become a way to manage children's experiences with the Internet, chat, videogames and social networks. A recent internet trend is the YouTuber generation: YouTubers are young people who offer free videos on their personal YouTube channels, covering games, fashion, food, cinema and music, in which they offer tutorials or commentary. Cellular phones such as the iPhone have created an inability to be in social isolation, and the potential for ruining relationships. The iPhone activates the insular cortex of the brain, which is associated with feelings of love. People show similar feelings toward their phones as they would toward their friends, family and loved ones. Countless people spend more time on their phones in the presence of other people than they spend with the people in the same room or class. Political campaigns in the United States In trying to determine the impact of new media on political campaigning and electioneering, the existing research has tried to examine whether new media supplants conventional media. Television is still the dominant news source, but new media's reach is growing. What is known is that new media has had a significant impact on elections, and what began in the 2008 presidential campaign established new standards for how campaigns would be run. Since then, campaigns have also expanded their outreach methods by developing targeted messages for specific audiences that can be reached via different social media platforms. Both parties have specific digital media strategies designed for voter outreach. Additionally, their websites are socially connected, engaging voters before, during, and after elections. Email and text messages are also regularly sent to supporters encouraging them to donate and get involved. Some existing research focuses on the ways that political campaigns, parties, and candidates have incorporated new media into their political strategizing. This is often a multi-faceted approach that combines new and old media forms to create highly specialized strategies. This allows them to reach wider audiences, but also to target very specific subsets of the electorate. 
They are able to tap into polling data and, in some cases, harness the analytics of the traffic and profiles on various social media outlets to get real-time data about the kinds of engagement that are needed and the kinds of messages that are successful or unsuccessful. One body of existing research into the impact of new media on elections investigates the relationship between voters' use of new media and their level of political activity. They focus on areas such as "attentiveness, knowledge, attitudes, orientations, and engagement". Referencing a vast body of research, Diana Owen points out that older studies were mixed, while "newer research reveals more consistent evidence of information gain". Some of that research has shown that there is a connection between the amount and degree of voter engagement and turnout; however, new media may not have overwhelming effects on either. Other research tends toward the idea that new media has a reinforcing effect: rather than completely altering political behavior, the increased involvement it produces "imitates the established pattern of political participation". After analyzing the Citizenship Involvement Democracy survey, Taewoo Nam found that "the internet plays a dual role in mobilizing political participation by people not normally politically involved, as well as reinforcing existing offline participation." These findings chart a middle ground between research that optimistically holds new media up as extremely effective at fostering political participation and research that holds it to be extremely ineffective. Terri Towner found, in a survey of college students, that attention to new media increases offline and online political participation, particularly for young people. The research shows that the prevalence of online media boosts participation and engagement, and suggests that "it seems that online sources that facilitate political involvement, communication, and mobilization, particularly campaign websites, social media, and blogs, are the most important for offline political participation among young people". When gauging the effects and implications of new media on the political process, one means of doing so is to look at the deliberations that take place in these digital spaces. Citing the work of several researchers, Halpern and Gibbs define deliberation as "the performance of a set of communicative behaviors that promote thorough discussion, and the notion that in this process of communication the individuals involved weigh carefully the reasons for and against some of the propositions presented by others". The work of Daniel Halpern and Jennifer Gibbs "suggest[s] that although social media may not provide a forum for intensive or in-depth policy debate, it nevertheless provides a deliberative space to discuss and encourage political participation, both directly and indirectly". Their work goes a step beyond that as well, because it shows that some social media sites foster a more robust political debate than others: Facebook, for example, attaches highly personal and identifiable information about users to any comments they may post on political topics, in contrast to sites like YouTube, whose comments are often posted anonymously. Ethical issues in new media research Due to the popularity of new media, social media websites (SMWs) like Facebook and Twitter are becoming increasingly popular among researchers. 
Although SMWs present new opportunities, they also represent challenges for researchers interested in studying social phenomena online, since it can be difficult to determine what constitutes an acceptable risk to privacy in the social media context. Some scholars argue that standard Institutional Review Board (IRB) procedures provide little guidance on research protocols relating to social media in particular. As a consequence, three major approaches to research on social media, and the relevant concerns scholars should consider before engaging in social media research, have been identified. Observational research One of the major issues for observational research is whether a particular project is considered to involve human subjects. A human subject "is defined by federal regulations as a living individual about whom an investigator obtains data through interaction with the individual or identifiable private information". If access to a social media site is public, information is considered identifiable but not private, and information gathering procedures do not require researchers to interact with the original poster of the information, then the project does not meet the requirements for human subjects research. Research may also be exempt if the disclosure of participant responses outside the realm of the published research does not subject the participant to civil or criminal liability, or damage the participant's reputation, employability or financial standing. Given these criteria, however, researchers still have considerable leeway when conducting observational research on social media. Many profiles on Facebook, Twitter, and LinkedIn are public, and researchers are free to use that data for observational research. Users have the ability to change their privacy settings on most social media websites. Facebook, for example, provides users with the ability to restrict who sees their posts through specific privacy settings. There is also debate about whether requiring users to create a username and password is sufficient to establish whether the data is considered public or private. Historically, Institutional Review Boards considered such websites to be private, although newer websites like YouTube call this practice into question. For example, YouTube only requires the creation of a username and password to post videos and/or view adult content, but anyone is free to view general YouTube videos, and these general videos would not be subject to consent requirements for researchers looking to conduct observational studies. Interactive research According to Moreno and colleagues, interactive research occurs when "a researcher wishes to access the [social media website] content that is not publicly available". Because researchers have limited ways of accessing this data, this could mean that a researcher sends a Facebook user a friend request, or follows a user on Twitter, in order to gain access to potentially protected tweets. While it could be argued that such actions violate a social media user's expectation of privacy, other scholars have argued that actions like "friending" or "following" an individual on social media constitute a "loose tie" relationship and are therefore not sufficient to establish a reasonable expectation of privacy, since individuals often have friends or followers they have never even met. Survey and interview research Because research on social media occurs online, it is difficult for researchers to observe participant reactions to the informed consent process. 
For example, when collecting information about activities that are potentially illegal, or recruiting participants from stigmatized populations, this lack of physical proximity could potentially negatively impact the informed consent process. Another important consideration regards the confidentiality of information provided by participants. While information provided over the internet might be perceived as lower risk, studies that publish direct quotes from study participants might expose them to the risk of being identified via a Google search. See also Augmented reality Collective intelligence Cyberculture Cybertext Digital media Digital art Distance education Digital rhetoric Electronic media Global Editors Network (GEN) Information Age Interactive media Live media Mass media Mass collaboration Media intelligence Multimedia New media art New media artist New Media Film Festival New media studies Non-linear media Residual media Social journalism Social media in education Social media marketing Social media use in politics User-generated content Telecommunications Web 2.0 References Further reading Poynter Institute: New Media Timeline (1969–2010) created by David B. Shedden, Library Director at Poynter Institute Leah A. Lievrouw, Sonia Livingstone (ed.), The Handbook of New Media, Sage, 2002 Logan, Robert K. (2010) Understanding New Media: Extending Marshall McLuhan, New York: Peter Lang Publishing. Croteau and Hoynes (2003) Media Society: Industries, Images and Audiences (3rd ed.), Pine Forge Press: Thousand Oaks. Timothy Murray, Derrick de Kerckhove, Oliver Grau, Kristine Stiles, Jean-Baptiste Barrière, Dominique Moulon, Jean-Pierre Balpe, Maurice Benayoun, Open Art, Nouvelles éditions Scala, 2011, French version. Flew and Humphreys (2005) "Games: Technology, Industry, Culture" in Terry Flew, New Media: an Introduction (2nd ed.), Oxford University Press: South Melbourne. Holmes (2005) "Telecommunity" in Communication Theory: Media, Technology and Society, Cambridge: Polity. Jarzombek, Mark (2016). Digital Stockholm Syndrome in the Post-Ontological Age, Minneapolis: University of Minnesota Press. Scharl, A. and Tochtermann, K., Eds. (2007). The Geospatial Web: How Geobrowsers, Social Software and the Web 2.0 are Shaping the Network Society. London: Springer. Turkle, Sherry (1996) "Who am We?", Wired magazine, 4.01, published January 1996. Andrade, Kara, Online media can foster community, Online News Association Convention, October 29, 2005. Mark Tribe and Reena Jana, New Media Art, Taschen, 2006. Robert C. Morgan, Commentaries on the New Media Arts, Pasadena, CA: Umbrella Associates, 1992, Foreword. Lev Manovich, The Language of New Media, Cambridge: MIT Press/Leonardo Books, 2001. Kennedy, Randy. "Giving New Life to Protests of Yore", The New York Times, July 28, 2007. Immersive Ideals / Critical Distances: A Study of the Affinity Between Artistic Ideologies Based in Virtual Reality and Previous Immersive Idioms by Joseph Nechvatal, 1999, Planetary Collegium. Why New Media Isn't: A Personal Journey by David Shedden (2007). Norberto González Gaitano (2016): Family and media. Family relationships, their representation in the mass media, and virtual relationships. American art Contemporary art Digital media Hyperreality Internet culture Promotion and marketing communications Science and technology studies Social influence Social media Visual arts genres
New media
[ "Technology" ]
7,887
[ "New media", "Science and technology studies", "Digital media", "Computing and society", "Multimedia", "Hyperreality", "Social media" ]
344,887
https://en.wikipedia.org/wiki/Boole%27s%20inequality
In probability theory, Boole's inequality, also known as the union bound, says that for any finite or countable set of events, the probability that at least one of the events happens is no greater than the sum of the probabilities of the individual events. This inequality provides an upper bound on the probability of occurrence of at least one of a countable number of events in terms of the individual probabilities of the events. Boole's inequality is named for its discoverer, George Boole. Formally, for a countable set of events $A_1, A_2, A_3, \ldots$, we have $\mathbb{P}\left(\bigcup_{i=1}^{\infty} A_i\right) \le \sum_{i=1}^{\infty} \mathbb{P}(A_i)$. In measure-theoretic terms, Boole's inequality follows from the fact that a measure (and certainly any probability measure) is σ-sub-additive. Proof Proof using induction Boole's inequality may be proved for finite collections of $n$ events using the method of induction. For the case $n = 1$, it follows that $\mathbb{P}(A_1) \le \mathbb{P}(A_1)$. For the case $n$, we have $\mathbb{P}\left(\bigcup_{i=1}^{n} A_i\right) \le \sum_{i=1}^{n} \mathbb{P}(A_i)$. Since $\mathbb{P}(A \cup B) = \mathbb{P}(A) + \mathbb{P}(B) - \mathbb{P}(A \cap B)$, and because the union operation is associative, we have $\mathbb{P}\left(\bigcup_{i=1}^{n+1} A_i\right) = \mathbb{P}\left(\bigcup_{i=1}^{n} A_i\right) + \mathbb{P}(A_{n+1}) - \mathbb{P}\left(\left(\bigcup_{i=1}^{n} A_i\right) \cap A_{n+1}\right)$. Since $\mathbb{P}\left(\left(\bigcup_{i=1}^{n} A_i\right) \cap A_{n+1}\right) \ge 0$ by the first axiom of probability, we have $\mathbb{P}\left(\bigcup_{i=1}^{n+1} A_i\right) \le \mathbb{P}\left(\bigcup_{i=1}^{n} A_i\right) + \mathbb{P}(A_{n+1})$, and therefore $\mathbb{P}\left(\bigcup_{i=1}^{n+1} A_i\right) \le \sum_{i=1}^{n+1} \mathbb{P}(A_i)$. Proof without using induction For any events $A_1, A_2, A_3, \ldots$ in our probability space we have $\mathbb{P}\left(\bigcup_{i} A_i\right) \le \sum_{i} \mathbb{P}(A_i)$. One of the axioms of a probability space is that if $B_1, B_2, B_3, \ldots$ are disjoint subsets of the probability space then $\mathbb{P}\left(\bigcup_{i} B_i\right) = \sum_{i} \mathbb{P}(B_i)$; this is called countable additivity. If we modify the sets $A_i$ so they become disjoint, $B_i = A_i \setminus \bigcup_{j=1}^{i-1} A_j$, we can show that $\bigcup_{i=1}^{\infty} B_i = \bigcup_{i=1}^{\infty} A_i$ by proving both directions of inclusion. Suppose $x \in \bigcup_{i=1}^{\infty} A_i$. Then $x \in A_k$ for some minimum $k$ such that $x \in A_k$. Therefore $x \in B_k = A_k \setminus \bigcup_{j=1}^{k-1} A_j$. So the first inclusion is true: $\bigcup_{i} A_i \subseteq \bigcup_{i} B_i$. Next suppose that $x \in \bigcup_{i=1}^{\infty} B_i$. It follows that $x \in B_k$ for some $k$. And so $x \in A_k$, and we have the other inclusion: $\bigcup_{i} B_i \subseteq \bigcup_{i} A_i$. By construction of each $B_i$, $B_i \subseteq A_i$. For $B \subseteq A$ it is the case that $\mathbb{P}(B) \le \mathbb{P}(A)$. So, we can conclude that the desired inequality is true: $\mathbb{P}\left(\bigcup_{i} A_i\right) = \mathbb{P}\left(\bigcup_{i} B_i\right) = \sum_{i} \mathbb{P}(B_i) \le \sum_{i} \mathbb{P}(A_i)$. Bonferroni inequalities Boole's inequality may be generalized to find upper and lower bounds on the probability of finite unions of events. These bounds are known as Bonferroni inequalities, after Carlo Emilio Bonferroni. Let $S_k = \sum_{1 \le i_1 < \cdots < i_k \le n} \mathbb{P}(A_{i_1} \cap \cdots \cap A_{i_k})$ for all integers $k$ in $\{1, \ldots, n\}$. Then, when $K$ is odd, $\mathbb{P}\left(\bigcup_{i=1}^{n} A_i\right) \le \sum_{k=1}^{K} (-1)^{k-1} S_k$ holds, and when $K$ is even, $\mathbb{P}\left(\bigcup_{i=1}^{n} A_i\right) \ge \sum_{k=1}^{K} (-1)^{k-1} S_k$ holds. The equalities follow from the inclusion–exclusion principle, and Boole's inequality is the special case of $K = 1$. Proof for odd K Let $E = B_1 \cap B_2 \cap \cdots \cap B_n$, where each $B_i$ is either $A_i$ or its complement $A_i^c$. Such events $E$ partition the sample space, and for each $E$ and every $i$, $E$ is either contained in $A_i$ or disjoint from it. If $E = A_1^c \cap \cdots \cap A_n^c$, then $E$ contributes 0 to both sides of the inequality. Otherwise, assume $E$ is contained in exactly $L$ of the $A_i$. Then $E$ contributes exactly $\sum_{k=1}^{K} (-1)^{k-1} \binom{L}{k} \mathbb{P}(E)$ to the right side of the inequality, while it contributes $\mathbb{P}(E)$ to the left side of the inequality. However, by Pascal's rule, $\binom{L}{k} = \binom{L-1}{k-1} + \binom{L-1}{k}$, so the right-side contribution is equal to $\mathbb{P}(E) \sum_{k=1}^{K} (-1)^{k-1} \left( \binom{L-1}{k-1} + \binom{L-1}{k} \right)$, which telescopes to $\mathbb{P}(E) \left( 1 + \binom{L-1}{K} \right) \ge \mathbb{P}(E)$. Thus, the inequality holds for all events $E$, and so by summing over $E$, we obtain the desired inequality: $\mathbb{P}\left(\bigcup_{i=1}^{n} A_i\right) \le \sum_{k=1}^{K} (-1)^{k-1} S_k$. The proof for even $K$ is nearly identical. Example Suppose that you are estimating 5 parameters based on a random sample, and you can control each parameter separately. If you want your estimations of all five parameters to be good with a chance of 95%, what should you do to each parameter? Tuning each parameter's chance to be good to within 95% is not enough because "all are good" is a subset of each event "estimate i is good". We can use Boole's inequality to solve this problem. By considering the complement of the event "all five are good", we can change this question into a condition on the failure probabilities: by Boole's inequality, $\mathbb{P}(\text{at least one estimate is bad}) \le \mathbb{P}(A_1 \text{ is bad}) + \mathbb{P}(A_2 \text{ is bad}) + \mathbb{P}(A_3 \text{ is bad}) + \mathbb{P}(A_4 \text{ is bad}) + \mathbb{P}(A_5 \text{ is bad})$, and we require this bound to be at most 0.05. One way is to make each of them equal to 0.05/5 = 0.01, that is 1%. 
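A quick numerical check of this union-bound budgeting, written as a minimal Python sketch (the 5% budget and the five events come from the example above; the independence of the simulated events is an added assumption used only to generate data, since Boole's inequality itself requires no independence):

```python
import random

# Split the overall 5% failure budget across 5 estimates (union bound).
total_alpha = 0.05
n_events = 5
per_event_alpha = total_alpha / n_events  # 0.01, i.e. each estimate "bad" with prob 1%

# Monte Carlo check: simulate the 5 "bad estimate" events.
# Independence here is an assumption of the simulation only; the union
# bound holds under any dependence structure among the events.
trials = 200_000
at_least_one_bad = 0
for _ in range(trials):
    if any(random.random() < per_event_alpha for _ in range(n_events)):
        at_least_one_bad += 1

empirical = at_least_one_bad / trials
union_bound = n_events * per_event_alpha
print(f"P(at least one bad) ~ {empirical:.4f} <= union bound {union_bound:.2f}")
```

Under the independence assumption the empirical frequency comes out near $1 - 0.99^5 \approx 0.049$, slightly below the 0.05 guaranteed by the bound.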
In other words, you have to guarantee each estimate is good with probability 99% (for example, by constructing a 99% confidence interval) to make sure the total estimation is good with a chance of 95%. This is called the Bonferroni method of simultaneous inference. See also Diluted inclusion–exclusion principle Schuette–Nesbitt formula Boole–Fréchet inequalities Probability of the union of pairwise independent events References Other related articles Probabilistic inequalities Statistical inequalities
Boole's inequality
[ "Mathematics" ]
897
[ "Theorems in statistics", "Statistical inequalities", "Theorems in probability theory", "Probabilistic inequalities", "Inequalities (mathematics)" ]
344,913
https://en.wikipedia.org/wiki/Asymmetry
Asymmetry is the absence of, or a violation of, symmetry (the property of an object being invariant to a transformation, such as reflection). Symmetry is an important property of both physical and abstract systems, and it may be displayed in precise terms or in more aesthetic terms. The absence or violation of a symmetry that is either expected or desired can have important consequences for a system. In organisms Due to how cells divide in organisms, asymmetry in organisms is fairly usual in at least one dimension, with biological symmetry also being common in at least one dimension. Louis Pasteur proposed that biological molecules are asymmetric because the cosmic [i.e. physical] forces that preside over their formation are themselves asymmetric. While in his time, and even now, the symmetry of physical processes is highlighted, it is known that there are fundamental physical asymmetries, starting with time. Asymmetry in biology Asymmetry is an important and widespread trait, having evolved numerous times in many organisms and at many levels of organisation (ranging from individual cells, through organs, to entire body-shapes). Benefits of asymmetry sometimes have to do with improved spatial arrangements, such as the left human lung being smaller, with one fewer lobe than the right lung, to make room for the asymmetrical heart. In other examples, division of function between the right and left halves may have been beneficial and has driven the asymmetry to become stronger. Such an explanation is usually given for mammal hand or paw preference (handedness), an asymmetry in skill development in mammals. Training the neural pathways in a skill with one hand (or paw) may take less effort than doing the same with both hands. Nature also provides several examples of handedness in traits that are usually symmetric. The following are examples of animals with obvious left-right asymmetries: Most snails, because of torsion during development, show remarkable asymmetry in the shell and in the internal organs. Male fiddler crabs have one big claw and one small claw. The narwhal's tusk is a left incisor which can grow up to 10 feet in length and forms a left-handed helix. Flatfish have evolved to swim with one side upward, and as a result have both eyes on one side of their heads. Several species of owls exhibit asymmetries in the size and positioning of their ears, which is thought to help locate prey. Many animals (ranging from insects to mammals) have asymmetric male genitalia. The evolutionary cause behind this is, in most cases, still a mystery. As an indicator of unfitness Asymmetry may result from certain disturbances during the development of the organism, producing birth defects, or from injuries occurring after cell division that cannot be biologically repaired, such as a limb lost in an accident. Since birth defects and injuries are likely to indicate poor health of the organism, defects resulting in asymmetry often put an animal at a disadvantage when it comes to finding a mate. For example, a greater degree of facial symmetry is seen as more attractive in humans, especially in the context of mate selection. In general, there is a correlation between symmetry and fitness-related traits such as growth rate, fecundity and survivability for many species. This means that, through sexual selection, individuals with greater symmetry (and therefore fitness) tend to be preferred as mates, as they are more likely to produce healthy offspring. 
In structures Pre-modern architectural styles tended to place an emphasis on symmetry, except where extreme site conditions or historical developments led away from this classical ideal. To the contrary, modernist and postmodern architects became much more free to use asymmetry as a design element. While most bridges employ a symmetrical form due to intrinsic simplicities of design, analysis and fabrication and economical use of materials, a number of modern bridges have deliberately departed from this, either in response to site-specific considerations or to create a dramatic design statement. Some asymmetrical structures In fire protection In fire-resistance rated wall assemblies used in passive fire protection, including, but not limited to, high-voltage transformer fire barriers, asymmetry is a crucial aspect of design. When designing a facility, it is not always certain, in the event of fire, which side a fire may come from. Therefore, many building codes and fire test standards outline that a symmetrical assembly need only be tested from one side, because both sides are the same. However, as soon as an assembly is asymmetrical, both sides must be tested, and the test report is required to state the results for each side. In practical use, the lowest result achieved is the one that appears in certification listings. Neither the test sponsor nor the laboratory can decide by opinion or deduction which side would be in more peril and then test only that side; both sides must be tested in order to be compliant with test standards and building codes. In mathematics In mathematics, asymmetry can arise in various ways. Examples include asymmetric relations, asymmetry of shapes in geometry, asymmetric graphs, et cetera. Lines of symmetry When determining whether an object is asymmetrical, look for lines of symmetry. For instance, a square has four lines of symmetry, while a circle has infinitely many. If a shape has no lines of symmetry, then it is asymmetrical, but if an object has any lines of symmetry, it is symmetrical. Asymmetric Relation An asymmetric relation is a binary relation $R$ defined on a set of elements such that if $aRb$ holds for elements $a$ and $b$, then $bRa$ must be false. Stated differently, an asymmetric relation is characterized by a necessary absence of symmetry of the relation in the opposite direction. Inequalities exemplify asymmetric relations. Consider elements $a$ and $b$. If $a$ is less than $b$ ($a < b$), then $b$ cannot be less than $a$ ($b < a$ is false). This highlights how the relations "less than", and similarly "greater than", are not symmetric. In contrast, if $a$ is equal to $b$ ($a = b$), then $b$ is also equal to $a$ ($b = a$). Thus the binary relation "equal to" is a symmetric one. Asymmetric Tensors In general an asymmetric tensor is defined by the change of sign of its components under the interchange of two indices. The epsilon tensor (Levi-Civita symbol) is an example of an asymmetric tensor. It is defined as $\varepsilon_{ijk} = +1$ if $(i, j, k)$ is an even permutation of $(1, 2, 3)$, $\varepsilon_{ijk} = -1$ if it is an odd permutation, and $\varepsilon_{ijk} = 0$ otherwise, with $i, j, k \in \{1, 2, 3\}$; thus for even or uneven permutations of the indices the tensor is either 1 or −1. In chemistry Certain molecules are chiral; that is, they cannot be superposed upon their mirror image. Chemically identical molecules with different chirality are called enantiomers; this difference in orientation can lead to different properties in the way they react with biological systems. In physics Asymmetry arises in physics in a number of different realms. Thermodynamics The original non-statistical formulation of thermodynamics was asymmetrical in time: it claimed that the entropy in a closed system can only increase with time. 
This was derived from the Second Law (either of the two, Clausius' or Lord Kelvin's statement, can be used, since they are equivalent) and using the Clausius theorem (see Kerson Huang). The later theory of statistical mechanics, however, is symmetric in time. Although it states that a system significantly below maximum entropy is very likely to evolve towards higher entropy, it also states that such a system is very likely to have evolved from higher entropy. Particle physics Symmetry is one of the most powerful tools in particle physics, because it has become evident that practically all laws of nature originate in symmetries. Violations of symmetry therefore present theoretical and experimental puzzles that lead to a deeper understanding of nature. Asymmetries in experimental measurements also provide powerful handles that are often relatively free from background or systematic uncertainties. Parity violation Until the 1950s, it was believed that fundamental physics was left-right symmetric; i.e., that interactions were invariant under parity. Although parity is conserved in electromagnetism, strong interactions and gravity, it turns out to be violated in weak interactions. The Standard Model incorporates parity violation by expressing the weak interaction as a chiral gauge interaction. Only the left-handed components of particles and right-handed components of antiparticles participate in weak interactions in the Standard Model. A consequence of parity violation in particle physics is that neutrinos have only been observed as left-handed particles (and antineutrinos as right-handed particles). In 1956–1957 Chien-Shiung Wu, E. Ambler, R. W. Hayward, D. D. Hoppes, and R. P. Hudson found a clear violation of parity conservation in the beta decay of cobalt-60. Simultaneously, R. L. Garwin, Leon Lederman, and R. Weinrich modified an existing cyclotron experiment and immediately verified parity violation. CP violation After the discovery of the violation of parity in 1956–57, it was believed that the combined symmetry of parity (P) and simultaneous charge conjugation (C), called CP, was preserved. For example, CP transforms a left-handed neutrino into a right-handed antineutrino. In 1964, however, James Cronin and Val Fitch provided clear evidence that CP symmetry was also violated in an experiment with neutral kaons. CP violation is one of the necessary conditions for the generation of a baryon asymmetry in the early universe. Combining the CP symmetry with simultaneous time reversal (T) produces a combined symmetry called CPT symmetry. CPT symmetry must be preserved in any Lorentz invariant local quantum field theory with a Hermitian Hamiltonian. As of 2006, no violations of CPT symmetry have been observed. Baryon asymmetry of the universe The baryons (i.e., the protons and neutrons and the atoms that they comprise) observed so far in the universe are overwhelmingly matter as opposed to anti-matter. This asymmetry is called the baryon asymmetry of the universe. Isospin violation Isospin is a symmetry transformation of the strong interactions. The concept was first introduced by Werner Heisenberg in nuclear physics based on the observations that the masses of the neutron and the proton are almost identical and that the strength of the strong interaction between any pair of nucleons is the same, independent of whether they are protons or neutrons. This symmetry arises at a more fundamental level as a symmetry between up-type and down-type quarks. 
Isospin symmetry in the strong interactions can be considered as a subset of a larger flavor symmetry group, in which the strong interactions are invariant under interchange of different types of quarks. Including the strange quark in this scheme gives rise to the Eightfold Way scheme for classifying mesons and baryons. Isospin is violated by the fact that the masses of the up and down quarks are different, as well as by their different electric charges. Because this violation is only a small effect in most processes that involve the strong interactions, isospin symmetry remains a useful calculational tool, and its violation introduces corrections to the isospin-symmetric results. In collider experiments Because the weak interactions violate parity, collider processes that can involve the weak interactions typically exhibit asymmetries in the distributions of the final-state particles. These asymmetries are typically sensitive to the difference in the interaction between particles and antiparticles, or between left-handed and right-handed particles. They can thus be used as a sensitive measurement of differences in interaction strength and/or to distinguish a small asymmetric signal from a large but symmetric background. A forward-backward asymmetry is defined as $A_{FB} = (N_F - N_B)/(N_F + N_B)$, where $N_F$ is the number of events in which some particular final-state particle is moving "forward" with respect to some chosen direction (e.g., a final-state electron moving in the same direction as the initial-state electron beam in electron-positron collisions), while $N_B$ is the number of events with the final-state particle moving "backward". Forward-backward asymmetries were used by the LEP experiments to measure the difference in the interaction strength of the Z boson between left-handed and right-handed fermions, which provides a precision measurement of the weak mixing angle. A left-right asymmetry is defined as $A_{LR} = (N_L - N_R)/(N_L + N_R)$, where $N_L$ is the number of events in which some initial- or final-state particle is left-polarized, while $N_R$ is the corresponding number of right-polarized events. Left-right asymmetries in Z boson production and decay were measured at the Stanford Linear Collider using the event rates obtained with left-polarized versus right-polarized initial electron beams. Left-right asymmetries can also be defined as asymmetries in the polarization of final-state particles whose polarizations can be measured; e.g., tau leptons. A charge asymmetry or particle-antiparticle asymmetry is defined in a similar way. This type of asymmetry has been used to constrain the parton distribution functions of protons at the Tevatron from events in which a produced W boson decays to a charged lepton. The asymmetry between positively and negatively charged leptons as a function of the direction of the W boson relative to the proton beam provides information on the relative distributions of up and down quarks in the proton. Particle-antiparticle asymmetries are also used to extract measurements of CP violation from B meson and anti-B meson production at the BaBar and Belle experiments. (A small numerical sketch of these counting asymmetries appears below.) See also Information asymmetry Asymmetric multiprocessing Chirality References Further reading Gardner, Martin (1990), The New Ambidextrous Universe: Symmetry and Asymmetry from Mirror Reflections to Superstrings, 3rd edition, W.H. Freeman & Co Ltd. Passive fire protection
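The forward-backward and left-right asymmetries defined above are simple counting ratios, so a minimal Python sketch suffices to illustrate them (all event counts here are invented for illustration only; they do not come from any real experiment):

```python
# Counting asymmetries of the form (N+ - N-) / (N+ + N-),
# as used for A_FB (forward/backward) and A_LR (left/right polarized).

def asymmetry(n_plus: int, n_minus: int) -> float:
    """Generic counting asymmetry (N+ - N-) / (N+ + N-)."""
    return (n_plus - n_minus) / (n_plus + n_minus)

# Hypothetical event counts, for illustration only.
n_forward, n_backward = 5_230, 4_770   # events with the particle going forward/backward
n_left, n_right = 5_640, 4_360         # events with left-/right-polarized beams

print(f"A_FB = {asymmetry(n_forward, n_backward):+.3f}")  # +0.046
print(f"A_LR = {asymmetry(n_left, n_right):+.3f}")        # +0.128
```

Because both the signal-induced excess and the symmetric background enter the numerator and denominator together, such ratios cancel many overall normalization uncertainties, which is why asymmetries are favored as precision observables.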
Asymmetry
[ "Physics", "Mathematics" ]
2,928
[ "Geometry", "Symmetry", "Asymmetry" ]
344,919
https://en.wikipedia.org/wiki/Leech%20lattice
In mathematics, the Leech lattice is an even unimodular lattice Λ24 in 24-dimensional Euclidean space, which is one of the best models for the kissing number problem. It was discovered by John Leech. It may also have been discovered (but not published) by Ernst Witt in 1940. Characterization The Leech lattice Λ24 is the unique lattice in 24-dimensional Euclidean space, E24, with the following list of properties: It is unimodular; i.e., it can be generated by the columns of a certain 24×24 matrix with determinant 1. It is even; i.e., the square of the length of each vector in Λ24 is an even integer. The length of every non-zero vector in Λ24 is at least 2. The last condition is equivalent to the condition that unit balls centered at the points of Λ24 do not overlap. Each is tangent to 196,560 neighbors, and this is known to be the largest number of non-overlapping 24-dimensional unit balls that can simultaneously touch a single unit ball. This arrangement of 196,560 unit balls centred about another unit ball is so efficient that there is no room to move any of the balls; this configuration, together with its mirror-image, is the only 24-dimensional arrangement where 196,560 unit balls simultaneously touch another. This property is also true in 1, 2 and 8 dimensions, with 2, 6 and 240 unit balls, respectively, based on the integer lattice, hexagonal tiling and E8 lattice, respectively. It has no root system and in fact is the first unimodular lattice with no roots (vectors of norm less than 4), and therefore has a centre density of 1. By multiplying this value by the volume of a unit ball in 24 dimensions, $\frac{\pi^{12}}{12!}$, one can derive its absolute density. John Conway showed that the Leech lattice is isometric to the set of simple roots (or the Dynkin diagram) of the reflection group of the 26-dimensional even Lorentzian unimodular lattice II25,1. By comparison, the Dynkin diagrams of II9,1 and II17,1 are finite. Applications The binary Golay code, independently developed in 1949, is an application in coding theory. More specifically, it is an error-correcting code capable of correcting up to three errors in each 24-bit word, and detecting up to four. It was used to communicate with the Voyager probes, as it is much more compact than the previously-used Hadamard code. Quantizers, or analog-to-digital converters, can use lattices to minimise the average root-mean-square error. Most quantizers are based on the one-dimensional integer lattice, but using multi-dimensional lattices reduces the RMS error. The Leech lattice is a good solution to this problem, as its Voronoi cells have a low second moment. The vertex algebra of the two-dimensional conformal field theory describing bosonic string theory, compactified on the 24-dimensional quotient torus R24/Λ24 and orbifolded by a two-element reflection group, provides an explicit construction of the Griess algebra that has the monster group as its automorphism group. This monster vertex algebra was also used to prove the monstrous moonshine conjectures. Constructions The Leech lattice can be constructed in a variety of ways. Like all lattices, it can be constructed by taking the integral span of the columns of its generator matrix, a 24×24 matrix with determinant 1. 
Using the binary Golay code The Leech lattice can be explicitly constructed as the set of vectors of the form $2^{-3/2}(a_1, a_2, \ldots, a_{24})$ where the $a_i$ are integers such that $a_1 + a_2 + \cdots + a_{24} \equiv 4a_1 \equiv 4a_2 \equiv \cdots \equiv 4a_{24} \pmod 8$ and, for each fixed residue class modulo 4, the 24-bit word whose 1s correspond to the coordinates $i$ such that $a_i$ belongs to this residue class is a word in the binary Golay code. The Golay code, together with the related Witt design, features in a construction for the 196560 minimal vectors in the Leech lattice. The Leech lattice (L mod 8) can be directly constructed by a combination of the three following sets ($1_n$ denotes a ones vector of size $n$): G, the 24-bit Golay code; B, the binary integer sequence; and C, the Thue–Morse sequence, or integer bit-parity sum (which gives the chirality of the lattice):

24-bit Golay [2^12 codes]      24-bit integer [2^24 codes]    Parity   Leech lattice [2^36 codes]
G =                            B =                            C =      L = (4B + C) ⊕ 2G
00000000 00000000 00000000     00000000 00000000 00000000     0        00000000 00000000 00000000
11111111 00000000 00000000     10000000 00000000 00000000     1        22222222 00000000 00000000
11110000 11110000 00000000     01000000 00000000 00000000     1        22220000 22220000 00000000
00001111 11110000 00000000     11000000 00000000 00000000     0        ...
11001100 11001100 00000000     00100000 00000000 00000000     1        51111111 11111111 11111111
00110011 11001100 00000000     10100000 00000000 00000000     0        73333333 11111111 11111111
00111100 00111100 00000000     01100000 00000000 00000000     0        ...
11000011 00111100 00000000     11100000 00000000 00000000     1        15111111 11111111 11111111
10101010 10101010 00000000     00010000 00000000 00000000     1        37333333 11111111 11111111
01010101 10101010 00000000     10010000 00000000 00000000     0        ...
01011010 01011010 00000000     01010000 00000000 00000000     0        44000000 00000000 00000000
10100101 01011010 00000000     11010000 00000000 00000000     1        66222222 00000000 00000000
...                            ...                            ...      ...
11111111 11111111 11111111     11111111 11111111 11111111     0        66666666 66666666 66666666

Using the Lorentzian lattice II25,1 The Leech lattice can also be constructed as $w^{\perp}/w$, where $w$ is the Weyl vector $w = (0, 1, 2, \ldots, 24; 70)$ in the 26-dimensional even Lorentzian unimodular lattice II25,1. The existence of such an integral vector of Lorentzian norm zero relies on the fact that $1^2 + 2^2 + \cdots + 24^2$ is a perfect square (in fact $70^2$); the number 24 is the only integer bigger than 1 with this property (see cannonball problem). This was conjectured by Édouard Lucas, but the proof came much later, based on elliptic functions. The vector $(0, 1, 2, \ldots, 24; 70)$ in this construction is really the Weyl vector of the even sublattice D24 of the odd unimodular lattice I25. More generally, if L is any positive definite unimodular lattice of dimension 25 with at least 4 vectors of norm 1, then the Weyl vector of its norm-2 roots has integral length, and there is a similar construction of the Leech lattice using L and this Weyl vector. Based on other lattices Conway and Sloane described another 23 constructions for the Leech lattice, each based on a Niemeier lattice. It can also be constructed by using three copies of the E8 lattice, in the same way that the binary Golay code can be constructed using three copies of the extended Hamming code, H8. This construction is known as the Turyn construction of the Leech lattice. As a laminated lattice Starting with a single point, Λ0, one can stack copies of the lattice Λn to form an (n + 1)-dimensional lattice, Λn+1, without reducing the minimal distance between points. Λ1 corresponds to the integer lattice, Λ2 to the hexagonal lattice, and Λ3 to the face-centered cubic packing. Conway and Sloane showed that the Leech lattice is the unique laminated lattice in 24 dimensions. 
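Two of the numerical facts used in these constructions are easy to check directly: the cannonball identity $1^2 + 2^2 + \cdots + 24^2 = 70^2$ behind the Weyl vector, and the count of 196560 minimal vectors, which decomposes by coordinate shape (the three shapes are described under Geometry below) as $1104 + 97152 + 98304$. A minimal Python sketch, assuming the standard combinatorial formulas for the shape counts and the fact that the Golay code has 759 octads:

```python
from math import comb, isqrt

# Cannonball identity: 1^2 + ... + 24^2 is a perfect square (70^2), which is
# what makes the Weyl vector w = (0, 1, ..., 24; 70) isotropic in II(25,1).
total = sum(k * k for k in range(1, 25))
assert total == 4900 and isqrt(total) ** 2 == total  # 4900 = 70^2

# Counting the 196560 minimal vectors by shape (see the Geometry section):
#   (4^2, 0^22): choose 2 coordinates, 2 signs each          -> C(24,2) * 4
#   (2^8, 0^16): one of 759 Golay octads, even # of minuses  -> 759 * 2^7
#   (∓3, ±1^23): 24 positions for the 3, 2^12 sign patterns  -> 24 * 2^12
shape_4 = comb(24, 2) * 4   # 1104
shape_2 = 759 * 2**7        # 97152
shape_3 = 24 * 2**12        # 98304
assert shape_4 + shape_2 + shape_3 == 196560
print(total, shape_4, shape_2, shape_3, shape_4 + shape_2 + shape_3)
```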
As a complex lattice The Leech lattice is also a 12-dimensional lattice over the Eisenstein integers. This is known as the complex Leech lattice, and is isomorphic to the 24-dimensional real Leech lattice. In the complex construction of the Leech lattice, the binary Golay code is replaced with the ternary Golay code, and the Mathieu group M24 is replaced with the Mathieu group M12. The E6 lattice, E8 lattice and Coxeter–Todd lattice also have constructions as complex lattices, over either the Eisenstein or Gaussian integers. Using the icosian ring The Leech lattice can also be constructed using the ring of icosians. The icosian ring is abstractly isomorphic to the E8 lattice, three copies of which can be used to construct the Leech lattice using the Turyn construction. Witt's construction In 1972 Witt gave the following construction, which he said he found in 1940, on January 28. Suppose that H is an n by n Hadamard matrix, where n = 4ab. Then a certain matrix built from H defines a bilinear form in 2n dimensions, whose kernel has n dimensions. The quotient by this kernel is a nonsingular bilinear form taking values in (1/2)Z. It has 3 sublattices of index 2 that are integral bilinear forms. Witt obtained the Leech lattice as one of these three sublattices by taking a = 2, b = 3, and taking H to be the 24 by 24 matrix (indexed by Z/23Z ∪ ∞) with entries Χ(m + n), where Χ(∞) = 1, Χ(0) = −1, and Χ(n) is the quadratic residue symbol mod 23 for nonzero n. This matrix H is a Paley matrix with some insignificant sign changes. Using a Paley matrix A related construction uses a skew Hadamard matrix of Paley type. The Niemeier lattice with root system $A_1^{24}$ can be made into a module for the ring of integers of an imaginary quadratic field; multiplying this Niemeier lattice by a non-principal ideal of the ring of integers gives the Leech lattice. Using higher power residue codes The Leech lattice can be constructed using higher power residue codes over the ring $\mathbb{Z}_4$; a similar construction is used to construct some of the other lattices of rank 24. Using octonions If L is the set of octonions with coordinates on the E8 lattice, then the Leech lattice is the set of triplets $(x, y, z)$ of such octonions satisfying certain congruence conditions modulo a fixed octonion. This construction is due to Robert Wilson. Symmetries The Leech lattice is highly symmetrical. Its automorphism group is the Conway group Co0, which is of order 8 315 553 613 086 720 000. The center of Co0 has two elements, and the quotient of Co0 by this center is the Conway group Co1, a finite simple group. Many other sporadic groups, such as the remaining Conway groups and Mathieu groups, can be constructed as the stabilizers of various configurations of vectors in the Leech lattice. Despite having such a high rotational symmetry group, the Leech lattice does not possess any hyperplanes of reflection symmetry. In other words, the Leech lattice is chiral. It also has far fewer symmetries than the 24-dimensional hypercube and simplex, or even the Cartesian product of three copies of the E8 lattice. The automorphism group was first described by John Conway. The 398034000 vectors of norm 8 fall into 8292375 'crosses' of 48 vectors. Each cross contains 24 mutually orthogonal vectors and their negatives, and thus describes the vertices of a 24-dimensional orthoplex. Each of these crosses can be taken to be the coordinate system of the lattice, and has the same symmetry as the Golay code, namely $2^{12} \times |M_{24}|$. Hence the full automorphism group of the Leech lattice has order 8292375 × 4096 × 244823040, or 8 315 553 613 086 720 000. 
Geometry Conway, Parker and Sloane showed that the covering radius of the Leech lattice is $\sqrt{2}$; in other words, if we put a closed ball of this radius around each lattice point, then these just cover Euclidean space. The points at distance at least $\sqrt{2}$ from all lattice points are called the deep holes of the Leech lattice. There are 23 orbits of them under the automorphism group of the Leech lattice, and these orbits correspond to the 23 Niemeier lattices other than the Leech lattice: the set of vertices of a deep hole is isometric to the affine Dynkin diagram of the corresponding Niemeier lattice. The Leech lattice has a density of $\frac{\pi^{12}}{12!} \approx 0.001930$. Cohn and Kumar showed that it gives the densest lattice packing of balls in 24-dimensional space. Cohn, Kumar, Miller, Radchenko and Viazovska improved this by showing that it is the densest sphere packing, even among non-lattice packings. The 196560 minimal vectors are of three different varieties, known as shapes: vectors of shape (4^2, 0^22), for all permutations and sign choices; vectors of shape (2^8, 0^16), where the '2's correspond to an octad in the Golay code, and there is any even number of minus signs; and vectors of shape (∓3, ±1^23), where the lower sign is used for the '1's of any codeword of the Golay code, and the '∓3' can appear in any position. The ternary Golay code, binary Golay code and Leech lattice give very efficient 24-dimensional spherical codes of 729, 4096 and 196560 points, respectively. Spherical codes are higher-dimensional analogues of the Tammes problem, which arose as an attempt to explain the distribution of pores on pollen grains. These are distributed so as to maximise the minimal angle between them. In two dimensions, the problem is trivial, but in three dimensions and higher it is not. An example of a spherical code in three dimensions is the set of the 12 vertices of the regular icosahedron. Theta series One can associate to any (positive-definite) lattice Λ a theta function given by $\Theta_\Lambda(z) = \sum_{x \in \Lambda} e^{i\pi z \|x\|^2}$ for $\operatorname{Im} z > 0$. The theta function of a lattice is then a holomorphic function on the upper half-plane. Furthermore, the theta function of an even unimodular lattice of rank n is actually a modular form of weight n/2 for the full modular group PSL(2,Z). The theta function of an integral lattice is often written as a power series in $q = e^{2 i \pi z}$ so that the coefficient of $q^n$ gives the number of lattice vectors of squared norm 2n. In the Leech lattice, there are 196560 vectors of squared norm 4, 16773120 vectors of squared norm 6, 398034000 vectors of squared norm 8 and so on. The theta series of the Leech lattice is $\Theta_{\Lambda_{24}}(z) = E_{12}(z) - \frac{65520}{691}\Delta(z) = 1 + \sum_{m=1}^{\infty} \frac{65520}{691}\left(\sigma_{11}(m) - \tau(m)\right) q^m$, where $E_{12}(z)$ is the normalized Eisenstein series of weight 12, $\Delta(z)$ is the modular discriminant, $\sigma_{11}(n)$ is the divisor function for exponent 11, and $\tau(n)$ is the Ramanujan tau function. It follows that for m ≥ 1 the number of vectors of squared norm 2m is $\frac{65520}{691}\left(\sigma_{11}(m) - \tau(m)\right)$. History Many of the cross-sections of the Leech lattice, including the Coxeter–Todd lattice and Barnes–Wall lattice, in 12 and 16 dimensions, were found much earlier than the Leech lattice. A related odd unimodular lattice in 24 dimensions, now called the odd Leech lattice, one of whose two even neighbors is the Leech lattice, was also discovered earlier. The Leech lattice itself was discovered in 1965 by John Leech, by improving some earlier sphere packings he had found. John Conway calculated the order of the automorphism group of the Leech lattice, and, working with John G. Thompson, discovered three new sporadic groups as a by-product: the Conway groups Co1, Co2, Co3. They also showed that four other (then) recently announced sporadic groups, namely Higman-Sims, Suzuki, McLaughlin, and the Janko group J2, could be found inside the Conway groups using the geometry of the Leech lattice. 
(Ronan, p. 155)
Witt's 1941 paper has a single rather cryptic sentence mentioning that he had found more than 10 even unimodular lattices in 24 dimensions, without giving further details. Witt later stated that he had found 9 of these lattices earlier, in 1938, and found two more, the Niemeier lattice with A1^24 root system and the Leech lattice (and also the odd Leech lattice), in 1940.
See also
Sphere packing
E8 lattice
References
External links
Leech lattice (CP4space)
The Leech Lattice, U. of Illinois at Chicago, Mark Ronan's website
Papers by R. E. Borcherds
Quadratic forms
Lattice points
Sporadic groups
Moonshine theory
Leech lattice
[ "Mathematics" ]
3,599
[ "Lattice points", "Quadratic forms", "Number theory" ]
344,922
https://en.wikipedia.org/wiki/Neuroevolution%20of%20augmenting%20topologies
NeuroEvolution of Augmenting Topologies (NEAT) is a genetic algorithm (GA) for the generation of evolving artificial neural networks (a neuroevolution technique) developed by Kenneth Stanley and Risto Miikkulainen in 2002 while at The University of Texas at Austin. It alters both the weighting parameters and structures of networks, attempting to find a balance between the fitness of evolved solutions and their diversity. It is based on applying three key techniques: tracking genes with history markers to allow crossover among topologies, applying speciation (the evolution of species) to preserve innovations, and developing topologies incrementally from simple initial structures ("complexifying").
Performance
On simple control tasks, the NEAT algorithm often arrives at effective networks more quickly than other contemporary neuro-evolutionary techniques and reinforcement learning methods, as of 2006.
Algorithm
Traditionally, a neural network topology is chosen by a human experimenter, and effective connection weight values are learned through a training procedure. This means that a trial-and-error process may be necessary in order to determine an appropriate topology. NEAT is an example of a topology and weight evolving artificial neural network (TWEANN), which attempts to learn weight values and an appropriate topology for a neural network simultaneously.
In order to encode the network into a genotype for the GA, NEAT uses a direct encoding scheme, which means every connection and neuron is explicitly represented. This is in contrast to indirect encoding schemes, which define rules that allow the network to be constructed without explicitly representing every connection and neuron, allowing for more compact representation.
The NEAT approach begins with a perceptron-like feed-forward network of only input neurons and output neurons. As evolution progresses through discrete steps, the complexity of the network's topology may grow, either by inserting a new neuron into a connection path, or by creating a new connection between (formerly unconnected) neurons.
Competing conventions
The competing conventions problem arises when there is more than one way of representing information in a phenotype. For example, if a genome contains neurons A, B and C represented by [A B C], and this genome is crossed with a functionally identical genome ordered [C B A], crossover will yield children that are missing information ([A B A] or [C B C]); in fact, 1/3 of the information has been lost in this example. NEAT solves this problem by tracking the history of genes through the use of a global innovation number, which increases as new genes are added. When adding a new gene, the global innovation number is incremented and assigned to that gene. Thus the higher the number, the more recently the gene was added. For a particular generation, if an identical mutation occurs in more than one genome, both copies are given the same number; beyond that, however, the mutation number will remain unchanged indefinitely. These innovation numbers allow NEAT to match up genes that can be crossed with each other, as sketched below.
Implementation
The original implementation by Ken Stanley is published under the GPL. It integrates with Guile, a GNU Scheme interpreter. This implementation of NEAT is considered the conventional basic starting point for implementations of the NEAT algorithm.
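To make the historical-marking idea concrete, here is a minimal, hypothetical sketch (not Stanley's implementation) of how innovation numbers let two genomes be aligned for crossover; the gene representation and inheritance rules below are simplified assumptions:

```python
import random

# A connection gene carries an innovation number identifying its historical
# origin, so genomes can be aligned regardless of gene ordering.
# Gene: (innovation_number, in_node, out_node, weight, enabled)

def crossover(parent1, parent2, fitness1, fitness2):
    """Align genes by innovation number. Matching genes are inherited
    randomly from either parent; disjoint and excess genes come from the
    fitter parent (a common NEAT convention)."""
    genes1 = {g[0]: g for g in parent1}
    genes2 = {g[0]: g for g in parent2}
    fitter = genes1 if fitness1 >= fitness2 else genes2
    child = []
    for innov in sorted(set(genes1) | set(genes2)):
        if innov in genes1 and innov in genes2:           # matching gene
            child.append(random.choice((genes1[innov], genes2[innov])))
        elif innov in fitter:                             # disjoint/excess gene
            child.append(fitter[innov])
    return child

# Toy usage: two genomes encoding overlapping connections in different order.
p1 = [(1, 0, 3, 0.5, True), (2, 1, 3, -0.7, True), (4, 3, 4, 1.1, True)]
p2 = [(2, 1, 3, 0.2, True), (1, 0, 3, 0.9, True), (3, 2, 3, 0.4, True)]
print(crossover(p1, p2, fitness1=1.0, fitness2=2.0))
```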
Extensions
rtNEAT
In 2003, Stanley devised an extension to NEAT that allows evolution to occur in real time rather than through the iteration of generations, as used by most genetic algorithms. The basic idea is to put the population under constant evaluation with a "lifetime" timer on each individual in the population. When a network's timer expires, its current fitness measure is examined to see whether it falls near the bottom of the population; if so, the network is discarded and replaced by a new network bred from two high-fitness parents. A timer is set for the new network, and it is placed in the population to participate in the ongoing evaluations (a simplified sketch of this replacement loop appears at the end of this section).
The first application of rtNEAT is a video game called Neuro-Evolving Robotic Operatives, or NERO. In the first phase of the game, individual players deploy robots in a 'sandbox' and train them to some desired tactical doctrine. Once a collection of robots has been trained, a second phase of play allows players to pit their robots in a battle against robots trained by some other player, to see how well their training regimens prepared their robots for battle.
Phased pruning
An extension of Ken Stanley's NEAT, developed by Colin Green, adds periodic pruning of the network topologies of candidate solutions during the evolution process. This addition addressed the concern that unbounded automated growth would generate unnecessary structure.
HyperNEAT
HyperNEAT is specialized to evolve large-scale structures. It was originally based on compositional pattern-producing networks (CPPNs) and is an active field of research.
cgNEAT
Content-Generating NEAT (cgNEAT) evolves custom video game content based on user preferences. The first video game to implement cgNEAT is Galactic Arms Race, a space-shooter game in which unique particle system weapons are evolved based on player usage statistics. Each particle system weapon in the game is controlled by an evolved CPPN, similarly to the evolution technique in the NEAT Particles interactive art program.
odNEAT
odNEAT is an online and decentralized version of NEAT designed for multi-robot systems. odNEAT is executed onboard the robots themselves during task execution to continuously optimize the parameters and the topology of the artificial neural network-based controllers. In this way, robots executing odNEAT have the potential to adapt to changing conditions and learn new behaviors as they carry out their tasks. The online evolutionary process is implemented according to a physically distributed island model. Each robot optimizes an internal population of candidate solutions (intra-island variation), and two or more robots exchange candidate solutions when they meet (inter-island migration). In this way, each robot is potentially self-sufficient, and the evolutionary process capitalizes on the exchange of controllers between multiple robots for faster synthesis of effective controllers.
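Below is the promised sketch of the rtNEAT-style replacement loop. It is an illustrative assumption of how such a step could look, not the NERO implementation; the individual representation, `breed` and `evaluate` hooks, and the bottom-fraction rule are all made up for the example:

```python
import random

def rtneat_tick(population, breed, evaluate, lifetime, bottom_fraction=0.2):
    """One real-time step over a list of individuals, each a dict with
    'genome', 'fitness', and 'age'. Individuals are under constant
    evaluation; when one's lifetime timer expires, it is replaced by the
    offspring of two high-fitness parents only if it ranks near the bottom."""
    ranked = sorted(population, key=lambda ind: ind["fitness"])
    cutoff = max(1, int(len(ranked) * bottom_fraction))
    worst_ids = {id(ind) for ind in ranked[:cutoff]}
    best = ranked[-2:]                       # parent pool: two fittest
    for ind in population:
        ind["age"] += 1
        if ind["age"] < lifetime:
            continue
        ind["age"] = 0                       # reset the lifetime timer
        if id(ind) in worst_ids:             # low fitness: discard and replace
            mom, dad = random.sample(best, 2)
            ind["genome"] = breed(mom["genome"], dad["genome"])
            ind["fitness"] = evaluate(ind["genome"])
    return population
```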
See also
Evolutionary acquisition of neural topologies
References
Bibliography
Implementations
Stanley's original, mtNEAT and rtNEAT for C++
ECJ, JNEAT, NEAT 4J, ANJI for Java
SharpNEAT for C#
MultiNEAT and mtNEAT for C++ and Python
neat-python for Python
NeuralFit (not an exact implementation) for Python
Encog for Java and C#
peas for Python
RubyNEAT for Ruby
neatjs for JavaScript
Neataptic for JavaScript (not an exact implementation)
Neat-Ex for Elixir
EvolutionNet for C++
goNEAT for Go
External links
NEAT Homepage
"Evolutionary Complexity Research Group at UCF" - Ken Stanley's current research group
NERO: Neuro-Evolving Robotic Operatives - an example application of rtNEAT
GAR: Galactic Arms Race - an example application of cgNEAT
"PicBreeder.org" - Online, collaborative art generated by CPPNs evolved with NEAT.
"EndlessForms.com" - A 3D version of Picbreeder, where you interactively evolve 3D objects that are encoded with CPPNs and evolved with NEAT.
BEACON Blog: What is neuroevolution?
MarI/O - Machine Learning for Video Games, a YouTube video demonstrating an implementation of NEAT learning to play Super Mario World
"GekkoQuant.com" - A visual tutorial series on NEAT, including solving the classic pole balancing problem using NEAT in R
"Artificial intelligence learns Mario level in just 34 attempts"
NEAT explained via MarI/O program
Evolutionary algorithms and artificial neuronal networks
Evolutionary computation
Genetic algorithms
Neuroevolution of augmenting topologies
[ "Biology" ]
1,552
[ "Bioinformatics", "Evolutionary computation", "Genetics techniques", "Genetic algorithms" ]
344,933
https://en.wikipedia.org/wiki/Index%20of%20optics%20articles
Optics is the branch of physics which involves the behavior and properties of light, including its interactions with matter and the construction of instruments that use or detect it. Optics usually describes the behavior of visible, ultraviolet, and infrared light. Because light is an electromagnetic wave, other forms of electromagnetic radiation such as X-rays, microwaves, and radio waves exhibit similar properties.
See also
:Category:Optical components
:Category:Optical materials
References
External links
Index
Optics
Optics
Index of optics articles
[ "Physics", "Chemistry" ]
116
[ "Applied and interdisciplinary physics", "Optics", " molecular", "Atomic", " and optical physics" ]
344,948
https://en.wikipedia.org/wiki/Partial%20charge
In atomic physics, a partial charge (or net atomic charge) is a non-integer charge value when measured in elementary charge units. It is represented by the Greek lowercase delta (𝛿), namely 𝛿− or 𝛿+. Partial charges are created due to the asymmetric distribution of electrons in chemical bonds. For example, in a polar covalent bond like HCl, the shared electrons are, on average, drawn closer to the more electronegative chlorine atom. The resulting partial charges are a property only of zones within the distribution, and not the assemblage as a whole. For example, chemists often choose to look at a small space surrounding the nucleus of an atom: when an electrically neutral atom bonds chemically to another neutral atom that is more electronegative, its electrons are partially drawn away. This leaves the region about that atom's nucleus with a partial positive charge, and it creates a partial negative charge on the atom to which it is bonded.
In such a situation, the distributed charges taken as a group always carry a whole number of elementary charge units. Yet one can point to zones within the assemblage where less than a full charge resides, such as the area around an atom's nucleus. This is possible in part because particles are not like mathematical points (which must be either inside a zone or outside it) but are smeared out by the uncertainty principle of quantum mechanics. Because of this smearing effect, if one defines a sufficiently small zone, a fundamental particle may be both partly inside and partly outside it.
Uses
Partial atomic charges are used in molecular mechanics force fields to compute the electrostatic interaction energy using Coulomb's law, even though this leads to substantial failures for anisotropic charge distributions. Partial charges are also often used for a qualitative understanding of the structure and reactivity of molecules.
Occasionally, δδ+ is used to indicate a partial charge that is less positively charged than δ+ (likewise for δδ−) in cases where it is relevant to do so. This can be extended to δδδ+ to indicate even weaker partial charges as well. Generally, a single δ+ (or δ−) is sufficient for most discussions of partial charge in organic chemistry.
Determining partial atomic charges
Partial atomic charges can be used to quantify the degree of ionic versus covalent bonding of any compound across the periodic table. The necessity for such quantities arises, for example, in molecular simulations to compute bulk and surface properties in agreement with experiment. Evidence for chemically different compounds shows that available experimental data and chemical understanding lead to justified atomic charges. Atomic charges for a given compound can be derived in multiple ways, such as:
extracted from electron densities measured using high resolution x-ray, gamma ray, or electron beam diffraction experiments
measured dipole moments
the Extended Born thermodynamic cycle, including an analysis of covalent and ionic bonding contributions
spectroscopically measured properties, such as core-electron binding energy shifts
the relationship of atomic charges to melting points, solubility, and cleavage energies for a set of similar compounds with similar degree of covalent bonding
the relationship of atomic charges to chemical reactivity and reaction mechanisms for similar compounds reported in the literature.
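To illustrate the force-field use mentioned under "Uses", here is a minimal sketch of the pairwise Coulomb sum a molecular mechanics code performs over partial charges; the coordinates and charge values in the toy example are made-up illustrations, not a validated model:

```python
import itertools, math

COULOMB_K = 332.0637  # kcal*angstrom/(mol*e^2), common in molecular mechanics

def electrostatic_energy(atoms):
    """Pairwise Coulomb energy from partial charges, as force fields compute it.
    atoms: list of (partial_charge_in_e, (x, y, z) in angstroms)."""
    energy = 0.0
    for (qi, ri), (qj, rj) in itertools.combinations(atoms, 2):
        r = math.dist(ri, rj)
        energy += COULOMB_K * qi * qj / r
    return energy  # kcal/mol

# Toy HCl-like dipole: delta+ on H, delta- on Cl (illustrative values only).
print(electrostatic_energy([(+0.18, (0.00, 0.0, 0.0)),
                            (-0.18, (1.27, 0.0, 0.0))]))
```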
The discussion of individual compounds in prior work has shown convergence in atomic charges, i.e., a high level of consistency between the assigned degree of polarity and the physical-chemical properties mentioned above. The resulting uncertainty in atomic charges is ±0.1e to ±0.2e for highly charged compounds, and often <0.1e for compounds with atomic charges below ±1.0e. Often, the application of one or two of the above concepts already leads to very good values, especially taking into account a growing library of experimental benchmark compounds and compounds with tested force fields.
The published research literature on partial atomic charges varies in quality from extremely poor to extremely well-done. Although a large number of different methods for assigning partial atomic charges from quantum chemistry calculations have been proposed over many decades, the vast majority of proposed methods do not work well across a wide variety of material types. Only as recently as 2016 was a method for theoretically computing partial atomic charges developed that performs consistently well across an extremely wide variety of material types. All of the earlier methods had fundamental deficiencies that prevented them from assigning accurate partial atomic charges in many materials. Mulliken and Löwdin partial charges are physically unreasonable, because they do not have a mathematical limit as the basis set is improved towards completeness. Hirshfeld partial charges are usually too low in magnitude. Some methods for assigning partial atomic charges do not converge to a unique solution. In some materials, atoms-in-molecules analysis yields non-nuclear attractors describing electron density partitions that cannot be assigned to any atom in the material; in such cases, atoms-in-molecules analysis cannot assign partial atomic charges.
According to Cramer (2002), partial charge methods can be divided into four classes:
Class I charges are those that are not determined from quantum mechanics, but from some intuitive or arbitrary approach. These approaches can be based on experimental data such as dipoles and electronegativities.
Class II charges are derived from partitioning the molecular wave function using some arbitrary, orbital-based scheme.
Class III charges are based on a partitioning of a physical observable derived from the wave function, such as electron density.
Class IV charges are derived from a semiempirical mapping of a precursor charge of type II or III to reproduce experimentally determined observables such as dipole moments.
The following is a detailed list of methods, partly based on Meister and Schwarz (1994).
Population analysis of wavefunctions
Mulliken population analysis
Löwdin population analysis
Coulson's charges
Natural charges
CM1, CM2, CM3, CM4, and CM5 charge models
Partitioning of electron density distributions
Bader charges (obtained from an atoms in molecules analysis)
Density fitted atomic charges
Hirshfeld charges
Maslen's corrected Bader charges
Politzer's charges
Voronoi Deformation Density charges
Density Derived Electrostatic and Chemical (DDEC) charges, which simultaneously reproduce the chemical states of atoms in a material and the electrostatic potential surrounding the material's electron density distribution
Charges derived from dipole-dependent properties
Dipole charges
Dipole derivative charges, also called atomic polar tensor (APT) derived charges, or Born, Callen, or Szigeti effective charges
Charges derived from electrostatic potential
Chelp
ChelpG (Breneman model)
Merz-Singh-Kollman (also known as Merz-Kollman, or MK)
RESP (Restrained Electrostatic Potential)
Charges derived from spectroscopic data
Charges from infrared intensities
Charges from X-ray photoelectron spectroscopy (ESCA)
Charges from X-ray emission spectroscopy
Charges from X-ray absorption spectra
Charges from ligand-field splittings
Charges from UV-vis intensities of transition metal complexes
Charges from other spectroscopies, such as NMR, EPR, EQR
Charges from other experimental data
Charges from bandgaps or dielectric constants
Apparent charges from the piezoelectric effect
Charges derived from adiabatic potential energy curves
Electronegativity-based charges
Other physicochemical data, such as equilibrium and reaction rate constants, thermochemistry, and liquid densities
Formal charges
References
Computational chemistry
Electric charge
Partial charge
[ "Physics", "Chemistry", "Mathematics" ]
1,510
[ "Physical quantities", "Electric charge", "Quantity", "Theoretical chemistry", "Computational chemistry", "Wikipedia categories named after physical quantities" ]
344,971
https://en.wikipedia.org/wiki/Binary%20Golay%20code
In mathematics and electronics engineering, a binary Golay code is a type of linear error-correcting code used in digital communications. The binary Golay code, along with the ternary Golay code, has a particularly deep and interesting connection to the theory of finite sporadic groups in mathematics. These codes are named in honor of Marcel J. E. Golay, whose 1949 paper introducing them has been called, by E. R. Berlekamp, the "best single published page" in coding theory.
There are two closely related binary Golay codes. The extended binary Golay code, G24 (sometimes just called the "Golay code" in finite group theory), encodes 12 bits of data in a 24-bit word in such a way that any 3-bit errors can be corrected or any 4-bit errors can be detected. The other, the perfect binary Golay code, G23, has codewords of length 23 and is obtained from the extended binary Golay code by deleting one coordinate position (conversely, the extended binary Golay code is obtained from the perfect binary Golay code by adding a parity bit). In standard coding notation, the codes have parameters [24, 12, 8] and [23, 12, 7], corresponding to the length of the codewords, the dimension of the code, and the minimum Hamming distance between two codewords, respectively.
Mathematical definition
In mathematical terms, the extended binary Golay code G24 consists of a 12-dimensional linear subspace W of the space V of 24-bit words such that any two distinct elements of W differ in at least 8 coordinates. W is called a linear code because it is a vector space. In all, W comprises 2^12 = 4096 elements.
The elements of W are called code words. They can also be described as subsets of a set of 24 elements, where addition is defined as taking the symmetric difference of the subsets.
In the extended binary Golay code, all code words have Hamming weights of 0, 8, 12, 16, or 24. Code words of weight 8 are called octads and code words of weight 12 are called dodecads.
Octads of the code G24 are elements of the S(5,8,24) Steiner system. There are 759 octads and 759 complements thereof. It follows that there are 2576 dodecads.
Two octads intersect (have 1's in common) in 0, 2, or 4 coordinates in the binary vector representation (these are the possible intersection sizes in the subset representation). An octad and a dodecad intersect at 2, 4, or 6 coordinates.
Up to relabeling coordinates, W is unique.
The binary Golay code, G23, is a perfect code. That is, the spheres of radius three around code words form a partition of the vector space. G23 is a 12-dimensional subspace of the space F2^23.
The automorphism group of the perfect binary Golay code G23 (meaning the subgroup of the group S23 of permutations of the coordinates of F2^23 which leave G23 invariant) is the Mathieu group M23. The automorphism group of the extended binary Golay code is the Mathieu group M24, of order 244823040. M24 is transitive on octads and on dodecads. The other Mathieu groups occur as stabilizers of one or several elements of W.
There is a single word of weight 24, which is a 1-dimensional invariant subspace. M24 therefore has an 11-dimensional irreducible representation on the field with 2 elements. In addition, since the binary Golay code is a 12-dimensional subspace of a 24-dimensional space, M24 also acts on the 12-dimensional quotient space, called the binary Golay cocode. A word in the cocode is in the same coset as a word of weight 0, 1, 2, 3, or 4. In the last case, 6 (disjoint) cocode words all lie in the same coset.
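The weight-distribution facts above (4096 codewords, 759 octads, 2576 dodecads, minimum distance 8) can be verified computationally. The sketch below anticipates the cyclic-code construction described in the Constructions section: rather than hard-coding a generator polynomial, it searches for a degree-11 divisor of x^23 − 1 over GF(2) by trial division, generates the perfect [23,12,7] code from it, and parity-extends to the [24,12,8] code:

```python
def gf2_divides(d, n_poly):
    """Polynomial division over GF(2); polynomials as ints (bit i = coeff of x^i)."""
    r = n_poly
    while r and r.bit_length() >= d.bit_length():
        r ^= d << (r.bit_length() - d.bit_length())
    return r == 0

# x^23 + 1 over GF(2) (same as x^23 - 1); find a degree-11 factor by trial division.
x23_plus_1 = (1 << 23) | 1
gen = next(g for g in range(1 << 11, 1 << 12)
           if g & 1 and gf2_divides(g, x23_plus_1))

def poly_mul(a, b):
    """Multiply two GF(2) polynomials (carry-less multiplication)."""
    out = 0
    while b:
        if b & 1:
            out ^= a
        a, b = a << 1, b >> 1
    return out

# All 2^12 codewords m(x)*g(x) of the perfect [23,12] code, then a parity bit.
codewords = [poly_mul(m, gen) for m in range(1 << 12)]
extended = [c | ((bin(c).count("1") & 1) << 23) for c in codewords]

weights = [bin(c).count("1") for c in extended]
assert len(set(extended)) == 4096
assert min(w for w in weights if w) == 8   # minimum distance 8
assert weights.count(8) == 759             # octads
assert weights.count(12) == 2576           # dodecads
print("octads:", weights.count(8), "dodecads:", weights.count(12))
```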
There is an 11-dimensional invariant subspace, consisting of cocode words with odd weight, which gives M24 a second 11-dimensional representation on the field with 2 elements.
Constructions
Lexicographic code: Order the vectors in V lexicographically (i.e., interpret them as unsigned 24-bit binary integers and take the usual ordering). Starting with w0 = 0, define w1, w2, ..., w12 by the rule that wn is the smallest integer which differs from all linear combinations of previous elements in at least eight coordinates. Then W can be defined as the span of w1, ..., w12.
Mathieu group: Witt in 1938 published a construction of the largest Mathieu group that can be used to construct the extended binary Golay code.
Quadratic residue code: Consider the set N of quadratic non-residues (mod 23). This is an 11-element subset of the cyclic group Z/23Z. Consider the translates t+N of this subset. Augment each translate to a 12-element set St by adding an element ∞. Then labeling the basis elements of V by 0, 1, 2, ..., 22, ∞, W can be defined as the span of the words St together with the word consisting of all basis vectors. (The perfect code is obtained by leaving out ∞.)
As a cyclic code: The perfect G23 code can be constructed via the factorization of x^23 − 1 over the binary field GF(2). Either of the two degree-11 irreducible factors can be used as the generator polynomial of the code.
Turyn's construction of 1967, "A Simple Construction of the Binary Golay Code," which starts from the Hamming code of length 8 and does not use the quadratic residues mod 23.
From the Steiner system S(5,8,24), consisting of 759 subsets of a 24-set. If one interprets the support of each subset as a 0-1-codeword of length 24 (with Hamming weight 8), these are the "octads" in the binary Golay code. The entire Golay code can be obtained by repeatedly taking the symmetric differences of subsets, i.e. binary addition. An easier way to write down the Steiner system, resp. the octads, is the Miracle Octad Generator of R. T. Curtis, which uses a particular 1:1 correspondence between the 35 partitions of an 8-set into two 4-sets and the 35 partitions of the finite vector space into 4 planes. Nowadays often the compact approach of Conway's hexacode, which uses a 4×6 array of square cells, is used.
Winning positions in the mathematical game of Mogul: a position in Mogul is a row of 24 coins. Each turn consists of flipping from one to seven coins such that the leftmost of the flipped coins goes from head to tail. The losing positions are those with no legal move. If heads are interpreted as 1 and tails as 0, then moving to a codeword from the extended binary Golay code guarantees it will be possible to force a win.
A generator matrix for the binary Golay code is [I A], where I is the 12×12 identity matrix, and A is the complement of the adjacency matrix of the icosahedron.
A convenient representation
It is convenient to use the "Miracle Octad Generator" format, with coordinates in an array of 4 rows, 6 columns. Addition is taking the symmetric difference. All 6 columns have the same parity, which equals that of the top row.
A partition of the 6 columns into 3 pairs of adjacent ones constitutes a trio. This is a partition into 3 octad sets. A subgroup, the projective special linear group PSL(2,7) × S3 of a trio subgroup of M24, is useful for generating a basis. PSL(2,7) permutes the octads internally, in parallel. S3 permutes the 3 octads bodily.
The basis begins with octad T:
0 1 1 1 1 1
1 0 0 0 0 0
1 0 0 0 0 0
1 0 0 0 0 0
and 5 similar octads.
The sum N of all 6 of these code words consists of all 1's. Adding N to a code word produces its complement.
Griess (p. 59) uses the labeling:
∞ 0 | ∞ 0 | ∞ 0
3 2 | 3 2 | 3 2
5 1 | 5 1 | 5 1
6 4 | 6 4 | 6 4
PSL(2,7) is naturally the linear fractional group generated by (0123456) and (0∞)(16)(23)(45). The 7-cycle acts on T to give a subspace including also the basis elements
0 1 1 0 1 0
0 0 0 0 0 0
0 1 0 1 0 1
1 1 0 0 0 0
and
0 1 1 0 1 0
0 1 0 1 0 1
1 1 0 0 0 0
0 0 0 0 0 0
The resulting 7-dimensional subspace has a 3-dimensional quotient space upon ignoring the latter 2 octads.
There are 4 other code words of similar structure that complete the basis of 12 code words for this representation of W.
W has a subspace of dimension 4, symmetric under PSL(2,7) × S3, spanned by N and 3 dodecads formed of subsets {0,3,5,6}, {0,1,4,6}, and {0,1,2,5}.
Practical applications of Golay codes
NASA deep space missions
Error correction was vital to data transmission in the Voyager 1 and 2 spacecraft, particularly because memory constraints dictated offloading data virtually instantly, leaving no second chances. Hundreds of color pictures of Jupiter and Saturn in their 1979, 1980, and 1981 fly-bys would be transmitted within a constrained telecommunications bandwidth. Color image transmission required three times as much data as black and white images, so the 7-error-correcting Reed–Muller code that had been used to transmit the black and white Mariner images was replaced with the much higher data rate Golay (24,12,8) code.
Radio communications
The MIL-STD-188 American military standards for automatic link establishment in high frequency radio systems specify the use of an extended (24,12) Golay code for forward error correction.
In two-way radio communication, digital-coded squelch (DCS, CDCSS) systems use a 23-bit Golay (23,12) code word, which has the ability to detect and correct errors of 3 or fewer bits.
See also
Leech lattice
Linear code
References
Sources
Error detection and correction
Binary Golay code
[ "Engineering" ]
2,203
[ "Error detection and correction", "Reliability engineering" ]
344,974
https://en.wikipedia.org/wiki/Chemosynthesis
In biochemistry, chemosynthesis is the biological conversion of one or more carbon-containing molecules (usually carbon dioxide or methane) and nutrients into organic matter using the oxidation of inorganic compounds (e.g., hydrogen gas, hydrogen sulfide) or ferrous ions as a source of energy, rather than sunlight, as in photosynthesis. Chemoautotrophs, organisms that obtain carbon from carbon dioxide through chemosynthesis, are phylogenetically diverse. Groups that include conspicuous or biogeochemically important taxa include the sulfur-oxidizing Gammaproteobacteria, the Campylobacterota, the Aquificota, the methanogenic archaea, and the neutrophilic iron-oxidizing bacteria.
Many microorganisms in dark regions of the oceans use chemosynthesis to produce biomass from single-carbon molecules. Two categories can be distinguished. In the rare sites where hydrogen molecules (H2) are available, the energy available from the reaction between CO2 and H2 (leading to production of methane, CH4) can be large enough to drive the production of biomass. Alternatively, in most oceanic environments, energy for chemosynthesis derives from reactions in which substances such as hydrogen sulfide or ammonia are oxidized. This may occur with or without the presence of oxygen.
Many chemosynthetic microorganisms are consumed by other organisms in the ocean, and symbiotic associations between chemosynthesizers and respiring heterotrophs are quite common. Large populations of animals can be supported by chemosynthetic secondary production at hydrothermal vents, methane clathrates, cold seeps, whale falls, and isolated cave water.
It has been hypothesized that anaerobic chemosynthesis may support life below the surface of Mars, Jupiter's moon Europa, and other planets. Chemosynthesis may have also been the first type of metabolism that evolved on Earth, leading the way for cellular respiration and photosynthesis to develop later.
Hydrogen sulfide chemosynthesis process
Giant tube worms use bacteria in their trophosome to fix carbon dioxide (using hydrogen sulfide as their energy source) and produce sugars and amino acids. Some reactions produce sulfur. Hydrogen sulfide chemosynthesis:
18H2S + 6CO2 + 3O2 → C6H12O6 (carbohydrate) + 12H2O + 18S
Instead of releasing oxygen gas while fixing carbon dioxide as in photosynthesis, hydrogen sulfide chemosynthesis produces solid globules of sulfur in the process. In bacteria capable of chemoautotrophy (a form of chemosynthesis), such as purple sulfur bacteria, yellow globules of sulfur are present and visible in the cytoplasm.
Discovery
In 1890, Sergei Winogradsky proposed a novel type of life process called "anorgoxydant". His discovery suggested that some microbes could live solely on inorganic matter, and it emerged during his physiological research in the 1880s in Strasbourg and Zürich on sulfur, iron, and nitrogen bacteria.
In 1897, Wilhelm Pfeffer coined the term "chemosynthesis" for the energy production by oxidation of inorganic substances, in association with autotrophic carbon dioxide assimilation, what would be named today as chemolithoautotrophy. Later, the term would be expanded to include also chemoorganoautotrophs, which are organisms that use organic energy substrates in order to assimilate carbon dioxide. Thus, chemosynthesis can be seen as a synonym of chemoautotrophy.
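Returning briefly to the hydrogen sulfide equation given above, a quick atom count (an added verification sketch, not part of the article) confirms the reaction is balanced:

```python
from collections import Counter

# 18 H2S + 6 CO2 + 3 O2 -> C6H12O6 + 12 H2O + 18 S
def atoms(count, formula):
    """formula: dict of element -> atoms per molecule; scale by molecule count."""
    return Counter({el: n * count for el, n in formula.items()})

left  = atoms(18, {"H": 2, "S": 1}) + atoms(6, {"C": 1, "O": 2}) + atoms(3, {"O": 2})
right = atoms(1, {"C": 6, "H": 12, "O": 6}) + atoms(12, {"H": 2, "O": 1}) + atoms(18, {"S": 1})

assert left == right, (left, right)
print(dict(left))  # {'H': 36, 'S': 18, 'C': 6, 'O': 18} on both sides
```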
The term "chemotrophy", less restrictive, was introduced in the 1940s by André Lwoff for the production of energy by the oxidation of electron donors, organic or not, associated with auto- or heterotrophy. Hydrothermal vents Winogradsky's suggestion was confirmed nearly 90 years later, when hydrothermal ocean vents were discovered in the 1970s. The hot springs and strange creatures were discovered by Alvin, the world's first deep-sea submersible, in 1977 at the Galapagos Rift. At about the same time, then-graduate student Colleen Cavanaugh proposed chemosynthetic bacteria that oxidize sulfides or elemental sulfur as a mechanism by which tube worms could survive near hydrothermal vents. Cavanaugh later managed to confirm that this was indeed the method by which the worms could thrive, and is generally credited with the discovery of chemosynthesis. A 2004 television series hosted by Bill Nye named chemosynthesis as one of the 100 greatest scientific discoveries of all time. Oceanic crust In 2013, researchers reported their discovery of bacteria living in the rock of the oceanic crust below the thick layers of sediment, and apart from the hydrothermal vents that form along the edges of the tectonic plates. Preliminary findings are that these bacteria subsist on the hydrogen produced by chemical reduction of olivine by seawater circulating in the small veins that permeate the basalt that comprises oceanic crust. The bacteria synthesize methane by combining hydrogen and carbon dioxide. Chemosynthesis as an innovative area for continuing research Despite the fact that the process of chemosynthesis has been known for more than a hundred years, its significance and importance are still relevant today in the transformation of chemical elements in biogeochemical cycles. Today, the vital processes of nitrifying bacteria, which lead to the oxidation of ammonia to nitric acid, require scientific substantiation and additional research. The ability of bacteria to convert inorganic substances into organic ones suggests that chemosynthetics can accumulate valuable resources for human needs. Chemosynthetic communities in different environments are important biological systems in terms of their ecology, evolution and biogeography, as well as their potential as indicators of the availability of permanent hydrocarbon- based energy sources. In the process of chemosynthesis, bacteria produce organic matter where photosynthesis is impossible. Isolation of thermophilic sulfate-reducing bacteria Thermodesulfovibrio yellowstonii and other types of chemosynthetics provides prospects for further research. Thus, the importance of chemosynthesis remains relevant for use in innovative technologies, conservation of ecosystems, human life in general. Sergey Winogradsky helped discover the phenomenon of chemosynthesis. See also Primary nutritional groups Autotroph Heterotroph Photosynthesis Movile Cave References External links Chemosynthetic Communities in the Gulf of Mexico Biological processes Metabolism Environmental microbiology Ecosystems
Chemosynthesis
[ "Chemistry", "Biology", "Environmental_science" ]
1,364
[ "Symbiosis", "Ecosystems", "nan", "Cellular processes", "Biochemistry", "Environmental microbiology", "Metabolism" ]
345,017
https://en.wikipedia.org/wiki/Finite%20volume%20method
The finite volume method (FVM) is a method for representing and evaluating partial differential equations in the form of algebraic equations. In the finite volume method, volume integrals in a partial differential equation that contain a divergence term are converted to surface integrals, using the divergence theorem. These terms are then evaluated as fluxes at the surfaces of each finite volume. Because the flux entering a given volume is identical to that leaving the adjacent volume, these methods are conservative. Another advantage of the finite volume method is that it is easily formulated to allow for unstructured meshes. The method is used in many computational fluid dynamics packages. "Finite volume" refers to the small volume surrounding each node point on a mesh.
Finite volume methods can be compared and contrasted with finite difference methods, which approximate derivatives using nodal values, or finite element methods, which create local approximations of a solution using local data, and construct a global approximation by stitching them together. In contrast, a finite volume method evaluates exact expressions for the average value of the solution over some volume, and uses this data to construct approximations of the solution within cells.
Example
Consider a simple 1D advection problem:
∂ρ/∂t + ∂f/∂x = 0. (1)
Here, ρ = ρ(x,t) represents the state variable and f = f(ρ(x,t)) represents the flux or flow of ρ. Conventionally, positive f represents flow to the right while negative f represents flow to the left. If we assume that equation (1) represents a flowing medium of constant area, we can sub-divide the spatial domain, x, into finite volumes or cells with cell centers indexed as i. For a particular cell, i, we can define the volume average value of ρ at time t = t1 as
ρ̄_i(t1) = (1/Δx_i) ∫ ρ(x, t1) dx, (2)
and at time t = t2 as
ρ̄_i(t2) = (1/Δx_i) ∫ ρ(x, t2) dx, (3)
where the integrals run from x_{i−1/2} to x_{i+1/2}, Δx_i = x_{i+1/2} − x_{i−1/2}, and x_{i−1/2} and x_{i+1/2} represent locations of the upstream and downstream faces or edges respectively of the ith cell.
Integrating equation (1) in time, we have:
ρ(x, t2) = ρ(x, t1) − ∫_{t1}^{t2} (∂f/∂x) dt. (4)
To obtain the volume average of ρ at time t = t2, we integrate ρ(x, t2) over the cell volume and divide the result by Δx_i, i.e.
ρ̄_i(t2) = ρ̄_i(t1) − (1/Δx_i) ∫ ∫_{t1}^{t2} (∂f/∂x) dt dx. (5)
We assume that f is well behaved and that we can reverse the order of integration. Also, recall that flow is normal to the unit area of the cell. Now, since in one dimension the divergence of f is just ∂f/∂x, we can apply the divergence theorem, i.e. ∫_v (∇·f) dv = ∮_S f·n dS, and substitute for the volume integral of the divergence with the values of f evaluated at the cell surface (edges x_{i−1/2} and x_{i+1/2}) of the finite volume as follows:
ρ̄_i(t2) = ρ̄_i(t1) − (1/Δx_i) ∫_{t1}^{t2} (f_{i+1/2} − f_{i−1/2}) dt, (6)
where f_{i±1/2} = f(ρ(x_{i±1/2}, t)).
We can therefore derive a semi-discrete numerical scheme for the above problem with cell centers indexed as i, and with cell edge fluxes indexed as i ± 1/2, by differentiating (6) with respect to time to obtain:
dρ̄_i/dt + (1/Δx_i) (f_{i+1/2} − f_{i−1/2}) = 0, (7)
where values for the edge fluxes, f_{i±1/2}, can be reconstructed by interpolation or extrapolation of the cell averages. Equation (7) is exact for the volume averages; i.e., no approximations have been made during its derivation. A minimal numerical sketch of this scheme is given below, after the general formulation is introduced.
This method can also be applied to a 2D situation by considering the north and south faces along with the east and west faces around a node.
General conservation law
We can also consider the general conservation law problem, represented by the following PDE:
∂u/∂t + ∇·f = 0. (8)
Here, u represents a vector of states and f represents the corresponding flux tensor. Again we can sub-divide the spatial domain into finite volumes or cells.
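Before continuing with the general case, here is the promised numerical sketch of the 1D semi-discrete scheme (7), specialized to linear advection f(ρ) = cρ with a first-order upwind edge flux; the flux choice, boundary treatment, and parameters are illustrative assumptions rather than part of the derivation above:

```python
import numpy as np

def advect_fvm(rho, c, dx, dt, steps):
    """First-order finite-volume update for d(rho)/dt + d(c*rho)/dx = 0.
    rho holds cell averages on a periodic domain; with c > 0, the upwind
    edge flux at a cell's right face is c times that cell's own average.
    Each cell changes only through the flux difference at its two edges,
    so the total amount of rho is conserved exactly."""
    rho = rho.copy()
    for _ in range(steps):
        flux = c * rho                       # flux[i] = flux at right edge of cell i
        rho -= (dt / dx) * (flux - np.roll(flux, 1))
    return rho

# Toy usage: advect a square pulse once around a periodic unit domain.
n, c = 200, 1.0
dx = 1.0 / n
dt = 0.5 * dx / c                            # CFL number 0.5
x = np.arange(n) * dx
rho0 = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)
rho1 = advect_fvm(rho0, c, dx, dt, steps=int(1.0 / (c * dt)))
print("mass before/after:", rho0.sum() * dx, rho1.sum() * dx)
```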
For a particular cell, i, we take the volume integral over the total volume of the cell, v_i, which gives
∫_{v_i} (∂u/∂t) dv + ∫_{v_i} (∇·f) dv = 0. (9)
On integrating the first term to get the volume average and applying the divergence theorem to the second, this yields
v_i (dū_i/dt) + ∮_{S_i} f·n dS = 0, (10)
where S_i represents the total surface area of the cell and n is a unit vector normal to the surface and pointing outward. So, finally, we are able to present the general result equivalent to (7), i.e.
dū_i/dt + (1/v_i) ∮_{S_i} f·n dS = 0. (11)
Again, values for the edge fluxes can be reconstructed by interpolation or extrapolation of the cell averages. The actual numerical scheme will depend upon problem geometry and mesh construction. MUSCL reconstruction is often used in high resolution schemes where shocks or discontinuities are present in the solution.
Finite volume schemes are conservative as cell averages change through the edge fluxes. In other words, one cell's loss is always another cell's gain!
See also
Finite element method
Flux limiter
Godunov's scheme
Godunov's theorem
High-resolution scheme
KIVA (software)
MIT General Circulation Model
MUSCL scheme
Sergei K. Godunov
Total variation diminishing
Finite volume method for unsteady flow
References
Further reading
Eymard, R., Gallouët, T. R., Herbin, R. (2000), The finite volume method, Handbook of Numerical Analysis, Vol. VII, p. 713–1020. Editors: P.G. Ciarlet and J.L. Lions.
Hirsch, C. (1990), Numerical Computation of Internal and External Flows, Volume 2: Computational Methods for Inviscid and Viscous Flows, Wiley.
Laney, Culbert B. (1998), Computational Gas Dynamics, Cambridge University Press.
LeVeque, Randall (1990), Numerical Methods for Conservation Laws, ETH Lectures in Mathematics Series, Birkhauser-Verlag.
LeVeque, Randall (2002), Finite Volume Methods for Hyperbolic Problems, Cambridge University Press.
Patankar, Suhas V. (1980), Numerical Heat Transfer and Fluid Flow, Hemisphere.
Tannehill, John C., et al. (1997), Computational Fluid Mechanics and Heat Transfer, 2nd Ed., Taylor and Francis.
Toro, E. F. (1999), Riemann Solvers and Numerical Methods for Fluid Dynamics, Springer-Verlag.
Wesseling, Pieter (2001), Principles of Computational Fluid Dynamics, Springer-Verlag.
External links
Finite volume methods by R. Eymard, T. Gallouët and R. Herbin, update of the article published in Handbook of Numerical Analysis, 2000, available under the GFDL.
FiPy: A Finite Volume PDE Solver Using Python from NIST.
CLAWPACK: a software package designed to compute numerical solutions to hyperbolic partial differential equations using a wave propagation approach
Numerical differential equations
Computational fluid dynamics
Numerical analysis
Finite volume method
[ "Physics", "Chemistry", "Mathematics" ]
1,227
[ "Computational fluid dynamics", "Computational mathematics", "Computational physics", "Mathematical relations", "Numerical analysis", "Approximations", "Fluid dynamics" ]
345,023
https://en.wikipedia.org/wiki/Intuitionistic%20type%20theory
Intuitionistic type theory (also known as constructive type theory, or Martin-Löf type theory (MLTT)) is a type theory and an alternative foundation of mathematics. Intuitionistic type theory was created by Per Martin-Löf, a Swedish mathematician and philosopher, who first published it in 1972. There are multiple versions of the type theory: Martin-Löf proposed both intensional and extensional variants of the theory, and early impredicative versions, shown to be inconsistent by Girard's paradox, gave way to predicative versions. However, all versions keep the core design of constructive logic using dependent types.
Design
Martin-Löf designed the type theory on the principles of mathematical constructivism. Constructivism requires any existence proof to contain a "witness". So, any proof of "there exists a prime greater than 1000" must identify a specific number that is both prime and greater than 1000. Intuitionistic type theory accomplished this design goal by internalizing the BHK interpretation. A useful consequence is that proofs become mathematical objects that can be examined, compared, and manipulated.
Intuitionistic type theory's type constructors were built to follow a one-to-one correspondence with logical connectives. For example, the logical connective called implication (A ⟹ B) corresponds to the type of a function (A → B). This correspondence is called the Curry–Howard isomorphism. Prior type theories had also followed this isomorphism, but Martin-Löf's was the first to extend it to predicate logic by introducing dependent types.
Type theory
Intuitionistic type theory has three finite types, which are then composed using five different type constructors. Unlike set theories, type theories are not built on top of a logic like Frege's. So, each feature of the type theory does double duty as a feature of both math and logic.
If you are unfamiliar with type theory and know set theory, a quick summary is: Types contain terms just like sets contain elements. Terms belong to one and only one type. Terms like 2 + 2 compute ("reduce") down to canonical terms like 4. For more, see the article on type theory.
0 type, 1 type and 2 type
There are three finite types: The 0 type contains 0 terms. The 1 type contains 1 canonical term. And the 2 type contains 2 canonical terms.
Because the 0 type contains 0 terms, it is also called the empty type. It is used to represent anything that cannot exist. It is also written ⊥ and represents anything unprovable. (That is, a proof of it cannot exist.) As a result, negation is defined as a function to it: ¬A := A → ⊥.
Likewise, the 1 type contains 1 canonical term and represents existence. It also is called the unit type. Finally, the 2 type contains 2 canonical terms. It represents a definite choice between two values. It is used for Boolean values but not propositions. Propositions are instead represented by particular types. For instance, a true proposition can be represented by the 1 type, while a false proposition can be represented by the 0 type. But we cannot assert that these are the only propositions, i.e. the law of excluded middle does not hold for propositions in intuitionistic type theory.
Σ type constructor
Σ-types contain ordered pairs. As with typical ordered pair (or 2-tuple) types, a Σ-type can describe the Cartesian product, A × B, of two other types, A and B. Logically, such an ordered pair would hold a proof of A and a proof of B, so one may see such a type written as A ∧ B. Σ-types are more powerful than typical ordered pair types because of dependent typing.
In the ordered pair, the type of the second term can depend on the value of the first term. For example, the first term of the pair might be a natural number and the second term's type might be a sequence of reals of length equal to the first term. Such a type would be written:
Σ (n : N) . Vec(R, n)
Using set-theory terminology, this is similar to an indexed disjoint union of sets. In the case of the usual cartesian product, the type of the second term does not depend on the value of the first term. Thus the type describing the cartesian product of N and R is written:
Σ (n : N) . R
It is important to note here that the value of the first term, n, is not depended on by the type of the second term, R.
Σ-types can be used to build up longer dependently-typed tuples used in mathematics and the records or structs used in most programming languages. An example of a dependently-typed 3-tuple is two integers and a proof that the first integer is smaller than the second integer, described by the type:
Σ (m : Z) . Σ (n : Z) . (m < n)
Dependent typing allows Σ-types to serve the role of the existential quantifier. The statement "there exists an x of type A, such that P(x) is proven" becomes the type of ordered pairs where the first item is the value x of type A and the second item is a proof of P(x). Notice that the type of the second item (proofs of P(x)) depends on the value in the first part of the ordered pair (x). Its type would be:
Σ (x : A) . P(x)
Π type constructor
Π-types contain functions. As with typical function types, they consist of an input type and an output type. They are more powerful than typical function types however, in that the return type can depend on the input value. Functions in type theory are different from set theory. In set theory, you look up the argument's value in a set of ordered pairs. In type theory, the argument is substituted into a term and then computation ("reduction") is applied to the term.
As an example, the type of a function that, given a natural number n, returns a vector containing n real numbers is written:
Π (n : N) . Vec(R, n)
When the output type does not depend on the input value, the function type is often simply written with a →. Thus, N → R is the type of functions from natural numbers to real numbers. Such Π-types correspond to logical implication. The logical proposition A ⟹ B corresponds to the type A → B, containing functions that take proofs-of-A and return proofs-of-B. This type could be written more consistently as:
Π (a : A) . B
Π-types are also used in logic for universal quantification. The statement "for every x of type A, P(x) is proven" becomes a function from x of type A to proofs of P(x). Thus, given the value for x the function generates a proof that P holds for that value. The type would be
Π (x : A) . P(x)
= type constructor
=-types are created from two terms. Given two terms like 2 + 2 and 4, you can create a new type 2 + 2 = 4. The terms of that new type represent proofs that the pair reduce to the same canonical term. Thus, since both 2 + 2 and 4 compute to the canonical term 4, there will be a term of the type 2 + 2 = 4. In intuitionistic type theory, there is a single way to introduce =-types and that is by reflexivity:
refl : Π (a : A) . (a = a)
It is possible to create =-types such as 1 = 2 where the terms do not reduce to the same canonical term, but you will be unable to create terms of that new type. In fact, one can create a function of type (1 = 2) → ⊥; since a function to the empty type is how intuitionistic type theory defines negation, one obtains ¬(1 = 2) or, finally, 1 ≠ 2.
Equality of proofs is an area of active research in proof theory and has led to the development of homotopy type theory and other type theories.
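The correspondence described above can be made concrete in a modern proof assistant. Below is a minimal Lean 4 sketch; Lean's type theory differs from Martin-Löf's in details, so this is an illustration of the Σ, Π, and = constructors, not a formalization of MLTT:

```lean
-- Σ-type: a dependent pair; the second component's type, Fin (n + 1)
-- (the naturals below n + 1), depends on the first component's value.
def depPair : (n : Nat) × Fin (n + 1) := ⟨3, 2⟩

-- Π-type: a dependent function; the return type depends on the argument.
def depFun : (n : Nat) → Fin (n + 1) := fun _ => 0

-- =-type: introduced by reflexivity; 2 + 2 and 4 reduce to the same
-- canonical term, so `rfl` inhabits the identity type 2 + 2 = 4.
example : 2 + 2 = 4 := rfl

-- Negation as a function into the empty proposition: no term of 0 = 1
-- exists, and ¬(0 = 1) unfolds to the function type (0 = 1) → False.
example : ¬ (0 = 1) := by decide
```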
Inductive types
Inductive types allow the creation of complex, self-referential types. For example, a linked list of natural numbers is either an empty list or a pair of a natural number and another linked list. Inductive types can be used to define unbounded mathematical structures like trees, graphs, etc. In fact, the natural numbers type may be defined as an inductive type, a natural number either being 0 or the successor of another natural number. Inductive types define new constants, such as zero 0 : N and the successor function S : N → N. Since S does not have a definition and cannot be evaluated using substitution, terms like S(0) and S(S(0)) become the canonical terms of the natural numbers.
Proofs on inductive types are made possible by induction. Each new inductive type comes with its own inductive rule. To prove a predicate P for every natural number, you use the following rule:
P(0) → (Π (n : N) . P(n) → P(S(n))) → Π (n : N) . P(n)
Inductive types in intuitionistic type theory are defined in terms of W-types, the type of well-founded trees. Later work in type theory generated coinductive types, induction-recursion, and induction-induction for working on types with more obscure kinds of self-referentiality. Higher inductive types allow equality to be defined between terms.
Universe types
The universe types allow proofs to be written about all the types created with the other type constructors. Every term in the universe type U0 can be mapped to a type created with any combination of the finite types and the other type constructors, including the inductive type constructor. However, to avoid paradoxes, there is no term in U0 that maps to U0 itself. To write proofs about all "the small types" and U0, you must use U1, which does contain a term for U0, but not for U1 itself. Similarly, for U2. There is a predicative hierarchy of universes, so to quantify a proof over any fixed constant k of universes, you can use U(k+1).
Universe types are a tricky feature of type theories. Martin-Löf's original type theory had to be changed to account for Girard's paradox. Later research covered topics such as "super universes", "Mahlo universes", and impredicative universes.
Judgements
The formal definition of intuitionistic type theory is written using judgements. For example, in the statement "if A is a type and B is a type then A × B is a type" there are judgements of "is a type", "and", and "if ... then ...". The expression A × B is not a judgement; it is the type being defined.
This second level of the type theory can be confusing, particularly where it comes to equality. There is a judgement of term equality, which might say 2 + 2 = 4 : N. It is a statement that two terms reduce to the same canonical term. There is also a judgement of type equality, say that A = B, which means every element of A is an element of the type B and vice versa. At the type level, there is a type 2 + 2 = 4 and it contains terms if there is a proof that 2 + 2 and 4 reduce to the same value. (Terms of this type are generated using the term-equality judgement.) Lastly, there is an English-language level of equality, because we use the word "four" and the symbol "4" to refer to the canonical term S(S(S(S(0)))). Synonyms like these are called "definitionally equal" by Martin-Löf.
The description of judgements below is based on the discussion in Nordström, Petersson, and Smith.
The formal theory works with types and objects. A type is declared by a judgement of the form "A type". An object exists and is in a type if "a : A". Objects can be equal, "a = b : A", and types can be equal, "A = B".
A type B that depends on an object x from another type A is declared in the context x : A and removed by substitution B[x := a], replacing the variable x with the object a in B. An object that depends on an object from another type can be done two ways.
If the object is "abstracted", then it is written and removed by substitution , replacing the variable with the object in . The object-depending-on-object can also be declared as a constant as part of a recursive type. An example of a recursive type is: Here, is a constant object-depending-on-object. It is not associated with an abstraction. Constants like can be removed by defining equality. Here the relationship with addition is defined using equality and using pattern matching to handle the recursive aspect of : is manipulated as an opaque constant - it has no internal structure for substitution. So, objects and types and these relations are used to express formulae in the theory. The following styles of judgements are used to create new objects, types and relations from existing ones: By convention, there is a type that represents all other types. It is called (or ). Since is a type, the members of it are objects. There is a dependent type that maps each object to its corresponding type. In most texts is never written. From the context of the statement, a reader can almost always tell whether refers to a type, or whether it refers to the object in that corresponds to the type. This is the complete foundation of the theory. Everything else is derived. To implement logic, each proposition is given its own type. The objects in those types represent the different possible ways to prove the proposition. If there is no proof for the proposition, then the type has no objects in it. Operators like "and" and "or" that work on propositions introduce new types and new objects. So is a type that depends on the type and the type . The objects in that dependent type are defined to exist for every pair of objects in and . If or has no proof and is an empty type, then the new type representing is also empty. This can be done for other types (booleans, natural numbers, etc.) and their operators. Categorical models of type theory Using the language of category theory, R. A. G. Seely introduced the notion of a locally cartesian closed category (LCCC) as the basic model of type theory. This has been refined by Hofmann and Dybjer to Categories with Families or Categories with Attributes based on earlier work by Cartmell. A category with families is a category C of contexts (in which the objects are contexts, and the context morphisms are substitutions), together with a functor T : Cop → Fam(Set). Fam(Set) is the category of families of Sets, in which objects are pairs of an "index set" A and a function B: X → A, and morphisms are pairs of functions f : A → A' and g : X → X' , such that B' ° g = f ° B – in other words, f maps Ba to Bg(a). The functor T assigns to a context G a set of types, and for each , a set of terms. The axioms for a functor require that these play harmoniously with substitution. Substitution is usually written in the form Af or af, where A is a type in and a is a term in , and f is a substitution from D to G. Here and . The category C must contain a terminal object (the empty context), and a final object for a form of product called comprehension, or context extension, in which the right element is a type in the context of the left element. If G is a context, and , then there should be an object final among contexts D with mappings p : D → G, q : Tm(D,Ap). 
A logical framework, such as Martin-Löf's, takes the form of closure conditions on the context-dependent sets of types and terms: that there should be a type called Set, and for each set a type, that the types should be closed under forms of dependent sum and product, and so forth.
A theory such as that of predicative set theory expresses closure conditions on the types of sets and their elements: that they should be closed under operations that reflect dependent sum and product, and under various forms of inductive definition.
Extensional versus intensional
A fundamental distinction is extensional vs intensional type theory. In extensional type theory, definitional (i.e., computational) equality is not distinguished from propositional equality, which requires proof. As a consequence, type checking becomes undecidable in extensional type theory because programs in the theory might not terminate. For example, such a theory allows one to give a type to the Y-combinator; a detailed example of this can be found in Nordström and Petersson's Programming in Martin-Löf's Type Theory. However, this does not prevent extensional type theory from being a basis for a practical tool; for example, Nuprl is based on extensional type theory.
In contrast, in intensional type theory type checking is decidable, but the representation of standard mathematical concepts is somewhat more cumbersome, since intensional reasoning requires using setoids or similar constructions. There are many common mathematical objects that are hard to work with or cannot be represented without this, for example, integer numbers, rational numbers, and real numbers. Integers and rational numbers can be represented without setoids, but this representation is difficult to work with. Cauchy real numbers cannot be represented without this.
Homotopy type theory works on resolving this problem. It allows one to define higher inductive types, which not only define first-order constructors (values or points), but higher-order constructors, i.e. equalities between elements (paths), equalities between equalities (homotopies), ad infinitum.
Implementations of type theory
Different forms of type theory have been implemented as the formal systems underlying a number of proof assistants. While many are based on Per Martin-Löf's ideas, many have added features, more axioms, or a different philosophical background. For instance, the Nuprl system is based on computational type theory, and Coq is based on the calculus of (co)inductive constructions. Dependent types also feature in the design of programming languages such as ATS, Cayenne, Epigram, Agda, and Idris.
Martin-Löf type theories
Per Martin-Löf constructed several type theories that were published at various times, some of them much later than when the preprints with their description became accessible to specialists (among others Jean-Yves Girard and Giovanni Sambin). The list below attempts to list all the theories that have been described in a printed form and to sketch the key features that distinguished them from each other. All of these theories had dependent products, dependent sums, disjoint unions, finite types and natural numbers. All the theories had the same reduction rules that did not include η-reduction either for dependent products or for dependent sums, except for MLTT79, where the η-reduction for dependent products is added.
MLTT71 was the first type theory created by Per Martin-Löf. It appeared in a preprint in 1971.
It had one universe, but this universe had a name in itself, i.e., it was a type theory with, as it is called today, "Type in Type". Jean-Yves Girard has shown that this system was inconsistent, and the preprint was never published. MLTT72 was presented in a 1972 preprint that has now been published. That theory had one universe V and no identity types (=-types). The universe was "predicative" in the sense that the dependent product of a family of objects from V over an object that was not in V, such as, for example, V itself, was not assumed to be in V. The universe was à la Russell's Principia Mathematica, i.e., one would write directly "T∈V" and "t∈T" (Martin-Löf uses the sign "∈" instead of the modern ":") without an added constructor such as "El". MLTT73 was the first definition of a type theory that Per Martin-Löf published (it was presented at the Logic Colloquium '73 and published in 1975). There are identity types, which he describes as "propositions", but since no real distinction between propositions and the rest of the types is introduced, the meaning of this is unclear. There is what later acquired the name of the J-eliminator, though not yet under that name (see pp. 94–95). There is in this theory an infinite sequence of universes V0, ..., Vn, ... . The universes are predicative, à la Russell, and non-cumulative. In fact, Corollary 3.10 on p. 115 says that if A∈Vm and B∈Vn are such that A and B are convertible, then m = n. This means, for example, that it would be difficult to formulate the univalence axiom in this theory—there are contractible types in each of the Vi, but it is unclear how to declare them to be equal, since there are no identity types connecting Vi and Vj for i ≠ j. MLTT79 was presented in 1979 and published in 1982. In this paper, Martin-Löf introduced the four basic types of judgement for the dependent type theory that has since become fundamental in the study of the meta-theory of such systems. He also introduced contexts as a separate concept in it (see p. 161). There are identity types with the J-eliminator (which already appeared in MLTT73, but did not have this name there), but also with the rule that makes the theory "extensional" (p. 169). There are W-types. There is an infinite sequence of predicative universes that are cumulative. Bibliopolis: there is a discussion of a type theory in the Bibliopolis book from 1984, but it is somewhat open-ended and does not seem to represent a particular set of choices, so there is no specific type theory associated with it. See also Intuitionistic logic Typed lambda calculus Notes References Further reading Per Martin-Löf's Notes, as recorded by Giovanni Sambin (1980) External links EU Types Project: Tutorials – lecture notes and slides from the Types Summer School 2005 n-Categories - Sketch of a Definition – letter from John Baez and James Dolan to Ross Street, November 29, 1995 Foundations of mathematics Dependently typed programming Constructivism (mathematics) Type theory Logic in computer science Intuitionism
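The decidability trade-off in the extensional-versus-intensional discussion earlier can be seen concretely in Lean 4, which implements an intensional theory. This is a minimal sketch added in this rewrite, not material from the article; it relies on the core library's definition of Nat addition by recursion on the second argument.

```lean
-- Definitional (computational) equality: both sides reduce to the same
-- normal form, so the reflexivity proof `rfl` type-checks, decidably.
example : (2 + 2 : Nat) = 4 := rfl
example (n : Nat) : n + 0 = n := rfl  -- holds by the defining equations of +

-- Propositional equality: 0 + n = n does not hold by mere computation here,
-- since + recurses on its second argument; it needs an induction proof,
-- i.e. an explicitly constructed inhabitant of the identity type.
example (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```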
Intuitionistic type theory
[ "Mathematics" ]
4,516
[ "Mathematical structures", "Logic in computer science", "Foundations of mathematics", "Mathematical logic", "Mathematical objects", "Type theory", "Constructivism (mathematics)" ]
345,035
https://en.wikipedia.org/wiki/Halo%20%28optical%20phenomenon%29
A halo is an optical phenomenon produced by light (typically from the Sun or Moon) interacting with ice crystals suspended in the atmosphere. Halos can have many forms, ranging from colored or white rings to arcs and spots in the sky. Many of these appear near the Sun or Moon, but others occur elsewhere or even in the opposite part of the sky. Among the best known halo types are the circular halo (properly called the 22° halo), light pillars, and sun dogs, but many others occur; some are fairly common while others are extremely rare. The ice crystals responsible for halos are typically suspended in cirrus or cirrostratus clouds in the upper troposphere, but in cold weather they can also float near the ground, in which case they are referred to as diamond dust. The particular shape and orientation of the crystals are responsible for the type of halo observed. Light is reflected and refracted by the ice crystals and may split into colors because of dispersion. The crystals behave like prisms and mirrors, refracting and reflecting light between their faces, sending shafts of light in particular directions. Atmospheric optical phenomena like halos were part of weather lore, which was an empirical means of weather forecasting before meteorology was developed. They often do indicate that rain will fall within the next 24 hours, since the cirrostratus clouds that cause them can signify an approaching frontal system. Other common types of optical phenomena involving water droplets rather than ice crystals include the glory and the rainbow. History While Aristotle had mentioned halos and parhelia in antiquity, the first European descriptions of complex displays were those of Christoph Scheiner in Rome, Johannes Hevelius in Danzig (1661), and Tobias Lowitz in St Petersburg. Chinese observers had recorded these for centuries, the first reference being a section of the "Official History of the Chin Dynasty" (Chin Shu) in 637, on the "Ten Haloes", giving technical terms for 26 solar halo phenomena. Vädersolstavlan While mostly known and often quoted for being the oldest color depiction of the city of Stockholm, Vädersolstavlan (Swedish; "The Sundog Painting", literally "The Weather Sun Painting") is arguably also one of the oldest known depictions of a halo display, including a pair of sun dogs. For two hours in the morning of 20 April 1535, the skies over the city were filled with white circles and arcs crossing the sky, while additional suns (i.e., sun dogs) appeared around the Sun. Light pillar A light pillar, or sun pillar, appears as a vertical pillar or column of light rising from the Sun near sunset or sunrise, though it can appear below the Sun, particularly if the observer is at a high elevation or altitude. Hexagonal plate- and column-shaped ice crystals cause the phenomenon. Plate crystals generally cause pillars only when the Sun is within 6 degrees of the horizon; column crystals can cause a pillar when the Sun is as high as 20 degrees above the horizon. The crystals tend to orient themselves near-horizontally as they fall or float through the air, and the width and visibility of a sun pillar depend on crystal alignment. Light pillars can also form around the Moon, and around street lights or other bright lights. Pillars forming from ground-based light sources may appear much taller than those associated with the Sun or Moon. Since the observer is closer to the light source, crystal orientation matters less in the formation of these pillars.
Circular halo Among the best-known halos is the 22° halo, often just called "halo", which appears as a large ring around the Sun or Moon with a radius of about 22° (roughly the width of an outstretched hand at arm's length). The ice crystals that cause the 22° halo are oriented semi-randomly in the atmosphere, in contrast to the horizontal orientation required for some other halos such as sun dogs and light pillars. As a result of the optical properties of the ice crystals involved, no light is reflected towards the inside of the ring, leaving the sky there noticeably darker than the sky around it, and giving it the impression of a "hole in the sky". The 22° halo is not to be confused with the corona, which is a different optical phenomenon caused by water droplets rather than ice crystals, and which has the appearance of a multicolored disk rather than a ring. Other halos can form at 46° to the Sun, or at the horizon, or around the zenith, and can appear as full halos or incomplete arcs. Bottlinger's ring A Bottlinger's ring is a rare type of halo that is elliptical instead of circular. It has a small diameter, which makes it very difficult to see in the Sun's glare and more likely to be noticed around the dimmer subsun, often seen from mountain tops or airplanes. Bottlinger's rings are not well understood yet. It is suggested that they are formed by very flat pyramidal ice crystals with faces at uncommonly low angles, suspended horizontally in the atmosphere. These precise and physically problematic requirements would explain why the halo is very rare. Other names In the Cornish dialect of English, a halo around the sun or the moon is called a cock's eye and is an omen of bad weather. The term is related to the Breton word kog-heol (sun cock) which has the same meaning. In Nepal, the halo round the sun is called Indrasabha with a connotation of the assembly court of Lord Indra – the Hindu god of lightning, thunder, and rain. Artificial halos The natural phenomena may be reproduced artificially by several means: firstly, by computer simulation; secondly, by experiment, in which a single crystal is rotated around the appropriate axis or axes, or a chemical approach is used. A still further and more indirect experimental approach is to find analogous refraction geometries. Analogous refraction approach This approach employs the fact that in some cases the average geometry of refraction through an ice crystal may be mimicked by refraction through another geometrical object. In this way, the circumzenithal arc, the circumhorizontal arc, and the suncave Parry arcs may be recreated by refraction through rotationally symmetric (i.e. non-prismatic) static bodies. A particularly simple table-top experiment reproduces artificially the colorful circumzenithal and circumhorizontal arcs using only a glass of water. The refraction through the cylinder of water turns out to be (almost) identical to the rotationally averaged refraction through an upright hexagonal ice crystal (plate-oriented crystals), thereby creating vividly colored circumzenithal and circumhorizontal arcs. In fact, the water glass experiment is often mistaken for a demonstration of the rainbow, and has been around at least since 1920.
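The 22° and 46° radii quoted above follow from the minimum-deviation formula for a prism, applied to the 60° and 90° effective prism angles presented by a hexagonal ice crystal with refractive index near 1.31 for visible light. A minimal Python sketch under those standard assumptions (the helper name is ours):

```python
import math

def min_deviation_deg(n: float, apex_deg: float) -> float:
    """Minimum deviation (degrees) for a prism of given apex angle and index n."""
    a = math.radians(apex_deg)
    return math.degrees(2.0 * math.asin(n * math.sin(a / 2.0)) - a)

# Hexagonal ice crystals (n ~ 1.31) act as 60-degree prisms between alternate
# side faces, and as 90-degree prisms between a side face and a basal face.
print(min_deviation_deg(1.31, 60.0))   # ~21.8 -> the 22-degree halo
print(min_deviation_deg(1.31, 90.0))   # ~45.7 -> the 46-degree halo
```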
Following Huygens' idea of the (false) mechanism of the 22° parhelia, one may also illuminate (from the side) a water-filled cylindrical glass with an inner central obstruction of half the glass's diameter to achieve, upon projection on a screen, an appearance which closely resembles parhelia (cf. footnote [39] in Ref.): an inner red edge transitioning into a white band at larger angles on both sides of the direct transmission direction. However, while the visual match is close, this particular experiment does not involve a fake caustic mechanism and is thus no real analogue. Chemical approaches The earliest chemical recipes to generate artificial halos were put forward by Brewster and studied further by A. Cornu in 1889. The idea was to generate crystals by precipitation of a salt solution. The innumerable small crystals hereby generated will then, upon illumination with light, cause halos corresponding to the particular crystal geometry and orientation / alignment. Several recipes exist and continue to be discovered. Rings are a common outcome of such experiments, but Parry arcs have also been artificially produced in this way. Mechanical approaches Single axis The earliest experimental studies on halo phenomena have been attributed to Auguste Bravais in 1847. Bravais used an equilateral glass prism which he spun around its vertical axis. When illuminated by parallel white light, this produced an artificial parhelic circle and many of the embedded parhelia. Similarly, A. Wegener used hexagonal rotating crystals to produce artificial subparhelia. In a more recent version of this experiment, many more embedded parhelia have been found using commercially available hexagonal BK7 glass crystals. Simple experiments like these can be used for educational purposes and demonstration experiments. Unfortunately, using glass crystals one cannot reproduce the circumzenithal arc or the circumhorizontal arc, due to total internal reflection preventing the required ray paths when the refractive index exceeds √2, as it does for common glass. Even earlier than Bravais, the Italian scientist F. Venturi experimented with pointed water-filled prisms to demonstrate the circumzenithal arc. However, Venturi's explanation was later superseded by Bravais's correct explanation of the CZA. Artificial ice crystals have been employed to create halos which are otherwise unattainable in the mechanical approach via the use of glass crystals, e.g. circumzenithal and circumhorizontal arcs. The use of ice crystals ensures that the generated halos have the same angular coordinates as the natural phenomena. Other crystals such as sodium fluoride (NaF) also have a refractive index close to ice and have been used in the past. Two axes In order to produce artificial halos such as the tangent arcs or the circumscribed halo, one should rotate a single columnar hexagonal crystal about two axes. Similarly, the Lowitz arcs can be created by rotating a single plate crystal about two axes. This can be done by engineered halo machines. The first such machine was constructed in 2003; several more followed. Putting such machines inside spherical projection screens, and by the principle of the so-called sky transform, the analogy is nearly perfect. A realization using micro-versions of the aforementioned machines produces authentic distortion-free projections of such complex artificial halos. Finally, several images and projections produced by such halo machines may be superposed to create a single image.
The resulting superposition image is then a representation of complex natural halo displays containing many different orientation sets of ice prisms. Three axes The experimental reproduction of circular halos is the most difficult using a single crystal only, while it is the simplest and most typically achieved one using chemical recipes. Using a single crystal, one needs to realize all possible 3D orientations of the crystal. This has recently been achieved by two approaches: the first using pneumatics and a sophisticated rigging, and the second using an Arduino-based random-walk machine which stochastically reorients a crystal embedded in a transparent thin-walled sphere. Gallery See also References External links Halo explanations and image galleries at Atmospheric Optics Meteoros AKM – Halo explanations and image galleries (in German) Halo reports of interesting halo observations around the World Southern Hemisphere Halo and other atmospheric phenomena Halo in Chisinau Moldova (photo and video) Walter Tape & Jarmo Moilanen, Atmospheric Halos and the Search for Angle x (free e-book) Halo Phenomena – Hyperphysics Atmospheric optical phenomena
Halo (optical phenomenon)
[ "Physics" ]
2,354
[ "Optical phenomena", "Physical phenomena", "Atmospheric optical phenomena", "Earth phenomena" ]
345,066
https://en.wikipedia.org/wiki/Barycentric%20Dynamical%20Time
Barycentric Dynamical Time (TDB, from the French Temps Dynamique Barycentrique) is a relativistic coordinate time scale, intended for astronomical use as a time standard to take account of time dilation when calculating orbits and astronomical ephemerides of planets, asteroids, comets and interplanetary spacecraft in the Solar System. TDB is now (since 2006) defined as a linear scaling of Barycentric Coordinate Time (TCB). A feature that distinguishes TDB from TCB is that TDB, when observed from the Earth's surface, has a difference from Terrestrial Time (TT) that is about as small as can be practically arranged with consistent definition: the differences are mainly periodic, and overall will remain at less than 2 milliseconds for several millennia. TDB applies to the Solar-System-barycentric reference frame, and was first defined in 1976 as a successor to the (non-relativistic) former standard of ephemeris time (adopted by the IAU in 1952 and superseded 1976). In 2006, after a history of multiple time-scale definitions and deprecations since the 1970s, a redefinition of TDB was approved by the IAU. The 2006 IAU redefinition of TDB as an international standard expressly acknowledged that the long-established JPL ephemeris time argument Teph, as implemented in JPL Development Ephemeris DE405, "is for practical purposes the same as TDB defined in this Resolution". (By 2006, ephemeris DE405 had already been in use for a few years as the official basis for planetary and lunar ephemerides in the Astronomical Almanac; it was the basis for editions for 2003 through 2014; in the edition for 2015 it was superseded by DE430.) Definition IAU resolution 3 of 2006 defines TDB as a linear transformation of TCB. TCB diverges from both TDB and TT. TCB progresses faster at a differential rate of about 0.5 second/year, while TDB and TT remain close. As of the beginning of 2011, the difference between TDB and TCB is about 16.6 seconds. TDB = TCB − L_B × (JD_TCB − T_0) × 86400 + TDB_0, where L_B = 1.550519768 × 10⁻⁸, TDB_0 = −6.55 × 10⁻⁵ s, T_0 = 2443144.5003725, and JD_TCB is the TCB Julian date (that is, a quantity which was equal to T_0 on 1977 January 1 00:00:00 TAI at the geocenter and which increases by one every 86400 seconds of TCB). History From the 17th century to the late 19th century, planetary ephemerides were calculated using time scales based on the Earth's rotation: usually the mean solar time of one of the principal observatories, such as Paris or Greenwich. After 1884, mean solar time at Greenwich became a standard, later named Universal Time (UT). But in the later 19th and early 20th centuries, with the increasing precision of astronomical measurements, it began to be suspected, and was eventually established, that the rotation of the Earth (i.e. the length of the day) showed irregularities on short time scales, and was slowing down on longer time scales. Ephemeris time was consequently developed as a standard that was free from the irregularities of Earth rotation, by defining the time "as the independent variable of the equations of celestial mechanics", and it was at first measured astronomically, relying on the existing gravitational theories of the motions of the Earth about the Sun and of the Moon about the Earth. After the caesium atomic clock was invented, such clocks were used increasingly from the late 1950s as secondary realizations of ephemeris time (ET). These secondary realizations improved on the original ET standard by the improved uniformity of the atomic clocks, and (e.g.
in the late 1960s) they were used to provide standard time for planetary ephemeris calculations and in astrodynamics. But ET in principle did not yet take account of relativity theory. The size of the periodic part of the variations due to time dilation between earth-based atomic clocks and the coordinate time of the Solar-System barycentric reference frame had been estimated at under 2 milliseconds, but in spite of this small size, it was increasingly considered in the early 1970s that time standards should be made suitable for applications in which differences due to relativistic time dilation could no longer be neglected. In 1976, two new time scales were defined to replace ET (in the ephemerides for 1984 and afterwards) to take account of relativity. ET's direct successor for measuring time on a geocentric basis was Terrestrial Dynamical Time (TDT). The new time scale to supersede ET for planetary ephemerides was to be Barycentric Dynamical Time (TDB). TDB was to tick uniformly in a reference frame comoving with the barycenter of the Solar System. (As with any coordinate time, a corresponding clock, to coincide in rate, would need not only to be at rest in that reference frame, but also (an unattainable hypothetical condition) to be located outside all of the relevant gravity wells.) In addition, TDB was to have (as observed/evaluated at the Earth's surface), over the long term average, the same rate as TDT (now TT). TDT and TDB were defined in a series of resolutions at the same 1976 meeting of the International Astronomical Union. It was eventually realized that TDB was not well defined because it was not accompanied by a general relativistic metric and because the exact relationship between TDB and TDT had not been specified. (It was also later criticized as being not physically possible in exact accordance with its original definition: among other things the 1976 definition excluded a necessary small offset for the initial epoch of 1977.) After the difficulties were appreciated, in 1991 the IAU refined the official definitions of timescales by creating additional new time scales: Barycentric Coordinate Time (TCB) and Geocentric Coordinate Time (TCG). TCB was intended as a replacement for TDB, and TCG was its equivalent for use in near-Earth space. TDT was also renamed to Terrestrial Time (TT), because of doubts raised about the appropriateness of the word "dynamical" in that connection. In 2006 TDB was redefined by IAU 2006 resolution 3; the 'new' TDB was expressly acknowledged as equivalent for practical purposes to JPL ephemeris time argument Teph; the difference between TDB according to the 2006 standard and TT (both as observed from the surface of the Earth), remains under 2 ms for several millennia around the present epoch. Use of TDB TDB is a successor of Ephemeris Time (ET), in that ET can be seen (within the limits of the lesser accuracy and precision achievable in its time) to be an approximation to TDB as well as to Terrestrial Time (TT) (see Ephemeris time § Implementations). TDB in the form of the very closely analogous, and practically equivalent, time scale Teph continues to be used for the important DE405 planetary and lunar ephemerides from the Jet Propulsion Laboratory. Arguments have been put forward for the continued practical use of TDB rather than TCB based on the very small size of the difference between TDB and TT, not exceeding 0.002 second, which can be neglected for many applications. 
It has been argued that the smallness of this difference makes for a lower risk of damage if TDB is ever confused with TT, compared to the possible damage of confusing TCB and TT, which have a relative linear drift of about 0.5 second per year (the difference was close to zero at the start of 1977, and by 2009 was already over a quarter of a minute and increasing). References External links United States Naval Observatory Circular 179 : The IAU Resolutions on Astronomical Reference Systems, Time Scales, and Earth Rotation Models Explanation and Implementation General relativity Special relativity Time scales
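The defining relation quoted in the Definition section above can be checked numerically. A minimal Python sketch (the function name is ours, not from any standard library):

```python
# IAU 2006 defining constants for TDB = TCB - L_B*(JD_TCB - T_0)*86400 + TDB_0
L_B   = 1.550519768e-8     # dimensionless rate constant
TDB_0 = -6.55e-5           # seconds
T_0   = 2443144.5003725    # TCB Julian Date of the 1977 epoch

def tdb_minus_tcb(jd_tcb: float) -> float:
    """TDB - TCB in seconds at a given TCB Julian Date."""
    return -L_B * (jd_tcb - T_0) * 86400.0 + TDB_0

# At the start of 2011 (JD ~ 2455562.5) the offset is about -16.6 s,
# matching the figure quoted in the article.
print(tdb_minus_tcb(2455562.5))
```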
Barycentric Dynamical Time
[ "Physics", "Astronomy" ]
1,705
[ "Physical quantities", "Time", "General relativity", "Special relativity", "Astronomical coordinate systems", "Theory of relativity", "Spacetime", "Time scales" ]
345,141
https://en.wikipedia.org/wiki/Product%20detector
A product detector is a type of demodulator used for AM and SSB signals. Rather than converting the envelope of the signal into the decoded waveform like an envelope detector, the product detector takes the product of the modulated signal and a local oscillator, hence the name. A product detector is a frequency mixer. Product detectors can be designed to accept either IF or RF frequency inputs. A product detector which accepts an IF signal would be used as a demodulator block in a superheterodyne receiver, and a detector designed for RF can be combined with an RF amplifier and a low-pass filter into a direct-conversion receiver. A simple product detector The simplest form of product detector mixes (or heterodynes) the RF or IF signal with a locally derived carrier (the Beat Frequency Oscillator, or BFO) to produce an audio frequency copy of the original audio signal and a mixer product at twice the original RF or IF frequency. This high-frequency component can then be filtered out, leaving the original audio frequency signal. Mathematical model of the simple product detector If m(t) is the original message, the AM signal can be shown to be x(t) = (C + m(t))·cos(ωt). Multiplying the AM signal x(t) by an oscillator at the same frequency as and in phase with the carrier yields x(t)·cos(ωt) = (C + m(t))·cos²(ωt), which can be re-written as x(t)·cos(ωt) = (C + m(t))/2 + (C + m(t))·cos(2ωt)/2. After filtering out the high-frequency component based around cos(2ωt) and the DC component C, the original message will be recovered. Drawbacks of the simple product detector Although this simple detector works, it has two major drawbacks: The frequency of the local oscillator must be the same as the frequency of the carrier, or else the output message will fade in and out in the case of AM, or be frequency shifted in the case of SSB Once the frequency is matched, the phase of the carrier must be obtained, or else the demodulated message will be attenuated, but the noise will not be. The local oscillator can be synchronized with the carrier using a phase-locked loop in a synchronous detector arrangement. For SSB, the only solution is to construct a highly stable oscillator. Another example There are many other kinds of product detectors as well, which are practical if one has access to digital signal processing equipment. For instance, it is possible to multiply the incoming signal by the carrier, times the square of another carrier 90° out of phase with it. This will produce a copy of the original message, and another AM signal at the fourth harmonic, by means of the trigonometric identity cos²(ωt)·sin²(ωt) = (1 − cos(4ωt))/8. The high-frequency component can again be filtered out, leaving the original signal. Mathematical model of the detector If m(t) is the original message, the AM signal can be shown to be x(t) = (C + m(t))·cos(ωt). Multiplying the AM signal by the new set of frequencies yields x(t)·cos(ωt)·sin²(ωt) = (C + m(t))/8 − (C + m(t))·cos(4ωt)/8. After filtering out the component based around cos(4ωt) and the DC component C, the original message will be recovered. A more sophisticated product detector A more sophisticated product detector can be constructed in a way much like a single-sideband modulator. Two copies of the modulated input signals are created. The first copy is mixed with a local oscillator and low-pass filtered. The second copy is mixed with a 90° phase-shifted copy of the oscillator and the output of this mixer is also 90° phase-shifted and then low-pass filtered. These copies are then combined to produce the original message. This operation is similar to that performed by a dual-phase lock-in amplifier.
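The simple detector above is easy to simulate numerically. The following Python sketch (NumPy only; all signal parameters are illustrative choices of ours, not from the article) builds an AM signal, mixes it with an in-phase BFO, and applies a crude moving-average low-pass filter to strip the cos(2ωt) term:

```python
import numpy as np

fs, fc, fm = 100_000, 10_000, 200            # sample, carrier, message freqs (Hz)
t = np.arange(0, 0.05, 1 / fs)
C = 1.0                                      # carrier (DC) term
m = 0.5 * np.sin(2 * np.pi * fm * t)         # message
x = (C + m) * np.cos(2 * np.pi * fc * t)     # AM signal

mixed = x * np.cos(2 * np.pi * fc * t)       # = (C+m)/2 + (C+m)*cos(2wt)/2

# Crude low-pass: averaging over one carrier period removes the cos(2wt) term.
k = int(fs / fc)
lowpassed = np.convolve(mixed, np.ones(k) / k, mode="same")
recovered = 2 * lowpassed - C                # undo the 1/2 factor and DC term

print(np.abs(recovered[k:-k] - m[k:-k]).max())   # small residual error
```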
Example: I-Q Demodulator Advantages and disadvantages The product demodulator has some advantages over an envelope detector for AM signal reception. The product demodulator can decode overmodulated AM and AM with suppressed carrier. A signal demodulated with a product detector will have a higher signal-to-noise ratio than the same signal demodulated with an envelope detector. On the other hand, the envelope detector is a simple and relatively inexpensive circuit, and it can provide higher fidelity, since there is no possibility of mistuning the local oscillator. A product detector (or equivalent) is needed to demodulate SSB signals. Frequency mixers Communication circuits Demodulation
Product detector
[ "Engineering" ]
909
[ "Radio electronics", "Telecommunications engineering", "Demodulation", "Frequency mixers", "Communication circuits" ]
345,188
https://en.wikipedia.org/wiki/VEGAS%20algorithm
The VEGAS algorithm, due to G. Peter Lepage, is a method for reducing error in Monte Carlo simulations by using a known or approximate probability distribution function to concentrate the search in those areas of the integrand that make the greatest contribution to the final integral. The VEGAS algorithm is based on importance sampling. It samples points from the probability distribution described by the function |f|, so that the points are concentrated in the regions that make the largest contribution to the integral. The GNU Scientific Library (GSL) provides a VEGAS routine. Sampling method In general, if the Monte Carlo integral of f over a volume Ω is sampled with points distributed according to a probability distribution described by the function g, we obtain an estimate E_g(f; N) = (1/N) Σ_i f(x_i)/g(x_i). The variance of the new estimate is then Var_g(f; N) = Var(f/g; N), where Var(f; N) = (⟨f²⟩ − ⟨f⟩²)/N is the variance of the original estimate. If the probability distribution is chosen as g = |f| / ∫_Ω |f(x)| dx, then it can be shown that the variance vanishes, and the error in the estimate will be zero. In practice it is not possible to sample from the exact distribution g for an arbitrary function, so importance sampling algorithms aim to produce efficient approximations to the desired distribution. Approximation of probability distribution The VEGAS algorithm approximates the exact distribution by making a number of passes over the integration region while histogramming the function f. Each histogram is used to define a sampling distribution for the next pass. Asymptotically this procedure converges to the desired distribution. In order to avoid the number of histogram bins growing like K^d with dimension d, the probability distribution is approximated by a separable function: g(x₁, x₂, ...) = g₁(x₁) g₂(x₂) ..., so that the number of bins required is only Kd (K bins along each of the d axes). This is equivalent to locating the peaks of the function from the projections of the integrand onto the coordinate axes. The efficiency of VEGAS depends on the validity of this assumption. It is most efficient when the peaks of the integrand are well-localized. If an integrand can be rewritten in a form which is approximately separable this will increase the efficiency of integration with VEGAS. See also Las Vegas algorithm Monte Carlo integration Importance sampling References Monte Carlo methods Computational physics Statistical algorithms Variance reduction
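The variance-reduction idea above can be seen in a few lines of Python. This is plain importance sampling with a hand-picked weight function, not VEGAS itself (VEGAS would build its separable g adaptively from histograms); the integrand and density are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.exp(-50.0 * (x - 0.5) ** 2)     # sharply peaked on [0, 1]
N = 10_000

# Plain Monte Carlo: uniform samples on [0, 1].
xs = rng.uniform(0.0, 1.0, N)
plain = f(xs)

# Importance sampling: draw from a density g concentrated near the peak.
sigma = 0.1
ys = rng.normal(0.5, sigma, N)
ys = ys[(ys > 0.0) & (ys < 1.0)]                 # truncated mass is negligible
g = np.exp(-((ys - 0.5) ** 2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
weighted = f(ys) / g

# Same answer (~0.2507), but a much smaller statistical error.
print(plain.mean(), plain.std() / np.sqrt(len(xs)))
print(weighted.mean(), weighted.std() / np.sqrt(len(ys)))
```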
VEGAS algorithm
[ "Physics" ]
418
[ "Monte Carlo methods", "Computational physics stubs", "Computational physics" ]
345,247
https://en.wikipedia.org/wiki/Jonathan%20Schaeffer
Jonathan Herbert Schaeffer (born 1957) is a Canadian researcher and professor at the University of Alberta and the former Canada Research Chair in Artificial Intelligence. He led the team that wrote Chinook, the world's strongest American checkers player, after some relatively good results in writing computer chess programs. He is involved in the University of Alberta GAMES group developing computer poker systems. Schaeffer is also a member of the research group that created Polaris, a program designed to play the Texas Hold'em variant of poker. He is a founder of Onlea, which produces online learning experiences. Early life Born in Toronto, Ontario, he received a Bachelor of Science degree in 1979 from the University of Toronto. He received a Master of Mathematics degree in 1980 and a Ph.D. in 1986 from the University of Waterloo. Schaeffer reached national master strength in chess while in his early 20s, but has played little competitive chess since that time. Draughts: Chinook Chinook is the first computer program to win a world champion title in a competition against humans. In 1990 it won the right to play in the human World Championship by finishing second to Marion Tinsley in the US Nationals. At first the American Checkers Federation and English Draughts Association were against the participation of a computer in a human championship. When Tinsley resigned his title in protest, the ACF and EDA created the new title Man vs. Machine World Championship, and competition proceeded. Tinsley won with four wins to Chinook's two. In a rematch, Chinook was declared the Man-Machine World Champion in checkers in 1994 in a match against Marion Tinsley after six drawn games, and Tinsley's withdrawal due to pancreatic cancer. While Chinook became the world champion, it had never defeated the best checkers player of all time, Tinsley, who was significantly superior to even his closest peer. The championship continued with Chinook defending its title against Don Lafferty, in a match in which it lost one game, won one and drew 18. After the match, Jonathan Schaeffer decided not to let Chinook compete anymore, but instead to try to solve checkers. It was rated at 2814 Elo. In 2007, after 18 years of computation, he proved through a weak solution that checkers always results in a draw if neither player makes a mistake. The solution involved 10¹⁴ calculations from endgame positions with fewer than 10 pieces on the board. Poker: Polaris Schaeffer is a member and, until 2004, leader of the computer poker research group at the University of Alberta, which has developed several strong computer programs for playing Texas hold 'em poker. The earliest and most general of these is Poki, which uses Monte Carlo simulation to choose actions during a game. More recently, the group has focused on the two-player (Heads-Up) variant, and has developed a series of programs that approximate Nash equilibrium strategies for the game. Several of these programs (such as Poki, SparBot and VexBot) are available in products such as Poker Academy from BioTools. In July 2007, Schaeffer announced a competition between the group's newest program, Polaris, and two human professionals, Phil Laak and Ali Eslami. The competition was held at the 2007 Association for the Advancement of Artificial Intelligence (AAAI) conference, which also hosted an international competition between computer poker programs.
Out of four matches against the human professionals, Polaris won one, tied one, and lost twice; overall, the humans won the competition by a small margin. In the computer competition, Polaris (playing under the name Hyperborean) won the Limit Hold'em event and came first in the No-Limit Hold'em event. In 2008, an updated version of Polaris defeated a team of human professionals in the Second Man-Machine Poker Competition. Currently Schaeffer was previously the vice-provost for information technology at the University of Alberta. On July 1, 2012, he started serving a five-year term as dean of science at the University of Alberta. He is a founder of Onlea, a nonprofit organization, which produces interactive online learning experiences such as Massive Open Online Courses. See also List of University of Waterloo people References Canadian Who's Who 1997. University of Toronto Press. Further reading Schaeffer, Jonathan. One Jump Ahead: Challenging Human Supremacy in Checkers, 1997, Springer. External links 1957 births Living people Canadian artificial intelligence researchers Canadian computer scientists Canada Research Chairs Canadian draughts players Canadian chess players Fellows of the Association for the Advancement of Artificial Intelligence Fellows of the Royal Society of Canada Game theorists Scientists from Toronto Chess players from Toronto Canadian poker players Academic staff of the University of Alberta University of Toronto alumni University of Waterloo alumni
Jonathan Schaeffer
[ "Mathematics" ]
967
[ "Game theorists", "Game theory" ]
345,286
https://en.wikipedia.org/wiki/Gel%20permeation%20chromatography
Gel permeation chromatography (GPC) is a type of size-exclusion chromatography (SEC) that separates high molecular weight or colloidal analytes on the basis of size or diameter, typically in organic solvents. The technique is often used for the analysis of polymers. As a technique, SEC was first developed in 1955 by Lathe and Ruthven. The term gel permeation chromatography can be traced back to J.C. Moore of the Dow Chemical Company, who investigated the technique in 1964. The proprietary column technology was licensed to Waters Corporation, who subsequently commercialized this technology in 1964. GPC systems and consumables are now also available from a number of manufacturers. It is often necessary to separate polymers, both to analyze them and to purify the desired product. When characterizing polymers, it is important to consider their size distribution and dispersity (Đ) as well as their molecular weight. Polymers can be characterized by a variety of definitions for molecular weight, including the number average molecular weight (Mn), the weight average molecular weight (Mw) (see molar mass distribution), the size average molecular weight (Mz), or the viscosity molecular weight (Mv). GPC allows for the determination of Đ as well as Mv and, based on other data, the Mn, Mw, and Mz can be determined. How it works GPC is a type of chromatography in which analytes are separated based on their size or hydrodynamic volume (radius of gyration). This differs from other chromatographic techniques, which depend upon chemical or physical interactions between the mobile and stationary phases to separate analytes. Separation occurs via the use of porous gel beads packed inside a column (see stationary phase (chemistry)). The principle of separation relies on the differential exclusion or inclusion of the macromolecules by the porous gel stationary phase. Larger molecules are excluded from entering the pores and elute earlier, while smaller molecules can enter the pores, thus staying longer inside the column. The entire process takes place without any interaction of the analytes with the surface of the stationary phase. Analytes that are small relative to the pore sizes can permeate the pores and spend more time inside the gel particles, increasing their retention time. Conversely, analytes that are large relative to the pore sizes spend little if any time inside the pores, hence they elute sooner. Each type of column has a range of molecular weights that can be separated, according to its pore sizes. If an analyte is too large relative to the column's pores, it will not be retained at all and will be totally excluded; conversely, if the analyte is small relative to the pore sizes, it will permeate totally. Analytes that are totally excluded elute with the free volume outside the particles (Vo), the total exclusion limit, while analytes that are completely delayed elute with the solvent, marking the total permeation volume of the column, which also includes the solvent held inside the pores (Vi). The total volume is given by the following equation, where Vg is the volume of the polymer gel and Vt is the total volume: Vt = Vg + Vi + Vo. As can be inferred, there is a limited range of molecular weights that can be separated by each column; therefore, the size of the pores for the packing should be chosen according to the range of molecular weight of the analytes to be separated. For polymer separations the pore sizes should be on the order of the size of the polymers being analyzed.
If a sample has a broad molecular weight range, it may be necessary to use several GPC columns with varying pore volumes in tandem to resolve the sample fully. Application GPC is often used to determine the relative molecular weight of polymer samples as well as the distribution of molecular weights. What GPC truly measures is the molecular volume and shape function as defined by the intrinsic viscosity. If comparable standards are used, this relative data can be used to determine molecular weights within ± 5% accuracy. Polystyrene standards with dispersities of less than 1.2 are typically used to calibrate the GPC. Unfortunately, polystyrene tends to be a very linear polymer, and therefore as a standard it is only useful for comparison with other polymers that are known to be linear and of relatively the same size. Material and methods Instrumentation Gel permeation chromatography is conducted almost exclusively in chromatography systems. The experimental design is not much different from other techniques of high-performance liquid chromatography. Samples are dissolved in an appropriate solvent (in the case of GPC these tend to be organic solvents), and after filtering, the solution is injected onto a column. The separation of the multi-component mixture takes place in the column. The constant supply of fresh eluent to the column is accomplished by the use of a pump. Since most analytes are not visible to the naked eye, a detector is needed. Often multiple detectors are used to gain additional information about the polymer sample. The availability of a detector makes the fractionation convenient and accurate. Gel Gels are used as the stationary phase for GPC. The pore size of a gel must be carefully controlled in order to be able to apply the gel to a given separation. Other desirable properties of the gel-forming agent are the absence of ionizing groups and, in a given solvent, low affinity for the substances to be separated. Commercial gels like PLgel & Styragel (cross-linked polystyrene-divinylbenzene), LH-20 (hydroxypropylated Sephadex), Bio-Gel (cross-linked polyacrylamide), HW-20 & HW-40 (hydroxylated methacrylic polymer), and agarose gel are often used based on different separation requirements. Column The column used for GPC is filled with a microporous packing material, the gel. Since the total penetration volume is the maximum volume permeated by the analytes, and there is no retention on the surface of the stationary phase, the total column volume is usually large relative to the sample volume. Eluent The eluent (mobile phase) should be an appropriate solvent to dissolve the polymer, should not interfere with the response of the polymer analyzed, and should wet the packing surface and make it inert to interactions with the polymers. The most common eluent, for polymers that dissolve at room temperature, is tetrahydrofuran (THF); o-dichlorobenzene and trichlorobenzene at 130–150 °C are used for crystalline polyolefins, and hexafluoroisopropanol (HFIP) for crystalline condensation polymers such as polyamides and polyesters. Pump There are two types of pumps available for uniform delivery of relatively small liquid volumes for GPC: piston or peristaltic pumps. The delivery of a constant flow free of fluctuations is especially important to the precision of the GPC analysis, as the flow-rate is used for the calibration of the molecular weight, or diameter.
Detector In GPC, the concentration by weight of polymer in the eluting solvent may be monitored continuously with a detector. There are many detector types available and they can be divided into two main categories. The first is concentration-sensitive detectors, which include UV-VIS absorption, differential refractometer (DRI) or refractive index (RI) detectors, infrared (IR) absorption and density detectors. The second category is molecular-weight-sensitive detectors, which include low-angle light scattering detectors (LALLS) and multi-angle light scattering (MALS) detectors. The resulting chromatogram is therefore a weight distribution of the polymer as a function of retention volume. The most sensitive detector is the differential UV photometer and the most common detector is the differential refractometer (DRI). When characterizing a copolymer, it is necessary to have two detectors in series. For accurate determinations of copolymer composition, at least two of those detectors should be concentration detectors. The determination of most copolymer compositions is done using UV and RI detectors, although other combinations can be used. Data analysis Gel permeation chromatography (GPC) has become the most widely used technique for analyzing polymer samples in order to determine their molecular weights and weight distributions. Typical examples are GPC chromatograms of polystyrene samples annotated with their molecular weights and dispersities. Benoit and co-workers proposed that the hydrodynamic volume, Vη, which is proportional to the product of [η] and M, where [η] is the intrinsic viscosity of the polymer in the SEC eluent, may be used as the universal calibration parameter. If the Mark–Houwink–Sakurada constants K and α are known (see Mark–Houwink equation), a plot of log [η]M versus elution volume (or elution time) for a particular solvent, column and instrument provides a universal calibration curve which can be used for any polymer in that solvent. By determining the retention volumes (or times) of monodisperse polymer standards (e.g. solutions of monodisperse polystyrene in THF), a calibration curve can be obtained by plotting the logarithm of the molecular weight versus the retention time or volume. Once the calibration curve is obtained, the gel permeation chromatogram of any other polymer can be obtained in the same solvent, and the molecular weights (usually Mn and Mw) and the complete molecular weight distribution for the polymer can be determined. The molecular weight of an unknown sample can then be read directly off such a calibration curve. Advantages As a separation technique, GPC has many advantages. First of all, it has a well-defined separation time due to the fact that there is a final elution volume for all unretained analytes. Additionally, GPC can provide narrow bands, although this aspect of GPC is more difficult for polymer samples that have broad ranges of molecular weights present. Finally, since the analytes do not interact chemically or physically with the column, there is a lower chance for analyte loss to occur. For investigating the properties of polymer samples in particular, GPC can be very advantageous. GPC provides a more convenient method of determining the molecular weights of polymers. In fact most samples can be thoroughly analyzed in an hour or less. Other methods used in the past were fractional extraction and fractional precipitation.
As these processes were quite labor-intensive, molecular weights and mass distributions typically were not analyzed. Therefore, GPC has allowed for the quick and relatively easy estimation of molecular weights and distributions for polymer samples. Disadvantages There are disadvantages to GPC, however. First, there is a limited number of peaks that can be resolved within the short time scale of the GPC run. Also, as a technique GPC requires at least around a 10% difference in molecular weight for a reasonable resolution of peaks to occur. With regard to polymers, the molecular masses of most of the chains will be too close for the GPC separation to show anything more than broad peaks. Another disadvantage of GPC for polymers is that filtrations must be performed before using the instrument to prevent dust and other particulates from ruining the columns and interfering with the detectors. Although useful for protecting the instrument, pre-filtration carries the risk of removing higher-molecular-weight material from the sample before it can be loaded on the column. One way to overcome these issues is separation by field-flow fractionation (FFF). Orthogonal methods Field-flow fractionation (FFF) can be considered as an alternative to GPC, especially when particles or high-molar-mass polymers cause clogging of the column, shear degradation is an issue, or agglomeration takes place but cannot be made visible. FFF is separation in an open flow channel with no static phase involved, so no such interactions occur. With one field-flow fractionation variant, thermal field-flow fractionation, separation of polymers having the same size but different chemical compositions is possible. References Biochemical separation processes Chromatography Polymers Polyolefins
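To make the slice-by-slice bookkeeping of the Data analysis section above concrete: once each chromatogram slice has a concentration reading and a molecular weight from the calibration curve, Mn, Mw and the dispersity follow from simple weighted sums. A minimal Python sketch with made-up slice values (the numbers are illustrative only, not from the article):

```python
import numpy as np

# Hypothetical chromatogram slices: detector heights h_i (proportional to
# weight concentration) and slice molecular weights M_i from the calibration.
h = np.array([1.0, 4.0, 8.0, 5.0, 2.0])
M = np.array([2e5, 1e5, 5e4, 2e4, 1e4])

Mn = h.sum() / (h / M).sum()     # number-average molecular weight
Mw = (h * M).sum() / h.sum()     # weight-average molecular weight
D  = Mw / Mn                     # dispersity (Đ = Mw / Mn)

print(f"Mn = {Mn:.3g}, Mw = {Mw:.3g}, D = {D:.2f}")
```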
Gel permeation chromatography
[ "Chemistry", "Materials_science", "Biology" ]
2,578
[ "Chromatography", "Biochemistry methods", "Separation processes", "Biochemical separation processes", "Polymer chemistry", "Polymers" ]
345,322
https://en.wikipedia.org/wiki/Barycentric%20Coordinate%20Time
Barycentric Coordinate Time (TCB, from the French Temps-coordonnée barycentrique) is a coordinate time standard intended to be used as the independent variable of time for all calculations pertaining to orbits of planets, asteroids, comets, and interplanetary spacecraft in the Solar System. It is equivalent to the proper time experienced by a clock at rest in a coordinate frame co-moving with the barycenter (center of mass) of the Solar System: that is, a clock that performs exactly the same movements as the Solar System but is outside the system's gravity well. It is therefore not influenced by the gravitational time dilation caused by the Sun and the rest of the system. TCB is the time coordinate for the Barycentric Celestial Reference System (BCRS). TCB was defined in 1991 by the International Astronomical Union, in Recommendation III of the XXIst General Assembly. It was intended as one of the replacements for the problematic 1976 definition of Barycentric Dynamical Time (TDB). Unlike former astronomical time scales, TCB is defined in the context of the general theory of relativity. The relationships between TCB and other relativistic time scales are defined with fully general relativistic metrics. The transformation between TCB and Geocentric Coordinate Time (TCG) may be approximated, to within a very small uncertainty in rate, as: TCB − TCG = c⁻² { ∫[t₀→t] (v_e²/2 + U_ext(x_e)) dt + v_e·(x − x_e) }, where x_e and v_e are the barycentric coordinate position and velocity of the geocenter, x is the barycentric position of the observer, t₀ is the origin of TCB and TCG, defined so that 1977 January 1, 00:00:00 TAI is 1977 January 1, 00:00:32.184 TCG / TCB, and U_ext(x_e) is the sum of the gravitational potentials of all Solar System bodies apart from the Earth, evaluated at the geocenter. The approximation discards higher-order terms in 1/c, as they have been found to be negligible. Because the reference frame for TCB is not influenced by the gravitational potential caused by the Solar System, TCB ticks faster than clocks on the surface of the Earth by 1.550505 × 10⁻⁸ (about 490 milliseconds per year). Consequently, the values of physical constants to be used with calculations using TCB differ from the traditional values of physical constants (The traditional values were in a sense wrong, incorporating corrections for the difference in time scales). Adapting the large body of existing software to change from TDB to TCB is an ongoing task, and many calculations continued to use TDB in some form. Time coordinates on the TCB scale are specified conventionally using traditional means of specifying days, inherited from slightly non-uniform time standards based on the rotation of the Earth. Specifically, both Julian Dates and the Gregorian calendar are used. For continuity with its predecessor Ephemeris Time, TCB was set to match ET at around Julian Date 2443144.5 (1977-01-01T00Z). More precisely, it was defined that TCB instant 1977-01-01T00:00:32.184 corresponds exactly to the International Atomic Time (TAI) instant 1977-01-01T00:00:00.000, at the geocenter. This is also the instant at which TAI introduced corrections for gravitational time dilation. See also Terrestrial Time References Time scales General relativity Time in astronomy
Barycentric Coordinate Time
[ "Physics", "Astronomy" ]
704
[ "Time in astronomy", "Physical quantities", "Time", "General relativity", "Astronomical coordinate systems", "Theory of relativity", "Spacetime", "Time scales" ]
345,332
https://en.wikipedia.org/wiki/Geocentric%20Coordinate%20Time
Geocentric Coordinate Time (TCG - Temps-coordonnée géocentrique) is a coordinate time standard intended to be used as the independent variable of time for all calculations pertaining to precession, nutation, the Moon, and artificial satellites of the Earth. It is equivalent to the proper time experienced by a clock at rest in a coordinate frame co-moving with the center of the Earth: that is, a clock that performs exactly the same movements as the Earth but is outside the Earth's gravity well. It is therefore not influenced by the gravitational time dilation caused by the Earth. TCG is the time coordinate for the Geocentric Celestial Reference System (GCRS). TCG was defined in 1991 by the International Astronomical Union. Unlike former astronomical time scales, TCG is defined in the context of the general theory of relativity. The relationships between TCG and other relativistic time scales are defined with fully general relativistic metrics. Because the reference frame for TCG is not rotating with the surface of the Earth and not in the gravitational potential of the Earth, TCG ticks faster than clocks on the surface of the Earth by about 7.0 × 10⁻¹⁰ in rate (about 22 milliseconds per year). Consequently, the values of physical constants to be used with calculations using TCG differ from the traditional values of physical constants. (The traditional values were in a sense wrong, incorporating corrections for the difference in time scales.) Adapting the large body of existing software to change from TDB (Barycentric Dynamical Time) to TCG is a formidable task, and as of 2002 many calculations continue to use TDB in some form. Time coordinates on the TCG scale are conventionally specified using traditional means of specifying days, carried over from non-uniform time standards based on the rotation of the Earth. Specifically, both Julian Dates and the Gregorian calendar are used. For continuity with its predecessor Ephemeris Time, TCG was set to match ET at around Julian Date 2443144.5 (1977-01-01T00Z). More precisely, it was defined that TCG instant 1977-01-01T00:00:32.184 corresponds exactly to TAI instant 1977-01-01T00:00:00.000. This is also the instant at which TAI introduced corrections for gravitational time dilation. TCG is a Platonic time scale: a theoretical ideal, not dependent on a particular realisation. For practical purposes, TCG must be realised by actual clocks in the Earth system. Because of the linear relationship between Terrestrial Time (TT) and TCG, the same clocks that realise TT also serve for TCG. See the article on TT for details of the relationship and how TT is realised. Barycentric Coordinate Time (TCB) is the analog of TCG, used for calculations relating to the Solar System beyond Earth orbit. TCG is defined by a different reference frame from TCB, such that they are not linearly related. Over the long term, TCG ticks more slowly than TCB by about 1.6 × 10⁻⁸ (about 0.5 seconds per year). In addition there are periodic variations, as Earth moves within the Solar System. When the Earth is at perihelion in January, TCG ticks even more slowly than it does on average, due to gravitational time dilation from being deeper in the Sun's gravity well and also velocity time dilation from moving faster relative to the Sun. At aphelion in July the opposite holds, with TCG ticking faster than it does on average. References Time scales Time in astronomy
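The per-year figures quoted above follow directly from the fractional rates; a trivial Python check (the rate constants are copied from the text, everything else is our scaffolding):

```python
# Convert the fractional rate differences quoted above into per-year drifts.
SECONDS_PER_YEAR = 365.25 * 86400

for label, rate in [
    ("TCG vs Earth-surface clocks", 7.0e-10),   # ~22 ms per year
    ("TCB vs TCG (long-term mean)", 1.6e-8),    # ~0.5 s per year
]:
    print(f"{label}: {rate * SECONDS_PER_YEAR:.3f} s/yr")
```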
Geocentric Coordinate Time
[ "Physics", "Astronomy" ]
760
[ "Time in astronomy", "Physical quantities", "Time", "Astronomical coordinate systems", "Spacetime", "Time scales" ]
345,351
https://en.wikipedia.org/wiki/List%20of%20montes%20on%20Venus
This is a list of montes (mountains, singular mons) on the planet Venus. Venusian mountains are all named after goddesses in the mythologies of various cultures, except for the Maxwell Montes. The four main mountain ranges of Venus are named Akna Montes, Danu Montes, Freyja Montes, and Maxwell Montes. These are found on Ishtar Terra. Mountain ranges are formed by the folding and buckling of a planet's crust. The mountain ranges of Venus, like those of the Earth, are characterized by many parallel folds and faults. The presence of mountain ranges on Venus may provide evidence that the planet's surface is in motion. Montes Key DIAM — Longest dimension of feature in kilometres AS — Approval status (1) — Adopted by the International Astronomical Union (IAU) General Assembly (2) — Working Group for Planetary System Nomenclature (WGPSN) approval (3) — Dropped, no longer in use See also List of coronae on Venus List of craters on Venus List of tallest mountains in the Solar System List of mountains on Mars by height References External links List of named mountains on Venus BBC: Venus has 'heavy metal mountains' Venus Surface features of Venus Venus-related lists
List of montes on Venus
[ "Astronomy" ]
253
[ "Lists of extraterrestrial mountains", "Astronomy-related lists" ]
345,396
https://en.wikipedia.org/wiki/List%20of%20differential%20geometry%20topics
This is a list of differential geometry topics. See also glossary of differential and metric geometry and list of Lie group topics. Differential geometry of curves and surfaces Differential geometry of curves List of curves topics Frenet–Serret formulas Curves in differential geometry Line element Curvature Radius of curvature Osculating circle Curve Fenchel's theorem Differential geometry of surfaces Theorema egregium Gauss–Bonnet theorem First fundamental form Second fundamental form Gauss–Codazzi–Mainardi equations Dupin indicatrix Asymptotic curve Curvature Principal curvatures Mean curvature Gauss curvature Elliptic point Types of surfaces Minimal surface Ruled surface Conical surface Developable surface Nadirashvili surface Foundations Calculus on manifolds See also multivariable calculus, list of multivariable calculus topics Manifold Differentiable manifold Smooth manifold Banach manifold Fréchet manifold Tensor analysis Tangent vector Tangent space Tangent bundle Cotangent space Cotangent bundle Tensor Tensor bundle Vector field Tensor field Differential form Exterior derivative Lie derivative pullback (differential geometry) pushforward (differential) jet (mathematics) Contact (mathematics) jet bundle Frobenius theorem (differential topology) Integral curve Differential topology Diffeomorphism Large diffeomorphism Orientability characteristic class Chern class Pontrjagin class spin structure differentiable map submersion immersion Embedding Whitney embedding theorem Critical value Sard's theorem Saddle point Morse theory Lie derivative Hairy ball theorem Poincaré–Hopf theorem Stokes' theorem De Rham cohomology Sphere eversion Frobenius theorem (differential topology) Distribution (differential geometry) integral curve foliation integrability conditions for differential systems Fiber bundles Fiber bundle Principal bundle Frame bundle Hopf bundle Associated bundle Vector bundle Tangent bundle Cotangent bundle Line bundle Jet bundle Fundamental structures Sheaf (mathematics) Pseudogroup G-structure synthetic differential geometry Riemannian geometry Fundamental notions Metric tensor Riemannian manifold Pseudo-Riemannian manifold Levi-Civita connection Non-Euclidean geometry Non-Euclidean geometry Elliptic geometry Spherical geometry Sphere-world Angle excess hyperbolic geometry hyperbolic space hyperboloid model Poincaré disc model Poincaré half-plane model Poincaré metric Angle of parallelism Geodesic Prime geodesic Geodesic flow Exponential map (Lie theory) Exponential map (Riemannian geometry) Injectivity radius Geodesic deviation equation Jacobi field Symmetric spaces (and related topics) Riemannian symmetric space Margulis lemma Space form Constant curvature taut submanifold Uniformization theorem Myers theorem Gromov's compactness theorem Riemannian submanifolds Gauss–Codazzi equations Darboux frame Hypersurface Induced metric Nash embedding theorem minimal surface Helicoid Catenoid Costa's minimal surface Hsiang–Lawson's conjecture Curvature of Riemannian manifolds Theorema Egregium Gauss–Bonnet theorem Chern–Gauss–Bonnet theorem Chern–Weil homomorphism Gauss map Second fundamental form Curvature form Riemann curvature tensor Geodesic curvature Scalar curvature Sectional curvature Ricci curvature, Ricci flat Ricci decomposition Schouten tensor Weyl curvature Ricci flow Einstein manifold Holonomy Theorems in Riemannian geometry Gauss–Bonnet theorem Hopf–Rinow theorem Cartan–Hadamard theorem Myers theorem Rauch comparison theorem Morse index theorem Synge theorem Weinstein 
theorem Toponogov theorem Sphere theorem Hodge theory Uniformization theorem Yamabe problem Isometry Killing vector field Myers-Steenrod theorem Laplace–Beltrami operator Hodge star operator Weitzenböck identity Laplacian operators in differential geometry Formulas and other tools List of coordinate charts List of formulas in Riemannian geometry Christoffel symbols Related structures Intrinsic metric Pseudo-Riemannian manifold Sub-Riemannian manifold Finsler geometry General relativity G2 manifold Information geometry Fisher information metric Lie groups Connections covariant derivative exterior covariant derivative Levi-Civita connection parallel transport Development (differential geometry) connection form Cartan connection affine connection conformal connection projective connection method of moving frames Cartan's equivalence method Vierbein, tetrad Cartan connection applications Einstein–Cartan theory connection (vector bundle) connection (principal bundle) Ehresmann connection curvature curvature form holonomy, local holonomy Chern–Weil homomorphism Curvature vector Curvature form Curvature tensor Cocurvature torsion (differential geometry) Complex manifolds Riemann surface Complex projective space Kähler manifold Dolbeault operator CR manifold Stein manifold Almost complex structure Hermitian manifold Newlander–Nirenberg theorem Generalized complex manifold Calabi–Yau manifold Hyperkähler manifold K3 surface hypercomplex manifold Quaternion-Kähler manifold Symplectic geometry Symplectic topology Symplectic space Symplectic manifold Symplectic structure Symplectomorphism Contact structure Contact geometry Hamiltonian system Sasakian manifold Poisson manifold Conformal geometry Möbius transformation Conformal map conformal connection tractor bundle Weyl curvature Weyl–Schouten theorem ambient construction Willmore energy Willmore flow Index theory Atiyah–Singer index theorem de Rham cohomology Dolbeault cohomology elliptic complex Hodge theory pseudodifferential operator Homogeneous spaces Klein geometry, Erlangen programme symmetric space space form Maurer–Cartan form Examples hyperbolic space Gauss–Bolyai–Lobachevsky space Grassmannian Complex projective space Real projective space Euclidean space Stiefel manifold Upper half-plane Sphere Systolic geometry Loewner's torus inequality Pu's inequality Gromov's inequality for complex projective space Wirtinger inequality (2-forms) Gromov's systolic inequality for essential manifolds Essential manifold Filling radius Filling area conjecture Bolza surface First Hurwitz triplet Hermite constant Systoles of surfaces Systolic freedom Systolic category Other Envelope (mathematics) Bäcklund transform Differential geometry Differential geometry Differential geometry
List of differential geometry topics
[ "Mathematics" ]
1,215
[ "nan" ]
345,573
https://en.wikipedia.org/wiki/Alfred%20University
Alfred University is a private university in Alfred, New York, United States. It has a total undergraduate population of approximately 1,600 students. The university hosts the statutory New York State College of Ceramics, which includes The Inamori School of Engineering and the School of Art and Design. History Alfred University was founded as a non-sectarian select school by Seventh Day Baptists. In 1836, Bethuel C. Church, a Seventh Day Baptist, was asked to organize a college in Alfred and began teaching, receiving financial assistance from the Seventh Day Baptist Educational Society with resources, in part, from "Female Educational Societies" of local churches. Unusual for the time, the school was co-educational, and within its first 20 years, it also enrolled its first African-American and Native American students. From its founding as a select school, the institution received a charter as Alfred Academy from the New York State Board of Regents in 1842. Focused initially on the education of teachers, the institution continued to grow. In 1855, a curriculum was created for the Academic Department and the Collegiate Department, with courses divided into three areas: the classical, the scientific, and a course for women covering most subjects in the other two areas. There was no theology course in the initial period; however, the desire to organize a theological seminary led the academy, through Jonathan Allen, an early teacher and later its second president, to apply for a license as a government-accredited university. After facing difficulties for more than two years, he received a charter as Alfred University from the New York State Legislature in March 1857, and the Department of Theology was created some years later. Although preceded by the short-lived New York Central College, Alfred University is the oldest surviving co-educational college in New York and New England, and the oldest college in the United States to admit women to all its programs of study, rather than having female-specific programs. In 1900, the New York State Legislature approved the formation of "a State School of Clay-Working and Ceramics" at Alfred University, with the intention of establishing a public college "to serve New York State industry and assist in developing New York State raw materials and assist its ceramic industry." The college has evolved into the New York State College of Ceramics at Alfred University and contains certain departments of both the School of Engineering and the School of Art and Design. The engineering curriculum includes the study of ceramics and glass, while the School of Art and Design provides art practice instruction in ceramics and glass. The College of Ceramics remains part of the State University of New York system, while Alfred University also maintains a College of Liberal Arts and Sciences and a College of Business in its private sector. In 1908, the New York State Legislature approved the formation of the New York College of Agriculture at Alfred University. That college became autonomous in 1941 as a junior college and, in 1948, became a member of the State University of New York system. While a separate and autonomous institution, Alfred State College, located on the opposite side of Main Street in the Village of Alfred, maintains close relations with Alfred University, and both institutions host an annual "Hot Dog Day" in the spring. The origin of the name "Alfred" is uncertain. 
Residents of the town and students at the two schools believe that the town received its name in honor of Alfred the Great, king of the Saxons, although the first documented occurrence of this connection was in 1881, 73 years after the first record of the name being used to describe the geophysical area during assignments by the state legislature. State records which might have verified the connection between the Saxon king and the university were lost in a fire in 1911. Regardless of whether the connection is historically accurate, Alfred University has embraced King Alfred as a symbol of the school's educational values, and a statue of the king stands in the center of the campus quad. Alfred University has hosted guest lecturers, artists and musicians including Frederick Douglass, Ralph Waldo Emerson and Ghostface Killah. In April 2000, Alfred University received national attention when freshman Eric Zuckerman orchestrated a campus visit from then–First Lady, Hillary Clinton, during her campaign for the United States Senate from New York. In the 1990s, Alfred University, together with Corning Incorporated and the State of New York, began developing the Ceramic Corridor, an incubator project designed to take advantage of the emerging ceramics industry and to create new jobs. This industrial development program has focused on developing start-up industries between Corning, NY and Alfred, NY and includes business incubator facilities in Alfred and Corning. Since its initiation, the incubator facility in Alfred has joined The Western New York Incubator Network. In 1971, the village of Alfred, where the university is located, became only the fourth municipality in the U.S. to ban employment discrimination based on sexuality. Amidst the dissolution of the AU Greek System, the Lambda Chi Alpha fraternity chapter at Alfred University led a successful effort to ban discrimination based on religion, age, disability, and sexual orientation in the constitution of the 210 chapter international fraternity in 2002. Alfred University's ranking by U.S. News & World Report in its 2021 edition of Best Colleges is Regional Universities North, #45, while in 2019 the university had an acceptance rate of 66% with the middle 50% of students admitted having an SAT score between 940 and 1180 or an ACT score between 20 and 27. Events and culture Mascot Alfred University's athletic teams became known as The Saxons in 1929, but did not institute an official mascot when the moniker was selected. In 1940, two Kappa Psi Upsilon brothers, James Lippke and Walter Lawrence, developed a character named Lil Alf to be used on their fraternity house's signs during football games. In his original design, Lil Alf was a knight in shining armor, simplified to a small cartoonish form in a 1948 redesign. Lil Alf was not formally adopted as a campus mascot, with many sports teams complaining that he was "too cute and not fierce enough." The use of his image was formally banned on official publications by the university's Visual Identity Standards document. In spite of opposition, his image remained ubiquitous through the 2000s and was common on unofficial sports signs and clothing. In 2013 the university introduced Lil Alf as its official mascot. He was redesigned to feature a somewhat more historically accurate armor and helmet in Alfred University's purple and gold. The Black Knight The Black Knight has been a part of Alfred University folklore since the early 1900s. 
The relic was originally part of a parlor stove in a classroom in Kanakadea Hall. When the stove was discarded, the figure was claimed by the Class of 1908 as their mascot. They passed it on to the Class of 1910, thus causing a "war of possession" between the even- and odd-numbered classes. Many times over the years it disappeared and re-appeared on campus. In 2005 it was transferred to a glass case in the Powell Campus Center, along with a plaque describing its history. However, after only a few months, the glass enclosure was destroyed in the middle of the night and the Black Knight stolen. Hot Dog Day Hot Dog Day, one of the largest yearly gatherings in Alfred, was first organized in 1972 by Mark O'Meara and Eric Vaughn as a way to bring the community together, raise money for local charities, and improve the reputations of campus Greek life. Since then the event has been organized and run by Alfred University and Alfred State College. From 2014 through 2022 the festival was held on alternating campuses, but in 2023 it resumed at its original location on Main Street, Alfred. The event usually features live music, a soapbox derby, vendors, and carnival games for local children. In popular culture Alfred University was mentioned on Saturday Night Live once in 1975 by host and Alfred University alumnus Robert Klein. When Klein hosted SNL again in 1977, he talked at length about Alfred University in his monologue. Campus There are two libraries on Alfred's campus: the Herrick Memorial Library, which primarily serves the private colleges, and the Scholes Library, which primarily serves the New York State College of Ceramics. The Alfred Ceramic Art Museum has a collection of 8,000 ceramic objects, including both ancient and modern ceramic art and craft. Alfred has an astronomy program with the 7-telescope Stull Observatory, which has one of the largest optical telescopes in New York state. Asteroid 31113 Stull was named for physics professor John Stull, who helped establish the observatory in 1966. The Bromley-Daggett Equestrian Center, located at the Maris Cuneo Equine Park, was constructed in 2005. It hosts equine classes, an intramural equestrian team, varsity and JV teams for both English and Western disciplines, clinics, and horse shows. Stalls are available for boarding by university students. The facility has a 16,000 ft² indoor arena and lighted outdoor arenas. The Miller Performing Arts Center was dedicated in 1995. Alfred University was once associated with the Seventh Day Baptist Church; until 1945 all of its presidents were drawn from among the Seventh Day Baptists, and it had a school of theology. Formerly the campus chapel, Alumni Hall is now used primarily to house the Admissions and Financial Aid Departments, and has a place on the National Register of Historic Places. In the mid-1980s, Alumni Hall was preserved through a restoration effort. Alfred's Davis Memorial Carillon, erected in 1937 as a tribute to longtime president Boothe C. Davis, can occasionally be heard while on campus. The bells of the carillon, purchased from Antwerp, were thought to be the oldest bells in the western hemisphere. Research later (2004) showed that the bells were of a more recent vintage, and that Alfred had been the victim of a fraud. On the brighter side, the non-historic nature of the bells allows the university to replace those that have poor tonal quality. 
Besides the resident carillonneur, guest carillonneurs have in the past visited and played during the summer. Academics Colleges and schools Alfred University has 47 majors across its four colleges and schools. Alfred's four private colleges are The College of Liberal Arts and Sciences, The College of Professional Studies, The Inamori School of Engineering, and The Graduate School. The School of Business is part of The College of Professional Studies. The New York State College of Ceramics (NYSCC) consists of the School of Art and Design, with its own dean, and four state-supported materials programs cross-organized within Alfred University's School of Engineering. The College of Ceramics is functioning technically as a "holding entity" for the fiscal support of the state programs and the NYSCC mission. The unit head assists with budget preparation for the two aforementioned AU schools and the NYSCC-affiliated Scholes Library of Ceramics (part of the campuswide, unified AU library system), and acts in a liaison role to SUNY. The School of Art and Design, technically a sub-unit of the College of Ceramics but autonomously run with its own dean, is further subdivided into divisions. A visit to the school in 2009 led media historian Siegfried Zielinski to state that Alfred is "the center of alchemy for the 21st century." Alfred's School of Engineering (also autonomously run with its own dean) currently has four state-supported programs and two privately endowed programs. Partnerships Alfred University maintains a research agreement with the China University of Geosciences in Wuhan. Alfred University also hosted a Confucius Institute supported by the China University of Geosciences since 2009. The partnerships gained attention in 2023 when the United States House Select Committee on Strategic Competition between the United States and the Chinese Communist Party announced a probe into them over national security concerns. In June 2023, Alfred University announced that it was closing its Confucius Institute but did not state that it would end its partnership with the China University of Geosciences. Rankings For its 2022-2023 ranking, U.S. News & World Report ranked Alfred University tied for #48 in Regional Universities North. Museums and galleries Alfred University and The New York State College of Ceramics (NYSCC) are associated with five galleries: Alfred Ceramic Art Museum, The Cohen Center for the Arts Gallery, The Fosdick-Nelson Gallery, Robert C. Turner Gallery, and Institute for Electronic Art's (IEA) John Woods Studios. Other exhibition spaces for undergraduate and graduate students to show work include the Sculpture Dimensional Studies Exhibition Spaces (the Cube, the Box and the Cell Space), the Printmaking Critique Room, Flex Space, the New Deal, and Rhodes Room. The Robert C. Turner Gallery Alfred University's student-run gallery, the Robert C. Turner Gallery, was refurbished in 2011 during a building improvement project. The gallery was once a unique space that hosted undergraduate experimental shows with a loose criteria that encouraged experimentation. The gallery now has two floors; the main space and the catwalk, which also has a "black box" interactive space for expanded (electronic) media. This gallery space is named after internationally acclaimed artist and Alfred University alumnus, Robert C. Turner, a former professor of ceramic art at Alfred University with a sixty-year-long career in ceramics. 
IEA John Wood Studios NYSCC is host to the John Wood Studios of the Institute of Electronic Arts (IEA) within the School of Art and Design (SoAD); the studios offer a residency program of up to two weeks for international artists. Student life Current student organizations As of 2020, Alfred has over 80 student organizations and clubs. There are three main media organizations on campus: AUTV, the Fiat Lux newspaper, and the WALF 89.7FM radio station. The student-run yearbook, the Kanakadea, ceased publication in 2014. Notable extracurricular clubs include the Student Activities Board, Forest People, and Art Force Five. AU has been granted chapters of a number of honor societies, including Phi Beta Kappa (the Alpha Gamma chapter of New York, granted in 2004), Phi Kappa Phi, and Alpha Lambda Delta; Alfred also has chapters of the service societies Alpha Phi Omega and Omicron Delta Kappa. Other honor societies include Alpha Iota Delta, Beta Gamma Sigma, Delta Mu Delta, Omicron Delta Epsilon, Pi Gamma Mu, Pi Mu Epsilon (the Alpha Iota chapter of New York, chartered in 2002), Pi Sigma Alpha, Sigma Tau Delta, Tau Beta Pi, Phi Alpha Theta, Phi Sigma Iota, Psi Chi, Keramos, and the Financial Management Association. Greek social organizations Fraternities and sororities existed at Alfred University for nearly 100 years prior to 2002, when they were discontinued, partially in response to the death of Zeta Beta Tau (ZBT) fraternity member Benjamin Klein under suspicious circumstances and charges of gross negligence on behalf of the fraternity. In 1978, prior to Klein's death, student Chuck Stenzel died in a hazing-related incident at Alfred's Klan Alpine fraternity. After Stenzel's death, his mother, Eileen Stevens, created a lobbying organization to increase awareness of hazing and promote anti-hazing laws, as documented in Hank Nuwer's book "Broken Pledges" and a later TV movie of the same name (in which Alfred was not named for legal reasons). Stevens later served as an advisor to Alfred on hazing-related issues, and received an honorary doctorate from the school in 1999. During the summer of 2002, all Greek social organizations lost recognition after an in-depth analysis of the Alfred University Greek system by an eight-member task force appointed by the board of trustees. More than 50% of the task force were themselves members of a fraternity or sorority while in college, and 82% of the board of trustees are Alfred University alumni. While Alfred University has banned fraternities and sororities, Alfred State College has not, and these organizations remain active within the village of Alfred. Athletics Alfred teams participate as a member of the National Collegiate Athletic Association's Division III, with the exception of alpine skiing, which is governed by the USCSA, and the equestrian team, which is governed by the IHSA. The Saxons are a member of the Empire 8 Athletic Conference (Empire 8). They compete in the following sports: alpine skiing, basketball, cross country, equestrian, football, lacrosse, soccer, swimming and diving, tennis, track and field, women's volleyball, and women's softball. On July 15, 2020, due to the COVID-19 pandemic, the Empire 8 Conference postponed all fall sports. Sports have since resumed operating as normal. Notable alumni and faculty See also List of university art museums and galleries in New York State References This article incorporates material from Statutory college. 
External links Official athletics website 1836 establishments in New York (state) Education in Rochester, New York Educational institutions established in 1836 Materials science institutes Universities and colleges in Allegany County, New York Private universities and colleges in New York (state) Glassmaking schools
Alfred University
[ "Materials_science", "Engineering" ]
3,452
[ "Glass engineering and science", "Materials science organizations", "Glassmaking schools", "Materials science institutes" ]
345,674
https://en.wikipedia.org/wiki/Beta%20Ursae%20Minoris
Kochab, Bayer designation Beta Ursae Minoris (β Ursae Minoris, abbreviated β UMi, Beta UMi), is the brightest star in the bowl of the Little Dipper asterism (which is part of the constellation of Ursa Minor), and only slightly fainter than Polaris, the northern pole star and brightest star in Ursa Minor. Kochab is 16 degrees from Polaris and has an apparent visual magnitude of 2.08. The distance to this star from the Sun can be deduced from the parallax measurements made during the Hipparcos mission. Amateur astronomers can use Kochab as a precise guide for equatorial mount alignment: The celestial north pole is located 38 arcminutes away from Polaris, very close to the line connecting Polaris with Kochab. Nomenclature β Ursae Minoris (Latinised to Beta Ursae Minoris) is the star's Bayer designation. It bore the traditional name Kochab, which appeared in the Renaissance and has an uncertain meaning. It may be from a Hebrew or an Arabic word, both of which are broadly used to describe a celestial body and can be translated as 'planet' or 'star'. (The Hebrew term was also applied to the planet Mercury, especially due to its lack of distinguishing features in comparison to other visible planets.) However, it is more likely derived from a name that was applied to Theta Ursae Majoris. In 2016, the International Astronomical Union organized a Working Group on Star Names (IAU-WGSN) to catalog and standardize proper names for stars. The IAU-WGSN's first bulletin, of July 2016, included a table of the first two batches of names approved by the IAU-WGSN, which included Kochab for this star. In Chinese astronomy, the 'North Pole' refers to an asterism consisting of Beta Ursae Minoris, Gamma Ursae Minoris, 5 Ursae Minoris, 4 Ursae Minoris and Σ 1694. Consequently, the Chinese name for Beta Ursae Minoris itself is 'the Second Star of North Pole', representing 'the emperor'. Properties This is a red giant star with a stellar classification of K4 III. Kochab has reached a state in its evolution where the outer envelope has expanded to 44 times the radius of the Sun. This enlarged star is radiating 540 times as much light from its outer atmosphere as the Sun, but through a surface more than 1,470 times larger than the Sun's surface area, hence at a lower effective temperature of 4,126 K. (The Sun's effective temperature is 5,772 K.) This relatively low heat gives the star the typical orange-hued glow of a K-type star. It is not known for certain whether Kochab is on the red giant branch, fusing hydrogen into helium in a shell surrounding an inert helium core, or on the horizontal branch, fusing helium into carbon. By modelling this star based upon evolutionary tracks, its mass can be estimated. A mass estimate using the interferometrically measured radius of this star and its spectroscopically determined surface gravity yields 2.5 ± 0.9 solar masses. The star is known to undergo periodic variations in luminosity over roughly 4.6 days, with the astroseismic frequencies depending sensitively on the star's mass. From this, a much lower mass estimate of 1.3 ± 0.3 solar masses is reached. As the pole star From around 2500 BCE, as Thuban became less and less aligned with the north celestial pole, Kochab became one "pillar" of the circumpolar stars, first with Mizar, a star in the middle of the handle of the Big Dipper (Ursa Major), and later with Pherkad (in Ursa Minor). 
In fact, around the year 2467 BCE, the true north was best determined by drawing a plumb line between Mizar and Kochab, a fact with which the Ancient Egyptians were well acquainted, as they aligned the great Pyramid of Giza with it. This cycle of the succession of pole stars occurs due to the precession of the equinoxes. Kochab and Mizar were referred to by Ancient Egyptian astronomers as 'The Indestructibles' lighting the North. As precession continued, by the year 1100 BCE, Kochab was within roughly 7° of the north celestial pole, with old references over-emphasizing this near pass by referring to Beta Ursae Minoris as "Polaris", relating it to the current pole star, Polaris, which is slightly brighter and will have a much closer alignment of less than 0.5° by 2100 CE. This change in the identity of the pole stars is a result of Earth's axial precession. After 2000 BCE, Kochab and a new star, its neighbor Pherkad, were closer to the pole and together served as twin pole stars, circling the North Pole from around 1700 BCE until just after 300 CE. Neither star was as close to the north celestial pole as Polaris is now. Today, they are sometimes referred to as the "Guardians of the Pole". Planetary system Estimated to be around 2.95 billion years old, give or take 1 billion years, Kochab was announced to have a planetary companion around 6.1 times as massive as Jupiter with an orbit of 522 days. References Ursae Minoris, Beta K-type giants Ursa Minor Northern pole stars Kochab Ursae Minoris, 07 072607 5563 131873 Durchmusterung objects Planetary systems with one confirmed planet
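The radius, temperature, and luminosity quoted in the Properties section above can be loosely cross-checked with the Stefan-Boltzmann law, L/L_sun = (R/R_sun)² × (T/T_sun)⁴. The following is a minimal sketch of that arithmetic in Python; it uses the article's rounded values as inputs, so only rough agreement with the quoted ~540 solar luminosities should be expected.

```python
# Rough Stefan-Boltzmann cross-check of Kochab's quoted parameters.
# Inputs are the article's rounded figures, so the result only
# approximately reproduces the quoted ~540 L_sun.
R_RATIO = 44.0      # radius in solar radii (quoted above)
T_STAR = 4126.0     # effective temperature in kelvins (quoted above)
T_SUN = 5772.0      # solar effective temperature in kelvins (quoted above)

L_ratio = R_RATIO**2 * (T_STAR / T_SUN)**4
print(f"L/L_sun ~ {L_ratio:.0f}")   # prints ~506, the same order as ~540
```

The small gap between ~506 and the quoted ~540 reflects rounding in the published radius and temperature, not an error in the law.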
Beta Ursae Minoris
[ "Astronomy" ]
1,183
[ "Ursa Minor", "Constellations" ]
345,675
https://en.wikipedia.org/wiki/Envelope%20detector
An envelope detector (sometimes called a peak detector) is an electronic circuit that takes a (relatively) high-frequency signal as input and outputs the envelope of the original signal. Diode detector A simple form of envelope detector used in detectors for early radios is the diode detector. Its output approximates a voltage-shifted version of the input's upper envelope. Between the circuit's input and output is a diode that performs half-wave rectification, allowing substantial current flow only when the input voltage is around a diode drop higher than the output terminal. The output is connected to a capacitor of value C and a resistor of value R in parallel to ground. The capacitor is charged as the input voltage approaches its positive peaks. At other times, the capacitor is gradually discharged through the resistor. The resistor and capacitor form a 1st-order low pass filter, which attenuates higher frequencies at a rate of −6 dB per octave above its cutoff frequency of 1/(2πRC). The filter's RC time constant must be small enough to track quickly-falling envelope slopes and "top up" the envelope's voltage every peak to prevent negative peak clipping. AM demodulation Envelope detectors can be used to demodulate an amplitude modulated (AM) signal. Such a device is often used to demodulate AM radio signals because the envelope of the modulated signal is equivalent to the baseband signal. To sufficiently attenuate the carrier wave frequency fc, the cutoff frequency of the low-pass filter should be well below the carrier wave's frequency. To avoid negative peak clipping, the original signal that is modulated is usually limited to a maximum frequency fm to limit the maximum rate of fall of the AM signal. To minimize distortions from both ripple and negative peak clipping, the following inequality should be observed: fm ≪ 1/(2πRC) ≪ fc. Next, to filter out the DC component, the output could pass through a simple high-pass filter, such as a DC-blocking capacitor. General considerations Most practical envelope detectors use either half-wave or full-wave rectification of the signal to convert the AC audio input into a pulsed DC signal. Full-wave rectification traces both positive and negative peaks of the envelope. Half-wave rectification ignores negative peaks, which may be acceptable depending on the application, particularly if the input signal is symmetric about the horizontal axis. Low threshold voltage diodes (e.g. germanium or Schottky diodes) may be preferable for tracking very small envelopes. The filtering for smoothing the final result is rarely perfect and some "ripple" is likely to remain on the output, particularly for low frequency inputs such as from a bass instrument. Reducing the filter cutoff frequency gives a smoother output, but designers must weigh this against the circuit's high frequency response. Definition of the envelope Any AM or FM signal x(t) can be written in the following form: x(t) = R(t)·cos(ωt + φ(t)). In the case of AM, φ(t) (the phase component of the signal) is constant and can be ignored. In AM, the carrier frequency ω is also constant. Thus, all the information in the AM signal is in R(t). R(t) is called the envelope of the signal. Hence an AM signal is given by the function x(t) = (C + m(t))·cos(ωt), with m(t) representing the original audio frequency message, C the carrier amplitude and R(t) equal to C + m(t). So, if the envelope of the AM signal can be extracted, the original message can be recovered. In the case of FM, the transmitted signal has a constant envelope R(t) = R and can be ignored. 
However, many FM receivers measure the envelope anyway for received signal strength indication. Precision detector An envelope detector can also be constructed using a precision rectifier feeding into a low-pass filter. Drawbacks The envelope detector has several drawbacks: The input to the detector must be band-pass filtered around the desired signal, or else the detector will simultaneously demodulate several signals; the filtering can be done with a tunable filter or, more practically, a superheterodyne receiver. It is more susceptible to noise than a product detector. If the signal is overmodulated (i.e. modulation index > 1), distortion will occur. Most of these drawbacks are relatively minor and are usually acceptable tradeoffs for the simplicity and low cost of using an envelope detector. Audio An envelope detector is sometimes referred to as an envelope follower in musical environments. It is still used to detect the amplitude variations of an incoming signal to produce a control signal that resembles those variations. However, in this case the input signal is made up of audible frequencies. Envelope detectors are often a component of other circuits, such as a compressor or an auto-wah or envelope-followed filter. In these circuits, the envelope follower is part of what is known as the "side chain", a circuit which describes some characteristic of the input, in this case its volume. Both expanders and compressors use the envelope's output voltage to control the gain of an amplifier. Auto-wah uses the voltage to control the cutoff frequency of a filter. The voltage-controlled filter of an analog synthesizer is a similar circuit. Modern envelope followers can be implemented: directly as electronic hardware, or as software using either a digital signal processor (DSP) or a general-purpose CPU. See also Analytic signal Attack–decay–sustain–release envelope References External links Envelope detector Envelope and envelope recovery Electronic music Audio engineering Communication circuits Detectors Demodulation
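To make the diode-detector behaviour described above concrete, here is a small simulation sketch in Python (NumPy assumed available). It models an idealized diode that charges the capacitor instantly on peaks and lets it decay through R otherwise; all component values and frequencies are illustrative choices, picked only to satisfy the design inequality fm ≪ 1/(2πRC) ≪ fc given earlier.

```python
import numpy as np

fs = 1_000_000        # sample rate (Hz); all values here are illustrative
fc, fm = 50_000, 100  # carrier and message frequencies (Hz), fc >> fm
t = np.arange(0, 0.02, 1/fs)

# AM signal with carrier amplitude C and message m(t): envelope R(t) = C + m(t)
C = 1.0
m = 0.5 * np.sin(2*np.pi*fm*t)
am = (C + m) * np.cos(2*np.pi*fc*t)

# Idealized diode + RC model: the capacitor charges instantly to any input
# peak above its stored voltage and otherwise discharges through R.
# RC = 3e-4 s puts 1/(2*pi*RC) ~ 531 Hz between fm and fc, per the
# design inequality discussed above.
RC = 3e-4
decay = np.exp(-1/(fs*RC))                # per-sample discharge factor
env = np.zeros_like(am)
for i in range(1, len(am)):
    env[i] = max(am[i], env[i-1]*decay)   # charge on peaks, decay otherwise

demod = env - env.mean()   # crude DC removal, standing in for a blocking cap
```

The recovered `demod` tracks m(t) plus a small carrier-rate ripple; shrinking RC reduces negative peak clipping at the cost of larger ripple, which is exactly the trade-off the inequality expresses.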
Envelope detector
[ "Engineering" ]
1,134
[ "Radio electronics", "Telecommunications engineering", "Demodulation", "Electrical engineering", "Audio engineering", "Communication circuits" ]
345,704
https://en.wikipedia.org/wiki/Poisson%20algebra
In mathematics, a Poisson algebra is an associative algebra together with a Lie bracket that also satisfies Leibniz's law; that is, the bracket is also a derivation. Poisson algebras appear naturally in Hamiltonian mechanics, and are also central in the study of quantum groups. Manifolds with a Poisson algebra structure are known as Poisson manifolds, of which the symplectic manifolds and the Poisson–Lie groups are a special case. The algebra is named in honour of Siméon Denis Poisson. Definition A Poisson algebra is a vector space over a field K equipped with two bilinear products, ⋅ and {, }, having the following properties: The product ⋅ forms an associative K-algebra. The product {, }, called the Poisson bracket, forms a Lie algebra, and so it is anti-symmetric and obeys the Jacobi identity. The Poisson bracket acts as a derivation of the associative product ⋅, so that for any three elements x, y and z in the algebra, one has {x, y ⋅ z} = {x, y} ⋅ z + y ⋅ {x, z}. The last property often allows a variety of different formulations of the algebra to be given, as noted in the examples below. Examples Poisson algebras occur in various settings. Symplectic manifolds The space of real-valued smooth functions over a symplectic manifold forms a Poisson algebra. On a symplectic manifold, every real-valued function H on the manifold induces a vector field X_H, the Hamiltonian vector field. Then, given any two smooth functions F and G over the symplectic manifold, the Poisson bracket may be defined as {F, G} = X_F(G). This definition is consistent in part because the Poisson bracket acts as a derivation. Equivalently, one may define the bracket {,} via X_{F,G} = [X_F, X_G], where [,] is the Lie bracket of vector fields (the Lie derivative). When the symplectic manifold is R²ⁿ with the standard symplectic structure, the Poisson bracket takes on the well-known form {F, G} = Σᵢ (∂F/∂qᵢ · ∂G/∂pᵢ − ∂F/∂pᵢ · ∂G/∂qᵢ), summing over i = 1, ..., n. Similar considerations apply for Poisson manifolds, which generalize symplectic manifolds by allowing the symplectic bivector to be rank deficient. Lie algebras The tensor algebra of a Lie algebra has a Poisson algebra structure. A very explicit construction of this is given in the article on universal enveloping algebras. The construction proceeds by first building the tensor algebra of the underlying vector space of the Lie algebra. The tensor algebra is simply the disjoint union (direct sum ⊕) of all tensor products of this vector space. One can then show that the Lie bracket can be consistently lifted to the entire tensor algebra: it obeys both the product rule and the Jacobi identity of the Poisson bracket, and thus is the Poisson bracket, when lifted. The pair of products {,} and ⊗ then form a Poisson algebra. Observe that ⊗ is neither commutative nor anti-commutative: it is merely associative. Thus, one has the general statement that the tensor algebra of any Lie algebra is a Poisson algebra. The universal enveloping algebra is obtained by modding out the Poisson algebra structure. Associative algebras If A is an associative algebra, then imposing the commutator [x, y] = xy − yx turns it into a Poisson algebra (and thus, also a Lie algebra) A_L. Note that the resulting A_L should not be confused with the tensor algebra construction described in the previous section. If one wished, one could also apply that construction as well, but that would give a different Poisson algebra, one that would be much larger. Vertex operator algebras For a vertex operator algebra (V, Y, ω, 1), the space V/C₂(V) is a Poisson algebra with {a, b} = a₀b and a ⋅ b = a₋₁b. 
For certain vertex operator algebras, these Poisson algebras are finite-dimensional. Z₂ grading Poisson algebras can be given a Z₂-grading in one of two different ways. These two result in the Poisson superalgebra and the Gerstenhaber algebra. The difference between the two is in the grading of the bracket itself. For the Poisson superalgebra, the grading of the bracket is given by |{a, b}| = |a| + |b|, whereas in the Gerstenhaber algebra, the bracket decreases the grading by one: |{a, b}| = |a| + |b| − 1. In both of these expressions, |a| denotes the grading of the element a; typically, it counts whether a can be decomposed into an even or odd product of generating elements. Gerstenhaber algebras conventionally occur in BRST quantization. See also Moyal bracket Kontsevich quantization formula References Algebras Symplectic geometry
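The canonical bracket written out above for R²ⁿ can be checked symbolically. The following sketch (assuming SymPy is available) verifies antisymmetry, the Leibniz rule, and the Jacobi identity for the n = 1 case; the three test functions are arbitrary choices made for illustration.

```python
import sympy as sp

q, p = sp.symbols('q p')

def pb(f, g):
    """Canonical Poisson bracket on R^2 with coordinates (q, p)."""
    return sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)

# Arbitrary smooth test functions
x, y, z = q**2, p*q, sp.sin(q) + p**3

# Antisymmetry: {x, y} = -{y, x}
assert sp.simplify(pb(x, y) + pb(y, x)) == 0
# Leibniz rule: {x, y*z} = {x, y}*z + y*{x, z}
assert sp.simplify(pb(x, y*z) - (pb(x, y)*z + y*pb(x, z))) == 0
# Jacobi identity: {x,{y,z}} + {y,{z,x}} + {z,{x,y}} = 0
assert sp.simplify(pb(x, pb(y, z)) + pb(y, pb(z, x)) + pb(z, pb(x, y))) == 0
```

Because the asserts hold for any smooth inputs, not just these, they reflect the algebraic identities that make C∞(R²ⁿ) a Poisson algebra.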
Poisson algebra
[ "Mathematics" ]
995
[ "Algebras", "Mathematical structures", "Algebraic structures" ]
345,733
https://en.wikipedia.org/wiki/Anisochronous
In telecommunications, the term anisochronous refers to a signal or transmission in which the time interval separating any two corresponding transitions is not necessarily related to the time interval separating any other two transitions. It can also pertain to a data transmission in which there is always a whole number of unit intervals between any two significant instants in the same block or character, but not between significant instants in different blocks or characters. In practice, anisochronous typically means that data packets are not arriving in the same order they were transmitted, thus dramatically altering the quality of a multimedia transmission (e.g. voice, video, music), or that, after processing to restore isochronicity, significant amounts of latency have been added. Isochronous and anisochronous are characteristics, while synchronous and asynchronous are relationships. References Telecommunication theory Synchronization
Anisochronous
[ "Engineering" ]
189
[ "Telecommunications engineering", "Synchronization" ]
345,756
https://en.wikipedia.org/wiki/Lighter
A lighter is a portable device which uses mechanical or electrical means to create a controlled flame, and can be used to ignite a variety of flammable items, such as cigarettes, butane gas, fireworks, candles, or campfires. A lighter typically consists of a metal or plastic container filled with a flammable liquid, a compressed flammable gas, or in rarer cases a flammable solid (e.g. rope in a trench lighter); a means of ignition to produce the flame; and some provision for extinguishing the flame or else controlling it to such a degree that the user may extinguish it with their breath. Alternatively, a lighter can be one which uses electricity to create an electric arc utilizing the created plasma as the source of ignition or a heating element can be used in a similar vein to heat the target to its ignition temperatures, as first formally utilized by Friedrich Wilhelm Schindler to light cigars and now more commonly seen incorporated into the automobile auxiliary power outlet to ignite the target material. Different lighter fuels have different characteristics which is the main influence behind the creation and purchasing of a variety of lighter types. History The first lighters were converted flintlock pistols that used gunpowder. In 1662 the Turkish traveller Evliya Çelebi visited Vienna as a member of an Ottoman diplomatic mission and admired the lighters being manufactured there: "Enclosed in a kind of tiny box are tinder, a steel, sulphur and resinous wood. When struck just like a firearm wheel the wood bursts into flame. This is useful for soldiers on campaign." One of the first lighters was invented by a German chemist named Johann Wolfgang Döbereiner in 1823 and was often called Döbereiner's lamp. This lighter worked by passing flammable hydrogen gas, produced within the lighter by a chemical reaction, over a platinum metal catalyst which in turn caused it to ignite and give off a great amount of heat and light. The development of ferrocerium (often misidentified as flint) by Carl Auer von Welsbach in 1903 has made modern lighters possible. When scratched, it produces a large spark which is responsible for lighting the fuel of many lighters, and is suitably inexpensive for use in disposable items. Using Carl Auer von Welsbach's flint, companies like Ronson were able to develop practical and easy to use lighters. In 1910, Ronson released the first Pist-O-Liter, and in 1913, the company developed its first lighter, called the "Wonderlite", which was a permanent match style of lighter. During WWI soldiers started to create lighters out of empty cartridge cases. During that time one of the soldiers came up with a means to insert a chimney cap with holes in it to make it more windproof. The Zippo lighter and company were invented and founded by George Grant Blaisdell in 1932. The Zippo was noted for its reliability, "Life Time Warranty" and marketing as "Wind-Proof". Most early Zippos used naphtha as a fuel source. In the 1950s, there was a switch in the fuel of choice from naphtha to butane, as butane allows for a controllable flame and has less odour. This also led to the use of piezoelectric spark, which replaced the need for a flint wheel in some lighters and was used in many Ronson lighters. Around the end of the 20th century most of the world's lighters were produced in France, the United States, China, and Thailand. Operation Earlier lighters mostly burned "lighter fluid", naphtha, saturating a cloth wick and fibre packing to absorb the fluid and prevent it from leaking. 
The wick is covered by an enclosed top to prevent the volatile liquid from evaporating; the top is opened to operate the lighter and, when closed after use, extinguishes the flame. Later lighters use liquefied butane gas as fuel, with a valved orifice that allows gas to escape at a controlled rate when the lighter is used. Older lighters were usually ignited by a spark created by striking metal against a lighter flint. Later, piezo ignition was introduced: a piezoelectric crystal is compressed on pressing a button, generating an electric spark. In naphtha lighters, the liquid is sufficiently volatile that flammable vapour is present as soon as the top of the lighter is opened. Butane lighters combine the striking action with the opening of the valve to release gas. The spark ignites the flammable gas, causing a flame to come out of the lighter which continues until either the top is closed (naphtha type) or the valve is released (butane type). A metal enclosure with air holes, designed to allow mixing of fuel and air while making the lighter less sensitive to wind, usually surrounds the flame. The gas jet in butane lighters mixes air and gas by using Bernoulli's principle, requiring air holes that are much smaller and positioned further from the flame. Specialized "windproof" butane lighters are manufactured for demanding conditions such as shipboard, high altitude, and wet climates. Some dedicated models double as synthetic rope cutters. Such lighters are often far hotter than normal lighters (those that use a "soft flame") and can burn at much higher temperatures. The windproof capabilities are not achieved from higher pressure fuel; windproof lighters use the same fuel (butane) as standard lighters, and therefore develop the same vapour pressure. Instead, windproof lighters mix the fuel with air and pass the butane–air mixture through a catalytic coil. An electric spark starts the initial flame, and soon the coil is hot enough to cause the fuel–air mixture to burn on contact. 
Permanent match A typical form of lighter is the permanent match or everlasting match, consisting of a naphtha fuel-filled metal shell and a separate threaded metal rod assembly—the "match"—serving as the striker and wick. This "metal match" is stored screwed into the fuel storage compartment: the shell. The fuel-saturated striker/wick assembly is unscrewed to remove, and scratched against a flint on the side of the case to create a spark. Its concealed wick catches fire, resembling a match. The flame is extinguished by blowing it out before screwing the "match" back into the shell, where it absorbs fuel for the next use. An advantage over other naphtha lighters is that the fuel compartment is sealed shut with a rubber o-ring, which slows or stops fuel evaporation. Flameless lighter A flameless lighter is a safe alternative to traditional lighters. The flameless lighter uses an enclosed heating element which glows, so that the device does not produce an open flame. Typical flameless heating elements are an electrically heated wire or an artificial coal. Flameless lighters are designed for use in any environment where an open flame, conventional lighters or matches are not permitted. The flameless lighter is used in many environments such as prisons and detention facilities, oil and gas facilities, mental health facilities, nursing homes, airports and night clubs/restaurants. Many advertised so-called flameless lighters are not flameless at all, but the flame is invisible (such as a windproof lighter). If a piece of paper can easily be ignited, it is probably not a true flameless lighter and may not be safe in hazardous environments where smoking is confined to specific safe areas. The flameless lighter was invented by brothers Douglas Hammond and David Hammond in the UK in 1966 under the "Ciglow" name. Catalytic lighter Catalytic lighters use methanol or methylated spirits as fuel and a thin platinum wire which heats up in the presence of flammable vapours and produces a flame. Solar lighter A solar lighter is a pocket-sized stainless steel parabolic mirror, shaped to concentrate sunlight on a small prong holding combustible material at the focal point. A revival of an old gadget marketed as a cigarette lighter by RadioShack in the 1980s, it is a useful hiking and camping accessory as its functioning is not affected by having been soaked by rain or falling in rivers or the sea. To operate it needs sunlight and a small piece of flammable material. Once a glowing spark has been achieved, careful blowing will produce a blaze. ISO Standards The International Standard EN ISO 9994:2002 and the European standard EN 13869:2002 are two primary references. The ISO establishes non-functional specifications on quality, reliability, safety of lighters, and appropriate test procedures. For instance, a lighter should generate flame only through positive action on the part of the user, two or more independent actions by the user, or an actuating force greater than or equal to 15 Newtons. The standard also specifies other safety features, such as the lighter's maximum flame height and its resistance to elevated temperatures, dropping, and damages from continuous burning. However, the standard does not include child resistance specifications. 
The European standard EN 13869:2002 establishes child-resistance specifications and defines as novelty lighters those that resemble another object commonly recognized as appealing to children younger than 51 months, or those that have entertaining audio or animated effects. As matches, lighters, and other heat sources are the leading causes of fire deaths for children, many jurisdictions, such as the EU, have prohibited the marketing of novelty or non-child resistant lighters. Examples of child resistance features include the use of a smooth or shielded spark wheel. Many people remove these child resistance features, making the lighter easier to ignite. In 2005 the fourth edition of the ISO standard was released (ISO9994:2005). The main change to the 2004 Standard is the inclusion of specifications on safety symbols. See also Automobile auxiliary power outlet Butane torch Clipper (lighter) Ferrocerium Gas lighter References External links Fire making Home appliances Butane 1823 introductions 19th-century inventions German inventions Tobacco accessories
Lighter
[ "Physics", "Technology" ]
2,382
[ "Physical systems", "Machines", "Home appliances" ]
345,758
https://en.wikipedia.org/wiki/Solar%20constant
The solar constant (GSC) measures the amount of energy received by a given area one astronomical unit away from the Sun. More specifically, it is a flux density measuring mean solar electromagnetic radiation (total solar irradiance) per unit area. It is measured on a surface perpendicular to the rays, one astronomical unit (au) from the Sun (roughly the distance from the Sun to the Earth). The solar constant includes radiation over the entire electromagnetic spectrum. It is measured by satellite as being 1.361 kilowatts per square meter (kW/m²) at solar minimum (the time in the 11-year solar cycle when the number of sunspots is minimal) and approximately 0.1% greater (roughly 1.362 kW/m²) at solar maximum. The solar "constant" is not a physical constant in the modern CODATA scientific sense; that is, it is not like the Planck constant or the speed of light, which are absolutely constant in physics. The solar constant is an average of a varying value. In the past 400 years it has varied less than 0.2 percent. Billions of years ago, it was significantly lower. This constant is used in the calculation of radiation pressure, which aids in the calculation of a force on a solar sail. Calculation Solar irradiance is measured by satellites above Earth's atmosphere, and is then adjusted using the inverse square law to infer the magnitude of solar irradiance at one astronomical unit (au) to evaluate the solar constant. The approximate average value cited, 1.3608 ± 0.0005 kW/m², which is 81.65 kJ/m² per minute, is equivalent to approximately 1.951 calories per minute per square centimeter, or 1.951 langleys per minute. Solar output is nearly, but not quite, constant. Variations in total solar irradiance (TSI) were small and difficult to detect accurately with technology available before the satellite era (±2% in 1954). Total solar output is now measured as varying (over the last three 11-year sunspot cycles) by approximately 0.1%; see solar variation for details. For extrasolar planets The same inverse-square reasoning applies to any star. Therefore: f = L / (4πd²), where f is the irradiance of the star at the extrasolar planet at distance d, and L is the luminosity of the star. Historical measurements In 1838, Claude Pouillet made the first estimate of the solar constant. Using a very simple pyrheliometer he developed, he obtained a value of 1.228 kW/m², close to the current estimate. In 1875, Jules Violle resumed the work of Pouillet and offered a somewhat larger estimate of 1.7 kW/m² based, in part, on a measurement that he made from Mont Blanc in France. In 1884, Samuel Pierpont Langley attempted to estimate the solar constant from Mount Whitney in California. By taking readings at different times of day, he tried to correct for effects due to atmospheric absorption. However, the final value he proposed, 2.903 kW/m², was much too large. Between 1902 and 1957, measurements by Charles Greeley Abbot and others at various high-altitude sites found values between 1.322 and 1.465 kW/m². Abbot showed that one of Langley's corrections was erroneously applied. Abbot's results varied between 1.89 and 2.22 calories (1.318 to 1.548 kW/m²), a variation that appeared to be due to the Sun and not the Earth's atmosphere. In 1954 the solar constant was evaluated as 2.00 cal/min/cm² ± 2%. Current results are about 2.5 percent lower. 
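As a numerical illustration of the inverse-square relation above, the following Python sketch evaluates f = L/(4πd²). The IAU nominal solar luminosity is used as input; recovering roughly 1361 W/m² at 1 au serves as a sanity check, and the second call is a purely hypothetical planet chosen for illustration.

```python
import math

L_SUN = 3.828e26     # nominal solar luminosity (W)
AU = 1.495978707e11  # astronomical unit (m)

def irradiance(L_star, d):
    """Flux f = L / (4*pi*d**2) at distance d (m) from a star of luminosity L (W)."""
    return L_star / (4 * math.pi * d**2)

print(irradiance(L_SUN, AU))           # ~1361 W/m^2: the solar constant
print(irradiance(0.5*L_SUN, 0.5*AU))   # hypothetical planet: ~2722 W/m^2
```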
Relationship to other measurements Solar irradiance The actual direct solar irradiance at the top of the atmosphere fluctuates by about 6.9% during a year (from 1.412 kW/m² in early January to 1.321 kW/m² in early July) due to the Earth's varying distance from the Sun, and typically by much less than 0.1% from day to day. Thus, for the whole Earth (which has a cross section of 127,400,000 km²), the power is 1.730×10¹⁷ W (or 173,000 terawatts), plus or minus 3.5% (half the approximately 6.9% annual range). The solar constant does not remain constant over long periods of time (see Solar variation), but over a year the solar constant varies much less than the solar irradiance measured at the top of the atmosphere. This is because the solar constant is evaluated at a fixed distance of 1 astronomical unit (au) while the solar irradiance is affected by the eccentricity of the Earth's orbit. Its distance to the Sun varies annually between 147.1×10⁶ km at perihelion and 152.1×10⁶ km at aphelion. In addition, several long term (tens to hundreds of millennia) cycles of subtle variation in the Earth's orbit (Milankovitch cycles) affect the solar irradiance and insolation (but not the solar constant). The Earth receives a total amount of radiation determined by its cross section (π·R_E²), but as it rotates this energy is distributed across the entire surface area (4·π·R_E²). Hence the average incoming solar radiation, taking into account the angle at which the rays strike and that at any one moment half the planet does not receive any solar radiation, is one-fourth the solar constant (approximately 340 W/m²). The amount reaching the Earth's surface (as insolation) is further reduced by atmospheric attenuation, which varies. At any given moment, the amount of solar radiation received at a location on the Earth's surface depends on the state of the atmosphere, the location's latitude, and the time of day. Apparent magnitude The solar constant includes all wavelengths of solar electromagnetic radiation, not just the visible light (see Electromagnetic spectrum). It is positively correlated with the apparent magnitude of the Sun, which is −26.8. The solar constant and the magnitude of the Sun are two methods of describing the apparent brightness of the Sun, though the magnitude is based on the Sun's visual output only. The Sun's total radiation The angular diameter of the Earth as seen from the Sun is approximately 1/11,700 radians (about 18 arcseconds), meaning the solid angle of the Earth as seen from the Sun is approximately 1/175,000,000 of a steradian. Thus the Sun emits about 2.2 billion times the amount of radiation that is caught by Earth, in other words about 3.846×10²⁶ watts. Past variations in solar irradiance Space-based observations of solar irradiance started in 1978. These measurements show that the solar constant is not constant. It varies with the 11-year sunspot solar cycle. When going further back in time, one has to rely on irradiance reconstructions, using sunspots for the past 400 years or cosmogenic radionuclides for going back 10,000 years. Such reconstructions show that solar irradiance varies with distinct periodicities. These cycles are: 11 years (Schwabe), 88 years (Gleissberg cycle), 208 years (de Vries cycle) and 1,000 years (Eddy cycle). Over billions of years, the Sun is gradually expanding and emitting more energy from the resultant larger surface area. 
The unsolved question of how to account for the clear geological evidence of liquid water on the Earth billions of years ago, at a time when the Sun's luminosity was only 70% of its current value, is known as the faint young Sun paradox. Variations due to atmospheric conditions At most about 75% of the solar energy actually reaches the Earth's surface, as even with a cloudless sky it is partially reflected and absorbed by the atmosphere. Even light cirrus clouds reduce this to 50%, stronger cirrus clouds to 40%. Thus the solar energy arriving at the surface with the Sun directly overhead can vary from 550 W/m² with cirrus clouds to 1,025 W/m² with a clear sky. See also References Atmospheric radiation Photovoltaics Radiometry Sun
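The figures in the sections above follow from simple geometry, as the short sketch below shows. The distances and Earth radius are rounded, so the printed values agree with the quoted ones only to within a few W/m².

```python
import math

GSC = 1361.0          # solar constant (W/m^2)
AU = 1.496e11         # astronomical unit (m)
PERI, APH = 1.471e11, 1.521e11   # perihelion and aphelion distances (m)
R_E = 6.371e6         # mean Earth radius (m)

# Annual swing in top-of-atmosphere irradiance via the inverse-square law
print(GSC * (AU/PERI)**2)   # ~1408 W/m^2 (quoted above: ~1412)
print(GSC * (AU/APH)**2)    # ~1317 W/m^2 (quoted above: ~1321)

# Power intercepted by Earth's cross-section, and mean insolation over the sphere
cross_section = math.pi * R_E**2     # ~1.275e14 m^2
print(GSC * cross_section)           # ~1.74e17 W (quoted above: 1.730e17 W)
print(GSC / 4)                       # ~340 W/m^2 averaged over the full surface
```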
Solar constant
[ "Engineering" ]
1,705
[ "Telecommunications engineering", "Radiometry" ]
345,780
https://en.wikipedia.org/wiki/Backhaul%20%28broadcasting%29
In the context of broadcasting, backhaul refers to uncut program content that is transmitted point-to-point to an individual television station or radio station, broadcast network or other receiving entity, where it will be integrated into a finished TV show or radio show. The term is independent of the medium being used to send the backhaul, but communications satellite transmission is very common. When the medium is satellite, it is called a wildfeed. Backhauls are also sometimes referred to as clean feeds, being clean in the sense that they lack any of the post-production elements that are added later to the feed's content (i.e. on-screen graphics, voice-overs, bumpers, etc.) during the integration of the backhaul feed into a finished show. In live sports production, a backhaul is used to obtain live game footage (usually for later repackaging in highlights shows) when an off-air source is not readily available. In this instance the feed that is being obtained contains all elements except for TV commercials or radio ads run by the host network's master control. This is particularly useful for obtaining live coverage of post-game press conferences or extended game highlights (melts), since the backhaul may stay up to feed these events after the network has concluded their broadcast. Electronic news gathering, including live via satellite interviews, reporters' live shots, and sporting events are all examples of radio or television content that is backhauled to a station or network before being made available to the public through that station or network. Cable TV channels, particularly public, educational, and government access (PEG) channels along with local origination channels, may also be backhauled to cable headends before making their way to the subscriber. Finished network feeds are not considered backhauls, even if local insertion is used to modify the content prior to final transmission. There exists a dedicated group of enthusiasts who use TVRO (TV receive-only) gear such as satellite dishes to peek in on backhaul signals that are available on any of the dozens of broadcast satellites that are visible from almost any point on Earth. In its early days, their hobby was strengthened by the fact that most backhaul was analog and in the clear (unscrambled or unencrypted), which made for a vast smorgasbord of free television available for the technically inclined amateur. In recent years, full-time content and cable channels have added encryption and conditional access, and occasional-use signals are steadily becoming digital, which has had a deleterious effect on the hobby. Some digital signals remain freely accessible (sometimes using Ku band dishes as small as one meter) under the international DVB-S standard or the US Motorola-proprietary DigiCipher system. The small dishes may either be fixed (much like DBS antennas), positioned using a rotor (usually DiSEqC-standard) or may be toroidal in design (twin toroidal reflectors focus the incoming signal as a line, not a point, so that multiple LNBs may receive signal from multiple satellites). A blind-search receiver is often used to try every possible combination of frequency and bitrate to search for backhaul signals on individual communication satellites. Documentaries containing backhauled content The 1992 documentary Feed was compiled almost entirely using unedited backhaul from political campaign coverage by local and network television. A similar documentary about the 1992 U.S. presidential election named Spin was made in the same way in 1995. 
References External links LyngSat Broadcasting Broadcast engineering Television technology
Backhaul (broadcasting)
[ "Technology", "Engineering" ]
735
[ "Information and communications technology", "Broadcast engineering", "Electronic engineering", "Television technology" ]
345,807
https://en.wikipedia.org/wiki/Group%20extension
In mathematics, a group extension is a general means of describing a group in terms of a particular normal subgroup and quotient group. If Q and N are two groups, then G is an extension of Q by N if there is a short exact sequence

1 → N → G → Q → 1.

If G is an extension of Q by N, then G is a group, N is a normal subgroup of G and the quotient group G/N is isomorphic to the group Q. Group extensions arise in the context of the extension problem, where the groups Q and N are known and the properties of G are to be determined. Note that the phrasing "G is an extension of N by Q" is also used by some. Since any finite group G possesses a maximal normal subgroup N with simple factor group G/N, all finite groups may be constructed as a series of extensions with finite simple groups. This fact was a motivation for completing the classification of finite simple groups. An extension is called a central extension if the subgroup N lies in the center of G.

Extensions in general
One extension, the direct product, is immediately obvious. If one requires G and Q to be abelian groups, then the set of isomorphism classes of extensions of Q by a given (abelian) group N is in fact a group, which is isomorphic to Ext¹(Q, N); cf. the Ext functor. Several other general classes of extensions are known but no theory exists that treats all the possible extensions at one time. Group extension is usually described as a hard problem; it is termed the extension problem. To consider some examples, if G = K × H, then G is an extension of both H and K. More generally, if G is a semidirect product of K and H, written as G = K ⋊ H, then G is an extension of H by K, so such products as the wreath product provide further examples of extensions.

Extension problem
The question of what groups G are extensions of Q by N is called the extension problem, and has been studied heavily since the late nineteenth century. As to its motivation, consider that the composition series of a finite group is a finite sequence of subgroups {Aᵢ}, where each Aᵢ₊₁ is an extension of Aᵢ by some simple group. The classification of finite simple groups gives us a complete list of finite simple groups; so the solution to the extension problem would give us enough information to construct and classify all finite groups in general.

Classifying extensions
Solving the extension problem amounts to classifying all extensions of H by K; or more practically, by expressing all such extensions in terms of mathematical objects that are easier to understand and compute. In general, this problem is very hard, and all the most useful results classify extensions that satisfy some additional condition. It is important to know when two extensions are equivalent or congruent. We say that the extensions

1 → K → G → H → 1 and 1 → K → G′ → H → 1

are equivalent (or congruent) if there exists a group isomorphism T : G → G′ making commutative the diagram below, in which the maps K → K and H → H are the identities:

1 → K → G  → H → 1
    ‖    ↓T    ‖
1 → K → G′ → H → 1

In fact it is sufficient to have a group homomorphism; due to the assumed commutativity of the diagram, the map T is forced to be an isomorphism by the short five lemma.

Warning
It may happen that the extensions 1 → K → G → H → 1 and 1 → K → G′ → H → 1 are inequivalent but G and G′ are isomorphic as groups. For instance, there are 8 inequivalent extensions of the Klein four-group by ℤ/2ℤ, but there are, up to group isomorphism, only four groups of order 8 containing a normal subgroup of order 2 with quotient group isomorphic to the Klein four-group.

Trivial extensions
A trivial extension is an extension

1 → K → G → H → 1

that is equivalent to the extension

1 → K → K × H → H → 1

where the left and right arrows are respectively the inclusion and the projection of each factor of K × H.
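As a concrete illustration (a standard textbook example, not part of the original article), here are two inequivalent extensions with the same kernel and the same quotient, written in LaTeX:

\[
0 \to \mathbb{Z}/2 \xrightarrow{\,n \mapsto 2n\,} \mathbb{Z}/4 \xrightarrow{\,\bmod 2\,} \mathbb{Z}/2 \to 0,
\qquad
0 \to \mathbb{Z}/2 \xrightarrow{\,n \mapsto (n,0)\,} \mathbb{Z}/2 \times \mathbb{Z}/2 \xrightarrow{\,(m,n) \mapsto n\,} \mathbb{Z}/2 \to 0.
\]

Both sequences have kernel ℤ/2 and quotient ℤ/2, yet ℤ/4 and ℤ/2 × ℤ/2 are not isomorphic, so the two ends of a short exact sequence do not determine the middle group; this is exactly the difficulty the extension problem addresses. Both are central extensions, since the middle groups are abelian; a nonabelian central extension is supplied by the quaternion group Q₈, whose center {±1} ≅ ℤ/2 yields 1 → ℤ/2 → Q₈ → ℤ/2 × ℤ/2 → 1.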
Classifying split extensions
A split extension is an extension

1 → K → G → H → 1

with a homomorphism s : H → G such that going from H to G by s and then back to H by the quotient map of the short exact sequence induces the identity map on H, i.e., the composition of s with the quotient map G → H is the identity on H. In this situation, it is usually said that s splits the above exact sequence. Split extensions are very easy to classify, because an extension is split if and only if the group G is a semidirect product of K and H. Semidirect products themselves are easy to classify, because they are in one-to-one correspondence with homomorphisms from H to Aut(K), where Aut(K) is the automorphism group of K. For a full discussion of why this is true, see semidirect product.

Warning on terminology
In general in mathematics, an extension of a structure K is usually regarded as a structure L of which K is a substructure. See for example field extension. However, in group theory the opposite terminology has crept in, partly because of the notation 1 → N → G → Q → 1, which reads easily as extensions of Q by N, and the focus is on the group Q. A paper of Ronald Brown and Timothy Porter on Otto Schreier's theory of nonabelian extensions uses the terminology that an extension of K gives a larger structure.

Central extension
A central extension of a group G is a short exact sequence of groups

1 → A → E → G → 1

such that A is included in Z(E), the center of the group E. The set of isomorphism classes of central extensions of G by A is in one-to-one correspondence with the cohomology group H²(G, A). Examples of central extensions can be constructed by taking any group G and any abelian group A, and setting E to be A × G. This kind of split example corresponds to the element 0 in H²(G, A) under the above correspondence. More serious examples are found in the theory of projective representations, in cases where the projective representation cannot be lifted to an ordinary linear representation. In the case of finite perfect groups, there is a universal perfect central extension. Similarly, the central extension of a Lie algebra 𝔤 is an exact sequence

0 → 𝔞 → 𝔢 → 𝔤 → 0

such that 𝔞 is in the center of 𝔢. There is a general theory of central extensions in Maltsev varieties.

Generalization to general extensions
There is a similar classification of all extensions of G by A in terms of homomorphisms from G to Out(A), a tedious but explicitly checkable existence condition involving H³(G, Z(A)), and the cohomology group H²(G, Z(A)).

Lie groups
In Lie group theory, central extensions arise in connection with algebraic topology. Roughly speaking, central extensions of Lie groups by discrete groups are the same as covering groups. More precisely, a connected covering space G* of a connected Lie group G is naturally a central extension of G, in such a way that the projection π : G* → G is a group homomorphism, and surjective. (The group structure on G* depends on the choice of an identity element mapping to the identity in G.) For example, when G* is the universal cover of G, the kernel of π is the fundamental group of G, which is known to be abelian (see H-space). Conversely, given a Lie group G and a discrete central subgroup Z, the quotient G/Z is a Lie group and G is a covering space of it. More generally, when the groups A, E and G occurring in a central extension are Lie groups, and the maps between them are homomorphisms of Lie groups, then if the Lie algebra of G is 𝔤, that of E is 𝔢, and that of A is 𝔞, then 𝔢 is a central Lie algebra extension of 𝔤 by 𝔞. In the terminology of theoretical physics, generators of 𝔞 are called central charges.
These generators are in the center of 𝔢; by Noether's theorem, generators of symmetry groups correspond to conserved quantities, referred to as charges. The basic examples of central extensions as covering groups are:

the spin groups, which double cover the special orthogonal groups, which (in even dimension) doubly cover the projective orthogonal group;
the metaplectic groups, which double cover the symplectic groups.

The case of SL₂(ℝ) involves a fundamental group that is infinite cyclic. Here the central extension involved is well known in modular form theory, in the case of forms of weight ½. The corresponding projective representation is the Weil representation, constructed from the Fourier transform, in this case on the real line. Metaplectic groups also occur in quantum mechanics. See also Lie algebra extension Virasoro algebra HNN extension Group contraction Extension of a topological group References Further reading Group theory
Group extension
[ "Mathematics" ]
1,578
[ "Group theory", "Fields of abstract algebra" ]
345,919
https://en.wikipedia.org/wiki/Three%20prime%20untranslated%20region
In molecular genetics, the three prime untranslated region (3′-UTR) is the section of messenger RNA (mRNA) that immediately follows the translation termination codon. The 3′-UTR often contains regulatory regions that post-transcriptionally influence gene expression. During gene expression, an mRNA molecule is transcribed from the DNA sequence and is later translated into a protein. Several regions of the mRNA molecule are not translated into a protein including the 5' cap, 5' untranslated region, 3′ untranslated region and poly(A) tail. Regulatory regions within the 3′-untranslated region can influence polyadenylation, translation efficiency, localization, and stability of the mRNA. The 3′-UTR contains binding sites for both regulatory proteins and microRNAs (miRNAs). By binding to specific sites within the 3′-UTR, miRNAs can decrease gene expression of various mRNAs by either inhibiting translation or directly causing degradation of the transcript. The 3′-UTR also has silencer regions which bind to repressor proteins and will inhibit the expression of the mRNA. Many 3′-UTRs also contain AU-rich elements (AREs). Proteins bind AREs to affect the stability or decay rate of transcripts in a localized manner or affect translation initiation. Furthermore, the 3′-UTR contains the sequence AAUAAA that directs addition of several hundred adenine residues called the poly(A) tail to the end of the mRNA transcript. Poly(A) binding protein (PABP) binds to this tail, contributing to regulation of mRNA translation, stability, and export. For example, poly(A) tail bound PABP interacts with proteins associated with the 5' end of the transcript, causing a circularization of the mRNA that promotes translation. The 3′-UTR can also contain sequences that attract proteins to associate the mRNA with the cytoskeleton, transport it to or from the cell nucleus, or perform other types of localization. In addition to sequences within the 3′-UTR, the physical characteristics of the region, including its length and secondary structure, contribute to translation regulation. These diverse mechanisms of gene regulation ensure that the correct genes are expressed in the correct cells at the appropriate times. Physical characteristics The 3′-UTR of mRNA has a great variety of regulatory functions that are controlled by the physical characteristics of the region. One such characteristic is the length of the 3′-UTR, which in the mammalian genome has considerable variation. This region of the mRNA transcript can range from 60 nucleotides to about 4000. On average the length for the 3′-UTR in humans is approximately 800 nucleotides, while the average length of 5'-UTRs is only about 200 nucleotides. The length of the 3′-UTR is significant since longer 3′-UTRs are associated with lower levels of gene expression. One possible explanation for this phenomenon is that longer regions have a higher probability of possessing more miRNA binding sites that have the ability to inhibit translation. In addition to length, the nucleotide composition also differs significantly between the 5' and 3′-UTR. The mean G+C percentage of the 5'-UTR in warm-blooded vertebrates is about 60% as compared to only 45% for 3′-UTRs. This is important because an inverse correlation has been observed between the G+C% of 5' and 3′-UTRs and their corresponding lengths. The UTRs that are GC-poor tend to be longer than those located in GC-rich genomic regions. Sequences within the 3′-UTR also have the ability to degrade or stabilize the mRNA transcript. 
Modifications that control a transcript's stability allow expression of a gene to be rapidly controlled without altering translation rates. One group of elements in the 3′-UTR that can help destabilize an mRNA transcript are the AU-rich elements (AREs). These elements range in size from 50 to 150 base pairs and generally contain multiple copies of the pentanucleotide AUUUA. Early studies indicated that AREs can vary in sequence and fall into three main classes that differ in the number and arrangement of motifs. Another set of elements, present in both the 5'- and 3′-UTRs, are the iron response elements (IREs). The IRE is a stem-loop structure within the untranslated regions of mRNAs that encode proteins involved in cellular iron metabolism. The mRNA transcript containing this element is either degraded or stabilized depending upon the binding of specific proteins and the intracellular iron concentrations. The 3′-UTR also contains sequences that signal additions to be made, either to the transcript itself or to the product of translation. For example, there are two different polyadenylation signals present within the 3′-UTR that signal the addition of the poly(A) tail. These signals initiate the synthesis of the poly(A) tail at a defined length of about 250 base pairs. The primary signal used is the nuclear polyadenylation signal (PAS) with the sequence AAUAAA located toward the end of the 3′-UTR. However, during early development cytoplasmic polyadenylation can occur instead and regulate the translational activation of maternal mRNAs. The element that controls this process is called the CPE, which is AU-rich and located in the 3′-UTR as well. The CPE generally has the structure UUUUUUAU and is usually within 100 base pairs of the nuclear PAS. Another specific addition signaled by the 3′-UTR is the incorporation of selenocysteine at UGA codons of mRNAs encoding selenoproteins. Normally the UGA codon encodes a translational stop, but in this case a conserved stem-loop structure called the selenocysteine insertion sequence (SECIS) causes the insertion of selenocysteine instead.

Role in gene expression
The 3′-untranslated region plays a crucial role in gene expression by influencing the localization, stability, export, and translation efficiency of an mRNA. It contains various sequences that are involved in gene expression, including microRNA response elements (MREs), AU-rich elements (AREs), and the poly(A) tail. In addition, the structural characteristics of the 3′-UTR as well as its use of alternative polyadenylation play a role in gene expression.

MicroRNA response elements
The 3′-UTR often contains microRNA response elements (MREs), which are sequences to which miRNAs bind. miRNAs are short, non-coding RNA molecules capable of binding to mRNA transcripts and regulating their expression. One miRNA mechanism involves partial base pairing of the 5' seed sequence of an miRNA to an MRE within the 3′-UTR of an mRNA; this binding then causes translational repression.

AU-rich elements
In addition to containing MREs, the 3′-UTR also often contains AU-rich elements (AREs), which are 50 to 150 bp in length and usually include many copies of the sequence AUUUA. ARE binding proteins (ARE-BPs) bind to AU-rich elements in a manner that is dependent upon tissue type, cell type, timing, cellular localization, and environment. In response to different intracellular and extracellular signals, ARE-BPs can promote mRNA decay, affect mRNA stability, or activate translation.
This mechanism of gene regulation is involved in cell growth, cellular differentiation, and adaptation to external stimuli. It therefore acts on transcripts encoding cytokines, growth factors, tumor suppressors, proto-oncogenes, cyclins, enzymes, transcription factors, receptors, and membrane proteins. Poly(A) tail The poly(A) tail contains binding sites for poly(A) binding proteins (PABPs). These proteins cooperate with other factors to affect the export, stability, decay, and translation of an mRNA. PABPs bound to the poly(A) tail may also interact with proteins, such as translation initiation factors, that are bound to the 5' cap of the mRNA. This interaction causes circularization of the transcript, which subsequently promotes translation initiation. Furthermore, it allows for efficient translation by causing recycling of ribosomes. While the presence of a poly(A) tail usually aids in triggering translation, the absence or removal of one often leads to exonuclease-mediated degradation of the mRNA. Polyadenylation itself is regulated by sequences within the 3′-UTR of the transcript. These sequences include cytoplasmic polyadenylation elements (CPEs), which are uridine-rich sequences that contribute to both polyadenylation activation and repression. CPE-binding protein (CPEB) binds to CPEs in conjunction with a variety of other proteins in order to elicit different responses. Structural characteristics While the sequence that constitutes the 3′-UTR contributes greatly to gene expression, the structural characteristics of the 3′-UTR also play a large role. In general, longer 3′-UTRs correspond to lower expression rates since they often contain more miRNA and protein binding sites that are involved in inhibiting translation. Human transcripts possess 3′-UTRs that are on average twice as long as other mammalian 3′-UTRs. This trend reflects the high level of complexity involved in human gene regulation. In addition to length, the secondary structure of the 3′-untranslated region also has regulatory functions. Protein factors can either aid or disrupt folding of the region into various secondary structures. The most common structure is a stem-loop, which provides a scaffold for RNA binding proteins and non-coding RNAs that influence expression of the transcript. Alternative polyadenylation Another mechanism involving the structure of the 3′-UTR is called alternative polyadenylation (APA), which results in mRNA isoforms that differ only in their 3′-UTRs. This mechanism is especially useful for complex organisms as it provides a means of expressing the same protein but in varying amounts and locations. It is utilized by about half of human genes. APA can result from the presence of multiple polyadenylation sites or mutually exclusive terminal exons. Since it can affect the presence of protein and miRNA binding sites, APA can cause differential expression of mRNA transcripts by influencing their stability, export to the cytoplasm, and translation efficiency. Methods of study Scientists use a number of methods to study the complex structures and functions of the 3′ UTR. Even if a given 3′-UTR in an mRNA is shown to be present in a tissue, the effects of localization, functional half-life, translational efficiency, and trans-acting elements must be determined to understand the 3′-UTR's full functionality. 
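The motif elements described earlier (the AAUAAA polyadenylation signal and the AUUUA pentamers of AU-rich elements) are simple enough to locate by sequence analysis, one of the methods this section discusses. A toy sketch in Python (illustrative only, not one of the published annotation tools; the example sequence is invented):

def find_motif(seq: str, motif: str) -> list[int]:
    """Return every 0-based start position of `motif` in `seq`."""
    seq, motif = seq.upper(), motif.upper()
    return [i for i in range(len(seq) - len(motif) + 1)
            if seq[i:i + len(motif)] == motif]

utr = "AUUUAUUUAGCCAAUAAAGCACUGUUUUUUAU"   # invented example 3'-UTR

pas_sites = find_motif(utr, "AAUAAA")     # candidate polyadenylation signals
are_hits = find_motif(utr, "AUUUA")       # overlapping AUUUA pentamers

print("AAUAAA at positions:", pas_sites)
print("AUUUA pentamers:", len(are_hits), "at", are_hits)

Real annotation must also weigh position, context, and sequence variants, so a scan like this is only a first filter.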
Computational approaches, primarily by sequence analysis, have shown the existence of AREs in approximately 5 to 8% of human 3′-UTRs and the presence of one or more miRNA targets in as many as 60% or more of human 3′-UTRs. Software can rapidly compare millions of sequences at once to find similarities between various 3′-UTRs within the genome. Experimental approaches have been used to define sequences that associate with specific RNA-binding proteins; specifically, recent improvements in sequencing and cross-linking techniques have enabled fine mapping of protein binding sites within the transcript. Induced site-specific mutations, for example those that affect the termination codon, polyadenylation signal, or secondary structure of the 3′-UTR, can show how mutated regions can cause translation deregulation and disease. These types of transcript-wide methods should help our understanding of known cis elements and trans-regulatory factors within 3′-UTRs.

Disease
3′-UTR mutations can be very consequential because one alteration can be responsible for the altered expression of many genes. Transcriptionally, a mutation may affect only the allele and genes that are physically linked. However, since 3′-UTR binding proteins also function in the processing and nuclear export of mRNA, a mutation can also affect other unrelated genes. Dysregulation of ARE-binding proteins (AUBPs) due to mutations in AU-rich regions can lead to diseases including tumorigenesis (cancer), hematopoietic malignancies, leukemogenesis, and developmental delay/autism spectrum disorders. An expanded number of trinucleotide (CTG) repeats in the 3′-UTR of the dystrophia myotonica protein kinase (DMPK) gene causes myotonic dystrophy. Retro-transposal 3-kilobase insertion of tandem repeat sequences within the 3′-UTR of the fukutin gene is linked to Fukuyama-type congenital muscular dystrophy. Elements in the 3′-UTR have also been linked to human acute myeloid leukemia, alpha-thalassemia, neuroblastoma, keratinopathy, aniridia, IPEX syndrome, and congenital heart defects. The few UTR-mediated diseases identified only hint at the countless links yet to be discovered.

Future development
Despite current understanding of 3′-UTRs, they are still relatively mysterious. Since mRNAs usually contain several overlapping control elements, it is often difficult to specify the identity and function of each 3′-UTR element, let alone the regulatory factors that may bind at these sites. Additionally, each 3′-UTR contains many alternative AU-rich elements and polyadenylation signals. These cis- and trans-acting elements, along with miRNAs, offer a virtually limitless range of control possibilities within a single mRNA. Future research through the increased use of deep-sequencing based ribosome profiling will reveal more regulatory subtleties as well as new control elements and AUBPs. See also Five prime untranslated region UTRdb UTRome References Further reading External links Brief introduction to mRNA regulatory elements UTResource 3′ UTR analysis UTRome.org 3′ UTRs in nematodes Medical Subject Heading: 3′ Untranslated Regions RNA Gene expression
Three prime untranslated region
[ "Chemistry", "Biology" ]
2,936
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
345,929
https://en.wikipedia.org/wiki/SECIS%20element
In biology, the SECIS element (SECIS: selenocysteine insertion sequence) is an RNA element around 60 nucleotides in length that adopts a stem-loop structure. This structural motif (pattern of nucleotides) directs the cell to translate UGA codons as selenocysteines (UGA is normally a stop codon). SECIS elements are thus a fundamental aspect of messenger RNAs encoding selenoproteins, proteins that include one or more selenocysteine residues. In bacteria the SECIS element appears soon after the UGA codon it affects. In archaea and eukaryotes, it occurs in the 3' UTR of an mRNA, and can cause multiple UGA codons within the mRNA to code for selenocysteine. One archaeal SECIS element, in Methanococcus, is located in the 5' UTR. The SECIS element appears defined by sequence characteristics, i.e. particular nucleotides tend to be at particular positions in it, and a characteristic secondary structure. The secondary structure is the result of base-pairing of complementary RNA nucleotides, and causes a hairpin-like structure. The eukaryotic SECIS element includes non-canonical A-G base pairs, which are uncommon in nature, but are critically important for correct SECIS element function. Although the eukaryotic, archaeal and bacterial SECIS elements each share a general hairpin structure, they are not alignable, e.g. an alignment-based scheme to recognize eukaryotic SECIS elements will not be able to recognize archaeal SECIS elements. However, in Lokiarchaeota, SECIS elements are more similar to eukaryotic elements. In bioinformatics, several computer programs have been created that search for SECIS elements within a genome sequence, based on the sequence and secondary structure characteristics of SECIS elements. These programs have been used in searches for novel selenoproteins. Species distribution The SECIS element is found in a wide variety of organisms from all three domains of life (including their viruses). References External links Gene expression Cis-regulatory RNA elements
SECIS element
[ "Chemistry", "Biology" ]
462
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
345,937
https://en.wikipedia.org/wiki/Database%20schema
The database schema is the structure of a database described in a formal language typically supported by a relational database management system (RDBMS). The term "schema" refers to the organization of data as a blueprint of how the database is constructed (divided into database tables in the case of relational databases). The formal definition of a database schema is a set of formulas (sentences) called integrity constraints imposed on a database. These integrity constraints ensure compatibility between parts of the schema. All constraints are expressible in the same language. A database can be considered a structure in realization of the database language. The states of a created conceptual schema are transformed into an explicit mapping, the database schema. This describes how real-world entities are modeled in the database. "A database schema specifies, based on the database administrator's knowledge of possible applications, the facts that can enter the database, or those of interest to the possible end-users." The notion of a database schema plays the same role as the notion of theory in predicate calculus. A model of this "theory" closely corresponds to a database, which can be seen at any instant of time as a mathematical object. Thus a schema can contain formulas representing integrity constraints specifically for an application and the constraints specifically for a type of database, all expressed in the same database language. In a relational database, the schema defines the tables, fields, relationships, views, indexes, packages, procedures, functions, queues, triggers, types, sequences, materialized views, synonyms, database links, directories, XML schemas, and other elements. A database generally stores its schema in a data dictionary. Although a schema is defined in a textual database language, the term is often used to refer to a graphical depiction of the database structure. In other words, a schema is the structure of the database that defines the objects in the database. In an Oracle Database system, the term "schema" has a slightly different connotation.

Ideal requirements for schema integration
The requirements listed below influence the detailed structure of schemas that are produced. Certain applications will not require that all of these conditions are met, but these four requirements are the most ideal.
Overlap preservation: Each of the overlapping elements specified in the input mapping is also in a database schema relation.
Extended overlap preservation: Source-specific elements that are associated with a source's overlapping elements are passed through to the database schema.
Normalization: Independent entities and relationships in the source data should not be grouped together in the same relation in the database schema. In particular, source-specific schema elements should not be grouped with overlapping schema elements, if the grouping co-locates independent entities or relationships.
Minimality: If any elements of the database schema are dropped then the database schema is not ideal.
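Before turning to the integration example, the idea of a schema as a set of declared integrity constraints can be made concrete with a small sketch, here using SQLite from Python (the table and column names are invented for illustration; this is not from the original text):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE department (
        dept_id INTEGER PRIMARY KEY,
        name    TEXT NOT NULL UNIQUE           -- uniqueness constraint
    );
    CREATE TABLE employee (
        emp_id  INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,
        salary  REAL CHECK (salary >= 0),      -- value-level integrity constraint
        dept_id INTEGER NOT NULL
                REFERENCES department(dept_id) -- referential integrity constraint
    );
""")

# The schema is itself stored in the database's data dictionary
# (called sqlite_master in SQLite):
for (ddl,) in conn.execute("SELECT sql FROM sqlite_master WHERE sql IS NOT NULL"):
    print(ddl)

Each declared constraint is a sentence that every database state must satisfy, which is exactly the formal view of a schema described above.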
Example of two schema integrations
Suppose we want a mediated schema to integrate two travel databases, Go-travel and Ok-flight.
Go-travel has two relations:
Go-flight(flight-number, time, meal(yes/no))
Go-price(flight-number, date, price)
Ok-flight has just one relation:
Ok-flight(flight-number, date, time, price, nonstop(yes/no))
The overlapping information in Go-travel's and Ok-flight's schemas could be represented in a mediated schema:
Flight(flight-number, date, time, price)

Oracle database specificity
In the context of Oracle Databases, a schema object is a logical data storage structure. An Oracle database associates a separate schema with each database user. A schema comprises a collection of schema objects. Examples of schema objects include tables, views, sequences, synonyms, indexes, clusters, database links, snapshots, procedures, functions, and packages. On the other hand, non-schema objects may include users, roles, contexts, and directory objects. Schema objects do not have a one-to-one correspondence to physical files on disk that store their information. However, Oracle databases store schema objects logically within a tablespace of the database. The data of each object is physically contained in one or more of the tablespace's datafiles. For some objects (such as tables, indexes, and clusters) a database administrator can specify how much disk space the Oracle RDBMS allocates for the object within the tablespace's datafiles. There is no necessary relationship between schemas and tablespaces: a tablespace can contain objects from different schemas, and the objects for a single schema can reside in different tablespaces.

Microsoft SQL Server
In Microsoft SQL Server, the default schema of every database is the dbo schema.

See also Data element Data mapping Data model Database design Entity–relationship model Knowledge representation and reasoning Object-role modeling Olog Schema matching Three-schema approach References External links Tip/Trick: Online Database Schema Samples Library Database Schema Samples Designing the Star Schema Database Data management Relational model Data modeling
Database schema
[ "Technology", "Engineering" ]
1,064
[ "Data management", "Data engineering", "Data modeling", "Data" ]
345,957
https://en.wikipedia.org/wiki/Phthalic%20anhydride
Phthalic anhydride is the organic compound with the formula C6H4(CO)2O. It is the anhydride of phthalic acid. Phthalic anhydride is a principal commercial form of phthalic acid. It was the first anhydride of a dicarboxylic acid to be used commercially. This white solid is an important industrial chemical, especially for the large-scale production of plasticizers for plastics. In 2000, the worldwide production volume was estimated to be about 3 million tonnes per year.

Synthesis and production
Phthalic anhydride was first reported in 1836 by Auguste Laurent. Early procedures involved liquid-phase mercury-catalyzed oxidation of naphthalene. The modern industrial variant process instead uses vanadium pentoxide (V2O5) as the catalyst in a gas-phase reaction with naphthalene using molecular oxygen. The overall process involves oxidative cleavage of one of the rings and loss of two of the carbon atoms as carbon dioxide. An alternative process involves oxidation of the two methyl groups of o-xylene, a more atom-economical process. This reaction is run at about 320–400 °C and has the following stoichiometry:
C6H4(CH3)2 + 3 O2 → C6H4(CO)2O + 3 H2O
The reaction proceeds with about 70% selectivity. About 10% of maleic anhydride is also produced:
C6H4(CH3)2 + 7.5 O2 → C4H2O3 + 4 H2O + 4 CO2
Phthalic anhydride and maleic anhydride are recovered by distillation by a series of switch condensers. The naphthalene route (the Gibbs phthalic anhydride process or the Gibbs–Wohl naphthalene oxidation reaction) has declined relative to the o-xylene route. Phthalic anhydride can also be prepared from phthalic acid by simple thermal dehydration above 210 °C.

Uses
Phthalate ester plasticizers
The primary use of phthalic anhydride is as a precursor to phthalate esters, used as plasticizers in polyvinyl chloride. Phthalate esters are derived from phthalic anhydride by the alcoholysis reaction. In the 1980s, approximately 6.5 million tonnes of these esters were produced annually, and the scale of production was increasing each year, all from phthalic anhydride. The process begins with the reaction of phthalic anhydride with alcohols, giving the monoesters:
C6H4(CO)2O + ROH → C6H4(CO2H)CO2R
The second esterification is more difficult and requires removal of water:
C6H4(CO2H)CO2R + ROH ⇌ C6H4(CO2R)2 + H2O
The most important diester is bis(2-ethylhexyl) phthalate ("DEHP"), used in the manufacture of polyvinyl chloride compounds.

Precursor to dyestuffs
Phthalic anhydride is widely used in industry for the production of certain dyes. A well-known application of this reactivity is the preparation of the anthraquinone dye quinizarin by reaction with para-chlorophenol followed by hydrolysis of the chloride. Phenolphthalein can be synthesized by the condensation of phthalic anhydride with two equivalents of phenol under acidic conditions (hence the name). It was discovered in 1871 by Adolf von Baeyer.

Pharmaceuticals
Phthalic anhydride treated with cellulose acetate gives cellulose acetate phthalate (CAP), a common enteric coating excipient that has also been shown to have antiviral activity. Phthalic anhydride is a degradation product of CAP.

Reactions
Phthalic anhydride is a versatile intermediate in organic chemistry, in part because it is bifunctional and cheaply available.

Hydrolysis, alcoholysis, ammonolysis
Hydrolysis by hot water forms ortho-phthalic acid:
C6H4(CO)2O + H2O → C6H4(CO2H)2
Hydrolysis of anhydrides is not typically a reversible process. Phthalic acid is however easily dehydrated to form phthalic anhydride.
Above 180 °C, phthalic anhydride re-forms. Chiral alcohols form half-esters (see above), and these derivatives are often resolvable because they form diastereomeric salts with chiral amines such as brucine. A related ring-opening reaction involves peroxides to give the useful peroxy acid:
C6H4(CO)2O + H2O2 → C6H4(CO3H)CO2H
Phthalimide can be prepared by heating phthalic anhydride with aqueous ammonia, giving a 95–97% yield. Alternatively, it may be prepared by treating the anhydride with ammonium carbonate or urea. It can also be produced by ammoxidation of o-xylene. Potassium phthalimide, the potassium salt of phthalimide, is commercially available. It may be prepared by adding a hot solution of phthalimide to a solution of potassium hydroxide; the desired product precipitates.

Preparation of aliphatic nitroalkenes
Phthalic anhydride is used to dehydrate short-chain nitro-alcohols to yield nitroalkenes, compounds with a high tendency to polymerize.

Safety
The most probable human exposure to phthalic anhydride is through skin contact or inhalation during manufacture or use. Studies show that exposure to phthalic anhydride can cause rhinitis, chronic bronchitis, and asthma. The effect of phthalic anhydride on human health is generally an asthma–rhinitis–conjunctivitis syndrome, or a delayed reaction with influenza-like symptoms and increased immunoglobulin (E and G) levels in the blood. References External links International Chemical Safety Card 0315 NIOSH Pocket Guide to Chemical Hazards Carboxylic anhydrides Phthalides Commodity chemicals Substances discovered in the 19th century
Phthalic anhydride
[ "Chemistry" ]
1,375
[ "Commodity chemicals", "Products of chemical industry" ]
346,011
https://en.wikipedia.org/wiki/Kegworth%20air%20disaster
The Kegworth air disaster occurred when British Midland Airways Flight 092, a Boeing 737-400, crashed onto the motorway embankment between the M1 motorway and A453 road near Kegworth, Leicestershire, England, while attempting to make an emergency landing at East Midlands Airport on 8 January 1989. The aircraft was on a scheduled flight from London Heathrow Airport to Belfast International Airport. When a fan blade broke in the left engine, smoke was drawn into the cabin through the air conditioning system. The pilots believed this indicated a fault in the right engine, since earlier models of the 737 ventilated the cabin from the right, and they were unaware that the 737-400 used a different system. The pilots retarded the right thrust lever and the symptoms of smoke and vibration cleared, leading them to believe the problem had been identified, and then the right engine was shut down. On the final stage of the approach, thrust was increased on the left engine. The tip of the fan blade that had lodged in the cowling from the earlier event became dislodged and was drawn into the core of the engine, damaging it and causing a fire. The fan blade had initially suffered a fracture caused by aerodynamic flutter. Those responsible for the pre-certification test programme and the issue of a Certificate of Airworthiness 'acted contrary' to the wealth of literature that was available on this subject. This knowledge made clear that static ground testing to discover the presence of flutter was unreliable and the fan blade had to be subjected to the full flight envelope to be certain of the test results. The accident was both the first hull loss and the first fatal accident involving a Boeing 737 Classic aircraft. Of the 126 people aboard, 47 died and 74 sustained serious injuries.

Aircraft involved and crew
Aircraft
The aircraft was a British Midland-operated Boeing 737-4Y0, registration G-OBME, on a scheduled flight from London Heathrow Airport to Belfast International Airport, Northern Ireland, having already flown from Heathrow to Belfast and back that day. The 737-400 was the newest design from Boeing, with the first unit entering service less than four months earlier, in September 1988. G-OBME had accumulated 521 airframe hours. The aircraft was powered by two CFM International CFM56 turbofan engines.

Cockpit crew
The flight was crewed by 43-year-old Captain Kevin Hunt and 39-year-old First Officer David McClelland. Hunt had been with British Midland since 1966 and had about 13,200 hours of flying experience. First Officer McClelland joined the airline in 1988 and had about 3,300 total flight hours. Between them, the pilots had close to 1,000 hours in the Boeing 737 cockpit (Hunt had 763 hours, and McClelland had 192 hours), but only 76 of these had been in Boeing 737-400 series aircraft (Hunt 23 hours and McClelland 53 hours).

Accident
After taking off from Heathrow at 19:52, Flight BD 092 was climbing through 28,300 feet to reach its cruising altitude of 35,000 feet when a blade detached from the fan of the port (left) engine. The pilots did not know the source of the problem, but heard a pounding noise, accompanied by severe vibrations. Smoke poured into the cabin through the ventilation system, and passengers became aware of the smell of burning. Several passengers sitting near the rear of the plane noticed smoke and sparks coming from the left engine. The flight was diverted to nearby East Midlands Airport at the suggestion of British Midland Airways Operations.
After the initial blade fracture, Captain Kevin Hunt, the non-handling pilot, took control without first advising McClelland, and disengaged the plane's autopilot. Hunt then asked First Officer David McClelland which engine was malfunctioning; McClelland replied: "It's the le... It's the right one". In previous versions of the 737, the right (number 2) engine supplied air to the flight deck. The pilots had been used to the older version of the aircraft and did not realise that this aircraft was different. The captain later claimed that his perception of smoke as coming forward from the passenger cabin led them to assume the fault was in the right engine. The pilots throttled back the working right engine instead of the malfunctioning left engine. They had no way of visually checking the engines from the cockpit, and the cabin crew – who did not hear the captain refer to the right-hand engine in his cabin address – did not inform them that smoke and flames had been seen from the left engine. When the pilots retarded the right engine, they could no longer smell the smoke or feel the vibration, which led them to believe that they had correctly dealt with the problem. As it turned out, this was due to the behaviour of the Power Management Control unit and the autothrottle, which was disengaged prior to shutting down the right engine: the fuel flow to both engines was reduced, and the excess fuel, which had been igniting in the left engine exhaust, disappeared. Therefore, the ongoing damage was reduced, the smell of smoke ceased, and the vibration reduced, although it would still have been visible on cockpit instruments, which were "at best unclear and at worst misleading" according to Dr Roger Green from the RAF Institute of Aviation Medicine. During the final approach to East Midlands Airport, the pilots selected increased thrust from the operating, damaged engine. This led to an engine fire, caused by the tip of the fan blade dislodging from the cowling, going into the core of the engine and causing it to cease operating entirely. The ground proximity warning system activated, sounding several "glideslope" warnings. The pilots attempted to restart the right engine by windmilling, but the aircraft was by now only 900 feet above the ground and flying too slowly for a restart. At 20:24:33, Captain Hunt broadcast to the passengers via the aircraft's public-address system: "Prepare for crash landing", instructing passengers to take the brace position. The stick shaker then activated. Just before crossing the M1 motorway at 20:24:43, the tail and main landing gear struck the ground, and the aircraft bounced back into the air and over the motorway, knocking down trees and a lamp post before crashing on the far embankment, short of the active runway's paved surface and some distance from its threshold. The aircraft broke into three sections. This was adjacent to the motorway, but no vehicles were travelling on that part of the M1 at the moment of the crash.

Casualties
Of the 118 passengers on board, 39 were killed outright in the crash and eight died later of their injuries, giving a total of 47 fatalities. All eight crew members survived the accident. Of the 79 survivors, 74 suffered serious injuries and five suffered minor injuries. In addition, five firefighters also suffered minor injuries during the rescue operation. No-one on the motorway was injured, and all vehicles in the vicinity of the disaster were undamaged.
The first person to arrive at the scene and render aid was a motorist, Graham Pearson. A former Royal Marine, he helped passengers for over three hours, and subsequently received damages for post-traumatic stress disorder. Aid was also given by a troop of eight SAS soldiers, four of whom were regimentally qualified paramedics. Their truck had been on the motorway when the crash occurred.

Causes
The investigation established that the wiring associated with the fire warning lights was properly connected. Initially there was a concern that the sensors in the engines and the warning lights on the flight deck may have been cross-wired.

Shutting down of wrong engine
Captain Hunt believed the right engine was malfunctioning due to the smell of smoke in the cabin, because in previous Boeing 737 variants bleed air for cabin air conditioning was taken from the right engine. Starting with the Boeing 737-400 variant, Boeing had redesigned the system to use bleed air from both engines. Several cabin staff and passengers noticed that the left engine had a stream of unburnt fuel igniting in the jet exhaust, but this information was not passed to the pilots because cabin staff assumed they were aware that the left engine was malfunctioning. The smell of smoke disappeared when the autothrottle was disengaged and the right engine shut down, due to the reduction of fuel to the damaged left engine as it reverted to manual throttle. In the event of a malfunction, pilots were trained to check all meters and review all decisions, and Captain Hunt proceeded to do so. Whilst he was conducting the review, however, he was interrupted by a transmission from East Midlands Airport informing him he could descend further in preparation for the diverted landing. He did not resume the review after the transmission ended, and instead commenced descent. The dials on the two vibration gauges (one for each engine) were smaller than on the previous versions of the 737 in which the pilots had the majority of their experience, and the LED needle went around the outside of the dial as opposed to the inside. The pilots had received no simulator training on the new model, as no simulator for the 737-400 existed in the UK at that time. At the time, vibration indicators were known for being unreliable (and normally ignored by pilots), but unknown to the pilots, this was one of the first aircraft to have a very accurate vibration readout, although it was still permitted to fly with one gauge unserviceable under Boeing's Minimum Equipment List.

Engine malfunction
Analysis of the engine from the crash determined that the fan blades (LP stage 1 compressor) of the uprated CFM International CFM56 engine used on the 737-400 were subject to abnormal amounts of vibration when operating at high power settings above 10,000 feet. As it was an upgrade to an existing engine, in-flight testing was not mandatory, and the engine had only been tested in the laboratory. Upon this discovery, the remaining 99 Boeing 737-400s then in service were grounded and the engines modified. Following the crash, testing all newly designed and significantly redesigned turbofan engines under representative flight conditions is now mandatory. This unnoticed vibration created excessive metal fatigue in the fan blades, and on G-OBME this caused one of the fan blades to break off. This damaged the engine terminally and also upset its delicate balance, causing a reduction in power and an increase in vibration.
The autothrottle attempted to compensate for this by increasing the fuel flow to the engine. The damaged engine was unable to burn all the additional fuel, with much of it igniting in the exhaust flow, creating a large trail of flame behind the engine.

Aftermath
The official report into the disaster made 31 safety recommendations. Evaluation of the injuries sustained led to considerable improvements in aircraft safety and emergency instructions for passengers. These were derived from a research programme funded by the CAA and carried out by teams from the University of Nottingham and Hawtal Whiting Structures (an engineering consultancy company). The collaborative study between medical staff and engineers used analytical "occupant kinematics" techniques to assess the effectiveness of the brace position. A new notice to operators revising the brace position was issued in October 1993. The research into this accident led to the formation on 21 November 2016 of the International Board for Research into Aircraft Crash Events, which is a joint co-operation between experts in the field for the purpose of producing an internationally agreed-upon, evidence-based set of impact bracing positions for passengers and (eventually) cabin crew members in a variety of seating configurations. These will be submitted to the International Civil Aviation Organization through its Cabin Safety Group. A memorial was built in the village cemetery in nearby Kegworth to "those who died, those who were injured and those who took part in the rescue operation", together with a garden made using soil from the crash site. Captain Hunt and First Officer McClelland, both seriously injured in the crash, were dismissed following the criticisms of their actions in the Air Accidents Investigation Branch report. Hunt suffered injuries to his spine and legs in the crash. In April 1991, he told a BBC documentary, "We were the easy option – the cheap option if you wish. We made a mistake – we both made mistakes – but the question we would like answered is why we made those mistakes." British Midland later paid McClelland an out-of-court settlement for unfair dismissal. Alan Webb, the chief fire officer at East Midlands Airport, was made an MBE in the 1990 New Year Honours list for the co-ordination of his team in the rescue efforts that followed the crash. Graham Pearson, a passing motorist who assisted Kegworth survivors at the crash site for three hours, sued the airline for post-traumatic stress disorder and was awarded £57,000 in damages in 1998.

Media
The crash was featured in "Fatal Error", a 1991 episode of the documentary series Taking Liberties. In 1999, ITV aired a documentary about the Kegworth crash. Flight 092 was also featured in an episode of Seconds From Disaster called "Motorway Plane Crash". It was also featured in the 2011 Discovery Channel documentary Aircrash Confidential. In 2015, the incident was featured in the episode "Choosing Sides" (also known as "M1 Plane Crash") of the documentary television series Mayday, known in the UK as Air Crash Investigation. In 2024, the incident was also featured on the "M1 Plane Crash" episode of Terror at 30,000 Feet on Channel 5.
See also TransAsia Airways Flight 235, South African Airlink Flight 8911, Transair Flight 810 and Azerbaijan Airlines Flight A-56 – other cases of misidentification of a failing engine List of accidents and incidents involving commercial aircraft Notes References Bibliography Macarthur Job, Air Disaster Volume 2: Aerospace Publications Pty Ltd, 1996, p. 173–185 David Owen, Air Accident Investigation: Patrick Stephens Limited, 2001. (The Kegworth air disaster is given a detailed mention in Chapter 9, "Pressing the Wrong Button") HW Structures, CAA Paper 90012 Occupant modelling in aircraft crash conditions: Civil Aviation Authority, 1990. Hawtal Whiting Technology Group, CAA Paper 95004 A study of aircraft passenger brace positions for impact: Civil Aviation Authority, 1995. Report file (G-OBME.pdf Archive) Appendices (G-OBME Append.pdf Archive) External links BBC 10th anniversary page about the crash BBC 'On This Day' page about the crash Pre-crash and crash pictures of the aircraft from Airliners.net Transport in Leicestershire Airliner accidents and incidents caused by mechanical failure Airliner accidents and incidents caused by pilot error Aviation accidents and incidents in 1989 Aviation accidents and incidents in England British Midland International accidents and incidents 1989 disasters in the United Kingdom 1989 in England 1980s in Leicestershire History of Leicestershire 1989 in Northern Ireland Accidents and incidents involving the Boeing 737 Classic East Midlands Airport North West Leicestershire District M1 motorway January 1989 events in the United Kingdom Airliner accidents and incidents in the United Kingdom Airliner accidents and incidents caused by design or manufacturing errors Airliner accidents and incidents caused by engine failure Airliner accidents and incidents caused by wrong engine shutdown
Kegworth air disaster
[ "Materials_science" ]
3,083
[ "Airliner accidents and incidents caused by mechanical failure", "Mechanical failure" ]
346,030
https://en.wikipedia.org/wiki/Improper%20integral
In mathematical analysis, an improper integral is an extension of the notion of a definite integral to cases that violate the usual assumptions for that kind of integral. In the context of Riemann integrals (or, equivalently, Darboux integrals), this typically involves unboundedness, either of the set over which the integral is taken or of the integrand (the function being integrated), or both. It may also involve bounded but not closed sets or bounded but not continuous functions. While an improper integral is typically written symbolically just like a standard definite integral, it actually represents a limit of a definite integral or a sum of such limits; thus improper integrals are said to converge or diverge. If a regular definite integral (which may retronymically be called a proper integral) is worked out as if it is improper, the same answer will result. In the simplest case of a real-valued function of a single variable integrated in the sense of Riemann (or Darboux) over a single interval, improper integrals may be in any of the following forms:

∫_a^∞ f(x) dx
∫_{−∞}^b f(x) dx
∫_{−∞}^∞ f(x) dx
∫_a^b f(x) dx, where f is undefined or discontinuous somewhere on [a, b]

The first three forms are improper because the integrals are taken over an unbounded interval. (They may be improper for other reasons, as well, as explained below.) Such an integral is sometimes described as being of the "first" type or kind if the integrand otherwise satisfies the assumptions of integration. Integrals in the fourth form that are improper because f has a vertical asymptote somewhere on the interval [a, b] may be described as being of the "second" type or kind. Integrals that combine aspects of both types are sometimes described as being of the "third" type or kind. In each case above, the improper integral must be rewritten using one or more limits, depending on what is causing the integral to be improper. For example, in case 1, if f is continuous on the entire interval [a, ∞), then

∫_a^∞ f(x) dx = lim_{b→∞} ∫_a^b f(x) dx.

The limit on the right is taken to be the definition of the integral notation on the left. If f is only continuous on (a, ∞) and not at a itself, then typically this is rewritten as

∫_a^∞ f(x) dx = lim_{t→a⁺} ∫_t^c f(x) dx + lim_{b→∞} ∫_c^b f(x) dx

for any choice of c > a. Here both limits must converge to a finite value for the improper integral to be said to converge. This requirement avoids the ambiguous case of adding positive and negative infinities (i.e., the "∞ − ∞" indeterminate form). Alternatively, an iterated limit could be used or a single limit based on the Cauchy principal value. If f is continuous on [a, d) and (d, b], with a discontinuity of any kind at d, then

∫_a^b f(x) dx = lim_{t→d⁻} ∫_a^t f(x) dx + lim_{u→d⁺} ∫_u^b f(x) dx.

The previous remarks about indeterminate forms, iterated limits, and the Cauchy principal value also apply here. The function can have more discontinuities, in which case even more limits would be required (or a more complicated principal value expression). Cases 2–4 are handled similarly. See the examples below. Improper integrals can also be evaluated in the context of complex numbers, in higher dimensions, and in other theoretical frameworks such as Lebesgue integration or Henstock–Kurzweil integration. Integrals that are considered improper in one framework may not be in others.

Examples
The original definition of the Riemann integral does not apply to a function such as 1/x² on the interval [1, ∞), because in this case the domain of integration is unbounded. However, the Riemann integral can often be extended by continuity, by defining the improper integral instead as a limit

∫_1^∞ dx/x² = lim_{b→∞} ∫_1^b dx/x² = lim_{b→∞} (1 − 1/b) = 1.

The narrow definition of the Riemann integral also does not cover the function 1/√x on the interval (0, 1].
The problem here is that the integrand is unbounded in the domain of integration. In other words, the definition of the Riemann integral requires that both the domain of integration and the integrand be bounded. However, the improper integral does exist if understood as the limit

∫_0^1 dx/√x = lim_{a→0⁺} ∫_a^1 dx/√x = lim_{a→0⁺} (2 − 2√a) = 2.

Sometimes integrals may have two singularities where they are improper. Consider, for example, the function 1/((x + 1)√x) integrated from 0 to ∞. At the lower bound of the integration domain, as x goes to 0 the function goes to ∞, and the upper bound is itself ∞, though the function goes to 0. Thus this is a doubly improper integral. Integrated, say, from 1 to 3, an ordinary Riemann sum suffices to produce a result of π/6. To integrate from 1 to ∞, a Riemann sum is not possible. However, any finite upper bound, say t (with t > 1), gives a well-defined result, 2 arctan(√t) − π/2. This has a finite limit as t goes to infinity, namely π/2. Similarly, the integral from 1/3 to 1 allows a Riemann sum as well, coincidentally again producing π/6. Replacing 1/3 by an arbitrary positive value s (with s < 1) is equally safe, giving π/2 − 2 arctan(√s). This, too, has a finite limit as s goes to zero, namely π/2. Combining the limits of the two fragments, the result of this improper integral is π. This process does not guarantee success; a limit might fail to exist, or might be infinite. For example, over the bounded interval from 0 to 1 the integral of 1/x does not converge; and over the unbounded interval from 1 to ∞ the integral of 1/x does not converge. It might also happen that an integrand is unbounded near an interior point, in which case the integral must be split at that point. For the integral as a whole to converge, the limit integrals on both sides must exist and must be bounded. For example:

∫_{−1}^1 dx/∛(x²) = lim_{s→0⁻} ∫_{−1}^s dx/∛(x²) + lim_{t→0⁺} ∫_t^1 dx/∛(x²) = 3 + 3 = 6.

But the similar integral

∫_{−1}^1 dx/x

cannot be assigned a value in this way, as the integrals above and below zero in the integral domain do not independently converge. (However, see Cauchy principal value.)

Convergence of the integral
An improper integral converges if the limit defining it exists. Thus for example one says that the improper integral

lim_{t→∞} ∫_a^t f(x) dx

exists and is equal to L if the integrals under the limit exist for all sufficiently large t, and the value of the limit is equal to L. It is also possible for an improper integral to diverge to infinity. In that case, one may assign the value of ∞ (or −∞) to the integral. For instance

lim_{b→∞} ∫_1^b dx/x = ∞.

However, other improper integrals may simply diverge in no particular direction, such as

lim_{b→∞} ∫_1^b sin(x) dx,

which does not exist, even as an extended real number. This is called divergence by oscillation. A limitation of the technique of improper integration is that the limit must be taken with respect to one endpoint at a time. Thus, for instance, an improper integral of the form

∫_{−∞}^∞ f(x) dx

can be defined by taking two separate limits,

∫_{−∞}^∞ f(x) dx = lim_{a→−∞} lim_{b→∞} ∫_a^b f(x) dx,

provided the double limit is finite. It can also be defined as a pair of distinct improper integrals of the first kind:

lim_{a→−∞} ∫_a^c f(x) dx + lim_{b→∞} ∫_c^b f(x) dx,

where c is any convenient point at which to start the integration. This definition also applies when one of these integrals is infinite, or both if they have the same sign.
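A worked instance of the second definition above, splitting at c = 0 (a standard example added for illustration, not part of the original article):

\[
\int_{-\infty}^{\infty} \frac{dx}{1+x^2}
= \lim_{a \to -\infty} \int_a^0 \frac{dx}{1+x^2}
+ \lim_{b \to \infty} \int_0^b \frac{dx}{1+x^2}
= \lim_{a \to -\infty} (-\arctan a) + \lim_{b \to \infty} \arctan b
= \frac{\pi}{2} + \frac{\pi}{2} = \pi.
\]

Both one-sided limits are finite, so the integral converges, and the value does not depend on the choice of the splitting point c.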
An example of an improper integral where both endpoints are infinite is the Gaussian integral $\int_{-\infty}^\infty e^{-x^2}\,dx = \sqrt{\pi}$. An example which evaluates to infinity is $\int_{-\infty}^\infty e^x\,dx$. But one cannot even define other integrals of this kind unambiguously, such as $\int_{-\infty}^\infty x\,dx$, since the double limit is infinite and the two-integral method $\lim_{a\to -\infty} \int_a^c x\,dx + \lim_{b\to\infty} \int_c^b x\,dx$ yields an indeterminate form, $\infty - \infty$. In this case, one can however define an improper integral in the sense of Cauchy principal value: $\operatorname{p.v.} \int_{-\infty}^\infty x\,dx = \lim_{b\to\infty} \int_{-b}^b x\,dx = 0.$ The questions one must address in determining an improper integral are: Does the limit exist? Can the limit be computed? The first question is an issue of mathematical analysis. The second one can be addressed by calculus techniques, but also in some cases by contour integration, Fourier transforms and other more advanced methods. Types of integrals There is more than one theory of integration. From the point of view of calculus, the Riemann integral theory is usually assumed as the default theory. In using improper integrals, it can matter which integration theory is in play. For the Riemann integral (or the Darboux integral, which is equivalent to it), improper integration is necessary both for unbounded intervals (since one cannot divide the interval into finitely many subintervals of finite length) and for unbounded functions with finite integral (since, supposing it is unbounded above, then the upper integral will be infinite, but the lower integral will be finite). The Lebesgue integral deals differently with unbounded domains and unbounded functions, so that often an integral which only exists as an improper Riemann integral will exist as a (proper) Lebesgue integral, such as $\int_1^\infty \frac{dx}{x^2}$. On the other hand, there are also integrals that have an improper Riemann integral but do not have a (proper) Lebesgue integral, such as $\int_0^\infty \frac{\sin x}{x}\,dx$. The Lebesgue theory does not see this as a deficiency: from the point of view of measure theory, the integral of $\frac{\sin x}{x}$ over $(0,\infty)$ cannot be defined satisfactorily. In some situations, however, it may be convenient to employ improper Lebesgue integrals as is the case, for instance, when defining the Cauchy principal value. The Lebesgue integral is more or less essential in the theoretical treatment of the Fourier transform, with pervasive use of integrals over the whole real line. For the Henstock–Kurzweil integral, improper integration is not necessary, and this is seen as a strength of the theory: it encompasses all Lebesgue integrable and improper Riemann integrable functions. Improper Riemann integrals and Lebesgue integrals In some cases, the integral $\int_a^c f(x)\,dx$ can be defined as an integral (a Lebesgue integral, for instance) without reference to the limit $\lim_{b\to c^-} \int_a^b f(x)\,dx$ but cannot otherwise be conveniently computed. This often happens when the function $f$ being integrated from $a$ to $c$ has a vertical asymptote at $c$, or if $c = \infty$. In such cases, the improper Riemann integral allows one to calculate the Lebesgue integral of the function. Specifically, the following theorem holds: If a function $f$ is Riemann integrable on $[a,b]$ for every $b \ge a$, and the partial integrals $\int_a^b |f(x)|\,dx$ are bounded as $b \to \infty$, then the improper Riemann integrals $\int_a^\infty f(x)\,dx$ and $\int_a^\infty |f(x)|\,dx$ both exist. Furthermore, $f$ is Lebesgue integrable on $[a,\infty)$, and its Lebesgue integral is equal to its improper Riemann integral. For example, the integral $\int_0^\infty \frac{dx}{1+x^2}$ can be interpreted alternatively as the improper integral $\lim_{b\to\infty} \int_0^b \frac{dx}{1+x^2} = \lim_{b\to\infty} \arctan b = \frac{\pi}{2},$ or it may be interpreted instead as a Lebesgue integral over the set $(0,\infty)$. Since both of these kinds of integral agree, one is free to choose the first method to calculate the value of the integral, even if one ultimately wishes to regard it as a Lebesgue integral.
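As a quick check of the computation just given, a minimal sympy sketch (same assumption as before) evaluates the partial integral and takes the limit:

```python
import sympy as sp

x, b = sp.symbols('x b', positive=True)

partial = sp.integrate(1/(1 + x**2), (x, 0, b))   # atan(b)
print(sp.limit(partial, b, sp.oo))                # pi/2
```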
Thus improper integrals are clearly useful tools for obtaining the actual values of integrals. In other cases, however, a Lebesgue integral between finite endpoints may not even be defined, because the integrals of the positive and negative parts of $f$ are both infinite, but the improper Riemann integral may still exist. Such cases are "properly improper" integrals, i.e. their values cannot be defined except as such limits. For example, $\int_0^\infty \frac{\sin x}{x}\,dx$ cannot be interpreted as a Lebesgue integral, since $\int_0^\infty \left|\frac{\sin x}{x}\right|\,dx = \infty.$ But $\frac{\sin x}{x}$ is nevertheless integrable between any two finite endpoints, and its integral between 0 and $\infty$ is usually understood as the limit of the integral: $\int_0^\infty \frac{\sin x}{x}\,dx = \lim_{b\to\infty} \int_0^b \frac{\sin x}{x}\,dx = \frac{\pi}{2}.$ Singularities One can speak of the singularities of an improper integral, meaning those points of the extended real number line at which limits are used. Cauchy principal value Consider the difference in values of two limits: $\lim_{a\to 0^+}\left(\int_{-1}^{-a} \frac{dx}{x} + \int_a^1 \frac{dx}{x}\right) = 0,$ $\lim_{a\to 0^+}\left(\int_{-1}^{-a} \frac{dx}{x} + \int_{2a}^1 \frac{dx}{x}\right) = -\ln 2.$ The former is the Cauchy principal value of the otherwise ill-defined expression $\int_{-1}^1 \frac{dx}{x}.$ Similarly, we have $\lim_{a\to\infty} \int_{-a}^a \frac{2x\,dx}{x^2+1} = 0,$ but $\lim_{a\to\infty} \int_{-2a}^a \frac{2x\,dx}{x^2+1} = -\ln 4.$ The former is the principal value of the otherwise ill-defined expression $\int_{-\infty}^\infty \frac{2x\,dx}{x^2+1}.$ All of the above limits are cases of the indeterminate form $\infty - \infty$. These pathologies do not affect "Lebesgue-integrable" functions, that is, functions the integrals of whose absolute values are finite. Summability An improper integral may diverge in the sense that the limit defining it may not exist. In this case, there are more sophisticated definitions of the limit which can produce a convergent value for the improper integral. These are called summability methods. One summability method, popular in Fourier analysis, is that of Cesàro summation. The integral $\int_0^\infty f(x)\,dx$ is Cesàro summable (C, α) if $\lim_{\lambda\to\infty} \int_0^\lambda \left(1 - \frac{x}{\lambda}\right)^{\alpha} f(x)\,dx$ exists and is finite. The value of this limit, should it exist, is the (C, α) sum of the integral. An integral is (C, 0) summable precisely when it exists as an improper integral. However, there are integrals which are (C, α) summable for α > 0 which fail to converge as improper integrals (in the sense of Riemann or Lebesgue). One example is the integral $\int_0^\infty \sin x\,dx,$ which fails to exist as an improper integral, but is (C, α) summable for every α > 0. This is an integral version of Grandi's series. Multivariable improper integrals The improper integral can also be defined for functions of several variables. The definition is slightly different, depending on whether one requires integrating over an unbounded domain, such as $\mathbb{R}^2$, or is integrating a function with singularities, like $\ln(x^2+y^2)$. Improper integrals over arbitrary domains If $f$ is a non-negative function that is Riemann integrable over every compact cube of the form $[-a,a]^n$, for $a > 0$, then the improper integral of $f$ over $\mathbb{R}^n$ is defined to be the limit $\lim_{a\to\infty} \int_{[-a,a]^n} f,$ provided it exists. A function $f$ on an arbitrary domain $A$ in $\mathbb{R}^n$ is extended to a function $\tilde{f}$ on $\mathbb{R}^n$ by zero outside of $A$: $\tilde{f}(x) = f(x)$ if $x \in A$, and $\tilde{f}(x) = 0$ otherwise. The Riemann integral of a function over a bounded domain $A$ is then defined as the integral of the extended function $\tilde{f}$ over a cube $[-a,a]^n$ containing $A$: $\int_A f = \int_{[-a,a]^n} \tilde{f}.$ More generally, if $A$ is unbounded, then the improper Riemann integral over an arbitrary domain in $\mathbb{R}^n$ is defined as the limit: $\int_A f = \lim_{a\to\infty} \int_{[-a,a]^n} \tilde{f}.$ Improper integrals with singularities If $f$ is a non-negative function which is unbounded in a domain $A$, then the improper integral of $f$ is defined by truncating $f$ at some cutoff $M$, integrating the resulting function, and then taking the limit as $M$ tends to infinity. That is, for $M > 0$, set $f_M = \min\{f, M\}$. Then define $\int_A f = \lim_{M\to\infty} \int_A f_M,$ provided this limit exists.
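To make the truncation definition concrete, here is a small sympy-based sketch; the choice of $f(x) = 1/\sqrt{x}$ on $(0,1]$ is an illustrative assumption. Since $f$ exceeds the cutoff $M$ exactly on $(0, 1/M^2)$, the truncated integral can be written in closed form and its limit recovered:

```python
import sympy as sp

x, M = sp.symbols('x M', positive=True)
f = 1/sp.sqrt(x)                 # unbounded near x = 0

# min(f, M) equals M on (0, 1/M^2) and equals f on [1/M^2, 1]
truncated = M*(1/M**2) + sp.integrate(f, (x, 1/M**2, 1))

print(sp.simplify(truncated))            # 2 - 1/M
print(sp.limit(truncated, M, sp.oo))     # 2, matching the earlier 1/sqrt(x) example
```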
Functions with both positive and negative values These definitions apply for functions that are non-negative. A more general function $f$ can be decomposed as a difference of its positive part $f^+ = \max(f, 0)$ and negative part $f^- = \max(-f, 0)$, so $f = f^+ - f^-$ with $f^+$ and $f^-$ both non-negative functions. The function $f$ has an improper Riemann integral if each of $f^+$ and $f^-$ has one, in which case the value of that improper integral is defined by $\int_A f = \int_A f^+ - \int_A f^-.$ In order to exist in this sense, the improper integral necessarily converges absolutely, since $\int_A |f| = \int_A f^+ + \int_A f^-.$ External links Numerical Methods to Solve Improper Integrals at Holistic Numerical Methods Institute Integral calculus
Improper integral
[ "Mathematics" ]
2,995
[ "Integral calculus", "Calculus" ]
346,034
https://en.wikipedia.org/wiki/Cholinergic
Cholinergic agents are compounds which mimic the action of acetylcholine and/or butyrylcholine. In general, the word "choline" describes the various quaternary ammonium salts containing the N,N,N-trimethylethanolammonium cation. Found in most animal tissues, choline is a primary component of the neurotransmitter acetylcholine and functions with inositol as a basic constituent of lecithin. Choline also prevents fat deposits in the liver and facilitates the movement of fats into cells. The parasympathetic nervous system, which uses acetylcholine almost exclusively to send its messages, is said to be almost entirely cholinergic. Neuromuscular junctions, preganglionic neurons of the sympathetic nervous system, the basal forebrain, and brain stem complexes are also cholinergic, as are the receptors for the merocrine sweat glands. In neuroscience and related fields, the term cholinergic is used in these related contexts: A substance (or ligand) is cholinergic if it is capable of producing, altering, or releasing acetylcholine or butyrylcholine ("indirect-acting"), or of mimicking their behaviours at one or more of the body's acetylcholine receptor types ("direct-acting") or butyrylcholine receptor types ("direct-acting"). Such mimics are called parasympathomimetic drugs or cholinomimetic drugs. A receptor is cholinergic if it uses acetylcholine as its neurotransmitter. A synapse is cholinergic if it uses acetylcholine as its neurotransmitter. Cholinergic drug Structure-activity relationship for cholinergic drugs A molecule must possess a nitrogen atom capable of bearing a positive charge, preferably a quaternary ammonium salt. For maximum potency, the size of the alkyl groups substituted on the nitrogen should not exceed the size of a methyl group. The molecule should have an oxygen atom, preferably an ester-like oxygen capable of participating in a hydrogen bond. A two-carbon unit should occur between the oxygen atom and the nitrogen atom. There must be two methyl groups on the nitrogen. A larger third alkyl group is tolerated, but more than one large alkyl group leads to loss of activity. The overall size of the molecule cannot be altered much; bigger molecules have poorer activity. Cholinergic hypothesis of Alzheimer's disease The hypothesis states that a possible cause of AD is the reduced synthesis of acetylcholine, a neurotransmitter involved in both memory and learning, two important components of AD. Many current drug therapies for AD are centered on the cholinergic hypothesis, although not all have been effective. Studies performed in the 1980s demonstrated significant impairment of cholinergic markers in Alzheimer's patients. Thus it was proposed that degeneration of cholinergic neurons in the basal forebrain and the associated loss of cholinergic neurotransmission in the cerebral cortex and other areas contributed significantly to the deterioration in cognitive function seen in patients with Alzheimer's disease. Further studies on the cholinergic system and AD demonstrated that acetylcholine plays a role in learning and memory. Scopolamine, an anticholinergic drug, was used to block cholinergic activity in young adults and induce memory impairments similar to those present in the elderly. The memory impairments were reversed when treated with physostigmine, an indirectly acting cholinergic agonist (a cholinesterase inhibitor). However, reversing memory impairments in AD patients may not be this easy due to permanent changes in brain structure.
When young adults perform memory and attention tasks, brain activation patterns are balanced between the frontal and occipital lobes, creating a balance between bottom-up and top-down processing. Normal cognitive aging may affect long-term and working memory, though the cholinergic system and cortical areas maintain performance through functional compensation. Adults with AD presenting with dysfunction of the cholinergic system are not able to compensate for long-term and working memory deficits. AD is currently treated by increasing acetylcholine concentration with acetylcholinesterase inhibitors, which prevent acetylcholinesterase from breaking down acetylcholine. Current acetylcholinesterase inhibitors approved in the United States by the FDA to treat Alzheimer's include donepezil, rivastigmine, and galantamine. These drugs work to increase the levels of acetylcholine and subsequently increase the function of neural cells. However, not all treatments based upon the cholinergic hypothesis have been successful in treating the symptoms or slowing the progression of AD. Therefore, a disruption to the cholinergic system has been proposed as a consequence of AD rather than a direct cause. See also Choline Acetylcholine Parasympathetic nervous system Neuromuscular junction Adrenergic Anticholinergic Dopaminergic GABAergic Glutamatergic Moly (herb) Nootropic Racetam Serotonergic Nicotinic acetylcholine receptor References Parasympathetic nervous system Neurochemistry Sympathomimetics
Cholinergic
[ "Chemistry", "Biology" ]
1,102
[ "Biochemistry", "Neurochemistry" ]
346,036
https://en.wikipedia.org/wiki/Family%20Radio%20Service
The Family Radio Service (FRS) is an improved walkie-talkie radio system authorized in the United States since 1996. This personal radio service uses channelized frequencies around 462 and 467 MHz in the ultra high frequency (UHF) band. It does not suffer the interference effects found on citizens' band (CB) at 27 MHz, or on the 49 MHz band also used by cordless telephones, toys, and baby monitors. FRS uses frequency modulation (FM) instead of amplitude modulation (AM). Since the UHF band has different radio propagation characteristics, short-range use of FRS may be more predictable than that of the more powerful license-free radios operating in the HF CB band. Initially proposed by RadioShack in 1994 for use by families, FRS has also seen significant adoption by business interests, as an unlicensed, low-cost alternative to the business band. New rules issued by the FCC in May 2017 clarify and simplify the overlap between FRS and General Mobile Radio Service (GMRS) radio services. Worldwide, a number of similar personal radio services exist; these share the characteristics of low-power operation in the UHF (or upper VHF) band using FM, and simplified or no end-user licenses. Exact frequency allocations differ, so equipment legal to operate in one country may cause unacceptable interference in another. Radios approved for FRS are not legal to operate anywhere in Europe. Technical information FRS radios use narrow-band frequency modulation (NBFM) with a maximum deviation of 2.5 kilohertz. The channels are spaced at 12.5 kilohertz intervals. All 22 channels are shared with GMRS radios. Initially, FRS radios were limited to 500 milliwatts across all channels; after May 18, 2017, the limit is increased to 2 watts on channels 1–7 and 15–22. FRS radios frequently have provisions for using sub-audible tone squelch (CTCSS and DCS) codes, filtering out unwanted chatter from other users on the same frequency. Although these codes are sometimes called "privacy codes" or "private line codes" (PL codes), they offer no protection from eavesdropping and are intended only to help reduce unwanted audio when sharing busy channels. Tone codes also do nothing to prevent desired transmissions from being swamped by stronger signals having a different code. All equipment used on FRS must be certified according to FCC regulations. Radios are not certified for use in this service if they exceed limits on power output, have a detachable antenna, allow for unauthorized selection of transmitting frequencies outside of the 22 frequencies designated for FRS, or for other reasons. After December 2017, the FCC no longer accepts applications to certify hand-held FRS units providing for transmission in any other radio band. FRS radios must use only permanently attached antennas; there are also table-top FRS "base station" radios that have whip antennas. This limitation intentionally restricts the range of communications, allowing greatest use of the available channels by the community. The use of duplex radio repeaters and interconnects to the telephone network is prohibited under FRS rules.
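The modulation figures given under Technical information imply how much spectrum a single FRS voice signal occupies. The sketch below estimates this with Carson's rule; the ~3 kHz audio bandwidth is an assumption typical of narrow-band voice radios, not a figure taken from the FRS rules.

```python
# Carson's rule: occupied bandwidth ~ 2 * (peak deviation + highest audio frequency)
deviation_khz = 2.5      # FRS maximum deviation, per the text above
audio_khz = 3.0          # assumed voice bandwidth (not specified by the FRS rules)

bandwidth_khz = 2 * (deviation_khz + audio_khz)
print(f"~{bandwidth_khz:.0f} kHz occupied")   # ~11 kHz, inside the 12.5 kHz channel spacing
```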
The range advertised on specific devices might not apply in real-world situations, since large buildings, trees, etc., can interfere with the signal and reduce range. Under exceptional conditions (such as hilltop to hilltop, or over open water), communication is possible over much greater distances, but that is rare. Under normal conditions, with line of sight blocked by a few buildings or trees, FRS has an actual range of about 0.5 to 1.5 km (0.3 to 1 mile). FRS/GMRS hybrid radios In May 2017, the FCC significantly revised the rules for combination FRS/GMRS radios. Combination radios will be permitted to radiate up to 2 watts on 15 of the 22 channels (as opposed to 0.5 watts), and all FRS channels are now considered shared with the GMRS service. Operation over 2 watts, or operation on GMRS repeater input channels, will still require GMRS licensing. The FCC will not certify combination FRS/GMRS radios that exceed the current power limits for the FRS service. Hybrid FRS/GMRS consumer radios have been introduced that have 22 channels. Before May 2017, radios had been certified for unlicensed operation on the 7 FRS frequencies, channels 8–14, under FRS rules. Prior to the 2017 revision, FCC rules required a GMRS license to operate on channels 1–7 using more than 0.5 watts. Many hybrid radios have an ERP that is lower than 0.5 watts on channels 1–7, or can be set by the user to operate at low power on these channels. This allows hybrid radios to be used under the license-free FRS rules if the ERP is less than 0.5 watts and the unit is certified for FRS operation on these frequencies. Beginning September 28, 2017, FRS operation is permitted at up to 2 watts on these channels. Interference to licensed services may be investigated by the FCC. Channels 8–14, formerly exclusive to FRS, can since September 28, 2017 be used by GMRS at 0.5 watts. Channels 15–22, formerly reserved exclusively for GMRS, can be used at up to 2 watts in the FRS. Effective September 30, 2019, it became unlawful in the US to import, manufacture, sell or offer to sell radio equipment capable of operating under both GMRS and FRS. This does not include amateur and other radio equipment that is not certified under Part 95, such as many handheld radios that are marketed for amateur use but are also able to transmit on FRS and GMRS frequencies. List of FRS channels compared to GMRS (the shared channel plan is reconstructed in the code sketch below). GMRS has other exclusive channels for repeater input. No FRS unit shall exceed 0.5 watt ERP (effective radiated power) on channels 8–14. FRS channels 15–22 are shared with GMRS, also under the 2-watt ERP limit for FRS use. However, if the device includes any of the following channels (467.5500, 467.5750, 467.6000, 467.6250, 467.6500, 467.6750, 467.7000, and 467.7250 MHz), a GMRS license is required. Benefits of a GMRS license include the ability to use repeaters, run higher power (up to 50 watts), and utilize external antennas, which result in much greater communication distances. FRS radios in other countries Personal UHF radio services similar to the American FRS exist in other countries, although since technical standards and frequency bands will differ, usually FCC-approved FRS equipment may not be used in other jurisdictions. Canada American-standard FRS radios have been approved for use in Canada since April 2000. Only low-power (2 W ERP), half-duplex GMRS operation is permitted, but a license is not required. Repeater and high-power operations are not permitted. This allows the use of dual-mode FRS/GMRS walkie-talkies, but precludes the use of higher-powered GMRS devices designed for vehicle and base-station purposes.
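Here is the sketch referenced in the channel list above. It reconstructs the 22 shared FRS/GMRS channel frequencies and FRS power limits from the channel plan described in this article; the frequencies follow the commonly published FCC Part 95 plan, but treat them as illustrative and verify against the current rules before relying on them.

```python
def frs_channels():
    """Return {channel: (frequency_MHz, max_frs_erp_watts)} for channels 1-22."""
    chans = {}
    for i in range(7):                       # channels 1-7: 462 MHz interstitials, 2 W
        chans[i + 1] = (462.5625 + 0.025 * i, 2.0)
    for i in range(7):                       # channels 8-14: 467 MHz interstitials, 0.5 W
        chans[i + 8] = (467.5625 + 0.025 * i, 0.5)
    for i in range(8):                       # channels 15-22: GMRS "main" channels, 2 W
        chans[i + 15] = (462.5500 + 0.025 * i, 2.0)
    return chans

for ch, (mhz, watts) in frs_channels().items():
    print(f"Ch {ch:2d}: {mhz:.4f} MHz, {watts} W max ERP")
```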
Mexico Since tourists often bring their FRS radios with them, and since trade between the U.S., Canada, and Mexico is of great value to all three countries, the Mexican Secretary of Communication and Transportation has authorized use of the FRS frequencies and equipment similar to that in the US. However, dual-mode FRS/GMRS equipment is not approved in Mexico, so caution should be exercised in operating hybrid FRS/GMRS devices purchased elsewhere. South America Dual-mode GMRS/FRS equipment is also approved in Brazil (GMRS only in simplex mode; the GMRS frequencies 462.550, 467.550, 462.725, and 467.725 MHz are not allowed) and in most other South American countries. Portable radios are heavily used in private communications, mainly by security staff in nightclubs and malls, but also in private parking, maintenance, and delivery services. See also ChatNow LPD433 Multi-Use Radio Service PMR446 Public Radio Service Notes References External links Industry Canada discussion on the approval of FRS in Canada Bandplans Radio hobbies Radio regulations Radio technology American inventions Telecommunications-related introductions in 1996 1996 establishments in the United States
Family Radio Service
[ "Technology", "Engineering" ]
1,701
[ "Information and communications technology", "Telecommunications engineering", "Radio technology" ]
346,133
https://en.wikipedia.org/wiki/Annihilation
In particle physics, annihilation is the process that occurs when a subatomic particle collides with its respective antiparticle to produce other particles, such as an electron colliding with a positron to produce two photons. The total energy and momentum of the initial pair are conserved in the process and distributed among a set of other particles in the final state. Antiparticles have exactly opposite additive quantum numbers from particles, so the sums of all quantum numbers of such an original pair are zero. Hence, any set of particles may be produced whose total quantum numbers are also zero as long as conservation of energy, conservation of momentum, and conservation of spin are obeyed. During a low-energy annihilation, photon production is favored, since these particles have no mass. High-energy particle colliders produce annihilations where a wide variety of exotic heavy particles are created. The word "annihilation" is also used informally for the interaction of two particles that are not mutual antiparticles, i.e. not charge conjugates of each other. Some quantum numbers may then not sum to zero in the initial state, but conserve with the same totals in the final state. An example is the "annihilation" of a high-energy electron antineutrino with an electron to produce a W boson. If the annihilating particles are composite, such as mesons or baryons, then several different particles are typically produced in the final state. The inverse of annihilation is pair production, the process in which a high-energy photon converts its energy into mass. Production of a single boson If the initial two particles are elementary (not composite), then they may combine to produce only a single elementary boson, such as a photon (γ), gluon (g), Z boson, or a Higgs boson (H0). If the total energy in the center-of-momentum frame is equal to the rest mass of a real boson (which is impossible for a massless boson such as the photon), then that created particle will continue to exist until it decays according to its lifetime. Otherwise, the process is understood as the initial creation of a boson that is virtual, which immediately converts into a real particle + antiparticle pair. This is called an s-channel process. An example is the annihilation of an electron with a positron to produce a virtual photon, which converts into a muon and anti-muon. If the energy is large enough, a Z boson could replace the photon. Examples Electron–positron annihilation e− + e+ → γ + γ When a low-energy electron annihilates a low-energy positron (antielectron), the most probable result is the creation of two or more photons, since the only other final-state Standard Model particles that electrons and positrons carry enough mass–energy to produce are neutrinos, which are approximately 10,000 times less likely to produce, and the creation of only one photon is forbidden by momentum conservation: a single photon would carry nonzero momentum in any frame, including the center-of-momentum frame where the total momentum vanishes. Both the annihilating electron and positron particles have a rest energy of about 0.511 million electron-volts (MeV). If their kinetic energies are relatively negligible, this total rest energy appears as the photon energy of the photons produced. Each of the photons then has an energy of about 0.511 MeV. Momentum and energy are both conserved, with 1.022 MeV of photon energy (accounting for the rest energy of the particles) carried by photons moving in opposite directions (accounting for the total zero momentum of the system).
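The 0.511 MeV figure quoted above is just the electron rest energy, E = m_e c^2. A minimal sketch using scipy's CODATA constants (assuming scipy is installed) reproduces it:

```python
from scipy.constants import electron_mass, c, electron_volt

rest_energy_J = electron_mass * c**2                  # E = m c^2
rest_energy_MeV = rest_energy_J / (1e6 * electron_volt)

print(f"{rest_energy_MeV:.3f} MeV per particle")      # 0.511 MeV
print(f"{2 * rest_energy_MeV:.3f} MeV total")         # 1.022 MeV shared by the two photons
```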
If one or both charged particles carry a larger amount of kinetic energy, various other particles can be produced. Furthermore, the annihilation (or decay) of an electron–positron pair into a single photon can occur in the presence of a third charged particle, to which the excess momentum can be transferred by a virtual photon from the electron or positron. The inverse process, pair production by a single real photon, is also possible in the electromagnetic field of a third particle. Proton–antiproton annihilation When a proton encounters its antiparticle (and more generally, if any species of baryon encounters the corresponding antibaryon), the reaction is not as simple as electron–positron annihilation. Unlike an electron, a proton is a composite particle consisting of three "valence quarks" and an indeterminate number of "sea quarks" bound by gluons. Thus, when a proton encounters an antiproton, one of its quarks, usually a constituent valence quark, may annihilate with an antiquark (which more rarely could be a sea quark) to produce a gluon, after which the gluon together with the remaining quarks, antiquarks, and gluons will undergo a complex process of rearrangement (called hadronization or fragmentation) into a number of mesons (mostly pions and kaons), which will share the total energy and momentum. The newly created mesons are unstable, and unless they encounter and interact with some other material, they will decay in a series of reactions that ultimately produce only photons, electrons, positrons, and neutrinos. This type of reaction will occur between any baryon (particle consisting of three quarks) and any antibaryon consisting of three antiquarks, one of which corresponds to a quark in the baryon. (This reaction is unlikely if at least one among the baryon and anti-baryon is exotic enough that they share no constituent quark flavors.) Antiprotons can and do annihilate with neutrons, and likewise antineutrons can annihilate with protons, as discussed below. Reactions in which proton–antiproton annihilation produces as many as 9 mesons have been observed, while production of 13 mesons is theoretically possible. The generated mesons leave the site of the annihilation at moderate fractions of the speed of light and decay with whatever lifetime is appropriate for their type of meson. Similar reactions will occur when an antinucleon annihilates within a more complex atomic nucleus, save that the resulting mesons, being strongly interacting, have a significant probability of being absorbed by one of the remaining "spectator" nucleons rather than escaping. Since the absorbed energy can be as much as ~2 GeV, it can in principle exceed the binding energy of even the heaviest nuclei. Thus, when an antiproton annihilates inside a heavy nucleus such as uranium or plutonium, partial or complete disruption of the nucleus can occur, releasing large numbers of fast neutrons. Such reactions open the possibility for triggering a significant number of secondary fission reactions in a subcritical mass and may potentially be useful for spacecraft propulsion.
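The "~2 GeV" figure above is essentially the combined rest energy of the annihilating pair. A short sketch in the same spirit as the earlier one (again assuming scipy):

```python
from scipy.constants import proton_mass, c, electron_volt

# Combined rest energy of a proton-antiproton pair at rest
total_rest_energy_GeV = 2 * proton_mass * c**2 / (1e9 * electron_volt)
print(f"{total_rest_energy_GeV:.3f} GeV")   # ~1.877 GeV shared among the mesons
```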
Higgs production In collisions of two nucleons at very high energies, sea quarks and gluons tend to dominate the interaction rate, so neither nucleon need be an anti-particle for annihilation of a quark pair or "fusion" of two gluons to occur. Examples of such processes contribute to the production of the long-sought Higgs boson. Direct Higgs production by annihilation of the light (valence) quarks is very weak, but heavy quarks from the sea, or heavy quarks produced in the collision, are available. In 2012, the CERN laboratory in Geneva announced the discovery of the Higgs in the debris from proton–proton collisions at the Large Hadron Collider (LHC). The strongest Higgs yield is from fusion of two gluons (via annihilation of a heavy quark pair), while two quarks or antiquarks produce more easily identified events through radiation of a Higgs by a produced virtual vector boson or annihilation of two such vector bosons. See also Pair production Creation and annihilation operators Photon energy References Footnotes Notations External links Antimatter
Annihilation
[ "Physics" ]
1,690
[ "Antimatter", "Matter" ]
346,153
https://en.wikipedia.org/wiki/Historic%20Columbia%20River%20Highway
The Historic Columbia River Highway is an approximately scenic highway in the U.S. state of Oregon between Troutdale and The Dalles, built through the Columbia River Gorge between 1913 and 1922. As the first planned scenic roadway in the United States, it has been recognized in numerous ways, including being listed on the National Register of Historic Places, being designated as a National Historic Landmark by the U.S. Secretary of the Interior, being designated as a National Historic Civil Engineering Landmark by the American Society of Civil Engineers, and being considered a "destination unto itself" as an All-American Road by the U.S. Secretary of Transportation. The historic roadway was bypassed by the present Columbia River Highway No. 2 (Interstate 84) from the 1930s to the 1950s, leaving behind the old two-lane road. The road is now mostly owned and maintained by the state through the Oregon Department of Transportation as the Historic Columbia River Highway No. 100 (still partially marked as U.S. Route 30; see Oregon highways and routes) or the Oregon Parks and Recreation Department as the Historic Columbia River Highway State Trail. The original highway was promoted by lawyer and entrepreneur Sam Hill and engineer Samuel C. Lancaster, to be modeled after the great scenic roads of Europe. From the very beginning, the roadway was envisioned not just as means of traveling by the then popular Model T, but designed with an elegance that took full advantage of all the natural beauty along the route. When the United States highway system was officially established in 1926, the highway became the part of U.S. Route 30. Since then, modern Interstate 84 has been built parallel to the highway between Portland and The Dalles, replacing it as the main travel route and resulting in the loss of some of the original sections of road. History Planning and construction The Columbia River Gorge is the lowest crossing of the Cascade Mountains, carved by the Columbia River during the Cascades' uplift. Rafting down the gorge from The Dalles was one of the most expensive and dangerous parts of the Oregon Trail, traveled by thousands of emigrants to the Oregon Territory, until the Barlow Road opened in 1846 around the south side of Mount Hood. A wagon road was finally built through the gorge in the 1870s, when The Dalles and Sandy Wagon Road was constructed along the south shore from The Dalles to the Sandy River east of Portland. However, this road had steep (20%) grades and a crooked and narrow alignment, and it was not until 1882 that the Oregon Railway and Navigation Company finally opened a water-level route, partially destroying the wagon road. With the onset of the automobile and the good roads movement of the early 20th century, a road was once again needed, and Multnomah County began constructing a 20-foot (6 m) roadway with 9% grades, but ran into difficulties relating to the railroad's location. At Shellrock Mountain to the east, long believed to be an impassable barrier, Governor Oswald West used prison labor in 1912 to prove that it was possible to build a road, at least temporarily. The eventual highway was primarily designed by engineer and landscape architect Samuel C. Lancaster, a lifelong friend of good roads promoter Samuel Hill. His first contribution to the Pacific Northwest was as a consultant for Seattle's Olmsted boulevard system, part of its preparations for the 1909 Alaska-Yukon-Pacific Exposition. 
In 1908, the two traveled to Europe for the First International Road Congress, where Hill represented the state of Washington. Hill was especially impressed by Switzerland's Axenstrasse, a road built along Lake Lucerne in 1865 that included a windowed tunnel, and wanted to build a similar scenic highway through the Columbia River Gorge. With Lancaster's help, Hill built the experimental Maryhill Loops Road from the river east of the gorge up the Columbia Hills to his planned Quaker utopian community at Maryhill. The road was the first asphalt road in the state, designed with gradual horseshoe curves to avoid steep grades. However, Washington's lawmakers denied his request for a cross-state trunk route on the river's north bank, and Hill crossed the river to Oregon, the last of the states in the far Western U.S. to create a highway department. With the help of his life-size model at Maryhill, he convinced the state legislature to create the State Highway Commission in 1913, which would work with the counties to build roads. The Multnomah County commissioners agreed later that year that the state should design the route to distance it from county politics, and set aside an initial $75,000. In laying out the highway, Lancaster sought not only to create a transportation artery, but to make the gorge's "beautiful waterfalls, canyons, cliffs and mountain domes" accessible to "men from all climes". According to locating engineer John Arthur Elliott, Lancaster began surveying near the Chanticleer Inn, where Larch Mountain Road, part of Multnomah County's existing road network, began climbing the hills of the gorge. For five months, from September 1913 to January 1914, he laid out a route for about 21 miles (34 km) to the Hood River County line west of Cascade Locks. The alignment generally had a maximum grade of 5% and curve radius of 200 feet (60 m), and was wide enough for 18 feet (5.5 m) of macadam (later asphalt) and two 3-foot (1 m) gravel shoulders. To accomplish this, Lancaster used curves similar to the road he had designed at Maryhill where the highway descended from Crown Point. To carry rainwater off the road, Lancaster designed a comprehensive drainage system, including raising the center of the road, installing concrete curbs and gutters as on a city street, and taking the road over heavy flows on culverts. Eleven larger reinforced concrete bridges and several full or half viaducts were specially designed for the Multnomah County portion of the highway, taking the road over streams or along steep hillsides with a minimum of earthmoving. Masonry was used for retaining walls, which kept the highway from falling off the hillside, and guard walls, which kept drivers and pedestrians from falling off the road. At Oneonta Bluff, the highway passed through the first of five tunnels, as the land to the north was taken by the rail line. With the completion of the Oneonta Tunnel and a number of bridges, the road was open to traffic west of Warrendale, near Horsetail Falls, by October 1914. In April 1915, Multnomah County voters approved the cost of covering the initial macadam with a patented long-lasting bituminous mixture known as Warrenite, which was completed to the county line by the end of the summer. For the section west of the Chanticleer Inn, Multnomah County generally made improvements to existing roads. 
Base Line Road (Stark Street) stretched east from Portland almost to the Sandy River; the roadway east of Troutdale Road to the river, including the present Sweetbriar Road, was somewhat circuitous. An old wooden Pratt through truss bridge over the Sandy collapsed on April 25, 1914, and its steel replacement was built as part of the Columbia River Highway project. A new extension of Base Line Road, built in 1915, gradually descended the riverbank to the bridge. Between the river and the inn, existing roads were incorporated into the highway, which bypassed other sections such as Nielson Road and Bell Road. The county built a second approach to the highway in 1916, using the existing Sandy Boulevard to Troutdale and a 1912 through truss bridge that connected to Woodard Road. A new roadway bypassed Woodard Road's steep grades, following the riverbank to the east end of the 1914 bridge. The entire length of the highway in Multnomah County was maintained by the county until January 16, 1930, when the state took over maintenance of the Sandy Boulevard route. (Stark Street was never a state-maintained highway, though for a time it was signed as U.S. Route 30 Alternate.) Beyond Multnomah County, State Highway Department engineer John Arthur Elliott surveyed a route along the river through Hood River County in 1913 and 1914, mostly using the 1870s wagon road where available. County voters approved a bond issue in mid-1914 to pay for construction west of the city of Hood River, helped by highway promoter Simon Benson's purchase of the entire issue and promise to pay any overruns. The most difficult location was at Mitchell Point, where the old road included grades of up to 23% to take it over a saddle, and the railroad occupied the only available land between the cliff and the river. Elliott solved the problem by building the Mitchell Point Tunnel—a windowed tunnel like on Switzerland's Axenstrasse—through the cliff, with a viaduct on the west approach. Construction began in March 1915, and the Mitchell Point section was opened to traffic in early September, at a cost of about $47,000. To dedicate the completed highway between Portland and Hood River, two ceremonies were held at Multnomah Falls and Crown Point on the same day in June 1916. Between Hood River and The Dalles, construction was delayed by rugged terrain west of and debate over the best route east of Mosier. Elliott considered several options west of Mosier, including a route close to the railroad, which had again taken the best location along the river, and a route over the Mosier Hills, closer to the existing county road (now Old Dalles Drive and Hood River Road). The former, while shorter, would be, in Elliott's words, "passing a section made up of views which would leave a lasting impression on the traveler". Elliott had left the State Highway Department by 1917, when new locating engineer Roy A. Klein surveyed a third alignment. It was closer to the river than the old county road, yet higher than Elliott's river alignment, in order to avoid closing the rail line during blasting. Just after leaving Hood River on a 1918 bridge over the Hood River, which had replaced an older wooden truss bridge, the highway climbed via a series of loops, similar to the ones at Crown Point. From there it followed the course of the river, partway up the hillside. 
Near the east end, the Mosier Twin Tunnels, completed in 1920, carried the road through a portion of the hill; the eastern of the two included two windows, similar to the five at Mitchell Point. Because of its beauty, photographers like William Henry Jackson, Benjamin A. Gifford, Arthur Prentiss and Carleton Watkins documented the construction of this highway. The final piece to The Dalles was laid out by J. H. Scott of the State Highway Department. It followed an inland route, climbing existing county roads to the Rowena Crest, where it used a third set of loops to descend to river level at Rowena. There it picked up a former alignment of the Oregon-Washington Railroad and Navigation Company most of the way to The Dalles. Most of the bridges in Wasco County were designed by Conde McCullough, who would later become famous for his work on U.S. Route 101, the Oregon Coast Highway. A completion ceremony for the Columbia River Highway was held on June 27, 1922, when Simon Benson symbolically helped pave the final portion near Rowena. By then, the roadway was part of a longer Columbia River Highway, stretching from Astoria on the Pacific Ocean east to Pendleton as Highway No. 2 in a large network of state highways. In the State Highway Department's fifth biennial report, published in 1922, it reported that construction costs to date on the Columbia River Highway totaled about $11 million, with the state contributing $7.6 million, the federal government $1.1 million, and the counties $2.3 million ($1.5 million of which was from Multnomah County). In 1926, the American Association of State Highway Officials designated the road as part of U.S. Route 30. The first realignment was made by 1935 at the west entrance to The Dalles, where a more direct route along West 2nd Street bypassed the old alignment along West 6th Street, the Mill Creek Bridge, and West 3rd Place. Water-level bypass Even as construction was ongoing on the east end of the Columbia River Highway, the design had become obsolete, as motorists wanting to get to their destination greatly outnumbered tourists taking a pleasure drive. There were also problems with rockfall, especially west of the Mosier Twin Tunnels. By 1932, Lancaster proposed a new water-level route, while keeping the old road as a scenic highway. The first such bypass was necessitated by the federal government's creation of Bonneville Dam on the Columbia River. The dam would flood the railroad, and the highway would need to be moved so the railroad could take its place. The highway's new two-lane alignment, completed in 1937, crossed the old road several times between the community of Bonneville (just east of Tanner Creek) and Cascade Locks. The realignment had the effect of closing the old road to all but the most local of traffic, since the construction of the east portal of the new Toothrock Tunnel, just west of a new bridge over Eagle Creek, had destroyed a section of road on the hillside. By the end of the 1940s, the original cross section of 18 feet (5.5 m) of pavement and two 3-foot (1 m) shoulders had been modified to 24 feet (7.5 m) of pavement. The Mosier Twin Tunnels were similarly widened from 8⅔ feet (2⅔ m) to 10 feet (3 m) in each direction in 1938 to accommodate larger trucks, but this was not enough, and traffic signals were later installed at the tunnels to regulate one-way traffic. 
A 1948 bypass of the Oneonta Tunnel was made possible by moving the railroad slightly north on fill; the railroad benefited by removing the risk of the thin tunnel wall collapsing onto the track. Oneonta Tunnel was sealed in 1948 but revealed again fifty-five years later as part of the Historic Columbia River Highway restoration project. More comprehensive bypass planning began by 1941, when the State Highway Commission adopted surveys for the new highway. Restoration and current use Starting in June 2006, the Oregon Department of Transportation, using about $1.5 million in state and federal money, began restoring the Oneonta Tunnel to its 1920s appearance. The tunnel officially reopened March 21, 2009 for pedestrian and bicycle traffic. The Eagle Creek Fire swept through the Gorge in September 2017, causing rockslides that closed the historic highway for a year. The highway remained closed between Bridal Veil and Ainsworth State Park until November 23, 2018 for restoration and reconstruction work. Route description and historic designations Although the city of Troutdale has named the old highway "Columbia River Highway" west to 244th Avenue, where it is cut by I-84, signs for the scenic byway begin at exit 17 of I-84, and point south on Graham Road to the west end of downtown Troutdale. Modern milepoint zero of the Historic Columbia River Highway No. 100 is located at the west end of the Sandy River bridge, historic milepost 14.2. Modern highways, including I-84, and other developments have resulted in the abandonment of major sections of the historic original highway. In the interest of tourism and historical preservation, seventy-four miles of the original road—from Troutdale to The Dalles—have been established as the Historic Columbia River Highway (HCRH). Forty miles of the route are open to motor vehicles: The 24 westernmost miles starting in Troutdale (at the eastern edge of urban Portland) provide access to dozens of hiking trails, Crown Point Vista House, and numerous waterfalls such as Multnomah Falls. This section forms a loop with the Mount Hood Scenic Byway. The 16 easternmost miles ending in The Dalles. The remaining portions of the HCRH designated for non-motorized use are now known as the Historic Columbia River Highway State Trail. These are being developed as money becomes available. Roughly seven miles between Hood River and Mosier have been open to non-motorized traffic since 2000, passing through the historic Mosier Tunnels. Once restoration is complete, the highway will serve as a scenic and alternative bicycle route for I-84 and US 30 between The Dalles and Portland. Currently, cyclists wishing to travel between these two towns must ride on the shoulders of I-84 for much of the distance, or the much more dangerous and narrow State Route 14 on the Washington side of the river. The Columbia River Highway is the nation's oldest scenic highway. In 1984 it was recognized as a National Historic Civil Engineering Landmark by the American Society of Civil Engineers. In 2000 it was designated a National Historic Landmark by the National Park Service as "an outstanding example of modern highway development". The Columbia River Highway Historic District was listed on the National Register of Historic Places in 1983. It includes 38 contributing structures on . 
See also Columbia River Gorge List of National Historic Landmarks in Oregon List of Registered Historic Places in Oregon References External links : History, Features, Historic Photos and Postcards, Hiking, Camping, Cycling 50th Anniversary Exhibit of the Oregon State Archives Heritage Preservation Services of the National Park Service National Historic Landmark data sheet National Scenic Byways Program page Oregon Department of Transportation page Oregon Parks and Recreation state trail page Heritage Preservation Services, National Park Service Flexibility in Highway Design, Federal Highway Administration Friends of the Historic Columbia River Highway (Historical) Oregon State Highway System Map Oregon State Highway Commission (as adopted: November 27, 1917) Official State Map of Oregon ODOT Transportation Development Division Geographic Information Services Unit History of State Highways in Oregon ODOT Salem Headquarters, Right of Way Engineering (August 4, 2017) Auto trails in the United States Historic trails and roads in Oregon All-American Roads Columbia River Gorge U.S. Route 30 National Historic Landmarks in Oregon Named state highways in Oregon Interstate 84 (Oregon–Utah) Scenic highways in Oregon State parks of Oregon Roads on the National Register of Historic Places in Oregon National Register of Historic Places in Multnomah County, Oregon National Forest Scenic Byways Transportation in Multnomah County, Oregon Historic Civil Engineering Landmarks Transportation in Wasco County, Oregon Transportation in Hood River County, Oregon National Register of Historic Places in Wasco County, Oregon Tourist attractions in Multnomah County, Oregon Tourist attractions in Wasco County, Oregon Tourist attractions in Hood River County, Oregon National Register of Historic Places in Hood River County, Oregon Historic American Engineering Record in Oregon
Historic Columbia River Highway
[ "Engineering" ]
3,770
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
346,167
https://en.wikipedia.org/wiki/List%20of%20mathematical%20logic%20topics
This is a list of mathematical logic topics. For traditional syllogistic logic, see the list of topics in logic. See also the list of computability and complexity topics for more theory of algorithms. Working foundations Peano axioms Giuseppe Peano Mathematical induction Structural induction Recursive definition Naive set theory Element (mathematics) Ur-element Singleton (mathematics) Simple theorems in the algebra of sets Algebra of sets Power set Empty set Non-empty set Empty function Universe (mathematics) Axiomatization Axiomatic system Axiom schema Axiomatic method Formal system Mathematical proof Direct proof Reductio ad absurdum Proof by exhaustion Constructive proof Nonconstructive proof Tautology Consistency proof Arithmetization of analysis Foundations of mathematics Formal language Principia Mathematica Hilbert's program Impredicative Definable real number Algebraic logic Boolean algebra (logic) Dialectica space categorical logic Model theory Finite model theory Descriptive complexity theory Model checking Trakhtenbrot's theorem Computable model theory Tarski's exponential function problem Undecidable problem Institutional model theory Institution (computer science) Non-standard analysis Non-standard calculus Hyperinteger Hyperreal number Transfer principle Overspill Elementary Calculus: An Infinitesimal Approach Criticism of non-standard analysis Standard part function Set theory Forcing (mathematics) Boolean-valued model Kripke semantics General frame Predicate logic First-order logic Infinitary logic Many-sorted logic Higher-order logic Lindström quantifier Second-order logic Soundness theorem Gödel's completeness theorem Original proof of Gödel's completeness theorem Compactness theorem Löwenheim–Skolem theorem Skolem's paradox Gödel's incompleteness theorems Structure (mathematical logic) Interpretation (logic) Substructure (mathematics) Elementary substructure Skolem hull Non-standard model Atomic model (mathematical logic) Prime model Saturated model Existentially closed model Ultraproduct Age (model theory) Amalgamation property Hrushovski construction Potential isomorphism Theory (mathematical logic) Complete theory Vaught's test Morley's categoricity theorem Stability spectrum Morley rank Stable theory Forking extension Strongly minimal theory Stable group Tame group o-minimal theory Weakly o-minimal structure C-minimal theory Spectrum of a theory Vaught conjecture Model complete theory List of first-order theories Conservative extension Elementary class Pseudoelementary class Strength (mathematical logic) Differentially closed field Exponential field Ax–Grothendieck theorem Ax–Kochen theorem Peano axioms Non-standard model of arithmetic First-order arithmetic Second-order arithmetic Presburger arithmetic Wilkie's theorem Functional predicate T-schema Back-and-forth method Barwise compactness theorem Skolemization Lindenbaum–Tarski algebra Löb's theorem Arithmetical set Definable set Ehrenfeucht–Fraïssé game Herbrand interpretation / Herbrand structure Imaginary element Indiscernibles Interpretation (model theory) / Interpretable structure Pregeometry (model theory) Quantifier elimination Reduct Signature (logic) Skolem normal form Type (model theory) Zariski geometry Set theory Algebra of sets Axiom of choice Axiom of countable choice Axiom of dependent choice Zorn's lemma Boolean algebra (structure) Boolean-valued model Burali-Forti paradox Cantor's back-and-forth method Cantor's diagonal argument Cantor's first uncountability proof Cantor's theorem 
Cantor–Bernstein–Schroeder theorem Cardinality Aleph number Aleph-null Aleph-one Beth number Cardinal number Hartogs number Cartesian product Class (set theory) Complement (set theory) Complete Boolean algebra Continuum (set theory) Suslin's problem Continuum hypothesis Countable set Descriptive set theory Analytic set Analytical hierarchy Borel equivalence relation Infinity-Borel set Lightface analytic game Perfect set property Polish space Prewellordering Projective set Property of Baire Uniformization (set theory) Universally measurable set Determinacy AD+ Axiom of determinacy Axiom of projective determinacy Axiom of real determinacy Empty set Forcing (mathematics) Fuzzy set Internal set theory Intersection (set theory) L L(R) Large cardinal property Musical set theory Ordinal number Infinite descending chain Limit ordinal Successor ordinal Transfinite induction ∈-induction Well-founded set Well-order Power set Russell's paradox Set theory Alternative set theory Axiomatic set theory Kripke–Platek set theory with urelements Morse–Kelley set theory Naive set theory New Foundations Positive set theory Zermelo–Fraenkel set theory Zermelo set theory Set (mathematics) Simple theorems in the algebra of sets Subset Θ (set theory) Tree (descriptive set theory) Tree (set theory) Union (set theory) Von Neumann universe Zero sharp Descriptive set theory Analytical hierarchy Large cardinals Almost Ramsey cardinal Erdős cardinal Extendible cardinal Huge cardinal Hyper-Woodin cardinal Inaccessible cardinal Ineffable cardinal Mahlo cardinal Measurable cardinal N-huge cardinal Ramsey cardinal Rank-into-rank Remarkable cardinal Shelah cardinal Strong cardinal Strongly inaccessible cardinal Subtle cardinal Supercompact cardinal Superstrong cardinal Totally indescribable cardinal Weakly compact cardinal Weakly hyper-Woodin cardinal Weakly inaccessible cardinal Woodin cardinal Unfoldable cardinal Recursion theory Entscheidungsproblem Decision problem Decidability (logic) Church–Turing thesis Computable function Algorithm Recursion Primitive recursive function Mu operator Ackermann function Turing machine Halting problem Computability theory, computation Herbrand Universe Markov algorithm Lambda calculus Church-Rosser theorem Calculus of constructions Combinatory logic Post correspondence problem Kleene's recursion theorem Recursively enumerable set Recursively enumerable language Decidable language Undecidable language Rice's theorem Post's theorem Turing degree Effective results in number theory Diophantine set Matiyasevich's theorem Word problem for groups Arithmetical hierarchy Subrecursion theory Presburger arithmetic Computational complexity theory Polynomial time Exponential time Complexity class Complexity classes P and NP Cook's theorem List of complexity classes Polynomial hierarchy Exponential hierarchy NP-complete Time hierarchy theorem Space hierarchy theorem Natural proof Hypercomputation Oracle machine Rózsa Péter Alonzo Church Emil Post Alan Turing Jacques Herbrand Haskell Curry Stephen Cole Kleene Definable real number Proof theory Metamathematics Cut-elimination Tarski's undefinability theorem Diagonal lemma Provability logic Interpretability logic Sequent Sequent calculus Analytic proof Structural proof theory Self-verifying theories Substructural logics Structural rule Weakening Contraction Linear logic Intuitionistic linear logic Proof net Affine logic Strict logic Relevant logic Proof-theoretic semantics Ludics System F Gerhard Gentzen Gentzen's consistency proof Reverse mathematics 
Nonfirstorderizability Interpretability Weak interpretability Cointerpretability Tolerant sequence Cotolerant sequence Deduction theorem Cirquent calculus Mathematical constructivism Nonconstructive proof Existence theorem Intuitionistic logic Intuitionistic type theory Type theory Lambda calculus Church–Rosser theorem Simply typed lambda calculus Typed lambda calculus Curry–Howard isomorphism Calculus of constructions Constructivist analysis Lambda cube System F Introduction to topos theory LF (logical framework) Computability logic Computable measure theory Finitism Ultraintuitionism Luitzen Egbertus Jan Brouwer Modal logic Kripke semantics Sahlqvist formula Interior algebra Theorem provers First-order resolution Automated theorem proving ACL2 theorem prover E equational theorem prover Gandalf theorem prover HOL theorem prover Isabelle theorem prover LCF theorem prover Otter theorem prover Paradox theorem prover Vampire theorem prover Interactive proof system Mizar system QED project Coq Discovery systems Automated Mathematician Eurisko Historical Begriffsschrift Systems of Logic Based on Ordinals – Alan Turing's Ph.D. thesis See also Kurt Gödel Alfred Tarski Saharon Shelah Logic L Mathematical logic
List of mathematical logic topics
[ "Mathematics" ]
1,666
[ "Mathematical logic", "nan" ]
346,209
https://en.wikipedia.org/wiki/Vinyon
Vinyon is a synthetic fiber made from polyvinyl chloride. In some countries other than the United States, vinyon fibers are referred to as polyvinyl chloride fibers. It can bind non-woven fibers and fabrics. It was invented in 1939. It has the same health problems associated with chlorinated polymers. In the past, Vinyon was used as a substitute for plant-based filters in tea bags. Vinyon fiber characteristics Does not flame, but softens at low temperatures (55 °C); high resistance to chemicals; moisture absorption is less than 0.5% and moisture regain is less than 0.1%; crease resistant and elastic. Major vinyon fiber uses Industrial applications as a bonding agent for non-woven fabrics and products. Production The U.S. Federal Trade Commission definition for vinyon fiber is "A manufactured fiber in which the fiber-forming substance is any long chain synthetic polymer composed of at least 85 percent by weight of vinyl chloride units (—CH2—CHCl—)." First U.S. commercial vinyon fiber production: 1939, FMC Corporation, Fiber Division (formerly American Viscose Corporation). See also Textile References Synthetic fibers
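The FTC definition above is quantitative, so it can be illustrated with a few lines of arithmetic. The sketch below is hypothetical: the comonomer choice (vinyl acetate, as in historical vinyl chloride/vinyl acetate copolymer fibers) and the 90:10 molar ratio are illustrative assumptions, not production figures.

```python
# Molar masses in g/mol (vinyl chloride C2H3Cl, vinyl acetate C4H6O2)
MONOMER_MASS = {"vinyl_chloride": 62.50, "vinyl_acetate": 86.09}

def is_vinyon(mole_fractions):
    """FTC rule: at least 85% by weight vinyl chloride units."""
    total = sum(frac * MONOMER_MASS[m] for m, frac in mole_fractions.items())
    vc = mole_fractions.get("vinyl_chloride", 0.0) * MONOMER_MASS["vinyl_chloride"]
    return vc / total >= 0.85

# Hypothetical 90:10 vinyl chloride / vinyl acetate copolymer
print(is_vinyon({"vinyl_chloride": 0.90, "vinyl_acetate": 0.10}))   # True (~86.7 wt%)
```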
Vinyon
[ "Chemistry" ]
239
[ "Synthetic materials", "Synthetic fibers" ]
346,213
https://en.wikipedia.org/wiki/Blogosphere
The blogosphere is made up of all blogs and their interconnections. The term implies that blogs exist together as a connected community (or as a collection of connected communities) or as a social networking service in which everyday authors can publish their opinions and views. History The term was coined on September 10, 1999 by Brad L. Graham, as a joke. It was re-coined in 2002 by William Quick, and was quickly adopted and propagated by the warblog community. The term resembles the older word logosphere (from Greek logos meaning word, and sphere, interpreted as world), "the world of words", the universe of discourse. Despite the term's humorous intent, CNN, the BBC, and National Public Radio's programs Morning Edition, Day To Day, and All Things Considered used it several times to discuss public opinion. A number of media outlets in the late 2000s started treating the blogosphere as a gauge of public opinion, and it has been cited in both academic and non-academic work as evidence of rising or falling resistance to globalization, voter fatigue, and many other phenomena, and also in reference to identifying influential bloggers and "familiar strangers" in the blogosphere. Proliferation In 1999, Pyra Labs opened blogging to the masses by simplifying the process of creating and maintaining personal web spaces. Prior to the creation of Pyra's "Blogger", the number of blogs in existence was thought to be fewer than one hundred. Blogger led to the birth of the wider blogosphere. In 2005, a Gallup poll showed that a third of Internet users read blogs at least on occasion, and in May 2006, a study showed that there were over forty-two million bloggers contributing to the blogosphere. From fewer than 1 million blogs in existence at the start of 2003, the number of blogs doubled every six months through 2006. In 2011, it was estimated that there were more than 153 million blogs, with nearly 1 million new posts being produced by the blogosphere each day. Revenue In a 2010 Technorati study, 36% of bloggers reported some sort of income from their blogs, most often in the form of ad revenue. This shows a steady increase from their 2009 report, in which 28% of the blogging world reported their blog as a source of income, with the mean annual income from advertisements at $42,548. Other common sources of blog-related income are paid speaking engagements and paid postings. Paid postings may be subject to rules on clearly disclosing commercial advertisements as such (regulated by, for example, the Federal Trade Commission in the US and the Advertising Standards Authority in the UK). As a social network Sites such as Technorati, BlogPulse, and Tailrank track the interconnections between bloggers. Taking advantage of hypertext links which act as markers for the subjects the bloggers are discussing, these sites can follow a piece of conversation as it moves from blog to blog. These sites can also help researchers study how fast a meme spreads through the blogosphere, to determine which sites are the most important for gaining early recognition. Sites also exist to track specific blogospheres, such as those related by a certain genre, culture, subject matter, or geopolitical location. Mapping In 2007, following six weeks of observation, social media expert Matthew Hurst mapped the blogosphere, generating a plot based on the interconnections between blogs. The most densely populated areas represent the most active portions of the blogosphere.
White dots represent individual blogs. They are sized according to the number of links surrounding that particular blog. Links are plotted in both green and blue, with green representing one-way links and blue representing reciprocal links. DISCOVER Magazine described six major 'hot spots' of the blogosphere. While points 1 and 2 represent influential individual blogs, point 3 is the perfect example of a "blogging island", where individual blogs are highly connected within a sub-community but lack many connections to the larger blogosphere. Point 4 describes a sociopolitical blogging niche, in which links demonstrate the constant dialogue between bloggers who write about the same subject of interest. Point 5 is an isolated sub-community of blogs dedicated to the world of pornography. Lastly, point 6 represents a collection of sports lovers who largely segregate themselves but still manage to link back to the higher traffic blogs toward the center of the blogosphere. Merging with other social networks Over time, the blogosphere developed as its own network of interconnections. In this time, bloggers began to engage in other online communities, specifically social networking sites, melding the two realms of social media together. According to Technorati's 2010 "State of the Blogosphere" report, 78% of bloggers were using the microblogging service Twitter, with much larger percentages among individuals who blogged as a part-time job (88%) or full-time for a specific company (88%). Almost half of all bloggers surveyed used Twitter to interact with the readers of their blog, while 72% of bloggers used it for blog promotion. For bloggers whose blog was their business (self-employed), 63% used Twitter to market their business. Additionally, according to the report, almost 9 out of 10 (87%) bloggers were using Facebook. News blogs have become popular, and have created competition for traditional print newspapers and news magazines. The Huffington Post was ranked the most powerful blog in the world by The Observer in 2008, and has come to dominate current event reporting. Political blogs are often tied to a large media or news corporation, such as "The Caucus" (affiliated with The New York Times), "CNN Political Ticker", and the National Review's "The Corner". Gossip blogs have grown extensively with the development of the blogosphere. One of the first influential gossip bloggers was Perez Hilton, a celebrity and entertainment media gossip blogger. His blog posts tabloid photographs of celebrities, accompanied by captions and comments. Web traffic to the often controversial and raunchy Perez Hilton site increased significantly in 2005, prompting similar gossip blogs, such as TMZ.com, to gain popularity. Food blogs allow chefs to share recipes, cooking techniques, and food porn. Food blogs such as 101 Cookbooks, Smitten Kitchen, and Simply Recipes can serve as online cookbooks for followers and often contain restaurant critiques, product reviews, and step-by-step photography for recipes. Fashion blogs have also become large sub-communities following the growth of the blogosphere. Blogs like Racked, The Cut, and Fashionista give readers an eye into the fashion industry. Besides fashion news blogs, street style blogs have also become popular. Such bloggers include Scott Schuman (The Sartorialist), Tommy Ton (Jak and Jil), Jane Aldridge (Sea of Shoes), Bryan Grey-Yambao (Bryanboy), and Tavi Gevinson (Style Rookie).
They are able to earn considerable livings through advertising, selling their photos and even providing their services as photographers, stylists, and guest designers. Health blogs cover health topics, events and/or related content of the health industry and the general community. A health blog can cover diverse health-related concerns such as nutrition and diet, fitness, weight control, diseases, disease management, societal trends affecting health, analysis about health, business of health and health research. Scientific blogs cover different scientific and mathematical topics. Some of these are written by leading researchers, others by interested laymen. These are often free to access and thus provide an alternative to paywalled scientific literature. Genealogy blogs cover a variety of topics related to genealogy and family history, including the genealogy industry, genealogy software and technology, as well as educational "how to" posts related to specific research areas. Philosophy blogs, both in analytic philosophy and Continental philosophy, are a significant part of the blogosphere, often covering metaphysics, ethics and philosophy of language. See also Bloggernacle Customer engagement Global Voices Online Group blogging J-blogosphere References External links Technorati's State of the Blogosphere Internet terminology Blogs
Blogosphere
[ "Technology" ]
1,669
[ "Computing terminology", "Internet terminology" ]
346,222
https://en.wikipedia.org/wiki/North%20America%20Nebula
The North America Nebula (NGC 7000 or Caldwell 20) is an emission nebula in the constellation Cygnus, close to Deneb (the tail of the swan and its brightest star). It is so named because its shape resembles North America. History On October 24, 1786, William Herschel, observing from Slough, England, noted a "faint milky nebulosity scattered over this space, in some places pretty bright." The most prominent region was catalogued by his son John Herschel on August 21, 1829. It was listed in the New General Catalogue as NGC 7000, where it is described as a "faint, most extremely large, diffuse nebulosity." In 1890, the pioneering German astrophotographer Max Wolf noticed this nebula's characteristic shape on a long-exposure photograph, and dubbed it the North America Nebula. In his study of nebulae on the Palomar Sky Survey plates in 1959, American astronomer Stewart Sharpless realised that the North America Nebula is part of the same interstellar cloud of ionized hydrogen (H II region) as the Pelican Nebula, separated by a dark band of dust, and listed the two nebulae together in his second list of 313 bright nebulae as Sh2-117. American astronomer Beverly T. Lynds catalogued the obscuring dust cloud as L935 in her 1962 compilation of dark nebulae. Dutch radio astronomer Gart Westerhout detected the H II region Sh2-117 as a strong radio emitter, 3° across, and it appears as W80 in his 1958 catalogue of radio sources in the band of the Milky Way. General information The North America Nebula covers a region more than ten times the area of the full moon, but its surface brightness is low, so normally it cannot be seen with the unaided eye. Binoculars and telescopes with large fields of view (approximately 3°) will show it as a foggy patch of light under sufficiently dark skies. However, using a UHC filter, which filters out some unwanted wavelengths of light, it can be seen without magnification under dark skies. Its shape and reddish color (from the hydrogen Hα emission line) show up only in photographs of the area. The portion of the nebula resembling Mexico and Central America is known as the Cygnus Wall. This region exhibits the most concentrated star formation. At optical wavelengths, the North America Nebula and the Pelican Nebula (IC 5070) appear distinct, as they are separated by the silhouette of the dark band of interstellar dust L935. The dark cloud is, however, transparent to radio waves and infrared radiation, and these wavelengths reveal the central regions of Sh2-117 that are not visible to an ordinary telescope, including many highly luminous stars. Distance and size The distances to the North America and Pelican nebulae were long controversial, because there are few precise methods for determining how far away an H II region lies. Until 2020, most astronomers accepted a value of 2,000 light years, though estimates ranged from 1,500 to 3,000 light years. But in 2020, the Gaia astrometry spacecraft measured the distances to 395 stars lying within the H II region, giving the North America and Pelican nebulae a distance of 2,590 light years (795±25 parsecs). The entire H II region Sh2-117 is estimated to be 140 light years across, and the North America Nebula stretches 90 light years north to south. Ionising star H II regions shine because their hydrogen gas is ionised by the ultraviolet radiation from a hot star.
In 1922, Edwin Hubble proposed that Deneb might be responsible for lighting up the North America Nebula, but it soon became apparent that it is not hot enough: Deneb has a surface temperature of 8,500 K, while the nebula's spectrum shows it is being heated by a star hotter than 30,000 K. In addition, Deneb is well away from the middle of the complete North America/Pelican Nebula complex (Sh2-117), and by 1958 George Herbig realised that the ionizing star had to lie behind the central dark cloud L935. In 2004, European astronomers Fernando Comerón and Anna Pasquali searched for the ionizing star behind L935 at infrared wavelengths, using data from the 2MASS survey, and then made detailed observations of likely suspects with the 2.2 m telescope at the Calar Alto Observatory in Spain. One star, catalogued as J205551.3+435225, fulfilled all the criteria. Lying right in the centre of Sh2-117, with a temperature of over 40,000 K, it is almost certainly the ionising star for the North America and Pelican nebulae. Later observations have revealed J205551.3+435225 to be a spectral type O3.5 star, with another hot star (type O8) in orbit. J205551.3+435225 lies just off the "Florida coast" of the North America Nebula, so it has been more conveniently nicknamed the Bajamar Star ("Islas de Bajamar," meaning "low-tide islands" in Spanish, was the original name of the Bahamas because many of them are only easily seen from a ship during low tide). Although the light from the Bajamar Star is dimmed by 9.6 magnitudes (a factor of roughly 7,000) by the dark cloud L935, it is faintly visible at optical wavelengths, at magnitude 13.2. If we saw this star undimmed, it would shine at magnitude 3.6, almost as bright as Albireo, the star marking the swan's head. See also Pelican Nebula References External links The North America Nebula (NGC 7000) at the astro-photography site of Mr. T. Yoshida. NASA APOD: The North America and Pelican Nebulae (June 30, 2009) NASA APOD: The North America Nebula (May 1, 2000) starpointing.com – Central part of the North America Nebula: The Great Wall Creative Commons North America Nebula Data North America Nebula – Creative Commons data Download & editing guide Cygnus (constellation) H II regions NGC objects Sharpless objects Articles containing video clips Star-forming regions
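The magnitude arithmetic in the ionising-star section above can be checked with the standard magnitude–flux relation; a short worked form (an editorial check, not part of the original article):

```latex
% Extinction of \Delta m = 9.6 mag corresponds to a dimming factor of
\frac{F_{\text{undimmed}}}{F_{\text{observed}}}
  = 10^{0.4\,\Delta m}
  = 10^{0.4 \times 9.6}
  = 10^{3.84} \approx 6.9 \times 10^{3},
% and the undimmed apparent magnitude follows directly:
\qquad m_{\text{undimmed}} = 13.2 - 9.6 = 3.6 .
```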
North America Nebula
[ "Astronomy" ]
1,293
[ "Cygnus (constellation)", "Constellations" ]
346,262
https://en.wikipedia.org/wiki/Betti%20number
In algebraic topology, the Betti numbers are used to distinguish topological spaces based on the connectivity of n-dimensional simplicial complexes. For the most reasonable finite-dimensional spaces (such as compact manifolds, finite simplicial complexes or CW complexes), the sequence of Betti numbers is 0 from some point onward (Betti numbers vanish above the dimension of a space), and they are all finite. The nth Betti number represents the rank of the nth homology group, denoted Hn, which tells us the maximum number of cuts that can be made before separating a surface into two pieces or 0-cycles, 1-cycles, etc. For example, if $H_n(X) \cong 0$ then $b_n(X) = 0$; if $H_n(X) \cong \mathbb{Z}$ then $b_n(X) = 1$; if $H_n(X) \cong \mathbb{Z} \oplus \mathbb{Z}$ then $b_n(X) = 2$; if $H_n(X) \cong \mathbb{Z} \oplus \mathbb{Z} \oplus \mathbb{Z}$ then $b_n(X) = 3$; etc. Note that only the ranks of infinite groups are considered, so for example if $H_n(X) \cong \mathbb{Z}^k \oplus \mathbb{Z}/(2)$, where $\mathbb{Z}/(2)$ is the finite cyclic group of order 2, then $b_n(X) = k$. These finite components of the homology groups are their torsion subgroups, and they are described by torsion coefficients. The term "Betti number" was coined by Henri Poincaré after Enrico Betti. The modern formulation is due to Emmy Noether. Betti numbers are used today in fields such as simplicial homology, computer science and digital images. Geometric interpretation Informally, the kth Betti number refers to the number of k-dimensional holes on a topological surface. A "k-dimensional hole" is a k-dimensional cycle that is not a boundary of a (k+1)-dimensional object. The first few Betti numbers have the following definitions for 0-dimensional, 1-dimensional, and 2-dimensional simplicial complexes: b0 is the number of connected components; b1 is the number of one-dimensional or "circular" holes; b2 is the number of two-dimensional "voids" or "cavities". Thus, for example, a torus has one connected surface component so b0 = 1, two "circular" holes (one equatorial and one meridional) so b1 = 2, and a single cavity enclosed within the surface so b2 = 1. Another interpretation of bk is the maximum number of k-dimensional curves that can be removed while the object remains connected. For example, the torus remains connected after removing two 1-dimensional curves (equatorial and meridional) so b1 = 2. The two-dimensional Betti numbers are easier to understand because we can see the world in 0, 1, 2, and 3 dimensions. Formal definition For a non-negative integer k, the kth Betti number bk(X) of the space X is defined as the rank (number of linearly independent generators) of the abelian group Hk(X), the kth homology group of X. The kth homology group is $H_k = \ker \partial_k / \operatorname{im} \partial_{k+1}$, where the $\partial_k$ are the boundary maps of the simplicial complex, and the rank of Hk is the kth Betti number. Equivalently, one can define it as the vector space dimension of Hk(X; Q) since the homology group in this case is a vector space over Q. The universal coefficient theorem, in a very simple torsion-free case, shows that these definitions are the same. More generally, given a field F one can define bk(X, F), the kth Betti number with coefficients in F, as the vector space dimension of Hk(X, F). Poincaré polynomial The Poincaré polynomial of a surface is defined to be the generating function of its Betti numbers. For example, the Betti numbers of the torus are 1, 2, and 1; thus its Poincaré polynomial is $1 + 2x + x^2$. The same definition applies to any topological space which has a finitely generated homology. Given a topological space which has a finitely generated homology, the Poincaré polynomial is defined as the generating function of its Betti numbers, via the polynomial $P(x) = \sum_k b_k x^k$, where the coefficient of $x^k$ is $b_k$.
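To make the rank definition above concrete, here is a minimal sketch (assuming only that numpy is available) that computes Betti numbers over Q from the boundary matrices of a finite simplicial complex, using the rank–nullity identity b_k = dim C_k − rank ∂_k − rank ∂_{k+1}; torsion is ignored, consistent with the rank definition. The hollow triangle used as input is a circle, so it should return b0 = b1 = 1.

```python
import numpy as np

def betti_numbers(boundary_maps, dims):
    """Betti numbers over Q via b_k = dim C_k - rank(d_k) - rank(d_{k+1}).

    boundary_maps[k] is the matrix of the boundary map d_k : C_k -> C_{k-1}
    (boundary_maps[0] is unused, since d_0 is the zero map); dims[k] is dim C_k.
    """
    ranks = [0]  # rank of d_0, the zero map
    ranks += [np.linalg.matrix_rank(d) for d in boundary_maps[1:]]
    ranks.append(0)  # d_{k+1} = 0 above the top dimension
    return [dims[k] - ranks[k] - ranks[k + 1] for k in range(len(dims))]

# Hollow triangle (a circle): vertices v0, v1, v2; edges (v0,v1), (v1,v2), (v0,v2).
# Each column of d1 is the signed vertex boundary of one edge.
d1 = np.array([[-1,  0, -1],
               [ 1, -1,  0],
               [ 0,  1,  1]])
print(betti_numbers([None, d1], [3, 3]))  # -> [1, 1]: one component, one loop
```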
Examples Betti numbers of a graph Consider a topological graph G in which the set of vertices is V, the set of edges is E, and the set of connected components is C. As explained in the page on graph homology, its homology groups are given by: $H_0(G) = \mathbb{Z}^{|C|}$ and $H_1(G) = \mathbb{Z}^{|E| + |C| - |V|}$. This may be proved straightforwardly by mathematical induction on the number of edges. A new edge either increments the number of 1-cycles or decrements the number of connected components. Therefore, the "zero-th" Betti number b0(G) equals |C|, which is simply the number of connected components. The first Betti number b1(G) equals |E| + |C| - |V|. It is also called the cyclomatic number—a term introduced by Gustav Kirchhoff before Betti's paper. See cyclomatic complexity for an application to software engineering. All other Betti numbers are 0. Betti numbers of a simplicial complex Consider a simplicial complex with 0-simplices: a, b, c, and d, 1-simplices: E, F, G, H and I, and the only 2-simplex is J, which is the shaded region in the figure. There is one connected component in this figure (b0); one hole, which is the unshaded region (b1); and no "voids" or "cavities" (b2). This means that the rank of $H_0$ is 1, the rank of $H_1$ is 1 and the rank of $H_2$ is 0. The Betti number sequence for this figure is 1, 1, 0, 0, ...; the Poincaré polynomial is $1 + x$. Betti numbers of the projective plane The homology groups of the projective plane P are: $H_0(P) = \mathbb{Z}$, $H_1(P) = \mathbb{Z}_2$, and $H_k(P) = 0$ for $k \geq 2$. Here, Z2 is the cyclic group of order 2. The 0-th Betti number is again 1. However, the 1-st Betti number is 0. This is because H1(P) is a finite group: it does not have any infinite component. The finite component of the group is called the torsion coefficient of P. The (rational) Betti numbers bk(X) do not take into account any torsion in the homology groups, but they are very useful basic topological invariants. In the most intuitive terms, they allow one to count the number of holes of different dimensions. Properties Euler characteristic For a finite CW-complex K we have $\chi(K) = \sum_{i=0}^{\infty} (-1)^i b_i(K, F)$, where $\chi(K)$ denotes the Euler characteristic of K and F is any field. Cartesian product For any two spaces X and Y we have $P_{X \times Y} = P_X \cdot P_Y$, where $P_X$ denotes the Poincaré polynomial of X (more generally, the Hilbert–Poincaré series, for infinite-dimensional spaces), i.e., the generating function of the Betti numbers of X: see Künneth theorem. Symmetry If X is an n-dimensional manifold, there is a symmetry interchanging $k$ and $n - k$, for any $k$: $b_k(X) = b_{n-k}(X)$, under conditions (a closed and oriented manifold); see Poincaré duality. Different coefficients The dependence on the field F is only through its characteristic. If the homology groups are torsion-free, the Betti numbers are independent of F. The connection of p-torsion and the Betti number for characteristic p, for p a prime number, is given in detail by the universal coefficient theorem (based on Tor functors, but in a simple case). More examples The Betti number sequence for a circle is 1, 1, 0, 0, 0, ...; the Poincaré polynomial is $1 + x$. The Betti number sequence for a three-torus is 1, 3, 3, 1, 0, 0, 0, ...; the Poincaré polynomial is $(1 + x)^3 = 1 + 3x + 3x^2 + x^3$. Similarly, for an n-torus, the Poincaré polynomial is $(1 + x)^n$ (by the Künneth theorem), so the Betti numbers are the binomial coefficients. It is possible for spaces that are infinite-dimensional in an essential way to have an infinite sequence of non-zero Betti numbers. An example is the infinite-dimensional complex projective space, with sequence 1, 0, 1, 0, 1, ... that is periodic, with period length 2.
In this case the Poincaré function is not a polynomial but rather an infinite series $1 + x^2 + x^4 + \cdots$, which, being a geometric series, can be expressed as the rational function $\frac{1}{1 - x^2}$. More generally, any sequence that is periodic can be expressed as a sum of geometric series, generalizing the above. For example $a, b, a, b, \ldots$ has the generating function $\frac{a + bx}{1 - x^2}$, and more generally linear recursive sequences are exactly the sequences generated by rational functions; thus the Poincaré series is expressible as a rational function if and only if the sequence of Betti numbers is a linear recursive sequence. The Poincaré polynomials of the compact simple Lie groups are:
$P_{SU(n+1)}(x) = (1 + x^3)(1 + x^5) \cdots (1 + x^{2n+1})$
$P_{SO(2n+1)}(x) = (1 + x^3)(1 + x^7) \cdots (1 + x^{4n-1})$
$P_{Sp(n)}(x) = (1 + x^3)(1 + x^7) \cdots (1 + x^{4n-1})$
$P_{SO(2n)}(x) = (1 + x^{2n-1})(1 + x^3)(1 + x^7) \cdots (1 + x^{4n-5})$
$P_{G_2}(x) = (1 + x^3)(1 + x^{11})$
$P_{F_4}(x) = (1 + x^3)(1 + x^{11})(1 + x^{15})(1 + x^{23})$
$P_{E_6}(x) = (1 + x^3)(1 + x^9)(1 + x^{11})(1 + x^{15})(1 + x^{17})(1 + x^{23})$
$P_{E_7}(x) = (1 + x^3)(1 + x^{11})(1 + x^{15})(1 + x^{19})(1 + x^{23})(1 + x^{27})(1 + x^{35})$
$P_{E_8}(x) = (1 + x^3)(1 + x^{15})(1 + x^{23})(1 + x^{27})(1 + x^{35})(1 + x^{39})(1 + x^{47})(1 + x^{59})$
Relationship with dimensions of spaces of differential forms In geometric situations when $X$ is a closed manifold, the importance of the Betti numbers may arise from a different direction, namely that they predict the dimensions of vector spaces of closed differential forms modulo exact differential forms. The connection with the definition given above is via three basic results, de Rham's theorem and Poincaré duality (when those apply), and the universal coefficient theorem of homology theory. There is an alternate reading, namely that the Betti numbers give the dimensions of spaces of harmonic forms. This requires the use of some of the results of Hodge theory on the Hodge Laplacian. In this setting, Morse theory gives a set of inequalities for alternating sums of Betti numbers in terms of a corresponding alternating sum of the number of critical points $N_i$ of a Morse function of a given index: $b_i(X) - b_{i-1}(X) + \cdots \pm b_0(X) \leq N_i - N_{i-1} + \cdots \pm N_0$. Edward Witten gave an explanation of these inequalities by using the Morse function to modify the exterior derivative in the de Rham complex. See also Topological data analysis Torsion coefficient Euler characteristic References Algebraic topology Graph invariants Topological graph theory Generating functions
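As a quick check of two of the generating-function claims in the article above, the following sketch (assuming sympy is available) expands the n-torus Poincaré polynomial for n = 3 and the series of the rational function for infinite-dimensional complex projective space:

```python
import sympy as sp

x = sp.symbols('x')

# n-torus, n = 3: the Kunneth theorem gives Poincare polynomial (1 + x)^3,
# whose coefficients 1, 3, 3, 1 are the Betti numbers quoted above.
print(sp.expand((1 + x)**3))  # -> x**3 + 3*x**2 + 3*x + 1

# Infinite-dimensional complex projective space: Betti numbers 1, 0, 1, 0, ...
# The rational function 1/(1 - x^2) reproduces the series 1 + x^2 + x^4 + ...
print(sp.series(1 / (1 - x**2), x, 0, 8))  # -> 1 + x**2 + x**4 + x**6 + O(x**8)
```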
Betti number
[ "Mathematics" ]
2,028
[ "Sequences and series", "Mathematical structures", "Graph theory", "Algebraic topology", "Fields of abstract algebra", "Topology", "Mathematical relations", "Graph invariants", "Generating functions", "Topological graph theory" ]
346,287
https://en.wikipedia.org/wiki/Mode%20X
Mode X is a 256-color graphics display mode of the VGA graphics hardware for IBM PC compatibles. It was first publicized by Michael Abrash in his July 1991 column in Dr. Dobb's Journal and then in chapters 47-49 of Abrash's Graphics Programming Black Book. The term "Mode X" was coined by Abrash. Mode X is a variant of Mode 13h with the resolution increased to 320×240, giving square pixels instead of the slightly elongated pixels of Mode 13h. It is enabled by entering Mode 13h via a BIOS system call, then changing the values of several VGA registers. Additionally, Abrash enabled the VGA's planar memory mode (also called "unchained mode"). Even though planar memory mode is a documented part of the VGA standard and was used in earlier commercial games, it was first widely publicized in the Mode X articles, leading many programmers to consider Mode X and planar memory synonymous. It is possible to enable planar memory in the standard 320×200 mode, which became known as Mode Y in the Usenet rec.games.programmer group. Planar memory arrangement splits the pixels horizontally into groups of four. For any given byte in video memory, four pixels on screen can be accessed depending on which plane(s) are enabled. This is more complicated for the programmer, but the advantages gained by this arrangement—primarily the ability to use all 256 KB of VGA memory for one or more display buffers, instead of only one quarter of that (64 KB)—were considered worthwhile by many. Variants In addition to unchained being called Mode Y, Mode Q (short for "cube") is sometimes used to refer to a 256×256 256-color mode. The Y coordinate can simply be put in the high byte of the address, and the X coordinate in the low byte, forming the address of the pixel without a multiply. References External links Graphics Programming Black Book by Michael Abrash, chapters 47, 48, 49. Mode X tutorial at GameDev.net (archived copy) Tweaked VGA Modes by Robert C. Pendleton (archived copy) Introduction to Mode X by Robert Jambor (archived copy) Computer display standards
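The planar arrangement described in the article above can be sketched as addressing arithmetic. This is a minimal illustration, not runnable against real VGA hardware from Python, so the port writes appear only as comments; the 80-bytes-per-scanline figure follows from 320 pixels divided across 4 planes, and the register shown is the standard VGA Sequencer Map Mask (index 02h at ports 3C4h/3C5h):

```python
def mode_x_address(x, y):
    """Map a 320x240 Mode X pixel to (plane, byte offset within the plane)."""
    plane = x & 3               # pixels are split horizontally into groups of four
    offset = y * 80 + (x >> 2)  # 320 / 4 = 80 bytes per scanline in each plane
    return plane, offset

# On real hardware, plotting a pixel would then look roughly like:
#   out 0x3C4, 0x02        ; select the Sequencer's Map Mask register
#   out 0x3C5, 1 << plane  ; enable writes only to the plane holding the pixel
#   write the color index to VGA memory at segment 0xA000, offset `offset`
print(mode_x_address(17, 5))  # -> (1, 404): plane 1, byte 404
```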
Mode X
[ "Technology" ]
459
[ "Computing stubs", "Computer hardware stubs" ]
346,360
https://en.wikipedia.org/wiki/Cuckoo%20clock
A cuckoo clock is a type of clock, typically pendulum driven, that strikes the hours with a sound like a common cuckoo call and has an automated cuckoo bird that moves with each note. Some move their wings and open and close their beaks while leaning forwards, whereas others have only the bird's body leaning forward. The mechanism to produce the cuckoo call has been in use since the middle of the 18th century and has remained almost without variation. It is unknown who invented the cuckoo clock and where the first one was made. It is thought that much of its development and evolution was made in the Black Forest area in southwestern Germany (in the modern state of Baden-Württemberg), the region where the cuckoo clock was popularized and from where it was exported to the rest of the world, becoming world-famous from the mid-1850s on. Today, the cuckoo clock is one of the favourite souvenirs of travellers in Germany, Switzerland, Austria and Eastern France. It has become a cultural icon of Germany. Characteristics The design of a cuckoo clock is now conventional. Many are made in the "traditional style", which are made to hang on a wall. The classical or traditional type includes two subgroups: the carved ones, whose wooden cases are decorated with leaves, animals, etc., and a second one with cases in the shape of a chalet. They have an automaton of a bird that appears through a small trap door when the clock strikes. The cuckoo bird is activated by the clock movement as the clock strikes, by means of an arm that is triggered on the hour and half hour. There are two kinds of movements: one-day (30-hour) and eight-day clockworks. Some have musical devices, and play a tune on a Swiss music box after striking the hours and half-hours. Usually the melody sounds only at full hours in eight-day clocks and both at full and half hours in the one-day timepieces. Musical cuckoo clocks frequently have other automata which move when the music box plays. Today's cuckoo clocks are almost always weight driven. The weights are made of cast iron, usually in a pine cone shape, and the "cuckoo" sound is created by two tiny gedackt pipes in the clock, with bellows attached to their tops. The clock's movement activates the bellows to send a puff of air into each pipe alternately when the timekeeper strikes. Since the 1970s, quartz battery-powered cuckoo clocks have become available. As with their mechanical counterparts, the cuckoo bird emerges from its enclosure and moves up and down, but often on the quartz timepieces it also flaps its wings and opens its beak while it sings. Just before the call, if the clock has a door, the single or double door opens and the bird emerges as usual, but only on the full hour; quartz clocks do not have a gong wire chime. The movement of the cuckoo in such clocks is regulated by an electromagnet that pulses on and off, attracting a weight, which acts as a fulcrum, connected to the tail of the plastic cuckoo, thus moving the bird up and down in its enclosure. In quartz cuckoos, different systems have been used to produce the bird's call: the usual bellows, a digital recording of a real cuckoo in the wild (with a corresponding echo accompanied by the sound of a waterfall and other birds in the background) or a recording of the bird's call only. In musical versions, the hourly chime is followed by the replay of one of twelve popular melodies (one for each hour).
Some musical quartz clocks in the chalet style also reproduce many of the popular automata found on mechanical musical clocks, such as beer drinkers, wood-choppers, and jumping deer. Uniquely, quartz cuckoo clocks often include a light sensor, so that when the lights are turned off at night they automatically silence the hourly chime. Others are pre-programmed not to strike between a set of pre-determined hours. Whether this is controlled by a light sensor or pre-programmed, the function is referred to as a "night silence" feature. On quartz wall clocks in the traditional style, the weights are conventionally cast in the shape of pine cones made of plastic rather than iron. The pendulum bob is often another carved leaf. Here, the weights and pendulum are purely ornamental, as the clock is driven by battery power. History First modern cuckoo clocks In 1629, many decades before clockmaking was established in the Black Forest, an Augsburg merchant by the name of Philipp Hainhofer (1578–1647) penned one of the first known descriptions of a modern cuckoo clock. In Dresden, he visited the Kunstkammer (cabinet of curiosities) of Prince Elector August von Sachsen. One of the rooms contained a chiming clock with a moving bird, a cuckoo announcing every quarter of an hour, which he briefly described as: "A beautiful chiming clock, inside a cuckoo, indicating the quarter hours with its beak and call, the hours with its flapping wings and pour sugar from its tail" (translated from the German). Hainhofer does not describe what this clock may have looked like and who built it. This piece is no longer part of the Dresden Green Vault collection, but appears in a 1619 inventory book as: "In addition, there is also a new entry. 1 Clock with a cuckoo that yells. It stands on a black pedestal made of ebony on the barber's chest" (translated from the German). The Dresden timepiece was probably not unique, because the mechanical cuckoo was considered part of the known mechanical arts in the 17th century. In a widely known handbook on music, Musurgia Universalis (1650), the scholar Athanasius Kircher describes a mechanical organ with several automated figures, including a mechanical cuckoo. This book contains the first documented description—in words and pictures—of how a mechanical cuckoo works. Kircher did not invent the cuckoo mechanism, because this book, like his other works, is a compilation of known facts into a handbook for reference purposes. The engraving clearly shows all the elements of a mechanical cuckoo. The bird automatically opens its beak and moves both its wings and tail. Simultaneously, the whistle call of the cuckoo is heard, created by two organ pipes, tuned to a minor or major third. There is only one fundamental difference from the Black Forest-type cuckoo mechanism: the functions of Kircher's bird are not governed by a count wheel in a strike train, but a pinned program barrel synchronizes the movements and sounds of the bird. On the other hand, in 1669 Domenico Martinelli, in his handbook on elementary clocks Horologi Elementari, suggested using the call of the cuckoo to indicate the hours. From that time on, the mechanism of the cuckoo clock was known. Any mechanic or clockmaker who could read Latin or Italian knew after reading the books that it was feasible to have the cuckoo announce the hours. Subsequently, cuckoo clocks appeared in regions that had not been known for their clockmaking.
For instance, the Historische Nachrichten (1713), an anonymous publication generally attributed to Court Preacher Bartholomäus Holzfuss, mentions a musical clock in the Oranienburg palace in Berlin. This clock, originating in West Prussia, played eight church hymns and had a cuckoo that announced the quarter hours. Unfortunately this clock, like the one mentioned by Hainhofer in 1629, can no longer be traced today. In the 18th century, people in the Black Forest started to build cuckoo clocks. First cuckoo clocks made in the Black Forest It is not clear who built the first cuckoo clocks in the Black Forest, but there is unanimity that the unusual clock with the bird call very quickly conquered the region. By the middle of the 18th century, several small clockmaking shops between Neustadt and Sankt Georgen were making cuckoo clocks out of wood, with shields decorated with paper. After a journey through south-west Germany in 1762, Count Giuseppe Garampi, Prefect of the Vatican Archives, remarked: "In this region large quantities of wooden movement clocks are made, and even if they were not completely unknown earlier, they have now been perfected, and one has started to equip them with the cuckoo's call." It is hard to judge how large the proportion of cuckoo clocks was among the total production of early Black Forest clocks. Based on the proportion of pieces surviving to the present, it must have been a small fraction of the total production. This is especially true of 18th-century cuckoo clocks, in which all the parts of the movement, including the gears, were made of wood. They are extremely rare; Wilhelm Schneider was only able to list a dozen pieces with wooden movements in his book Frühe Kuckucksuhren (Early Cuckoo Clocks) (2008). The cuckoo clock remained a niche product until the middle of the 19th century, made by a few specialized workshops. Regarding its murky origins, there are two main fables from the first two chroniclers of Black Forest horology, which tell contradicting stories about it: The first is from Father Franz Steyrer, written in his Geschichte der Schwarzwälder Uhrmacherkunst (History of the Art of Clockmaking in the Black Forest) in 1796. He describes a meeting, which happened around 1742, between two clock peddlers (Uhrenträger, literally "clock carriers", who carried the dials and movements on their backs, displayed on huge backpacks), Joseph Ganther from Neukirch (Furtwangen) and Joseph Kammerer from Furtwangen, who met a travelling Bohemian merchant who sold wooden cuckoo clocks. When they returned home, they brought with them this novelty, since it had caught their eye, and showed it to Michael Dilger from Neukirch and Matthäus Hummel from Glashütte, who were very pleased with it and began to copy it. Its popularity grew in the region and more and more clockmakers started making them. With regard to this chronicle, the historian Adolf Kistner claimed in his book Die Schwarzwälder Uhr (The Black Forest Clock), published in 1927, that there is not any Bohemian cuckoo clock in existence to verify the thesis that such a clock was used as a sample to copy and produce Black Forest cuckoo clocks. Bohemia had no fundamental clockmaking industry during that period.
The second story is related by another priest, Markus Fidelis Jäck, in a passage extracted from his report Darstellungen aus der Industrie und des Verkehrs aus dem Schwarzwald (Descriptions of the Industry and Transport of the Black Forest) (1810), which reads as follows: "The cuckoo clock was invented (in the early 1730s) by a clock-master [Franz Anton Ketterer] from Schönwald. This craftsman adorned a clock with a moving bird that announced the hour with the cuckoo-call. The clock-master got the idea of how to make the cuckoo-call from the bellows of a church organ". Unfortunately, neither Steyrer nor Jäck quotes any sources for their claims, making them unverifiable. As time went on, the second version became the more popular, and is the one generally related today, though evidence suggests its inaccuracy. This type of clock is much older than clockmaking in the Black Forest. As early as 1650, the mechanical cuckoo was part of the reference book knowledge recorded in handbooks. It took nearly a century for the cuckoo clock to find its way to the Black Forest, where for many decades it remained a tiny niche product. In addition, R. Dorer pointed out in 1948 that Franz Anton Ketterer (1734–1806) could not have been the inventor of the cuckoo clock in 1730, because he had not yet been born. This statement was corroborated by Gerd Bender in the most recent edition of the first volume of his work Die Uhrenmacher des hohen Schwarzwaldes und ihre Werke (The Clockmakers of the High Black Forest and their Works) (1998), in which he wrote that the cuckoo clock was not native to the Black Forest and also stated: "There are no traces of the first production line of cuckoo clocks made by Ketterer". Schaaf, in Schwarzwalduhren (Black Forest Clocks) (1995), provides his own research, which traces the earliest cuckoos to the Franconia and Lower Bavaria area in the southeast of Germany (nowadays forming the northern two-thirds of the Free State of Bavaria), in the direction of Bohemia (nowadays the main region of the Czech Republic); this, he notes, lends credence to the Steyrer version. Although the idea of placing an automaton cuckoo bird in a clock to announce the passing of time did not originate in the Black Forest, the cuckoo clock as it is known today (in its traditional form decorated with wood carvings) comes from this region located in southwest Germany. The Black Forest people who created the cuckoo clock industry developed it, and still come up with new designs and technical improvements. Even though the functionality of the cuckoo mechanism has remained basically unchanged, the appearance has changed as case designs and clock movements evolved in the region. Around 1800, the first lacquered shield clocks appeared, the so-called Lackschilduhr ("lacquered shield clock"), characterized by having a painted flat square wooden face behind which all the clockwork was attached. On top of the square was usually a semicircle of highly decorated painted wood which contained the door for the cuckoo. These usually depicted floral motifs, like roses, and often had a painted column on either side of the chapter ring; others were decorated with fruits as well. Some pieces also bore the names of the bride and bridegroom on the dial, which were normally painted by women. There was no cabinet surrounding the clockwork in this model. This design was the most prevalent during the first half of the 19th century.
By the middle of the 19th century, Black Foresters began to experiment with a variety of forms. In the 1840s, the Beha company had already been selling Biedermeier style table cuckoo clocks. Until then, clocks had mainly been manufactured with a large shield hiding the movement behind, without a case surrounding it. Now, for the first time, timepieces with a real case were produced in large numbers. These clocks with their simple geometric shapes, some with small columns on both sides of the dial for decoration, are reminiscent of the art of the Biedermeier period. Such pieces were built between 1840 and the 1890s, and sometimes a cuckoo was included in these simple "Biedermeier clocks". Some models also had a painting of a person or animal with moving eyes. From the middle of the 19th century until the 1880s, picture frame cuckoo clocks also became available. As the name suggests, these wall timepieces consisted of a picture frame, usually with a typical Black Forest scene painted on a wooden background or on sheet metal; lithography and screen-printing were other techniques used. Other common themes depicted were: hunting, love, family, death, birth, mythology, military and Christian religious scenes. Works by painters such as Johann Baptist Laule (1817–1895) and Carl Heine (1842–1882) were used to decorate the fronts of this and other types of clocks. The painting was almost always protected by a glass, and some models displayed a person or an animal with blinking or flirty eyes as well, operated by a simple mechanism worked by means of the pendulum swinging. The cuckoo normally took part in the scene painted, and would pop out in 3D, as usual, to announce the hour. Another type of picture frame clock (Rahmenuhr) produced in the region from the middle of the 19th century was based on a Viennese model from around 1830. The front of these timepieces was decorated with a serially stamped brass plate. The brass was given a gold-coloured surface by polishing it or treating it with nitric acid. Some of these pieces, which were produced in large numbers up until the 1880s, were also available with a cuckoo mechanism. As for house-shaped cases, in the 1870s the Beha company marketed table and wall models of considerable size, the so-called Herrenhäusle (a small manor house or mansion), whose detailed wooden cases replicated attic windows from which the cuckoo popped out, a shingle roof with chimney, rain gutters and downpipes, etc. On the other hand, from the 1860s until the early 20th century, cases were manufactured in a wide variety of styles such as Neoclassical or Georgian (certain pieces also displayed a painting), neo-Gothic, neo-Renaissance, neo-Baroque and Art Nouveau, the cuckoo clock becoming a suitable decorative object for the bourgeois home. These timepieces are less common than the popular ones looking like gatekeepers' houses (Bahnhäusle style clocks) and they could be mantel, wall or bracket clocks. However, the popular house-shaped Bahnhäusleuhr ("railway-house clock") virtually forced the discontinuation of other styles within a few decades. Bahnhäusle style, a successful design from Furtwangen In September 1850, the first director of the Grand Duchy of Baden Clockmakers School in Furtwangen, Robert Gerwig, launched a public competition to submit designs for modern clockcases, which would allow homemade products to attain a professional appearance.
Friedrich Eisenlohr (1805–1854), who as an architect had been responsible for creating the buildings along the then new and first Badenese Rhine valley railway, submitted the most far-reaching design. Eisenlohr enhanced the facade of a standard railroad-guard's residence, as he had built many of them, with a clock dial. His "Wallclock with shield decorated by ivy vines" (in reality the ornaments were grapevines, not ivy), as it is referred to in a surviving handwritten report from the Clockmakers School from 1851 or 1852, became the prototype of today's popular souvenir cuckoo clocks. Eisenlohr was also up-to-date stylistically. He was inspired by local images; rather than copying them slavishly, he modified them. Contrary to most present-day cuckoo clocks, his case featured light, unstained wood and was decorated with symmetrical, flat fretwork ornaments. His idea became an instant hit, because the modern design of the Bahnhäusle clock appealed to the decorating tastes of the growing bourgeoisie and thereby tapped into new and growing markets. While the Clockmakers School was satisfied to have Eisenlohr's clock case sketches, they were not fully realized in their original form. Eisenlohr had proposed a wooden facade; Gerwig preferred a painted metal front combined with an enamel dial. But despite intensive campaigns by the Clockmakers School, sheet metal fronts decorated with oil paintings (or coloured lithographs) never became a major market segment because of the high cost and labour-intensive process; hence they were only produced from the 1850s until around 1880, whether as wall or mantel versions. Characteristically, the makers of the first Bahnhäusle clocks deviated from Eisenlohr's sketch in only one way: they left out the cuckoo mechanism. Unlike today, the design with the little house was not synonymous with a cuckoo clock in the first years after 1850. This is another indication that at that time cuckoo clocks could not have been an important market segment. Only in December 1854 did Johann Baptist Beha, the best known maker of cuckoo clocks of his time, sell two of them, with oil paintings on their fronts, to the Furtwangen clock dealer Gordian Hettich; these were described as Bahnhöfle Uhren ("railway station clocks"). More than a year later, on 20 January 1856, another respected Furtwangen-based cuckoo clockmaker, Theodor Ketterer, sold one to Joseph Ruff in Glasgow. Concurrently with Beha and Ketterer, other Black Forest clockmakers must have started to equip Bahnhäusle clocks with cuckoo mechanisms to satisfy the rapidly growing demand for this type of clock. Starting in the mid-1850s there was a real boom in this market. For example, numerous exhibitors at the trade exhibition in Villingen in 1858 offered cuckoo clocks in the Bahnhäuschenkasten or Bahnwartshaus style. And in the annual report of the Furtwangen Clockmakers School of 1857/58 it is stated: "The cuckoo clock therefore found a very special market again as soon as the Bahnhäuschen, which was so very suitable for it, was used as a clock case." By 1862, Johann Baptist Beha started to enhance his richly decorated Bahnhäusle clocks with hands carved from bone and weights cast in the shape of fir cones. Even today this combination of elements is characteristic of cuckoo clocks, although the hands are usually made of wood or plastic; white celluloid was employed in the past too.
As for the weights, there were during this second half of the 19th century a few models which featured weights cast in the shape of a gnome and other curious forms. Thanks to Eisenlohr's design, the cuckoo clock became one of the most successful Black Forest products within a few years. In a report on the exhibition of local products at the 1873 Vienna World's Fair, Karl Schott, the then head of the Furtwanger Landesgewerbehalle (Furtwangen State Trade Hall), wrote "that today the cuckoo clock is one of the most sought-after clocks in the Black Forest". At the time of the Vienna exhibition, cuckoo clocks were not only sold on the German domestic market, but in many regions of the world. The main export countries in Europe were Switzerland, England, Russia and the Ottoman Empire. Schott also named overseas sales in his 1873 report: North America, Mexico, South America, Australia, India, Japan, China and even the Sandwich Islands (Hawaii). By 1860, the Bahnhäusle style had started to develop away from its original, "severe" graphic form, and evolved toward the well-known case with three-dimensional woodcarvings, like the Jagdstück ("hunt piece", a design created in Furtwangen in 1861), a cuckoo clock with carved oak foliage and hunting motifs, such as trophy animals, guns and powder pouches. Only ten years after its invention by Friedrich Eisenlohr, all variations of the house theme had reached maturity. Bahnhäusle timepieces and their variations were also available as mantel clocks, but not as many compared to the wall version. These ornate timepieces were not made by one clockmaker only, otherwise such a complex product could not have been produced at acceptable prices. There were numerous specialists who assisted the clockmakers. In 1873, Karl Schott reported on the division of labour at the Vienna Exhibition: "The birds are mostly carved and painted by women. The pipes are made by a pipe maker. In addition to a number of master craftsmen, there are also a number of large companies involved in the manufacture of cuckoo clocks, and the cuckoo clock maker rarely makes them himself. Rather, he obtains the movements, reworks them with precision, attaches the bellows and pipes and thus puts the finished movement in the case." The division of labour meant that different clockmakers purchased completely identical parts from the same suppliers. Therefore, small components in particular, such as hands or dials, showed a tendency towards standardization. But it also happened from time to time that movements from different manufacturers were found in cases that looked the same on the outside, simply because they came from the same case maker. The basic cuckoo clock of today is the railway-house (Bahnhäusle) form, still with its rich ornamentation, and these are known under the name of "carved", "classic" or "traditional" clocks, which display carved leaves, birds, deer heads (the Jagdstück design), other animals, etc. The richly decorated Bahnhäusle clocks have become a symbol of the Black Forest that is instantly understood anywhere in the world. The cuckoo clock became successful and world-famous after Friedrich Eisenlohr contributed the Bahnhäusle design to the 1850 competition at the Furtwangen Clockmakers School. Chalet style, the Swiss contribution The chalet style cuckoo clock, whose case reproduces to scale a traditional farmhouse, originated in Switzerland in the late 19th century. The miniature Swiss chalets date back to the beginnings of artistic wood carving in Brienz, in the early 19th century.
The Brienzerware chalet became a popular souvenir, allowing tourists to take home an explicit reminder of a quintessential Swiss structure, though some were rather grand in scale, measuring three or more feet across. Many of these chalets, crafted in different sizes, doubled as music boxes, jewellery boxes, decorative objects, timepieces, etc. Some of those table clocks also had the added feature of a cuckoo bird, or the tandem composed of a cuckoo and a quail. Eventually, Black Forest makers incorporated the chalet style into their production in the early 20th century, and it still remains a popular choice, along with the carved ones, among buyers of this cult item. Cases are usually made after the traditional farmhouses of different regions, such as the Black Forest, Swiss Alps, Emmental, Bavaria and Tyrol. They often have a musical movement, as well as moving figurines and some other elements. Contrary to popular belief, Switzerland is not the birthplace of the cuckoo clock. In the English-speaking world, cuckoo clocks are sufficiently identified with Switzerland that the 1949 film The Third Man has an oft-quoted speech (and it even had antecedents) in which the villainous Harry Lime mockingly says: "(...) in Italy, for 30 years under the Borgias, they had warfare, terror, murder and bloodshed, but they produced Michelangelo, Leonardo da Vinci and the Renaissance. In Switzerland they had brotherly love, they had five hundred years of democracy and peace - and what did that produce? The cuckoo clock." In England Apart from the Black Forest, the cuckoo clock was also made in England in the 18th century. It seems that very few of these London timepieces were produced, an indication that in those days, before the worldwide popularization of the cuckoo clock from the second half of the 19th century, there was not a high demand for them. There is at least one example, intended for the Spanish market. It is a circa 1785 George III bracket clock, with eight-day duration, three fusees and a verge escapement, which announces the quarters on eight bells and gives the hours on a deep-toned cuckoo, with pull quarter repeat on command. The two pipes and bellows for the bird's sound are located at the base of the case, below the movement. Those pipes are placed horizontally, the same position seen in early Black Forest cuckoos. Both the dial and the elaborately engraved back plate read: "Higgs y / Diego Evans / Londres". Robert Higgs and his son Peter were in partnership together as Robert and Peter Higgs, and later, between 1780 and 1785, with James Evans, who sometimes styled himself in Spanish as Diego Evans. They traded musical and other complex clocks, many for the Spanish market. In the mid-20th century, Camerer, Cuss & Co., London, a retailer of Black Forest clocks, produced a few different models in the shape of a half-timbered Tudor style house. The bird was cast aluminium with a movable beak and fixed wings, and the weights were cylindrical rather than pine-cone shaped. They were featured in a Pathé News newsreel in 1950. According to author Terence Camerer Cuss, the company hoped to produce them in large quantities, but due to the high manufacturing cost, only fifty were made between 1949 and 1951. One of them, marked "01", was presented by the maker to the then Prince Charles in 1949 and is part of the Royal Collection. In the United States Cuckoo clocks were imported into the US by German immigrants over a long period, especially in the 19th century.
There are two well-known cuckoo clock manufacturers in the USA. The New England Cuckoo Clock Company was founded in 1958 by W. Kenneth Sessions Jr. and operated in Bristol, Connecticut. The design of the models is clearly American. The clocks were made with Hubert Herr clockworks that were imported from Triberg. The printed and colored paper dials of the clocks are unmistakable, as is the early American design. The clocks were designed by Nils Magnus Tornquist. A clock kit was also offered. The second is the American Cuckoo Clock Company of Philadelphia, Pennsylvania, which originated in the 1890s and imported German clocks. Eventually, the company switched to importing clockworks only and building cuckoo clocks in the USA. In Portugal and Brazil In the 1940s and 50s, cuckoo clocks were made in Portugal by Fábrica Nacional de Relógios A Boa Reguladora (Reguladora from 1953), in Vila Nova de Famalicão. Their models were in the Black Forest traditional style, with wood-carved animals and leaves, and they could be spring-driven or weight-driven. Since its early years, the Portuguese clockmaking company was a fully integrated enterprise, making its own cases and movements (until 1995). In later catalogues, they sold cuckoos imported from Germany. In Brazil, they were manufactured between the 1940s and the 1970s, marketed under different brand names such as Astro, Rei, H and Inrebra, the last two by INREBRA (Indústria de Relógios do Brasil Ltda.), São Paulo. Like the Portuguese cuckoos, they were inspired by Black Forest models, with wood carvings or cases in the shape of a chalet. In the Soviet Union and East Asia From the early 1950s until the 1990s, cuckoo clocks were made in the former Soviet Union by the Serdobsk Clock Factory, and were sold under the trademark Маяк (transliterated as Majak) from 1963. They produced a range of models with a distinctive style: a colourful front painted with floral and vegetal motifs, spruce branches in relief, Russian motifs, basic decoration or none, a deer head on top, etc. One model in particular, composed of a bird on top and five vine leaves, was directly based on a Black Forest one. In Japan, cuckoo clock production began in 1949. Those early timepieces, in the Black Forest style, were marketed under the trademark Poppo by Tezuka Clock Co., Ltd., Tokyo, and usually had "Made in Occupied Japan" stamped on the plate and dial. This term was used on items produced in the country between late 1945 and early 1952, after World War II. In China and South Korea, cuckoo clocks also began to be manufactured in the second half of the 20th century. Designer cuckoo clocks The early 21st century has seen a revitalization of the iconic timepiece with designs, materials, technologies, shapes and colours never seen before in cuckoo clock manufacturing. These pieces are distinguished by their functional and minimalist aesthetic. Although simplified designs with simple, clear lines had already been produced in the 20th century, the boom of designer cuckoo clocks began in the 2000s (the first examples dating back to the 1990s), particularly in Italy, Germany and Japan. There are a wide variety of models, many of them avant-garde creations made of different materials and geometric shapes, such as rhombuses, squares, cubes, circles, rectangles, etc. Without carving, these clocks are usually flat and smooth. Some are painted in a single colour while others are polychrome, with abstract or figurative paintings; others include text and phrases, etc.
As for the clockwork, there are quartz, mechanical and, sometimes, digital versions. Museums In Europe, museums that display collections are the Cuckooland Museum in the UK, with more than 700 clocks, and the Deutsches Uhrenmuseum and the Dorf- und Uhrenmuseum Gütenbach in Germany. The James J. Fiorentino Museum has one of the biggest collections in the United States. Located in Minneapolis, Minnesota, it contains more than 300 cuckoo clocks. See also Automaton clock Black Forest Clock Association Cuckoo clock in culture List of largest cuckoo clocks Singing bird box Striking clock References General bibliography Schneider, Wilhelm (1985): "Zur Entstehungsgeschichte der Kuckucksuhr". In: Alte Uhren, Fascicle 3, pp. 13–21. Schneider, Wilhelm (1987): "Frühe Kuckucksuhren von Johann Baptist Beha in Eisenbach im Hochschwarzwald". In: Uhren, Fascicle 3, pp. 45–53. Mühe, Richard, Kahlert, Helmut and Techen, Beatrice (1988): Kuckucksuhren. München. Schneider, Wilhelm (1988): "The Cuckoo Clocks of Johann Baptist Beha". In: Antiquarian Horology, Vol. 17, pp. 455–462. Schneider, Wilhelm, Schneider, Monika (1988): "Black Forest Cuckoo Clocks at the Exhibitions in Philadelphia 1876 and Chicago 1893". In: Watch & Clock Bulletin, Vol. 30/2, No. 253, pp. 116–127 and 128–132. Schneider, Wilhelm (1989): "Die eiserne Kuckucksuhr". In: Uhren, 12. Jg., Fascicle 5, pp. 37–44. Kahlert, Helmut (2002): "Erinnerung an ein geniales Design. 150 Jahre Bahnhäusle-Uhren". In: Klassik-Uhren, F. 4, pp. 26–30. Graf, Johannes (December 2006): "The Black Forest Cuckoo Clock: A Success Story". In: Watch & Clock Bulletin, Volume 49, Issue 365, pp. 646–652. Miller, Justin (2012): Rare and Unusual Black Forest Clocks. Atglen, Penn.: Schiffer Pub., pp. 27–103. Scholz, Julia (2013): Kuckucksuhr Mon Amour. Faszination Schwarzwalduhr. Stuttgart: Konrad Theiss Verlag, 160 p. External links Article on Designer Cuckoo Clocks published in the NAWCC bulletin Catalogue of Philipp Haas & Söhne (PHS), St. Georgen 1880 (Deutsches Uhrenmuseum) Cuckooland Museum, a museum devoted to the cuckoo clock German clock and watch museum Dorf und Uhrenmuseum Articles containing video clips Birds in art Black Forest Clock designs Culture of Baden-Württemberg Culture of Switzerland Symbols
Cuckoo clock
[ "Mathematics" ]
7,200
[ "Symbols" ]
346,382
https://en.wikipedia.org/wiki/Image%20analysis
Image analysis or imagery analysis is the extraction of meaningful information from images; mainly from digital images by means of digital image processing techniques. Image analysis tasks can be as simple as reading bar coded tags or as sophisticated as identifying a person from their face. Computers are indispensable for the analysis of large amounts of data, for tasks that require complex computation, or for the extraction of quantitative information. On the other hand, the human visual cortex is an excellent image analysis apparatus, especially for extracting higher-level information, and for many applications, including medicine, security, and remote sensing, human analysts still cannot be replaced by computers. For this reason, many important image analysis tools such as edge detectors and neural networks are inspired by human visual perception models. Digital Digital image analysis or computer image analysis is the automatic study of an image by a computer or other electronic device in order to obtain useful information from it. Note that the device is often a computer but may also be an electrical circuit, a digital camera or a mobile phone. It involves the fields of computer or machine vision and medical imaging, and makes heavy use of pattern recognition, digital geometry, and signal processing. This field of computer science developed in the 1950s at academic institutions such as the MIT A.I. Lab, originally as a branch of artificial intelligence and robotics. It is the quantitative or qualitative characterization of two-dimensional (2D) or three-dimensional (3D) digital images. 2D images are analyzed, for example, in computer vision, and 3D images in medical imaging. The field was established in the 1950s–1970s, for example with pioneering contributions by Azriel Rosenfeld, Herbert Freeman, Jack E. Bresenham, and King-Sun Fu. Techniques There are many different techniques used in automatically analysing images. Each technique may be useful for a small range of tasks; however, no known method of image analysis is generic enough to cover the wide range of tasks that human image-analysing capabilities handle. Examples of image analysis techniques in different fields include: 2D and 3D object recognition, image segmentation, motion detection (e.g. single particle tracking, video tracking, optical flow), medical scan analysis, and 3D pose estimation. Applications The applications of digital image analysis are continuously expanding through all areas of science and industry, including: anatomy, such as precise measurement, visualization, and statistical analysis of anatomical structures. assay micro plate reading, such as detecting where a chemical was manufactured. astronomy, such as calculating the size of a planet. automated species identification (e.g. plant and animal species) defense error level analysis filtering machine vision, such as automatically counting items on a factory conveyor belt. materials science, such as determining if a metal weld has cracks. medicine, such as detecting cancer in a mammography scan. metallography, such as determining the mineral content of a rock sample. microscopy, such as counting the germs in a swab. optical character recognition, such as automatic number plate recognition. remote sensing, such as producing land cover/land use maps. robotics, such as avoiding steering into an obstacle. 
security, such as detecting intruders or identifying a person by eye or hair color. Object-based Object-based image analysis (OBIA) involves two typical processes, segmentation and classification; a minimal code sketch of these two steps appears after the further reading list below. Segmentation groups pixels into homogeneous objects. The objects typically correspond to individual features of interest, although over-segmentation or under-segmentation is very likely. Classification can then be performed at the object level, using various statistics of the objects as features in the classifier. Statistics can include the geometry, context and texture of image objects. Over-segmentation is often preferred over under-segmentation when classifying high-resolution images. Object-based image analysis has been applied in many fields, such as cell biology, medicine, earth sciences, and remote sensing. For example, it can detect changes of cellular shape in the process of cell differentiation; it has also been widely used in the mapping community to generate land cover maps. When applied to earth images, OBIA is known as geographic object-based image analysis (GEOBIA), defined as "a sub-discipline of geoinformation science devoted to (...) partitioning remote sensing (RS) imagery into meaningful image-objects, and assessing their characteristics through spatial, spectral and temporal scale". The international GEOBIA conference has been held biennially since 2006. OBIA techniques are implemented in software such as eCognition or the Orfeo toolbox. See also Archeological imagery Imaging technologies Image processing imc FAMOS (1987), graphical data analysis Land cover mapping Military intelligence Remote sensing References Further reading The Image Processing Handbook by John C. Russ (2006) Image Processing and Analysis – Variational, PDE, Wavelet, and Stochastic Methods by Tony F. Chan and Jianhong (Jackie) Shen (2005) Front-End Vision and Multi-Scale Image Analysis by Bart M. ter Haar Romeny, paperback (2003) Practical Guide to Image Analysis by J.J. Friel, et al., ASM International (2000) Fundamentals of Image Processing by Ian T. Young, Jan J. Gerbrands, Lucas J. Van Vliet, paperback (1995) Image Analysis and Metallography edited by P.J. Kenny, et al., International Metallographic Society and ASM International (1989) Quantitative Image Analysis of Microstructures by H.E. Exner & H.P. Hougardy, DGM Informationsgesellschaft mbH (1988) "Metallographic and Materialographic Specimen Preparation, Light Microscopy, Image Analysis and Hardness Testing", Kay Geels in collaboration with Struers A/S, ASTM International (2006) Computer vision Formal sciences
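A minimal sketch of the two-step OBIA process described above (segmentation, then classification on object statistics), assuming NumPy and SciPy are available; the synthetic image, the fixed threshold, and the size-based rule standing in for a real classifier are illustrative choices only, not taken from any OBIA package mentioned in the article.

```python
import numpy as np
from scipy import ndimage

# Synthetic grayscale image: dark background with two bright blobs.
image = np.zeros((64, 64))
image[10:20, 10:20] = 1.0   # a large, bright square object
image[40:44, 40:44] = 0.8   # a smaller, dimmer object

# Segmentation: threshold, then group connected pixels into objects.
binary = image > 0.5
labels, num_objects = ndimage.label(binary)

# Per-object statistics (area and mean intensity) as classifier features.
idx = range(1, num_objects + 1)
areas = ndimage.sum(binary, labels, index=idx)
means = ndimage.mean(image, labels, index=idx)

# Classification: a trivial size-based rule stands in for a trained classifier.
for obj_id, (area, mean) in enumerate(zip(areas, means), start=1):
    kind = "large feature" if area >= 50 else "small feature"
    print(f"object {obj_id}: area={area:.0f}, mean={mean:.2f} -> {kind}")
```

The same pattern scales to real imagery: replace the fixed threshold with a proper segmentation algorithm and the size rule with a trained classifier over the object statistics.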
Image analysis
[ "Engineering" ]
1,215
[ "Artificial intelligence engineering", "Packaging machinery", "Computer vision" ]
346,497
https://en.wikipedia.org/wiki/Tullimonstrum
Tullimonstrum, colloquially known as the Tully monster or sometimes Tully's monster, is an extinct genus of soft-bodied bilaterian animal that lived in shallow tropical coastal waters of muddy estuaries during the Pennsylvanian geological period, about 300 million years ago. A single species, T. gregarium, is known. Examples of Tullimonstrum have been found only in the Essex biota, a smaller section of the Mazon Creek fossil beds of Illinois, United States. Its classification has been the subject of controversy, and interpretations of the fossil have likened it to molluscs, arthropods, conodonts, worms, tunicates, and vertebrates. This creature had a mostly cigar-shaped body, with a triangular tail fin, two long stalked eyes, and a proboscis tipped with a mouth-like appendage. Based on the fossils, it seems this creature was a nektonic carnivore that hunted in the ocean's water column. When Tullimonstrum was alive, Illinois was a mixture of ecosystems like muddy estuaries, marine environments, and rivers and lakes. Fossils of other organisms like the crustacean Belotelson, the cnidarian Essexella, and the elasmobranch fish Bandringa have been found alongside Tullimonstrum. Description Tullimonstrum probably reached lengths of up to about 35 centimetres; the smallest individuals are about 8 centimetres long. Tullimonstrum had a pair of vertical, ventral fins (though the fidelity of preservation of fossils of its soft body makes this difficult to determine) situated at the tail end of its body, and typically featured a long proboscis with up to eight small sharp teeth on each "jaw", with which it may have actively probed for small creatures and edible detritus in the muddy bottom. It was part of the ecological community represented in the unusually rich group of soft-bodied organisms found among the assemblage called the Mazon Creek fossils from their site in Grundy County, Illinois. The absence of hard parts in the fossils implies that the animal did not possess organs composed of bone, chitin or calcium carbonate. There is evidence of serially repeated internal structures. Its head is poorly differentiated. A transverse bar-shaped structure, which was either dorsal or ventral, terminates in two round organs associated with dark material identified as melanosomes (containing the pigment melanin). Their form and structure are suggestive of a camera-type eye. Tullimonstrum possessed structures which have been interpreted as gills, and a possible notochord or rudimentary spinal cord. History of discovery Amateur collector Francis Tully found the first of these fossils in 1955 in a fossil bed known as the Mazon Creek formation. He took the strange creature to the Field Museum of Natural History, but paleontologists were stumped as to which phylum Tullimonstrum belonged in. The species Tullimonstrum gregarium ("Tully's common monster"), as these fossils later were named, takes its genus name from Tully, whereas the species name, gregarium, means "common", and reflects its abundance. The term monstrum ("monster") relates to the creature's outlandish appearance and strange body plan. The fossil remains "a puzzle", and interpretations liken it to a worm, a mollusc, an arthropod, a conodont, or a vertebrate. Since it appears to lack characteristics of the well-known modern phyla, some speculate that it was representative of a stem group to one of the many phyla of worms that are poorly represented today. Similarities with Cambrian fossil organisms have been noted. Chen et al. suggested similarities to Nectocaris pteryx. 
Others pointed to a general resemblance between Tullimonstrum and Opabinia regalis, although Cave et al. note that they were too morphologically dissimilar to be related. Classification The classification of Tullimonstrum has been an ongoing debate since the creature was first described, with many scientists presenting evidence of a vertebrate affinity, and others of an invertebrate affinity. Arguments in favor of vertebrate affinities McCoy et al. (2016) In 2016 two studies were released simultaneously showing that Tullimonstrum may have been a basal vertebrate, and thus a member of the phylum Chordata. McCoy et al. undertook a morphological study of several specimens; their analysis indicated that Tullimonstrum may be closely related to modern lampreys. This affinity was based on pronounced cartilaginous vertebral structures known as arcualia, a dorsal fin and asymmetric caudal fin, keratinous teeth, a single nostril, and tectal cartilages like those of lampreys. While McCoy et al. raised the possibility that Tullimonstrum belongs to the ancestral group of lampreys, it also has many features not found in cyclostomes (lampreys and hagfishes). Clements et al. (2016) A second study, by Clements et al. (2016), came to the conclusion that Tullimonstrum was a stem-vertebrate based on its eye anatomy. Close examination revealed that the animal had a camera-like eye, with preserved lenses and the presence of cylindrical and spheroid melanosomes arranged in distinct layers. These ocular pigments and their unique structure were interpreted as a retinal pigmented epithelium (RPE), offering strong support that the bar organs were indeed eyes. The dark pigments in the eye were chemically tested and found to be fossilized melanin, as opposed to ommochromes or pterins (which are ocular pigments used by many invertebrate groups). While the authors admitted that the ocular pigments of many invertebrate groups have been poorly investigated, at the time of publication the presence of an RPE and two distinct melanosome morphologies was considered a uniquely vertebrate trait. McCoy et al. (2020); Wiemann et al. (2022) In 2020, McCoy, Wiemann and colleagues used Raman spectroscopy to identify the molecular bonds present in the organic material preserved with Tullimonstrum. Based on samples from multiple points in the body, they identified the organic material as representing the decay products of chordate tissues as opposed to the polysaccharide-based chitin seen in arthropods, offering independent and rather unambiguous evidence for the interpretation that Tullimonstrum is a chordate or vertebrate. In 2022, Wiemann and colleagues replicated these spectral signals in collaboration with independent laboratories using Fourier-transform infrared spectroscopy. Comparable tissue signatures have been detected in preserved carbonaceous remains of a diversity of other animals. Arguments in favour of non-vertebrate affinities Sallan et al. (2017) In 2017 Sallan et al. rejected the identification of the Tully monster as a vertebrate. Firstly, they noted that even the presence of the two melanosome types is variable among vertebrates; hagfish lack them altogether, and extant sharks as well as extinct forms found in the Mazon Creek area, such as Bandringa, only have spheroid melanosomes. Additionally, the supposed notochord extends in front of the level of the eyes, which is not the case in any other vertebrate, although it is seen in lancelets. 
Even if the structure was a notochord, the presence of notochords is not limited to vertebrates either. Further criticism was drawn towards the identification of the blocks of the body variously as gill pouches and muscle blocks (myomeres), despite the lack of differentiation in the structure of these blocks. In vertebrates, myomeres are also thinner, and extend along the whole length of the body rather than stopping short of the head. Meanwhile, the gill pouches of lampreys are paired extensions rather than segmented structures, and are usually embedded in a complex gill skeleton, neither of which is the case in Tullimonstrum. Other identifications of soft-tissue structures were considered equally problematic. The supposed brain has no associated nervous tissue and is not connected to the eyes, and the purported liver was located under the gills as opposed to being further back as in other vertebrates. The "mouth" at the front of the proboscis was described as possessing gnathostome-like distinct tooth rows, despite lampreys having "tooth fields" on the interior of the mouth. This would necessitate the convergent re-evolution of grasping jaws. An additional difficulty is that the thin and jointed proboscis is inconsistent with the feeding methods typically used by open-water vertebrates: either ram feeding or suction feeding. The gill pouches would have obstructed the flow of water even further. Sallan et al. note that stalked eyes, tail fins, and brains are also present in anomalocaridids, and that Opabinia also has a similar proboscis. Arthropod affinities had previously been rejected under the presumption that other Mazon Creek arthropods are preserved in three dimensions, with carbonization of the exoskeleton, but the arthropods are not actually preserved in that manner. They also suggested that molluscs convergently evolved complex camera-like eyes containing melanosomes, but proponents of the vertebrate interpretation argue that no molluscs are known that have or had melanosomes in two distinct forms. Further similarities (such as the lobed brain, muscle bands, tail fin, proboscis, and "teeth") could support possible molluscan affinities. Rogers et al. (2019) Nevertheless, Rogers et al. (2019) demonstrated that certain squid (Loligo vulgaris) and cuttlefish (Sepia officinalis) species do in fact have two different melanosome forms which can decay to look like an RPE-like layer, similar to that observed in vertebrates and Tullimonstrum fossils. On a plot of trace metal signatures in the eyes of Mazon Creek fossils, Tullimonstrum is clearly distinct from both vertebrates (which have a higher concentration of zinc) and the eyespots of the putative cephalopod Pohlsepia (however, no evidence of melanosomes was found in Pohlsepia, and some studies deny its cephalopod affinity), though these signals are influenced by the fossilisation process. The authors doubt that Tullimonstrum was a cephalopod (in the absence of other supporting traits), but they argue that eye structure and chemistry alone cannot disprove invertebrate affinities. Even if the eye of Tullimonstrum is homologous with that of vertebrates, it is not necessarily a member of Vertebrata. Many vertebrate-like traits are also observed in tunicates (the larvae of which have pigmented eyes and tail fins), lancelets and acorn worms (both of which have gill openings and axial support structures), and the extinct vetulicolians. Mikami et al. (2023) In 2023, Mikami et al. 
scanned 153 specimens of Tullimonstrum with a 3D scanner, as well as other taxa from Mazon Creek. They concluded that some of the characters used by McCoy et al. (2016) to justify a vertebrate identity (tri-lobed brain, tectal cartilages, fin rays) are not comparable to those of vertebrates. The authors also determined that Tullimonstrum has segmentation extending to the preoptic region, which is clearly different from vertebrates. Alternative classifications were discussed in detail: Tullimonstrum could be a non-vertebrate chordate (due to its segmentation resembling the myomeres of Esconichthys apopyris, an enigmatic jawed vertebrate from Mazon Creek) or a protostome. Paleoecology Tullimonstrum was probably a free-swimming carnivore that dwelt in open marine water, and was occasionally washed to the near-shore setting in which it was preserved. This means it swam freely in the water rather than clinging to a hard surface or dwelling in a benthic environment. Taphonomy The formation of the Mazon Creek fossils is unusual. When the creatures died, they were rapidly buried in silty outwash. The bacteria that began to decompose the plant and animal remains in the mud produced carbon dioxide in the sediments around the remains. This carbon dioxide combined with iron from the groundwater around the remains, forming encrusting nodules of siderite (iron carbonate). The organism was thus entombed, retarding decay and allowing an impression or carbonaceous remains of the organism to be preserved. The first insights into the mechanisms of carbonaceous preservation in the Mazon Creek have been provided as part of a large fossil data set; however, the details are still the subject of ongoing research. The combination of rapid burial and rapid formation of siderite resulted in excellent preservation of the many animals and plants that were entombed in the mud. As a result, the Mazon Creek fossils are one of the world's major Lagerstätten, or concentrated fossil assemblages. The rapid burial and compression often caused Tullimonstrum carcasses to fold and bend, like those of other Mazon Creek animals. The proboscis is rarely preserved in its entirety; it is complete in around 3% of specimens. However, some part of the organ is preserved in about 50% of cases. Many unique fossils have been found alongside Tullimonstrum, such as the sea anemone Essexella, the malacostracan Belotelson, the eurypterid Adelophthalmus mazonensis, horseshoe crabs, the elasmobranch fish Bandringa, and the coleoid cephalopod Jeletzkya. Paleontologist's prank A 1966–1968 prank promulgated by paleontologist Bryan Patterson suggested that modern representatives could possibly be found in remote lakes of Kenya, known under the local name "Ekurut Loedonkakini". These "dancing worms of Turkana" could supposedly kill a man with a bite, produced some sort of milk, and were known even to school-age children. Patterson had several letters sent from Kenya under various aliases to Eugene Richardson, the Field Museum's curator of fossil invertebrates. Patterson had previously been the museum's curator of vertebrate paleontology and retained an accomplice there who was aware of the prank (and prevented it from going too far). A planned expedition was cancelled after the hoax was disclosed in a good-natured Christmas letter. Richardson later recounted the story and published the original letters, poems, and doctored photos in a book under the pseudonym E. Scumas Rory. In popular culture In 1989, Tullimonstrum gregarium was officially designated the state fossil of Illinois. 
Artwork depicting it is featured on U-Haul rental vehicles from the state. See also Paleontology in Illinois References External links "Tully: Monster vs Method", a video by the Field Museum of Natural History "The Tully Monster", a video by the Field Museum of Natural History Mazon Creek Paleobotany References by the Field Museum of Natural History Controversial taxa Fossil taxa described in 1966 Pennsylvanian animals of North America Symbols of Illinois Enigmatic prehistoric animal genera
Tullimonstrum
[ "Biology" ]
3,230
[ "Biological hypotheses", "Controversial taxa" ]
346,511
https://en.wikipedia.org/wiki/Maternal%20mortality%20ratio
The maternal mortality ratio is a key performance indicator (KPI) for efforts to improve the health and safety of mothers before, during, and after childbirth per country worldwide. Often referred to as MMR, it is the annual number of female deaths per 100,000 live births from any cause related to or aggravated by pregnancy or its management (excluding accidental or incidental causes). It is not to be confused with the maternal mortality rate, which is the number of maternal deaths (direct and indirect) in a given period per 100,000 women of reproductive age during the same time period. The statistics are gathered by WHO, UNICEF, UNFPA, the World Bank Group, and the United Nations Population Division. The yearly report started in 1990 and is called Trends in Maternal Mortality. As of the 2015 data published in 2016, the countries that have seen an increase in the maternal mortality ratio since 1990 are the Bahamas, Georgia, Guyana, Jamaica, Dem. People's Rep. Korea, Serbia, South Africa, St. Lucia, Suriname, Tonga, the United States, Venezuela (RB), and Zimbabwe. According to the Sustainable Development Goals report 2018, however, the overall maternal mortality ratio has declined by 37 percent since 2000; even so, nearly 303,000 women died of complications during pregnancy or childbirth in 2015. Because Texas had an exceptionally high maternal mortality ratio compared with other U.S. states, its government created the Maternal Mortality and Morbidity Task Force in 2013. Country measurements This KPI was used for the Millennium Development Goals from 2000 to 2015 and is part of the Sustainable Development Goals. [Tables not preserved in this copy: comparison of this KPI by country for 1990, 2000 and 2015; aggregated data by region; aggregated data by focus subject] See also List of countries by infant and under-five mortality rates List of countries by maternal mortality ratio List of countries by death rate Maternal mortality References Death-related lists Maternal death Demography World Health Organization
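A worked illustration of the definition above; the death and birth counts are hypothetical numbers chosen only to show the arithmetic, not figures from the cited reports.

```latex
\[
\mathrm{MMR} \;=\; \frac{\text{maternal deaths in a year}}{\text{live births in the same year}} \times 100{,}000
\]
% Hypothetical example: a country records 120 maternal deaths
% among 800{,}000 live births in one year:
\[
\mathrm{MMR} \;=\; \frac{120}{800{,}000} \times 100{,}000 \;=\; 15
\ \text{maternal deaths per } 100{,}000 \text{ live births.}
\]
```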
Maternal mortality ratio
[ "Environmental_science" ]
386
[ "Demography", "Environmental social science" ]
346,578
https://en.wikipedia.org/wiki/Alfred%20P.%20Murrah%20Federal%20Building
The Alfred P. Murrah Federal Building was a United States federal government complex located at 200 N.W. 5th Street in downtown Oklahoma City, Oklahoma. On April 19, 1995, the building was the target of the Oklahoma City bombing by Timothy McVeigh and Terry Nichols, which ultimately killed 168 people and injured 684 others. A third of the building collapsed seconds after the truck bomb detonated. The remains were demolished a month after the attack, and the Oklahoma City National Memorial was built on the site. Construction and use The building was designed by architects Stephen H. Horton and Wendell Locke of Locke, Wright and Associates and constructed of reinforced concrete in 1977 by the J.W. Bateson Company of Dallas, Texas, at a cost of $14.5 million. The building, named for federal judge Alfred P. Murrah, an Oklahoma native, opened on March 2, 1977. By the 1990s, the building contained regional offices for the Social Security Administration, the U.S. Department of Housing and Urban Development, the United States Secret Service, the Department of Veterans Affairs vocational rehabilitation counseling center, the Drug Enforcement Administration (DEA), and the Bureau of Alcohol, Tobacco, and Firearms (ATF). It also contained recruiting offices for the U.S. military. It housed approximately 550 employees, as well as America's Kids, a children's day care center. Prior bombing plots In October 1983, members of the Christian militia group The Covenant, The Sword, and the Arm of the Lord (CSA), including founder James Ellison and Richard Snell, plotted to park "a van or trailer in front of the Federal Building and blow it up with rockets detonated by a timer." While the CSA was building a rocket launcher to attack the building, the ordnance accidentally detonated in a member's hands. The CSA took this as divine intervention and called off the planned attack. Convicted of murder in Arkansas in an unrelated case, Snell was executed on April 19, 1995, the same day the bombing of the federal building was carried out, after U.S. Supreme Court Justice Clarence Thomas declined to hear a further appeal. Destruction At 9:02 a.m. local time on April 19, 1995, a Ryder rental truck containing approximately 7,000 pounds (3,175 kg) of ammonium nitrate fertilizer, nitromethane, and diesel fuel was detonated in front of the building, destroying a third of it and causing severe damage to several other buildings nearby. As a result, 168 people were killed, including 19 children, and 684 others were injured. It remains the deadliest act of domestic terrorism, and the one causing the most property damage, in U.S. history. Timothy McVeigh, a U.S. Army veteran, was found guilty of the attack in a jury trial and sentenced to death. He was executed in 2001. A co-conspirator, Terry Nichols, is serving multiple life sentences in a federal prison. Third and fourth subjects Michael Fortier and his wife, Lori, assisted in the plot. They testified against both McVeigh and Nichols in exchange for a 12-year prison term for Michael and immunity for Lori. Michael was released into the witness protection program in January 2006. McVeigh said that he bombed the building on the second anniversary of the Waco siege in 1993 to retaliate for U.S. government actions there and at the siege at Ruby Ridge. 
However, it is also rumored that the bombing was connected to The Covenant, The Sword, and the Arm of the Lord (CSA) white supremacist Richard Snell, who was executed in Arkansas on the day of the bombing and who had also "predicted" that a bombing would happen on the day of his execution. Fort Smith-based federal prosecutor Steven Snyder told the FBI in May 1995 that Snell had previously expressed a desire to target the Murrah building in 1983 as revenge for the IRS raiding his home. Before his execution, McVeigh said that he did not know a day care center was in the building and that, had he known, "It might have given me pause to switch targets." The FBI said that he scouted the interior of the building in December 1994 and likely knew of the day care center before the bombing. Artwork in the building Many works of art were in the building when it was destroyed in the Oklahoma City bombing. The Oklahoma City National Memorial displays art that survived the bombing. Nineteen pieces of art recovered from the Murrah Building are on permanent display on the first floor of the University of Central Oklahoma's Max Chambers Library. These pieces include: Monolith IV, sculpture by Franklin Simons Morning Mist, photograph by David Halpern Charon's Sentinels, photograph by David Halpern North of Cunningham, photograph by Albert Durr Edgar Cowboy in Coffee Shop, photograph by Curt Clyne Morning in Taos, fiber art by Betty Jo Kidson Sun Form, fiber sculpture by Dena Madole Storm, sculpture by Richard Davis Double Layers, fiber art by Jane Knight 31 Flavors, fiber sculpture by Sally Anderson Loyal Creek, clay sculptures by Carol Whitney October, fiber art by Joyce Pardington Precise Notations, tapestry by Bud Stalnaker A Fallen Oak Tree, wood mural by James Strickland Carnival, tapestry by Anna Burgess A Fur Piece, mixed media by Rebecca Friedman Oklahoma Quilt by Terrie Mangat Canyon Wall Number 2, fiber art by Joyce Pardington Sunburst, fiber art by Melanie Vandenbos Lost works are as follows: Sky Ribbons: An Oklahoma Tribute, 1978 fiber sculpture by Gerhardt Knodel Columbines at Cascade Canyon, photograph by Albert D. Edgar Winter Scene, photograph by Curt Clyne Soaring Currents, sisal and rayon textile by Karen Chapnick Monolith, porcelain sculpture by Frank Simons Through the Looking Glass, wool textile by Anna Burgess Palm Tree Coil, bronze sculpture by Jerry McMillan. An untitled acrylic sculpture by Fred Eversley was severely damaged, but survived the blast. Demolition Rescue and recovery efforts were concluded at 11:50 pm on May 1, with the bodies of all but three victims recovered. For safety reasons, the building's remains were to be demolished shortly afterward. However, McVeigh's attorney, Stephen Jones, filed a motion to delay the demolition until the defense team could examine the site in preparation for the trial. More than a month after the bombing, at 7:01 am on May 23, the remains were demolished, and the final three bodies, those of two credit union employees and a customer, were recovered. For several days after the demolition, trucks hauled 800 tons of debris a day away from the site. Some of it was used as evidence in the conspirators' trials, incorporated into parts of memorials, donated to local schools, or sold to raise funds for relief efforts. Remnants and replacement Several remnants of the building stand on the site of the Oklahoma City National Memorial. The plaza (on what was once its south side) has been incorporated into the memorial; the original flagpole is still in use. 
The east wall (within the building's footprint) is intact, as are portions of the south wall. The underground parking garage survived the blast and is used today, but is guarded and closed to the public. Consideration was given to not replacing the Murrah Building and instead renting office space for the agencies affected. Ultimately, the General Services Administration broke ground on a replacement building in 2001, which was completed in 2003. The new 185,000-square-foot building was designed by Ross Barney Architects of Chicago, Illinois, with Carol Ross Barney as the lead designer. Constructed on a two-city-block site, one block north and west of the former site, the new building's design maximized sustainable design and workplace productivity initiatives. Security was paramount in the design, both for the federal employees and for the building's neighbors. Secure design was achieved by following the GSA's current standards for secure facilities, including blast-resistant glazing, and the structural design resists progressive collapse. Building mass, glazing inside the courtyard, and bollards help to maintain a sense of openness as well as security. The art-in-architecture component of the building incorporates a water feature that acts as an additional security barrier. References External links Photos of the Murrah building before the bombing Federal buildings in the United States Former skyscrapers Buildings and structures in Oklahoma City Collapsed buildings and structures in the United States Government buildings in Oklahoma Government buildings completed in 1977 1970s architecture in the United States 1977 establishments in Oklahoma 1995 disestablishments in Oklahoma Oklahoma City bombing Demolished buildings and structures in Oklahoma Buildings and structures demolished by controlled implosion Buildings and structures demolished in 1995 Buildings and structures damaged in the Oklahoma City bombing
Alfred P. Murrah Federal Building
[ "Engineering" ]
1,778
[ "Buildings and structures demolished by controlled implosion", "Architecture" ]
346,610
https://en.wikipedia.org/wiki/Chown
The command chown, an abbreviation of change owner, is used on Unix and Unix-like operating systems to change the owner of file system files and directories. Unprivileged (regular) users who wish to change the group membership of a file that they own may use chgrp. The ownership of any file in the system may only be altered by a super-user. A user cannot give away ownership of a file, even when the user owns it. Similarly, only a member of a group can change a file's group ID to that group. The version of chown bundled in GNU coreutils was written by David MacKenzie and Jim Meyering. The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities. The command has also been ported to the IBM i operating system. See also chgrp chmod takeown References External links chown manual page The chown Command by The Linux Information Project (LINFO) Operating system security Standard Unix programs Unix SUS2008 utilities IBM i Qshell commands
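Since the article describes chown's behavior rather than a specific invocation, here is a minimal sketch of the same operation from Python's standard library; the file name and the user "alice" and group "staff" are illustrative assumptions, and the call requires superuser privileges, just as the article notes for the command itself.

```python
import os
import pwd
import grp

# Look up the numeric IDs for the target owner and group.
# "alice" and "staff" are hypothetical names for illustration.
uid = pwd.getpwnam("alice").pw_uid
gid = grp.getgrnam("staff").gr_gid

# Equivalent to the shell command: chown alice:staff example.txt
# Raises PermissionError unless run with superuser privileges.
os.chown("example.txt", uid, gid)
```

The higher-level shutil.chown accepts user and group names directly and performs the same numeric lookup internally.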
Chown
[ "Technology" ]
228
[ "IBM i Qshell commands", "Computing commands", "Standard Unix programs" ]
346,611
https://en.wikipedia.org/wiki/Axiom%20of%20countability
In mathematics, an axiom of countability is a property of certain mathematical objects that asserts the existence of a countable set with certain properties. Without such an axiom, such a set might not provably exist. Important examples Important countability axioms for topological spaces include: sequential space: a set is open if every sequence convergent to a point in the set is eventually in the set first-countable space: every point has a countable neighbourhood basis (local base) second-countable space: the topology has a countable base separable space: there exists a countable dense subset Lindelöf space: every open cover has a countable subcover σ-compact space: there exists a countable cover by compact spaces Relationships with each other These axioms are related to each other in the following ways: Every first-countable space is sequential. Every second-countable space is first countable, separable, and Lindelöf. Every σ-compact space is Lindelöf. Every metric space is first countable. For metric spaces, second-countability, separability, and the Lindelöf property are all equivalent. Related concepts Other examples of mathematical objects obeying axioms of countability include sigma-finite measure spaces, and lattices of countable type. References General topology Mathematical axioms
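A worked instance of the claim above that every metric space is first countable, sketched in standard notation; this is a textbook argument, not drawn from this article's references.

```latex
% In a metric space $(X,d)$, fix $x \in X$ and write
% $B(x,r) = \{\, y \in X : d(x,y) < r \,\}$ for the open ball.
% The countable family
\[
\mathcal{B}_x \;=\; \{\, B(x, 1/n) : n \in \mathbb{N} \,\}
\]
% is a neighbourhood basis at $x$: if $U$ is open and $x \in U$,
% then $B(x,\varepsilon) \subseteq U$ for some $\varepsilon > 0$,
% and choosing any $n > 1/\varepsilon$ gives
\[
B(x, 1/n) \;\subseteq\; B(x,\varepsilon) \;\subseteq\; U .
\]
```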
Axiom of countability
[ "Mathematics" ]
274
[ "General topology", "Mathematical logic", "Topology", "Mathematical axioms" ]
346,681
https://en.wikipedia.org/wiki/%CE%A3-compact%20space
In mathematics, a topological space is said to be σ-compact if it is the union of countably many compact subspaces. A space is said to be σ-locally compact if it is both σ-compact and (weakly) locally compact. That terminology can be somewhat confusing as it does not fit the usual pattern of σ-(property) meaning a countable union of spaces satisfying (property); that is why such spaces are more commonly referred to explicitly as σ-compact (weakly) locally compact, which is also equivalent to being exhaustible by compact sets. Properties and examples Every compact space is σ-compact, and every σ-compact space is Lindelöf (i.e. every open cover has a countable subcover). The reverse implications do not hold: for example, standard Euclidean space (Rn) is σ-compact but not compact, and the lower limit topology on the real line is Lindelöf but not σ-compact. In fact, the countable complement topology on any uncountable set is Lindelöf but neither σ-compact nor locally compact. However, it is true that any locally compact Lindelöf space is σ-compact. The space of irrational numbers is not σ-compact. A Hausdorff Baire space that is also σ-compact must be locally compact at at least one point. If G is a topological group and G is locally compact at one point, then G is locally compact everywhere. Therefore, the previous property tells us that if G is a σ-compact, Hausdorff topological group that is also a Baire space, then G is locally compact. This shows that for Hausdorff topological groups that are also Baire spaces, σ-compactness implies local compactness. The previous property implies for instance that Rω is not σ-compact: if it were σ-compact, it would necessarily be locally compact, since Rω is a topological group that is also a Baire space. Every hemicompact space is σ-compact. The converse, however, is not true; for example, the space of rationals, with the usual topology, is σ-compact but not hemicompact. The product of a finite number of σ-compact spaces is σ-compact. However, the product of an infinite number of σ-compact spaces may fail to be σ-compact. A σ-compact space X is second category (respectively Baire) if and only if the set of points at which X is locally compact is nonempty (respectively dense) in X. See also Notes References Steen, Lynn A. and Seebach, J. Arthur Jr.; Counterexamples in Topology, Holt, Rinehart and Winston (1970). Compactness (mathematics) General topology Properties of topological spaces
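To make the first example above concrete, here is the standard decomposition showing that Euclidean space is σ-compact; again a textbook argument under the usual definitions, not tied to the cited references.

```latex
% Euclidean space is the union of countably many closed balls:
\[
\mathbb{R}^n \;=\; \bigcup_{k=1}^{\infty} \overline{B}(0,k),
\qquad
\overline{B}(0,k) = \{\, x \in \mathbb{R}^n : \lVert x \rVert \le k \,\}.
\]
% Each $\overline{B}(0,k)$ is closed and bounded, hence compact by the
% Heine--Borel theorem, so $\mathbb{R}^n$ is $\sigma$-compact. It is not
% itself compact, since the open cover $\{ B(0,k) : k \in \mathbb{N} \}$
% has no finite subcover.
```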
Σ-compact space
[ "Mathematics" ]
578
[ "General topology", "Properties of topological spaces", "Space (mathematics)", "Topological spaces", "Topology" ]
346,684
https://en.wikipedia.org/wiki/Color%20science
Color science is the scientific study of color including lighting and optics; measurement of light and color; the physiology, psychophysics, and modeling of color vision; and color reproduction. It is the modern extension of traditional color theory. Organizations International Commission on Illumination (CIE) Illuminating Engineering Society (IES) Inter-Society Color Council (ISCC) Society for Imaging Science and Technology (IS&T) International Colour Association (AIC) Optica, formerly the Optical Society of America (OSA) The Colour Group Society of Dyers and Colourists (SDC) American Association of Textile Chemists and Colorists (AATCC) Association for Research in Vision and Ophthalmology (ARVO) ACM SIGGRAPH Vision Sciences Society (VSS) Council for Optical Radiation Measurements (CORM) Journals The preeminent scholarly journal publishing research papers in color science is Color Research and Application, started in 1975 by founding editor-in-chief Fred Billmeyer, along with Gunter Wyszecki, Michael Pointer and Rolf Kuehni, as a successor to the Journal of Colour (1964–1974). Previously most color science work had been split between journals with broader or partially overlapping focus such as the Journal of the Optical Society of America (JOSA), Photographic Science and Engineering (1957–1984), and the Journal of the Society of Dyers and Colourists (renamed Coloration Technology in 2001). Other journals where color science papers are published include the Journal of Imaging Science & Technology, the Journal of Perceptual Imaging, the Journal of the International Colour Association (JAIC), the Journal of the Color Science Association of Japan, Applied Optics, and the Journal of Vision. Conferences Congress of the International Color Association IS&T Color and Imaging Conference (CIC) SIGGRAPH International Symposium for Color Science and Art References Color Image processing Measurement Psychophysics Visual perception
Color science
[ "Physics", "Mathematics" ]
412
[ "Applied and interdisciplinary physics", "Physical quantities", "Quantity", "Psychophysics", "Measurement", "Size" ]