Dataset schema (each row below consists of: id, url, text, source, categories, token_count):
id: int64 (range 580 to 79M)
url: string (length 31 to 175)
text: string (length 9 to 245k)
source: string (length 1 to 109)
categories: string (160 classes)
token_count: int64 (range 3 to 51.8k)
4,869,700
https://en.wikipedia.org/wiki/Dimensional%20reduction
Dimensional reduction is the limit of a compactified theory where the size of the compact dimension goes to zero. In physics, a theory in D spacetime dimensions can be redefined in a lower number of dimensions d, by taking all the fields to be independent of the location in the extra D − d dimensions. For example, consider a periodic compact dimension with period L. Let x be the coordinate along this dimension. Any field can be described as a sum of terms of the form A_n e^{2πinx/L}, with A_n a constant. According to quantum mechanics, such a term has momentum nh/L along x, where h is the Planck constant. Therefore, as L goes to zero, the momentum goes to infinity, and so does the energy, unless n = 0. However, n = 0 gives a field which is constant with respect to x. So at this limit, and at finite energy, the field will not depend on x. This argument generalizes. The compact dimension imposes specific boundary conditions on all fields, for example periodic boundary conditions in the case of a periodic dimension, and typically Neumann or Dirichlet boundary conditions in other cases. Now suppose the size of the compact dimension is L; then the possible eigenvalues under gradient along this dimension are integer or half-integer multiples of 1/L (depending on the precise boundary conditions). In quantum mechanics this eigenvalue is the momentum of the field, and is therefore related to its energy. As L → 0, all eigenvalues except zero go to infinity, and so does the energy. Therefore, at this limit, with finite energy, zero is the only possible eigenvalue under gradient along the compact dimension, meaning that nothing depends on this dimension. Dimensional reduction also refers to a specific cancellation of divergences in Feynman diagrams. It was put forward by Amnon Aharony, Yoseph Imry, and Shang-keng Ma, who proved in 1976 that "to all orders in perturbation expansion, the critical exponents in a d-dimensional system with short-range exchange and a random quenched field are the same as those of a (d − 2)-dimensional pure system". Their arguments indicated that the "Feynman diagrams which give the leading singular behavior for the random case are identically equal, apart from combinatorial factors, to the corresponding Feynman diagrams for the pure case in two fewer dimensions." This dimensional reduction was investigated further in the context of the supersymmetric theory of Langevin stochastic differential equations by Giorgio Parisi and Nicolas Sourlas, who "observed that the most infrared divergent diagrams are those with the maximum number of random source insertions, and, if the other diagrams are neglected, one is left with a diagrammatic expansion for a classical field theory in the presence of random sources ... Parisi and Sourlas explained this dimensional reduction by a hidden supersymmetry." See also Compactification (physics) Kaluza–Klein theory Supergravity Quantum gravity Supersymmetric theory of stochastic dynamics References String theory Quantum field theory Supersymmetry
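A compact restatement of the mode-expansion argument above in formulas (the field symbol φ and the mode notation are illustrative, not taken from the article):

```latex
% Fourier expansion of a field on a periodic dimension of period L:
\phi(x) = \sum_{n=-\infty}^{\infty} A_n \, e^{2\pi i n x / L},
\qquad p_n = \frac{n h}{L}.
% As L -> 0, |p_n| -> infinity for every n != 0, so at finite energy
% only the x-independent n = 0 mode survives.
```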
Dimensional reduction
Physics,Astronomy
631
25,214,567
https://en.wikipedia.org/wiki/Historia%20Naturalis%20Brasiliae
Historia Naturalis Brasiliae (), originally written in Latin, is the first scientific work on the natural history of Brazil, written by the Dutch naturalist Willem Piso and containing research done by the German scientist Georg Marcgraf, published in 1648. The work includes observations made by the German naturalist H. Gralitzio, in addition to the humanist Johannes de Laet. It was dedicated to Johan Maurits, Count of Nassau, who was the patron of the project during the period of Dutch rule in Brazil. Though the text refers to Brazil generally, the authors' research covered the coastal strip of the Northeast, occupied by the Dutch West India Company. It is based on Marcgraf and Piso's time in Brazil, starting in 1637. It offers an important early European insight into Brazilian flora and fauna by analyzing plants and animals and studying tropical diseases and indigenous therapies. Also included are Willem Piso's interpretation and first opinions of the indigenous people, whom he would go on to describe as barbarous and lacking in science. This would lead to concern amongst Piso and his contemporaries that these people might not be able to contribute to the study of medicine and botany. It was published, as stated on its title page, in: "Lugdun. Batavorum: apud Franciscum Hackium; et Amstelodami: apud Lud. Elzevirium". Elzevirium is the Latin name of the prestigious publisher Elzevir. The work consists of a single volume, originally measuring 40 centimeters in height, and its full title, with subtitle, is: "Historia naturalis Brasiliae ...: in qua non tantum plantae et animalia, sed et indigenarum morbi, ingenia et mores describuntur et iconibus supra quingentas illustrantur". The Brazilian physician and researcher Juliano Moreira said of the work: This clearly masterful work, when carefully reexamined, shows, at each perquisition, new excellences, and thus it is still one of the most authentic glories of Dutch medical literature. We owe to Pies a description, so accurate and meticulous, of the then reigning endemics in Brazil and the means of treating them. He observed the yaws, tetanus, various types of paralysis, dysentery, hemeralopia, maculopapular eruptions. He described ipecac and its emeto-cathartic qualities, which aboriginals used long before the famous doctor Adrian Helvetius, grandfather of the notable French philosopher Claudio Adriano Helvetius, received from Louis XIV a thousand gold louis, titles and honors for having discovered exactly those same therapeutic virtues. The treatise of Helvetius was titled "Remède contre le cours du ventre". The work circulated widely in northern Europe and beyond, so that while it detailed the flora and fauna of coastal South America, it was an important publication for those working elsewhere. Richly illustrated scientific texts allowed knowledge to be disseminated even when scholars themselves could not travel to the research site. The work remained unsurpassed until the nineteenth century, and between its initial publication and subsequent research it remained highly influential. Diverse writers referred to the text, including Miguel Venegas, author of Noticia de la California (1757); the Anglo-American Protestant theologian Cotton Mather, who saw in the text evidence of divine planning; and the amateur American naturalist Thomas Jefferson, who mentioned Marcgraf in his Notes on the State of Virginia. The work would prove especially influential in the field of ecology, being used by a variety of ecologists in different locations and times. 
This long-lasting influence extended beyond ecology as well, with various other sciences making use of its findings in various ways. In particular, Ole Worm used a similar organizational structure when documenting the natural history of Denmark, and even reproduced some of its images in his work, Museum Wormianum. Carl Linnaeus and Albert Aublet would also use the work of Marcgraf in several of their texts and images. The work remained relevant in the 20th century, when a herbarium was discovered containing a large number of items that had been used in the Netherlands during the 17th century. These documents from John Maurice, Prince of Nassau-Siegen, were able to assist researchers and academics even in a modern context. This discovery also prompted people to seek out more of these books detailing herbaria, in order to obtain further information that Maurice may have held. See also Dutch Brazil History of science Natural history References 1648 books Dutch Brazil Natural history Natural history books Flora of Brazil Brazilian literature 17th-century books in Latin 17th-century Dutch books Biology and natural history in the Dutch Republic History of science
Historia Naturalis Brasiliae
Technology
997
7,618,438
https://en.wikipedia.org/wiki/Bill%20Parry%20%28mathematician%29
William Parry FRS (3 July 1934 – 20 August 2006) was an English mathematician who worked in dynamical systems and, in particular, ergodic theory. Among other topics, he studied subshifts of finite type and nilflows. Life Bill Parry was born in Coventry, then in Warwickshire (now the West Midlands), England, the sixth of seven children. Although he failed the eleven-plus exam, Parry was persuaded by his mathematics teacher at Coventry Junior Technical School, a school specialising in metalwork and woodwork, to aim for university. To get appropriate tuition, he had to travel to Birmingham Technical College. He won a place at University College London. Following an MSc at the University of Liverpool, he returned to London to study at Imperial College with Yael Dowker, obtaining his PhD in 1960 with the thesis Ergodic and mixing transformations. Having served in lecturing positions at Birmingham University and the University of Sussex, Parry was appointed to a readership at the recently created University of Warwick in 1968; his was the first appointment in analysis. Two years later, he gave a particularly well-received address at the Sixteenth International Congress of Mathematicians in Nice, France, and was promoted to professor. He played a key role in the Warwick Mathematics Department, and was Chair of the Department for two years. The rapid rise of the Department's international reputation was due to many, among whom Parry featured prominently. His great mathematical achievements were recognized by his early election to the Royal Society in 1984; however, he rarely used the title "Fellow of the Royal Society" except to aid specific causes of interest. Parry continued to contribute to the mathematical community and others' learning up until his death: he taught the University of Warwick's undergraduate course on ergodic theory as late as 2003. His published works include more than 80 research articles and four books. His doctoral students include Mark Pollicott and Mary Rees. He died in Marton, Warwickshire, of cancer exacerbated by MRSA, at the age of 72, on 20 August 2006. Research In 1975, Parry and Dennis Sullivan introduced the topological Parry–Sullivan invariant for flows in one-dimensional dynamical systems. See also Parry–Daniels map Parry–Sullivan invariant Parry measure References External links Obituaries: 1934 births 2006 deaths 20th-century English mathematicians 21st-century English mathematicians Dynamical systems theorists Alumni of Imperial College London Alumni of the University of Liverpool Alumni of University College London Academics of the University of Birmingham Academics of the University of Sussex Academics of the University of Warwick Fellows of the Royal Society Infectious disease deaths in England Deaths from cancer in England Deaths from methicillin-resistant Staphylococcus aureus People from Coventry
Bill Parry (mathematician)
Mathematics
545
63,516,197
https://en.wikipedia.org/wiki/Database%20repair
The problem of database repair is a question about relational databases which has been studied in database theory, and which is a particular kind of data cleansing. The problem asks how we can "repair" an input relational database in order to make it satisfy integrity constraints. The goal is to be able to work with data that is "dirty", i.e., that does not satisfy the required integrity constraints, by reasoning about all possible repairs of the data, i.e., all possible ways to change the data to make it satisfy the integrity constraints, without committing to a specific choice. Several variations of the problem exist, depending on: what we intend to figure out about the dirty data, for instance whether some database tuple is certain (i.e., is in every repaired database), or whether some query answer is certain (i.e., the answer is returned when evaluating the query on every repaired database); which ways of repairing the database are allowed, e.g., inserting new facts, removing facts (so-called subset repairs), and so on; and which repaired databases we study, e.g., those that change only a minimal subset of the database tuples (minimal subset repairs), or those that change only a minimal number of database tuples (minimal cardinality repairs). The problem of database repair has been studied to understand the complexity of these different variants, i.e., whether we can efficiently determine information about the state of the repairs without explicitly materializing all of them. References See also Probabilistic database Database theory
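As a toy illustration of these notions (a minimal sketch: the relation, the single key constraint, and all function names are invented for this example, not taken from any particular system), the following enumerates the subset repairs of a relation violating a key constraint and checks whether a tuple is certain:

```python
from itertools import product
from collections import defaultdict

def subset_repairs_under_key(rel, key_index=0):
    """Subset repairs under one key constraint: each repair keeps
    exactly one tuple from every group of tuples sharing a key."""
    groups = defaultdict(list)
    for t in rel:
        groups[t[key_index]].append(t)
    # Every maximal consistent subset picks one tuple per key value.
    return [set(choice) for choice in product(*groups.values())]

def tuple_is_certain(rel, t, key_index=0):
    """A tuple is certain if it belongs to every repair."""
    return all(t in r for r in subset_repairs_under_key(rel, key_index))

# "employee(name, dept)" with name as key; Alice's dept is dirty.
emp = [("alice", "sales"), ("alice", "hr"), ("bob", "it")]
print(tuple_is_certain(emp, ("bob", "it")))       # True: in every repair
print(tuple_is_certain(emp, ("alice", "sales")))  # False: one repair drops it
```

Real systems avoid this brute-force enumeration, since the number of repairs can be exponential; that is precisely the complexity question described above.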
Database repair
Technology
333
1,522,372
https://en.wikipedia.org/wiki/Beta%20Aurigae
Beta Aurigae (Latinized from β Aurigae, abbreviated Beta Aur, β Aur), officially named Menkalinan, is a binary star system in the northern constellation of Auriga. The combined apparent visual magnitude of the system is 1.9, making it the second-brightest member of the constellation after Capella. Using the parallax measurements made during the Hipparcos mission, the distance to this star system can be estimated as , give or take a half-light-year margin of error. Along their respective orbits around the Milky Way, Beta Aurigae and the Sun are closing in on each other, so that in around one million years it will become the brightest star in the night sky. Nomenclature β Aurigae is the star system's Bayer designation. The traditional name Menkalinan is derived from the Arabic منكب ذي العنان mankib ðī-l-‘inān "shoulder of the rein-holder". In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Menkalinan for this star. It is known as 五車三 (the Third Star of the Five Chariots) in traditional Chinese astronomy. Properties Beta Aurigae is a binary star system, but it appears as a single star in the night sky. The two stars are metallic-lined subgiant stars belonging to the A-type stellar classification; they have roughly the same mass and radius. A-type stars are hot and emit a blue-white light; these two stars burn brighter and hotter than the Sun, which is a G2-type main-sequence star. The pair constitute an eclipsing spectroscopic binary; the combined apparent magnitude varies over a period of 3.96 days between +1.89 and +1.94, as every 47.5 hours one of the stars partially eclipses the other from Earth's perspective. The two stars are designated Aa and Ab in modern catalogues, but have also been referred to as components 1 and 2 or A and B. There is an 11th-magnitude optical companion with a separation of as of 2011, but increasing. It is also an A-class subgiant, but is an unrelated background star. At an angular separation of along a position angle of 155° is a companion star that is 8.5 magnitudes fainter than the primary. It may be the source of the X-ray emission from the vicinity. The Beta Aurigae system is believed to be a stream member of the Ursa Major Moving Group. See also Algol Capella References External links Menkalinan Image Beta Aurigae CCDM J05596+4457 Catalog Smithsonian/NASA Astrophysics Data System Menkalinan - spectroscopic binary star Menkalinan - Beta Aurigae, the spectroscopic eclipsing binary star M-type main-sequence stars A-type subgiants Am stars Algol variables Triple star systems Ursa Major moving group Menkalinan Auriga Aurigae, Beta BD+44 1328 Aurigae, 34 040183 028360 2088
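The 3.96-day period and the 47.5-hour eclipse spacing quoted above are mutually consistent, since an eclipsing binary of two similar stars shows two eclipses (a primary and a secondary) per orbit; a quick check using only the numbers in the text:

```latex
% Two eclipses per orbital period P:
\frac{P}{2} = \frac{3.96 \times 24\,\mathrm{h}}{2} \approx 47.5\,\mathrm{h}
```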
Beta Aurigae
Astronomy
706
33,567,625
https://en.wikipedia.org/wiki/Marine%20Ecology%20Progress%20Series
The Marine Ecology Progress Series is a peer-reviewed scientific journal that covers all aspects of marine ecology. History The journal was founded by Otto Kinne. Its original concept was based on Marine Ecology, also once edited by Kinne and published by John Wiley & Sons. Abstracting and indexing The Marine Ecology Progress Series is indexed and abstracted in Biological Abstracts, Scopus, and the Science Citation Index. References External links Academic journals established in 1979 English-language journals Ecology journals
Marine Ecology Progress Series
Environmental_science
97
2,164,610
https://en.wikipedia.org/wiki/The%20Bottle%20Imp
"The Bottle Imp" is an 1891 short story by the Scottish author Robert Louis Stevenson usually found in the short story collection Island Nights' Entertainments. It was first published in the New York Herald (February–March 1891) and Black and White magazine (London, March–April 1891). In it, the protagonist buys a bottle with an imp inside that grants wishes. However, the bottle is cursed; if the holder dies bearing it, his or her soul is forfeit to hell. Plot Keawe, a poor Native Hawaiian, buys a strange unbreakable bottle from a sad, elderly gentleman who credits the bottle with his fortune. He promises that an imp residing in the bottle will also grant Keawe his every desire. Of course, there is a catch. The bottle must be sold, for cash, at a loss, i.e. for less than its owner originally paid, and cannot be thrown or given away, or else it will magically return to him. All of these rules must be explained by each seller to each purchaser. If an owner of the bottle dies without having sold it in the prescribed manner, that person's soul will burn for eternity in Hell. The bottle was said to have been brought to Earth by the Devil and first purchased by Prester John for millions; it was owned by Napoleon and Captain James Cook and accounted for their great successes. By the beginning of the story the price has diminished to fifty dollars. Keawe buys the bottle and instantly tests it by wishing his money to be refunded, and by trying to sell it for more than he paid and abandoning it, to test if the story is true. When these all work as described, he realizes the bottle does indeed have unholy power. He wishes for his heart's desire: a big, fancy mansion on a landed estate, and finds his wish granted, but at a price: his beloved uncle and cousins have been killed in a boating accident, leaving Keawe sole heir to his uncle's fortune. Keawe is horrified, but uses the money to build his house. Having all he wants, and being happy, he explains the risks to a friend who buys the bottle from him. Keawe lives a happy life, but there is something missing. Walking along the beach one night, he meets a beautiful woman, Kokua. They soon fall in love and become engaged. Keawe's happiness is shattered on the night of his betrothal, when he discovers that he has contracted the then-incurable disease of leprosy. He must give up his house and wife, and live in Kalaupapa—a remote community for lepers—unless he can recover the bottle and use it to cure himself. Keawe begins this quest by attempting to track down the friend to whom he sold the bottle, but the friend has become suddenly wealthy and left Hawaii. Keawe traces the path of the bottle through many buyers and eventually finds a Haole in Honolulu. The man of European ancestry has both good and bad news for Keawe: (a) he owns the bottle and is very willing to sell, but (b) he had only paid two cents for it. Therefore, if Keawe buys it, he will not be able to resell it. Keawe decides to buy the bottle anyway, for the price of one cent, and indeed cures himself. Now, however, he is understandably despondent: how can he possibly enjoy life, knowing his doom? His wife mistakes his depression for regret at their marriage, and asks for a divorce. Keawe confesses his secret to her. His wife suggests they sail, with the bottle, to Tahiti; on that archipelago the colonists of French Polynesia use centimes, a coin worth one fifth of an American cent. This offers a potential recourse for Keawe. 
When they arrive, however, the suspicious natives will not touch the cursed bottle. Kokua determines to make a supreme sacrifice to save her husband from his fate. Since she knows he would never knowingly sell the bottle to her, Kokua is forced to bribe an old sailor to buy the bottle for four centimes, with the understanding that she will secretly buy it back for three. Now Kokua is happy, but she carries the curse. Keawe discovers what his wife has done, and resolves to sacrifice himself for her in the same manner. He arranges for a brutish boatswain to buy the bottle for two centimes, promising he will buy it back for one, thus sealing his doom. However, the drunken sailor refuses to part with it, and is unafraid of the prospect of Hell. "I reckon I'm going anyway," he says. Keawe returns to his wife, both of them free from the curse, and the reader is encouraged to believe that they live happily ever after. Background The theme of the bottle imp can also be found in the German legend Spiritus familiaris recorded by the Brothers Grimm. At the time of publication in 1891, the currency system of the Kingdom of Hawaii included cent coins that circulated at par with the U.S. penny. The story reflects Stevenson's impressions gained during his five-month visit to the Kingdom of Hawaii in 1889. Part of the storyline takes place in the little town of Hoʻokena on the Kona coast of the island of Hawaii, which the author visited. In a scene which takes place in Honolulu, Stevenson mentions Heinrich Berger, the bandmaster of the Royal Hawaiian Band. The name of Keawe's wife refers to the Hawaiian word kōkua, which means help. In 1889 Stevenson also visited the leper colony on the island of Molokaʻi and met Father Damien there; thus he had first-hand experience of the fate of lepers. Several times Stevenson uses the Hawaiian word Haole, the usual term for Caucasians, for example in describing the last owner of the bottle. The story could be considered as both a continuation of and a rather light-hearted counterpoint to the theme of selling one's soul to the Devil, manifested in the numerous depictions of Doctor Faust as well as in such stories as "The Devil and Tom Walker" by Washington Irving and "The Devil and Daniel Webster" by Stephen Vincent Benét. Publication "The Bottle Imp" was published in the missionary magazine O le sulu Samoa (The Samoan Torch) in 1891, with the title "O Le Tala I Le Fagu Aitu". According to Publishers Weekly and School Library Journal (both quoted by Amazon.com), "this tale was originally published, in Samoan, in 1891". The Locus Online Index to Science Fiction similarly states "The Stevenson story was first published in Samoan in 1891, appearing later that year in English." The Project Gutenberg text of the story has a note by Stevenson which says "...the tale has been designed and written for a Polynesian audience..." which also suggests initial publication in Polynesia, not in the United States. Bottle Imp paradox The premise of the story creates a logical paradox similar to the unexpected hanging paradox. Clearly no rational person would buy the bottle for one cent, as this would make it impossible to sell at a loss. However, it follows that no rational person would buy it for two cents either, if it can later be sold only to a rational person, at a loss. By backward induction, the bottle cannot be sold for any price in a perfectly rational world. And yet, the actions of the people in the story do not seem particularly unwise. 
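The backward-induction argument is easy to mechanize. Below is a minimal sketch (the function name and the purely-rational-buyer model are mine, not Stevenson's): an agent buys only if some rational agent would later buy at a strictly lower whole-unit price, so no price admits a rational buyer.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def rational_buyer_exists(price_units: int) -> bool:
    """Would a purely rational agent buy the bottle at this price,
    expressed in whole units of the smallest currency denomination?"""
    if price_units <= 1:
        # No strictly lower positive price exists, so resale at a loss
        # is impossible and the buyer would be damned: never buy.
        return False
    # Buy only if the bottle can later be sold to some rational buyer
    # at a strictly lower price.
    return any(rational_buyer_exists(p) for p in range(1, price_units))

# Backward induction: no price up to 1000 units admits a rational buyer.
print(any(rational_buyer_exists(p) for p in range(1, 1001)))  # False
```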
The story shows that the paradox could be resolved by the existence of certain characters: someone who loves the bottle's current owner enough to sacrifice their own soul for that person; someone who believes they are inevitably destined for Hell already; or someone who believes there is someone else willing to make an irrational decision to purchase the bottle. Since the exchange rates of different currencies can fluctuate with respect to one another, it is also possible that the value of the bottle could increase from one transaction to the next even if the stated price decreases. This leads to an endless staircase-type paradox which would make it possible, in theory, for the bottle to keep being sold infinitely many times. However, this might be forbidden depending on how the bottle imp interprets the idea of "selling at a loss". Adaptations A silent film based on Stevenson's story was released in 1917. The screenplay was adapted by Charles Maigne. The film was directed by Marshall Neilan, and starred Sessue Hayakawa, Lehua Waipahu, H. Komshi, George Kuwa, Guy Oliver and James Neill. The Witch's Tale, a horror anthology radio series, adapted the story as "The Wonderful Bottle" in 1934. Käthe von Nagy starred in the German film Love, Death and the Devil (1934) and the French film The Devil in the Bottle (1935), both based on the story. West German filmmakers the Diehl Brothers used the story as the basis for a feature film shot with a mixture of puppetry and live action, released in 1952 under the title Der Flaschenteufel. An Italian TV adaptation, "Il diavolo nella bottiglia", aired on Rai 2 on 23 June 1981 as part of the horror anthology series I giochi del diavolo. "The Imp in the Bottle" was episode number 143 of the CBS Radio Mystery Theater, which aired in 1974. The Devil Inside, an opera based on Stevenson's short story written by the novelist Louise Welsh and the composer Stuart MacRae, premiered at the Theatre Royal, Glasgow in January 2016. The opera was a co-production between Scottish Opera and Music Theatre Wales. The story has inspired the trick-taking card game Bottle Imp, designed by Günter Cornett. It was first published in 1995 by Bambus Spieleverlag, and has been republished several times since under the name "Bottle Imp". See also Greater fool theory Unexpected hanging paradox References External links Short stories by Robert Louis Stevenson 1891 short stories Gothic short stories Media related to game theory Native Hawaiian Works originally published in the New York Herald Hawaii in fiction Short stories adapted into films Cultural depictions of Napoleon Cultural depictions of James Cook Imps
The Bottle Imp
Mathematics
2,139
80,723
https://en.wikipedia.org/wiki/South%20Atlantic%20Anomaly
The South Atlantic Anomaly (SAA) is an area where Earth's inner Van Allen radiation belt comes closest to Earth's surface, dipping down to an altitude of . This leads to an increased flux of energetic particles in this region and exposes orbiting satellites (including the ISS) to higher-than-usual levels of ionizing radiation. The effect is caused by the non-concentricity of Earth with its magnetic dipole and has been observed to be increasing in intensity recently. The SAA is the near-Earth region where Earth's magnetic field is weakest relative to an idealized Earth-centered dipole field. Definition The area of the SAA is confined by the intensity of Earth's magnetic field at less than 32,000 nanotesla at sea level, which corresponds to the dipolar magnetic field at ionospheric altitudes. However, the field itself varies in intensity as a gradient. Position and shape The Van Allen radiation belts are symmetric about the Earth's magnetic axis, which is tilted with respect to the Earth's rotational axis by an angle of approximately 11°. The intersection between the magnetic and rotation axes of the Earth is located not at the Earth's center, but some away. Because of this asymmetry, the inner Van Allen belt is closest to the Earth's surface over the south Atlantic Ocean where it dips down to in altitude, and farthest from the Earth's surface over the north Pacific Ocean. If Earth's magnetism is represented by a bar magnet of small size but strong intensity ("magnetic dipole"), the SAA variation can be illustrated by placing the magnet not in the plane of the Equator, but some small distance North, shifted more or less in the direction of Singapore. As a result, over northern South America and the south Atlantic, near Singapore's antipodal point, the magnetic field is relatively weak, resulting in a lower repulsion to trapped particles of the radiation belts there, and as a result these particles reach deeper into the upper atmosphere than they otherwise would. The shape of the SAA changes over time. Since its initial discovery in 1958, the southern limits of the SAA have remained roughly constant while a long-term expansion has been measured to the northwest, the north, the northeast, and the east. Additionally, the shape and particle density of the SAA varies on a diurnal basis, with greatest particle density corresponding roughly to local noon. At an altitude of approximately , the SAA spans from geographic latitude and from longitude. The highest intensity portion of the SAA drifts to the west at a speed of about 0.3° per year, and is noticeable in the references listed below. The drift rate of the SAA is very close to the rotation differential between the Earth's core and its surface, estimated to be between 0.3° and 0.5° per year. Current literature suggests that a slow weakening of the geomagnetic field is one of several causes for the changes in the borders of the SAA since its discovery. As the geomagnetic field continues to weaken, the inner Van Allen belt gets closer to the Earth, with a commensurate enlargement of the SAA at given altitudes. During the Middle Holocene, the Earth's magnetic field in the region occupied by the SAA was relatively calm and quiescent, contrasting with its present day activity. The South Atlantic Anomaly seems to be caused by a huge reservoir of very dense rock inside the Earth called the African large low-shear velocity province. 
The position of the anomaly can be taken as that of the maximum magnetic flux or that of the centroid of the flux, which is less sensitive to sampling noise and more representative of the feature as a whole. In January 2021, the centroid was located near and drifting about 0.23°S 0.34°W per year. Intensity and effects The South Atlantic Anomaly is of great significance to astronomical satellites and other spacecraft that orbit the Earth at several hundred kilometers' altitude; these orbits take satellites through the anomaly periodically, exposing them to several minutes of strong ionizing radiation, caused by the trapped protons in the inner Van Allen belt. Measurements on Space Shuttle flight STS-94 ascertained that absorbed dose rates from charged particles ranged from 112 to 175 μGy/day, with dose equivalent rates ranging from 264.3 to 413 μSv/day. The International Space Station, orbiting with an inclination of 51.6°, requires extra shielding to deal with this problem. The Hubble Space Telescope does not take observations with its sensitive UV detectors while passing through the SAA. Passing through the anomaly caused false alarms on the solar flare sensor of Skylab's Apollo Telescope Mount. Astronauts are also affected by this region, which is said to be the cause of peculiar "shooting stars" (phosphenes) seen in the visual field of astronauts, an effect termed cosmic ray visual phenomena. Passing through the South Atlantic Anomaly is thought to be the reason for the failures of the Globalstar network's satellites in 2007. The PAMELA experiment, while passing through the SAA, detected antiproton levels that were orders of magnitude higher than expected. This suggests the Van Allen belt confines antiparticles produced by the interaction of the Earth's upper atmosphere with cosmic rays. NASA has reported that modern laptop computers have crashed when Space Shuttle flights passed through the anomaly. In October 2012, the SpaceX CRS-1 Dragon spacecraft attached to the International Space Station experienced a transient problem as it passed through the anomaly. The SAA is believed to have started a series of events leading to the destruction of Hitomi, Japan's most powerful X-ray observatory. The anomaly transiently disabled a direction-finding mechanism, causing the satellite to rely solely on gyroscopes that were not working properly, after which it spun out of control, losing its solar panels in the process. See also Geomagnetic reversal Geomagnetic storm Large low-shear-velocity provinces Operation Argus Space weather References External links Section "Magnetic flip" contains a video showing the growth and movement of the South Atlantic Anomaly over the last 400 years. Atlantic Ocean Magnetic anomalies Magnetic field of the Earth Space plasmas Geology of South America
South Atlantic Anomaly
Physics
1,280
44,558,259
https://en.wikipedia.org/wiki/Psathyrella%20piluliformis
Psathyrella piluliformis is a species of agaric fungus in the family Psathyrellaceae. It produces fruit bodies (mushrooms) with broadly convex caps measuring in diameter. The caps are chestnut to reddish brown, the color fading with age and in dry weather. The closely spaced gills have an adnate attachment to the stipe. They are initially tan until the spores mature, when the gills turn dark brown. Fragments of the partial veil may remain on the cap margin, and as a wispy band of hairs on the stipe. The stipe is 2–7 cm tall and 3–7 mm wide, white, smooth, hollow, and bulging at the base. Fruiting occurs in clusters at the base of hardwood stumps. It is considered edible but of low quality, as its flesh is fragile and the species is difficult to identify. Similar species include Psathyrella carbonicola, P. longipes, P. longistriata, P. multipedata, P. spadicea, and Parasola conopilus. See also List of Psathyrella species References External links Fungi described in 1783 Fungi of Europe Fungi of North America Psathyrellaceae Edible fungi Fungus species
Psathyrella piluliformis
Biology
247
14,167,198
https://en.wikipedia.org/wiki/CLCN5
The CLCN5 gene encodes the chloride channel Cl−/H+ exchanger ClC-5. ClC-5 is mainly expressed in the kidney, in particular in proximal tubules, where it participates in the uptake of albumin and low-molecular-weight proteins, which is one of the principal physiological roles of proximal tubular cells. Mutations in the CLCN5 gene cause an X-linked recessive nephropathy named Dent disease (Dent disease 1, MIM#300009) characterized by excessive urinary loss of low-molecular-weight proteins and of calcium (hypercalciuria), nephrocalcinosis (presence of calcium phosphate aggregates in the tubular lumen and/or interstitium) and nephrolithiasis (kidney stones). The CLCN5 gene Structure The human CLCN5 gene (MIM#300008, reference sequence NG_007159.2) is localized in the pericentromeric region of chromosome Xp11.23. It extends over about 170 kb of genomic DNA, has a coding region of 2,238 bp and consists of 17 exons, including 11 coding exons (from 2 to 12). The CLCN5 gene has 8 paralogues (CLCN1, CLCN2, CLCN3, CLCN4, CLCN6, CLCN7, CLCNKA, CLCNKB) and 201 orthologues among jawed vertebrates (Gnathostomata). Five different CLCN5 gene transcripts have been discovered, two of which (transcript variants 3 [NM_000084.5] and 4 [NM_001282163.1]) encode the canonical 746-amino-acid protein, two of which (transcript variants 1 [NM_001127899.3] and 2 [NM_001127898.3]) encode the N-terminally extended 816-amino-acid protein, and one of which does not encode any protein (transcript variant 5 [NM_001272102.2]). The 5' untranslated region (5'UTR) of CLCN5 is complex and not entirely clarified. Two strong promoters and one weak promoter were predicted to be present in the CLCN5 gene. Several different alternatively used 5' exons have been recognized in the human kidney. The three promoters drive, with varying degrees of efficiency, 11 different mRNAs, with transcription initiating from at least three different start sites. The chloride channel H+/Cl− exchanger ClC-5 Like all ClC channels, ClC-5 needs to dimerize to create the pore through which the ions pass. ClC-5 can form both homo- and hetero-dimers owing to its marked sequence homology with ClC-3 and ClC-4. The canonical 746-amino-acid ClC-5 protein has 18 membrane-spanning α-helices (named A to R), an intracellular N-terminal domain and a cytoplasmic C-terminus containing two cystathionine beta-synthase (CBS) domains, which are known to be involved in the regulation of ClC-5 activity. Helices B, H, I, O, P, and Q are the six major helices involved in the formation of the dimer's interface and are crucial for proper pore configuration. The Cl− selectivity filter is principally formed by helices D, F, N, and R, which are brought together near the channel center. Two amino acids important for proper ClC-5 function are the glutamic acids at positions 211 and 268, called respectively the "gating glutamate" and the "proton glutamate". The gating glutamate is necessary for both H+ transport and ClC-5 voltage dependence. The proton glutamate is crucial to H+ transport, acting as an H+ transfer site. Localization and function ClC-5 belongs to the family of voltage-gated chloride channels, which are regulators of membrane excitability, transepithelial transport and cell volume in different tissues. Based on sequence homology, the nine mammalian ClC proteins can be grouped into three classes, of which the first (ClC-1, ClC-2, ClC-Ka and ClC-Kb) is expressed primarily in plasma membranes, whereas the other two (ClC-3, ClC-4 and ClC-5; ClC-6 and ClC-7) are expressed primarily in organellar membranes. 
ClC-5 is expressed at low to moderate levels in brain, muscle and intestine, but highly in the kidney, primarily in proximal tubular cells of the S3 segment, in alpha intercalated cells of the cortical collecting duct, and in the cortical and medullary thick ascending limb of Henle's loop. Proximal tubular cells (PTCs) are the main site of ClC-5 expression. By means of receptor-mediated endocytosis, they take up albumin and low-molecular-weight proteins that pass freely through the glomerular filter. ClC-5 is located in early endosomes of PTCs, where it co-localizes with the electrogenic vacuolar H+-ATPase (V-ATPase). ClC-5 in this compartment contributes to the maintenance of the intra-endosomal acidic pH. Acidification of this environment is necessary for the dissociation of a ligand from its receptor. The receptor is then recycled to the apical membrane, while the ligand is transported to the late endosome and lysosome, where it is degraded. ClC-5 supports efficient acidification of endosomes either by providing a Cl− conductance to counterbalance the accumulation of positively charged H+ pumped in by the V-ATPase, or by directly acidifying the endosome in parallel with the V-ATPase. Experimental evidence indicates that the endosomal Cl− concentration, which is raised by ClC-5 in exchange for protons accumulated by the V-ATPase, may play a role in endocytosis independently of endosomal acidification, thus pointing to another possible mechanism by which ClC-5 dysfunction may impair endocytosis. ClC-5 is also located at the cell surface of PTCs, where it probably plays a role in the formation and function of the endocytic complex that also involves the megalin and cubilin/amnionless receptors, the sodium-hydrogen antiporter 3 (NHE3), and the V-ATPase. It was demonstrated that the C-terminus of ClC-5 binds the actin-depolymerizing protein cofilin. When the nascent endosome forms, the recruitment of cofilin by ClC-5 is a prerequisite for the localized dissolution of the actin cytoskeleton, thus permitting the endosome to pass into the cytoplasm. It is conceivable that, at the cell surface, the large intracellular C-terminus of ClC-5 has a crucial function in mediating the assembly, stabilization and disassembly of the endocytic complex via protein–protein interactions. Therefore, ClC-5 may accomplish two roles in receptor-mediated endocytosis: i) vesicular acidification and receptor recycling; ii) participation in the non-selective megalin–cubilin–amnionless low-molecular-weight protein uptake at the apical membrane. Clinical significance Dent disease is mainly caused by loss-of-function mutations in the CLCN5 gene (Dent disease 1; MIM#300009). Dent disease 1 shows marked allelic heterogeneity. To date, 265 different CLCN5 pathogenic variants have been described. A small number of pathogenic variants were found in more than one family. Of these, 48% are truncating mutations (nonsense, frameshift or complex), 37% non-truncating (missense or in-frame insertions/deletions), 10% splice-site mutations, and 5% of other types (large deletions, Alu insertions or 5'UTR mutations). Functional investigations in Xenopus laevis oocytes and mammalian cells have enabled these CLCN5 mutations to be classified according to their functional consequences. The most common mutations lead to defective protein folding and processing, resulting in endoplasmic reticulum retention of the mutant protein for further degradation by the proteasome. 
Animal models Two independent ClC-5 knock-out mice, the so-called Jentsch and Guggino models, provided critical insights into the mechanisms of proximal tubular dysfunction in Dent disease 1. These two murine models recapitulated the major features of Dent disease (low-molecular-weight proteinuria, hypercalciuria and nephrocalcinosis/nephrolithiasis) and demonstrated that ClC-5 inactivation is associated with severe impairment of both fluid-phase and receptor-mediated endocytosis, as well as trafficking defects leading to the loss of megalin and cubilin at the brush border of proximal tubules. However, targeted disruption of ClC-5 in the Jentsch model did not lead to hypercalciuria, kidney stones or nephrocalcinosis, while the Guggino model did. The Jentsch murine model produced slightly more acidic urine. Urinary phosphate excretion was increased in both models by about 50%. Hyperphosphaturia in the Jentsch model was associated with decreased apical expression of the sodium/phosphate cotransporter NaPi2a, the predominant phosphate transporter in the proximal tubule. However, NaPi2a expression is ClC-5-independent, since apical NaPi2a was normally expressed in the proximal tubules of chimeric female mice, while it was decreased in all proximal tubular knock-out cells of males. Serum parathormone (PTH) is normal in knock-out mice, while urinary PTH is increased about 1.7-fold. Megalin usually mediates the endocytosis and degradation of PTH in proximal tubular cells. In knock-out mice, the downregulation of megalin leads to defective endocytosis of PTH and progressively increases luminal PTH levels, which enhance the internalization of NaPi2a. DNA testing and genetic counselling A clinical diagnosis of Dent disease can be confirmed through molecular genetic testing, which can detect mutations in specific genes known to cause Dent disease. However, about 20–25% of Dent disease patients remain genetically unresolved. Genetic testing is useful to determine healthy-carrier status in the mother of an affected male. In fact, since Dent disease is an X-linked recessive disorder, males are more frequently affected than females, and females may be healthy heterozygous carriers. Due to skewed X-inactivation, female carriers may present some mild symptoms of Dent disease, such as low-molecular-weight proteinuria or hypercalciuria. Carriers will transmit the disease to half of their sons, whereas half of their daughters will be carriers. Affected males do not transmit the disease to their sons, since they pass the Y chromosome to them, but all their daughters will inherit the mutated X chromosome. Preimplantation and prenatal genetic testing is not advised for Dent disease 1, since the prognosis for the majority of patients is good and a clear correlation between genotype and phenotype is lacking. See also Chloride channel Notes References Further reading External links Ion channels
CLCN5
Chemistry
2,453
50,553,816
https://en.wikipedia.org/wiki/SOGIN
SOGIN (, the Nuclear Plant Management Company, also called Sogin) is an Italian state-owned enterprise responsible for nuclear decommissioning as well as the management and disposal of radioactive waste produced by industrial, research and medical processes. Founded in 1999 following the 1987 Italian referendums on nuclear power, SOGIN was originally part of the state-owned ENEL but became independent, though still government-owned, in 2000. The company initially took over the Caorso, Enrico Fermi, Garigliano and Latina nuclear power plants, later adding other sites including ENEA's EUREX. The company has commenced the decommissioning of all the plants and is predicted to complete the work in 2036. The company has been involved in environmental remediation, radioactive waste management and nuclear safety work in Armenia, Bulgaria, China, the Czech Republic, France, Kazakhstan, Lithuania, Romania, Russia, Slovakia and Ukraine. SOGIN also undertakes other decontamination work and in 2005 started to help decommission nuclear submarines of the Russian Navy. History Following the 1987 referendums on nuclear power, the Italian government was required to decommission the country's remaining nuclear plants. SOGIN was conceived as the company to undertake this work. SOGIN was created on 1 November 1999 and took ownership of the closed Caorso, Enrico Fermi, Garigliano and Latina nuclear power plants from the state-owned electricity company, ENEL. Initially, SOGIN was created as a part of the ENEL group, but, following the passing of Legislative Decree no. 79, the so-called Bersani decree of 16 March 1999, which marked the beginning of the liberalization of the Italian electricity sector, it was decided to split the group. On 3 November 2000, the SOGIN shares were transferred to the Ministry of Economy and Finance. In 2003, SOGIN also took responsibility for decommissioning ENEA sites such as EUREX, the OPEC research reactors in Cesano, and the ITREC plant in Rotondella. On 16 September 2004 SOGIN became a corporate group with the acquisition of 60% of the shares in Nucleco SpA (the remaining 40% being owned by ENEA). In 2005, SOGIN acquired the nuclear enrichment plant at Bosco Marengo, and, in 2012, the company started a three-year programme to decontaminate the boxes that had been used to store plutonium-contaminated gloves up to 1986. SOGIN launched the Observatory for the Closure of the Nuclear Cycle () with the Fondazione per lo Sviluppo Sostenibile (Foundation for Sustainable Development) in 2014 as an independent monitor of the social, environmental and technical aspects of nuclear sites. SOGIN was originally tasked with completely decommissioning the Italian nuclear plants by 2019, but it is likely to be 2036 before the task is complete. Decommissioning activity SOGIN is responsible for decommissioning four nuclear power plants, located in Caorso, Garigliano, Latina and Trino, as well as the operations in Bosco Marengo, Casaccia, Rotondella and Saluggia. The process, agreed in 2001, involves the systematic decontamination and deconstruction of the site with the aim that the area can be returned to normal use. Work starts with a pre-decommissioning stage, carried out under a protective storage license, during which the plant stops operation but no action is taken to dismantle it. When SOGIN took responsibility for Garigliano, the plant, which had not operated since 1982, was nearing completion of this stage. The seven sites under SOGIN's control have all gone through this stage in the process. 
After SOGIN has completed this task, the site is decontaminated and deconstructed. As well as radioactive material, other hazardous wastes need to be carefully handled, including asbestos insulation. This can take a long time; for example, at Garigliano, the removal of asbestos from the turbine building was complete by 2007, and yet the full decontamination of the reactor building was not complete until 2010. Only once it has been decontaminated can material safely be removed. Once this stage is complete, SOGIN requests a government license to dismantle the whole site. Full decommissioning then follows, including the removal of all buildings, and the ground is decontaminated. In all, over one million tonnes of material is expected to be recovered from the decommissioning process, of which 95% is non-radioactive. This process can take decades, with estimates for the total decommissioning time varying from 27 years for Garigliano to 32 years for Caorso and Trino. Costs are similarly high, with the total bill for Garigliano expected to reach $432.4 million by the time the site is handed over. The fuel enrichment site at Bosco Marengo was the first to start decommissioning. The process started in 2008 and was completed on 31 December 2021. The first nuclear power plant to gain permission to start full decommissioning was Trino, which was granted a decree by the Ministry of Economic Development on 2 August 2012. This was followed by a decree authorising the decommissioning of Garigliano on 26 September. Caorso and Latina were granted their licenses in 2014, in January and December respectively. National repository In September 2008, a high-level discussion took place within the Italian government about a central repository for all nuclear waste. This led, in 2010, to SOGIN being given the responsibility for finding a surface site to store nuclear waste. SOGIN projected the repository to be a structure with engineered barriers and natural barriers to store approximately of low- and intermediate-level waste permanently, and of high-level waste temporarily. SOGIN predicted that 60% of this would come from decommissioned plants. The remainder was to come from scientific research, medical and industrial applications, both waste produced to date and that which was estimated to be generated over the next 50 years. The creation of the repository was a critical requisite for SOGIN to achieve its decommissioning deadline. The repository was to be hosted in a technology park that also contained research labs, which would bring economic benefits to the community, as well as direct payment of compensation administered by SOGIN. Despite this, the search for a repository proved to be difficult. When the first site chosen, the salt mines of Scanzano Jonico, was announced in November 2003, it led to an unprecedented outcry, with over 150,000 demonstrating against the decision and residents blocking roads and shutting down businesses. This led directly to the regional council declaring the area a denuclearised zone. Subsequent changes in national legislation have been put in place in an attempt to ensure that any future site can only be agreed by the Council of Ministers after review by a panel of scientists. In 2012, the Italian Parliament passed a law that implied that all nuclear waste would be stored in the repository. 
However, continued controversy and the lack of progress in finding a site have meant that, instead, waste is mainly stored in untreated form at the nuclear facilities themselves. This is also unpopular with neighbouring communities, who fear this will become a permanent solution. International activity As a consequence of the difficulty of finding a long-term solution in the country, waste material has instead been sent abroad, primarily to France and the UK. Initially, up to 2005, shipments were made to BNFL in the UK. In November 2006, the Italian and French governments agreed to transfer about of spent fuel to France, which led to SOGIN signing a contract with Areva in April 2007. The first shipment under this agreement, of fuel from the Caorso nuclear power plant, was completed in June 2010. In 2015, SOGIN signed a similar contract with JAVYS (Jadrová a Vyraďovacia Spoločnosť), the Slovak nuclear decommissioning company, to send of waste to be processed at their site in Jaslovské Bohunice. SOGIN signed an agreement with the Radioactive Waste Repository Authority (RAWRA) in the Czech Republic in 2016 covering the storage of nuclear waste, including collaboration to develop a deep geological repository for spent nuclear fuel and high-level waste. As well as its core business of decommissioning Italian nuclear plants, SOGIN undertakes international consultancy in environmental remediation, radioactive waste management and nuclear safety. The company has undertaken projects at Metsamor in Armenia, Belene and Kozloduy in Bulgaria, Dukovany and Temelín in the Czech Republic, Phénix in France, Aktau in Kazakhstan, Ignalina in Lithuania, Cernavodă in Romania, Beloyarsk, Bilibino, Kalinin and Kola in Russia, Bohunice and Mochovce in Slovakia, and Khmelnytskyi and Rivne in Ukraine. SOGIN has been actively involved in the G8 Global Partnership programme, launched at the 2002 G8 summit in Kananaskis, to support and accelerate Russia's nuclear disarmament. On 3 August 2005, an agreement was signed between SOGIN and the Ministry of Industry for the company to dismantle Russian nuclear submarines. The programme required a specialist vessel, the Rossita, to be constructed, which was delivered in 2011. In 2014, SOGIN signed an agreement with China General Nuclear Power Group (CGN) to remove parts from the nuclear fuel pool of a Chinese plant. The contract opened the door to the companies sharing expertise on nuclear decommissioning and collaborating on policies and strategies to manage radioactive waste and used fuel in China. Amongst the first projects is a joint study of an innovative process for the minimization, treatment and conditioning of radioactive waste in Italy. See also Nuclear power in Italy References Citations Bibliography Governmental nuclear organizations Nuclear power in Italy Nuclear technology companies of Italy
SOGIN
Engineering
2,059
37,541,861
https://en.wikipedia.org/wiki/Big%20Brake
The Big Brake is a theoretical scientific model suggested as one of the possibilities for the ultimate fate of the universe. In this model the effect of dark energy reverses, stopping the accelerating expansion of the Universe and causing an infinite rate of deceleration at a finite time. All cosmic matter would be subjected to extreme tidal forces and be destroyed. Another possibility is that matter may still exist, albeit in a different form and organization. The consequences for space and time are also unclear. See also Big Bang Big Crunch Big Freeze Big Rip Big Whimper References Physical cosmology Ultimate fate of the universe
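For context (added here as a hedged aside, not drawn from the article's text): one concrete realization of a Big Brake discussed in the cosmology literature is the so-called anti-Chaplygin gas, a fluid whose pressure rises as its density falls:

```latex
% Anti-Chaplygin gas equation of state, with constant A > 0:
p = \frac{A}{\rho}
% As expansion dilutes the density \rho, the pressure p grows without
% bound, so the deceleration diverges while the scale factor stays
% finite: the expansion is brought to an infinitely hard "brake".
```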
Big Brake
Physics,Astronomy
115
37,854,767
https://en.wikipedia.org/wiki/Cellhelmet
Cellhelmet is a mobile accessories manufacturer and distributor, headquartered in Pittsburgh, Pennsylvania, with a satellite office in Shenzhen, founded in 2012. The company appeared on ABC's Shark Tank on March 8, 2013. Cellhelmet is a registered trademark of Kane and McHenry Enterprises, LLC. Its original business model was to include accidental damage coverage with its mobile phone accessories, should they fail to protect a given device. If a device breaks inside of a cellhelmet case, the company will repair or replace it, according to coverage terms. Since then, the company has introduced various other cell phone accessory lines, including screen protection products, chargers, data cables, cleaning solution and selfie sticks for cell phones and tablets, which do not carry coverage. Cellhelmet made its debut on Kickstarter in January 2012 as the first company to include accidental damage coverage with its protective accessories. Cellhelmet is owned and operated by Michael Kane and David Eldridge of Pittsburgh, Pennsylvania. See also Made in USA Mobile phone accessories References External links cellhelmet.com IPhone accessories Manufacturing companies based in Pittsburgh
Cellhelmet
Technology
221
43,024
https://en.wikipedia.org/wiki/Levee
A levee ( or ), dike (American English), dyke (British English; see spelling differences), embankment, floodbank, or stop bank is an elevated ridge, natural or artificial, alongside the banks of a river, often intended to protect against flooding of the area adjoining the river. It is usually earthen and often runs parallel to the course of a river in its floodplain or along low-lying coastlines. Naturally occurring levees form on river floodplains following flooding, where sediment and alluvium are deposited and settle, forming a ridge and increasing the river channel's capacity. Alternatively, levees can be artificially constructed from fill, designed to regulate water levels. In some circumstances, artificial levees can be environmentally damaging. Ancient civilizations in the Indus Valley, ancient Egypt, Mesopotamia and China all built levees. Today, levees can be found around the world, and failures of levees due to erosion or other causes can be major disasters, such as the catastrophic 2005 levee failures in Greater New Orleans that occurred as a result of Hurricane Katrina. Etymology Speakers of American English use the word levee, from the French word levée (the feminine past participle of the French verb lever, 'to raise'). It originated in New Orleans a few years after the city's founding in 1718 and was later adopted by English speakers. The name derives from the trait of the levee's ridges being raised higher than both the channel and the surrounding floodplains. The modern word dike or dyke most likely derives from the Dutch word dijk, with the construction of dikes well attested as early as the 11th century. The Westfriese Omringdijk, completed by 1250, was formed by connecting existing older dikes. The Roman chronicler Tacitus mentions that the rebellious Batavi pierced dikes to flood their land and to protect their retreat (70 CE). The word originally indicated both the trench and the bank; it closely parallels the English verb to dig. In Anglo-Saxon, the word dic already existed and was pronounced as dick in northern England and as ditch in the south. Similar to Dutch, the English origins of the word lie in digging a trench and forming the upcast soil into a bank alongside it. This practice has meant that the name may be given to either the excavation or to the bank. Thus Offa's Dyke is a combined structure and Car Dyke is a trench – though it once had raised banks as well. In the English Midlands and East Anglia, and in the United States, a dike is what a ditch is in the south of England, a property-boundary marker or drainage channel. Where it carries a stream, it may be called a running dike, as in Rippingale Running Dike, which leads water from the catchwater drain, Car Dyke, to the South Forty Foot Drain in Lincolnshire (TF1427). The Weir Dike is a soak dike in Bourne North Fen, near Twenty and alongside the River Glen, Lincolnshire. In the Norfolk and Suffolk Broads, a dyke may be a drainage ditch or a narrow artificial channel off a river or broad for access or mooring, some longer dykes being named, e.g., Candle Dyke. In parts of Britain, particularly Scotland and Northern England, a dyke may be a field wall, generally made with dry stone. Uses The main purpose of artificial levees is to prevent flooding of the adjoining countryside and to slow natural course changes in a waterway to provide reliable shipping lanes for maritime commerce over time; they also confine the flow of the river, resulting in higher and faster water flow. 
Levees can be mainly found along the sea, where dunes are not strong enough, along rivers for protection against high floods, along lakes or along polders. Furthermore, levees have been built for the purpose of impoldering, or as a boundary for an inundation area. The latter can be a controlled inundation by the military or a measure to prevent inundation of a larger area surrounded by levees. Levees have also been built as field boundaries and as military defences. More on this type of levee can be found in the article on dry-stone walls. Levees can be permanent earthworks or emergency constructions (often of sandbags) built hastily in a flood emergency. Some of the earliest levees were constructed by the Indus Valley civilization (in present-day Pakistan and North India), on which the agrarian life of the Harappan peoples depended. Levees were also constructed over 3,000 years ago in ancient Egypt, where a system of levees was built along the left bank of the River Nile, stretching from modern Aswan to the Nile Delta on the shores of the Mediterranean. The Mesopotamian civilizations and ancient China also built large levee systems. Because a levee is only as strong as its weakest point, the height and standards of construction have to be consistent along its length. Some authorities have argued that this requires a strong governing authority to guide the work and may have been a catalyst for the development of systems of governance in early civilizations. However, others point to evidence of large-scale water-control earthen works such as canals and/or levees dating from before King Scorpion in Predynastic Egypt, during which governance was far less centralized. Another example is the historical levee that protected the growing city-state of Mēxihco-Tenōchtitlan and the neighboring city of Tlatelōlco, constructed during the early 1400s under the supervision of the tlahtoani of the altepetl Texcoco, Nezahualcoyotl. Its function was to separate the brackish waters of Lake Texcoco (ideal for the agricultural technique Chināmitls) from the fresh potable water supplied to the settlements. However, after the Europeans destroyed Tenochtitlan, the levee was also destroyed and flooding became a major problem, which resulted in the majority of the lake being drained in the 17th century. Levees are usually built by piling earth on a cleared, level surface. Broad at the base, they taper to a level top, where temporary embankments or sandbags can be placed. Because levees confine flood discharge to the channel, increasing flow intensity along both river banks, and because silt deposits raise the level of riverbeds, planning and auxiliary measures are vital. Sections are often set back from the river to form a wider channel, and flood valley basins are divided by multiple levees to prevent a single breach from flooding a large area. A levee made from stones laid in horizontal rows with a bed of thin turf between each of them is known as a spetchel. Artificial levees require substantial engineering. Their surface must be protected from erosion, so they are planted with vegetation such as Bermuda grass in order to bind the earth together. On the land side of high levees, a low terrace of earth known as a banquette is usually added as another anti-erosion measure. On the river side, erosion from strong waves or currents presents an even greater threat to the integrity of the levee. The effects of erosion are countered by planting suitable vegetation or installing stones, boulders, weighted matting, or concrete revetments.
Separate ditches or drainage tiles are constructed to ensure that the foundation does not become waterlogged. River flood prevention Prominent levee systems have been built along the Mississippi River and Sacramento River in the United States, and the Po, Rhine, Meuse, Rhône, Loire, Vistula, the delta formed by the Rhine, Maas/Meuse and Scheldt in the Netherlands and the Danube in Europe. During the Chinese Warring States period, the Dujiangyan irrigation system was built by the Qin as a water conservation and flood control project. The system's infrastructure is located on the Min River, which is the longest tributary of the Yangtze River, in Sichuan, China. The Mississippi levee system represents one of the largest such systems found anywhere in the world. It comprises over 3,500 miles (5,600 km) of levees extending some 620 miles (1,000 km) along the Mississippi, stretching from Cape Girardeau, Missouri, to the Mississippi delta. They were begun by French settlers in Louisiana in the 18th century to protect the city of New Orleans. The first Louisiana levees were about 3 feet (0.9 m) high and covered a distance of about 50 miles (80 km) along the riverside. The U.S. Army Corps of Engineers, in conjunction with the Mississippi River Commission, extended the levee system beginning in 1882 to cover the riverbanks from Cairo, Illinois to the mouth of the Mississippi delta in Louisiana. By the mid-1980s, they had reached their present extent and averaged 24 feet (7.3 m) in height; some Mississippi levees are as high as 50 feet (15 m). The Mississippi levees also include some of the longest continuous individual levees in the world. One such levee extends southwards from Pine Bluff, Arkansas, for a distance of some 380 miles (610 km). The scope and scale of the Mississippi levees have often been compared to the Great Wall of China. The United States Army Corps of Engineers (USACE) recommends and supports cellular confinement technology (geocells) as a best management practice. Particular attention is given to the matter of surface erosion, overtopping prevention and protection of levee crest and downstream slope. Reinforcement with geocells provides tensile force to the soil to better resist instability. Artificial levees can lead to an elevation of the natural riverbed over time; whether and how fast this happens depends on several factors, one of them being the amount and type of the bed load of a river. Alluvial rivers with intense accumulations of sediment tend toward this behavior. Examples of rivers where artificial levees led to an elevation of the riverbed, even up to a point where the riverbed is higher than the adjacent ground surface behind the levees, are found for the Yellow River in China and the Mississippi in the United States. Coastal flood prevention Levees are very common on the marshlands bordering the Bay of Fundy in New Brunswick and Nova Scotia, Canada. The Acadians who settled the area can be credited with the original construction of many of the levees in the area, created for the purpose of farming the fertile tidal marshlands. These levees are referred to as dykes. They are constructed with hinged sluice gates that open on the falling tide to drain freshwater from the agricultural marshlands and close on the rising tide to prevent seawater from entering behind the dyke. These sluice gates are called "aboiteaux". In the Lower Mainland around the city of Vancouver, British Columbia, there are levees (known locally as dikes, and also referred to as "the sea wall") to protect low-lying land in the Fraser River delta, particularly the city of Richmond on Lulu Island.
There are also dikes to protect other locations which have flooded in the past, such as the Pitt Polder, land adjacent to the Pitt River, and other tributary rivers. Coastal flood prevention levees are also common along the inland coastline behind the Wadden Sea, an area devastated by many historic floods. Thus the peoples and governments there have erected increasingly large and complex flood protection levee systems to stop the sea even during storm floods. The biggest of these are the huge levees in the Netherlands, which have gone beyond just defending against floods, as they have aggressively taken back land that is below mean sea level. Spur dykes or groynes These man-made hydraulic structures are situated to protect against erosion. They are typically placed in alluvial rivers perpendicular, or at an angle, to the bank of the channel or the revetment, and are used widely along coastlines. There are two common types of spur dyke, permeable and impermeable, depending on the materials used to construct them. Natural examples Natural levees commonly form around lowland rivers and creeks without human intervention. They are elongated ridges of mud and/or silt that form on the river floodplains immediately adjacent to the cut banks. Like artificial levees, they act to reduce the likelihood of floodplain inundation. Deposition of levees is a natural consequence of the flooding of meandering rivers which carry high proportions of suspended sediment in the form of fine sands, silts, and muds. Because the carrying capacity of a river depends in part on its depth, the shallower water flowing over the flooded banks of the channel cannot keep as much fine sediment in suspension as the deeper water of the main thalweg. The excess fine sediment thus settles out quickly on the parts of the floodplain nearest to the channel. Over many floods, this eventually results in the building up of ridges in these positions, reducing the likelihood of further floods and further episodes of levee building. If aggradation continues to occur in the main channel, this will make levee overtopping more likely again, and the levees can continue to build up. In some cases, this can result in the channel bed eventually rising above the surrounding floodplains, penned in only by the levees around it; an example is the Yellow River in China near the sea, where oceangoing ships appear to sail high above the plain on the elevated river. Levees are common in any river with a high suspended sediment fraction and thus are intimately associated with meandering channels, which also are more likely to occur where a river carries large fractions of suspended sediment. For similar reasons, they are also common in tidal creeks, where tides bring in large amounts of coastal silts and muds. High spring tides will cause flooding and result in the building up of levees. Failures and breaches Both natural and man-made levees can fail in a number of ways. Factors that cause levee failure include overtopping, erosion, structural failures, and levee saturation. The most frequent (and dangerous) is a levee breach. Here, a part of the levee actually breaks or is eroded away, leaving a large opening for water to flood land otherwise protected by the levee. A breach can be a sudden or gradual failure, caused either by surface erosion or by subsurface weakness in the levee. A breach can leave a fan-shaped deposit of sediment radiating away from the breach, described as a crevasse splay.
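Both the natural levee deposition described above and crevasse-splay deposits come down to how quickly different grain sizes settle once floodwater shallows and slows. This can be made concrete with Stokes' law, the standard formula for the terminal velocity of a small particle in still water; the sketch below is illustrative and not taken from the article, using typical textbook values for quartz sediment:

```python
def stokes_settling_velocity(diameter_m, rho_particle=2650.0,
                             rho_fluid=1000.0, viscosity=1.0e-3, g=9.81):
    """Terminal settling velocity of a small sphere (Stokes' law):
    v = (rho_p - rho_f) * g * d**2 / (18 * mu).
    Strictly valid only for particle Reynolds numbers below ~1, so the
    fine-sand figure should be read as a rough upper-bound estimate."""
    return (rho_particle - rho_fluid) * g * diameter_m ** 2 / (18 * viscosity)

print(stokes_settling_velocity(125e-6))  # fine sand (125 um): ~1.4e-2 m/s
print(stokes_settling_velocity(10e-6))   # silt (10 um):       ~9.0e-5 m/s
```

The two-orders-of-magnitude difference is the point: sand drops out within metres of the channel while silt and clay stay in suspension far longer, which is why the coarsest material builds the ridge nearest the channel.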
In natural levees, once a breach has occurred, the gap in the levee will remain until it is again filled in by levee building processes. This increases the chances of future breaches occurring in the same location. Breaches can be the location of meander cutoffs if the river flow direction is permanently diverted through the gap. Sometimes levees are said to fail when water overtops the crest of the levee. This will cause flooding on the floodplains, but because it does not damage the levee, it has fewer consequences for future flooding. Among the various failure mechanisms that cause levee breaches, soil erosion is found to be one of the most important factors. Predicting soil erosion and scour generation when overtopping happens is important in order to design stable levees and floodwalls. There have been numerous studies to investigate the erodibility of soils. Briaud et al. (2008) used the Erosion Function Apparatus (EFA) test to measure the erodibility of soils; numerical simulations were then performed with Chen 3D software to determine the velocity vectors in the overtopping water and the scour generated when the overtopping water impinges on the levee. By analyzing the results of the EFA tests, an erosion chart to categorize the erodibility of soils was developed. Hughes and Nadal in 2009 studied the effect of the combination of wave overtopping and storm surge overflow on erosion and scour generation in levees. The study included hydraulic parameters and flow characteristics such as flow thickness, wave intervals, and surge level above the levee crown in analyzing scour development. From the laboratory tests, empirical correlations related to average overtopping discharge were derived to analyze the resistance of a levee against erosion. These equations strictly fit only situations similar to the experimental tests, though they can give a reasonable estimate when applied to other conditions. Osouli et al. (2014) and Karimpour et al. (2015) conducted lab-scale physical modeling of levees to evaluate scour characterization of different levees due to floodwall overtopping. Another approach applied to prevent levee failures is electrical resistivity tomography (ERT). This non-destructive geophysical method can detect critical saturation areas in embankments in advance. ERT can thus be used in monitoring of seepage phenomena in earth structures and act as an early warning system, e.g., in critical parts of levees or embankments. Negative impacts Large scale structures designed to modify natural processes inevitably have some drawbacks or negative impacts. Ecological impact Levees interrupt floodplain ecosystems that developed under conditions of seasonal flooding. In many cases, the impact is two-fold, as reduced recurrence of flooding also facilitates land-use change from forested floodplain to farms. Increased height In a natural watershed, floodwaters spread over a landscape and slowly return to the river. Downstream, the delivery of water from the area of flooding is spread out in time. If levees keep the floodwaters inside a narrow channel, the water is delivered downstream over a shorter time period. The same volume of water over a shorter time interval means a higher river stage (height). As more levees are built upstream, the recurrence interval for high-water events in the river increases, often requiring increases in levee height. Levee breaches produce high-energy flooding During natural flooding, water spilling over banks rises slowly.
When a levee fails, a wall of water held back by the levee suddenly pours out over the landscape, much like a dam break. Impacted areas far from a breach may experience flooding similar to a natural event, while damage near a breach can be catastrophic, including carving out deep holes and channels in the nearby landscape. Prolonged flooding after levee failure Under natural conditions, floodwaters return quickly to the river channel as water-levels drop. During a levee breach, water pours out into the floodplain and moves down-slope where it is blocked from return to the river. Flooding is prolonged over such areas, waiting for floodwater to slowly infiltrate and evaporate. Subsidence and seawater intrusion Natural flooding adds a layer of sediment to the floodplain. The added weight of such layers over many centuries makes the crust sink deeper into the mantle, much like a floating block of wood is pushed deeper into the water if another board is added on top. The momentum of downward movement does not immediately stop when new sediment layers stop being added, resulting in subsidence (sinking of land surface). In coastal areas, this results in land dipping below sea level, the ocean migrating inland, and salt-water intruding into freshwater aquifers. Coastal sediment loss Where a large river spills out into the ocean, the velocity of the water suddenly slows and its ability to transport sand and silt decreases. Sediments begin to settle out, eventually forming a delta and extending to the coastline seaward. During subsequent flood events, water spilling out of the channel will find a shorter route to the ocean and begin building a new delta. Wave action and ocean currents redistribute some of the sediment to build beaches along the coast. When levees are constructed all the way to the ocean, sediments from flooding events are cut off, the river never migrates, and elevated river velocity delivers sediment to deep water where wave action and ocean currents cannot redistribute. Instead of a natural wedge shaped delta forming, a "birds-foot delta" extends far out into the ocean. The results for surrounding land include beach depletion, subsidence, salt-water intrusion, and land loss. See also Lava channel Notes References External links "Well Diggers Trick", June 1951, Popular Science article on how flood control engineers were using an old method to protect flood levees along rivers from seepage undermining the levee "Design and Construction of Levees" US Army Engineer Manual EM-1110-2-1913 The International Levee Handbook Flood control Fluvial landforms Riparian zone
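The "Increased height" effect described earlier (the same flood volume, delivered in a shorter time, means a higher stage) can be illustrated with Manning's equation for open-channel flow. A minimal sketch with invented example numbers; the roughness coefficient and slope are typical order-of-magnitude values, not data for any real river:

```python
def stage(discharge_m3s, width_m, n=0.03, slope=1e-4):
    """Approximate water depth (stage) in a wide rectangular channel from
    Manning's equation, Q = (1/n) * A * R**(2/3) * S**(1/2).
    For width >> depth the hydraulic radius R ~ depth h and A = width * h,
    so h = (n * Q / (width * sqrt(S))) ** (3/5)."""
    return (n * discharge_m3s / (width_m * slope ** 0.5)) ** 0.6

print(stage(2000, 500))  # ~4.4 m
print(stage(4000, 500))  # ~6.7 m: doubling discharge raises stage ~50%
```

Delivering the same flood volume in half the time doubles the discharge, and the stage rises by a factor of about 2**0.6 ≈ 1.5, which is why downstream levees then need to be raised in turn.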
Levee
Chemistry,Engineering,Environmental_science
4,165
32,171,182
https://en.wikipedia.org/wiki/BsuBI/PstI%20restriction%20endonuclease
In molecular biology, the BsuBI/PstI restriction endonuclease family is a family of type II restriction endonucleases. It includes BsuBI and PstI. The enzymes of the BsuBI restriction/modification (R/M) system recognise the target sequence 5'-CTGCAG-3' and are functionally identical to those of the PstI R/M system. References Protein families
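Since the recognition sequence is given, locating BsuBI/PstI sites in a DNA sequence is straightforward to sketch. Note that CTGCAG equals its own reverse complement, so scanning one strand finds the sites on both; the example sequence below is invented for illustration:

```python
PSTI_SITE = "CTGCAG"  # recognition sequence shared by BsuBI and PstI

def find_sites(dna, site=PSTI_SITE):
    """Return 0-based start positions of every recognition site.
    CTGCAG is palindromic (its own reverse complement), so scanning
    one strand covers both strands."""
    dna = dna.upper()
    return [i for i in range(len(dna) - len(site) + 1)
            if dna[i:i + len(site)] == site]

print(find_sites("AAACTGCAGTTTCTGCAG"))  # [3, 12]
```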
BsuBI/PstI restriction endonuclease
Biology
86
25,089,370
https://en.wikipedia.org/wiki/HyShot
HyShot is a research project of The University of Queensland, Australia Centre for Hypersonics, to demonstrate the possibility of supersonic combustion under flight conditions using two scramjet engines, one designed by The University of Queensland and one designed by QinetiQ (formerly the MOD's Defence Evaluation & Research Agency). Overview The project has involved the successful launch of one engine designed by The University of Queensland, and one launch of the scramjet designed by the British company QinetiQ. Each combustion unit was launched on the nose of a Terrier-Orion Mk70 sounding rocket on a high ballistic trajectory, reaching altitudes of approximately 330 km. The rocket was rotated to face the ground, and the combustion unit ignited for a period of 6–10 seconds while falling between 35 km and 23 km at around Mach 7.6. The system is not designed to produce thrust. The first HyShot flight was on 30 October 2001 but was a failure due to the rocket going off course. The first successful launch (HyShot II) was of a University of Queensland scramjet on 30 July 2002. (There has been much analysis of the data obtained in flight and comparison with results from experiments conducted in ground-testing facilities.) (It is believed by many to be the first successful flight of a scramjet engine, although some dispute this and point primarily to earlier tests by Russian scientists. Also of note was the successful achievement of thrust by GASL scramjets launched on June 20 and July 26, 2001 under DARPA sponsorship.) A second successful flight (HyShot III) using a QinetiQ scramjet was achieved on 25 March 2006. The later QinetiQ prototype is cylindrical with four stainless steel combustors around the outside. The aerodynamics of the vehicle are improved by this arrangement, but it was expensive to manufacture. The HyShot IV flight on 30 March 2006 launched successfully, and telemetry was received; however, it is believed that the scramjet did not function as expected. Data analysis is required to confirm what occurred. HyCAUSE was launched on 15 June 2007. (The HyCAUSE experiment differed from the HyShot launches in that a Talos-Castor combination was used for launch and the target Mach number was 10.) The carrier rocket for the HyShot experiments was composed of a RIM-2 Terrier first stage (6 second burn, 4000 km/h) and an Orion second stage (26 second burn, 8600 km/h, 56 km altitude). A fairing over the payload was then jettisoned. The package then coasted to an altitude of around 300 km. Cold gas nitrogen attitude control thrusters were used to re-orient the payload for atmospheric reentry. The experiments each lasted for some 5 seconds as the payload descended between approximately 35 and 23 kilometers altitude, when liquid hydrogen fuel was fed to the scramjet. Telemetry reported results to receivers on the ground for later analysis. The payload landed about 400 km down range from the launch site, at which time its temperature was still expected to be about 300 degrees Celsius, which may be enough to cause a small brush fire and thereby make spotting and recovery easier even though a radio beacon was in the payload. Legacy The team continue to work as part of the Australian Hypersonics Initiative, a joint program of The University of Queensland, the Australian National University and the University of New South Wales' Australian Defence Force Academy campus, the governments of Queensland and South Australia and the Australian Defence Department.
The HyShot program spawned the HyCAUSE (Hypersonic Collaborative Australian/United States Experiment) program: a collaborative effort between the United States' Defense Advanced Research Projects Agency (DARPA) and Australia's Defence Science and Technology Organisation (DSTO), also representing the research collaborators in the Australian Hypersonics Initiative (AHI). All tests were conducted at the Woomera Test Range in South Australia. HIFiRE program The Hypersonic International Flight Research Experimentation (HIFiRE) program was created jointly by DSTO (now DSTG) and the Air Force Research Laboratory (AFRL). HIFiRE was formed to investigate hypersonic flight technology, the fundamental science and technology required, and its potential for next generation aeronautical systems. Boeing is also a commercial partner in the project. This will involve up to ten flights with The University of Queensland involved in at least the first three: HyShot V — A free-flying hypersonic glider HyShot VI — A free-flying Mach 8 scramjet HyShot VII — Sustained Mach 8 scramjet-powered flight See also Scramjet NASA X-43 Boeing X-51 References External links Hyshot images Centre for Hypersonics - HyShot Scramjet Test Programme Aerospace engineering Research projects
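As a rough plausibility check on the flight figures quoted in the overview, simple energy conservation for a fall from the quoted ~330 km apogee down to the 35 km test window reproduces the stated speed. A hedged back-of-envelope sketch that ignores drag and any residual horizontal velocity at apogee:

```python
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # mean Earth radius, m

def fall_speed(apogee_m, altitude_m):
    """Speed gained falling from rest at apogee down to a lower altitude,
    by conservation of energy in an inverse-square gravity field."""
    r_hi = R_EARTH + apogee_m
    r_lo = R_EARTH + altitude_m
    return math.sqrt(2 * MU * (1 / r_lo - 1 / r_hi))

v = fall_speed(330e3, 35e3)          # ~2,340 m/s
a = math.sqrt(1.4 * 287.05 * 237.0)  # speed of sound at ~35 km (T ~ 237 K)
print(f"{v:.0f} m/s, about Mach {v / a:.1f}")  # about Mach 7.6
```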
HyShot
Engineering
984
46,331
https://en.wikipedia.org/wiki/Flatfish
A flatfish is a member of the ray-finned demersal fish suborder Pleuronectoidei, also called the Heterosomata. In many species, both eyes lie on one side of the head, one or the other migrating through or around the head during development. Some species face their left sides upward, some face their right sides upward, and others face either side upward. The most primitive members of the group, the threadfins, do not resemble the flatfish but are their closest relatives. Many important food fish are in this order, including the flounders, soles, turbot, plaice, and halibut. Some flatfish can camouflage themselves on the ocean floor. Taxonomy Due to their highly distinctive morphology, flatfishes were previously treated as belonging to their own order, Pleuronectiformes. However, more recent taxonomic studies have found them to group within a diverse group of nektonic marine fishes known as the Carangiformes, which also includes jacks and billfish. Specifically, flatfish are most closely related to the threadfins, which are now also placed in the suborder Pleuronectoidei. Together, the group is most closely related to the archerfish and beachsalmons within Toxotoidei. Due to this, they are now treated as a suborder of the Carangiformes. Over 800 described species are placed into 16 families. When they were treated as an order, the flatfishes were divided into two suborders, Psettodoidei and Pleuronectoidei, with more than 99% of the species diversity found within the Pleuronectoidei. The largest families are Soleidae, Bothidae and Cynoglossidae with more than 150 species each. There also exist two monotypic families (Paralichthodidae and Oncopteridae). Some families are the results of relatively recent splits. For example, the Achiridae were classified as a subfamily of Soleidae in the past, and the Samaridae were considered a subfamily of the Pleuronectidae. The families Paralichthodidae, Poecilopsettidae, and Rhombosoleidae were also traditionally treated as subfamilies of Pleuronectidae, but are now recognised as families in their own right. The Paralichthyidae has long been indicated to be paraphyletic, with the formal description of Cyclopsettidae in 2019 resulting in the split of this family as well. The taxonomy of some groups is in need of a review. The last monograph covering the entire order was John Roxborough Norman's Monograph of the Flatfishes published in 1934. In particular, Tephrinectes sinensis may represent a family-level lineage and requires further evaluation. New species are described with some regularity, and undescribed species likely remain. Hybrids Hybrids are well known in flatfishes. The Pleuronectidae have the largest number of reported hybrids of marine fishes. Two of the most famous intergeneric hybrids are between the European plaice (Pleuronectes platessa) and European flounder (Platichthys flesus) in the Baltic Sea, and between the English sole (Parophrys vetulus) and starry flounder (Platichthys stellatus) in Puget Sound. The offspring of the latter species pair is popularly known as the hybrid sole and was initially believed to be a valid species in its own right. Distribution Flatfishes are found in oceans worldwide, ranging from the Arctic, through the tropics, to Antarctica. Species diversity is centered in the Indo-West Pacific and declines following both latitudinal and longitudinal gradients away from the Indo-West Pacific. Most species are found at relatively shallow depths on the continental shelf, but a few have been recorded from considerably greater depths.
None have been confirmed from the abyssal or hadal zones. An observation of a flatfish from the Bathyscaphe Trieste at the bottom of the Mariana Trench, at a depth of almost 11 kilometres (36,000 ft), has been questioned by fish experts, and recent authorities do not recognize it as valid. Among the deepwater species, Symphurus thermophilus lives congregating around "ponds" of sulphur at hydrothermal vents on the seafloor. No other flatfish is known from hydrothermal vents. Many species will enter brackish or fresh water, and a smaller number of soles (families Achiridae and Soleidae) and tonguefish (Cynoglossidae) are entirely restricted to fresh water. Characteristics The most obvious characteristic of the flatfish is its asymmetry, with both eyes lying on the same side of the head in the adult fish. In some families, the eyes are usually on the right side of the body (dextral or right-eyed flatfish), and in others, they are usually on the left (sinistral or left-eyed flatfish). The primitive spiny turbots include equal numbers of right- and left-sided individuals, and are generally less asymmetrical than the other families. Other distinguishing features of the group are the presence of protrusible eyes, another adaptation to living on the seabed (benthos), and the extension of the dorsal fin onto the head. The most basal members of the group, the threadfins, do not closely resemble the flatfishes. The surface of the fish facing away from the sea floor is pigmented, often serving to camouflage the fish, but sometimes with striking coloured patterns. Some flatfishes are also able to change their pigmentation to match the background, in a manner similar to some cephalopods. The side of the body without the eyes, facing the seabed, is usually colourless or very pale. In general, flatfishes rely on their camouflage for avoiding predators, but some have aposematic traits such as conspicuous eyespots (e.g., Microchirus ocellatus) and several small tropical species (at least Aseraggodes, Pardachirus and Zebrias) are poisonous. Juveniles of Soleichthys maculosus mimic toxic flatworms of the genus Pseudobiceros in both colours and swimming mode. Conversely, a few octopus species have been reported to mimic flatfishes in colours, shape and swimming mode. The flounders and spiny turbots eat smaller fish, and have well-developed teeth. They sometimes seek prey in the midwater, away from the bottom, and show fewer extreme adaptations than other families. The soles, by contrast, are almost exclusively bottom-dwellers, and feed on invertebrates. They show a more extreme asymmetry, and may lack teeth on one side of the jaw. Flatfishes range in size from Tarphops oligolepis, measuring about 4.5 centimetres (1.8 in) in length and weighing about 2 grams (0.07 oz), to the Atlantic halibut, at 2.5 metres (8.2 ft) and 316 kilograms (697 lb). Species and species groups Brill Dab Sanddab Flounder Halibut Megrim Plaice Sole Tonguefish Turbot Reproduction Flatfishes lay eggs that hatch into larvae resembling typical, symmetrical, fish. These are initially elongated, but quickly develop into a more rounded form. The larvae typically have protective spines on the head, over the gills, and in the pelvic and pectoral fins. They also possess a swim bladder, and do not dwell on the bottom, instead dispersing from their hatching grounds as plankton. The length of the planktonic stage varies between different types of flatfishes, but eventually they begin to metamorphose into the adult form. One of the eyes migrates across the top of the head and onto the other side of the body, leaving the fish blind on one side.
The larva also loses its swim bladder and spines, and sinks to the bottom, laying its blind side on the underlying surface. Origin and evolution Scientists have been proposing since the 1910s that flatfishes evolved from percoid ancestors. There has been some disagreement whether they are a monophyletic group. Some palaeontologists think that some percomorph groups other than flatfishes were "experimenting" with head asymmetry during the Eocene, and certain molecular studies conclude that the primitive family Psettodidae evolved its flat body and asymmetrical head independently of other flatfish groups. Many scientists, however, argue that the Pleuronectiformes are monophyletic. The fossil record indicates that flatfishes might have been present before the Eocene, based on fossil otoliths resembling those of modern pleuronectiforms dating back to the Thanetian and Ypresian stages (57–53 million years ago). Flatfishes have been cited as dramatic examples of evolutionary adaptation. Richard Dawkins, in The Blind Watchmaker, explains the flatfishes' evolutionary history thus: ...bony fish as a rule have a marked tendency to be flattened in a vertical direction.... It was natural, therefore, that when the ancestors of [flatfish] took to the sea bottom, they should have lain on one side.... But this raised the problem that one eye was always looking down into the sand and was effectively useless. In evolution this problem was solved by the lower eye 'moving' round to the upper side. The origin of the unusual morphology of flatfishes was enigmatic up to the 2000s, and early researchers suggested that it came about as a result of saltation rather than gradual evolution through natural selection, because a partially migrated eye was considered to have been maladaptive. This started to change in 2008 with a study on the two fossil genera Amphistium and Heteronectes, dated to about 50 million years ago. These genera retain primitive features not seen in modern types of flatfishes. In addition, their heads are less asymmetric than modern flatfishes, retaining one eye on each side of their heads, although the eye on one side is closer to the top of the head than on the other. The more recently described fossil genera Quasinectes and Anorevus have been proposed to show similar morphologies and have also been classified as "stem pleuronectiforms". Such findings led Matt Friedman, the author of the 2008 study, to conclude that the evolution of flatfish morphology "happened gradually, in a way consistent with evolution via natural selection—not suddenly, as researchers once had little choice but to believe." To explain the survival advantage of a partially migrated eye, it has been proposed that primitive flatfishes like Amphistium rested with the head propped up above the seafloor (a behaviour sometimes observed in modern flatfishes), enabling them to use their partially migrated eye to see things closer to the seafloor. While known basal genera like Amphistium and Heteronectes support a gradual acquisition of the flatfish morphology, they were probably not direct ancestors of living pleuronectiforms, as fossil evidence indicates that most flatfish lineages living today were present in the Eocene and contemporaneous with them. It has been suggested that the more primitive forms were eventually outcompeted. As food Flatfish is considered a whitefish because of the high concentration of oils within its liver. Its lean flesh makes for a unique flavor that differs from species to species.
Methods of cooking include grilling, pan-frying, baking and deep-frying. Timeline of genera See also Sinistral and dextral References Further reading Gibson, Robin N (Ed) (2008) Flatfishes: biology and exploitation. Wiley. Munroe, Thomas A (2005) "Distributions and biogeography." Flatfishes: Biology and Exploitation: 42–67. External links Information on Canadian fisheries of plaice Commercial fish Articles which contain graphical timelines Extant Paleocene first appearances Asymmetry
Flatfish
Physics
2,455
18,493,095
https://en.wikipedia.org/wiki/Open%20world
In video games, an open world is a virtual world in which the player can approach objectives freely, as opposed to a world with more linear and structured gameplay. Notable games in this category include The Legend of Zelda (1986), Grand Theft Auto V (2013), Red Dead Redemption 2 (2018) and Minecraft (2011). Games with open or free-roaming worlds typically lack level structures like walls and locked doors, or the invisible walls in more open areas that prevent the player from venturing beyond them; only at the bounds of an open-world game will players be limited by geographic features like vast oceans or impassable mountains. Players typically do not encounter loading screens common in linear level designs when moving about the game world, with the open-world game using strategic storage and memory techniques to load the game world dynamically and seamlessly. Open-world games still enforce many restrictions in the game environment, either because of absolute technical limitations or in-game limitations imposed by a game's linearity. While the openness of the game world is an important facet to games featuring open worlds, the main draw of open-world games is about providing the player with autonomy—not so much the freedom to do anything they want in the game (which is nearly impossible with current computing technology), but the ability to choose how to approach the game and its challenges in the order and manner the player desires while still constrained by gameplay rules. Examples of a high level of autonomy in computer games can be found in massively multiplayer online role-playing games (MMORPG) or in single-player games adhering to the open-world concept such as the Fallout series. The main appeal of open-world gameplay is that it provides a simulated reality and allows players to develop their character and its behavior in the direction and pace of their own choosing. In these cases, there is often no concrete goal or end to the game, although there may be a main storyline, as with games like The Elder Scrolls V: Skyrim. Gameplay and design An open world is a level or game designed as nonlinear, open areas with many ways to reach an objective. Some games are designed with both traditional and open-world levels. An open world facilitates greater exploration than a series of smaller levels, or a level with more linear challenges. Reviewers have judged the quality of an open world based on whether there are interesting ways for the player to interact with the broader level when they ignore their main objective. Some games actually use real settings to model an open world, such as New York City. A major design challenge is to balance the freedom of an open world with the structure of a dramatic storyline. Since players may perform actions that the game designer did not expect, the game's writers must find creative ways to impose a storyline on the player without interfering with their freedom. As such, games with open worlds will sometimes break the game's story into a series of missions, or have a much simpler storyline altogether. Other games instead offer side-missions to the player that do not disrupt the main storyline. Most open-world games make the character a blank slate that players can project their own thoughts onto, although several games such as Landstalker: The Treasures of King Nole offer more character development and dialogue.
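A common way to realize the seamless, loading-screen-free traversal described above is chunked streaming: the world is partitioned into fixed-size chunks, and only those near the player are resident in memory at any moment. A minimal sketch of the bookkeeping; the chunk size, radius, and function names are illustrative and not taken from any particular engine:

```python
def needed_chunks(player_x, player_z, radius=2, chunk_size=16):
    """Chunk coordinates within `radius` chunks of the player's position."""
    cx, cz = int(player_x // chunk_size), int(player_z // chunk_size)
    return {(x, z)
            for x in range(cx - radius, cx + radius + 1)
            for z in range(cz - radius, cz + radius + 1)}

def update_streaming(loaded, player_x, player_z):
    """Return the new loaded set plus the chunks to load and unload as the
    player moves; in a real engine the loading happens on a background
    thread so the player never sees a loading screen."""
    wanted = needed_chunks(player_x, player_z)
    return wanted, wanted - loaded, loaded - wanted

loaded, to_load, to_unload = update_streaming(set(), 100.0, 40.0)
print(len(loaded), len(to_load), len(to_unload))  # 25 25 0
```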
Writing in 2005, David Braben described the narrative structure of current video games as "little different to the stories of those Harold Lloyd films of the 1920s", and considered genuinely open-ended stories to be the "Holy Grail" for the fifth generation of gaming. Gameplay designer Manveer Heir, who worked on Mass Effect 3 and Mass Effect Andromeda for Electronic Arts, said that there are difficulties in the design of an open-world game since it is difficult to predict how players will approach solving gameplay challenges offered by a design, in contrast to a linear progression, and that this needs to be a factor in the game's development from its onset. Heir opined that some of the critical failings of Andromeda were due to the open world being added late in development. Some open-world games, to guide the player towards major story events, do not provide the world's entire map at the start of the game, but require the player to complete a task to obtain part of that map, often identifying missions and points of interest when they view the map. This has been derogatorily referred to as "Ubisoft towers", as this mechanic was promoted in Ubisoft's Assassin's Creed series (the player climbing a large tower so as to observe the landscape around it and identify waypoints nearby) and reused in other Ubisoft games, including Far Cry, Might & Magic X: Legacy and Watch Dogs. Other games that use this approach include Middle-earth: Shadow of Mordor, The Legend of Zelda: Breath of the Wild, and Marvel's Spider-Man. Rockstar games like GTA IV and the Red Dead Redemption series lock out sections of the map as "barricaded by law enforcement" until a specific point in the story has been reached. Games with open worlds typically give players infinite lives or continues, although some force the player to start from the beginning should they die too many times. There is also a risk that players may get lost as they explore an open world; thus designers sometimes try to break the open world into manageable sections. The scope of open-world games requires the developer to fully detail every possible section of the world the player may be able to access, unless methods like procedural generation are used. The design process, due to its scale, may leave numerous game world glitches, bugs, incomplete sections, or other irregularities that players may find and potentially take advantage of. The term "open world jank" has been used to apply to games where the incorporation of the open world gameplay elements may be poor, incomplete, or unnecessary to the game itself such that these glitches and bugs become more apparent, though are generally not game-breaking, such as was the case for No Man's Sky near its launch. Distinctions between open world and sandbox games The mechanics of open-world games are often overlapped with ideas of sandbox games, but these are considered different concepts. Whereas open world refers to the lack of limits for the player's exploration of the game's world, sandbox games are based on the ability of giving the player tools for creative freedom within the game to approach objectives, if such objectives are present. For example, Microsoft Flight Simulator is an open-world game as one can fly anywhere within the mapped world, but is not considered a sandbox game as there are few creative aspects brought into the game.
Emergent gameplay The combination of open world and sandbox mechanics can lead towards emergent gameplay, complex reactions that emerge (either expectedly or unexpectedly) from the interaction of relatively simple game mechanics. According to Peter Molyneux, emergent gameplay appears wherever a game has a good simulation system that allows players to play in the world and have it respond realistically to their actions. It is what made SimCity and The Sims compelling to players. Similarly, being able to freely interact with the city's inhabitants in Grand Theft Auto added an extra dimension to the series. In recent years game designers have attempted to encourage emergent play by providing players with tools to expand games through their own actions. Examples include in-game web browsers in EVE Online and The Matrix Online; XML integration tools and programming languages in Second Life; shifting exchange rates in Entropia Universe; and the complex object-and-grammar system used to solve puzzles in Scribblenauts. Other examples of emergence include interactions between physics and artificial intelligence. One challenge that remains to be solved, however, is how to tell a compelling story using only emergent technology. In an op-ed piece for BBC News, David Braben, co-creator of Elite, called truly open-ended game design "The Holy Grail" of modern video gaming, citing games like Elite and the Grand Theft Auto series as early steps in that direction. Peter Molyneux has also stated that he believes emergence (or emergent gameplay) is where video game development is headed in the future. He has attempted to implement emergent gameplay to a great extent in some of his games, particularly Black & White and Fable. Procedural generation of open worlds Procedural generation refers to content generated algorithmically rather than manually, and is often used to generate game levels and other content. While procedural generation does not guarantee that a game or sequence of levels is nonlinear, it is an important factor in reducing game development time, and it opens up avenues for generating larger, more or less unique, seamless game worlds on the fly while using fewer resources. This kind of procedural generation is known as worldbuilding, in which general rules are used to construct a believable world (a minimal sketch of this idea follows at the end of this passage). Most 4X and roguelike games make use of procedural generation to some extent to generate game levels. SpeedTree is an example of a developer-oriented tool used in the development of The Elder Scrolls IV: Oblivion and aimed at speeding up the level design process. Procedural generation also made it possible for the developers of Elite, David Braben and Ian Bell, to fit the entire game—including thousands of planets, dozens of trade commodities, multiple ship types and a plausible economic system—into less than 22 kilobytes of memory. More recently, No Man's Sky procedurally generated over 18 quintillion planets including flora, fauna, and other features that can be researched and explored. History 20th century There is no consensus on what the earliest open-world game is, due to differing definitions of how large or open a world needs to be. Inverse provides some early examples of games that established elements of the open world: Jet Rocket, a 1970 Sega electro-mechanical arcade game that, while not a video game, predated the flight simulator genre in giving the player free-roaming capabilities, and dnd, a 1975 text-based adventure game for the PLATO system that offered non-linear gameplay.
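To make the worldbuilding idea above concrete, here is a minimal sketch of deterministic terrain generation: heights are derived on demand by hashing a seed with lattice coordinates and interpolating, so an arbitrarily large world needs no stored map. This is a bare-bones form of value noise, written for illustration rather than taken from any shipped game:

```python
import hashlib
import math

def lattice_value(seed: int, x: int, z: int) -> float:
    """Deterministic pseudo-random value in [0, 1) for a lattice point,
    derived by hashing (seed, x, z) so nothing needs to be stored."""
    h = hashlib.sha256(f"{seed}:{x}:{z}".encode()).digest()
    return int.from_bytes(h[:4], "big") / 2**32

def height(seed: int, x: float, z: float, scale: float = 16.0) -> float:
    """Terrain height at any point via bilinear interpolation of the four
    surrounding lattice values (a bare-bones form of value noise)."""
    gx, gz = x / scale, z / scale
    x0, z0 = math.floor(gx), math.floor(gz)
    tx, tz = gx - x0, gz - z0
    v00 = lattice_value(seed, x0, z0)
    v10 = lattice_value(seed, x0 + 1, z0)
    v01 = lattice_value(seed, x0, z0 + 1)
    v11 = lattice_value(seed, x0 + 1, z0 + 1)
    top = v00 + (v10 - v00) * tx
    bot = v01 + (v11 - v01) * tx
    return top + (bot - top) * tz

# The same seed always regenerates the same world, chunk by chunk, on demand.
print(height(1234, 40.0, 7.5))
```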
Ars Technica traces the concept back to the free-roaming exploration of the 1976 text adventure game Colossal Cave Adventure, which inspired the free-roaming exploration of Adventure (1980), but notes that it was not until 1984 that what "we now know as open-world gaming" took on a "definite shape" with the 1984 space simulator Elite, considered a pioneer of the open world; Gamasutra argues that its open-ended sandbox style is rooted in flight simulators, such as SubLOGIC's Flight Simulator (1979/80), noting most flight sims "offer a 'free flight' mode that allows players to simply pilot the aircraft and explore the virtual world". Others trace the concept back to the 1981 CRPG Ultima, which had a free-roaming overworld map inspired by the tabletop RPG Dungeons & Dragons. The overworld maps of the first five Ultima games, released up to 1988, lacked a single, unified scale, with towns and other places represented as icons; this style was adopted by the first three Dragon Quest games, released from 1986 to 1988 in Japan. Early examples of open-world gameplay in adventure games include The Portopia Serial Murder Case (1983) and The Lords of Midnight (1984), with open-world elements also found in The Hobbit (1982) and Valhalla (1983). The strategy video game The Seven Cities of Gold (1984) is also cited as an early open-world game, influencing Sid Meier's Pirates! (1987). Eurogamer also cites British computer games such as Ant Attack (1983) and Sabre Wulf (1984) as early examples. According to Game Informer's Kyle Hilliard, Hydlide (1984) and The Legend of Zelda (1986) were among the first open-world games, along with Ultima. IGN traces the roots of open-world game design to The Legend of Zelda, which it argues is "the first really good game based on exploration", while noting that it was anticipated by Hydlide, which it argues is "the first RPG that rewarded exploration". According to GameSpot, never "had a game so open-ended, nonlinear, and liberating been released for the mainstream market" before The Legend of Zelda. According to The Escapist, The Legend of Zelda was an early example of open-world, nonlinear gameplay, with an expansive and cohesive world, inspiring many games to adopt a similar open-world design. Mercenary (1985) has been cited as the first open-world 3D action-adventure game. There were also other open-world games in the 1980s, such as Back to Skool (1985), Turbo Esprit (1986) and Alternate Reality: The City (1985). Wasteland, released in 1988, is also considered an open-world game. The early 1990s saw open-world games such as The Terminator (1990), The Adventures of Robin Hood (1991), and Hunter (1991), which IGN describes as the first sandbox game to feature full 3D, third-person graphics, and Ars Technica argues "has one of the strongest claims to the title of GTA forebear". Sierra On-Line's 1992 adventure game King's Quest VI has an open world; almost half of the quests are optional, many have multiple solutions, and players can solve most in any order. The Atari Jaguar launch title Cybermorph (1993) was notable for its open 3D polygonal world and non-linear gameplay. Quarantine (1994) is an example of an open-world driving game from this period, while Iron Soldier (1994) is an open-world mech game. The director of 1997's Blade Runner argues that that game was the first open-world three-dimensional action adventure game.
IGN considers Nintendo's Super Mario 64 (1996) revolutionary for its 3D open-ended free-roaming worlds, which had rarely been seen in 3D games before, along with its analog stick controls and camera control. Other 3D examples include Mystical Ninja Starring Goemon (1997), Ocarina of Time (1998), the DMA Design (Rockstar North) game Body Harvest (1998), the Angel Studios (Rockstar San Diego) games Midtown Madness (1999) and Midnight Club: Street Racing (2000), the Reflections Interactive (Ubisoft Reflections) game Driver (1999), and the Rareware games Banjo-Kazooie (1998), Donkey Kong 64 (1999), and Banjo-Tooie (2000). 1UP considers Sega's adventure Shenmue (1999) the originator of the "open city" subgenre, touted as a "FREE" ("Full Reactive Eyes Entertainment") game giving players the freedom to explore an expansive sandbox city with its own day-night cycles, changing weather, and fully voiced non-player characters going about their daily routines. The game's large interactive environments, wealth of options, level of detail and the scope of its urban sandbox exploration have been compared to later sandbox games like Grand Theft Auto III and its sequels, Sega's own Yakuza series, Fallout 3, and Deadly Premonition. 21st century Grand Theft Auto has had over 200 million sales. Creative director Gary Penn, who previously worked on Frontier: Elite II, cited Elite as a key influence, calling it "basically Elite in a city", and mentioned other team members being influenced by Syndicate and Mercenary. Grand Theft Auto III combined elements from previous games, and fused them together into a new immersive 3D experience that helped define open-world games for a new generation. Executive producer Sam Houser described it as "Zelda meets Goodfellas", while producer Dan Houser also cited The Legend of Zelda: Ocarina of Time and Super Mario 64 as influences. Radio stations had been implemented earlier in games such as Maxis' SimCopter (1996), the ability to beat or kill non-player characters dates back to games such as The Portopia Serial Murder Case (1983) and Valhalla (1983), and the way in which players run over pedestrians and get chased by police has been compared to Pac-Man (1980). After the release of Grand Theft Auto III, many games which employed a 3D open world, such as Ubisoft's Watch Dogs and Deep Silver's Saints Row series, were labeled, often disparagingly, as Grand Theft Auto clones, much as how many early first-person shooters were called "Doom clones". Other examples include World of Warcraft, The Elder Scrolls and Fallout series of games, which feature a large and diverse world, offering tasks and possibilities to play. In the Assassin's Creed series, which began in 2007, players explore historic open-world settings. These include the Holy Land during the Third Crusade in Assassin's Creed, Renaissance Italy in Assassin's Creed II and Brotherhood, Constantinople during the rise of the Ottoman Empire in Revelations, New England during the American Revolution in Assassin's Creed III, the Caribbean during the Golden Age of Piracy in Black Flag, the North Atlantic during the French and Indian War in Rogue, Paris during the French Revolution in Unity, London at the onset of the Second Industrial Revolution in Syndicate, Ptolemaic Egypt in Origins, Classical Greece during the Peloponnesian War in Odyssey, and Medieval England and Norway during the Viking Age in Valhalla. The series intertwines factual history with a fictional storyline.
In the fictional storyline, the Templars and the Assassins, two secret organisations inspired by their real-life counterparts, have been mortal enemies for all of known history. Their conflict stems from the Templars' desire to have peace through control, which directly contrasts the Assassins' wish for peace with free will. Their fighting influences much of history, as the sides often back real historical forces. For example, during the American Revolution depicted in Assassin's Creed III, the Templars initially support the British, while the Assassins side with the American colonists. S.T.A.L.K.E.R.: Shadow of Chernobyl was developed by GSC Game World in 2007, followed by two other games, a prequel and a sequel. The open world of the Zone is divided into huge sector-like maps, and the player can go from one sector to another, depending on required quests or simply by choice. In 2011, Dan Ryckert of Game Informer wrote that open-world crime games were "a major force" in the gaming industry for the preceding decade. Another popular sandbox game is Minecraft, which has since become the best-selling video game of all time, selling over 238 million copies worldwide on multiple platforms by April 2021. Minecraft's procedurally generated overworlds cover a virtual 3.6 billion square kilometers. The Outerra Engine is a world rendering engine in development since 2008 that is capable of seamlessly rendering whole planets from space down to ground level. Anteworld is a world-building game and free tech-demo of the Outerra Engine that builds upon real-world data to render planet Earth realistically on a true-to-life scale. No Man's Sky, released in 2016, is an open-world game set in a virtually infinite universe. According to the developers, through procedural generation, the game is able to produce more than 18 quintillion (1.8×10^19, or 18,000,000,000,000,000,000) planets to explore. Several critics found that the nature of the game can become repetitive and monotonous, with the survival gameplay elements being lackluster and tedious. Jake Swearingen in New York said that the game can procedurally generate 18.6 quintillion unique planets, but cannot procedurally generate 18.6 quintillion unique things to do. Updates have aimed to address these criticisms. In 2017, the open-world design of The Legend of Zelda: Breath of the Wild was described by critics as being revolutionary and by developers as a paradigm shift for open-world design. In contrast to the more structured approach of most open-world games, Breath of the Wild features a large and fully interactive world that is generally unstructured and rewards the exploration and manipulation of its world. Inspired by the original 1986 Legend of Zelda, the open world of Breath of the Wild integrates multiplicative gameplay, where "objects react to the player's actions and the objects themselves also influence each other". Along with a physics engine, the game's open world also integrates a chemistry engine, "which governs the physical properties of certain objects and how they relate to each other", rewarding experimentation. Nintendo has described the game's approach to open-world design as "open-air". See also Nonlinear gameplay Persistent world References Further reading Michael Llewellyn: 15 Open World Games More Mature Than Grand Theft Auto V. thegamer.com. April 24, 2017. Video game terminology
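The planet count quoted above is easy to sanity-check: the "18 quintillion" figure matches the size of a 64-bit seed space, which is widely reported (though not stated in this article) to be the basis of No Man's Sky's generator. A quick check, with that 64-bit assumption clearly flagged:

```python
# Assumption for illustration: one planet per distinct 64-bit seed value.
seed_space = 2 ** 64
print(f"{seed_space:,}")   # 18,446,744,073,709,551,616
print(seed_space / 1e18)   # ~18.4, i.e. just over 18 quintillion
```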
Open world
Technology
4,315
77,426,383
https://en.wikipedia.org/wiki/2C-T-21.5
2C-T-21.5 is a lesser-known psychedelic drug related to compounds such as 2C-T-21 and 2C-T-28. It was originally named by Alexander Shulgin and discussed in his book PiHKAL, but was not synthesised at that time. 2C-T-21.5 was ultimately synthesised and tested by Daniel Trachsel some years later. It has a binding affinity of 146 nM at 5-HT2A and 55 nM at 5-HT2C. It produces typical psychedelic effects, being slightly less potent but somewhat longer acting than 2C-T-2 or 2C-T-21, with an active dose of 12–30 mg, and a duration of action of 8–14 hours. Unlike 2C-T-21 it will not form the highly toxic fluoroacetate as a metabolite, instead producing the less toxic difluoroacetic acid. See also 2C-T-16 2C-T-TFM 2C-TFE 3C-DFE DOPF Trifluoromescaline 2C-x DOx 25-NB References 2C (psychedelics) Entheogens Thioethers Amines Methoxy compounds Organofluorides
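For readers more used to log-scale affinity values, the figures quoted above convert directly to pKi, assuming they are Ki values (the article says only "binding affinity"); a small sketch:

```python
import math

def p_ki(ki_nanomolar: float) -> float:
    """Convert a binding affinity in nM to pKi = -log10(Ki in mol/L)."""
    return -math.log10(ki_nanomolar * 1e-9)

print(round(p_ki(146), 2))  # 5-HT2A: 6.84
print(round(p_ki(55), 2))   # 5-HT2C: 7.26 (higher pKi = tighter binding)
```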
2C-T-21.5
Chemistry
269
1,854,270
https://en.wikipedia.org/wiki/Rainout%20%28radioactivity%29
A rainout is the process of precipitation causing the removal of radioactive particles from the atmosphere onto the ground, creating nuclear fallout by rain. The rainclouds of the rainout are often formed by the particles of a nuclear explosion itself, and because of this, the decontamination of rainout is more difficult than a "dry" fallout. In atmospheric science, rainout also refers to the removal of soluble species (not necessarily radioactive) from the atmosphere by precipitation. Factors affecting rainout A rainout could occur in the vicinity of ground zero, or the contamination could be carried aloft before deposition, depending on the current atmospheric conditions and how the explosion occurred. The explosion, or burst, can be air, surface, subsurface, or seawater. An air burst will produce less fallout than a comparable explosion near the ground due to less particulate being contaminated. Detonations at the surface will tend to produce more fallout material. In the case of water surface bursts, the particles tend to be rather lighter and smaller, producing less local fallout but extending over a greater area. The particles contain mostly sea salts with some water; these can have a cloud seeding effect causing local rainout and areas of high local fallout. Fallout from a seawater burst is difficult to remove once it has soaked into porous surfaces because the fission products are present as metallic ions which become chemically bonded to many surfaces. For subsurface bursts, there is an additional phenomenon present called "base surge". The base surge is a cloud that rolls outward from the bottom of the subsiding column, which is caused by an excessive density of dust or water droplets in the air. This surge is made up of small solid particles, but it still behaves like a fluid. A soil (earth) medium favors base surge formation in an underground burst. Although the base surge typically contains only about 10% of the total bomb debris in a subsurface burst, it can create larger radiation doses than fallout near the detonation, because it arrives sooner than fallout, before much radioactive decay has occurred. For underwater bursts, the visible surge is, in effect, a cloud of liquid (usually water) droplets with the property of flowing almost as if it were a homogeneous fluid. After the water evaporates, an invisible base surge of small radioactive particles may persist. Meteorologically, snow and rain will accelerate local fallout. Under special meteorological conditions, such as a local rain shower that originates above the radioactive cloud, limited areas of heavy contamination just downwind of a nuclear blast may be formed. Rain on an area contaminated by a surface burst changes the pattern of radioactive intensities by washing off higher elevations, buildings, equipment, and vegetation. This reduces intensities in some areas and possibly increases intensities in drainage systems; on low ground; and in flat, poorly drained areas. References Radioactivity
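The atmospheric-science sense of rainout mentioned above is usually modelled as a first-order removal process governed by a scavenging coefficient. This standard model is not from the article itself, and the coefficient below is only a representative order of magnitude (real values depend on rainfall rate and particle size):

```python
import math

def airborne_fraction(scavenging_coeff_per_s: float, duration_s: float) -> float:
    """First-order washout: dC/dt = -lambda * C, so the fraction of a
    soluble aerosol still airborne after rain is exp(-lambda * t)."""
    return math.exp(-scavenging_coeff_per_s * duration_s)

# With lambda ~ 3e-4 1/s (moderate rain), one hour of rain leaves
# roughly a third of the aerosol still airborne.
print(airborne_fraction(3e-4, 3600))  # ~0.34
```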
Rainout (radioactivity)
Physics,Chemistry
581
38,097,861
https://en.wikipedia.org/wiki/Georgetown%20Coal%20Gasification%20Plant
Georgetown Coal Gasification Plant, also known as the Georgetown Service and Gas Company, is a historic coal gasification plant located at Georgetown, Sussex County, Delaware. It was built in the late 19th century, and is a rectangular one-story, three bay by three bay brick structure measuring 40 feet by 25 feet. It has a gable roof with a smaller gable-roofed ventilator. Also on the property are a small, gable-roofed brick building measuring 8 feet by 10 feet; a small, square concrete building; a large, cylindrical "surge tank"; a 500-gallon bottled-gas tank; and a covered pit for impurities. The complex was privately owned and developed starting in the 1880s to provide metered gas for domestic lighting, town street lights, and municipal and domestic uses. The coal gasification process was discontinued in the 1940s. The site was added to the National Register of Historic Places in 1985. It is listed on the Delaware Cultural and Historic Resources GIS system as destroyed or demolished. References Industrial buildings and structures on the National Register of Historic Places in Delaware Buildings and structures in Georgetown, Delaware Coal gasification technologies National Register of Historic Places in Sussex County, Delaware
Georgetown Coal Gasification Plant
Chemistry
238
1,377,881
https://en.wikipedia.org/wiki/Flat%20%28theatre%29
A flat (short for scenery flat) or coulisse is a flat piece of theatrical scenery which is painted and positioned on stage so as to give the appearance of buildings or other background. Flats can be soft covered (covered with cloth such as muslin) or hard covered (covered with decorative plywood such as luan). Soft-covered flats have changed little from their origin in the Italian Renaissance. Flats with a frame that places the width of the lumber parallel to the face are called "Broadway" or "stage" flats. Hard-covered flats with a frame that is perpendicular to the paint surface are referred to as "Hollywood" or "studio" flats. Usually flats are built in standard sizes of , , or tall so that walls or other scenery may easily be constructed, and so that flats may be stored and reused for subsequent productions. Often affixed to battens flown in from the fly tower or loft for the scenes in which they are used, they may also be stored at the sides of the stage, called wings, and braced to the floor when in use for an entire performance. Flat construction Parts of a flat Rails (or plates) are the top and bottom framing members of a flat. Rails run the full width of the flat (, for a , flat). Stiles (or studs) are the vertical members of the frame. The length of the stiles is the full height of the flat, minus the combined width of the rails (, for a , flat constructed of , rails). Toggles are horizontal cross pieces that run between the stiles or studs. The number and placement of toggles depends on the type of flat. The length of the toggles is the total width of the flat minus the combined width of the stiles (, for a , soft-cover flat constructed of , stiles). Corner blocks are used to join the corners of a soft-cover flat. They are normally made of plywood, and are triangles with corners of 45°, 45°, and 90°. They are most often made by ripping the plywood at and then mitering it at 45 degree angles to create triangles with legs. Keystones join the toggles to the stiles of soft-cover flats. They are long, and normally rip sawn to the same width as the toggles (usually ) on one end, and on the other, forming a shape similar to the keystone of an archway. Straps can be used in place of keystones. They are long and wide (same as toggle) rectangles. They are easier to construct than keystones, but not as strong due to their narrower dimension and reduced glue/nailing surface area. A coffin lock or screws may be used to securely join adjacent flats, or traditionally a lash line cleat may be used, where cotton sash cord is lashed around cleats installed on the flat's edges. This allows for quick standing and striking of the set. Styles Broadway or stage flats are generally constructed of nominal ( actual) pine boards. The boards are laid out flat on the shop floor, squared, and joined with the keystones and corner blocks. The keystones and corner blocks are inset from the outside edge, which allows for flats to be hinged or butted together. They are then glued in place, and stapled or screwed down. The flat can then be flipped over and covered with muslin or decorative plywood. Toggles in a Broadway flat are placed on centers. Broadway flats can also be constructed using half-lap and cross-lap joints instead of keystones and corner blocks, joining stiles, rails, and toggles by sawing a deep half-lap at the ends of the pieces and/or a deep dado groove mid-piece; the pieces are then glued and stapled together.
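The framing arithmetic described above (stiles equal the flat's height minus two rail widths; toggles equal the width minus two stile widths) can be expressed as a short Python sketch. Because the article's own measurements are elided, the lumber width and toggle spacing below are assumed example values, not figures from the text:

# A minimal cut-list sketch for a "Broadway" flat, following the framing
# rules described above. All dimensions are in inches; the lumber width
# and toggle spacing are illustrative assumptions.

def broadway_cut_list(height, width, lumber_width=2.5, toggle_spacing=48.0):
    """Return member counts and lengths (inches) for one flat frame."""
    rail_length = width                        # rails run the full width
    stile_length = height - 2 * lumber_width   # height minus top and bottom rails
    toggle_length = width - 2 * lumber_width   # width minus the two stiles
    # Interior toggles placed on regular centers between the rails.
    n_toggles = max(0, int(stile_length // toggle_spacing))
    return {
        "rails": (2, rail_length),
        "stiles": (2, stile_length),
        "toggles": (n_toggles, toggle_length),
    }

# Example: a 12 ft x 4 ft flat framed from stock with a 2.5 in actual width.
print(broadway_cut_list(144.0, 48.0))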
Dados can be made using a radial arm saw or table saw, and a dado stack cutter (two outer circular saw blades and one or more "chippers" between them, giving a much wider cut). Setting up for a dado stack is approximately the same as for preparing keystones and corner blocks, but requires less layout, as the lengths of the stiles, rails, and toggles are equal to the face dimensions of the flat. Hollywood or studio flats can be made in various thicknesses to suit a particular design, but are most often made of nominal ( actual) pine boards. The boards are laid out on edge on the shop floor, and the ends are glued together and stapled or screwed. Keystones and corner blocks are not normally used. Once assembled, the flat can be covered with or decorative plywood, which is glued on and stapled. The toggles in a Hollywood flat are placed on centers. Hollywood flats may receive a muslin skin over the decorative plywood face. The face is covered in a mixture of water and white glue, the muslin is applied, and the entire flat is covered with the water/glue mixture again, to shrink and attach the muslin. References Friedman, Sally (1994). Backstage Handbook: an illustrated almanac of technical information. Broadway Press. Stage terminology Scenic design
Flat (theatre)
Engineering
1,052
2,684,796
https://en.wikipedia.org/wiki/Stenospermocarpy
Stenospermocarpy is the biological mechanism that produces seedlessness in some fruits, notably many table grapes; it is distinct from true parthenocarpy, in which fruit develops without fertilization. In stenospermocarpic fruits, normal pollination and fertilization are still required to ensure that the fruit 'sets', i.e. continues to develop on the plant; however, subsequent abortion of the embryo that began growing following fertilization leads to a near-seedless condition. The remains of the undeveloped seed are visible in the fruit. Most commercial seedless grapes are sprayed with gibberellin to increase the size of the fruit and also to make the fruit clusters less tightly packed. A new cultivar, 'Melissa', has naturally larger fruit so does not require gibberellin sprays. Grape breeders have developed some new seedless grape cultivars by using the embryo rescue technique. Before the tiny embryo aborts, it is removed from the developing fruit and grown in tissue culture until it is large enough to survive on its own. Embryo rescue allows the crossing of two seedless grape cultivars. There are two types of seedlessness in grapes: true seedlessness of parthenocarpic berries, in which only ovules may develop, and commercial seedlessness of stenospermocarpic berries, in which aborted seeds go unnoticed when chewing. Stenospermocarpic seeds vary significantly in size and in the degree of development of the seed coat and the endosperm. Larger seeds of stenospermocarpic grapes are referred to as rudimentary seeds and smaller ones as seed traces. Seedless grape cultivars Seedless grapes are divided into white, red and black types based roughly on fruit color. The most popular seedless grape is known in the United States as 'Thompson Seedless', but was originally known as 'Sultana'. It is believed to be of ancient origin. It is considered a white grape, but is actually a pale green. Other white cultivars are 'Perlette', 'Menindee Seedless', 'Interlaken', 'Himrod', 'Romulanus', 'Lakemont', 'Fayez', and 'Remaily Seedless.' The most popular red seedless in the U.S. is 'Flame Seedless'. Other red cultivars are 'Crimson Seedless', 'Ruby Seedless', 'Suffolk Red', 'Saturn' and 'Pink Reliance'. Some black cultivars are 'Black Beauty', 'Black Monukka', 'Concord Seedless', 'Glenora' and 'Thomcord.' See also List of grape varieties Seedless fruit References External links Herrera, E. 2002. Improving size and quality of seedless grapes. Guide H-311. New Mexico State University Plant reproduction
Stenospermocarpy
Biology
578
11,128,728
https://en.wikipedia.org/wiki/Valsa%20auerswaldii
Valsa auerswaldii is a plant pathogen infecting apples. See also List of apple diseases References Fungal tree pathogens and diseases Apple tree diseases Diaporthales Fungi described in 1928 Taxa named by Theodor Rudolph Joseph Nitschke Fungus species
Valsa auerswaldii
Biology
53
4,263,231
https://en.wikipedia.org/wiki/Replisome
The replisome is a complex molecular machine that carries out replication of DNA. The replisome first unwinds double stranded DNA into two single strands. For each of the resulting single strands, a new complementary sequence of DNA is synthesized. The total result is formation of two new double stranded DNA sequences that are exact copies of the original double stranded DNA sequence. In terms of structure, the replisome is composed of two replicative polymerase complexes, one of which synthesizes the leading strand, while the other synthesizes the lagging strand. The replisome is composed of a number of proteins including helicase, RFC, PCNA, gyrase/topoisomerase, SSB/RPA, primase, DNA polymerase III, RNAse H, and DNA ligase. Overview of prokaryotic DNA replication process For prokaryotes, each dividing nucleoid (region containing genetic material which is not a nucleus) requires two replisomes for bidirectional replication. The two replisomes continue replication at both forks in the middle of the cell. Finally, as the termination site replicates, the two replisomes separate from the DNA. The replisome remains at a fixed, midcell location in the cell, attached to the membrane, and the template DNA threads through it. DNA is fed through the stationary pair of replisomes located at the cell membrane. Overview of eukaryotic DNA replication process For eukaryotes, numerous replication bubbles form at origins of replication throughout the chromosome. As with prokaryotes, two replisomes are required, one at each replication fork located at the terminus of the replication bubble. Because of significant differences in chromosome size, and the associated complexities of highly condensed chromosomes, various aspects of the DNA replication process in eukaryotes, including the terminal phases, are less well-characterised than for prokaryotes. Challenges of DNA replication The replisome is a system in which various factors work together to solve the structural and chemical challenges of DNA replication. Chromosome size and structure varies between organisms, but since DNA molecules are the reservoir of genetic information for all forms of life, many replication challenges and solutions are the same for different organisms. As a result, the replication factors that solve these problems are highly conserved in terms of structure, chemistry, functionality, or sequence. General structural and chemical challenges include the following: Efficient replisome assembly at origins of replication (origin recognition complexes or specific replication origin sequences in some organisms) Separating the duplex into the leading and lagging template strands (helicases) Protecting the leading and lagging strands from damage after duplex separation (SSB and RPA factors) Priming of the leading and lagging template strands (primase or DNA polymerase alpha) Ensuring processivity (clamp loading factors, ring-shaped clamp proteins, strand binding proteins) High-fidelity DNA replication (DNA polymerase III, DNA polymerase delta, DNA polymerase epsilon. All have intrinsically low error rates because of their structure and chemistry.) 
Error correction (replicative polymerase active sites sense errors; 3' to 5' exonuclease domains of replicative polymerases fix errors) Synchronised polymerisation of leading and lagging strands despite anti-parallel structure (replication fork structure, dimerisation of replicative polymerases) Primer removal (DNA polymerase I, RNAse H, flap endonucleases such as FEN1, or other DNA repair factors) Formation of phosphodiester bonds at gaps between Okazaki fragments (ligase) In general, the challenges of DNA replication involve the structure of the molecules, the chemistry of the molecules, and, from a systems perspective, the underlying relationships between the structure and the chemistry. Solving the challenges of DNA replication Many of the structural and chemical problems associated with DNA replication are managed by molecular machinery that is highly conserved across organisms. This section discusses how replisome factors solve the structural and chemical challenges of DNA replication. Replisome assembly DNA replication begins at sites called origins of replication. In organisms with small genomes and simple chromosome structure, such as bacteria, there may be only a few origins of replication on each chromosome. Organisms with large genomes and complex chromosome structure, such as humans, may have hundreds, or even thousands, of origins of replication spread across multiple chromosomes. DNA structure varies with time, space, and sequence, and it is thought that these variations, in addition to their role in gene expression, also play active roles in replisome assembly during DNA synthesis. Replisome assembly at an origin of replication is roughly divided into three phases. For bacteria: Formation of the pre-replication complex. DnaA binds to the replication origin (oriC) and separates the duplex. This attracts DnaB helicase and DnaC, which maintain the replication bubble. Formation of the pre-initiation complex. SSB binds to the single strand and then gamma (the clamp loading factor) binds to SSB. Formation of the initiation complex. Gamma deposits the sliding clamp (beta) and attracts DNA polymerase III. For eukaryotes: Formation of the pre-replication complex. MCM factors bind to the origin recognition complex and separate the duplex, forming a replication bubble. Formation of the pre-initiation complex. Replication protein A (RPA) binds to the single-stranded DNA and then RFC (the clamp loading factor) binds to RPA. Formation of the initiation complex. RFC deposits the sliding clamp (PCNA) and attracts DNA polymerases such as alpha (α), delta (δ), and epsilon (ε). For both bacteria and eukaryotes, the next stage is generally referred to as 'elongation', and it is during this phase that the majority of DNA synthesis occurs. Separating the duplex DNA is a duplex formed by two anti-parallel strands. As demonstrated by the Meselson–Stahl experiment, the process of DNA replication is semi-conservative: during replication the original DNA duplex is separated into two daughter strands (referred to as the leading and lagging strand templates). Each daughter strand becomes part of a new DNA duplex. Factors generically referred to as helicases unwind the duplex. Helicases Helicase is an enzyme which breaks hydrogen bonds between the base pairs in the middle of the DNA duplex. Its doughnut-like structure wraps around DNA and separates the strands ahead of DNA synthesis. In eukaryotes, the Mcm2-7 complex acts as a helicase, though which subunits are required for helicase activity is not entirely clear.
This helicase translocates in the same direction as the DNA polymerase (3' to 5' with respect to the template strand). In prokaryotic organisms, the helicases are better identified and include DnaB, which moves 5' to 3' on the strand opposite the DNA polymerase. Unwinding supercoils and decatenation As helicase unwinds the double helix, topological changes induced by the rotational motion of the helicase lead to supercoil formation ahead of the helicase, much as twisting a piece of thread causes it to coil on itself. Gyrase and topoisomerases Gyrase (a form of topoisomerase) relaxes and undoes the supercoiling caused by helicase. It does this by cutting the DNA strands, allowing the DNA to rotate and release the supercoil, and then rejoining the strands. Gyrase is most commonly found upstream of the replication fork, where the supercoils form. Protecting the leading and lagging strands Single-stranded DNA is highly unstable and can form hydrogen bonds with itself that are referred to as 'hairpins' (or the single strand can improperly bond to the other single strand). To counteract this instability, single-strand binding proteins (SSB in prokaryotes and Replication protein A in eukaryotes) bind to the exposed bases to prevent improper ligation. Because each exposed strand is a flexible chain, the structural potential for improper ligation is considerable; the underlying chemistry of the problem is the potential for hydrogen bond formation between unrelated base pairs. Binding proteins stabilise the single strand and protect the strand from damage caused by unlicensed chemical reactions. The combination of a single strand and its binding proteins serves as a better substrate for replicative polymerases than a naked single strand (binding proteins provide an extra thermodynamic driving force for the polymerisation reaction). Strand binding proteins are removed by replicative polymerases. Priming the leading and lagging strands From both a structural and chemical perspective, a single strand of DNA by itself (and the associated single-strand binding proteins) is not suitable for polymerisation. This is because the chemical reactions catalysed by replicative polymerases require a free 3' OH in order to initiate nucleotide chain elongation. In terms of structure, the conformation of replicative polymerase active sites (which is highly related to the inherent accuracy of replicative polymerases) means these factors cannot start chain elongation without a pre-existing chain of nucleotides, because no known replicative polymerase can start chain elongation de novo. Priming enzymes (which are DNA-dependent RNA polymerases) solve this problem by creating an RNA primer on the leading and lagging strands. The leading strand is primed once, and the lagging strand is primed approximately every 1000 (+/- 200) base pairs (one primer for each Okazaki fragment on the lagging strand). Each RNA primer is approximately 10 bases long. The primer–template interface contains a free 3' OH that is chemically suitable for the reaction catalysed by replicative polymerases, and the "overhang" configuration is structurally suitable for chain elongation by a replicative polymerase. Thus, replicative polymerases can begin chain elongation at this interface. Primase In prokaryotes, the primase creates an RNA primer at the beginning of the newly separated leading and lagging strands.
DNA polymerase alpha In eukaryotes, DNA polymerase alpha creates an RNA primer at the beginning of the newly separated leading and lagging strands, and, unlike primase, DNA polymerase alpha also synthesizes a short chain of deoxynucleotides after creating the primer. Ensuring processivity and synchronisation Processivity refers to both speed and continuity of DNA replication, and high processivity is a requirement for timely replication. High processivity is in part ensured by ring-shaped proteins referred to as 'clamps' that help replicative polymerases stay associated with the leading and lagging strands. There are other variables as well: from a chemical perspective, strand binding proteins stimulate polymerisation and provide extra thermodynamic energy for the reaction. From a systems perspective, the structure and chemistry of many replisome factors (such as the AAA+ ATPase features of the individual clamp loading sub-units, along with the helical conformation they adopt), and the associations between clamp loading factors and other accessory factors, also increase processivity. According to research by Kuriyan et al., clamp loaders and sliding clamps are at the heart of the replisome machinery, owing to their role in recruiting and binding other factors such as priming enzymes and replicative polymerases. Research has found that clamp loading and sliding clamp factors are absolutely essential to replication, which explains the high degree of structural conservation observed for clamp loading and sliding clamp factors. This architectural and structural conservation is seen in organisms as diverse as bacteria, phages, yeast, and humans. That such a significant degree of structural conservation is observed without sequence homology further underpins the significance of these structural solutions to replication challenges. Clamp loader Clamp loader is a generic term that refers to replication factors called gamma (bacteria) or RFC (eukaryotes). The combination of template DNA and primer RNA is referred to as 'A-form DNA', and it is thought that clamp loading replication proteins (helical heteropentamers) preferentially associate with A-form DNA because of its shape (the structure of the major/minor groove) and chemistry (patterns of hydrogen bond donors and acceptors). Thus, clamp loading proteins associate with the primed region of the strand, which causes hydrolysis of ATP and provides energy to open the clamp and attach it to the strand. Sliding clamp Sliding clamp is a generic term that refers to ring-shaped replication factors called beta (bacteria) or PCNA (eukaryotes and archaea). Clamp proteins attract and tether replicative polymerases, such as DNA polymerase III, in order to extend the amount of time that a replicative polymerase stays associated with the strand. From a chemical perspective, the clamp has a slightly positive charge at its centre that is a near perfect match for the slightly negative charge of the DNA strand. In some organisms, the clamp is a dimer, and in other organisms the clamp is a trimer. Regardless, the conserved ring architecture allows the clamp to enclose the strand. Dimerisation of replicative polymerases Replicative polymerases form an asymmetric dimer at the replication fork by binding to sub-units of the clamp loading factor. This asymmetric conformation is capable of simultaneously replicating the leading and lagging strands, and the collection of factors that includes the replicative polymerases is generally referred to as a holoenzyme.
However, significant challenges remain: the leading and lagging strands are anti-parallel. This means that nucleotide synthesis on the leading strand naturally occurs in the 5' to 3' direction. However, the lagging strand runs in the opposite direction and this presents quite a challenge since no known replicative polymerases can synthesise DNA in the 3' to 5' direction. The dimerisation of the replicative polymerases solves the problems related to efficient synchronisation of leading and lagging strand synthesis at the replication fork, but the tight spatial-structural coupling of the replicative polymerases, while solving the difficult issue of synchronisation, creates another challenge: dimerisation of the replicative polymerases at the replication fork means that nucleotide synthesis for both strands must take place at the same spatial location, despite the fact that the lagging strand must be synthesised backwards relative to the leading strand. Lagging strand synthesis takes place after the helicase has unwound a sufficient quantity of the lagging strand, and this "sufficient quantity of the lagging strand" is polymerised in discrete nucleotide chains called Okazaki fragments. Consider the following: the helicase continuously unwinds the parental duplex, but the lagging strand must be polymerised in the opposite direction. This means that, while polymerisation of the leading strand proceeds, polymerisation of the lagging strand only occurs after enough of the lagging strand has been unwound by the helicase. At this point, the lagging strand replicative polymerase associates with the clamp and primer in order to start polymerisation. During lagging strand synthesis, the replicative polymerase sends the lagging strand back toward the replication fork. The replicative polymerase disassociates when it reaches an RNA primer. Helicase continues to unwind the parental duplex, the priming enzyme affixes another primer, and the replicative polymerase reassociates with the clamp and primer when a sufficient quantity of the lagging strand has unwound. Collectively, leading and lagging strand synthesis is referred to as being 'semidiscontinuous'. High-fidelity DNA replication Prokaryotic and eukaryotic organisms use a variety of replicative polymerases, some of which are well-characterised: DNA polymerase III DNA polymerase delta DNA polymerase epsilon DNA polymerase III This polymerase synthesizes leading and lagging strand DNA in bacteria. DNA polymerase delta This polymerase synthesizes lagging strand DNA in eukaryotes. (Thought to form an asymmetric dimer with DNA polymerase epsilon.) DNA polymerase epsilon This polymerase synthesizes leading strand DNA in eukaryotes. (Thought to form an asymmetric dimer with DNA polymerase delta.) Proof-reading and error correction Although rare, incorrect base pairing polymerisation does occur during chain elongation. (The structure and chemistry of replicative polymerases mean that errors are unlikely, but they do occur.) Many replicative polymerases contain an "error correction" mechanism in the form of a 3' to 5' exonuclease domain that is capable of removing base pairs from the exposed 3' end of the growing chain. Error correction is possible because base pair errors distort the position of the magnesium ions in the polymerisation sub-unit, and the structural-chemical distortion of the polymerisation unit effectively stalls the polymerisation process by slowing the reaction. 
Subsequently, the chemical reaction in the exonuclease unit takes over and removes nucleotides from the exposed 3' end of the growing chain. Once an error is removed, the structure and chemistry of the polymerisation unit return to normal and DNA replication continues. Working collectively in this fashion, the polymerisation active site can be thought of as the "proof-reader", since it senses mismatches, and the exonuclease is the "editor", since it corrects the errors. Base pair errors distort the polymerase active site for between 4 and 6 nucleotides, which means, depending on the type of mismatch, there are up to six chances for error correction. The error sensing and error correction features, combined with the inherent accuracy that arises from the structure and chemistry of replicative polymerases, contribute to an error rate of approximately 1 base pair mismatch in 10⁸ to 10¹⁰ base pairs. Errors can be classified into three categories: purine-purine mismatches, pyrimidine-pyrimidine mismatches, and pyrimidine-purine mismatches. The chemistry of each mismatch varies, and so does the behaviour of the replicative polymerase with respect to its mismatch sensing activity. The replication of bacteriophage T4 DNA upon infection of E. coli is a well-studied DNA replication system. During the period of exponential DNA increase at 37°C, the rate of elongation is 749 nucleotides per second. The mutation rate during replication is 1.7 mutations per 10⁸ base pairs. Thus DNA replication in this system is both very rapid and highly accurate. Primer removal and nick ligation There are two problems after leading and lagging strand synthesis: RNA remains in the duplex and there are nicks between each Okazaki fragment in the lagging duplex. These problems are solved by a variety of DNA repair enzymes that vary by organism, including: DNA polymerase I, DNA polymerase beta, RNAse H, ligase, and DNA2. This process is well-characterised in bacteria and much less well-characterised in many eukaryotes. In general, DNA repair enzymes complete the Okazaki fragments through a variety of means, including: base pair excision and 5' to 3' exonuclease activity that removes the chemically unstable ribonucleotides from the lagging duplex and replaces them with stable deoxynucleotides. This process is referred to as 'maturation of Okazaki fragments', and ligase (see below) completes the final step in the maturation process. Primer removal and nick ligation can be thought of as DNA repair processes that produce a chemically-stable, error-free duplex. To this point, with respect to the chemistry of an RNA-DNA duplex, in addition to the presence of uracil in the duplex, the presence of ribose (which has a reactive 2' OH) tends to make the duplex much less chemically-stable than a duplex containing only deoxyribose (which has a non-reactive 2' H). DNA polymerase I DNA polymerase I is an enzyme that repairs DNA. RNAse H RNAse H is an enzyme that removes RNA from an RNA-DNA duplex. Ligase After DNA repair factors replace the ribonucleotides of the primer with deoxynucleotides, a single nick remains in the sugar-phosphate backbone between each Okazaki fragment in the lagging duplex. An enzyme called DNA ligase seals the backbone by forming a phosphodiester bond at each nick that separates the Okazaki fragments. The structural and chemical aspects of this sealing step, and of the related removal-and-resynthesis process known as 'nick translation', exceed the scope of this article.
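As a rough illustration of the T4 figures quoted above, the following Python sketch computes the time for a single fork to copy the genome and the expected number of mutations per genome copy. The genome size (approximately 169,000 base pairs) is an outside assumption, not a figure from the text, and a real infection uses multiple forks:

# Back-of-the-envelope check of the T4 replication numbers quoted above.
genome_bp = 169_000          # assumed approximate T4 genome size (not from the text)
rate_nt_per_s = 749          # elongation rate quoted above
mutations_per_bp = 1.7e-8    # 1.7 mutations per 10^8 bp, quoted above

seconds_per_fork = genome_bp / rate_nt_per_s
expected_mutations = genome_bp * mutations_per_bp

print(f"a single fork copies the genome in ~{seconds_per_fork / 60:.1f} min")
print(f"expected mutations per genome copy: ~{expected_mutations:.4f}")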
Replication stress Replication stress can result in a stalled replication fork. One type of replicative stress results from DNA damage such as inter-strand cross-links (ICLs). An ICL can block replicative fork progression due to failure of DNA strand separation. In vertebrate cells, replication of an ICL-containing chromatin template triggers recruitment of more than 90 DNA repair and genome maintenance factors. These factors include proteins that perform sequential incisions and homologous recombination. History Katherine Lemon and Alan Grossman showed using Bacillus subtilis that replisomes do not move like trains along a track but DNA is actually fed through a stationary pair of replisomes located at the cell membrane. In their experiment, the replisomes in B. subtilis were each tagged with green fluorescent protein, and the location of the complex was monitored in replicating cells using fluorescence microscopy. If the replisomes moved like a train on a track, the polymerase-GFP protein would be found at different positions in each cell. Instead, however, in every replicating cell, replisomes were observed as distinct fluorescent foci located at or near midcell. Cellular DNA stained with a blue fluorescent dye (DAPI) clearly occupied most of the cytoplasmic space. References Further reading External links Molecular genetics DNA replication
Replisome
Chemistry,Biology
4,673
75,397,124
https://en.wikipedia.org/wiki/NGC%205514
NGC 5514 is a pair of merging disk galaxies in the northern constellation of Boötes. They were discovered by German astronomer Heinrich d'Arrest on April 26, 1865. The galaxies are located at an estimated distance of . The morphology of the system is similar to the Antennae Galaxies, NGC 4038/NGC 4039. A distinct tail extends to the east for an angular distance of . There is a fainter tail extending a comparable distance to the west. This galaxy pair likely forms a small group with the nearby spiral galaxy NGC 5519. This appears to be a collision between two galaxies of unequal mass, having a 2:1 mass ratio. They display activity of the LINER type, but this is located in two regions in the outer parts, away from the combined nucleus. These may be large shock regions caused by the collision. There are two corresponding starburst regions, one of which has outflows that have created a supergiant galactic bubble. One supernova has been observed in NGC 5514: SN 2019igh (type IIb, mag. 19.05). See also List of NGC objects (5001–6000) References Galaxy mergers LINER galaxies 5514 NGC 5500 Galaxies discovered in 1865 Astronomical objects discovered in 1865 Discoveries by Heinrich Louis d'Arrest 50809/93124 9102
NGC 5514
Astronomy
269
78,088,031
https://en.wikipedia.org/wiki/Antarctic%20high
In meteorology, the Antarctic High is the stronger of the two polar highs, areas of high atmospheric pressure situated at the poles. It is situated over Eastern Antarctica, hence it is sometimes referred to as the East Antarctic High. Effects Since the Last Glacial Period, temperature trends have suggested that the interplay between the Antarctic High and the Southern Annular Mode has played a significant role in katabatic winds over the Patriot Hills Base Camp. A 2022 study published in Nature noted that when the high is over the Drake Passage, it, alongside an elongated cyclone located in the South Pacific, transports warm and moist air to the southwestern Antarctic Peninsula; this circulation is linked to record-high temperatures, extreme summertime melt, and dramatic break-ups in the Larsen Ice Shelf and eastern Antarctic Peninsula since the 1990s. References Regional climate effects Environment of Antarctica Meteorological phenomena
Antarctic high
Physics
167
3,292,213
https://en.wikipedia.org/wiki/Hazard%20analysis
A hazard analysis is one of many methods that may be used to assess risk. At its core, the process entails describing a system object (such as a person or machine) that intends to conduct some activity. During the performance of that activity, an adverse event (referred to as a "factor") may be encountered that could cause or contribute to an occurrence (mishap, incident, accident). Finally, that occurrence will result in some outcome that may be measured in terms of the degree of loss or harm. This outcome may be measured on a continuous scale, such as an amount of monetary loss, or the outcomes may be categorized into various levels of severity. A Simple Hazard Analysis The first step in hazard analysis is to identify the hazards. If an automobile is an object performing an activity such as driving over a bridge, and that bridge may become icy, then an icy bridge might be identified as a hazard. If this hazard is encountered, it could cause or contribute to the occurrence of an automobile accident, and the outcome of that occurrence could range in severity from a minor fender-bender to a fatal accident. Managing Risk through Hazard Analysis A hazard analysis may be used to inform decisions regarding the mitigation of risk. For instance, the probability of encountering an icy bridge may be reduced by adding salt so that the ice will melt. Or, risk mitigation strategies may target the occurrence. For instance, putting tire chains on a vehicle does nothing to change the probability of a bridge becoming icy, but if an icy bridge is encountered, it does improve traction, reducing the chance of sliding into another vehicle. Finally, risk may be managed by influencing the severity of outcomes. For instance, seatbelts and airbags do nothing to prevent bridges from becoming icy, nor do they prevent accidents caused by that ice. However, in the event of an accident, these devices lower the probability of the accident resulting in fatal or serious injuries. Software Hazard Analysis IEEE STD-1228-1994 Software Safety Plans prescribes industry best practices for conducting software safety hazard analyses to help ensure safety requirements and attributes are defined and specified for inclusion in software that commands, controls or monitors critical functions. When software is involved in a system, the development and design assurance of that software is often governed by DO-178C. The severity of consequence identified by the hazard analysis establishes the criticality level of the software. Software criticality levels range from A to E, corresponding to severities from Catastrophic to No Safety Effect. Higher levels of rigor are required for level A and B software, and corresponding functional tasks and work products in the system safety domain are used as objective evidence of meeting safety criteria and requirements. In 2009 a leading-edge commercial standard was promulgated based on decades of proven system safety processes in DoD and NASA. ANSI/GEIA-STD-0010-2009 (Standard Best Practices for System Safety Program Development and Execution) is a demilitarized commercial best practice that uses proven holistic, comprehensive and tailored approaches for hazard prevention, elimination and control. It is centered around the hazard analysis and functional based safety process. Severity category examples When used as part of an aviation hazard analysis, "Severity" describes the outcome (the degree of loss or harm) that results from an occurrence (an aircraft accident or incident).
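The hazard, occurrence, and outcome chain described above, with outcomes binned into severity categories and paired with likelihood categories, can be sketched in a few lines of Python. The category names and the acceptability rule below are illustrative assumptions only, not the FAA's actual definitions:

# Illustrative severity/likelihood categories and a toy acceptability rule.
SEVERITIES = ["negligible", "minor", "major", "hazardous", "catastrophic"]
LIKELIHOODS = ["extremely improbable", "remote", "probable", "frequent"]

def risk_acceptable(severity: str, likelihood: str) -> bool:
    """Hypothetical rule: combined severity and likelihood must stay below a threshold."""
    s = SEVERITIES.index(severity)
    l = LIKELIHOODS.index(likelihood)
    return s + l < 5  # illustrative threshold only

# The icy-bridge example: salting lowers likelihood; seatbelts lower severity.
print(risk_acceptable("catastrophic", "remote"))         # False: mitigation needed
print(risk_acceptable("major", "extremely improbable"))  # True: mitigated risk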
When categorized, severity categories must be mutually exclusive such that every occurrence has one, and only one, severity category associated with it. The definitions must also be collectively exhaustive such that all occurrences fall into one of the categories. In the US, the FAA includes five severity categories as part of its safety risk management policy. Likelihood category examples When used as part of an aviation hazard analysis, a "Likelihood" is a specific probability. It is the joint probability of a hazard occurring, that hazard causing or contributing to an aircraft accident or incident, and the resulting degree of loss or harm falling within one of the defined severity categories. Thus, if there are five severity categories, each hazard will have five likelihoods. In the US, the FAA provides a continuous probability scale for measuring likelihood, but also includes seven likelihood categories as part of its safety risk management policy. See also Layers of protection analysis (LOPA) – Technique for evaluating the hazards, risks and layers of protection of a system (Software Considerations in Airborne Systems and Equipment Certification) (similar to DO-178B, but for hardware) (System development process) (System safety assessment process) Further reading Notes References External links CFR, Title 29-Labor, Part 1910--Occupational Safety and Health Standards, § 1910.119 U.S. OSHA regulations regarding "Process safety management of highly hazardous chemicals" (especially Appendix C). FAA Order 8040.4 establishes FAA safety risk management policy. The FAA publishes a System Safety Handbook that provides a good overview of the system safety process used by the agency. IEEE 1584-2002 Standard which provides guidelines for doing arc flash hazard assessment. Avionics Process safety Safety engineering Software quality Occupational safety and health Reliability engineering
Hazard analysis
Chemistry,Technology,Engineering
1,029
48,429,150
https://en.wikipedia.org/wiki/Tricholoma%20rhizophoreti
Tricholoma rhizophoreti is an agaric fungus of the genus Tricholoma. It is found in Singapore, where it fruits on mud, sticks, roots, and stumps among mangrove trees. It was described as new to science in 1994 by English mycologist E.J.H. Corner. See also List of Tricholoma species References rhizophoreti Fungi described in 1994 Fungi of Asia Taxa named by E. J. H. Corner Fungus species
Tricholoma rhizophoreti
Biology
106
31,109,470
https://en.wikipedia.org/wiki/Siemens%20%28unit%29
The siemens (symbol: S) is the unit of electric conductance, electric susceptance, and electric admittance in the International System of Units (SI). Conductance, susceptance, and admittance are the reciprocals of resistance, reactance, and impedance respectively; hence one siemens is equal to the reciprocal of one ohm (Ω) and is also referred to as the mho. The siemens was adopted by the IEC in 1935, and the 14th General Conference on Weights and Measures approved the addition of the siemens as a derived unit in 1971. The unit is named after Ernst Werner von Siemens. In English, the same word siemens is used both for the singular and plural. Like other SI units named after people, the symbol (S) is capitalized but the name of the unit is not. For the siemens this distinguishes it from the second, symbol (lower case) s. The related property, electrical conductivity, is measured in units of siemens per metre (S/m). Definition For an element conducting direct current, electrical resistance R and electrical conductance G are defined as R = V/I and G = I/V, where I is the electric current through the object and V is the voltage (electrical potential difference) across the object. The unit siemens for the conductance G is defined by 1 S = 1 Ω⁻¹ = 1 A/V, where Ω is the ohm, A is the ampere, and V is the volt. For a device with a conductance of one siemens, the electric current through the device will increase by one ampere for every increase of one volt of electric potential difference across the device. The conductance of a resistor with a resistance of five ohms, for example, is (5 Ω)⁻¹, which is equal to a conductance of 200 mS. Mho A historical equivalent for the siemens is the mho (℧). The name is derived from the word ohm spelled backwards, as the reciprocal of one ohm, at the suggestion of Sir William Thomson (Lord Kelvin) in 1883. Its symbol is an inverted capital Greek letter omega: ℧. NIST's Guide for the Use of the International System of Units (SI) refers to the mho as an "unaccepted special name for an SI unit", and indicates that it should be strictly avoided. The SI term siemens is used universally in science and often in electrical applications, while mho is still used in some electronic contexts. The inverted capital omega symbol (℧), while not an official SI abbreviation, is less likely to be confused with a variable than the letter "S" when writing the symbol by hand. The usual typographical distinctions (such as italic for variables and roman for units) are difficult to maintain in handwriting. Likewise, it is difficult to distinguish the handwritten symbol "S" (siemens) from the lower-case "s" (seconds), potentially causing confusion. So, for example, a pentode’s transconductance of might alternatively be written as or (most common in the 1930s) or . The ohm had officially replaced the old "siemens unit", a unit of resistance, at an international conference in 1881. Notes and references External links SI derived units Units of electrical conductance Werner von Siemens
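The reciprocal relationship above, and one practical convenience of working in siemens, can be shown in a short Python sketch (the component values are arbitrary examples): for circuit elements connected in parallel, conductances simply add, whereas resistances must be combined reciprocally.

def conductance(resistance_ohms: float) -> float:
    """G = 1/R, in siemens."""
    return 1.0 / resistance_ohms

print(conductance(5.0))  # 0.2 S = 200 mS, matching the example in the text

# Three resistors in parallel: add conductances, then invert once at the end.
resistors = [5.0, 10.0, 20.0]  # ohms
g_total = sum(conductance(r) for r in resistors)
print(g_total, "S ->", 1.0 / g_total, "ohms")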
Siemens (unit)
Physics
647
1,353,729
https://en.wikipedia.org/wiki/Gaussian%20binomial%20coefficient
In mathematics, the Gaussian binomial coefficients (also called Gaussian coefficients, Gaussian polynomials, or q-binomial coefficients) are q-analogs of the binomial coefficients. The Gaussian binomial coefficient, written as $\binom{m}{r}_q$ or ${m \brack r}_q$, is a polynomial in q with integer coefficients, whose value when q is set to a prime power counts the number of subspaces of dimension k in a vector space of dimension n over $\mathbb{F}_q$, a finite field with q elements; i.e. it is the number of points in the finite Grassmannian $\mathrm{Gr}(k, \mathbb{F}_q^n)$. Definition The Gaussian binomial coefficients are defined by: $\binom{m}{r}_q = \frac{(1-q^m)(1-q^{m-1})\cdots(1-q^{m-r+1})}{(1-q)(1-q^2)\cdots(1-q^r)}$, where m and r are non-negative integers. If $r > m$, this evaluates to 0. For $r = 0$, the value is 1 since both the numerator and denominator are empty products. Although the formula at first appears to be a rational function, it actually is a polynomial, because the division is exact in Z[q]. All of the factors in numerator and denominator are divisible by $1 - q$, and the quotient is the q-number: $[k]_q = 1 + q + q^2 + \cdots + q^{k-1}$. Dividing out these factors gives the equivalent formula $\binom{m}{r}_q = \frac{[m]_q [m-1]_q \cdots [m-r+1]_q}{[1]_q [2]_q \cdots [r]_q}$ (for $r \le m$). In terms of the q factorial $[n]_q! = [1]_q [2]_q \cdots [n]_q$, the formula can be stated as $\binom{m}{r}_q = \frac{[m]_q!}{[r]_q! \, [m-r]_q!}$. Substituting $q = 1$ gives the ordinary binomial coefficient $\binom{m}{r}$. The Gaussian binomial coefficient has finite values as $m \to \infty$: $\binom{\infty}{r}_q = \lim_{m \to \infty} \binom{m}{r}_q = \frac{1}{(1-q)(1-q^2)\cdots(1-q^r)}$. Examples $\binom{2}{1}_q = 1 + q$; $\binom{4}{2}_q = 1 + q + 2q^2 + q^3 + q^4$. Combinatorial descriptions Inversions One combinatorial description of Gaussian binomial coefficients involves inversions. The ordinary binomial coefficient $\binom{m}{r}$ counts the r-combinations chosen from an m-element set. If one takes those m elements to be the different character positions in a word of length m, then each r-combination corresponds to a word of length m using an alphabet of two letters, say with r copies of the letter 1 (indicating the positions in the chosen combination) and m − r letters 0 (for the remaining positions). So, for example, the words using two 0s and two 1s are 0011, 0101, 0110, 1001, 1010, 1100. To obtain the Gaussian binomial coefficient $\binom{m}{r}_q$, each word is associated with a factor $q^d$, where d is the number of inversions of the word, where, in this case, an inversion is a pair of positions where the left of the pair holds the letter 1 and the right position holds the letter 0. With the example above, there is one word with 0 inversions, 0011, one word with 1 inversion, 0101, two words with 2 inversions, 0110 and 1001, one word with 3 inversions, 1010, and one word with 4 inversions, 1100. This is also the number of left-shifts of the 1s from the initial position. These correspond to the coefficients in $\binom{4}{2}_q = 1 + q + 2q^2 + q^3 + q^4$. Another way to see this is to associate each word with a path across a rectangular grid with height r and width m − r, going from the bottom left corner to the top right corner. The path takes a step right for each 0 and a step up for each 1. An inversion switches the directions of a step (right+up becomes up+right and vice versa), hence the number of inversions equals the area under the path. Balls into bins Let $B(n,m,r)$ be the number of ways of throwing r indistinguishable balls into m indistinguishable bins, where each bin can contain up to n balls. The Gaussian binomial coefficient can be used to characterize $B(n,m,r)$. Indeed, $B(n,m,r) = [q^r] \binom{n+m}{m}_q$, where $[q^r]P$ denotes the coefficient of $q^r$ in the polynomial $P$ (see also the Applications section below). Properties Reflection Like the ordinary binomial coefficients, the Gaussian binomial coefficients are center-symmetric, i.e., invariant under the reflection $r \to m - r$: $\binom{m}{r}_q = \binom{m}{m-r}_q$. In particular, $\binom{m}{0}_q = \binom{m}{m}_q = 1$. Limit at q = 1 The evaluation of a Gaussian binomial coefficient at $q = 1$ is $\binom{m}{r}_1 = \binom{m}{r}$, i.e. the sum of the coefficients gives the corresponding binomial value. Degree of polynomial The degree of $\binom{m}{r}_q$ is $r(m-r)$.
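A short Python sketch (not part of the article) makes the polynomial nature of these coefficients concrete: it builds $\binom{m}{r}_q$ as a coefficient list using the first q-analog of Pascal's identity discussed in the next section, and checks the q = 1 limit against the ordinary binomial coefficient.

# Gaussian binomial coefficients as integer-coefficient polynomials.
# A polynomial is a list p where p[i] is the coefficient of q^i.

def poly_add(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def poly_shift(a, k):
    """Multiply polynomial a by q^k."""
    return [0] * k + a

def gauss_binomial(m, r):
    if r < 0 or r > m:
        return [0]
    if r == 0 or r == m:
        return [1]
    # q-Pascal rule: [m r]_q = [m-1 r-1]_q + q^r * [m-1 r]_q
    return poly_add(gauss_binomial(m - 1, r - 1),
                    poly_shift(gauss_binomial(m - 1, r), r))

print(gauss_binomial(4, 2))       # [1, 1, 2, 1, 1] -> 1 + q + 2q^2 + q^3 + q^4
print(sum(gauss_binomial(4, 2)))  # 6 = C(4, 2), the q = 1 limit noted above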
q identities Analogs of Pascal's identity The analogs of Pascal's identity for the Gaussian binomial coefficients are: $\binom{m}{r}_q = q^r \binom{m-1}{r}_q + \binom{m-1}{r-1}_q$ and $\binom{m}{r}_q = \binom{m-1}{r}_q + q^{m-r} \binom{m-1}{r-1}_q$. When $q = 1$, these both give the usual binomial identity. We can see that as $q \to 1$, both equations remain valid. The first Pascal analog allows computation of the Gaussian binomial coefficients recursively (with respect to m) using the initial values $\binom{m}{0}_q = \binom{m}{m}_q = 1$, and also shows that the Gaussian binomial coefficients are indeed polynomials (in q). The second Pascal analog follows from the first using the substitution $r \to m - r$ and the invariance of the Gaussian binomial coefficients under the reflection $r \to m - r$. These identities have natural interpretations in terms of linear algebra. Recall that $\binom{m}{r}_q$ counts the r-dimensional subspaces $V \subset \mathbb{F}_q^m$, and let $\pi$ be a projection with one-dimensional nullspace $E$. The first identity comes from the bijection which takes $V$ to the subspace $\pi(V)$; in case $E \not\subset V$, the space $\pi(V)$ is r-dimensional, and we must also keep track of the linear function whose graph is $V$; but in case $E \subset V$, the space $\pi(V)$ is (r−1)-dimensional, and we can reconstruct $V$ without any extra information. The second identity has a similar interpretation, taking $V$ to $V \cap W$ for an (m−1)-dimensional space $W$, again splitting into two cases. Proofs of the analogs Both analogs can be proved by first noting that from the definition of $\binom{m}{r}_q$, we have: $\binom{m}{r}_q = \frac{1-q^m}{1-q^{m-r}} \binom{m-1}{r}_q$ (1) and $\frac{1-q^r}{1-q^{m-r}} \binom{m-1}{r}_q = \binom{m-1}{r-1}_q$ (2). As $1 - q^m = q^r(1-q^{m-r}) + (1-q^r)$, Equation (1) becomes: $\binom{m}{r}_q = q^r \binom{m-1}{r}_q + \frac{1-q^r}{1-q^{m-r}} \binom{m-1}{r}_q$, and substituting equation (2) gives the first analog. A similar process, using the decomposition $1 - q^m = (1-q^{m-r}) + q^{m-r}(1-q^r)$ instead, gives the second analog. q-binomial theorem There is an analog of the binomial theorem for q-binomial coefficients, known as the Cauchy binomial theorem: $\prod_{k=1}^{n} (1 + q^k t) = \sum_{k=0}^{n} q^{k(k+1)/2} \binom{n}{k}_q t^k$. Like the usual binomial theorem, this formula has numerous generalizations and extensions; one such, corresponding to Newton's generalized binomial theorem for negative powers, is $\prod_{k=0}^{n-1} \frac{1}{1 - q^k t} = \sum_{k=0}^{\infty} \binom{n+k-1}{k}_q t^k$. In the limit $n \to \infty$, these formulas yield $\prod_{k=1}^{\infty} (1 + q^k t) = \sum_{k=0}^{\infty} \frac{q^{k(k+1)/2} t^k}{(1-q)\cdots(1-q^k)}$ and $\prod_{k=0}^{\infty} \frac{1}{1 - q^k t} = \sum_{k=0}^{\infty} \frac{t^k}{(1-q)\cdots(1-q^k)}$. Setting $t = 1$ in the first identity and $t = q$ in the second gives the generating functions for partitions into distinct and any parts respectively. (See also Basic hypergeometric series.) Central q-binomial identity With the ordinary binomial coefficients, we have: $\sum_{k=0}^{n} \binom{n}{k}^2 = \binom{2n}{n}$. With q-binomial coefficients, the analog is: $\sum_{k=0}^{n} q^{k^2} \binom{n}{k}_q^2 = \binom{2n}{n}_q$. Applications Gauss originally used the Gaussian binomial coefficients in his determination of the sign of the quadratic Gauss sum. Gaussian binomial coefficients occur in the counting of symmetric polynomials and in the theory of partitions. The coefficient of $q^r$ in $\binom{n+m}{m}_q$ is the number of partitions of r with m or fewer parts each less than or equal to n. Equivalently, it is also the number of partitions of r with n or fewer parts each less than or equal to m. Gaussian binomial coefficients also play an important role in the enumerative theory of projective spaces defined over a finite field. In particular, for every finite field $\mathbb{F}_q$ with q elements, the Gaussian binomial coefficient $\binom{n}{k}_q$ counts the number of k-dimensional vector subspaces of an n-dimensional vector space over $\mathbb{F}_q$ (a Grassmannian). When expanded as a polynomial in q, it yields the well-known decomposition of the Grassmannian into Schubert cells. For example, the Gaussian binomial coefficient $\binom{n}{1}_q = 1 + q + \cdots + q^{n-1}$ is the number of one-dimensional subspaces in $(\mathbb{F}_q)^n$ (equivalently, the number of points in the associated projective space). Furthermore, when q is 1 (respectively −1), the Gaussian binomial coefficient yields the Euler characteristic of the corresponding complex (respectively real) Grassmannian. The number of k-dimensional affine subspaces of $\mathbb{F}_q^n$ is equal to $q^{n-k} \binom{n}{k}_q$.
This allows another interpretation of the identity as counting the (r − 1)-dimensional subspaces of (m − 1)-dimensional projective space by fixing a hyperplane, counting such subspaces contained in that hyperplane, and then counting the subspaces not contained in the hyperplane; these latter subspaces are in bijective correspondence with the (r − 1)-dimensional affine subspaces of the space obtained by treating this fixed hyperplane as the hyperplane at infinity. In the conventions common in applications to quantum groups, a slightly different definition is used; the quantum binomial coefficient there is . This version of the quantum binomial coefficient is symmetric under exchange of $q$ and $q^{-1}$. See also List of q-analogs References Exton, H. (1983), q-Hypergeometric Functions and Applications, New York: Halstead Press, Chichester: Ellis Horwood, 1983, , , (undated, 2004 or earlier). Ratnadha Kolhatkar, Zeta function of Grassmann Varieties (dated January 26, 2004) (2009). Q-analogs Factorial and binomial topics
Gaussian binomial coefficient
Mathematics
1,731
2,428,176
https://en.wikipedia.org/wiki/Congelation
Congelation (from Latin: , ) was a term used in medieval and early modern alchemy for the process known today as crystallization. In the ('The Secret of Alchemy') attributed to Khalid ibn Yazid (), it is one of "the four principal operations", along with Solution, Albification ('whitening'), and Rubification ('reddening'). It was one of the twelve alchemical operations involved in the creation of the philosophers' stone as described by Sir George Ripley () in his Compound of Alchymy, as well as by Antoine-Joseph Pernety in his Dictionnaire mytho-hermétique (1758). See also Alchemical process Magnum opus (alchemy) References Works cited Alchemical processes
Congelation
Chemistry
167
1,926,283
https://en.wikipedia.org/wiki/Czes%C5%82aw%20Ryll-Nardzewski
Czesław Ryll-Nardzewski (; 7 October 1926 – 18 September 2015) was a Polish mathematician. Life and career Born in Wilno, Second Polish Republic (now Vilnius, Lithuania), he was a student of Hugo Steinhaus. At the age of 26 he became a professor at Warsaw University. In 1959, he became a professor at the Wrocław University of Technology. He supervised 18 PhD theses. His main research areas were measure theory, functional analysis, foundations of mathematics and probability theory. Several theorems bear his name: the Ryll-Nardzewski fixed point theorem, the Ryll-Nardzewski theorem in model theory, and the Kuratowski and Ryll-Nardzewski measurable selection theorem. He became a member of the Polish Academy of Sciences in 1967. He died in 2015 at the age of 88 and is buried in Wrocław at . References Short biography at the Polish Academy of Sciences 1926 births 2015 deaths Model theorists Scientists from Vilnius People from Wilno Voivodeship (1926–1939) Polish mathematicians Recipients of the State Award Badge (Poland)
Czesław Ryll-Nardzewski
Mathematics
227
57,182,584
https://en.wikipedia.org/wiki/Skeletocutis%20falsipileata
Skeletocutis falsipileata is a species of poroid crust fungus in the family Polyporaceae. Found in Malaysia, it was first described by E.J.H. Corner in 1992 as a species of Tyromyces. Tsutomu Hattori transferred it to Skeletocutis in 2002. References Fungi described in 1992 Fungi of Asia falsipileata Fungus species
Skeletocutis falsipileata
Biology
85
15,985,233
https://en.wikipedia.org/wiki/Quantum%20Byzantine%20agreement
Byzantine fault tolerant protocols are algorithms that are robust to arbitrary types of failures in distributed algorithms. The Byzantine agreement protocol is an essential part of this task. The constant-time quantum version of the Byzantine protocol is described below. Introduction The Byzantine Agreement protocol is a protocol in distributed computing. It takes its name from a problem formulated by Lamport, Shostak and Pease in 1982, which itself is a reference to a historical problem. The Byzantine army was divided into divisions with each division being led by a General with the following properties: Each General is either loyal or a traitor to the Byzantine state. All Generals communicate by sending and receiving messages. There are only two commands: attack and retreat. All loyal Generals should agree on the same plan of action: attack or retreat. A small linear fraction of bad Generals should not cause the protocol to fail (less than a 1/3 fraction). (See for the proof of the impossibility result). The problem is usually equivalently restated in the form of a commanding General and loyal Lieutenants, with the General being either loyal or a traitor and the same for the Lieutenants, with the following properties. All loyal Lieutenants carry out the same order. If the commanding General is loyal, all loyal Lieutenants obey the order that he sends. A strictly less than 1/3 fraction, including the commanding General, are traitors. Byzantine failure and resilience Failures in an algorithm or protocol can be categorized into three main types: A failure to take another execution step in the algorithm: This is usually referred to as a "fail stop" fault. A random failure to execute correctly: This is called a "random fault" or "random Byzantine" fault. An arbitrary failure where the algorithm fails to execute the steps correctly (usually in a clever way by some adversary to make the whole algorithm fail) which also encompasses the previous two types of faults; this is called a "Byzantine fault". A Byzantine resilient or Byzantine fault tolerant protocol or algorithm is an algorithm that is robust to all the kinds of failures mentioned above. For example, given a space shuttle with multiple redundant processors, if the processors give conflicting data, which processors or sets of processors should be believed? The solution can be formulated as a Byzantine fault tolerant protocol. Sketch of the algorithm We will sketch here the asynchronous algorithm. The algorithm works in two phases: Phase 1 (Communication phase): All messages are sent and received in this round. A coin flipping protocol is a procedure that allows two parties A and B that do not trust each other to toss a coin to win a particular object. There are two types of coin flipping protocols: Weak coin flipping protocols: The two players A and B initially start with no inputs and they are to compute some value and be able to accuse anyone of cheating. The protocol is successful if A and B agree on the outcome. The outcome 0 is defined as A winning and 1 as B winning. The protocol has the following properties: If both players are honest (they follow the protocol), then they agree on the outcome of the protocol with for . If one of the players is honest (i.e., the other player may deviate arbitrarily from the protocol in his or her local computation), then the other party wins with probability at most .
A strong coin flipping protocol: In a strong coin flipping protocol, the goal is instead to produce a random bit which is biased away from any particular value 0 or 1. Clearly, any strong coin flipping protocol with bias ε leads to weak coin flipping with the same bias. Verifiable secret sharing A verifiable secret sharing protocol: An (n,k) secret sharing protocol allows a set of n players to share a secret s such that only a quorum of k or more players can discover the secret. The player sharing the secret (distributing the secret pieces) is usually referred to as the dealer. A verifiable secret sharing protocol differs from a basic secret sharing protocol in that players can verify that their shares are consistent even in the presence of a malicious dealer. The fail-stop protocol Protocol quantum coin flip for player k: Round 1: Generate the GHZ state on n qubits and send the k-th qubit to the k-th player, keeping one part. Generate the state on n qudits (quantum-computing components consisting of multiple qubits), an equal superposition of the numbers between 1 and . Distribute the qudits between all the players. Receive the quantum messages from all players and wait for the next communication round, thus forcing the adversary to choose which messages were passed. Round 2: Measure (in the standard basis) all qubits received in round 1. Select the player with the highest leader value (ties broken arbitrarily) as the "leader" of the round. Measure the leader's coin in the standard basis. Set the output of the QuantumCoinFlip protocol to the measurement outcome of the leader's coin. The Byzantine protocol To generate a random coin, assign an integer in the range [0,n−1] to each player; each player is not allowed to choose its own random ID, as each player selects a random number for every other player and distributes this using a verifiable secret sharing scheme. At the end of this phase players agree on which secrets were properly shared, the secrets are then opened and each player is assigned the value . This requires private information channels, so we replace the random secrets by the superposition , in which the state is encoded using a quantum verifiable secret sharing protocol (QVSS). We cannot distribute the state since the bad players can collapse the state. To prevent bad players from doing so we encode the state using the quantum verifiable secret sharing (QVSS) and send each player their share of the secret. Here again the verification requires Byzantine Agreement, but replacing the agreement by the grade-cast protocol is enough. Grade-cast protocol A grade-cast protocol has the following properties. Informally, a graded broadcast protocol is a protocol with a designated player called the "dealer" (the one who broadcasts) such that: If the dealer is good, all the players get the same message. Even if the dealer is bad, if some good player accepts the message, all the good players get the same message (but they may or may not accept it). A protocol P is said to achieve graded broadcast if, at the beginning of the protocol, a designated player D (called the dealer) holds a value v, and at the end of the protocol, every player i outputs a pair (v_i, g_i) such that the following properties hold: If D is honest, then v_i = v and g_i = 2 for every honest player i. For any two honest players i and j, |g_i − g_j| ≤ 1. (Consistency) For any two honest players i and j, if g_i > 0 and g_j > 0, then v_i = v_j.
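The leader-election structure of the coin-flip rounds above can be illustrated with a purely classical Python toy model. This is only a sketch of the honest-case logic: the GHZ-state preparation, the qudit superpositions, and the QVSS layer are omitted entirely, so it captures none of the protocol's security against faulty players.

import random

def coin_flip_round(n_players: int) -> int:
    """Classical stand-in: highest leader value wins; that player's coin is the output."""
    leader_values = [random.randrange(n_players ** 3) for _ in range(n_players)]
    coins = [random.randrange(2) for _ in range(n_players)]
    leader = max(range(n_players), key=lambda i: leader_values[i])
    return coins[leader]

# With all players honest, the output is close to a fair coin.
trials = 100_000
heads = sum(coin_flip_round(7) for _ in range(trials))
print(heads / trials)  # approximately 0.5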
The verification stage of the QVSS protocol guarantees that for a good dealer the correct state will be encoded, and that for any, possibly faulty, dealer some particular state will be recovered during the recovery stage. We note that for the purpose of our Byzantine quantum coin flip protocol the recovery stage is much simpler. Each player measures his share of the QVSS and sends the classical value to all other players. The verification stage guarantees, with high probability, that in the presence of up to the tolerated number of faulty players all the good players will recover the same classical value (which is the same value that would result from a direct measurement of the encoded state). Remarks In 2007, a quantum protocol for Byzantine Agreement was demonstrated experimentally using a four-photon polarization-entangled state. This shows that the quantum implementation of classical Byzantine Agreement protocols is indeed feasible. References Quantum information science Cryptography Distributed computing problems Fault tolerance Engineering failures Theory of computation
Quantum Byzantine agreement
Mathematics,Technology,Engineering
1,590
48,870,875
https://en.wikipedia.org/wiki/Wolf%201061d
Wolf 1061d is an exoplanet orbiting the red dwarf star Wolf 1061 in the Ophiuchus constellation, about 13.8 light years from Earth. It is the third and furthest planet in order from its host star in a triple planetary system, and has an orbital period of about 217 days. Characteristics Mass, Radius, and Temperature Wolf 1061d is known as a Super-Earth, with a mass significantly greater than that of Earth, but less than the ice giants Uranus and Neptune. However, because it was found with the radial velocity method and does not transit, only the planet's minimum mass is known. Wolf 1061d is at least 7.70 times the mass of Earth, close to the upper limit of the Super-Earth range. If the planet is rocky, it would have to be around 1.7 times the radius of Earth. For a more likely mixed composition of both rock and volatiles, Wolf 1061d would be at least 2.2 times the radius of Earth. The planet is one of the coldest known Super-Earths, with a low calculated equilibrium temperature. This is too cold for liquid water and would mean that the planet is entirely coated in ice, depending on its true composition. However, due to the planet's eccentric orbit, the temperature varies widely over the course of its year. Orbit The orbit of Wolf 1061d is one of the longest observed for a Super-Earth in orbit around a red dwarf, as well as very eccentric. The average distance between the planet and the star (the semi-major axis) is 0.470 AU, or 47% the distance between the Earth and the Sun. This is comparable to Mercury's semi-major axis of 0.387 AU. Unlike Mercury, Wolf 1061d takes 217.21 days to orbit its host star, compared to Mercury's orbital period of 88 days. It also has a much higher orbital eccentricity of 0.55, nearly twice as high as that of Mercury. Wolf 1061d swings from as close as 0.2115 AU to as far as 0.7285 AU during the course of its year. Host Star Wolf 1061d orbits the M3.5V red dwarf Wolf 1061, which is one of the nearest stars to Earth at a distance of just under 14 light years. It is 0.294 times the radius and about 0.307 times the mass of the Sun, with a temperature of 3342 K and an unknown age. For comparison, the Sun has a temperature of 5778 K and an age of 4.5 billion years. Wolf 1061 is about 1% as luminous as the Sun. Habitability Although, given the low stellar flux, the presence of additional greenhouse gases like methane or even hydrogen gas might make Wolf 1061d marginally habitable, the high probability of Wolf 1061d being a mini-Neptune, combined with questionable prospects of being habitable for Earth-like life in the conventional sense, makes this exoplanet a poor candidate for being "potentially habitable". A re-analysis of the system in March 2017 showed that Wolf 1061d is likely non-habitable; it is too far from its star and more likely to be a mini-Neptune, with a minimum mass almost 8 times that of Earth. Even at its closest approach, Wolf 1061d never enters the system's habitable zone. See also List of exoplanets References External links Simulated view of the Wolf 1061 system. Video created by the University of New South Wales Wolf 1061 Exoplanets discovered in 2015 Super-Earths Ophiuchus Exoplanets detected by radial velocity
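The quoted closest and furthest distances follow directly from the semi-major axis and eccentricity, and the orbital period can be cross-checked with Kepler's third law. A short Python sketch using only values stated above (the small discrepancy in the period reflects rounding in the inputs and the neglected planetary mass):

```python
import math

a = 0.470       # semi-major axis in AU (from the article)
e = 0.55        # orbital eccentricity
m_star = 0.307  # stellar mass in solar masses

periapsis = a * (1 - e)  # closest approach  -> 0.2115 AU
apoapsis = a * (1 + e)   # furthest distance -> 0.7285 AU

# Kepler's third law with P in years, a in AU, and M in solar masses:
# P^2 = a^3 / M (planet mass neglected)
period_days = math.sqrt(a**3 / m_star) * 365.25

print(f"periapsis: {periapsis:.4f} AU, apoapsis: {apoapsis:.4f} AU")
print(f"estimated period: {period_days:.1f} days")  # ~212 vs. 217.21 measured
```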
Wolf 1061d
Astronomy
746
48,443,889
https://en.wikipedia.org/wiki/Hygrocybe%20nigrescens
Hygrocybe nigrescens, commonly known as the blackening wax-cap, is a mushroom of the waxcap genus Hygrocybe. It is found in Europe and Africa. It has been treated as the variety nigrescens of Hygrocybe conica. See also List of Hygrocybe species References External links Fungi described in 1883 Fungi of Europe cantharellus Fungus species
Hygrocybe nigrescens
Biology
94
40,779,333
https://en.wikipedia.org/wiki/Hibifolin
Hibifolin is a flavonol glycoside that prevents beta-amyloid-induced neurotoxicity in vitro. References Flavonol glycosides
Hibifolin
Chemistry
41
41,912,043
https://en.wikipedia.org/wiki/Manin%20conjecture
In mathematics, the Manin conjecture describes the conjectural distribution of rational points on an algebraic variety relative to a suitable height function. It was proposed by Yuri I. Manin and his collaborators in 1989 when they initiated a program with the aim of describing the distribution of rational points on suitable algebraic varieties. Conjecture Their main conjecture is as follows. Let V be a Fano variety defined over a number field F, let H be a height function which is relative to the anticanonical divisor, and assume that V(F) is Zariski dense in V. Then there exists a non-empty Zariski open subset U ⊂ V such that the counting function of F-rational points of bounded height, defined by N_{U,H}(B) = #{x ∈ U(F) : H(x) ≤ B} for B ≥ 1, satisfies N_{U,H}(B) ~ c B (log B)^(ρ−1) as B → ∞. Here ρ is the rank of the Picard group of V and c is a positive constant which later received a conjectural interpretation by Peyre. Manin's conjecture has been decided for special families of varieties, but is still open in general. References Conjectures Diophantine geometry Unsolved problems in number theory
Manin conjecture
Mathematics
205
57,451,363
https://en.wikipedia.org/wiki/Charlotte%20Williams
Charlotte Williams is a British scientist who holds the Professorship of Inorganic Chemistry at the University of Oxford. Her research focuses on the synthesis of novel catalysts with an expertise in organometallic chemistry and polymer materials chemistry. Early life and education Williams studied chemistry at Imperial College London, graduating with a Bachelor's degree in chemistry. She completed a PhD with Vernon C. Gibson and Nick Long. Research and career Williams joined the University of Cambridge as a postdoctoral research associate working with Andrew Bruce Holmes and Richard Friend. Here she focused on the synthesis of electroactive polymers. She then moved to the University of Minnesota, working in the group of Marc Hillmyer and William Tolman on zinc catalysis. In 2003 Williams was appointed to Imperial College London as a lecturer. She was appointed a Senior Lecturer in 2007, a Reader in 2009 and a Professor in 2012. Here she developed sugar-based biodegradable polymers that were produced from lignocellulosic biomass. During her time at Imperial she was an inventor of several granted patents. She joined Trinity College, Oxford, in 2016. Her research focuses on metal complexes for use in homogeneous polymerisation catalysis. She identified catalysts that could use carbon dioxide as a raw material for polymers, which prompted Williams to start Econic Technologies. Econic Technologies has received more than £13 million in funding. She also identified transition metal complex catalysts, biorenewable polymers and liquid fuel production. She has developed switchable catalysts that allow the combination of monomers into block copolymers. Working with Milo Shaffer at Imperial College London, Williams uses nanoparticles in polymer composites. She is a member of the London Centre for Nanotechnology. She appears regularly in the media, including on BBC Radio 4's In Our Time and at museums and festivals. In 2015 she won the WISE Campaign research award for her eco-plastics start-up. In June 2024, Williams was appointed to the Professorship of Inorganic Chemistry, one of the five Statutory Professorships in Chemistry at the University of Oxford and took up a fellowship of St Catherine's College. Honours and awards 2021 Royal Society Elected fellow 2018 Gesellschaft Deutscher Chemiker Otto Roelen Medal 2017 UK Catalysis Hub Sir John Meurig Thomas Catalysis Medal 2016 Royal Society of Chemistry Corday-Morgan Prize 2015 WISE Campaign Research Award 2011 Bio-Environmental Polymer Society Outstanding Young Scientist Award 2009 Royal Society of Chemistry Energy, Environment and Sustainability Early Career Award 2005 Royal Society of Chemistry Meldola Medal and Prize 2001 Royal Society of Chemistry Laurie Vergnano Award Williams was appointed Officer of the Order of the British Empire (OBE) in the 2020 Birthday Honours for services to chemistry. References 21st-century British scientists 21st-century British women scientists British women chemists Inorganic chemists Alumni of Imperial College London Officers of the Order of the British Empire Fellows of the Royal Society Statutory Professors of the University of Oxford
Charlotte Williams
Chemistry
592
25,175,105
https://en.wikipedia.org/wiki/Genetically%20modified%20bacteria
Genetically modified bacteria were the first organisms to be modified in the laboratory, due to their simple genetics. These organisms are now used for several purposes, and are particularly important in producing large amounts of pure human proteins for use in medicine. History The first example of this occurred in 1978 when Herbert Boyer, working at a University of California laboratory, took a version of the human insulin gene and inserted it into the bacterium Escherichia coli to produce synthetic "human" insulin. Four years later, it was approved by the U.S. Food and Drug Administration. Research Bacteria were the first organisms to be genetically modified in the laboratory, due to the relative ease of modifying their chromosomes. This ease made them important tools for the creation of other GMOs. Genes and other genetic information from a wide range of organisms can be added to a plasmid and inserted into bacteria for storage and modification. Bacteria are cheap, easy to grow, clonal, multiply quickly, are relatively easy to transform, and can be stored at -80 °C almost indefinitely. Once a gene is isolated it can be stored inside the bacteria, providing an unlimited supply for research. The large number of custom plasmids makes manipulating DNA excised from bacteria relatively easy. Their ease of use has made them great tools for scientists looking to study gene function and evolution. Most DNA manipulation takes place within bacterial plasmids before being transferred to another host. Bacteria are the simplest model organism and most of our early understanding of molecular biology comes from studying Escherichia coli. Scientists can easily manipulate and combine genes within the bacteria to create novel or disrupted proteins and observe the effect this has on various molecular systems. Researchers have combined the genes from bacteria and archaea, leading to insights on how these two diverged in the past. In the field of synthetic biology, they have been used to test various synthetic approaches, from synthesizing genomes to creating novel nucleotides. Food Bacteria have been used in the production of food for a very long time, and specific strains have been developed and selected for that work on an industrial scale. They can be used to produce enzymes, amino acids, flavourings, and other compounds used in food production. With the advent of genetic engineering, new genetic changes can easily be introduced into these bacteria. Most food-producing bacteria are lactic acid bacteria, and this is where the majority of research into genetically engineering food-producing bacteria has gone. The bacteria can be modified to operate more efficiently, reduce toxic byproduct production, increase output, create improved compounds, and remove unnecessary pathways. Food products from genetically modified bacteria include alpha-amylase, which converts starch to simple sugars, chymosin, which clots milk protein for cheese making, and pectinesterase, which improves fruit juice clarity. In cheese Chymosin is an enzyme produced in the stomach of young ruminant mammals to digest milk. The digestion of milk proteins via enzymes is essential to cheesemaking. The species Escherichia coli and Bacillus subtilis can be genetically engineered to synthesise and excrete chymosin, providing a more efficient means of production.
The use of bacteria to synthesise chymosin also provides a vegetarian method of cheesemaking, as previously, young ruminants (typically calves) had to be slaughtered to extract the enzyme from the stomach lining. Industrial Genetically modified bacteria are used to produce large amounts of proteins for industrial use. Generally the bacteria are grown to a large volume before the gene encoding the protein is activated. The bacteria are then harvested and the desired protein purified from them. The high cost of extraction and purification has meant that only high value products have been produced at an industrial scale. Pharmaceutical production The majority of the industrial products from bacteria are human proteins for use in medicine. Many of these proteins are impossible or difficult to obtain via natural methods and they are less likely to be contaminated with pathogens, making them safer. Prior to recombinant protein products, several treatments were derived from cadavers or other donated body fluids and could transmit diseases. Indeed, transfusion of blood products had previously led to unintentional infection of haemophiliacs with HIV or hepatitis C; similarly, treatment with human growth hormone derived from cadaver pituitary glands may have led to outbreaks of Creutzfeldt–Jakob disease. The first medicinal use of GM bacteria was to produce the protein insulin to treat diabetes. Other medicines produced include clotting factors to treat haemophilia, human growth hormone to treat various forms of dwarfism, interferon to treat some cancers, erythropoietin for anemic patients, and tissue plasminogen activator which dissolves blood clots. Outside of medicine they have been used to produce biofuels. There is interest in developing an extracellular expression system within the bacteria to reduce costs and make the production of more products economical. Health With greater understanding of the role that the microbiome plays in human health, there is the potential to treat diseases by genetically altering the bacteria to, themselves, be therapeutic agents. Ideas include altering gut bacteria so they destroy harmful bacteria, or using bacteria to replace or increase deficient enzymes or proteins. One research focus is to modify Lactobacillus, bacteria that naturally provide some protection against HIV, with genes that will further enhance this protection. The bacteria which generally cause tooth decay have been engineered to no longer produce tooth-corroding lactic acid. These transgenic bacteria, if allowed to colonize a person's mouth, could perhaps reduce the formation of cavities. Transgenic microbes have also been used in recent research to kill or hinder tumors, and to fight Crohn's disease. If the bacteria do not form colonies inside the patient, the person must repeatedly ingest the modified bacteria in order to get the required doses. Enabling the bacteria to form a colony could provide a more long-term solution, but could also raise safety concerns as interactions between bacteria and the human body are less well understood than with traditional drugs. One example of such an intermediate, which only forms short-term colonies in the gastrointestinal tract, may be Lactobacillus acidophilus MPH734. This is used as a specific in the treatment of lactose intolerance.
This genetically modified version of Lactobacillus acidophilus bacteria produces a missing enzyme called lactase which is used for the digestion of lactose found in dairy products or, more commonly, in food prepared with dairy products. The short term colony is induced over a one-week, 21-pill treatment regimen, after which the temporary colony can produce lactase for three months or more before it is removed from the body by natural processes. The induction regimen can be repeated as often as necessary to maintain protection from the symptoms of lactose intolerance, or discontinued with no consequences, except the return of the original symptoms. There are concerns that horizontal gene transfer to other bacteria could have unknown effects. As of 2018 there are clinical trials underway testing the efficacy and safety of these treatments. Agriculture For over a century bacteria have been used in agriculture. Crops have been inoculated with Rhizobia (and more recently Azospirillum) to increase their production or to allow them to be grown outside their original habitat. Application of Bacillus thuringiensis (Bt) and other bacteria can help protect crops from insect infestation and plant diseases. With advances in genetic engineering, these bacteria have been manipulated for increased efficiency and expanded host range. Markers have also been added to aid in tracing the spread of the bacteria. The bacteria that naturally colonise certain crops have also been modified, in some cases to express the Bt genes responsible for pest resistance. Pseudomonas strains of bacteria cause frost damage by nucleating water into ice crystals around themselves. This led to the development of ice-minus bacteria, which have the ice-forming genes removed. When applied to crops they can compete with the ice-plus bacteria and confer some frost resistance. Other uses Other uses for genetically modified bacteria include bioremediation, where the bacteria are used to convert pollutants into a less toxic form. Genetic engineering can increase the levels of the enzymes used to degrade a toxin or to make the bacteria more stable under environmental conditions. GM bacteria have also been developed to leach copper from ore, clean up mercury pollution and detect arsenic in drinking water. Bioart has also been created using genetically modified bacteria. In the 1980s artist Joe Davis and geneticist Dana Boyd converted the Germanic symbol for femininity (ᛉ) into binary code and then into a DNA sequence, which was then expressed in Escherichia coli. This was taken a step further in 2012, when a whole book was encoded onto DNA. Paintings have also been produced using bacteria transformed with fluorescent proteins. Bacteria-synthesized transgenic products Insulin Hepatitis B vaccine Tissue plasminogen activator Human growth hormone Ice-minus bacteria Interferon Bt corn References Further reading Genetically modified organisms Bacteria
Genetically modified bacteria
Engineering,Biology
1,864
810,626
https://en.wikipedia.org/wiki/MOD%20%28file%20format%29
MOD is a computer file format used primarily to represent music, and was the first module file format. MOD files use the ".MOD" file extension, except on the Amiga, which doesn't rely on filename extensions; instead, it reads a file's header to determine filetype. A MOD file contains a set of instruments in the form of samples, a number of patterns indicating how and when the samples are to be played, and a list of what patterns to play in what order. History The first version of the format was created by Karsten Obarski for use in the Ultimate Soundtracker, tracker software released for the Amiga computer in 1987. The format has since been supported by hundreds of playback programs and dozens of other trackers. The original version of the MOD format featured four channels of simultaneous audio playback, corresponding to the capabilities of the original Amiga chipset, and up to 15 instruments. Later variations of the format have extended this to up to 32 channels and 31 instruments. The format was designed to be directly playable on the Amiga without additional processing: for example, samples are stored in 8-bit PCM format ready to be played on the Amiga DACs, and pattern data is not packed. Playback required very little CPU time on an Amiga, and many games used MOD files for their background music. A common misconception is that the magic number "M.K." at offset 0x438 of MOD files represents the initials of Mahoney and Kaktus, two prominent Amiga demomakers at the time, who played an important part in the popularity of the format. The letters in fact stand for the initials of Michael Kleps a.k.a. Unknown / DOC, another developer of the format. After the Amiga's production ceased, the MOD format has had continued popularity in the Demoscene and as background music for independent video games and Chiptunes. It is not uncommon to hear MOD music in keygens either. Format overview A pattern is typically represented in a sequencer user interface as a table with one column per channel, thus having four columns – one for each Amiga hardware channel. Each column has 64 rows. A cell in the table can cause one of several actions to happen on its column's channel when its row's time is reached: Start an instrument playing a new note in this channel at a given volume, possibly with a special effect applied on it Change the volume or special effect being applied to the current note Change pattern flow; jump to a specific song or pattern position or loop inside a pattern Do nothing; any existing note playing in this channel will continue to play An instrument is a single sample along with an optional indication of which portion of the sample can be repeated to hold a sustained note. Timing In the original MOD file the minimum time frame was 0.02 seconds, or a "vertical blanking" (VSync) interval, because the original software used the VSync timing of the monitor running at 50 Hz (for PAL) or 60 Hz (for NTSC) for timing. The rate at which pattern data is played is defined by a speed setting. Each row in the pattern data lasts one vertical blanking (or 0.02 seconds) times the current speed setting. The speed setting varied from 1 to 255. In later versions of the format, the vertical blanking was replaced with an adjustable time period staying in the range [0.01, 0.078] seconds. The old speed setting command was replaced with a new one that was used to change both the old speed setting and the new adjustable time period.
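A minimal Python sketch of these timing rules, including the overloaded speed command whose compatibility problems are described in the next paragraph. The 2.5/tempo mapping for the adjustable tick period is the common ProTracker-style convention and is an assumption here, as are the helper names; note that 2.5/255 ≈ 0.0098 s and 2.5/32 ≈ 0.078 s, which matches the quoted [0.01, 0.078] second range:

```python
DEFAULT_SPEED = 6    # ticks per row (a common tracker default)
DEFAULT_TICK = 0.02  # seconds: the PAL vertical-blanking interval

def tick_period(tempo):
    # Assumed ProTracker-style mapping for the adjustable tick period.
    # Tempo 125 gives 2.5 / 125 = 0.02 s, the original vblank interval.
    return 2.5 / tempo

def handle_speed_command(value, state):
    # The overloaded speed command: values 1-31 set the ticks-per-row
    # speed, while values 32-255 set the tempo (tick period) instead.
    if 1 <= value <= 31:
        state["speed"] = value
    else:
        state["tick"] = tick_period(value)

def row_duration(state):
    # One pattern row lasts speed * tick_period seconds.
    return state["speed"] * state["tick"]

state = {"speed": DEFAULT_SPEED, "tick": DEFAULT_TICK}
print(row_duration(state))            # 0.12 s per row at the defaults
handle_speed_command(3, state)        # interpreted as speed: 3 ticks/row
handle_speed_command(125, state)      # interpreted as tempo: 0.02 s ticks
print(round(row_duration(state), 3))  # 0.06 s per row
```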
Unfortunately, some of the old functionality was broken, because the new speed setting command had an identical code value to the old command. Values in the range [1, 31] were interpreted as the old speed settings, but other values were regarded as modifications to the adjustable time period. Hence, values in the range [32, 255] used in some old songs broke in new versions of the player. Further information on the MOD format can be found at the alt.binaries.sounds.mods FAQ. Other formats that use the MOD extension MOD is the file extension for several other applications: The video file format used on many digital camcorders, such as the JVC Everio, the Canon FS100 and the Panasonic D-Snap SD-card camcorders. Game modules in Neverwinter Nights. AMPL model files. Old phpBB modification templates. Module files in Femap The extension for the binary variant of the Wavefront .obj format. The extension for some games using the Vassal game engine. The extension for Fortran module files. The extension for legacy Visual Basic module files, for versions before the release of Visual Basic .NET. The extension for Go module files, used for package versioning. Module for ABB Robotics IRC5 and S4 robot controllers. Contains robotic motion programs written in the language RAPID. Lanner WITNESS simulation software model files Paradox Development Studio uses a ".MOD" format for user-created modifications of their games. DND adventure modules for Fantasy Grounds, a virtual tabletop application. GNU GRUB boot modules (when found in /boot) See also Module file Tracker (music software) Mod (disambiguation) MOD and TOD (video format) List of Amiga music format players MIDI References External links Noisetracker/Soundtracker/Protracker Module Format - 3rd Revision Noisetracker/Soundtracker/Protracker Module Format - 4th Revision Music data index site Modarchive Amiga music formats AmigaOS Module file formats Video game music file formats
MOD (file format)
Technology
1,146
27,723,210
https://en.wikipedia.org/wiki/BRCT%20domain
The BRCA1 C Terminus (BRCT) domain is a protein domain found in a family of evolutionarily related proteins. It is named after the C-terminal domain of BRCA1, a DNA-repair protein that serves as a marker of breast cancer susceptibility. The BRCT domain is found predominantly in proteins involved in cell cycle checkpoint functions responsive to DNA damage, for example as found in the breast cancer DNA-repair protein BRCA1. The domain is an approximately 100 amino acid tandem repeat, which appears to act as a phospho-protein binding domain. Examples Human proteins containing this domain include: BARD1; BRCA1; CTDP1; DNTT (TDT); ECT2; LIG4; MCPH1; MDC1; NBN; PARP1; PARP4; PAXIP1; PES1; REV1; RFC1; TOPBP1; TP53BP1; XRCC1 References External links Protein domains
BRCT domain
Chemistry,Biology
198
26,394,107
https://en.wikipedia.org/wiki/43P/Wolf%E2%80%93Harrington
43P/Wolf–Harrington is a periodic comet discovered on December 22, 1924, by Max Wolf in Heidelberg, Germany. In 2019 it made a close approach to Jupiter, which lifted the perihelion point and increased the orbital period to 9 years. During the 1997 apparition the comet reached an apparent magnitude slightly brighter than 12. The comet had an unfavorable apparition in 2010 because, during perihelion (closest approach to the Sun), the comet was only 10 degrees from the Sun as seen from Earth. The comet did not become more favorably positioned in the sky until mid-October 2010. The comet nucleus is estimated to be 3.6 kilometers in diameter. References External links Elements and Ephemeris for 43P/Wolf–Harrington – Minor Planet Center 43P/Wolf–Harrington at the Minor Planet Center's Database 43P/Wolf-Harrington (2010) – Seiichi Yoshida @ aerith.net (with pictures taken by different astronomers around the world) 43P/Wolf-Harrington on 2011-Apr-13 (mag 17.4N) C. Bell with a 12" (0.3-m) Schmidt-Cassegrain Periodic comets 0043 043P 043P 19241222
43P/Wolf–Harrington
Astronomy
258
47,762,063
https://en.wikipedia.org/wiki/Conservation%20and%20restoration%20of%20clocks
The conservation and restoration of clocks refers to the care given to the physical and functional aspects of time measuring devices featuring "moving hands on a dial face" exclusive of watches. Care for clocks constitutes regulating the external environment, cleaning, winding, lubrication, pest-management, and repairing or replacing mechanical and aesthetic components to preserve or achieve the desired state as specified by the owner. Clocks are typically composed of multiple types of materials such as wood, metal, paint, plastic, etc., which have unique behaviors and environmental interactions, making treatment options complex. The materials used and the complexity of clockwork warrant having a horological conservator complete the work. Conservation and restoration Interventive actions can be taken by trained conservators when clock issues arise, with treatment varying depending on the type of clock and situation. Horological conservation history The history of clock conservation dates back to ancient times. Horology, the study of the measurement of time, dates to 1450 BC, "when the Ancient Egyptians first observed the earth's natural circadian rhythms" (Meadows, C., n.d.). Some examples of instruments used to measure time include clocks, watches, sundials, hourglasses, time recorders, and atomic clocks. Horological conservation is the science and art of preserving timepieces to connect humanity with the history and heritage of timepieces. Conservators A conservator who specializes in clock care will have the qualifications and training to properly treat a clock with the wherewithal to "[...] not totally compromise the historical worth of the object [...]." Conservators rely on a system of examination, documentation, and research prior to treatment so that proper treatment is received, as "Sometimes the only way one can understand the detailed history, quirks, and specific eccentricities common to a particular timepiece is by studying it to glean as much information from it as possible [...]." Aside from pinpointing clock issues and providing care to a clock in need, conservators are also able to impart knowledge about procedures like clock winding to ill-informed clock owners so that future clock damage due to human error may be avoided. Conservators are encouraged to use restraint when restoring objects due to ethical concerns and only make necessary repairs rather than altering the appearance of objects such as clocks for aesthetic preferences. One exception to this dilemma is conservation done to correct previous treatments, which scholarship and investigation have proven to be dishonest or poorly-conceived efforts that took an object further away from its original form. Preventive conservation Preventive conservation involves proper handling, creating and maintaining suitable storage and display environments, and regular maintenance to prevent issues from arising and impacting a clock's sustainability. Such controls can extend the life of a clock. Correct handling Handling is potentially a major source of damage to any heritage item. Any handling of an object should be planned in advance and requires some precautions. Certain measures taken by a clock handler can prevent opportunities for damage. Work areas should be clean and arranged to support the safety of the clock: uncluttered, free of any potential for the clock to be bumped, jostled, or moved unnecessarily, and secure from theft. Wearing "[...]
cotton or Nitrile gloves when handling the metal portions of the clock" can limit opportunities for transfer of contaminants from hands to clock surfaces. Loose items such as jewelry and loose clothing like scarves or bulky jackets worn by a handler have the potential to come into contact with, and cause damage to, a clock. Removal of these items prior to clock handling can lower the risk of damage. Pendulum damage during clock movement can be prevented by either removing the pendulum or fixing it within the interior of the clock via a latch or padding prior to clock movement. This can also prevent damage to other interior components. When moving or handling weight-driven clocks, "it is advisable to wait until the clock has wound down before it is relocated. Once it has stopped running, carefully remove the weights and pendulum". A clock "[...] should always be grasped at its most sturdy area" and moved from one location to the next on its back. Matching the number of people moving a clock to its size can ensure the safety of the clock. A small mantel clock, for example, may only require one mover, while a tall clock can necessitate a number of movers to safely carry it to another location. Proper storage and display Proper storage and display mechanisms can work to ensure a clock's safety. For safety and mechanical reasons, conservators securely fix long case clocks and wall clocks to a wall to avoid accidents and to ensure the clocks are able to run properly. The use of a solid, sturdy surface when storing or displaying a clock can prevent it from falling onto a hard surface. A wall clock for example requires a secure attachment "[...] to a wall if accidents are to be avoided and clocks are to run properly." Clock cases, which come in many different materials, require special consideration and treatment in their own right. Conservators further advise not exposing a clock to a heat source of any kind, including strong sunlight or a mantel over a working fire, as this can cause damage to the clock cases and movements. Motion sensitive lighting and use of non-direct light sources can limit the amount of light and heat withstood by a clock, both of which are sources of clock damage. A clean clock environment can eliminate opportunities for dangerous contaminants to come into contact with a clock. An HVAC system can add an extra level of security by removing such contaminants from the air. Correct humidity and temperature levels The humidity and temperature of the environment in which a clock is being displayed or stored can adversely affect the clock's condition. Damage may be avoided by maintaining certain levels of humidity and temperature within the clock environment, dependent on the materials from which the clock is constructed. a. Wood clocks The ideal humidity and temperature ranges for clocks whose cases are made from wood vary depending on the season. In the summer, an environment with a temperature of 70 to 75 degrees Fahrenheit and 40% to 60% relative humidity is suitable. During the winter, an environment with a temperature of 70 degrees Fahrenheit and 35% to 50% relative humidity is prime. b. Metal clocks Clocks whose cases are made from metal function well in an environment with a temperature of 68 degrees Fahrenheit and a relative humidity of 30%. Running clocks vs. static display Clocks can be simple display objects if they are not required to run. This reduces the physical force on the clock.
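The recommended temperature and humidity ranges quoted above lend themselves to a simple automated check. A hypothetical Python helper (the function and table names are invented, and the single published values for wood clocks in winter and for metal clocks are treated as exact targets):

```python
# Ranges quoted above, in degrees Fahrenheit and % relative humidity.
RECOMMENDED = {
    ("wood", "summer"): {"temp_f": (70, 75), "rh_pct": (40, 60)},
    ("wood", "winter"): {"temp_f": (70, 70), "rh_pct": (35, 50)},
    ("metal", "any"):   {"temp_f": (68, 68), "rh_pct": (30, 30)},
}

def check_environment(material, season, temp_f, rh_pct):
    # Return a list of warnings if the logged conditions fall outside
    # the recommended display/storage ranges for the clock's material.
    spec = RECOMMENDED[(material, season if material == "wood" else "any")]
    warnings = []
    if not spec["temp_f"][0] <= temp_f <= spec["temp_f"][1]:
        warnings.append(f"temperature {temp_f} F outside {spec['temp_f']}")
    if not spec["rh_pct"][0] <= rh_pct <= spec["rh_pct"][1]:
        warnings.append(f"humidity {rh_pct}% outside {spec['rh_pct']}")
    return warnings

print(check_environment("wood", "summer", 78, 55))
# ['temperature 78 F outside (70, 75)']
print(check_environment("metal", None, 68, 30))
# []
```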
Whereas "running any functional object will result in wear and handling, which contributes to the degradation of the parts. Replacing the inner workings of a clock still requires handling, which could potentially damage the clock. It is important to note that conserving a non-functional clock is much less invasive (cheaper!) and preventive protection against corrosion and environmental hazards is the main concern for its stabilization and care.[viii] Perhaps the least invasive solution to highlight a functional clock would be to purchase a replica clock and safely store the original". Running clocks also require winding, which adds to the wear-and-tear of the gears and leads to more handling of the object. Regular clock maintenance Regular maintenance of a clock can ensure its long-term preservation. Conservation experts advise clocks need to be serviced regularly. A clock is a complex mechanical contraption made of a variety of materials and with many small moving parts. Even under perfect conditions lubricants deteriorate. Clocks should be examined and re-lubricated every three years. After five years, and certainly no longer than eight years, the whole movement is advised to be dismantled and 'cleaned' if excessive wear and expensive attention is to be avoided Visual inspection can determine whether a clock should be cleaned and/or lubricated, whether any signs of infestation and/or damage exist, and whether a clock should be wound and/or set to the correct time. Regular maintenance also applies to the environment surrounding the clock to ensure that the correct storage conditions are present. a. Cleaning, polishing, and lubrication Regular clock surface dusting can negate opportunities for corrosion or abrasion as a result of dust buildup. It should be stated that such surface cleaning can also result in a loss of information about a clock's history, as "[...] various "dirt" or "salt" deposits can provide precious clues to an objects past [...]." Loss of clock function due to clockwork wear can be avoided through regular clockwork cleaning, polishing, and lubrication performed by a trained professional. In performing such maintenance, a professional will disassemble clockwork so that it can be thoroughly examined "[...] for worn or broken parts, fatigued springs and accumulations of dirt or oil." Cleaning clocks consist of multiple steps, which may or may not all be done as part of a treatment plan: washing, ultrasonics, rinsing, drying, polishing, refinishing, chalk brushing, and pegging out. At this point, should cleaning be the only treatment employed, the pieces could be reassembled with proper lubrication; however, depending on the type of clock, the pieces could also be primed for further treatments to the various parts such as mechanisms, frames, wheelwork, pivots, mainsprings, screws and screw threads, escapements, balance springs, pendulums, etc. b. Winding and setting In order to function as time keepers, and to prevent damage to clockwork, clocks must be regularly wound. An established winding schedule eliminates the threat of over-winding."Traditionally, the job of winding the clocks was given to an horologist or a trained individual. Winding had to be done at certain times of the day; ideally when the temperature was just right and the room vacant. Each clock had its own unique keys, doors, and intricacies within the mechanisms." Regular winding can also ensure that clockwork is still functioning. 
In setting the correct time, the minute hand is turned clockwise to the desired time. When hand setting a clock, manually moving the hour or seconds hand rather than the minute hand to set the time can be damaging, as can counterclockwise hand turning. Moving the minute hand ahead on a clock by several hours when setting the time can also lead to clockwork damage. For this reason, "Rather than turning the hands forward through several hours it is better to stop the clock and re-start it when the time matches that on the dial." Agents of deterioration Damage can occur via a variety of mechanisms, either rooted in human error or naturally caused. Agents of Deterioration (AOD) are the ten primary categories of threats to heritage objects. These agents of deterioration can occur while an object is in storage, on display, undergoing conservation, or during handling or transportation of the object. Physical force Physical force is any impact, shock, vibration, pressure or abrasion that causes damage to an object. Physical force damage commonly occurs when handling the object. Any excessive physical force will break, damage, or splinter the clock frame, or easily shatter the glass of the object. Clocks are complex functional objects; many have moveable working parts. Any excessive physical force can damage the mechanics inside the clock which are critical to its operation. Most commonly, these delicate inner-workings are metal, but clocks can be made of any number of materials, from wood to precious metals like silver and gold. They can have enamel inlays or delicate carvings and be a variety of sizes from small clocks to large grandfather clocks or clock towers. When a clock is in the custody of a professional institution such as a museum, proper barriers and monitored security of the clocks greatly reduce this risk. Some common sources of physical force damage to clocks are: a. Mishandling The failure of a handler to remove loose items like jewelry or clothing items like scarves from their person can result in these items striking a clock's surface. A handler who uses his or her bare hands to touch or transport a clock exposes the clock to contaminants, leaving "unsightly and potentially damaging marks." A handler who chooses to drag rather than lift a large clock "[...] can place stress on the legs and feet of the clock." Lifting a clock via false handles or non-sturdy components may cause these parts to snap off of the clock, potentially leading to the clock falling and smashing on a surface. b. Over-winding or improper hand setting Improper clock winding and hand setting procedures can cause serious damage to clockwork components. The use of an inappropriate key to wind a clock is one such improper procedure. The manner in which a clock is wound is also important, as "[...] considerable and potentially expensive damage can be caused by not winding gently and steadily", as well as by turning the key too many times. c. Unstable storage or display Unsuitable storage and display environments pose great risks to a clock's condition. Placement of a clock on an unstable surface or non-secure mounting on a wall can cause a clock to fall when external vibrations or human-caused accidents occur. Theft and vandalism Some clocks are constructed out of precious materials that are a source of temptation for theft. They can be small enough to easily steal. While a minimal concern for most wood clocks, as they are rarely stolen in a burglary, a valuable historic clock may be more susceptible to theft.
With regard to vandalism, any public place runs a risk for any object being vandalized. If a clock is on display in a location that does not have conventional display cases, for example, large clocks could be vandalized. In the case of wood clocks, slashing the wood, smashing the glass, or graffiti would be the most common problems for a public institution. High security, cameras, and inspections after crowds leave, especially for highly coveted antique clocks, are usually required in professional institutions. Disassociation The disassociation agent of deterioration refers to the separation or loss of individual parts of an object. This is possible in clocks, especially during cleaning or maintenance when the clock has to be disassembled. "Clocks can consist of up to five hundred individual components", so preventive conservation can be complex for these objects. Any one of those components can be lost or misplaced, resulting in damage to the object as a whole. If the part is critical, the clock may no longer run. Incorrect relative humidity The variety of materials used in clocks can be sensitive to fluctuations in humidity. "A stable and appropriate relative humidity and temperature should be maintained to prevent parts from rusting, corrosion, darkening of silvers, and lubricants from drying out". Changes in relative humidity can cause wooden clocks to expand or contract. Fluctuating humidity leads to swelling and shrinking that can "loosen the joints of organic components". When changes in humidity cause the wooden parts of the clock to expand and contract, the inorganic components of a clock remain fixed, resulting in strain within the object. This is particularly a problem if the wood becomes warped and creates a pressure point against the inorganic interior workings. Excessively damp conditions can deform organic materials or corrode metals found in clocks. High humidity can cause warping, rotting, and mold or fungus growth in a wood clock, and can corrode the metal components of a clock. Glass facings can also trap moisture within the machine creating a prime environment for mold or rust, another agent of deterioration known as pollutants, to grow and build up internally. Incorrect temperature The temperature of a given environment can play a direct role in clock deterioration. "[...] extremes in temperature and humidity" can cause drastic changes in the materials from which a clock is made, inevitably leading to damage. Placement of a clock for example near "a heat source of any kind, including strong sunlight or a mantel over a working fire [...] can cause damage to cases and movements." In some instances, the heat from the fire contributes to faster deterioration in mantel clocks. Clocks, regardless of the materials they are made of, are susceptible to damage from incorrect temperatures. Like light, temperature that is too high can cause weakening, discoloration, and/or disintegration of organic materials. With wood clocks, temperature is critical to conservation and preservation. Extreme heat and freezing temperatures will significantly damage the wood's surface. Incorrect temperatures or uncontrolled climates of storage areas can cause rapid deterioration to wood clocks. Incorrect temperature causes the wood to expand or contract. Pests Depending on the materials the clock is made of, pests may be attracted to it. This is especially true for wooden clocks, since many pests feed on organic materials like wood. Many pests, rodents and insects burrow or feast on wood.
Wood-boring insects may infest a clock if it is stored in an environment which supports insect activity and allows them access to the clock. Even for clocks that are made entirely of metal, spiders can cause problems because "their webs attract dust and can clog a clock's movement". Pests that are known to infest clocks are attracted to the organic materials from which a clock may be constructed: Carpet beetle- Attracted to protein-based adhesives sometimes used in clock construction, and as such "[...] are generally found at joinery and inside clock cases." Powderpost beetle- Lured by any wood materials from which a clock is constructed, they are known to drill holes into wood. Light: infrared and ultraviolet Over-exposure to light can impact clock materials in a number of ways; many of the materials used in clocks are prone to light damage, both ultraviolet and infrared. Visible light damage can cause discoloration and deterioration. Exposure to infrared can increase the temperature of the wood; this can cause increased oxidation, which damages the wood's surface and develops irregular markings. Ultraviolet light can cause weakening, discoloration, and/or disintegration of organic materials like wood. Exposure to direct sunlight (UV) over long periods of time will damage the wood by warping, fading, and dry-rotting, and cause damage to the wood structure. Exposure to both UV and infrared light can alter, damage or ruin any stains or finish on a wood clock. Light agents of deterioration can be associated with temperature agents of deterioration, depending on the nature of the light source. Heat from a light source can also cause the varnish or finish on a clock surface to melt, resulting in a sticky surface to which harmful contaminants can become attached. Too much light exposure can also result in changes to finish color, as well as cause finish deterioration "[...] resulting in a cracked, brittle and/or "alligatored" appearance." Pollutants Pollutants are any elements which can corrode, degrade, or alter the condition of an object. Chemicals, smoke, airborne particles, and gases are all pollutants that can damage a clock. The following are two such examples: Dust is one such contaminant which settles on a clock's surface. If not addressed, dust buildup can lead to abrasion and corrosion. Dust and other airborne abrasives can build up inside clocks and impede and wear down their mechanisms: "the abrasive particle will become imbedded into the bearing wall and act as a piece of "sandpaper" [...]." Regular clockwork activities can also lead to wear, as lubricants are known to degrade over time. In the case of mantel clocks that have been displayed above fireplaces, these abrasives include soot. Oil is transferred to a clock surface when a clock is handled with bare hands. The presence of oil on a clock can cause damage, particularly corrosion when metal clock components are touched. Other pollutants can cause oxidization and tarnishing in metal clocks. The cleaning agents used to remove pollutants can result in a chemical reaction with previously used lubricants or cleaning agents or with the materials of the clock. Excessive cleaning can "cause more issues like over-drying of bearings and stress corrosion cracking of brass that is exposed to ammonia solution". The mechanisms within clocks are highly sensitive to external matter like oils from fingertips or dust, so much so that it can cause the machine to malfunction if it gets into the wrong area.
For static displays, the functionality of a clock is not always necessary, but limiting the introduction of organic matter through mishandling will prevent unnecessary decay over the life of the clock. The wood used in many wood clocks is very porous, and easily absorbs the elements around its environment, which affects the integrity of the wood. Well-sealed and insulated rooms mitigate pollutant damage. Fire Like most heritage objects, clocks are highly vulnerable to fire. Not only are the fluctuations in temperature harmful, but most clocks will also be irreparably damaged if exposed to fire. Errant embers from a fireplace can be a potential threat to mantel clocks. Other threats may be any open flame, wood burning stove, heater, or pilot light. Many clocks are small enough that evacuating them during an emergency could be possible. Large clocks that belong to buildings, like clock towers, may survive fire and could possibly be restored. Water Water damage is one of the greatest agents of deterioration for a wood clock; water warps, swells and rots the wood. The components of many clocks are extremely susceptible to water damage. Because of the complexity of these objects, they are very challenging to dry. It may be extremely difficult, depending upon the severity of the water damage, to repair or restore the clock to its correct condition. Salvaging warped or rotted wood can be challenging and the damaged material may need to be replaced entirely. Water damage also causes corrosion to the internal mechanisms of the clock. Specific types of clock damage Damage can impact all components and materials from which a clock may be constructed. Wood damage Due to its porous nature, wood is significantly impacted through contact with water. When humidity is high, the excess water in the air is absorbed, resulting in wood expansion, while little to no water in the air when humidity is low can cause wood shrinkage. Such changes in character from the wood's original state can lead to damage. Types of low humidity-related wood damage include "[...] structural cracks, lifting veneer and inlays, gaps in joints and the embrittlement of adhesives." Metal damage Similar to wood, interaction with water can also prove detrimental to metal. When contaminants come into contact with metal, they either combine with any present moisture or attract moisture to the metal. This "[...] combination with moisture can produce corrosion." Plastic damage Modern clocks may feature plastic cases, dials, faces, and mechanisms in conjunction with more traditional materials, such as metal or wood. As plastics are comparatively recent materials, conservation treatments are still being researched by conservators. Since the term plastic covers a broad spectrum of materials from celluloid to technopolymers, there are various reactions to environmental factors and agents of deterioration, making clocks with plastic cases or component parts susceptible to unique stresses that are not wholly understood at this point. What is known is that different types of plastics can degrade quickly and unexpectedly because of the inherent vice of some polymers. Exposure to improper lighting, humidity, water, or solvents may cause chemical deterioration in plastics resulting in reactions such as "a white powder 'blooming' on the surface of an object, discoloration, distortion of the object's shape and strong indicator smells of vinegar, or mothballs," crazing, cracks, acid deposits, and sticky surface textures.
It can be difficult to easily identify the chemical makeup of the plastic used to create an object, which can make interventive conservation challenging because the potential reactions to treatments are unknown. Therefore, preventive conservation tactics similar to those used for organic objects are the primary method of caring for objects made with plastic such as clocks. Finish or paint damage Finish damage can occur as a result of the use of cleaning products and/or polishes on a clock's surface. Rather than perform their intended duty to assist in the preservation of a clock, some products "can actually darken or become opaque with age, resulting in a dark, dull and often irreparable finish." Any words or images painted on a clock surface can become faded or even removed when these delicate pieces are touched and exposed to moisture from hands. Interior or clockwork damage Clockwork damage can be inflicted via over-winding, interaction with contaminants, ill-advised lubrication, and clock movement when a pendulum is not secured prior to the clock being moved. A free swinging pendulum can itself be damaged and can inflict damage onto other interior components during such clock movement. Improper cleaning and polishing Many cleaners and polishers contain properties that "[...] have been proven to age poorly." As a result, the use of such cleaners and polishers can prove detrimental. The use of ammonia-based cleaning products for example on brass clock components can lead to an irreversible form of damage called Stress Corrosion Cracking. Depending on the type of metal on which it is used, a metallic brush employed to apply cleaning products not only can create the proper conditions for corrosion, but can "[...] remove any protective oxide layers even with careful use." Improper lubrication Attempts by non-professionals to lubricate or repair clockwork can lead to serious damage and loss of clock functionality. A lubricant applied without first cleaning and polishing clockwork to remove the previous lubricant and contaminants can result in wear rather than prevent it. The combination of the newly applied lubricant with the previous lubricant often results in "[...] an unwanted chemical reaction between the two lubricants [...]." A careless lubricant choice can result in the use of inappropriate or "Poor-quality oil" on clockwork, which "can become sticky leading to mechanical problems." The inexperience of a non-professional can also lead to "the use of too much oil, or oil in the wrong place" during attempts to lubricate clockwork. The application of too much of a lubricant can be damaging because it "[...] usually causes the lubrication to run out of the bearing and by capillary action will cause the bearing to go dry." Totally ignoring the lubrication needs of clockwork is also risky, not only because of lubricant degradation, but because of the acidic nature of the organic lubricants often used in antique clocks. Examples Anglesey Abbey Pagoda Clock Horological conservators at West Dean College were responsible for the treatment of an 18th-century pagoda clock sent from the historic house Anglesey Abbey by the National Trust. In starting the project, examination of the clock revealed that "The automata and clock elements seem to be running, but struggling, and the music sounds as though it needs a few extra hands to help play its tune."
The clock was then disassembled, and care was taken to document information about each of the over 600 components both textually and photographically, including measurements and where each component fits within the clock. "After examination, each component was cleaned and dried, wrapped in acid-free tissue, and stored with the catalogue number." References Clocks Conservation and restoration of cultural heritage
Conservation and restoration of clocks
Physics,Technology,Engineering
5,742
12,866,416
https://en.wikipedia.org/wiki/Information%20Systems%20Security%20Association
Information Systems Security Association (ISSA) is a not-for-profit, international professional organization of information security professionals and practitioners. It was founded in 1984 after work on its establishment started in 1982. ISSA promotes the sharing of information security management practices through educational forums, publications and networking opportunities among security professionals. ISSA members and award winners include many of the industry's notable luminaries and represent a wide range of industries – from communications, education, healthcare, manufacturing, financial and consulting to IT as well as federal, state and local government departments and agencies. The association publishes the ISSA Journal, a peer-reviewed publication on the issues and trends of the industry. It also partners with ESG (Enterprise Strategy Group) to release a yearly research report, "The Life and Times of the Cyber Security Professional", to examine the experiences of cybersecurity professionals as they navigate the modern threat landscape and the effects it has on their careers. Organization Information Systems Security Association has a board of directors that is elected annually by its members and a set of committees that are appointed. The headquarters of ISSA is located in Houston, Texas. ISSA International Board of Directors Executive Officers President: Jimmy Sanders Vice President: Deb Peinert, CISSP-ISSMP Secretary/Director of Operations: Lee Neely Treasurer/Chief Financial Officer: David Vaughn Membership ISSA has an international membership base. Goals The primary goal of the ISSA is to promote management practices that will ensure the confidentiality, integrity and availability of information resources. The ISSA facilitates interaction and education to create a more successful environment for global information systems security and for the professionals involved. ISSA's goals are to promote security education and skills development, encourage free information exchanges, communicate current events within the security industry and help express the importance of security controls to enterprise business management. Code of ethics As an applicant for membership, the individual is expected to be bound by a code of ethics related to the information security profession. Applicants for ISSA membership attest that they have and will: Perform all professional activities and duties in accordance with all applicable laws and the highest ethical principles; Promote generally accepted information security current best practices and standards; Maintain appropriate confidentiality of proprietary or otherwise sensitive information encountered in the course of professional activities; Discharge professional responsibilities with diligence and honesty; Refrain from any activities which might constitute a conflict of interest or otherwise damage the reputation of employers, the information security profession, or the Association; and Not intentionally injure or impugn the professional reputation or practice of colleagues, clients, or employers. International presence ISSA is present in more than one hundred countries, including in Europe and Asia, with more than 10,000 members. See also ISACA Business network Computer security Information Security Information security management system IT risk References Information systems Data security Computer security organizations International professional associations
Information Systems Security Association
Technology,Engineering
575
68,927,917
https://en.wikipedia.org/wiki/Miriam%20Higgins%20Thomas
Miriam Mason Higgins Thomas (June 22, 1920 – September 15, 2002) was an American chemist, based in the United States Army Research and Development Command at Natick, Massachusetts. Early life and education Miriam Mason Higgins was born in Chicago, the daughter of William Henry Higgins and Mame Mason Higgins. Her mother was an alumna of the University of Chicago, dean of women at Bethune-Cookman College, and a social service consultant for the Illinois Department of Public Aid. Her brother, William H. Higgins, was a dentist and ordained Methodist minister. Her grandfather, M. C. B. Mason, was a noted orator and Black church leader. Higgins graduated from Hyde Park High School in 1936, and earned a bachelor's degree in nutrition and chemistry from Bennett College in 1940. She earned a master's degree in food chemistry from the University of Chicago. Career Thomas taught at the University of Chicago during World War II, and was a chemist with the Food and Container Institute at the Chicago Quartermaster Depot beginning in 1945. She was a research chemist at the U.S. Army Natick Development Center, studying the nutritional content of military rations under various conditions. In 1975 she won an Army SARS Fellowship to study food processing and nutrition analysis techniques in Japan, India, the Soviet Union, the Netherlands, and Guatemala. She was a consultant to the Food Research Laboratories, Inc., of Boston, and taught nutrition and food science courses at the Massachusetts Institute of Technology. Her research was published in academic journals including the Journal of Microwave Power and the Journal of Food Science. Thomas was nominated three times by the Department of the Army for the Federal Woman's Award. She was a member of the Association of Vitamin Chemists, the Society for Nutrition Education, and the American Association for the Advancement of Science. Selected publications and reports "Use of Ionizing Radiation to Preserve Food" (1988) "Effect of Processing and Preparation for Serving on Vitamin Content in T, B, and A Ration Pork" (1986, with Bonita Atwood and K. Ananth Narayan) "Stability of Vitamins C, B1, B2 and B6 in Fortified Beef Stew" (1986, with Bonita M. Atwood and K. Ananth Narayan) "Thiamin and Riboflavin Content of Flake-cut Formed Pork Roasts" (1982, with R. V. Decareau and Bonita M. Atwood) "Effect of Radiation and Conventional Processing on the Thiamin Content of Pork" (1981, with Bonita M. Atwood, E. Wierbicki, and I. A. Taub) "Nutritional Aspects of Food Irradiation: An Overview" (1979, with E. S. Josephson and W. K. Calhoun) "Radappertization of Meat, Meat Products, and Poultry" (1972, with E. S. Josephson, A. Brynjolfsson, E. Wierbicki, D. B. Rowley, C. Merritt, R. W. Baker, and J. J. Killoran) "Effect of Irradiation Dose and Temperature on the Thiamine Content of Ham" (1971, with E. Wierbicki) "High-Dose Radiation Processing of Meat, Poultry, and Seafood Products" (1970, with E. Wierbicki, A. Anellis, J. J. Killoran, E. J. Johnson, and Edward S. Josephson) "Radiation Preservation of Foods and Its Effect on Nutrients" (1970, with Edward S. Josephson) "Effect of Freeze-Thaw Cycling on the Vitamin Content of the Meal Ready-to-Eat, Individual" (1969, with Doris E. 
Sherman) "Modified Microbiological Procedures for Vitamin Assays: A Manual" (1962) "The Nutrient Composition of the Meal, Combat, Individual" (1962) "Vitamin Retention in Fortified Fruit Tablets During Surgery" (1961) "Nutritional evaluation of dehydrated foods and comparison with foods processed by thermal and radiation methods" (1961, with Doris Howes Calloway) "Effect of Storage on the Flavor of Chocolate Fortified with Nutritional Yeast" (1957, with A. L. Sheffner and Harry Spector) "Effect of Electronic Cooking on Nutritive Value of Foods" (1949, with S. Brenner, A. Eaton, V. Craig) Personal life Miriam Higgins married before 1960 and had a son, Brian. Thomas died in 2002, aged 82 years. Her papers are in the National Archives for Black Women's History. References 1920 births 2002 deaths Scientists from Chicago Bennett College alumni University of Chicago alumni American women chemists 20th-century African-American scientists Food chemists 20th-century African-American women
Miriam Higgins Thomas
Chemistry
978
56,688,185
https://en.wikipedia.org/wiki/Raw%20intelligence
Raw intelligence is raw data gathered by an intelligence operation, such as espionage or signal interception. Such data commonly requires processing and analysis to make it useful and reliable. To turn the raw intelligence into a finished form, the steps required may include decryption, translation, collation, evaluation and confirmation. In the period after the First World War, British practice was to circulate raw intelligence with little analysis or context. Such direct intelligence was a strong influence on policy-makers. Churchill was especially keen to see raw intelligence and was supplied with it by Desmond Morton during his period outside the government. When Churchill became Prime Minister in 1940, he still insisted on receiving raw intelligence and wanted it all, until it was explained that the volume was now too great. A selection of intercepts was provided to him each day by Bletchley Park, and he called these his "golden eggs". US intelligence has a different tradition from the British. The key event for the US was the failure to prevent the attack on Pearl Harbor, and the inquiries which followed concluded that this was not due to a lack of raw intelligence so much as the failure to make effective use of it. The Central Intelligence Agency was created to collate, analyse and summarise the raw intelligence collected by the other departments. US agencies which focus on the collection of raw intelligence include the National Reconnaissance Office and the National Security Agency. See also Source (intelligence) References Data Intelligence assessment Law enforcement terminology
Raw intelligence
Technology
295
32,104,707
https://en.wikipedia.org/wiki/Cellular%20noise
Cellular noise is random variability in quantities arising in cellular biology. For example, cells which are genetically identical, even within the same tissue, are often observed to have different expression levels of proteins, different sizes and structures. These apparently random differences can have important biological and medical consequences. Cellular noise was originally, and is still often, examined in the context of gene expression levels – either the concentration or copy number of the products of genes within and between cells. As gene expression levels are responsible for many fundamental properties in cellular biology, including cells' physical appearance, behaviour in response to stimuli, and ability to process information and control internal processes, the presence of noise in gene expression has profound implications for many processes in cellular biology. Definitions The most frequent quantitative definition of noise is the coefficient of variation: $\eta_X = \frac{\sigma_X}{\langle X \rangle}$, where $\eta_X$ is the noise in a quantity $X$, $\langle X \rangle$ is the mean value of $X$ and $\sigma_X$ is the standard deviation of $X$. This measure is dimensionless, allowing a relative comparison of the importance of noise, without necessitating knowledge of the absolute mean. Other quantities often used for mathematical convenience are the Fano factor: $F_X = \frac{\sigma_X^2}{\langle X \rangle}$ and the normalized variance: $\eta_X^2 = \frac{\sigma_X^2}{\langle X \rangle^2}$. Experimental measurement The first experimental account and analysis of gene expression noise in prokaryotes is from Becskei & Serrano and from Alexander van Oudenaarden's lab. The first experimental account and analysis of gene expression noise in eukaryotes is from James J. Collins's lab. Intrinsic and extrinsic noise Cellular noise is often investigated in the framework of intrinsic and extrinsic noise. Intrinsic noise refers to variation in identically regulated quantities within a single cell: for example, the intra-cell variation in expression levels of two identically controlled genes. Extrinsic noise refers to variation in identically regulated quantities between different cells: for example, the cell-to-cell variation in expression of a given gene. Intrinsic and extrinsic noise levels are often compared in dual reporter studies, in which the expression levels of two identically regulated genes (often fluorescent reporters like GFP and YFP) are plotted for each cell in a population. An issue with the general depiction of extrinsic noise as a spread along the main diagonal in dual-reporter studies is the assumption that extrinsic factors cause positive expression correlations between the two reporters. In fact, when the two reporters compete for binding of a low-copy regulator, the two reporters become anomalously anticorrelated, and the spread is perpendicular to the main diagonal. In fact, any deviation of the dual-reporter scatter plot from circular symmetry indicates extrinsic noise. Information theory offers a way to avoid this anomaly. 
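These definitions are straightforward to evaluate on data. The following sketch (in Python, on synthetic numbers rather than real measurements) computes the coefficient of variation, Fano factor and normalized variance for one reporter, and applies the dual-reporter decomposition of Elowitz and co-workers to separate intrinsic from extrinsic noise; the lognormal shared factor and all numerical values are illustrative assumptions.

import numpy as np

# Hypothetical dual-reporter data: two identically regulated reporters
# measured in 1,000 cells, sharing an extrinsic cell-wide factor.
rng = np.random.default_rng(0)
extrinsic_factor = rng.lognormal(mean=0.0, sigma=0.3, size=1000)
x = rng.poisson(lam=50 * extrinsic_factor)  # reporter 1 copy numbers
y = rng.poisson(lam=50 * extrinsic_factor)  # reporter 2 copy numbers

def noise_measures(v):
    # Coefficient of variation, Fano factor and normalized variance.
    mean, var = v.mean(), v.var()
    return {"CV": var**0.5 / mean, "Fano": var / mean, "eta2": var / mean**2}

print(noise_measures(x))

# Dual-reporter decomposition (Elowitz et al. 2002 convention):
# intrinsic noise from the mismatch between the two reporters,
# extrinsic noise from their covariance.
mx, my = x.mean(), y.mean()
eta2_int = np.mean((x - y) ** 2) / (2 * mx * my)
eta2_ext = (np.mean(x * y) - mx * my) / (mx * my)
print(eta2_int, eta2_ext)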
Sources Note: These lists are illustrative, not exhaustive, and identification of noise sources is an active and expanding area of research. Intrinsic noise Low copy-number effects (including discrete birth and death events): the random (stochastic) nature of production and degradation of cellular components means that noise is high for components at low copy number (as the magnitude of these random fluctuations is not negligible with respect to the copy number); Diffusive cellular dynamics: many important cellular processes rely on collisions between reactants (for example, RNA polymerase and DNA) and other physical criteria which, given the diffusive dynamic nature of the cell, occur stochastically. Noise propagation: Low copy-number effects and diffusive dynamics result in each of the biochemical reactions in a cell occurring randomly. Stochasticity of reactions can be either attenuated or amplified. The contribution each reaction makes to the intrinsic variability in copy numbers can be quantified via Van Kampen's system size expansion. Extrinsic noise Cellular age / cell cycle stage: cells in a dividing population that is not synchronised will, at a given snapshot in time, be at different cell cycle stages, with corresponding biochemical and physical differences; Cell growth: variations in growth rates leading to concentration variations between cells; Physical environment (temperature, pressure, ...): physical quantities and chemical concentrations (particularly in the case of cell-to-cell signalling) may vary spatially across a population of cells, provoking extrinsic differences as a function of position; Organelle distributions: random factors in the quantity and quality of organelles (for example, the number and functionality of mitochondria) lead to significant cell-to-cell differences in a range of processes (as, for example, mitochondria play a central role in the energy budget of eukaryotic cells); Inheritance noise: uneven partitioning of cellular components between daughter cells at mitosis can result in large extrinsic differences in a dividing population. Regulator competition: Regulators competing to bind downstream promoters can cause negative correlations: when one promoter is bound the other is not and vice versa. Note that extrinsic noise can affect levels and types of intrinsic noise: for example, extrinsic differences in the mitochondrial content of cells lead, through differences in ATP levels, to some cells transcribing faster than others, affecting the rates of gene expression and the magnitude of intrinsic noise across the population. Effects Note: These lists are illustrative, not exhaustive, and identification of noise effects is an active and expanding area of research. Gene expression levels: noise in gene expression causes differences in the fundamental properties of cells, limits their ability to biochemically control cellular dynamics, and directly or indirectly induces many of the specific effects below; Energy levels and transcription rate: noise in transcription rate, arising from sources including transcriptional bursting, is a significant source of noise in expression levels of genes. Extrinsic noise in mitochondrial content has been suggested to propagate to differences in the ATP concentrations and transcription rates (with functional relationships implied between these three quantities) in cells, affecting cells' energetic competence and ability to express genes; Phenotype selection: bacterial populations exploit extrinsic noise to choose a population subset to enter a quiescent state. 
In a bacterial infection, for example, this subset will not propagate quickly but will be more robust when the population is threatened by antibiotic treatment: the rapidly replicating, infectious bacteria will be killed more quickly than the quiescent subset, which may be capable of restarting the infection. This phenomenon is why courses of antibiotics should be finished even when symptoms seem to have disappeared; Development and stem cell differentiation: developmental noise in biochemical processes which need to be tightly controlled (for example, patterning of gene expression levels that develop into different body parts) during organismal development can have dramatic consequences, necessitating the evolution of robust cellular machinery. Stem cells differentiate into different cell types depending on the expression levels of various characteristic genes: noise in gene expression can clearly perturb and influence this process, and noise in transcription rate can affect the structure of the dynamic landscape that differentiation occurs on. There are review articles summarizing these effects from bacteria to mammalian cells; Drug resistance: Noise improves short-term survival and long-term evolution of drug resistance at high levels of drug treatment. Noise has the opposite effect at low levels of drug treatment; Cancer treatments: recent work has found extrinsic differences, linked to gene expression levels, in the response of cancer cells to anti-cancer treatments, potentially linking the phenomenon of fractional killing (whereby each treatment kills some but not all of a tumour) to noise in gene expression. Because individual cells could repeatedly and stochastically perform transitions between states associated with differences in responsiveness to a therapeutic modality (chemotherapy, targeted agent, radiation, etc.), therapy might need to be administered frequently (to ensure cells are treated soon after entering a therapy-responsive state, before they can rejoin the therapy-resistant subpopulation and proliferate) and over long times (to treat even those cells emerging late from the final residue of the therapy-resistant subpopulation); Evolution of the genome: Genomes are covered by chromatin that can be roughly classified into "open" (also known as euchromatin) or "closed" (also known as heterochromatin). Open chromatin leads to less noise in transcription compared to heterochromatin. Often "housekeeping" proteins (proteins that carry out tasks required for cellular survival) work in large multiprotein complexes. If the noise in the proteins of such complexes is too discoordinated, it can lead to a reduced level of production of the multiprotein complexes, with potentially deleterious effects. Reduction in noise may therefore provide evolutionary selection pressure for the movement of essential genes into open chromatin. Information processing: as cellular regulation is performed with components that are themselves subject to noise, the ability of cells to process information and perform control is fundamentally limited by intrinsic noise Analysis As many quantities of cell biological interest are present in discrete copy number within the cell (single DNAs, dozens of mRNAs, hundreds of proteins), tools from discrete stochastic mathematics are often used to analyse and model cellular noise. In particular, master equation treatments – where the probabilities $P_n(t)$ of observing a system in a state $n$ at time $t$ are linked through ODEs – have proved particularly fruitful. 
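The flavour of these treatments can be seen in the simplest birth-death model of constitutive expression, simulated below with the stochastic simulation (Gillespie) algorithm discussed next. The rate constants are arbitrary illustrative choices; at steady state the copy-number distribution is Poisson, so the Fano factor should be close to 1 (sampling at event times is used here as a rough proxy for the time average).

import numpy as np

# Gillespie simulation of:  0 -> mRNA at rate k;  mRNA -> 0 at rate gamma*n.
rng = np.random.default_rng(1)
k, gamma = 10.0, 1.0                  # illustrative rates (per unit time)
t, n, t_end = 0.0, 0, 1000.0
samples = []
while t < t_end:
    rates = (k, gamma * n)
    total = rates[0] + rates[1]
    t += rng.exponential(1.0 / total)        # waiting time to next event
    if rng.random() < rates[0] / total:
        n += 1                               # production event
    else:
        n -= 1                               # degradation event
    samples.append(n)

samples = np.array(samples[len(samples) // 2:])   # discard initial transient
print(samples.mean(), samples.var() / samples.mean())  # mean ~ k/gamma, Fano ~ 1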
A canonical model for noisy gene expression, where the processes of DNA activation, transcription and translation are all represented as Poisson processes with given rates, gives a master equation which may be solved exactly (with generating functions) under various assumptions or approximated with stochastic tools like Van Kampen's system size expansion. Numerically, the Gillespie algorithm or stochastic simulation algorithm is often used to create realisations of stochastic cellular processes, from which statistics can be calculated. The problem of inferring the values of parameters in stochastic models (parametric inference) for biological processes, which are typically characterised by sparse and noisy experimental data, is an active field of research, with methods including Bayesian MCMC and approximate Bayesian computation proving adaptable and robust. Regarding the two-state model, a moment-based method has been described for parameter inference from mRNA distributions. References Cell biology Biophysics Molecular biology Biostatistics Randomness
Cellular noise
Physics,Chemistry,Biology
2,074
40,688,472
https://en.wikipedia.org/wiki/Entomostracites
Entomostracites is a scientific name for several trilobites, now assigned to various other genera. E. bucephalus = Paradoxides paradoxissimus E. crassicauda = Illaenus crassicauda E. expansus = Asaphus expansus E. gibbosus = Olenus gibbosus E. granulatus = Nankinolithus granulatus E. laciniatus = Lichas laciniatus E. laticauda = Eobronteus laticauda E. paradoxissimus = Paradoxides paradoxissimus E. pisiformis = Agnostus pisiformis E. punctatus = Encrinurus punctatus E. scarabaeoides = Peltura scarabaeoides E. spinulosus = Parabolina spinulosa References Disused trilobite generic names
Entomostracites
Biology
186
2,663,839
https://en.wikipedia.org/wiki/Chaguar
Chaguar is the common name of several related species of South American plants of the family Bromeliaceae, among them Bromelia serra, Bromelia hieronymi, Deinacanthon urbanianum and Pseudananas sagenarius, which are non-woody forest plants with sword-shaped evergreen leaves, resembling yucca. The different varieties grow in the semi-arid parts of the Gran Chaco ecoregion. The term chaguar is of Quechua origin; in areas where the Guaraní have had an influence, it is also known as caraguatá. This plant is mainly employed by the Wichí tribal groups in the provinces of Salta and Formosa, Argentina, to provide a resistant fiber that can be woven to make bags and purses, ponchos, skirts, fishing nets, string, ropes, hammocks, mats, covers and clothing. Along with those made from hardwoods such as quebracho, chaguar crafts make up an important part of the economy of some Wichí groups, though the profits are modest. Chaguar is not cultivated. It grows in the semi-shade of a middle layer of the Chaco forest, and reproduces by stolons. Desertification of the Chaco has decreased its presence, but the plant is neither endangered nor of primary ecological importance. Farmers consider it a pest and, since its spines scare cattle, they sometimes burn the chaguarales (plant colonies) during the dry season. There are no restrictions on the collection and use of chaguar among the Wichí, but the task is time-consuming and labor-intensive. This fact and the environmental ethics of the tribe discourage overexploitation. References Wichi, Origen crafts Lessons Learned: Case Studies in Sustainable Use Bromelia Flora of Argentina Fiber plants Plant common names
Chaguar
Biology
395
4,171,813
https://en.wikipedia.org/wiki/Solar-powered%20pump
Solar-powered pumps run on electricity generated by photovoltaic (PV) panels or on the radiated thermal energy available from collected sunlight, as opposed to water pumps run on grid electricity or diesel. Generally, solar-powered pumps consist of a solar panel array, solar charge controller, DC water pump, fuse box/breakers, electrical wiring, and a water storage tank. The operation of solar-powered pumps is more economical, mainly due to the lower operation and maintenance costs, and has less environmental impact than pumps powered by an internal combustion engine. Solar pumps are useful where grid electricity is unavailable or impractical, and alternative sources (in particular wind) do not provide sufficient energy. Components A PV solar-powered pump system has three main parts - one or more solar panels, a controller, and a pump. The solar panels make up most (up to 80%) of the system's cost. The size of the PV system is directly dependent on the size of the pump, the amount of water that is required, and the solar irradiance available. The purpose of the controller is twofold. Firstly, it matches the output power that the pump receives with the input power available from the solar panels. Secondly, a controller usually provides low- or high-voltage protection, whereby the system is switched off if the voltage is too low or too high for the operating voltage range of the pump. This increases the service life of the pump, thus reducing the need for maintenance. Other ancillary functions include automatically shutting down the system when the water source level is low or when the storage tank is full, regulating water output pressure, blending power input between the solar panels and an alternate power source such as the grid or an engine-powered generator, and remotely monitoring and managing the system through an online portal offered as a cloud service by the manufacturer. Solar pump motors can run on alternating current (AC) or direct current (DC). DC motors are used for small to medium applications up to about 4 kW rating, and are suitable for applications such as garden fountains, landscaping, drinking water for livestock, or small irrigation projects. Since DC systems tend to have overall higher efficiency levels than AC pumps of a similar size, the costs are reduced, as smaller solar panels can be used. Finally, if an AC solar pump is used, an inverter is necessary to change the DC power from the solar panels into AC for the pump. The supported power range of inverters extends from 0.15 to 55 kW, and can be used for larger irrigation systems. The panel and inverters must be sized accordingly, though, to accommodate the inrush characteristic of an AC motor. To aid in proper sizing, leading manufacturers provide proprietary sizing software tested by third-party certifying companies. The sizing software may include the projected monthly water output, which varies due to seasonal change in insolation. Water pumping Solar-powered water pumps can deliver drinking water, water for livestock, or irrigation water. Solar water pumps may be especially useful in small-scale or community-based irrigation, as large-scale irrigation requires large volumes of water that in turn require a large solar PV array. As the water may only be required during some parts of the year, a large PV array would provide excess energy that is not necessarily required, thus making the system inefficient, unless an alternative use can be found. 
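As a rough illustration of how such a system is sized, the hydraulic energy needed per day is rho * g * volume * head; dividing by the pump's wire-to-water efficiency and the local peak sun hours gives an approximate PV array rating. All figures in the Python sketch below are assumptions for the sake of the example, not manufacturer data.

# Back-of-envelope solar pump sizing (illustrative numbers only).
RHO_G = 1000 * 9.81          # water density (kg/m^3) times gravity (m/s^2)

daily_volume_m3 = 20.0       # assumed daily water requirement
total_head_m = 40.0          # assumed static lift plus friction losses
pump_efficiency = 0.45       # assumed wire-to-water efficiency
peak_sun_hours = 5.5         # assumed equivalent full-sun hours per day
derating = 0.8               # assumed losses for wiring, dust, temperature

hydraulic_energy_J = RHO_G * daily_volume_m3 * total_head_m
electrical_Wh = hydraulic_energy_J / 3600 / pump_efficiency
pv_array_Wp = electrical_Wh / peak_sun_hours / derating
print(round(electrical_Wh), "Wh/day ->", round(pv_array_Wp), "Wp array")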
Solar PV water pumping systems are used for irrigation and drinking water in India. Most of the pumps are fitted with a 2.0 - 3.7 kW motor that receives energy from a 4.8 kWp PV array. The 3.7 kW systems can deliver about 124,000 liters of water/day from a total of 50 meters setoff head and 70 meters dynamic head. By 30 August 2016, a total of 120,000 solar PV water pumping systems had been installed around the world. Energy storage in the form of water storage is better than energy storage in the form of batteries for solar water pumps, because no intermediary transformation from one form of energy to another is needed. The most common pump mechanics used are centrifugal pumps, multistage pumps, borehole pumps, and helical pumps. Fluid-dynamics concepts such as pressure versus head, pump heads, pump curves, system curves, and net suction head are important for the successful design and deployment of solar-powered pumps. Oil and gas To combat negative publicity related to the environmental impacts of fossil fuels, including fracking, the oil and gas industry is embracing solar-powered pumping systems. Many oil and gas wells require the accurate injection (metering) of various chemicals under pressure to sustain their operation and to improve extraction rates. Historically, these chemical injection pumps (CIPs) have been driven by gas reciprocating motors using the pressure of the well's gas, and exhausting the raw gas into the atmosphere. Solar-powered electrical pumps (solar CIPs) can reduce these greenhouse gas emissions. Solar arrays (PV cells) not only provide a sustainable power source for the CIPs, but can also provide an electricity source to run remote SCADA-type diagnostics with remote control and satellite/cell communications from very remote locations to a desktop or notebook monitoring computer. Stirling engine Instead of generating electricity to turn a motor, sunlight can be concentrated on the heat exchanger of a Stirling engine and used to drive a pump mechanically. This dispenses with the cost of solar panels and electric equipment. In some cases, the Stirling engine may be suitable for local fabrication, eliminating the difficulty of importing equipment. One form of Stirling engine is the fluidyne engine, which operates directly on the pumped fluid as a piston. Fluidyne solar pumps have been studied since 1987. At least one manufacturer has conducted tests with a Stirling solar-powered pump. See also List of solar powered products List of photovoltaic power stations Notes References Solar-powered devices Pumps Applications of photovoltaics
Solar-powered pump
Physics,Chemistry
1,220
4,504,051
https://en.wikipedia.org/wiki/Channel%20state%20information
In wireless communications, channel state information (CSI) is the known channel properties of a communication link. This information describes how a signal propagates from the transmitter to the receiver and represents the combined effect of, for example, scattering, fading, and power decay with distance. The method is called channel estimation. The CSI makes it possible to adapt transmissions to current channel conditions, which is crucial for achieving reliable communication with high data rates in multiantenna systems. CSI needs to be estimated at the receiver and is usually quantized and fed back to the transmitter (although reverse-link estimation is possible in time-division duplex (TDD) systems). Therefore, the transmitter and receiver can have different CSI. The CSI at the transmitter and the CSI at the receiver are sometimes referred to as CSIT and CSIR, respectively. Different kinds of channel state information There are basically two levels of CSI, namely instantaneous CSI and statistical CSI. Instantaneous CSI (or short-term CSI) means that the current channel conditions are known, which can be viewed as knowing the impulse response of a digital filter. This gives an opportunity to adapt the transmitted signal to the impulse response and thereby optimize the received signal for spatial multiplexing or to achieve low bit error rates. Statistical CSI (or long-term CSI) means that a statistical characterization of the channel is known. This description can include, for example, the type of fading distribution, the average channel gain, the line-of-sight component, and the spatial correlation. As with instantaneous CSI, this information can be used for transmission optimization. The CSI acquisition is practically limited by how fast the channel conditions are changing. In fast fading systems where channel conditions vary rapidly under the transmission of a single information symbol, only statistical CSI is reasonable. On the other hand, in slow fading systems instantaneous CSI can be estimated with reasonable accuracy and used for transmission adaptation for some time before being outdated. In practical systems, the available CSI often lies in between these two levels; instantaneous CSI with some estimation/quantization error is combined with statistical information. Mathematical description In a narrowband flat-fading channel with multiple transmit and receive antennas (MIMO), the system is modeled as $\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n},$ where $\mathbf{y}$ and $\mathbf{x}$ are the receive and transmit vectors, respectively, and $\mathbf{H}$ and $\mathbf{n}$ are the channel matrix and the noise vector, respectively. The noise is often modeled as circular symmetric complex normal with $\mathbf{n} \sim \mathcal{CN}(\mathbf{0}, \mathbf{S}),$ where the mean value is zero and the noise covariance matrix $\mathbf{S}$ is known. Instantaneous CSI Ideally, the channel matrix $\mathbf{H}$ is known perfectly. Due to channel estimation errors, the channel information can be represented as $\operatorname{vec}(\mathbf{H}_{\mathrm{estimate}}) \sim \mathcal{CN}(\operatorname{vec}(\mathbf{H}), \mathbf{R}_{\mathrm{error}}),$ where $\mathbf{H}_{\mathrm{estimate}}$ is the channel estimate and $\mathbf{R}_{\mathrm{error}}$ is the estimation error covariance matrix. The vectorization $\operatorname{vec}(\cdot)$ was used to achieve the column stacking of $\mathbf{H}$, as multivariate random variables are usually defined as vectors. Statistical CSI In this case, the statistics of $\mathbf{H}$ are known. In a Rayleigh fading channel, this corresponds to knowing that $\operatorname{vec}(\mathbf{H}) \sim \mathcal{CN}(\mathbf{0}, \mathbf{R})$ for some known channel covariance matrix $\mathbf{R}$. Estimation of CSI Since the channel conditions vary, instantaneous CSI needs to be estimated on a short-term basis. 
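A minimal numerical sketch of this model (in Python/NumPy, with i.i.d. Rayleigh-fading channel entries and white noise as simplifying assumptions; the dimensions and noise power are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
nt, nr = 4, 4                    # transmit / receive antennas

def crandn(*shape):
    # Circularly symmetric complex Gaussian samples with unit variance.
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H = crandn(nr, nt)               # channel matrix (Rayleigh fading)
x = crandn(nt)                   # transmit vector
noise_var = 0.01
n = np.sqrt(noise_var) * crandn(nr)   # noise vector, covariance noise_var * I
y = H @ x + n                    # received vector: y = Hx + n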
A popular approach is so-called training sequence (or pilot sequence), where a known signal is transmitted and the channel matrix $\mathbf{H}$ is estimated using the combined knowledge of the transmitted and received signal. Let the training sequence be denoted $\mathbf{p}_1, \ldots, \mathbf{p}_N$, where the vector $\mathbf{p}_i$ is transmitted over the channel as $\mathbf{y}_i = \mathbf{H}\mathbf{p}_i + \mathbf{n}_i.$ By combining the received training signals $\mathbf{y}_i$ for $i = 1, \ldots, N$, the total training signalling becomes $\mathbf{Y} = [\mathbf{y}_1, \ldots, \mathbf{y}_N] = \mathbf{H}\mathbf{P} + \mathbf{N}$ with the training matrix $\mathbf{P} = [\mathbf{p}_1, \ldots, \mathbf{p}_N]$ and the noise matrix $\mathbf{N} = [\mathbf{n}_1, \ldots, \mathbf{n}_N]$. With this notation, channel estimation means that $\mathbf{H}$ should be recovered from the knowledge of $\mathbf{Y}$ and $\mathbf{P}$. Least-square estimation If the channel and noise distributions are unknown, then the least-square estimator (also known as the minimum-variance unbiased estimator) is $\mathbf{H}_{\text{LS}} = \mathbf{Y}\mathbf{P}^H \left(\mathbf{P}\mathbf{P}^H\right)^{-1},$ where $(\cdot)^H$ denotes the conjugate transpose. The estimation mean squared error (MSE) is proportional to $\operatorname{tr}\left(\mathbf{P}\mathbf{P}^H\right)^{-1},$ where $\operatorname{tr}(\cdot)$ denotes the trace. The error is minimized when $\mathbf{P}\mathbf{P}^H$ is a scaled identity matrix. This can only be achieved when $N$ is equal to (or larger than) the number of transmit antennas. The simplest example of an optimal training matrix is to select $\mathbf{P}$ as a (scaled) identity matrix of the same size as the number of transmit antennas (a numerical sketch of this estimator follows this entry). MMSE estimation If the channel and noise distributions are known, then this a priori information can be exploited to decrease the estimation error. This approach is known as Bayesian estimation, and for Rayleigh fading channels it exploits that $\operatorname{vec}(\mathbf{H}) \sim \mathcal{CN}(\mathbf{0}, \mathbf{R}).$ The MMSE estimator is the Bayesian counterpart to the least-square estimator and becomes $\operatorname{vec}\left(\mathbf{H}_{\text{MMSE}}\right) = \left(\mathbf{R}^{-1} + \left(\mathbf{P}^T \otimes \mathbf{I}\right)^H \mathbf{S}^{-1} \left(\mathbf{P}^T \otimes \mathbf{I}\right)\right)^{-1} \left(\mathbf{P}^T \otimes \mathbf{I}\right)^H \mathbf{S}^{-1} \operatorname{vec}(\mathbf{Y}),$ where $\otimes$ denotes the Kronecker product, the identity matrix $\mathbf{I}$ has the dimension of the number of receive antennas, and $\mathbf{S}$ here denotes the covariance matrix of $\operatorname{vec}(\mathbf{N})$. The estimation MSE is $\operatorname{tr}\left(\mathbf{R}^{-1} + \left(\mathbf{P}^T \otimes \mathbf{I}\right)^H \mathbf{S}^{-1} \left(\mathbf{P}^T \otimes \mathbf{I}\right)\right)^{-1}$ and is minimized by a training matrix that in general can only be derived through numerical optimization. But there exist heuristic solutions with good performance based on waterfilling. As opposed to least-square estimation, the estimation error for spatially correlated channels can be minimized even if $N$ is smaller than the number of transmit antennas. Thus, MMSE estimation can both decrease the estimation error and shorten the required training sequence. However, it additionally requires knowledge of the channel correlation matrix $\mathbf{R}$ and the noise correlation matrix $\mathbf{S}$. In the absence of accurate knowledge of these correlation matrices, robust choices need to be made to avoid MSE degradation. Neural network estimation With the advances of deep learning, work has shown that the channel state information can be estimated using neural networks such as 2D/3D CNNs, obtaining better performance with fewer pilot signals. The main idea is that a neural network can interpolate well in time and frequency. Data-aided versus blind estimation In a data-aided approach, the channel estimation is based on some known data, which is known both at the transmitter and at the receiver, such as training sequences or pilot data. In a blind approach, the estimation is based only on the received data, without any known transmitted sequence. The tradeoff is accuracy versus overhead: a data-aided approach requires more bandwidth, or has a higher overhead, than a blind approach, but it can achieve better channel estimation accuracy. See also Channel sounding References External links Atheros CSI Tool Linux 802.11n CSI Tool Wireless Information theory Radio resource management Telecommunication theory
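A numerical sketch of the least-square estimator above, using the scaled-identity training matrix and white noise (both simplifying assumptions); the empirical error should match the MSE expression, noise power times tr((PP^H)^{-1}) per transmit antenna:

import numpy as np

rng = np.random.default_rng(1)
nt, nr = 4, 4
N = nt                                   # training length equal to transmit antennas

def crandn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H = crandn(nr, nt)
P = np.sqrt(2.0) * np.eye(nt)            # scaled identity training matrix
noise_var = 0.01
Y = H @ P + np.sqrt(noise_var) * crandn(nr, N)

H_ls = Y @ P.conj().T @ np.linalg.inv(P @ P.conj().T)   # H_LS = Y P^H (P P^H)^{-1}
empirical_mse = np.mean(np.abs(H_ls - H) ** 2)
predicted_mse = noise_var * np.trace(np.linalg.inv(P @ P.conj().T)).real / nt
print(empirical_mse, predicted_mse)      # both ~ noise_var / 2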
Channel state information
Mathematics,Technology,Engineering
1,250
1,775,454
https://en.wikipedia.org/wiki/New%20chemical%20entity
A new chemical entity (NCE) is, according to the U.S. Food and Drug Administration, a novel, small, chemical molecule drug that is undergoing clinical trials or has received a first approval (not a new use) by the FDA in any other application submitted under section 505(b) of the Federal Food, Drug, and Cosmetic Act. A new molecular entity (NME) is a broader term that encompasses both an NCE and an NBE (new biological entity). Definition An active moiety is a molecule or ion, excluding those appended portions of the molecule that cause the drug to be an ester, salt (including a salt with hydrogen or coordination bonds), or other noncovalent derivative (such as a complex, chelate, or clathrate) of the molecule, responsible for the physiological or pharmacological action of the drug substance. An NCE is a molecule developed by the innovator company in the early drug discovery stage, which, after undergoing clinical trials, may become a drug that treats a disease. Synthesis of an NCE is the first step in the process of drug development. Once the synthesis of the NCE has been completed, companies have two options before them. They can either go for clinical trials on their own or license the NCE to another company. In the latter option, companies can avoid the expensive and lengthy process of clinical trials, as the licensee company would be conducting further clinical trials and subsequently launching the drug. Companies adopting this business model can generate high margins, as they receive a large one-time payment for the NCE as well as entering into a revenue-sharing agreement with the licensee company. Under the Food and Drug Administration Amendments Act of 2007, all new chemical entities must first be reviewed by an advisory committee before the FDA can approve these products. See also Molecular/chemical entity Medicinal chemistry Drug development References External links NME's worldwide and in Germany CDER Drug and Biologic Approval Reports Medicinal chemistry Drug development Life sciences industry
New chemical entity
Chemistry,Biology
414
62,262,830
https://en.wikipedia.org/wiki/Undercut%20%28turning%29
In turning, an undercut is a recess in a diameter, generally on the inside diameter of the part. On turned parts an undercut is also known as a neck or "relief groove". They are often used at the end of the threaded portion of a shaft or screw to provide clearance for the cutting tool, and are also referred to as thread relief in this context. A rule of thumb is that the undercut should be at least 1.5 threads long and that its diameter should be slightly smaller than the minor diameter of the thread (a worked example follows this entry). Strictly speaking, the relief simply needs to be equal to or slightly smaller than the minor diameter of the thread. Thread relief can also be internal on a bore, in which case the relief needs to be larger than the major thread diameter. Undercuts are also often used on shafts that have diameter changes so that a mating part can seat against the shoulder. If an undercut is not provided, there is always a small radius left behind, even if a sharp corner is intended. These types of undercuts are called out on technical drawings by specifying the width and either the depth or the diameter of the bottom of the neck. References Bibliography Mechanical engineering
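A small worked example of the rule of thumb, in Python, for an external ISO metric thread; the 0.4 mm clearance below the minor diameter is an assumed example value, not a standard figure:

# Thread relief sizing for an M10 x 1.5 external thread (illustrative).
major_d = 10.0                          # mm
pitch = 1.5                             # mm
minor_d = major_d - 1.226869 * pitch    # basic minor (root) diameter of external ISO thread
relief_length = 1.5 * pitch             # "at least 1.5 threads long"
relief_d = minor_d - 0.4                # slightly below the minor diameter (assumed clearance)
print(round(minor_d, 2), round(relief_d, 2), round(relief_length, 2))
# -> 8.16 mm minor diameter, 7.76 mm relief diameter, 2.25 mm minimum length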
Undercut (turning)
Physics,Engineering
234
31,926,045
https://en.wikipedia.org/wiki/Orbital%20magnetization
In quantum mechanics, orbital magnetization, Morb, refers to the magnetization induced by orbital motion of charged particles, usually electrons in solids. The term "orbital" distinguishes it from the contribution of spin degrees of freedom, Mspin, to the total magnetization. A nonzero orbital magnetization requires broken time-reversal symmetry, which can occur spontaneously in ferromagnetic and ferrimagnetic materials, or can be induced in a non-magnetic material by an applied magnetic field. Definitions The orbital magnetic moment of a finite system, such as a molecule, is given classically by $\mathbf{m}_{\mathrm{orb}} = \frac{1}{2} \int \mathbf{r} \times \mathbf{J}(\mathbf{r}) \, d^3 r,$ where J(r) is the current density at point r. (Here SI units are used; in Gaussian units, the prefactor would be 1/2c instead, where c is the speed of light.) In a quantum-mechanical context, this can also be written as $\mathbf{m}_{\mathrm{orb}} = -\frac{e}{2 m_e} \langle \Psi | \mathbf{L} | \Psi \rangle,$ where −e and me are the charge and mass of the electron, Ψ is the ground-state wave function, and L is the angular momentum operator. The total magnetic moment is $\mathbf{m} = \mathbf{m}_{\mathrm{orb}} + \mathbf{m}_{\mathrm{spin}},$ where the spin contribution is intrinsically quantum-mechanical and is given by $\mathbf{m}_{\mathrm{spin}} = -\frac{g_s \mu_B}{\hbar} \langle \Psi | \mathbf{S} | \Psi \rangle,$ where gs is the electron spin g-factor, μB is the Bohr magneton, ħ is the reduced Planck constant, and S is the electron spin operator. The orbital magnetization M is defined as the orbital moment density; i.e., orbital moment per unit volume. For a crystal of volume V composed of isolated entities (e.g., molecules) labelled by an index j having magnetic moments morb, j, this is $\mathbf{M}_{\mathrm{orb}} = \frac{1}{V} \sum_j \mathbf{m}_{\mathrm{orb},\,j}.$ However, real crystals are made up out of atomic or molecular constituents whose charge clouds overlap, so that the above formula cannot be taken as a fundamental definition of orbital magnetization. Only recently have theoretical developments led to a proper theory of orbital magnetization in crystals, as explained below. Theory Difficulties in the definition of orbital magnetization For a magnetic crystal, it is tempting to try to define $\mathbf{M}_{\mathrm{orb}} = \lim_{V \to \infty} \frac{1}{2V} \int_V \mathbf{r} \times \mathbf{J}(\mathbf{r}) \, d^3 r,$ where the limit is taken as the volume V of the system becomes large. However, because of the factor of r in the integrand, the integral has contributions from surface currents that cannot be neglected, and as a result the above equation does not lead to a bulk definition of orbital magnetization. Another way to see that there is a difficulty is to try to write down the quantum-mechanical expression for the orbital magnetization in terms of the occupied single-particle Bloch functions $\psi_{n\mathbf{k}}$ of band n and crystal momentum k: $\mathbf{M}_{\mathrm{orb}} = -\frac{e}{2 m_e} \sum_n^{\mathrm{occ}} \int_{\mathrm{BZ}} \frac{d^3 k}{(2\pi)^3} \, \langle \psi_{n\mathbf{k}} | \mathbf{L} | \psi_{n\mathbf{k}} \rangle,$ where p is the momentum operator, L = r × p, and the integral is evaluated over the Brillouin zone (BZ). However, because the Bloch functions are extended, the matrix element of a quantity containing the r operator is ill-defined, and this formula is actually ill-defined. Atomic sphere approximation In practice, orbital magnetization is often computed by decomposing space into non-overlapping spheres centered on atoms (similar in spirit to the muffin-tin approximation), computing the integral of r × J(r) inside each sphere, and summing the contributions. This approximation neglects the contributions from currents in the interstitial regions between the atomic spheres. Nevertheless, it is often a good approximation because the orbital currents associated with partially filled d and f shells are typically strongly localized inside these atomic spheres. It remains, however, an approximate approach. 
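The classical definition can be checked numerically on the textbook case of a planar current loop, for which the orbital moment should equal the current times the enclosed area; the loop parameters in the Python sketch below are arbitrary:

import numpy as np

# m_orb = (1/2) * integral of r x J dV for a circular loop of radius R
# carrying current I; for a line current, J dV -> I * dl * t_hat.
I, R = 2.0, 0.5                                   # amperes, meters
phi = np.linspace(0.0, 2 * np.pi, 2000, endpoint=False)
dl = 2 * np.pi * R / phi.size                     # arc-length element
r = np.stack([R * np.cos(phi), R * np.sin(phi), np.zeros_like(phi)], axis=1)
t_hat = np.stack([-np.sin(phi), np.cos(phi), np.zeros_like(phi)], axis=1)
m = 0.5 * np.sum(np.cross(r, I * t_hat * dl), axis=0)
print(m[2], I * np.pi * R**2)                     # both ~ 1.5708 A*m^2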
Modern theory of orbital magnetization A general and exact formulation of the theory of orbital magnetization was developed in the mid-2000s by several authors, first based on a semiclassical approach, then on a derivation from the Wannier representation, and finally from a long-wavelength expansion. The resulting formula for the orbital magnetization, specialized to zero temperature, is $\mathbf{M}_{\mathrm{orb}} = \frac{e}{2\hbar} \operatorname{Im} \sum_n \int_{\mathrm{BZ}} \frac{d^3 k}{(2\pi)^3} \, f_{n\mathbf{k}} \, \langle \nabla_{\mathbf{k}} u_{n\mathbf{k}} | \times \left( H_{\mathbf{k}} + E_{n\mathbf{k}} - 2\mu \right) | \nabla_{\mathbf{k}} u_{n\mathbf{k}} \rangle,$ where fnk is 0 or 1 respectively as the band energy Enk falls above or below the Fermi energy μ, $H_{\mathbf{k}} = e^{-i\mathbf{k}\cdot\mathbf{r}} H e^{i\mathbf{k}\cdot\mathbf{r}}$ is the effective Hamiltonian at wavevector k, and $| u_{n\mathbf{k}} \rangle$ is the cell-periodic Bloch function satisfying $H_{\mathbf{k}} | u_{n\mathbf{k}} \rangle = E_{n\mathbf{k}} | u_{n\mathbf{k}} \rangle.$ A generalization to finite temperature is also available. Note that the term involving the band energy Enk in this formula is really just an integral of the band energy times the Berry curvature. Results computed using the above formula have appeared in the literature. A recent review summarizes these developments. Experiments The orbital magnetization of a material can be determined accurately by measuring the gyromagnetic ratio γ, i.e., the ratio between the magnetic dipole moment of a body and its angular momentum. The gyromagnetic ratio is related to the spin and orbital magnetization according to $\gamma = -\frac{e}{2 m_e} \, \frac{M_{\mathrm{orb}} + M_{\mathrm{spin}}}{M_{\mathrm{orb}} + M_{\mathrm{spin}} / g_s}.$ The two main experimental techniques are based either on the Barnett effect or the Einstein–de Haas effect. Experimental data for Fe, Co, Ni, and their alloys have been compiled. References Magnetism Electromagnetism Quantum mechanics Electronic structure methods
Orbital magnetization
Physics,Chemistry
970
48,562,710
https://en.wikipedia.org/wiki/Forensic%20biomechanics
Forensic biomechanics is the application of biomechanical engineering science to litigation, where biomechanical experts determine whether an accident was the cause of an alleged injury (see "The Rise of Biomechanical Experts at Trial" by Robert Glick, Esq., and Sean O'Loughlin, Esq., New York State Bar Association Bar Journal, November/December 2010). Application of biomechanics to the analysis of an accident involves an accident reconstruction coupled with an analysis of the motions and forces affecting the people involved in the accident (see the same article). A biomechanical expert's testimony on the motions and forces involved in an accident may be both relevant and probative on the issue of injury causation (see the same article). History During the years 2005 to 2019, the courts of New York City witnessed the introduction and widespread use of biomechanical experts. Soon after their introduction, prominent trial attorneys and the New York State Bar Association began offering scholarly articles and educational seminars on the use of biomechanical experts. Notable articles on biomechanical experts include "The Rise of Biomechanical Experts at Trial" (New York State Bar Association Bar Journal, November/December 2010) by Robert Glick, Esq., and Sean O'Loughlin, Esq.; "The Role of Biomechanics Engineering: an Explanation in Accident Reconstruction" by Richard Sands, Esq.; "Using Biomechanical Science in Labor Law and Premises Cases" (New York Law Journal) by Richard Sands, Esq.; "Winning the Biomechanical 'Frye' Hearing" (New York Law Journal) by Steven Balson-Cohen, Esq.; and "Insurers Tap Biomechanics To Fix Blame For Injuries Claimed In Crashes" (PropertyCasualty360). On December 28, 2018, the New York Law Journal published an article by prominent trial attorney Steven Balson-Cohen titled "Requiem for the Biomechanical 'Frye' Hearing?", revealing that the current state of the law in New York was that all four appellate divisions had accepted the legitimacy of biomechanical science in the courtroom. Prominent trial attorneys in biomechanics include but are not limited to: Stephen B. Toner, Francis J. Scahill, Richard M. Sands, Claire F. Rush, Steven Balson-Cohen, John J. Komar, Howard Greenwald, Maurice J. Recchia, Cecil E. Floyd, Philip J. Rizzuto, Milene Mansouri, Joseph Jednak, Paul Koors, Anthony E. Graziani, Kristen N. Reed, and John Corring. References Biomechanics Lawsuits
Forensic biomechanics
Physics
634
714,069
https://en.wikipedia.org/wiki/Church%E2%80%93Turing%E2%80%93Deutsch%20principle
In computer science and quantum physics, the Church–Turing–Deutsch principle (CTD principle) is a stronger, physical form of the Church–Turing thesis formulated by David Deutsch in 1985. The principle states that a universal computing device can simulate every physical process. History The principle was stated by Deutsch in 1985 with respect to finitary machines and processes. He observed that classical physics, which makes use of the concept of real numbers, cannot be simulated by a Turing machine, which can only represent computable reals. Deutsch proposed that quantum computers may actually obey the CTD principle, assuming that the laws of quantum physics can completely describe every physical process. An earlier version of this thesis for classical computers was stated by Alan Turing's friend and student Robin Gandy in 1980. A similar thesis was stated by Michael Freedman in an early review of topological quantum computing with Alexei Kitaev, Michael J. Larsen, and Zhenghan Wang, known as the Freedman–Church–Turing thesis. This thesis differs from the Church–Turing–Deutsch thesis insofar as it is a statement about computational complexity, and not computability. See also Bekenstein bound Digital physics Holographic principle Quantum complexity theory Laplace's demon Notes References Further reading Christopher G. Timpson, "Quantum Computers: the Church–Turing Hypothesis Versus the Turing Principle", in Christof Teuscher, Douglas Hofstadter (eds.), Alan Turing: Life and Legacy of a Great Thinker, Springer, 2004, pp. 213–240 External links Alan Turing Principles Computability theory Theory of computation
Church–Turing–Deutsch principle
Mathematics
334
40,341,018
https://en.wikipedia.org/wiki/Stubble%20burning
Stubble burning is the practice of intentionally setting fire to the straw stubble that remains after grains, such as rice and wheat, have been harvested. The technique is used to quickly and cheaply clear fields, and it is still widespread today. Stubble burning has been associated with increasing air pollution over the past few decades due to the particulate matter it releases into the atmosphere. In India, stubble burning generates a thick haze. These fires pose a significant health risk to individuals of all ages. In countries such as India and Pakistan, stubble burning is illegal, but enforcement is weak, allowing the practice to continue. Effects The burning of stubble has both positive and negative consequences. Generally helpful effects Cheaper and easier than other removal methods Helps to combat pests and weeds Can reduce nitrogen tie-up Generally harmful effects Loss of nutrients Pollution from smoke, including greenhouse gases and others that damage the ozone layer Damage to electrical and electronic equipment from floating threads of conductive waste Risk of fires spreading out of control Additionally, prolonged stubble burning kills beneficial microflora and fauna in the soil, which reduces organic matter and destroys the carbon-nitrogen equilibrium. Health concerns related to stubble burning in India A wide array of health disorders are associated with the emissions from stubble burning, which have caused people to develop lung cancer and respiratory infections. The emissions also threaten the health of children, who tend to have weaker organs. The smog from stubble burning also severely affects people with chronic obstructive pulmonary disease, as it worsens their condition. India has the highest number of blind people in the world, and exposure of the eyes to smog raises the likelihood of developing cataracts; people exposed to smog can also develop eye irritation, watering eyes, and conjunctival hyperemia. Reducing the pollution will require serious attention to the issue, with effective, sustainable management practices enforced by the government. The Indian government has been receiving intense backlash for not reacting quickly enough to the health emergency, especially amid a green movement that is bringing attention to climate change concerns. Alternatives to stubble burning The solutions to reduce the pollution from stubble burning involve changes to crop farming, diversification of agriculture, adoption of the paddy straw farming technique, and production of biomass pellets. Agricultural residues can have other uses, such as in particle board and biofuel, though these uses can still cause problems like erosion and nutrient loss. Spraying an enzyme, which decomposes the stubble into useful fertiliser, improves the soil, avoids air pollution and prevents carbon dioxide emissions. Several companies worldwide use leftover agricultural waste to make new products. Agricultural waste can serve as a raw material for new applications, such as paper and board, bio-based oils, leather, catering disposables, fuel and plastic. Another important way to manage the agricultural waste from stubble burning is to detoxify the soil after it has been burned, using aerobic and anaerobic techniques that recycle organic matter. 
Empowering farmers to use sustainable solutions Reducing particulate matter pollution also requires sustained attention to the issue, effective sustainable management practices, and government support. The farm owners whose crop residues are burned will also need to acknowledge their stake in the problem and form agreements with the government. Unfortunately, many of the farmers who contribute to the pollution are unaware of how harmful stubble burning is, especially how it depletes the soil of nutrients and contaminates the air. Empowering farmers and educating them about the harmful consequences of stubble burning for the atmosphere is also necessary for reducing the pollution. Attitudes toward stubble burning Stubble burning has been effectively prohibited since 1993 in the United Kingdom. A perceived increase in blackgrass, and particularly herbicide-resistant blackgrass, has led to a campaign by some arable farmers for its return. In Australia stubble burning is "not the preferred option for the majority of farmers" but is permitted and recommended in some circumstances. Farmers are advised to rake and burn windrows, and leave a fire break of 3 metres around any burn-off. In the United States, fires are fairly common in mid-western states, but some states such as Oregon and Idaho regulate the practice. In the European Union, the Common Agricultural Policy strongly discourages stubble burning. In China, there is a government ban on stubble burning; however, the practice remains fairly common. In northern India, stubble burning, practiced since the 1980s, continues despite a ban by the Punjab Pollution Control Board. Authorities are starting to enforce this ban more proactively, and to research alternatives. Stubble burning is allowed by permit in some Canadian provinces, including Manitoba, where 5% of farmers were estimated to do it in 2007. India Stubble burning in Punjab, Haryana, and Uttar Pradesh in north India has been cited as a major cause of air pollution in Delhi since 1980. Consequently, the government is considering implementation of the 1,600 km long and 5 km wide Great Green Wall of Aravalli. The smog that arises from the burning contributes fine black and brown carbon to the atmosphere, which affects light absorption. As the weather is cooler in November in India, the stubble burning generates a thick haze of fog, dust, and industrial pollution. From April to May and October to November each year, farmers mainly in Punjab, Haryana, and Uttar Pradesh burn an estimated 35 million tons of crop waste from their wheat and paddy fields after harvesting, as a low-cost straw-disposal practice to reduce the turnaround time between harvesting and sowing for the first (summer) crop and the second (winter) crop. Smoke from this burning produces a cloud of particulates visible from space and has produced what has been described as a "toxic cloud" in New Delhi, resulting in declarations of an air-pollution emergency. For this, the NGT (National Green Tribunal) instituted a fine of ₹2 lakh on the Delhi Government for failing to file an action plan providing incentives and infrastructural assistance to farmers to stop them from burning crop residue, in order to prevent air pollution. 
Although harvesters such as the Indian-manufactured "Happy Seeder", which shreds the crop residues into small pieces and uniformly spreads them across the field, are available as an alternative to burning stubble, and crops such as millets and maize can be grown as a sustainable alternative to rice and wheat in order to conserve water, some farmers complain that the cost of these machines is a significant financial burden and that the alternative crops, unlike rice and wheat, are not purchased at minimum support prices (MSP). The Indian Agricultural Research Institute developed an enzyme bio-decomposer solution that can be sprayed after the harvest to increase organic carbon in the soil and maintain overall soil health. In 2021, they began licensing its use to various companies. In May 2022, the Government of Punjab announced it will purchase maize, bajra, sunflower and moong crops at MSP, encouraging farmers to adopt less water-consuming options as a sustainable alternative to paddy and wheat in the wake of fast-depleting groundwater. The pollution from stubble burning in India A 2020 study showed that the country creates 600-700 million tonnes of crop residue a year, the burning of which is choking cities. People in India are awaiting sustainable management to reduce the pollution. The areas contributing most of the stubble-burning pollutants are Uttar Pradesh, Punjab, and Haryana, and the pollution is spreading to the border of Uttarakhand. The unsustainable use of alternating wheat-rice cropping patterns is exhausting natural resources like water, soil, and forest areas. In one year the emissions from the crop burning can be 17 times the total annual particulate pollution, and the carbon dioxide emissions from crop residue burning are 64 times the elemental carbon emissions in Delhi. The crops that are typically burned include rice, wheat, maize, millet, and sugarcane, all of which have large investment returns and also leave a residue on the field after being cut. After 1 tonne of crop residue is burnt in a field, there is a release of 1,400 kg of carbon dioxide (CO2), 58 kg of carbon monoxide (CO), 11 kg of particulate matter, 4.9 kg of nitrogen oxides (NOx), and 1.2 kg of sulfur dioxide (SO2) (a rough scaling of these factors to the annual total follows this entry). Stubble burning also depletes groundwater, and the lack of attention to the issue has led Indian civilians to lose hope of effective government intervention. See also Slash-and-burn References Agriculture Articles containing video clips Fire Horticultural techniques Air pollution
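Scaling the per-tonne emission factors above to the roughly 35 million tonnes burned each year (quoted earlier) gives a rough sense of the magnitudes involved; the Python sketch below uses only figures taken from the text:

# Annual emissions implied by the per-tonne factors above (rough scaling).
factors_kg_per_tonne = {"CO2": 1400, "CO": 58, "PM": 11, "NOx": 4.9, "SO2": 1.2}
residue_tonnes = 35e6
for gas, kg_per_t in factors_kg_per_tonne.items():
    million_tonnes = kg_per_t * residue_tonnes / 1e9   # kg -> million tonnes
    print(gas, round(million_tonnes, 2), "million tonnes/year")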
Stubble burning
Chemistry
1,752
61,641,996
https://en.wikipedia.org/wiki/Maxwell%27s%20theorem%20%28geometry%29
Maxwell's theorem is the following statement about triangles in the plane: if the sides of a second triangle are parallel to the cevians through a common point V in a first triangle, then the cevians in the second triangle that are parallel to the sides of the first triangle also pass through a common point. The theorem is named after the physicist James Clerk Maxwell (1831–1879), who proved it in his work on reciprocal figures, which are of importance in statics. References Daniel Pedoe: Geometry: A Comprehensive Course. Dover, 1970, pp. 35–36, 114–115 Daniel Pedoe: "On (what should be) a Well-Known Theorem in Geometry." The American Mathematical Monthly, Vol. 74, No. 7 (August – September, 1967), pp. 839–841 (JSTOR) Dao Thanh Oai, Cao Mai Doai, Quang Trung, Kien Xuong, Thai Binh: "Generalizations of some famous classical Euclidean geometry theorems." International Journal of Computer Discovered Mathematics, Vol. 1, No. 3, pp. 13–20 External links Maxwell's Theorem at cut-the-knot.org Elementary geometry Theorems about triangles James Clerk Maxwell
Maxwell's theorem (geometry)
Mathematics
209
5,910,237
https://en.wikipedia.org/wiki/Scene%20shop
A scenery shop or scene shop is a specialized workshop found in many medium or large theaters, as well as many educational theatre settings. The primary function of a scene shop is to fabricate and assemble the flats, platforms, scenery wagons, and other scenic (set) pieces required for a performance. Commonly, a scene shop is also the location where most of the set painting is done, and is sometimes used to make props. Generally, the individuals who work in a scene shop are carpenters, although, in bigger shops, it is common for metalworkers to be employed for steel-construction set pieces which require welding and other machining. It is common for the individuals working in a scene shop to be knowledgeable in a wide variety of technical skills, developed over time as required for specific construction needs. Commercial shops Commercial scene shops can also be found in larger metropolitan areas, where they are capable of supplying scenic elements to a variety of clients, including theatre, film, television, and corporate productions. Scenic studios also sometimes make elements for museum booths, touring concerts, and other custom fabrication tasks. Shop construction Although many scene shops are located in general-purpose buildings, most are in purpose-built spaces, because scenic fabrication has some fairly specific needs. Often they are in very large, open rooms, to accommodate big elements that a show may call for. They are usually attached directly to a loading dock for delivery of materials and shipping of finished elements. If they are attached to a performance venue, it is also common to have large doors providing access to the stage. Typically, compressed air and dust collection systems are distributed around the shop. Many processes in a scene shop, such as spray painting, welding, or hot-wire foam cutting, produce dangerous gases, so extra ventilation is often installed. Ideally, fume collection systems are available for use near the actual workpiece. Power is usually available in floor pockets or dropped from the ceiling, in a variety of voltages, as some tools, especially welders, require high voltages. Often, scene shops have designated areas inside for paints, carpentry, metalwork, and sometimes prop construction. A scene shop will have many workers skilled in different crafts. Common scenery shop tools Circular saws Table saw Power drills Screwdrivers Hammers Miter saws Welder Cutoff grinder Power sander References Parts of a theatre Scenic design Prop design
Scene shop
Technology,Engineering
484
32,302,221
https://en.wikipedia.org/wiki/Hybrid%20grouper
The first hybrid grouper was a fish cross-bred by researchers from Universiti Malaysia Sabah (UMS), Malaysia, in collaboration with researchers from the Borneo Marine Research Institute of UMS, the Fisheries Development Authority of Malaysia (LKIM) and Kindai University of Japan, represented by Shigeharu Senoo of UMS. The first hybrid grouper was produced by fertilising the eggs of the tiger grouper (Epinephelus fuscoguttatus) with the sperm of the giant grouper (Epinephelus lanceolatus) through the in-vitro fertilisation (IVF) technique. It is commonly known as Sabah grouper or pearl grouper. References Epinephelus Fish hybrids Intergeneric hybrids Invasive species
Hybrid grouper
Biology
163
21,454,204
https://en.wikipedia.org/wiki/SiCortex
SiCortex was a supercomputer manufacturer founded in 2003 and headquartered in Clock Tower Place, Maynard, Massachusetts. On 27 May 2009, HPCwire reported that the company had shut down its operations, laid off most of its staff, and was seeking a buyer for its assets. The Register reported that Gerbsman Partners was hired to sell SiCortex's intellectual property. While SiCortex had some sales, selling at least 75 prototype supercomputers to several large customers, the company had never produced an operating profit and ran out of venture capital. New funding could not be found. The company built and marketed a family of clusters of between 12 and 972 compute nodes, connected in a Kautz graph. The clusters are the SC5832, SC648 and SC072. It was reported that the company had been working on the next generation of clusters since March 2009, but development ceased when operations were closed. The SC5832 is a high-end model housed in a cabinet. It has 972 nodes, 5,832 cores and 972 to 7,776 GB of memory. It uses a diameter-6 Kautz graph for 2,916 links. The SC648 is a mid-range model housed in a standard 19-inch rack. Each rack may contain two systems. It has 108 nodes, 648 cores and 108 to 864 GB of memory. It uses a diameter-4 Kautz graph for 324 links. The SC072 is a desktop model for developing software. Each node is a system-on-chip (SoC), codenamed ICE9, consisting of six cores that implement the MIPS64 instruction set architecture (ISA). Each core has a 32 KB instruction cache and a 32 KB data cache. The six cores have their own 256 KB L2 cache, which can be accessed by other cores. The MIPS cores execute instructions in-order and have a six-stage pipeline. They can issue and execute two instructions per cycle for a peak double-precision (64-bit) performance of 1 GFLOPS at 500 MHz. This was later increased to 1.4 GFLOPS when the SoC was fabricated in a 90 nm process, allowing its clock frequency to be raised to 700 MHz. The SoC contains two DDR2 memory controllers, each controlling a single DIMM. Each node can have 1 to 8 GB of memory. The SoC also implements an 8x PCI Express controller. The cluster interconnect is implemented by a DMA engine fabric switch. Each cluster interconnect provides a maximum bandwidth of 2 GB/s. Message passing, via MPI, is the presumptive programming model. SiCortex systems run a customized Linux distribution derived from Gentoo Linux. Hardware Models References https://web.archive.org/web/20090531211623/http://www.top500.org/2007_overview_recent_supercomputers/sicortex_sc_series http://www.boston.com/business/technology/articles/2009/01/04/through_ups_and_downs_creative_mill_grinds_on/ http://www.bio-medicine.org/biology-news-1/Argonne-National-Lab-acquires-first-SiCortex-SC5832-824-1/ Tally, Steve "New green supercomputer powers up at Purdue". Morgan, Timothy Prickett (19 September 2008). "SiCortex cranks clocks on mega MIPS machines". Vance, Ashlee (20 November 2006). "Startup takes Reg's coveted 'Top FLOP' award". The Register. Morgan, Timothy Prickett (12 March 2009). "Supercomputer niche chucks rocks at Nehalem". The Register. External links Installation Photos @ Argonne National Laboratory Signed PCB & Rhode Island Computer Museum Cluster computing MIPS architecture Parallel computing Supercomputers
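As a quick plausibility check of the figures quoted above: the Kautz graph K(m, n) has (m + 1)·m^n vertices, each of out-degree m, and diameter n + 1. The Python sketch below (an illustration, not SiCortex software; the degree-3 assumption is inferred from the quoted node and link counts) reproduces the SC5832 and SC648 numbers:

def kautz(m, n):
    # Vertex count, directed-link count and diameter of the Kautz graph K(m, n).
    nodes = (m + 1) * m ** n
    return nodes, nodes * m, n + 1

print(kautz(3, 5))   # (972, 2916, 6): matches the SC5832
print(kautz(3, 3))   # (108, 324, 4): matches the SC648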
SiCortex
Technology
830
46,201,390
https://en.wikipedia.org/wiki/Media%20aggregation%20platform
A Media Aggregation Platform or Media Aggregation Portal (MAP) is an over-the-top service for distributing web-based streaming media content from multiple sources to a large audience. MAPs consist of networks of sources that host their own content, giving viewers a larger variety of content to choose from and access directly than a single source can offer. The service is used by content providers looking to extend the reach of their content. Unlike a multichannel video programming distributor (MVPD) or multiple-system operator (MSO), MAPs rely on the Internet rather than cable or satellite. As more network television channels have moved online in the early 21st century, joining web-native channels like Netflix, MAPs aggregate content the way that MSOs and MVPDs have used cable, and to a lesser extent satellite and IPTV infrastructure. Some companies offer a similar service for free, including Yidio and StreamingMoviesRight, while others charge a subscription fee, such as FreeCast Inc.'s Rabbit TV Plus. When compared with MSOs and MVPDs, MAP networks have much lower costs due to the lack of physical infrastructure. The majority of revenue from their services is retained by the content creators, and revenue comes from advertisements, pay-per-view, and subscription-based content offerings rather than from licensing and reselling content. MAPs connect consumers directly with the content source, and consumers purchase content directly from its source, without the markup added by a middleman. See also Broadcast networks Internet Protocol television (IPTV) Multichannel video programming distributor (MVPD) Multiple system operator (MSO) Over-the-air (OTA) Over-the-top (OTT) Pay TV Streaming Media Streaming television Video on demand (VOD) References Data management
Media aggregation platform
Technology
354
33,791,207
https://en.wikipedia.org/wiki/Ultralight%20material
Ultralight materials are solids with a density of less than 10 mg/cm3, including silica aerogels, carbon nanotube aerogels, aerographite, metallic foams, polymeric foams, and metallic microlattices. The density of air is about 1.275 mg/cm3, which means that the air in the pores contributes significantly to the density of these materials in atmospheric conditions. They can be classified by production method as aerogels, stochastic foams, and structured cellular materials. Properties Ultralight materials are solids with a density of less than 10 mg/cm3. An ultralight material is defined by its cellular arrangement and by the stiffness and strength of its solid constituent. They include silica aerogels, carbon nanotube aerogels, aerographite, metallic foams, polymeric foams, and metallic microlattices. Ultralight materials are produced so that the strength of the bulk-scale solid is retained at micro-scale dimensions. They are also designed not to compress even under extreme pressure, which shows that they are stiff and strong. Ultralight materials also have elastic properties. Some ultralight materials are designed with more pores to allow the structure to transfer heat better, which is needed for many applications, such as pipes. In compression experiments, ultralight materials almost always show complete recovery from strains exceeding 50%. Applications Ultralight foams are produced from 3D interconnected hollow tubes at the micrometer and nanometer levels. These foams are used to quickly and selectively absorb oils from water surfaces under a magnetic field. The foam can absorb 100 times its own weight. Plywood-faced sandwiches are thermal insulators made with low-density fiberboards and an ultralight foam interior. Plywood-faced sandwiches provide insulation properties superior to those without ultralightweight cores. Possible advances Parts of massive bridges could be made from ultra-strong, lightweight material in the future. These bridges would also be insulated from heat and cold. Scientists at MIT have been working to make a material that is as strong as steel but has the density of a plastic bag. The biggest hurdle in making such materials is the lack of industrial manufacturing capability for producing them. Ultralight material is constantly subjected to compression and accidental physical damage or abuse during practical applications. Recent advances in ultralight magnetic frameworks have allowed structures made from lightweight material to self-repair when their structure is compromised. These ultralight materials are capable of healing because of pH-induced coordination between iron and catecholic compounds. Examples Aerogel The first ultralight material, aerogel, was created by Samuel Stephens Kistler in 1931. Stochastic foam Graphene foams and graphite foams are examples of stochastic foams. Structured cellular materials Structured cellular materials can be remarkably strong despite very low density. Reversibly assembled cellular composite materials enable tailorable composite material properties, reaching the ideal linear specific stiffness scaling regime. Using projection microstereolithography, octet microlattices have also been fabricated from polymers, metals, and ceramics. The design of the high-performing lattices means that the individual struts making up the materials do not bend. The materials are therefore exceptionally stiff and strong for their weight. References Bibliography Materials Least dense things
Ultralight material
Physics
669
75,252,202
https://en.wikipedia.org/wiki/Aminopeptidase%20A%20inhibitor
Aminopeptidase A inhibitors are a class of antihypertensive drugs that work by inhibiting the conversion of angiotensin II to angiotensin III by the aminopeptidase A enzyme. The first medication in this class is firibastat. It is hypothesized that the drugs may be more effective in overweight people and those of African descent. References Antihypertensive agents
Aminopeptidase A inhibitor
Chemistry
91
10,766,937
https://en.wikipedia.org/wiki/Semiautomaton
In mathematics and theoretical computer science, a semiautomaton is a deterministic finite automaton having inputs but no output. It consists of a set Q of states, a set Σ called the input alphabet, and a function T: Q × Σ → Q called the transition function. Associated with any semiautomaton is a monoid called the characteristic monoid, input monoid, transition monoid or transition system of the semiautomaton, which acts on the set of states Q. This may be viewed either as an action of the free monoid of strings in the input alphabet Σ, or as the induced transformation semigroup of Q. In older books like Clifford and Preston (1967) semigroup actions are called "operands". In category theory, semiautomata essentially are functors. Transformation semigroups and monoid acts A transformation semigroup or transformation monoid is a pair (Q, M) consisting of a set Q (often called the "set of states") and a semigroup or monoid M of functions, or "transformations", mapping Q to itself. They are functions in the sense that every element m of M is a map m: Q → Q. If s and t are two functions of the transformation semigroup, their semigroup product st is defined as their function composition (apply one, then the other). Some authors regard "semigroup" and "monoid" as synonyms. Here a semigroup need not have an identity element; a monoid is a semigroup with an identity element (also called "unit"). Since the notion of functions acting on a set always includes the notion of an identity function, which when applied to the set does nothing, a transformation semigroup can be made into a monoid by adding the identity function. M-acts Let M be a monoid and Q be a non-empty set. If there exists a multiplicative operation μ: Q × M → Q, written q⋅m = μ(q, m), which satisfies the properties q⋅1 = q for 1 the unit of the monoid, and q⋅(st) = (q⋅s)⋅t for all q in Q and s, t in M, then the triple (Q, M, μ) is called a right M-act or simply a right act. In long-hand, q⋅m is the right multiplication of elements of Q by elements of M. The right act is often written as Q_M. A left act is defined similarly, with an operation M × Q → Q satisfying 1⋅q = q and (st)⋅q = s⋅(t⋅q), and is often denoted as _MQ. An M-act is closely related to a transformation monoid. However the elements of M need not be functions per se, they are just elements of some monoid. Therefore, one must demand that the action of μ be consistent with multiplication in the monoid (i.e. q⋅(st) = (q⋅s)⋅t), as, in general, this might not hold for some arbitrary μ, in the way that it does for function composition. Once one makes this demand, it is completely safe to drop all parentheses, as the monoid product and the action of the monoid on the set are completely associative. In particular, this allows elements of the monoid to be represented as strings of letters, in the computer-science sense of the word "string". This abstraction then allows one to talk about string operations in general, and eventually leads to the concept of formal languages as being composed of strings of letters. Another difference between an M-act and a transformation monoid is that for an M-act Q, two distinct elements of the monoid may determine the same transformation of Q. If we demand that this does not happen, then an M-act is essentially the same as a transformation monoid. M-homomorphism For two M-acts Q_M and B_M sharing the same monoid M, an M-homomorphism is a map f: Q_M → B_M such that f(q⋅m) = f(q)⋅m for all q in Q and m in M. The set of all M-homomorphisms is commonly written as Hom(Q_M, B_M) or Hom_M(Q, B). The M-acts and M-homomorphisms together form a category called M-Act.
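As a concrete check of these axioms, the following Python sketch (illustrative only, not drawn from the cited literature) takes M to be the full transformation monoid on a three-element set Q, acting on the right by q·f = f(q), and verifies both right-act properties exhaustively:

from itertools import product

Q = range(3)
# The full transformation monoid on Q: every map Q -> Q as a tuple of images.
M = list(product(Q, repeat=3))
IDENTITY = (0, 1, 2)

def compose(f, g):
    # Semigroup product f*g = "apply f, then g".
    return tuple(g[f[q]] for q in Q)

def act(q, f):
    return f[q]

assert all(act(q, IDENTITY) == q for q in Q)
assert all(act(q, compose(f, g)) == act(act(q, f), g)
           for q in Q for f in M for g in M)
print("right-act axioms hold for all", len(M), "transformations")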
Semiautomata A semiautomaton is a triple (Q, Σ, T) where Σ is a non-empty set, called the input alphabet, Q is a non-empty set, called the set of states, and T: Q × Σ → Q is the transition function. When the set of states Q is a finite set—it need not be—, a semiautomaton may be thought of as a deterministic finite automaton (Q, Σ, T, q0, A), but without the initial state q0 or the set of accept states A. Alternately, it is a finite-state machine that has no output, and only an input. Any semiautomaton induces an act of a monoid in the following way. Let Σ* be the free monoid generated by the alphabet Σ (so that the superscript * is understood to be the Kleene star); it is the set of all finite-length strings composed of the letters in Σ. For every word w in Σ*, let T_w: Q → Q be the function, defined recursively, as follows, for all q in Q: If w = ε is the empty word, then T_ε(q) = q, so that the empty word does not change the state. If w = σ is a letter in Σ, then T_σ(q) = T(q, σ). If w = σv for σ in Σ and v in Σ*, then T_w(q) = T_v(T_σ(q)). Let M(Q, Σ, T) be the set M(Q, Σ, T) = {T_w : w in Σ*}. The set M(Q, Σ, T) is closed under function composition; that is, for all v, w in Σ*, one has T_w ∘ T_v = T_vw. It also contains T_ε, which is the identity function on Q. Since function composition is associative, the set M(Q, Σ, T) is a monoid: it is called the input monoid, characteristic monoid, characteristic semigroup or transition monoid of the semiautomaton (Q, Σ, T). Properties If the set of states Q is finite, then the transition functions are commonly represented as state transition tables. The structure of all possible transitions driven by strings in the free monoid has a graphical depiction as a de Bruijn graph. The set of states Q need not be finite, or even countable. As an example, semiautomata underpin the concept of quantum finite automata. There, the set of states Q is given by the complex projective space CP^n, and individual states are referred to as n-state qubits. State transitions are given by unitary n×n matrices. The input alphabet Σ remains finite, and other typical concerns of automata theory remain in play. Thus, the quantum semiautomaton may be simply defined as the triple (CP^n, Σ, {U_σ1, …, U_σp}) when the alphabet Σ has p letters, so that there is one unitary matrix U_σ for each letter σ. Stated in this way, the quantum semiautomaton has many geometrical generalizations. Thus, for example, one may take a Riemannian symmetric space in place of CP^n, and selections from its group of isometries as transition functions. The syntactic monoid of a regular language is isomorphic to the transition monoid of the minimal automaton accepting the language. Literature A. H. Clifford and G. B. Preston, The Algebraic Theory of Semigroups. American Mathematical Society, volume 2 (1967). F. Gecseg and I. Peak, Algebraic Theory of Automata (1972), Akademiai Kiado, Budapest. W. M. L. Holcombe, Algebraic Automata Theory (1982), Cambridge University Press J. M. Howie, Automata and Languages (1991), Clarendon Press. Mati Kilp, Ulrich Knauer, Alexander V. Mikhalev, Monoids, Acts and Categories (2000), Walter de Gruyter, Berlin. Rudolf Lidl and Günter Pilz, Applied Abstract Algebra (1998), Springer. References Category theory Semigroup theory Finite automata
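A hypothetical two-letter semiautomaton makes the construction concrete. The Python sketch below (illustrative only; the transition table is arbitrary) computes T_w by folding a word over the transition function, then generates the transition monoid by closing the letter maps under composition:

# Transition table T(q, letter) for Q = {0, 1, 2} and the alphabet {a, b}.
Q = range(3)
T = {'a': (1, 2, 0),   # a cyclic shift of the states
     'b': (0, 0, 2)}   # a non-invertible map

def run(q, w):
    # T_w(q): apply the letters of w to the state q, left to right.
    for letter in w:
        q = T[letter][q]
    return q

# Close {T_a, T_b} under composition; each T_w is recorded as the tuple
# (T_w(0), T_w(1), T_w(2)). T_epsilon is the identity (0, 1, 2).
monoid = {(0, 1, 2)}
frontier = [""]
while frontier:
    w = frontier.pop()
    for letter in "ab":
        t = tuple(run(q, w + letter) for q in Q)
        if t not in monoid:
            monoid.add(t)
            frontier.append(w + letter)
print(len(monoid))   # size of the transition monoid; at most 3^3 = 27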
Semiautomaton
Mathematics
1,472
23,517,706
https://en.wikipedia.org/wiki/Edge%20wave
In fluid dynamics, an edge wave is a surface gravity wave trapped by refraction against a rigid boundary, often a shoaling beach. Progressive edge waves travel along this boundary, varying sinusoidally along it and diminishing exponentially in the offshore direction. References Further reading Oceanography Water waves
Edge wave
Physics,Chemistry,Environmental_science
62
30,302,010
https://en.wikipedia.org/wiki/Caldisericum
Caldisericum exile is a species of bacteria sufficiently distinct from other bacteria to be placed in its own family, order, class and phylum. It is the first member of the thermophilic candidate phylum OP5 to be cultured and described. See also List of bacteria genera List of bacterial orders References External links Type strain of Caldisericum exile at BacDive - the Bacterial Diversity Metadatabase Caldiserica Thermophiles Bacteria genera
Caldisericum
Biology
100
62,222,719
https://en.wikipedia.org/wiki/Nokia%202720%20Flip
The Nokia 2720 Flip is a Nokia-branded flip phone developed by HMD Global. The 2720 Flip was created as an updated version of the Nokia 2720 Fold, which debuted in 2009. It was unveiled at IFA 2019 together with the Nokia 110 (2019), Nokia 800 Tough, Nokia 6.2, and Nokia 7.2. It runs the KaiOS operating system, a web-based operating system based on B2G OS. Target Audience According to a 2019 article from The Verge, "If you’re seriously considering buying a feature phone like the Nokia 2720 Flip in 2019 then I think you’re likely to be one of three kinds of people. Either you live or work in the developing world, where smartphones sometimes aren’t a practical option, you’re the kind of person who thrives on nostalgia for a simpler time, or else you’re trying to do a “digital detox,” and decrease the amount of time you seem to waste staring at screens every day". This fits in with the general trend of modern flip phones being seen as a less harmful alternative to smartphones. Software The Nokia 2720 Flip runs on the web-based operating system KaiOS. It runs many modern apps including, but not limited to, WhatsApp, Facebook, Google Assistant and YouTube. Reception In February 2020, the Nokia 2720 Flip received an iF Design Award 2020 from iF International Forum Design. See also Nokia 8110 4G Nokia 800 Tough References External links 2720 Flip KaiOS phones Mobile phones introduced in 2019 Mobile phones with user-replaceable battery
Nokia 2720 Flip
Technology
330
22,580,062
https://en.wikipedia.org/wiki/Chaetopsis%20macroclada
Chaetopsis macroclada is a species of fungus in the genus Chaetopsis. References Ascomycota Fungus species
Chaetopsis macroclada
Biology
28
18,607,759
https://en.wikipedia.org/wiki/PROPHET%20system
The PROPHET system was an early medical expert system. The system was initiated in about 1965 by a young administrator at NIH, William Raub, who had the idea to set up a collaborative communication network, modeled on ARPANET, for use among biomedical investigators to share data and procedures, with a wide range of computational tools ranging from statistics to molecular orbital calculations. A panel of NIH advisors planned the system over a couple of years, until Bolt, Beranek and Newman (BBN Technologies) received the contract for implementation. The PROPHET system was operational for 15 to 20 years before being superseded by more current Internet tools. According to NIH, "PROPHET is the most nearly comprehensive set of information-handling tools for this area of science ever to be presented in a unified system, and offered as a service to the biomedical community." Further reading Castleman, P. A., et al. "The implementation of the PROPHET system." Proceedings of the May 6–10, 1974, national computer conference and exposition. ACM, 1974. Ransil, Bernard J. "Applications of the PROPHET system in human clinical investigation." Proceedings of the May 6–10, 1974, national computer conference and exposition. ACM, 1974. Rohrer, Douglas C., et al. "Functional receptor mapping for modified cardenolides: Use of the PROPHET system." 1979. 259–279. Weeks, Charles M., et al. "Applications of the PROPHET system in correlating crystallographic structural data with biological information." Proceedings of the May 6–10, 1974, national computer conference and exposition. ACM, 1974. Archival materials The Computer History Museum holds materials related to PROPHET, including two brochures about "Prophet, a National Computing Resource for Life Science Research" from BBN and a BBN Prophet II Design Manual. References Expert systems
PROPHET system
Technology
383
54,498,676
https://en.wikipedia.org/wiki/PSV%20Garuda%20Vega
PSV Garuda Vega is a 2017 Indian Telugu-language action spy film written and directed by Praveen Sattaru. The film is produced by M. Koteswara Raju and stars Rajasekhar, Pooja Kumar, Adith Arun, Shraddha Das, and Kishore, while Sanjay Reddy, Nassar, Ali, Posani Krishna Murali, and Sayaji Shinde play supporting roles. PSV Garuda Vega was released worldwide on 3 November 2017, where it received positive reviews from critics and emerged as a commercial success at the box office. Plot Niranjan Iyer, a hacker, makes a deal with an unknown person for confidential information. When he confirms the deal, someone chases after him, but Niranjan manages to escape. Pullela Chandrasekhar, an ACP of the NIA, and his wife Swathi consult a divorce lawyer. Swathi accuses Shekhar of neglecting her and their son and of focusing only on his work. Shekhar gets into a car crash when he returns to his office. He finds a high-range gun, rare in India, in the other car, and the other car's owner escapes by shooting at Shekhar. Shekhar decides to arrest the car owner and orders his assistant to check traffic-signal CCTV footage to find him. He discovers that the owner is the seasoned assassin Yakub Ali. Shekhar and his team interrogate him, but learn nothing; they decide to bring him to court, but Ali dies on the way, whereupon his phone receives a message displaying three codes. Having solved the codes, Shekhar and his team learn that they describe a bomb placed in Charminar and set to explode at 8:00 AM, just a few hours away. They also learn during the investigation that the opposition leader of the state, Pratap Reddy, is going on a rally against the current government, and they conclude that the bomb is placed to kill Reddy. Shekhar finds the bomb in an underground hole and immediately informs the bomb squad, but they inform him that it is a remote-controlled bomb and that the remote must be used from the vicinity to detonate it. Shekhar searches for the remote, but to no avail. After Shekhar captures a suspicious man, the latter reveals that the remote is in a white van operated by the terrorists. Along with the other police officers, Shekhar shoots at the van and kills the terrorists. Shekhar finds Niranjan in the crowd and chases after him, successfully arresting him. In custody, Niranjan manages to sell the drive containing the secret information to Reddy's secretary. After the arrest, Shekhar's senior officer is informed by the government that Shekhar has been assigned to transfer Niranjan to Nagpur. When Shekhar and Niranjan arrive in Nagpur, a gang attacks them, forcing their van into the river and shooting at them. Shekhar saves Niranjan and heads into the forest, where Niranjan reveals that the success of the Pokhran-II nuclear tests in 1998 led India to keep its uranium reserves protected from other countries for future use. The head of the Department of Atomic Energy (DAE) hired four hackers, including Niranjan, Vishal and two others, to replace the current photos of the country's uranium reserves with older photos, so that satellites from other countries could not see its wealth. They successfully completed the hack and then left. However, Vishal had bugged the Indian database of the DAE and learned that a uranium mine in Andhra Pradesh has the highest quantity of uranium in the country. The head of the DAE had made a deal with the MLA of the district and several other ministers, and the hackers were used to hack the satellites to cover up evidence of uranium smuggling.
Vishal blackmailed the head of the DAE, but was killed. Niranjan took the drive and sold it to Reddy's secretary so that Reddy would expose the scandal, but Reddy instead made a deal with the MLA. After the revelation, Shekhar kills the mercenaries sent to kill them and manages to track the location of the main suspect, George. Shekhar and Niranjan leave to meet George to find the data on a remote server of the DAE, but are captured. They are left tied up, along with Swathi, on the remote-server ship, which is rigged with explosives to kill them. Shekhar and Niranjan manage to escape and detach the explosives from their ship. They shoot the explosives onto George's ship by using a leaking, highly pressurized steam pipe, and successfully kill him and the smugglers. The MLA and the other officials involved in the scandal are exposed and arrested. Cast Rajasekhar as Pullela Chandrashekhar "Shekhar" Pooja Kumar as Swathi Adith Arun as Niranjan Iyer Shraddha Das as Malini Kishore as George Sanjay Reddy as Ex-Intelligence Officer and Analyst Nassar as NIA Head Ali as Lawyer Posani Krishna Murali as Opposition Leader Pratap Reddy Sayaji Shinde as Mining Minister Srinivas Avasarala as Prakash Charandeep as NIA Officer Venkat Rao Ravi Varma as NIA Officer Yadav Pruthviraj as Doctor Shatru as Yakub Ali Sanjay Swaroop as Ravi Naik Aadarsh Balakrishna Sunny Leone - special appearance in the song "Deo Deo" Soundtrack The soundtrack was composed by Bheems Ceciroleo and Sricharan Pakala. Controversy On 12 April 2018, the Hyderabad city civil court forbade the film's producers, directors, and YouTube from screening the movie. The court passed this interim order after a city-based uranium company filed a complaint against the film, claiming that it showed the corporation in a bad light. Notes References External links 2010s Telugu-language films Indian nonlinear narrative films Indian spy thriller films 2010s spy thriller films Films shot in Hyderabad, India Films set in Hyderabad, India Films about computing Techno-thriller films Films about security and surveillance Films about Islamic terrorism in India Films shot in Chennai Fictional portrayals of the Telangana Police Films about murder Films about mass murder Intelligence Bureau (India) in fiction Films about the Research and Analysis Wing Films scored by Sricharan Pakala Films scored by Bheems Ceciroleo Films directed by Praveen Sattaru Films set on ships
PSV Garuda Vega
Technology
1,300
164,033
https://en.wikipedia.org/wiki/Netizen
The term netizen is a portmanteau of the English words internet and citizen, as in a "citizen of the net" or "net citizen". It describes a person actively involved in online communities or the Internet in general. The term also commonly implies an interest and active engagement in improving the internet, making it an intellectual and a social resource, or in its surrounding political structures, especially in regard to open access, net neutrality and free speech. The term was widely adopted in the mid-1990s as a way to describe those who inhabit the new geography of the internet. Internet pioneer and author Michael F. Hauben is credited with coining and popularizing the term. Determining factor In general, any individual who has access to the internet has the potential to be classified as a netizen. In the 21st century, this is made possible by the global connectivity of the internet. People can physically be located in one country but connected to most of the world via a global network. There is a clear distinction between netizens and people who merely come online to use the internet. A netizen is described as an individual who actively seeks to contribute to the development of the internet. Netizens are not individuals who go online for personal gain or profit; instead, they actively seek to make the internet a better place. A term used to classify internet users who do not actively contribute to the development of the internet is "lurker". Lurkers cannot be classified as netizens: although they do not actively harm the internet, they do not contribute either. Moreover, lurkers seem to be more critical of the technological elements enabling communities, whereas posters appear to be more critical of users who hamper community creation by making rude or unpleasant comments. Additionally, although engagement between those who post and those who lurk differed in the communities studied, discussions indicate that both lurkers and posters had distinct motives for lurking and might modify their engagement behaviour based on how they understand the community in various online groups. In China In Mandarin Chinese, the terms wǎngmín (网民, literally "netizen" or "net folks") and wǎngyǒu (网友, literally "net friend" or "net mate") are commonly used terms meaning "internet users", and the English word netizen is used by mainland China-based English language media to translate both terms, resulting in the frequent appearance of that English word in media reporting about China, far more frequently than the use of the word in other contexts. Netizen Prize The international nonprofit organisation Reporters Without Borders awards an annual Netizen Prize in recognition of an internet user, blogger, cyber-dissident, or group who has helped to promote freedom of expression on the internet. The organisation uses the term when describing the political repression of cyber-dissidents, such as the legal consequences of blogging in politically repressive environments. Psychological studies With time, more and more people have started interacting and building communities online. The effect this has on human psychology and life is of major interest and concern to researchers. Several studies have been done on netizens under the name netizens' psychology. Internet addiction, mental health, outrage, and effects on children's development are some of the many problems that netizens' psychology tries to address.
See also Digital citizen – citizens (of the physical space) using the Internet as a tool in order to engage in society, politics, and government participation Digital native – a person who has grown up in the information age Netiquette – social conventions for online communities Cyberspace – the new societal territory that is inhabited by Netizens Information Age Internet age Network society Active citizenship – the concept that citizens have certain roles and responsibilities to society and the environment and should actively participate Social Age List of Internet pioneers – those who helped erect the theoretical and technological foundation of the Internet (instead of improving its content, utility or political aspects) Participatory culture – a culture in which the public does not act merely as consumers and voters, but also as contributors, producers and active participants References Further reading External links The Mysterious Netizen Information society Information Age Internet terminology Social influence Cyberspace Hyperreality Virtual communities Citizenship
Netizen
Technology
855
6,294,249
https://en.wikipedia.org/wiki/Type-I%20superconductor
The interior of a bulk superconductor cannot be penetrated by a weak magnetic field, a phenomenon known as the Meissner effect. When the applied magnetic field becomes too large, superconductivity breaks down. Superconductors can be divided into two types according to how this breakdown occurs. In type-I superconductors, superconductivity is abruptly destroyed via a first-order phase transition when the strength of the applied field rises above a critical value Hc. This type of superconductivity is normally exhibited by pure metals, e.g. aluminium, lead, and mercury. The only alloys known up to now which exhibit type-I superconductivity are tantalum silicide (TaSi2) and BeAu. The covalent superconductor SiC:B, silicon carbide heavily doped with boron, is also type-I. Depending on the demagnetization factor, one may obtain an intermediate state. This state, first described by Lev Landau, is a phase separation into macroscopic non-superconducting and superconducting domains. This behavior is different from that of type-II superconductors, which exhibit two critical magnetic fields. The first, lower critical field occurs when magnetic flux vortices penetrate the material but the material remains superconducting outside of these microscopic vortices. When the vortex density becomes too large, the entire material becomes non-superconducting; this corresponds to the second, higher critical field. The ratio of the London penetration depth λ to the superconducting coherence length ξ determines whether a superconductor is type-I or type-II. Type-I superconductors are those with λ/ξ < 1/√2, and type-II superconductors are those with λ/ξ > 1/√2. References Superconductivity Magnetism
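A minimal sketch of this classification criterion (the penetration depths and coherence lengths below are rough, illustrative textbook-scale values, not authoritative data):

import math

def superconductor_type(penetration_depth_nm, coherence_length_nm):
    # Classify by the ratio kappa = lambda / xi against 1/sqrt(2).
    kappa = penetration_depth_nm / coherence_length_nm
    return "type-I" if kappa < 1 / math.sqrt(2) else "type-II"

print(superconductor_type(37, 83))    # lead-like values  -> type-I
print(superconductor_type(170, 38))   # alloy-like values -> type-II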
Type-I superconductor
Physics,Materials_science,Engineering
384
3,136,140
https://en.wikipedia.org/wiki/Mass%20flow%20%28life%20sciences%29
In the life sciences, mass flow, also known as mass transfer and bulk flow, is the movement of fluids down a pressure or temperature gradient. As such, mass flow is a subject of study in both fluid dynamics and biology. Examples of mass flow include blood circulation and transport of water in vascular plant tissues. Mass flow is not to be confused with diffusion, which depends on concentration gradients within a medium rather than pressure gradients of the medium itself. Plant biology In general, bulk flow in plant biology typically refers to the movement of water from the soil up through the plant to the leaf tissue through xylem, but can also be applied to the transport of larger solutes (e.g. sucrose) through the phloem. Xylem According to cohesion-tension theory, water transport in xylem relies upon the cohesion of water molecules to each other and adhesion to the vessel's wall via hydrogen bonding, combined with the high water pressure of the plant's substrate and the low pressure of the extreme tissues (usually leaves). As in blood circulation in animals, (gas) embolisms may form within one or more xylem vessels of a plant. If an air bubble forms, the upward flow of xylem water will stop because the pressure difference in the vessel cannot be transmitted. Once these embolisms are nucleated, the remaining water in the capillaries begins to turn to water vapor. When these bubbles form rapidly by cavitation, the "snapping" sound can be used to measure the rate of cavitation within the plant. Plants do, however, have physiological mechanisms to reestablish the capillary action within their cells. Phloem Solute flow is driven by a difference in hydraulic pressure created from the unloading of solutes in the sink tissues. That is, as solutes are off-loaded into sink cells (by active or passive transport), the density of the phloem liquid decreases locally, creating a pressure gradient. See also Countercurrent exchange Pounds per hour Fluid dynamics Mass flow rate Hemorheology Flying and gliding animals References Fluid dynamics
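For pressure-driven bulk flow through a cylindrical conduit (a common idealization of a xylem vessel or a capillary), the Hagen–Poiseuille relation Q = πr⁴Δp/(8μL) is the standard laminar-flow model. A minimal sketch, with assumed illustrative numbers (the radius, length, pressure drop, and viscosity below are not measurements from any particular plant):

import math

def poiseuille_flow(radius_m, length_m, delta_p_pa, viscosity_pa_s):
    # Volumetric flow rate for laminar flow in a cylindrical tube.
    return math.pi * radius_m**4 * delta_p_pa / (8 * viscosity_pa_s * length_m)

# A xylem-like vessel: 20 micrometre radius, 1 m long, 0.1 MPa pressure
# difference, water viscosity about 1e-3 Pa.s.
q = poiseuille_flow(20e-6, 1.0, 1e5, 1e-3)
print(q)   # about 6.3e-12 cubic metres per second

The fourth-power dependence on radius is why the loss of a conduit to embolism, as described above, so strongly reduces transport capacity.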
Mass flow (life sciences)
Chemistry,Engineering
440
52,432
https://en.wikipedia.org/wiki/Xanthine
Xanthine (from Ancient Greek xanthós, "yellow", for its yellowish-white appearance; archaically xanthic acid; systematic name 3,7-dihydropurine-2,6-dione) is a purine base found in most human body tissues and fluids, as well as in other organisms. Several stimulants are derived from xanthine, including caffeine, theophylline, and theobromine. Xanthine is a product on the pathway of purine degradation. It is created from guanine by guanine deaminase. It is created from hypoxanthine by xanthine oxidoreductase. It is also created from xanthosine by purine nucleoside phosphorylase. Xanthine is subsequently converted to uric acid by the action of the xanthine oxidase enzyme. Use and production Xanthine is used as a drug precursor for human and animal medications, and is produced as a pesticide ingredient. Clinical significance Derivatives of xanthine (known collectively as xanthines) are a group of alkaloids commonly used for their effects as mild stimulants and as bronchodilators, notably in the treatment of asthma or influenza symptoms. In contrast to other, more potent stimulants like sympathomimetic amines, xanthines mainly act to oppose the actions of adenosine, and increase alertness in the central nervous system. Toxicity Methylxanthines (methylated xanthines), which include caffeine, aminophylline, IBMX, paraxanthine, pentoxifylline, theobromine, theophylline, and 7-methylxanthine (heteroxanthine), among others, affect the airways, increase heart rate and force of contraction, and at high concentrations can cause cardiac arrhythmias. In high doses, they can lead to convulsions that are resistant to anticonvulsants. Methylxanthines induce gastric acid and pepsin secretions in the gastrointestinal tract. Methylxanthines are metabolized by cytochrome P450 in the liver. If swallowed, inhaled, or exposed to the eyes in high amounts, xanthines can be harmful, and they may cause an allergic reaction if applied topically. Pharmacology In in vitro pharmacological studies, xanthines act as both competitive nonselective phosphodiesterase inhibitors and nonselective adenosine receptor antagonists. Phosphodiesterase inhibitors raise intracellular cAMP, activate PKA, inhibit TNF-α and leukotriene synthesis, and reduce inflammation and innate immunity. Adenosine receptor antagonists inhibit sleepiness-inducing adenosine. However, different analogues show varying potency at the numerous subtypes, and a wide range of synthetic xanthines (some nonmethylated) have been developed in the search for compounds with greater selectivity for phosphodiesterase enzyme or adenosine receptor subtypes. Pathology People with rare genetic disorders, specifically xanthinuria and Lesch–Nyhan syndrome, lack sufficient xanthine oxidase and cannot convert xanthine to uric acid. Possible formation in absence of life Studies reported in 2008, based on 12C/13C isotopic ratios of organic compounds found in the Murchison meteorite, suggested that xanthine and related chemicals, including the RNA component uracil, have been formed extraterrestrially. In August 2011, a report, based on NASA studies with meteorites found on Earth, was published suggesting xanthine and related organic molecules, including the DNA and RNA components adenine and guanine, were found in outer space. See also DMPX Murchison meteorite Theobromine poisoning Xanthene Xanthone Xanthydrol Kidney stone disease References Enones
Xanthine
Chemistry
835
8,355,429
https://en.wikipedia.org/wiki/Dynamic%20priority%20scheduling
Dynamic priority scheduling is a type of scheduling algorithm in which the priorities are calculated during the execution of the system. The goal of dynamic priority scheduling is to adapt to dynamically changing progress and to form an optimal configuration in a self-sustained manner. It can be very hard to produce well-defined policies to achieve the goal, depending on the difficulty of a given problem. Earliest deadline first scheduling and least slack time scheduling are examples of dynamic priority scheduling algorithms. Optimal schedulable utilization The idea of real-time scheduling is to confine processor utilization under the schedulable utilization of a certain scheduling algorithm, which is scaled from 0 to 1. Higher schedulable utilization means higher utilization of resources, and hence a better algorithm. In preemptible scheduling, dynamic priority scheduling such as earliest deadline first (EDF) provides the optimal schedulable utilization of 1, in contrast to less than 0.69 with fixed priority scheduling such as rate-monotonic (RM). In the periodic real-time task model, a task's processor utilization is defined as its execution time divided by its period. Every set of periodic tasks with total processor utilization less than or equal to the schedulable utilization of an algorithm can be feasibly scheduled by that algorithm. Unlike fixed priority, dynamic priority scheduling can dynamically prioritize task deadlines, achieving optimal schedulable utilization in the preemptible case. See also Earliest deadline first scheduling Least slack time scheduling References Scheduling algorithms
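A minimal sketch of the utilization tests described above (the task set below is hypothetical): the EDF condition sum(C/T) ≤ 1 is exact for preemptible periodic tasks with implicit deadlines, while the rate-monotonic bound n(2^(1/n) − 1), which tends to ln 2 ≈ 0.693, is only a sufficient condition:

def edf_schedulable(tasks):
    # EDF: feasible iff total utilization sum(C/T) does not exceed 1.
    return sum(c / t for c, t in tasks) <= 1.0

def rm_bound(n):
    # Liu & Layland sufficient bound for rate-monotonic scheduling.
    return n * (2 ** (1 / n) - 1)

tasks = [(2, 5), (2, 10), (3, 10)]   # (execution time, period); U = 0.9
u = sum(c / t for c, t in tasks)
print(u, edf_schedulable(tasks), rm_bound(len(tasks)))
# U = 0.9 is schedulable under EDF but exceeds the RM bound of about 0.78,
# so the simple RM utilization test is inconclusive for this set.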
Dynamic priority scheduling
Technology
297
41,285,965
https://en.wikipedia.org/wiki/Unital%20%28geometry%29
In geometry, a unital is a set of n^3 + 1 points arranged into subsets of size n + 1 so that every pair of distinct points of the set are contained in exactly one subset. This is equivalent to saying that a unital is a 2-(n^3 + 1, n + 1, 1) block design. Some unitals may be embedded in a projective plane of order n^2 (the subsets of the design become sets of collinear points in the projective plane). In this case of embedded unitals, every line of the plane intersects the unital in either 1 or n + 1 points. In the Desarguesian planes PG(2, q^2), the classical examples of unitals are given by nondegenerate Hermitian curves. There are also many non-classical examples. The first and so far only known unital with non-prime-power parameters, n = 6, was constructed by Bhaskar Bagchi and Sunanda Bagchi. It is still unknown if this unital can be embedded in a projective plane of order 36, if such a plane exists. Unitals Classical We review some terminology used in projective geometry. A correlation of a projective geometry is a bijection on its subspaces that reverses containment. In particular, a correlation interchanges points and hyperplanes. A correlation of order two is called a polarity. A polarity is called a unitary polarity if its associated sesquilinear form s with companion automorphism α satisfies s(u,v) = s(v,u)^α for all vectors u, v of the underlying vector space. A point is called an absolute point of a polarity if it lies on the image of itself under the polarity. The absolute points of a unitary polarity of the projective geometry PG(d,F), for some d ≥ 2, form a nondegenerate Hermitian variety, and if d = 2 this variety is called a nondegenerate Hermitian curve. In PG(2, q^2) for some prime power q, the set of points of a nondegenerate Hermitian curve form a unital, which is called a classical unital. Let H be a nondegenerate Hermitian curve in PG(2, q^2) for some prime power q. As all nondegenerate Hermitian curves in the same plane are projectively equivalent, H can be described in terms of homogeneous coordinates as follows: H = {(x0, x1, x2) : x0^(q+1) + x1^(q+1) + x2^(q+1) = 0}. Ree unitals Another family of unitals based on Ree groups was constructed by H. Lüneburg. Let Γ = R(q) be the Ree group of type 2G2 of order (q^3 + 1)q^3(q − 1) where q = 3^(2m+1). Let P be the set of all q^3 + 1 Sylow 3-subgroups of Γ. Γ acts doubly transitively on this set by conjugation (it will be convenient to think of these subgroups as points that Γ is acting on). For any S and T in P, the pointwise stabilizer ΓS,T is cyclic of order q − 1, and thus contains a unique involution, μ. Each such involution fixes exactly q + 1 points of P. Construct a block design on the points of P whose blocks are the fixed point sets of these various involutions μ. Since Γ acts doubly transitively on P, this will be a 2-design with parameters 2-(q^3 + 1, q + 1, 1) called a Ree unital. Lüneburg also showed that the Ree unitals cannot be embedded in projective planes of order q^2 (Desarguesian or not) in such a way that the automorphism group Γ is induced by a collineation group of the plane. For q = 3, Grüning proved that a Ree unital cannot be embedded in any projective plane of order 9.
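For q = 3 the classical unital is small enough to verify by brute force. The following Python sketch (illustrative only; GF(9) is modelled as GF(3)[i] with i^2 = −1, one choice among equivalent constructions) enumerates the 91 points of PG(2, 9) and counts those on the Hermitian curve above, expecting q^3 + 1 = 28:

# GF(9) as pairs (a, b) representing a + bi over GF(3), with i^2 = -1.
ELEMS = [(a, b) for a in range(3) for b in range(3)]
ZERO, ONE = (0, 0), (1, 0)

def mul(x, y):
    (a, b), (c, d) = x, y
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

def add(x, y):
    return ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)

def norm(x):
    # x^(q+1) = x^4 for q = 3: the norm map from GF(9) down to GF(3).
    r = ONE
    for _ in range(4):
        r = mul(r, x)
    return r

# Canonical representatives of the points of PG(2, 9): 81 + 9 + 1 = 91.
points = ([(ONE, y, z) for y in ELEMS for z in ELEMS]
          + [(ZERO, ONE, z) for z in ELEMS]
          + [(ZERO, ZERO, ONE)])

unital = [p for p in points
          if add(add(norm(p[0]), norm(p[1])), norm(p[2])) == ZERO]
print(len(points), len(unital))   # expect 91 and 28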
Unitals with n = 3 In the four projective planes of order 9 (the Desarguesian plane PG(2,9), the Hall plane of order 9, the dual Hall plane of order 9 and the Hughes plane of order 9), an exhaustive computer search by Penttila and Royle found 18 unitals (up to equivalence) with n = 3 in these four planes: two in PG(2,9) (both Buekenhout), four in the Hall plane (two Buekenhout, two not), correspondingly another four in the dual Hall plane, and eight in the Hughes plane. However, one of the Buekenhout unitals in the Hall plane is self-dual, and thus gets counted again in the dual Hall plane. Thus, there are 17 distinct embeddable unitals with n = 3. On the other hand, a nonexhaustive computer search found over 900 mutually nonisomorphic designs which are unitals with n = 3. Isomorphic versus equivalent unitals Since unitals are block designs, two unitals are said to be isomorphic if there is a design isomorphism between them, that is, a bijection between the point sets which maps blocks to blocks. This concept does not take into account the property of embeddability, so to do so we say that two unitals, embedded in the same ambient plane, are equivalent if there is a collineation of the plane which maps one unital to the other. Buekenhout's Constructions By examining the classical unital in PG(2, q^2) in the Bruck/Bose model, Buekenhout provided two constructions, which together proved the existence of an embedded unital in any finite 2-dimensional translation plane. Metz subsequently showed that one of Buekenhout's constructions actually yields non-classical unitals in all finite Desarguesian planes of square order at least 9. These Buekenhout-Metz unitals have been extensively studied. The core idea in Buekenhout's construction is that when one looks at PG(2, q^2) in the higher-dimensional Bruck/Bose model, which lies in PG(4, q), the equation of the Hermitian curve satisfied by a classical unital becomes a quadric surface in PG(4, q), either a point-cone over a 3-dimensional ovoid if the line represented by the spread of the Bruck/Bose model meets the unital in one point, or a non-singular quadric otherwise. Because these objects have known intersection patterns with respect to planes of PG(4, q), the resulting point set remains a unital in any translation plane whose generating spread contains all of the same lines as the original spread within the quadric surface. In the ovoidal cone case, this forced intersection consists of a single line, and any spread can be mapped onto a spread containing this line, showing that every translation plane of this form admits an embedded unital. Hermitian varieties Hermitian varieties are in a sense a generalisation of quadrics, and occur naturally in the theory of polarities. Definition Let K be a field with an involutive automorphism θ. Let n be an integer and V be an (n+1)-dimensional vector space over K. A Hermitian variety H in PG(V) is the set of points of PG(V) whose representing vector lines consist of isotropic points of a non-trivial Hermitian sesquilinear form on V. Representation Let e0, e1, ..., en be a basis of V. If a point p in the projective space has homogeneous coordinates (X0, ..., Xn) with respect to this basis, it is on the Hermitian variety if and only if: Σi,j aij Xi Xj^θ = 0, where aij = aji^θ and not all aij = 0. If one constructs the Hermitian matrix A with Aij = aij, the equation can be written in a compact way: X^t A X^θ = 0, where X = (X0, ..., Xn)^t. Tangent spaces and singularity Let p be a point on the Hermitian variety H. A line L through p is by definition tangent when it contains only one point (p itself) of the variety or lies completely on the variety.
One can prove that these lines form a subspace, either a hyperplane or the full space. In the latter case, the point is singular. Notes Citations Sources Combinatorial design Finite geometry Incidence geometry Projective geometry Algebraic varieties
Unital (geometry)
Mathematics
1,664
27,116
https://en.wikipedia.org/wiki/Scandium
Scandium is a chemical element with the symbol Sc and atomic number 21. It is a silvery-white metallic d-block element. Historically, it has been classified as a rare-earth element, together with yttrium and the lanthanides. It was discovered in 1879 by spectral analysis of the minerals euxenite and gadolinite from Scandinavia. Scandium is present in most of the deposits of rare-earth and uranium compounds, but it is extracted from these ores in only a few mines worldwide. Because of the low availability and difficulties in the preparation of metallic scandium, which was first done in 1937, applications for scandium were not developed until the 1970s, when the positive effects of scandium on aluminium alloys were discovered. Its use in such alloys remains its only major application. The global trade of scandium oxide is 15–20 tonnes per year. The properties of scandium compounds are intermediate between those of aluminium and yttrium. A diagonal relationship exists between the behavior of magnesium and scandium, just as there is between beryllium and aluminium. In the chemical compounds of the elements in group 3, the predominant oxidation state is +3. Properties Chemical characteristics Scandium is a soft metal with a silvery appearance. It develops a slightly yellowish or pinkish cast when oxidized by air. It is susceptible to weathering and dissolves slowly in most dilute acids. It does not react with a 1:1 mixture of nitric acid (HNO3) and 48.0% hydrofluoric acid (HF), possibly due to the formation of an impermeable passive layer. Scandium turnings ignite in the air with a brilliant yellow flame to form scandium oxide. Isotopes In nature, scandium is found exclusively as the isotope 45Sc, which has a nuclear spin of 7/2; this is its only stable isotope. The known isotopes of scandium range from 37Sc to 62Sc. The most stable radioisotope is 46Sc, which has a half-life of 83.8 days. Others are 47Sc, 3.35 days; the positron emitter 44Sc, 4 hours; and 48Sc, 43.7 hours. All of the remaining radioactive isotopes have half-lives less than 4 hours, and the majority of them have half-lives less than 2 minutes. The low-mass isotopes are very difficult to create. The initial detection of 37Sc and 38Sc only resulted in the characterization of their mass excess. Scandium also has five nuclear isomers: the most stable of these is 44m2Sc (t1/2 = 58.6 h). The primary decay mode of ground-state scandium isotopes at masses lower than the only stable isotope, 45Sc, is electron capture (or positron emission), but the lightest isotopes (37Sc to 39Sc) undergo proton emission instead, all three of these producing calcium isotopes. The primary decay mode at masses above 45Sc is beta emission, producing titanium isotopes. Occurrence In Earth's crust, scandium is not rare. Estimates vary from 18 to 25 ppm, which is comparable to the abundance of cobalt (20–30 ppm). Scandium is only the 50th most common element on Earth (35th most abundant element in the crust), but it is the 23rd most common element in the Sun and the 26th most abundant element in the stars. However, scandium is distributed sparsely and occurs in trace amounts in many minerals. Rare minerals from Scandinavia and Madagascar such as thortveitite, euxenite, and gadolinite are the only known concentrated sources of this element. Thortveitite can contain up to 45% of scandium in the form of scandium oxide. The stable form of scandium is created in supernovas via the r-process.
Also, scandium is created by cosmic ray spallation of the more abundant iron nuclei. 28Si + 17n → 45Sc (r-process) 56Fe + p → 45Sc + 11C + n (cosmic ray spallation) Production The world production of scandium is in the order of 15–20 tonnes per year, in the form of scandium oxide. The demand is slightly higher, and both the production and demand keep increasing. In 2003, only three mines produced scandium: the uranium and iron mines in Zhovti Vody in Ukraine, the rare-earth mines in Bayan Obo, China, and the apatite mines in the Kola Peninsula, Russia. Since then, many other countries have built scandium-producing facilities, including 5 tonnes/year (7.5 tonnes/year of scandium oxide) by Nickel Asia Corporation and Sumitomo Metal Mining in the Philippines. In the United States, NioCorp Development hopes to raise $1 billion toward opening a niobium mine at its Elk Creek site in southeast Nebraska, which may be able to produce as much as 95 tonnes of scandium oxide annually. In each case, scandium is a byproduct of the extraction of other elements and is sold as scandium oxide. To produce metallic scandium, the oxide is converted to scandium fluoride and then reduced with metallic calcium. Madagascar and the Iveland-Evje region in Norway have the only deposits of minerals with high scandium content, thortveitite ((Sc,Y)2Si2O7), but these are not being exploited. The mineral kolbeckite has a very high scandium content but is not available in any larger deposits. The absence of reliable, secure, stable, long-term production has limited the commercial applications of scandium. Despite this low level of use, scandium offers significant benefits. Particularly promising is the strengthening of aluminium alloys with as little as 0.5% scandium. Scandium-stabilized zirconia enjoys a growing market demand for use as a high-efficiency electrolyte in solid oxide fuel cells. The USGS reports that, from 2015 to 2019 in the US, the price of small quantities of scandium ingot has been $107 to $134 per gram, and that of scandium oxide $4 to $5 per gram. Compounds Scandium chemistry is almost completely dominated by the trivalent ion, Sc3+. The radii of M3+ ions in the table below indicate that the chemical properties of scandium ions have more in common with yttrium ions than with aluminium ions. In part because of this similarity, scandium is often classified as a lanthanide-like element. Ionic radius (pm): Al3+ 53.5, Sc3+ 74.5, Y3+ 90.0, La3+ 103.2, Lu3+ 86.1. Oxides and hydroxides The oxide Sc2O3 and the hydroxide Sc(OH)3 are amphoteric: Sc(OH)3 + 3 OH− → [Sc(OH)6]3− (scandate ion) Sc(OH)3 + 3 H+ + 3 H2O → [Sc(H2O)6]3+ α- and γ-ScOOH are isostructural with their aluminium hydroxide oxide counterparts. Solutions of scandium(III) salts in water are acidic due to hydrolysis. Halides and pseudohalides The halides ScX3, where X = Cl, Br, or I, are very soluble in water, but ScF3 is insoluble. In all four halides, the scandium is 6-coordinated. The halides are Lewis acids; for example, ScF3 dissolves in a solution containing excess fluoride ion to form [ScF6]3−. The coordination number 6 is typical for Sc(III). In the larger Y3+ and La3+ ions, coordination numbers of 8 and 9 are common. Scandium triflate is sometimes used as a Lewis acid catalyst in organic chemistry. Organic derivatives Scandium forms a series of organometallic compounds with cyclopentadienyl ligands (Cp), similar to the behavior of the lanthanides. One example is the chlorine-bridged dimer [ScCp2Cl]2 and related derivatives of pentamethylcyclopentadienyl ligands.
Uncommon oxidation states Compounds that feature scandium in oxidation states other than +3 are rare but well characterized. The blue-black compound CsScCl3 is one of the simplest. This material adopts a sheet-like structure that exhibits extensive bonding between the scandium(II) centers. Scandium hydride is not well understood, although it appears not to be a saline hydride of Sc(II). As is observed for most elements, a diatomic scandium hydride has been observed spectroscopically at high temperatures in the gas phase. Scandium borides and carbides are non-stoichiometric, as is typical for neighboring elements. Lower oxidation states (+2, +1, 0) have also been observed in organoscandium compounds. History Dmitri Mendeleev, who is referred to as the father of the periodic table, predicted in 1869 the existence of an element ekaboron, with an atomic mass between 40 and 48. Lars Fredrik Nilson and his team detected this element in the minerals euxenite and gadolinite in 1879. Nilson prepared 2 grams of scandium oxide of high purity. He named the element scandium, from the Latin Scandia meaning "Scandinavia". Nilson was apparently unaware of Mendeleev's prediction, but Per Teodor Cleve recognized the correspondence and notified Mendeleev. Metallic scandium was produced for the first time in 1937 by electrolysis of a eutectic mixture of potassium, lithium, and scandium chlorides, at 700–800 °C. The first pound of 99% pure scandium metal was produced in 1960. Production of aluminium alloys began in 1971, following a US patent. Aluminium-scandium alloys were also developed in the USSR. Laser crystals of gadolinium-scandium-gallium garnet (GSGG) were used in strategic defense applications developed for the Strategic Defense Initiative (SDI) in the 1980s and 1990s. Applications Aluminium alloys The main application of scandium by weight is in aluminium-scandium alloys for minor aerospace industry components. These alloys contain between 0.1% and 0.5% of scandium. They were used in Russian military aircraft, specifically the Mikoyan-Gurevich MiG-21 and MiG-29. The addition of scandium to aluminium limits the grain growth in the heat zone of welded aluminium components. This has two beneficial effects: the precipitated Al3Sc forms smaller crystals than in other aluminium alloys, and the volume of precipitate-free zones at the grain boundaries of age-hardening aluminium alloys is reduced. The Al3Sc precipitate is a coherent precipitate that strengthens the aluminium matrix by applying elastic strain fields that inhibit dislocation movement (i.e., plastic deformation). Al3Sc has an equilibrium L12 superlattice structure exclusive to this system. A fine dispersion of nanoscale Al3Sc precipitate can be achieved via heat treatment, which can also strengthen the alloys through order hardening. Recent developments include the addition of transition metals such as zirconium (Zr) and rare-earth metals like erbium (Er), which produce shells surrounding the spherical Al3Sc precipitate that reduce coarsening. These shells are dictated by the diffusivity of the alloying element and lower the cost of the alloy, because Sc is substituted in part by Zr while stability is maintained and less Sc is needed to form the precipitate. These developments have made aluminium-scandium alloys somewhat competitive with titanium alloys, along with a wide array of applications. However, titanium alloys, which are similar in lightness and strength, are cheaper and much more widely used. The alloy is as strong as titanium, light as aluminium, and hard as some ceramics.
Some items of sports equipment, which rely on lightweight high-performance materials, have been made with scandium-aluminium alloys, including baseball bats, tent poles, and bicycle frames and components. Lacrosse sticks are also made with scandium. The American firearm manufacturing company Smith & Wesson produces semi-automatic pistols and revolvers with frames of scandium alloy and cylinders of titanium or carbon steel. Since 2013, Apworks GmbH, a spin-off of Airbus, has marketed a high-strength scandium-containing aluminium alloy, processed using metal 3D printing (laser powder bed fusion), under the trademark Scalmalloy, which is claimed to offer very high strength and ductility. Light sources The first scandium-based metal-halide lamps were patented by General Electric and made in North America, although they are now produced in all major industrialized countries. Approximately 20 kg of scandium (as Sc2O3) is used annually in the United States for high-intensity discharge lamps. One type of metal-halide lamp, similar to the mercury-vapor lamp, is made from scandium triiodide and sodium iodide. This lamp is a white-light source with a high color rendering index that sufficiently resembles sunlight to allow good color reproduction with TV cameras. About 80 kg of scandium is used in metal-halide lamps/light bulbs globally per year. Dentists use erbium-chromium-doped yttrium-scandium-gallium garnet (Er,Cr:YSGG) lasers for cavity preparation and in endodontics. Other The radioactive isotope 46Sc is used in oil refineries as a tracing agent. Scandium triflate is a catalytic Lewis acid used in organic chemistry. The 12.4 keV nuclear transition of 45Sc has been studied as a reference for timekeeping applications, with a theoretical precision as much as three orders of magnitude better than the current caesium reference clocks. Scandium has been proposed for use in solid oxide fuel cells (SOFCs) as a dopant in the electrolyte material, typically zirconia (ZrO₂). Scandium oxide (Sc₂O₃) is one of several possible additives that enhance the ionic conductivity of the zirconia, improving the overall thermal stability, performance, and efficiency of the fuel cell. This application would be particularly valuable in clean energy technologies, as SOFCs can utilize a variety of fuels and have high energy conversion efficiencies. Health and safety Elemental scandium is considered non-toxic, though extensive animal testing of scandium compounds has not been done. The median lethal dose (LD50) levels of scandium chloride for rats have been determined as 755 mg/kg for intraperitoneal and 4 g/kg for oral administration. In the light of these results, compounds of scandium should be handled as compounds of moderate toxicity. Scandium appears to be handled by the body in a manner similar to gallium, with similar hazards involving its poorly soluble hydroxide. Notes References Further reading External links Scandium at The Periodic Table of Videos (University of Nottingham) WebElements.com – Scandium Chemical elements Transition metals Chemical elements with hexagonal close-packed structure Chemical elements predicted by Dmitri Mendeleev
Scandium
Physics,Chemistry
3,063
447,020
https://en.wikipedia.org/wiki/Dedekind%20eta%20function
In mathematics, the Dedekind eta function, named after Richard Dedekind, is a modular form of weight 1/2 and is a function defined on the upper half-plane of complex numbers, where the imaginary part is positive. It also occurs in bosonic string theory. Definition For any complex number τ with Im(τ) > 0, let q = e^(2πiτ); then the eta function is defined by η(τ) = e^(πiτ/12) ∏_{n=1}^∞ (1 − q^n) = q^(1/24) ∏_{n=1}^∞ (1 − q^n). Raising the eta equation to the 24th power and multiplying by (2π)^12 gives Δ(τ) = (2π)^12 η(τ)^24, where Δ is the modular discriminant. The presence of 24 can be understood by connection with other occurrences, such as in the 24-dimensional Leech lattice. The eta function is holomorphic on the upper half-plane but cannot be continued analytically beyond it. The eta function satisfies the functional equations η(τ + 1) = e^(πi/12) η(τ) and η(−1/τ) = √(−iτ) η(τ). In the second equation the branch of the square root is chosen such that √(−iτ) = 1 when τ = i. More generally, suppose a, b, c, d are integers with ad − bc = 1, so that τ ↦ (aτ + b)/(cτ + d) is a transformation belonging to the modular group. We may assume that either c > 0, or c = 0 and d = 1. Then η((aτ + b)/(cτ + d)) = ε(a, b, c, d) (cτ + d)^(1/2) η(τ), where, for c > 0, ε(a, b, c, d) = exp(πi((a + d)/(12c) − s(d, c) − 1/4)). Here s(h, k) is the Dedekind sum s(h, k) = ∑_{n=1}^{k−1} (n/k)(hn/k − ⌊hn/k⌋ − 1/2). Because of these functional equations the eta function is a modular form of weight 1/2 and level 1 for a certain character of order 24 of the metaplectic double cover of the modular group, and can be used to define other modular forms. In particular the modular discriminant of Weierstrass, Δ(τ) = g₂(τ)³ − 27g₃(τ)², can be defined as Δ(τ) = (2π)^12 η(τ)^24 and is a modular form of weight 12. Some authors omit the factor of (2π)^12, so that the series expansion has integral coefficients. The Jacobi triple product implies that the eta is (up to a factor) a Jacobi theta function for special values of the arguments: η(τ) = ∑_{n=1}^∞ χ(n) e^(πin²τ/12), where χ is "the" Dirichlet character modulo 12, with χ(n) = 1 for n ≡ ±1 (mod 12) and χ(n) = −1 for n ≡ ±5 (mod 12). The Euler function φ(q) = ∏_{n=1}^∞ (1 − q^n) has a power series by the Euler identity (the pentagonal number theorem): φ(q) = ∑_{n=−∞}^∞ (−1)^n q^(n(3n−1)/2). Substituting the pentagonal number theorem into the definition of the eta function, the eta function can be expressed as η(τ) = ∑_{n=−∞}^∞ (−1)^n e^(πiτ(6n−1)²/12). Because the eta function is easy to compute numerically from either power series, it is often helpful in computation to express other functions in terms of it when possible, and products and quotients of eta functions, called eta quotients, can be used to express a great variety of modular forms (a short numerical sketch follows at the end of this passage). The picture on this page shows the modulus of the Euler function: the additional factor of q^(1/24) between this and eta makes almost no visual difference whatsoever. Thus, this picture can be taken as a picture of eta as a function of q. Combinatorial identities The theory of the algebraic characters of the affine Lie algebras gives rise to a large class of previously unknown identities for the eta function. These identities follow from the Weyl–Kac character formula, and more specifically from the so-called "denominator identities". The characters themselves allow the construction of generalizations of the Jacobi theta function which transform under the modular group; this is what leads to the identities. An example of one such new identity involves the q-analog or "deformation" of the highest weight of a module. Special values From the above connection with the Euler function together with the special values of the latter, it can be easily deduced that, for example, η(i) = Γ(1/4)/(2π^(3/4)). Eta quotients Eta quotients are defined by quotients of the form f(τ) = ∏_{d|N} η(dτ)^(r_d), where N is a positive integer, d ranges over the divisors of N, and each exponent r_d is an integer. Linear combinations of eta quotients at imaginary quadratic arguments may be algebraic, while combinations of eta quotients may even be integral. For example, eta quotients defined via the 24th power of the Weber modular function take algebraic, and in some cases integral, values at imaginary quadratic arguments; such values appear in Ramanujan–Sato series.
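As noted above, the eta function is easy to compute numerically from its power series. The following minimal Python sketch (an illustration, not a reference implementation) evaluates the pentagonal-number-theorem expansion; the truncation bound and the test point are arbitrary choices made here.

import cmath
import math

def dedekind_eta(tau, terms=40):
    # eta(tau) = sum over n in Z of (-1)^n * exp(pi*i*tau*(6n-1)^2 / 12),
    # valid for Im(tau) > 0; the series converges very rapidly, so a
    # modest symmetric truncation of the index n suffices.
    if tau.imag <= 0:
        raise ValueError("tau must lie in the upper half-plane")
    return sum((-1) ** n * cmath.exp(cmath.pi * 1j * tau * (6 * n - 1) ** 2 / 12)
               for n in range(-terms, terms + 1))

# Check against the special value eta(i) = Gamma(1/4) / (2 * pi^(3/4)):
print(dedekind_eta(1j))                          # ~ (0.768225 + 0j)
print(math.gamma(0.25) / (2 * math.pi ** 0.75))  # ~ 0.768225

# Check the functional equation eta(-1/tau) = sqrt(-i*tau) * eta(tau):
tau = 0.3 + 1.2j
print(dedekind_eta(-1 / tau))
print(cmath.sqrt(-1j * tau) * dedekind_eta(tau))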
Eta quotients may also be a useful tool for describing bases of modular forms, which are notoriously difficult to compute and express directly. In 1993 Basil Gordon and Kim Hughes proved that if an eta quotient of the form given above, namely f(τ) = ∏_{d|N} η(dτ)^(r_d), satisfies ∑_{d|N} d·r_d ≡ 0 (mod 24) and ∑_{d|N} (N/d)·r_d ≡ 0 (mod 24), then f is a weight k = (1/2) ∑_{d|N} r_d modular form for the congruence subgroup Γ₀(N) (up to holomorphicity), where the associated character is given by the Jacobi symbol χ(m) = ((−1)^k s/m) with s = ∏_{d|N} d^(r_d). This result was extended in 2019 such that the converse holds for cases when N is coprime to 6, and it remains open whether the original theorem is sharp for all integers N. This also extends to state that any modular eta quotient for a congruence subgroup of level N must also be a modular form for the group Γ₀(N). While these theorems characterize modular eta quotients, the condition of holomorphicity must be checked separately using a theorem that emerged from the work of Gérard Ligozat and Yves Martin: If f is an eta quotient satisfying the above conditions for the integer N, and c and d are coprime integers, then the order of vanishing at the cusp c/d relative to Γ₀(N) is (N/24) ∑_{δ|N} (gcd(d, δ)² r_δ)/(gcd(d, N/d)·d·δ). These theorems provide an effective means of creating holomorphic modular eta quotients; however, this may not be sufficient to construct a basis for a vector space of modular forms and cusp forms. A useful theorem for limiting the number of modular eta quotients to consider states that a holomorphic weight k modular eta quotient on Γ₀(N) must satisfy explicit bounds on its exponents, expressed in terms of the largest power of each prime that divides N. These results lead to several characterizations of spaces of modular forms that can be spanned by modular eta quotients. Using the graded ring structure on the ring of modular forms, we can compute bases of vector spaces of modular forms composed of linear combinations of eta quotients. For example, if we assume N is a semiprime, this machinery can be used to compute an eta-quotient basis of the space of weight k modular forms on Γ₀(N). A collection of over 6300 product identities for the Dedekind Eta Function in a canonical, standardized form is available at the Wayback machine of Michael Somos' website. See also Chowla–Selberg formula Ramanujan–Sato series q-series Weierstrass's elliptic functions Partition function Kronecker limit formula Affine Lie algebra References Further reading Fractals Modular forms Elliptic functions
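To make the Gordon–Hughes congruence conditions concrete, here is a small Python sketch that checks them for a prospective eta quotient and reports its weight. It is a minimal illustration of the statement above; the dictionary-of-exponents representation and the example quotients are choices made here, not part of the original theorem.

def check_eta_quotient(N, r):
    # r maps each divisor d of N to its exponent r_d in prod eta(d*tau)^(r_d).
    # Gordon-Hughes conditions: sum d*r_d = 0 (mod 24) and
    # sum (N/d)*r_d = 0 (mod 24); the weight is (1/2) * sum r_d.
    assert all(N % d == 0 for d in r), "every key of r must divide N"
    cond1 = sum(d * rd for d, rd in r.items()) % 24 == 0
    cond2 = sum((N // d) * rd for d, rd in r.items()) % 24 == 0
    weight = sum(r.values()) / 2
    return cond1 and cond2, weight

# eta(tau)^24 is the discriminant form: level 1, weight 12.
print(check_eta_quotient(1, {1: 24}))         # (True, 12.0)
# eta(tau)^2 * eta(11*tau)^2, the classical weight-2 form of level 11.
print(check_eta_quotient(11, {1: 2, 11: 2}))  # (True, 2.0)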
Dedekind eta function
Mathematics
1,197
53,293,378
https://en.wikipedia.org/wiki/Position-specific%20isotope%20analysis
Position-specific isotope analysis, also called site-specific isotope analysis, is a branch of isotope analysis aimed at determining the isotopic composition of a particular atom position in a molecule. Isotopes are elemental variants with different numbers of neutrons in their nuclei, thereby having different atomic masses. Isotopes are found in varying natural abundances depending on the element; their abundances in specific compounds can vary from random distributions (i.e., stochastic distribution) due to environmental conditions that act on the mass variations differently. These differences in abundances are called "fractionations," which are characterized via stable isotope analysis. Isotope abundances can vary across an entire substrate (i.e., "bulk" isotope variation), across specific compounds within a substrate (i.e., compound-specific isotope variation), or across positions within specific molecules (i.e., position-specific isotope variation). Isotope abundances can be measured in a variety of ways (e.g., isotope ratio mass spectrometry, laser spectrometry, NMR, ESI-MS). Early analyses varied in technique, but were commonly limited to measuring average isotope compositions over whole molecules or samples. While this allows isotope analysis of the bulk substrate, it eliminates the ability to distinguish variation between different sites of the same element within the molecule. The field of position-specific isotope biogeochemistry studies these intramolecular variations, known as "position-specific isotope" and "site-specific isotope" enrichments. It focuses on position-specific isotope fractionations in many contexts, the development of technologies to measure these fractionations, and the application of position-specific isotope enrichments to questions surrounding biogeochemistry, microbiology, enzymology, medicinal chemistry, and earth history. Position-specific isotope enrichments can retain critical information about the synthesis and source of the atoms in a molecule. Indeed, bulk isotope analysis averages site-specific isotope effects across the molecule, and so while all those values have an influence on the bulk value, signatures of specific processes may be diluted or indistinguishable. While the theory of position-specific isotope analysis has existed for decades, technologies now exist that make these measurements far more practical. The potential applications of this approach are widespread, from understanding metabolism in biomolecules to tracing environmental pollutants in air and probing inorganic reaction mechanisms. Clumped isotope analysis, a subset of position-specific isotope analysis, has already proven useful in characterizing sources of methane, paleoenvironment, and paleoaltimetry, among many other applications. More specific case studies of position-specific isotope fractionation are detailed below. Theory Stable isotopes do not decay, and the heavy and light isotope masses affect how they partition within the environment. Any deviation from a random distribution of the light and heavy isotopes within the environment is called fractionation, and consistent fractionations resulting from a particular process or reaction are called "isotope effects." Isotope Effects Isotope effects are recurring patterns in the partitioning of heavy and light isotopes across different chemical species or compounds, or between atomic sites within a molecule.
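Throughout what follows, fractionations are quantified in standard delta notation. The Python sketch below shows the bookkeeping; it is a minimal illustration, in which the VPDB 13C/12C value is the commonly cited reference ratio and the sample ratios are invented for the example.

def delta_per_mil(ratio_sample, ratio_standard):
    # Delta notation: per mille deviation of a sample isotope ratio
    # from a standard (e.g., delta-13C relative to VPDB).
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

def fractionation_factor(ratio_a, ratio_b):
    # alpha(A/B): ratio of isotope ratios between two pools; values
    # different from 1 indicate fractionation between the pools.
    return ratio_a / ratio_b

R_VPDB = 0.011180       # commonly cited 13C/12C ratio of the VPDB standard
substrate_R = 0.011150  # hypothetical substrate 13C/12C
product_R = 0.011120    # hypothetical product 13C/12C

print(delta_per_mil(substrate_R, R_VPDB))            # ~ -2.7 per mille
print(delta_per_mil(product_R, R_VPDB))              # ~ -5.4 per mille
print(fractionation_factor(product_R, substrate_R))  # ~ 0.9973, a 13C-depleted product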
These isotope effects can come about from a nearly unlimited number of processes, but most of them can be narrowed down into two main categories, based on the nature of the chemical reaction creating or destroying the compound of interest: (1) Kinetic isotope effects manifest in irreversible reactions, when one isotopologue is preferred because it passes through a lower-energy transition state. The preferred isotopologue will depend on whether the transition state of the molecule during a chemical reaction is more like the reactant or the product. Normal isotope effects are defined as those which partition the lighter isotope into the products of the reaction. Inverse isotope effects are less common, as they preferentially partition the heavier isotope into the products. (2) Equilibrium isotope effects manifest in reversible reactions, when molecules can exchange freely to reach the lowest possible energy state. These variations can occur on a compound-specific level, but also on a position-specific level within a molecule. For instance, the carboxyl site of amino acids is exchangeable, and therefore its carbon isotope signature can change over time and may not represent the original carbon source of the molecule. Biological fractionation Chemical reactions in biological processes are controlled by enzymes that catalyze the conversion of substrate to product. Since enzymes can alter the transition state structure for reactions, they also change kinetic and equilibrium isotope effects. Placed in the context of a metabolism, the expression of isotope effects on biomolecules is further controlled by branch points. Different pathways of biosynthesis will use different enzymes, yielding a range of position-specific isotope enrichments. This variability allows position-specific isotope measurements to discern multiple biosynthetic pathways from the same metabolic product. Biogeochemists use position-specific isotope enrichments from amino acids, lipids, and sugars in nature to interpret the relative importance of different metabolisms. Mechanism The position-specific isotope effect of an enzymatic reaction is expressed as the ratio of rate constants for a monoisotopic substrate and a substrate substituted with one rare isotope. For example, the enzyme formate dehydrogenase catalyzes the reaction of formate and NAD+ to carbon dioxide and NADH. The hydrogen of formate is directly transferred to NAD+. This step has an isotope effect, because the rate of protium transfer from formate to NAD+ is nearly three times the rate of the same reaction with a deuterium transfer. This is also an example of a primary isotope effect. A primary isotope effect is one in which the rare isotope is substituted where a bond is broken or formed. Secondary isotope effects occur at other positions in the molecule and are controlled by the molecular geometry of the transition state. These are generally considered to be negligible but do arise in certain cases, especially for hydrogen isotopes. Unlike abiotic reactions, enzymatic reactions occur through a series of steps, including substrate-enzyme binding, conversion of substrate to product, and dissociation of the enzyme-product complex. The observed isotope effect of an enzyme will be controlled by the rate-limiting step in this mechanism. If the step that converts substrate to product is rate limiting, the enzyme will express its intrinsic isotope effect, that of the bond-forming or bond-breaking reaction.
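As a minimal numerical companion to the formate dehydrogenase example above, the sketch below expresses a primary kinetic isotope effect as the ratio of rate constants. The rate values are invented here to mirror the roughly threefold H/D difference described in the text.

def kinetic_isotope_effect(k_light, k_heavy):
    # KIE = k(light isotopologue) / k(heavy isotopologue); values > 1
    # are "normal" effects, where the lighter isotope reacts faster.
    return k_light / k_heavy

k_H = 2.9e3  # hypothetical rate constant for protium transfer (1/s)
k_D = 1.0e3  # hypothetical rate constant for deuterium transfer (1/s)
print(kinetic_isotope_effect(k_H, k_D))  # ~ 2.9, a normal primary KIE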
Abiological fractionation Like those of biotic molecules, position-specific isotope enrichments in abiotic molecules can reflect the source of chemical precursors and synthesis pathways. The energy for abiotic reactions can come from many different sources, which will affect fractionation. For instance, metal catalysts can speed up abiotic reactions. Reactions can be slowed down or sped up by different temperature and pressure conditions, which will affect the equilibrium constant or activation energy of reversible and irreversible reactions, respectively. For example, carbon in the interstellar medium and solar nebula partitions into distinct states based on thermodynamic favorability. Measuring site-specific isotope enrichments of carbon from organic molecules extracted from carbonaceous chondrites can elucidate where each carbon atom comes from, and how organic molecules can be synthesized abiotically. More broadly, these isotope enrichments can provide information about physical processes in the region where the molecular precursors were formed, and where the molecule formed in the solar system (i.e., nucleosynthetic heterogeneity, mass-independent fractionation, self-shielding, etc.). Another example of distinct site-specific fractionations in abiotic molecules is Fischer-Tropsch-type synthesis, which is thought to produce abiogenic hydrocarbon chains. Through this reaction mechanism, site-specific carbon enrichments would decrease as carbon chain length increases, and would be distinct from the site-specific enrichments of hydrocarbons of biological origin. Analysis Substrates need to be prepared and analyzed in a specific way to elucidate site-specific isotope enrichments. This requires clean separation of the compound of interest from the original sample, which can involve a variety of different preparatory chemistries. Once isolated, position-specific isotope enrichments can be analyzed with a variety of instruments, which have different advantages and provide varying degrees of precision. Enzymatic Reaction To measure the kinetic isotope effects of enzymatic reactions, biochemists perform in vitro experiments with enzymes and substrates. The goal of these experiments is to measure the difference in the enzymatic reaction rates for the monoisotopic substrate and the substrate with one rare isotope. There are two popularly used techniques in these experiments: internal competition studies and direct comparison experiments. Both measure position-specific isotope effects. Direct Comparison Direct comparison experiments are primarily used for measuring hydrogen/deuterium isotope effects in enzymatic reactions. The monoisotopic substrate and a deuterated form of the substrate are separately exposed to the enzyme of interest over a range of concentrations. The Michaelis-Menten kinetic parameters for both substrates are determined, and the position-specific isotope effect at the site of deuteration is expressed as the ratio of the monoisotopic rate constant over the rare-isotope rate constant. Internal Competition For isotopes of elements like carbon and sulfur, the difference in kinetic parameters is too small, and the measurement precision too low, to measure an isotope effect by directly comparing the rates of the monoisotopic and rare-isotope substrates. Instead, the two isotopologues are measured together, taking advantage of the natural abundance of stable isotopes in molecules.
The enzyme is exposed to both isotopes simultaneously, and its preference for the light isotope is analyzed by collecting the product of the reaction and measuring its isotope composition. For example, if an enzyme removes a carbon from a molecule by turning it into carbon dioxide, that carbon dioxide product can be collected and measured on an Isotope Ratio Mass Spectrometer for its carbon isotope composition. If the carbon dioxide has less 13C than the substrate mixture, the enzyme has preferentially reacted with the substrate that has a 12C at the site that is decarboxylated. In this way, internal competition experiments are also position-specific. If only the CO2 is measured, then only the isotope effect on the site of decarboxylation is recorded. Chemical degradation Before the advent of technologies that analyze whole molecules for their intramolecular isotopic structure, molecules were sequentially degraded, converted to CO2, and measured on an Isotope Ratio Mass Spectrometer, revealing position-specific 13C enrichments. Ninhydrin Reaction In 1961, Abelson and Hoering developed a technique for removing the carboxylic acid of amino acids using the ninhydrin reaction. This reaction converts the carboxylic acid to a molecule of CO2, which is measured via an Isotope Ratio Mass Spectrometer. Ozonolysis Reaction Lipids are of particular interest to stable isotope geochemists because they are preserved in rocks for millions of years. Monson and Hayes used ozonolysis to characterize the position-specific isotope abundances of unsaturated fatty acids, turning different carbon positions into carbon dioxide. Using this technique, they directly measured an isotopic pattern in fatty acids that had been predicted for years. Preparatory Chemistry Derivatization In some cases, additional functional groups need to be added to molecules to facilitate other separation and analysis methods. Derivatization can change the properties of an analyte; for instance, it can make a polar and non-volatile compound non-polar and more volatile, which would be necessary for analysis in certain types of chromatography. It is important to note, however, that derivatization is not ideal for site-specific analyses, as it adds additional atoms that must be accounted for in the analysis. Chromatography Chromatography facilitates separation of distinct molecules within a mixture based on their respective chemical properties, and on how those properties interact with the substrate coating the chromatographic column. This separation can happen "on-line," during the measurement itself, or prior to measurements to isolate a pure compound. Gas and liquid chromatography have distinct advantages, depending on the molecules of interest. For example, aqueously soluble molecules are more easily separated with liquid chromatography, while volatile, nonpolar molecules like propane or ethane are separated with gas chromatography. Instrumental Analysis A variety of different instruments can be used to perform position-specific isotope analysis, and each has distinct advantages and drawbacks. Many of them require comparison of the sample of interest to a standard of known isotopic composition; fractionation within the instrument and variation of instrumental conditions over time can affect the accuracy of individual measurements if not standardized.
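Following the internal-competition bookkeeping described above (comparing the delta value of the released CO2 with that of the bulk substrate), the Python sketch below computes the enrichment factor for the decarboxylated site. The delta values are invented for illustration, and expressing the effect as a simple ratio-of-ratios enrichment factor is one common convention, assumed here.

def epsilon_per_mil(delta_product, delta_substrate):
    # Enrichment factor (per mille) between product and substrate,
    # computed from delta values: negative values mean the product is
    # depleted in the heavy isotope, i.e., the enzyme preferred the
    # light isotopologue at the reacting site.
    return ((delta_product + 1000.0) / (delta_substrate + 1000.0) - 1.0) * 1000.0

# Hypothetical internal-competition experiment: CO2 released by an
# enzymatic decarboxylation, measured against the bulk substrate.
delta_substrate = -25.0  # per mille, bulk substrate delta-13C
delta_co2 = -45.0        # per mille, product CO2 delta-13C

print(epsilon_per_mil(delta_co2, delta_substrate))  # ~ -20.5: the site reacted faster with 12C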
GC-IRMS and LC-MS Initial measurements of position-specific isotope enrichments were made using isotope ratio mass spectrometry, in which sites on a molecule were first degraded to CO2; the CO2 was captured and purified and then measured for its isotope composition on an Isotope Ratio Mass Spectrometer (IRMS). Py-GC-MS was also used in these experiments to degrade molecules even further and characterize their intramolecular isotopic distributions. Both GC-MS and LC-MS are capable of characterizing position-specific isotope enrichments in isotopically labelled molecules. In these molecules, 13C is so abundant that it can be seen on a mass spectrometer with low sensitivity. The resolution of these instruments can distinguish two molecules with a 1 Dalton difference in their molecular masses; however, this difference could arise from the addition of any of several rare isotopes (17O, 13C, 2H, etc.). For this reason, mass spectrometers using quadrupoles or time-of-flight detection techniques cannot be used for measuring position-specific enrichments at natural abundances. Spectroscopy Laser spectroscopy can be used to measure isotope enrichments of gases in the environment. Laser spectroscopy takes advantage of the different vibrational frequencies of isotopologues, which cause them to absorb different wavelengths of light. Transmission of light through the gaseous sample at a controlled temperature can be quantitatively converted into a statement about isotopic composition. For N2O, these measurements can determine the position-specific enrichments of 15N (i.e., the relative abundances of 14N-15N-16O and 15N-14N-16O). These measurements are fast and can reach relatively good precision (1-10 per mille). The method is used to characterize environmental gas fluxes and the effects acting on those fluxes, but it is limited to the measurement and characterization of gases. Nuclear magnetic resonance (NMR) Nuclear magnetic resonance observes small differences in molecular responses to oscillating magnetic fields. It is able to characterize atoms with active nuclides that have a non-zero nuclear spin (e.g., 13C, 1H, 17O, 35Cl, 15N, 37Cl), which makes it particularly useful for identifying certain isotopes. In typical proton or 13C NMR, the chemical shifts of protium (1H) and carbon-13 atoms within a molecule are measured, respectively, as they are excited by a magnetic field and then relax with a diagnostic resonance frequency. With site-specific natural isotope fractionation (SNIF) NMR, it is instead the resonances of the rare deuterium or 13C atoms that are measured. NMR does not have the sensitivity to detect isotopologues with multiple rare isotopes. The only peaks that appear in a SNIF-NMR spectrum are those of the isotopologues with a single rare isotope. Since the instrument is only measuring the resonances of the rare isotopes, each isotopologue will have one peak. For example, a molecule with six chemically unique carbon atoms will have six peaks in a 13C SNIF-NMR spectrum. The site of 13C substitution can be determined by the chemical shift of each of the peaks. As a result, NMR is able to identify site-specific isotope enrichments within molecules. Orbitrap Mass Spectrometry The Orbitrap is a high-resolution Fourier transform mass spectrometer that has recently been adapted to allow for site-specific analyses. Molecules introduced into the Orbitrap are fragmented, accelerated, and analyzed.
Because the Orbitrap characterizes molecular masses by measuring oscillations at radio frequencies, it is able to reach very high levels of precision, depending on the measurement method (i.e., down to 0.1 per mille for long integration times). It is significantly faster than site-specific isotope measurements performed using NMR, and it can measure molecules with different rare isotopes but the same nominal mass at natural abundances (unlike GC-MS and LC-MS). It is also widely generalizable to molecules that can be introduced via gas or liquid solvent. The resolution of the Orbitrap is such that nominal isobars (e.g., 2H versus 15N versus 13C enrichments) can be distinguished from one another, so molecules do not need to be converted into a homogeneous substrate to facilitate isotope analysis. Like other isotope measurements, measurements of site-specific enrichments on the Orbitrap should be compared to a standard of known composition. Case studies To illustrate the utility of position-specific isotope enrichments, several case studies are described below in which scientists used position-specific isotope analyses to answer important questions about biochemistry, pollution, and climate. Phosphoenolpyruvate carboxylase Phosphoenolpyruvate carboxylase (PEPC) is an enzyme that combines bicarbonate and phosphoenolpyruvate (PEP) to form the four-carbon acid oxaloacetate. It is an important enzyme in C4 photosynthesis and anaplerotic pathways. It is also responsible for the position-specific enrichment of oxaloacetate, due to the equilibrium isotope effect of converting the linear molecule CO2 into the trigonal planar molecule HCO3-, which partitions 13C into bicarbonate. Inside the PEPC enzyme, H12CO3- reacts 1.0022 times faster than H13CO3-, so that PEPC has a 0.22% kinetic isotope effect. This is not enough to compensate for the 13C enrichment in bicarbonate. Thus, oxaloacetate is left with a 13C-enriched carbon at the C4 position. Meanwhile, the C1 site experiences a small inverse secondary isotope effect due to its bonding environment in the transition state, leaving the C1 site of oxaloacetate enriched in 13C as well. In this way, PEPC simultaneously imposes distinct isotope effects on the C4 and C1 sites of oxaloacetate, an example of multiple position-specific isotope effects. Amino acids The first paper on site-specific enrichment used the ninhydrin reaction to cleave the carboxyl site off alpha-amino acids in photosynthetic organisms. The authors demonstrated an enriched carboxyl site relative to the bulk δ13C of the molecules, which they attributed to uptake of heavier CO2 through the Calvin cycle. A recent study applied similar theory to understand enrichments in methionine, which the authors suggested would be powerful in origin and synthesis studies. Carbohydrates In 2012, a team of scientists used NMR spectroscopy to measure all of the position-specific carbon isotope abundances of glucose and other sugars, showing that the isotope abundances are heterogeneous. Different portions of the sugar molecules are used for biosynthesis based on the metabolic pathway an organism uses. Therefore, any interpretation of position-specific isotopes of molecules downstream of glucose has to consider this intramolecular heterogeneity. Glucose is the monomer of cellulose, the polymer that makes plants and trees rigid. After the advent of position-specific analyses of glucose, biogeochemists from Sweden examined the concentric tree rings of a Pinus nigra that recorded yearly growth between 1961 and 1995.
They digested the cellulose down to its glucose units and used NMR spectroscopy to analyze its intramolecular isotopic patterns. They found correlations with position-specific isotope enrichments that were not apparent from whole-molecule carbon isotope analysis of glucose. By measuring position-specific enrichments in the six-carbon glucose molecule, they gathered six times more information from the same sample. Fatty acids The biosynthesis of fatty acids begins with acetyl-CoA precursors that are brought together to make long straight-chain lipids. Acetyl-CoA is produced in aerobic organisms by pyruvate dehydrogenase, an enzyme that has been shown to express a large, 2.3% isotope effect on the C2 site of pyruvate and a small fractionation on the C3 site. These become the odd- and even-numbered carbon positions of fatty acids, respectively, and in theory this should result in a pattern of 13C depletions at odd positions and relative enrichments at even positions. In 1982, Monson and Hayes developed technology for measuring the position-specific carbon isotope abundances of fatty acids. Their experiments on Escherichia coli revealed the predicted relative 13C depletions at odd-numbered carbon sites. However, this pattern was not found in Saccharomyces cerevisiae that were fed glucose. Instead, its fatty acids were 13C-enriched at the odd positions. This has been interpreted as either a product of isotope effects during fatty acid degradation or the intramolecular isotopic heterogeneity of glucose that is ultimately reflected in the position-specific patterns of fatty acids. Nitrous Oxide Site-specific isotope enrichment of N2O is measured in the environment to help disentangle microbial sources and sinks. Different isotopomers of N2O absorb light at different wavelengths. Laser spectroscopy exploits these differences as it scans across wavelengths to measure the abundance of 14N-15N-16O vs. 15N-14N-16O, a distinction that is very difficult to make with most other instruments. These measurements have achieved very high precision, down to 0.2 per mille. Environmental pollutants Position-specific isotopes can be used to trace environmental pollutants through the local and global environment. This is especially useful because heavy isotopes are often incorporated when chemicals are synthesized, and the resulting compounds then enter the natural environment and are altered through biodegradation. Tracing position-specific isotopes in the environment can thus help track the movement of these pollutants and chemical products. Case study conclusions These case studies represent some potential applications for position-specific isotope analysis, but certainly not all. The opportunities for samples to measure and processes to characterize are virtually unlimited, and new methodological developments will continue to make these measurements more practical. References Analytical chemistry Isotopes
Position-specific isotope analysis
Physics,Chemistry
4,667
1,509,165
https://en.wikipedia.org/wiki/IT%20University
IT University is a joint department between Chalmers University of Technology and the University of Gothenburg in Sweden. This joint venture offers considerable scope for cooperation between researchers with different areas of expertise and academic specialties within the field of information technology. The programmes offered are based on current research and are continually being developed. IT University was established in the autumn of 2001. Today it offers programmes at both Bachelor's and Master's level, mostly with a focus on applied information technology. Programs in English C:Art:Media Master Program Intelligent Systems Design Master's in Software Engineering and Management Software Engineering and Management International MSc in Applied Data Science External links IT University website Educational institutions established in 2001 Chalmers University of Technology University of Gothenburg Information technology organizations Information technology education Joint ventures 2001 establishments in Sweden
IT University
Technology
159
6,261,834
https://en.wikipedia.org/wiki/Half-value%20layer
A material's half-value layer (HVL), or half-value thickness, is the thickness of the material at which the intensity of radiation entering it is reduced by one half. HVL can also be expressed in terms of air kerma rate (AKR), rather than intensity: the half-value layer is the thickness of specified material that "attenuates the beam of radiation to an extent such that the AKR is reduced to one-half of its original value. In this definition the contribution of all scattered radiation, other than any [...] present initially in the beam concerned, is deemed to be excluded." Rather than AKR, measurements of air kerma, exposure, or exposure rate can be used to determine the half-value layer, as long as this is stated in the description. Half-value layer refers to the first half-value layer; subsequent (i.e., second) half-value layers refer to the amount of specified material that will reduce the air kerma rate by one-half after material equal to the sum of all previous half-value layers has been inserted into the beam. The quarter-value layer is the amount of specified material that reduces the air kerma rate (or exposure rate, exposure, air kerma, etc.) to one fourth of the value obtained without any test filters. The quarter-value layer is equal to the sum of the first and second half-value layers. The homogeneity factor (HF) describes the polychromatic nature of the beam and is given by the ratio of the first half-value layer to the second: HF = HVL1/HVL2. The HF for a narrow beam will always be less than or equal to one (it is equal to one only in the case of a monoenergetic beam). In the case of a narrow polychromatic beam, the HF is less than one because of beam hardening. HVL is related to the mean free path; the mean free path is the average distance a unit of radiation travels in the material before being absorbed, whereas HVL is the amount of material needed to absorb 50% of all radiation (i.e., to reduce the intensity of the incident radiation by half). For simple exponential attenuation the two are proportional: the HVL is ln(2), approximately 0.693, times the mean free path. In the case of sound waves, HVL is the distance that it takes for the intensity of a sound wave to be reduced to one-half of its original value. The HVL of sound waves is determined by both the medium through which the wave travels and the frequency of the beam. A "thin" half-value layer (a quick drop of -3 dB) results from a high-frequency sound wave and a medium with a high rate of attenuation, such as bone. HVL is measured in units of length. A similar concept is the tenth-value layer, or TVL. The TVL is the average amount of material needed to absorb 90% of all radiation, i.e., to reduce it to a tenth of the original intensity. 1 TVL is greater than or equal to log2(10) HVLs (approximately 3.32 HVLs), with equality achieved for a monoenergetic beam. Here are approximate half-value layers for a variety of materials against the gamma rays of an iridium-192 source (decay energies in MeV: beta 0.67 at 49%; gamma 0.308 at 28%; gamma 0.468 at 46%): Concrete: 44.5 mm Steel: 12.7 mm Lead: 4.8 mm Tungsten: 3.3 mm Uranium: 2.8 mm See also Attenuation coefficient Radiation protection References Radiation health effects Radiometry
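As a numerical illustration of the definitions above, the Python sketch below converts between HVL, linear attenuation coefficient, and transmitted fraction for a narrow monoenergetic beam, for which I = I0 · 2^(-x/HVL). The lead value used is the 4.8 mm Ir-192 figure from the list above; the function names are choices made for this example.

import math

def transmitted_fraction(thickness_mm, hvl_mm):
    # Fraction of incident intensity remaining after a given thickness,
    # assuming narrow-beam exponential attenuation (no beam hardening).
    return 0.5 ** (thickness_mm / hvl_mm)

def linear_attenuation_coefficient(hvl_mm):
    # mu (per mm) from the half-value layer, using HVL = ln(2) / mu.
    return math.log(2) / hvl_mm

hvl_lead = 4.8  # mm, for Ir-192 gamma rays (from the list above)
print(transmitted_fraction(4.8, hvl_lead))       # 0.5  (one HVL)
print(transmitted_fraction(9.6, hvl_lead))       # 0.25 (two HVLs = one quarter-value layer)
print(linear_attenuation_coefficient(hvl_lead))  # ~ 0.144 per mm
print(math.log2(10) * hvl_lead)                  # ~ 15.9 mm: the tenth-value layer (monoenergetic)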
Half-value layer
Chemistry,Materials_science,Engineering
754
71,652,174
https://en.wikipedia.org/wiki/Trans-European%20Drug%20Information
The Trans-European Drug Information (TEDI) project is a European database compiling information from different drug checking services located on the European continent. The non-governmental organizations feeding into the database are referred to as the TEDI network. History The first drug checking service in Europe opened in 1986 in Amsterdam, allowing drug users to analyze the chemical composition of illicit substances that they consume. In the following years, a number of nonprofit organizations present in various other drug scenes in several countries (including Austria, France, Germany, the Netherlands, Portugal, Spain, and Switzerland) set up drug checking services. In 2011, a database was created to centralize information from these services and allow for the sharing of alerts (for example, on new adulterants in illicit substances or the circulation of novel psychoactive substances) and the monitoring of drug markets across borders. Between 2008 and 2013, member organizations of the TEDI network analyzed more than 45,000 samples of recreational drugs, showing similarities and discrepancies between areas of the European continent in terms of purity, formulation, or prices. The TEDI project The project and the network are hosted by the Polish nonprofit TEDI Nightlife Empowerment & Well-being Network (also known as NEW net or SaferNightlife). Network As of 2022, the TEDI network comprised 20 organizations across 13 countries (Austria, Belgium, Finland, France, Germany, Italy, Luxembourg, the Netherlands, Portugal, Slovenia, Spain, Switzerland, and the United Kingdom). A team of professionals from various fields (substance use disorder prevention workers, pharmacists, chemists, etc.) across network member organizations constitutes the TEDI project's team. Database The aims of the Trans-European Drug Information project are to collect, monitor, and analyze the evolution of European recreational drug market trends, and to regularly report the findings. Since 2011, the database has facilitated the centralization and comparison of information collected at the local level. The TEDI database also feeds into the early warning system of the European Union Drugs Agency (EUDA, formerly EMCDDA). EUDA and the TEDI network also collaborate on the organization of conferences and trainings. In 2019, the mobile application TripApp was launched by a consortium of organizations, sharing alerts from the TEDI database in real time, in addition to connecting app users with local harm reduction providers. The app received an award from the Council of Europe in 2021. Guidelines As part of the project, guidelines and methodological recommendations have been published, such as: factsheets on drug checking services, guidance for organizations willing to create a drug checking service, training on personal support and counselling in nightlife settings, etc. See also Drug checking Harm reduction Substance use disorder Drug policy European Monitoring Centre for Drugs and Drug Addiction Early warning system Reagent testing References External links (Official website) Drug checking information page of the EMCDDA Addiction and substance abuse organizations Addiction medicine Drug culture Drug policy organizations Drug safety Early warning systems Harm reduction Organizations established in 2011 Public health organizations Substance intoxication Substance abuse
Trans-European Drug Information
Chemistry,Technology
623
54,539,305
https://en.wikipedia.org/wiki/GDC%20Observatory
The GDC Observatory is an astronomical observatory dedicated to the exploration and science of the night sky. Located in Gingin, Western Australia, the observatory is part of the Gingin Gravity Precinct and is open to the public on a regular basis. Southern Cross Cosmos Centre The observatory is located in the Southern Cross Cosmos Centre, a purpose-built facility opened in 2001, with a slide-off roof housing five telescopes available for public use on astronomical viewing nights. The largest telescope is an Obsession telescope named "Brodie-Hall", donated to the observatory by Laurence and Jean Brodie-Hall. The facility is located next door to the Zadko telescope run by the University of Western Australia, which is actively involved in scientific research. See also List of astronomical observatories List of astronomical societies Lists of telescopes References External links Astronomical observatories in Western Australia Public observatories Education in Western Australia Wheatbelt (Western Australia) Astronomy museums
GDC Observatory
Astronomy
193
26,847,511
https://en.wikipedia.org/wiki/Stephen%20Ibaraki
Stephen K. Ibaraki has been a teacher, an industry analyst, a writer and consultant in the IT industry, and the former president of the Canadian Information Processing Society. Currently, Ibaraki is a venture capitalist, entrepreneur, futurist, and speaker. He is also the founder of the UN ITU AI for Good Global Summit, a vice-chair of the ITU-WHO Focus Group on Artificial Intelligence for Health, founding chair of the Financial Services Roundtable Tech Advisory Council, and founding chair of the International Federation for Information Processing Global Industry Council. Ibaraki serves as an advisor for the Business Development Committee of the IEEE Computer Society. He is the co-founder of the Technology Advisory Board at Yintech Investment Holdings Limited. In 2018, he was invited to be a founding member of IPSoft's AI Pioneers Steering Committee, and he was also invited to join the Science and Technology Council for the ARCH Mission Foundation. Aside from taking on these new roles, he was invited to be a keynote speaker at the Yonder25 Conference as well as the Reshaping National Security forum, which was hosted by the Shanghai Institute for International Studies and the UNICRI Centre for AI and Robotics. Ibaraki has received a Microsoft MVP award for 13 years in a row, beginning in 2006. In December 2018, Ibaraki was invited by YPO to share his insights about business and investments with respect to the Fourth Industrial Revolution at YPO Edge 2019. References External links REDDS Capital Official Website Stephen Ibaraki CIPS Fellow Profile Stephen Ibaraki ITU Profile 1955 births Living people People in information technology Writers from British Columbia
Stephen Ibaraki
Technology
331
14,760,562
https://en.wikipedia.org/wiki/Mucin%205B
Mucin-5B (MUC-5B) is a protein that in humans is encoded by the MUC5B gene and by the Muc5b gene in the mouse. It is one of the five gel-forming mucins. MUC-5B can be found in whole saliva, normal lung mucus, and cervical mucus. In some diseases, such as COPD, chronic rhinosinusitis (CRS), and H. pylori-associated gastric disease, the gene has been found to be upregulated, and this may be related to the pathogenesis of these conditions. Synthesis All mucins are synthesized in secretory cells known as goblet cells or mucous cells, depending on the tissue location. Their creation, while still not completely understood, begins in the endoplasmic reticulum. From there, the Golgi apparatus builds the O-linked glycans found in mucins. Finally, they are packaged into secretory granules. References 05B
Mucin 5B
Chemistry
216
60,242,687
https://en.wikipedia.org/wiki/NGC%207838
NGC 7838 is a spiral or lenticular galaxy located about 500 million light-years away in the constellation of Pisces. The galaxy was discovered by astronomer Albert Marth on November 29, 1864. NGC 7838 appears to interact with NGC 7837, forming Arp 246. See also List of NGC objects (7001–7840) References External links 7838 246 525 Pisces (constellation) Astronomical objects discovered in 1864 Spiral galaxies Lenticular galaxies Discoveries by Albert Marth
NGC 7838
Astronomy
99
53,282,876
https://en.wikipedia.org/wiki/NGC%205837
NGC 5837 is a barred spiral galaxy in the constellation Boötes. It was discovered on 19 June 1887 by Lewis A. Swift. Supernovae A Type Ia supernova, SN 2014ac, was discovered on 9 March 2014, and a Type IIn supernova, SN 2015z, was discovered on 16 June 2015. References External links Barred spiral galaxies Boötes 5837 09686 053817
NGC 5837
Astronomy
86
20,746,022
https://en.wikipedia.org/wiki/Human%20parasite
Human parasites include various protozoa and worms. Human parasites are divided into endoparasites, which cause infection inside the body, and ectoparasites, which cause infection superficially within the skin. The cysts and eggs of endoparasites may be found in feces, which aids in the detection of the parasite in the human host while also providing the means for the parasitic species to exit the current host and enter other hosts. Although there are a number of ways in which humans can contract parasitic infections, observing basic hygiene and cleanliness practices can reduce the probability of infection. The most accurate diagnosis is by qPCR DNA antigen assay, which is not generally available through primary care physicians in the USA: most labs offer research-only service. History Archaeological evidence It was assumed that early human ancestors generally had parasites, but until recently there was no evidence to support this claim. Generally, the discovery of parasites in ancient humans relies on the study of feces and other fossilized material. The earliest known evidence of a parasite in a human consists of lung fluke eggs found in fossilized feces in northern Chile, estimated to date from around 5900 BC. There are also claims of hookworm eggs from around 5000 BC in Brazil and large roundworm eggs from around 2330 BC in Peru. Tapeworm eggs have also been found in Egyptian mummies dating from around 2000 BC, 1250 BC, and 1000 BC, along with a well-preserved and calcified female worm inside a mummy. Written evidence The first written records of parasites date from 3000 to 400 BC in Egyptian papyrus records. They identify parasites such as roundworms, Guinea worms, threadworms, and some tapeworms of unknown varieties. In ancient Greece, Hippocrates and Aristotle documented several parasites; many of these observations are collected in the Corpus Hippocraticus. These texts document the presence of worms and other parasites inside fish, domesticated animals, and humans. The bladder worm, the larval stage of the tapeworm Taenia solium, is well documented in pigs. These tapeworms were referred to as "hailstones" in a play by Aristophanes and were described by Aristotle in the section about pig diseases in his book History of Animals. The cysts of the Echinococcus granulosus tapeworm were also well known in ancient cultures, mainly because of their presence in slaughtered and sacrificed animals. The major parasitic disease documented in early records is dracunculiasis. This disease is caused by the Guinea worm and is characterized by the female worm emerging from the leg. This symptom is so specific to the disease that it is mentioned in many texts and plays that predate 1000 AD. Greece and Rome In Greece, Hippocrates and Aristotle created considerable medical documentation about parasites in the Corpus Hippocraticus. In this work, they documented the presence of parasitic worms in many animals, ranging from fish to domesticated animals and humans. Among the most extensively documented was the bladder worm (Taenia solium). This condition was called "measly pork" when present in pigs and was characterized by the presence of the larval stages of the bladder worm in muscle tissue. This disease was also mentioned by the playwright Aristophanes when he referred to "hailstones" in one of his plays. This naming convention is also reflected by Aristotle when he refers to "bladders that are like hailstones." Another worm that was commonly written about in ancient Greek texts was the tapeworm Echinococcus granulosus.
This worm was distinguished by the presence of "massive cysts" in the liver of animals. This condition was documented so well mainly because of its presence in slaughtered and sacrificed animals. It was documented by several different cultures of the time other than the Greeks, including the Arabs, Romans, and Babylonians. Not many parasitic diseases were identified in ancient Greek and Roman texts, mainly because the symptoms of parasitic diseases are shared with many other illnesses, such as the flu, the common cold, and dysentery. However, several diseases, such as dracunculiasis (Guinea worm disease), hookworm, elephantiasis, schistosomiasis, malaria, and amebiasis, cause unique and specific symptoms and are well documented because of this. The most documented by far was Guinea worm disease, mainly because the grown female worm emerges from the skin, which causes considerable irritation and cannot really be ignored. This particular disease is widely accepted to also be the "fiery serpents" written about in the Old Testament of the Bible. This disease was mentioned by Hippocrates in Greece, along with Pliny the Elder, Galen, Aetius of Amida, and Paulus Aegineta of Alexandria in Rome. Strangely, this disease appears never to have been present in Greece, even though it was documented there. Northern Africa, the Middle East, and Mesopotamia The medieval Persian doctor Avicenna recorded the presence of several parasites in animals and in his patients, including the Guinea worm, threadworms, tapeworms, and the Ascaris worm. This followed a tradition of Arab medical writing spanning over 1000 years in the area near the Red Sea. However, the Arabs never made the connection between parasites and the diseases they caused. As with Greek and Roman texts, the Guinea worm is very well documented in Middle Eastern medical texts. Several Assyrian documents in the library of King Ashurbanipal refer to an affliction that has been interpreted as Guinea worm disease. In Egypt, the Ebers Papyrus contains one of the few references to hookworm disease in ancient texts. This disease does not have very specific symptoms and is mentioned only vaguely; however vague the reference, it is one of the few that connect the disease to the hookworm parasite. Another documented disease is elephantiasis. Symptoms of this disease are highly visible, since it causes extreme swelling in the limbs, breasts, and genitals. A number of surviving statues indicate that Pharaoh Mentuhotep II is likely to have suffered from elephantiasis. This disease was well known to Arab physicians and to Avicenna, who noted specific differences between elephantiasis and leprosy. China The Chinese mostly documented diseases rather than the parasites associated with them. Chinese texts contain one of the few references to hookworm disease found in ancient records, but no connection to the hookworm parasite is made. The Emperor Huang Ti recorded the earliest known mention (2700 BC) of malaria in his text Nei Ching, listing chills, headaches, and fevers as the main symptoms and distinguishing between the different kinds of fevers. India In India, the Charaka Samhita and Sushruta Samhita document malaria, listing the main symptoms as fever and an enlarged spleen. The Bhrigu Samhita, from 1000 BCE, makes the earliest reference to amebiasis; the symptoms are given as bloody and mucosal diarrhea.
Most common parasites As of 2013, the parasite responsible for the most deaths globally was Plasmodium, the cause of malaria. Commonly documented parasites Endoparasites Protozoa Plasmodium spp.: causes malaria Entamoeba: causes amoebiasis Giardia: causes giardiasis Trypanosoma brucei: causes African trypanosomiasis Toxoplasma gondii: causes toxoplasmosis Acanthamoeba: causes acanthamoeba keratitis Leishmania: causes leishmaniasis Babesia: causes babesiosis Balamuthia mandrillaris: causes granulomatous amoebic encephalitis Cryptosporidium: causes cryptosporidiosis Cyclospora: causes cyclosporiasis Naegleria fowleri: causes primary amoebic meningoencephalitis Parasitic worms (helminths) Ascaris lumbricoides: causes ascariasis Pinworm: causes enterobiasis Strongyloides stercoralis: causes strongyloidiasis Toxocara: causes toxocariasis Guinea worm: causes dracunculiasis Hookworm: causes helminthiasis Tapeworm (Eucestoda): causes cysticercosis, echinococcosis, hymenolepiasis, diphyllobothriasis and sparganosis Whipworm (Trichuris trichiura): causes trichuriasis Parasitic flukes Schistosoma: causes schistosomiasis Gnathostoma: causes gnathostomiasis Paragonimus: causes paragonimiasis Fasciola hepatica: causes fascioliasis Trichobilharzia regenti: causes cercarial dermatitis Other organisms New World screwworm (Cochliomyia): causes myiasis Ectoparasites Head louse (Pediculus humanus capitis): causes pediculosis Body louse (Pediculus humanus humanus): causes pediculosis Crab louse (Pthirus pubis): causes phthiriasis Human botfly maggot (Dermatobia hominis): causes myiasis Flea (Siphonaptera): causes papular urticaria Chigoe flea (Tunga penetrans): causes tungiasis Mosquito (Culicidae): causes papular urticaria Bed bug (Cimex lectularius): causes papular urticaria Tick (Ixodoidea): causes papular urticaria Chiggers (Trombiculidae): causes trombiculosis Scabies mite (Sarcoptes scabiei): causes scabies Red mite (Dermanyssus gallinae): causes gamasoidosis Tropical fowl mite (Ornithonyssus bursa): causes gamasoidosis Northern fowl mite (Ornithonyssus sylviarum): causes gamasoidosis Tropical rat mite (Ornithonyssus bacoti): causes rodent mite dermatitis Spiny rat mite (Laelaps echidnina): causes rodent mite dermatitis House mouse mite (Liponyssoides sanguineus): causes rodent mite dermatitis Demodex mite: associated with acne vulgaris and rosacea References
Human parasite
Biology
2,138
17,475,077
https://en.wikipedia.org/wiki/Edgar%20Bright%20Wilson
Edgar Bright Wilson Jr. (December 18, 1908 – July 12, 1992) was an American chemist. Wilson was a prominent and accomplished chemist and teacher, recipient of the National Medal of Science in 1975, Guggenheim Fellowships in 1949 and 1970, the Elliott Cresson Medal in 1982, and a number of honorary doctorates. He was a member of the American Academy of Arts and Sciences, the American Philosophical Society, and the United States National Academy of Sciences. He was also the Theodore William Richards Professor of Chemistry, Emeritus, at Harvard University. One of his sons, Kenneth G. Wilson, was awarded the Nobel Prize in physics in 1982. E. B. Wilson was a student and protégé of Nobel laureate Linus Pauling and was a coauthor with Pauling of Introduction to Quantum Mechanics, a graduate-level textbook in quantum mechanics. Wilson was also the thesis advisor of Nobel laureate Dudley Herschbach. Wilson was elected to the first class of the Harvard Society of Fellows. Wilson made major contributions to the field of molecular spectroscopy. He developed the first rigorous quantum mechanical Hamiltonian in internal coordinates for a polyatomic molecule. He developed the theory of how rotational spectra are influenced by centrifugal distortion during rotation. He pioneered the use of group theory for the analysis and simplification of normal mode analysis, particularly for high-symmetry molecules such as benzene. In 1955, Wilson published Molecular Vibrations along with J. C. Decius and Paul C. Cross. Following the Second World War, Wilson was a pioneer in the application of microwave spectroscopy to the determination of molecular structure. Wilson wrote an influential introductory text, Introduction to Scientific Research, that provided an introduction to all the steps of scientific research, from defining a problem through the archiving of data after publication. Starting in 1997, the American Chemical Society has annually awarded the E. Bright Wilson Award in Spectroscopy, named in honor of Wilson. Scientific career Bright started his higher education at Princeton in 1926, where he received his bachelor's and master's degrees in 1930 and 1931, respectively. He then went to the California Institute of Technology, where he worked with Linus Pauling on crystal structure determinations and finished his PhD. During this time, he also wrote a textbook with Pauling, called Introduction to Quantum Mechanics, which was published in 1935. This textbook was still in print in the year 2000, some 70 years after its initial publication. In 1934, Bright was elected to the Society of Fellows at Harvard for his work done at the California Institute of Technology. His election meant he had a three-year junior fellowship at Harvard, during which he studied molecular motion and symmetry analysis. In 1936 the Harvard chemistry department appointed Bright as an assistant professor during the third year of his fellowship. He taught courses in chemistry and quantum mechanics and was promoted to associate professor with tenure after three years. From 1934 to 1941, Bright, along with Harold Gershinowitz, constructed an automatic infrared spectrometer which was used to measure vibrational absorption spectra of various molecules. After the start of World War II, Bright started research on explosives with the National Defense Research Committee (NDRC), where he studied shock waves in water. In 1942 an Underwater Explosives Research Laboratory (UERL) was opened at the Woods Hole Oceanographic Institution, which Bright led.
After the start of World War II, Wilson began research on explosives with the National Defense Research Committee (NDRC), where he studied shock waves in water. In 1942 an Underwater Explosives Research Laboratory (UERL) was opened at the Woods Hole Oceanographic Institution, with Wilson at its head. The US Navy, exasperated by the continual harassment of Allied shipping by German U-boats, had a strong interest in the UERL and its research on depth charges and other anti-submarine weapons. To facilitate this research, the laboratory acquired an old fishing vessel, the Reliance, which was fitted out to record electronic signals from pressure sensors deep underwater.

After the end of the war Wilson returned to Harvard. In 1947 Wilson and Richard Hughes invented and built a Stark-effect microwave spectrometer, which used an applied electric field to modulate molecular rotational absorption lines and became an important tool in spectroscopy. From 1949 to 1950, Wilson took a sabbatical at Oxford, during which he mainly worked on his book Introduction to Scientific Research, published in 1952. In 1952–1953, during the Korean War, Wilson served as research director and deputy director of the Weapons Systems Evaluation Group (WSEG), a post he held for only 18 months. He later accepted further assignments in Washington in the mid-1960s, during the Vietnam War.

In 1955 Wilson published Molecular Vibrations with co-authors J. C. Decius and P. C. Cross, which treated the infrared and Raman spectra of polyatomic molecules. In the same year he studied the internal rotation about single bonds in molecules using microwave spectroscopy. In 1965 he studied rotational energy transfer in inelastic molecular collisions, and in 1970 he began to study hydrogen bonding and the structure of hydrogen bonds using low-resolution microwave spectroscopy. In 1979, Wilson retired and was named an emeritus professor. The E. Bright Wilson Award in Spectroscopy was established in 1994 by the American Chemical Society.

Personal life

Wilson was born in Gallatin, Tennessee, to mother Alma Lackey and father Edgar Bright Wilson, a lawyer. His family soon moved to Yonkers, New York. He was married to Emily Buckingham from 1935 until her death in 1954; in 1955 he married Therese Bremer, a distinguished photochemist. Wilson had four sons and two daughters; one of his sons, Kenneth G. Wilson, was a Nobel laureate in physics. In his final years, Wilson suffered from Parkinson's disease. He died of pneumonia on July 12, 1992, in Cambridge, Massachusetts.

References

External links

1908 births 1992 deaths 20th-century American chemists Theoretical chemists Harvard University faculty National Medal of Science laureates Spectroscopists Members of the American Philosophical Society
Edgar Bright Wilson
Chemistry
1,133
9,195,888
https://en.wikipedia.org/wiki/Marimastat
Marimastat was a proposed antineoplastic drug developed by British Biotech. It acted as a broad-spectrum matrix metalloproteinase (MMP) inhibitor. Marimastat performed poorly in clinical trials, and development was terminated. This failure may, however, be a result of targeting cancer at too late a stage: MMP inhibitors have more recently been shown in animal models to be more effective in earlier stages of cancer (Bergers, G., Javaherian, K., Lo, K.-M., Folkman, J., and Hanahan, D. (1999). Effects of angiogenesis inhibitors on multistage carcinogenesis in mice. Science 284, 808–812).

See also
Batimastat

References

Experimental cancer drugs Hydroxamic acids Matrix metalloproteinase inhibitors Isobutyl compounds Tert-butyl compounds
Marimastat
Chemistry
190
38,028,990
https://en.wikipedia.org/wiki/C12H10O6
The molecular formula C12H10O6 (molar mass: 250.20 g/mol, exact mass: 250.0477 u) may refer to:

Difucol, a phlorotannin
Diphlorethol, a phlorotannin
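As a quick check of the quoted molar mass, a short sketch using standard atomic weights; the weights and their rounding are ours, not taken from the article:

# Recompute the molar mass of C12H10O6 from standard atomic weights.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol

formula = {"C": 12, "H": 10, "O": 6}
molar_mass = sum(n * ATOMIC_WEIGHT[el] for el, n in formula.items())
print(f"{molar_mass:.2f} g/mol")
# Prints 250.21 g/mol; the quoted 250.20 reflects slightly different
# atomic-weight conventions and rounding.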
C12H10O6
Chemistry
74
9,844,221
https://en.wikipedia.org/wiki/J.%20Lawrence%20Smith%20Medal
The J. Lawrence Smith Medal is awarded every three years by the National Academy of Sciences for investigations of meteoric bodies. The medal's namesake is the American chemist and meteoriticist J. Lawrence Smith.

Recipients

List of recipients:

H. A. Newton (1888)
George P. Merrill (1922)
Stuart H. Perry (1945)
Fred L. Whipple (1949)
Peter M. Millman (1954)
Mark G. Inghram (1957)
Ernst J. Öpik (1960)
Harold C. Urey (1962)
John H. Reynolds (1967)
Edward P. Henderson (1970)
Edward Anders (1971)
Clair C. Patterson (1973)
John A. Wood (1976)
Ralph B. Baldwin (1979)
G. J. Wasserburg (1985)
A. G. W. Cameron (1988)
Robert M. Walker (1991)
Donald E. Brownlee (1994)
Ernst K. Zinner (1997) – For his pioneering studies of the isotopic composition of circumstellar dust grains preserved in meteorites, opening a new window to the formation of the solar nebula.
George W. Wetherill (2000) – For his unique contributions to the cosmochronology of the planets and meteorites and to the orbital dynamics and formation of solar system bodies.
John T. Wasson (2003) – For important studies on the classification, origin, and early history of iron meteorites and chondritic meteorites, and on the mode of formation of chondrules.
Klaus Keil (2006) – For his pioneering quantitative studies of minerals in meteorites and important contributions to understanding the nature, origin, and evolution of their parent bodies.
Robert N. Clayton (2009) – For pioneering the study of oxygen isotopes to unravel the nature and origin of meteorites, showing that meteorites were assembled from components with distinct nuclear origins.
Harry Y. McSween Jr. (2012) – For his studies of the igneous and metamorphic histories of the parent planets of the chondritic and achondritic meteorites, with particular emphasis on his work on the geological history of Mars based on studies of Martian meteorites and spacecraft missions to this planet.
Hiroko Nagahara (2015) – For her work on the kinetics of evaporation and condensation processes in the early Solar System and her fundamental contributions to one of the most enduring mysteries in meteoritics, the formation of the chondrules that constitute the characteristic component of the most abundant group of meteorites.
Kevin D. McKeegan (2018) – For contributions to understanding of the processes and chronology of the early solar system as recorded by primitive meteorites, for innovation in analytical instrumentation, and for showing that the oxygen isotopic compositions of the Earth and rocky planets and meteorites are distinctly different from that of the Sun.
Meenakshi Wadhwa (2021) – For deepening the world's understanding of the evolutionary history of the solar system through her significant contributions to the sciences of cosmochemistry, solar system chronology, meteoritics, and trace element geochemistry.

See also
List of astronomy awards
Glossary of meteoritics

References

Further reading
Willson, Lee Anne. J. Lawrence Smith and his meteorite collections. Presented April 2006.

Astronomy prizes Awards established in 1888 Meteorite prizes Awards of the United States National Academy of Sciences
J. Lawrence Smith Medal
Astronomy,Technology
698
35,596,359
https://en.wikipedia.org/wiki/ECybermission
eCYBERMISSION is a U.S. Army-sponsored online educational science fair for students in grades 6–9 in the United States or at US Army schools around the world. The contest is conducted entirely online: teams of 2–4 students submit "Mission Folders", which contain detailed information about their projects and follow either Scientific Inquiry or the Engineering Design Process. The competition selects winners at the state, regional, and finally national levels for each grade level. All regional winners receive a trip to attend the National Judging and Educational Event (NJ&EE), and students can win up to $10,000 in savings bonds (maturity value). The NJ&EE includes many opportunities to meet other competitors, physical training, various workshops and panels, as well as the DoD STEM Workshops, a day spent working with scientists and engineers from different sectors of the DoD.

eCYBERMISSION is part of the Army Educational Outreach Program (AEOP). The competition is administered by the National Science Teaching Association (NSTA).

References

External links
eCYBERMISSION Official Website
RDECOM Official Website
US AEOP website

American educational websites Science competitions American military youth groups
ECybermission
Technology
234
56,157,624
https://en.wikipedia.org/wiki/Plant%20Biotechnology%20Journal
Plant Biotechnology Journal is an open access journal that publishes original research and reviews with an emphasis on molecular plant sciences and their applications through plant biotechnology. It was established in 2003 and is published by Wiley-Blackwell in association with the Society for Experimental Biology and the Association of Applied Biologists. The editor-in-chief is Henry Daniell (University of Pennsylvania).

According to the Journal Citation Reports, the journal has a 2021 impact factor of 13.263, ranking it 5th out of 238 journals in the category "Plant Sciences" and 8th out of 158 journals in the category "Biotechnology & Applied Microbiology".

As an open access journal, it makes its articles accessible globally without restriction; to cover the cost of publishing, it charges authors a publication fee.

References

External links

Botany journals Biotechnology journals English-language journals Wiley-Blackwell academic journals Academic journals established in 2003 Academic journals associated with international learned and professional societies of Europe
Plant Biotechnology Journal
Biology
190
22,955,745
https://en.wikipedia.org/wiki/Value-driven%20design
Value-driven design (VDD) is a systems engineering strategy based on microeconomics which enables multidisciplinary design optimization. Value-driven design is being developed by the American Institute of Aeronautics and Astronautics, through a program committee of government, industry and academic representatives. In parallel, the U.S. Defense Advanced Research Projects Agency has promulgated an identical strategy, calling it value-centric design, on the F6 Program. At this point, the terms value-driven design and value-centric design are interchangeable.

The essence of these strategies is that design choices are made to maximize system value rather than to meet performance requirements. This is similar to the value-driven approach of agile software development, where a project's stakeholders prioritise their high-level needs (or system features) based on the perceived business value each would deliver.

Value-driven design is controversial because performance requirements are a central element of systems engineering. However, supporters of value-driven design claim that it can improve the development of large aerospace systems by reducing or eliminating the cost overruns which, according to independent auditors, are a major problem.

Concept

Value-driven design creates an environment that enables and encourages design optimization by providing designers with an objective function and eliminating those constraints which have been expressed as performance requirements. The objective function inputs all the important attributes of the system being designed, and outputs a score: the higher the score, the better the design. Describing an early version of what is now called value-driven design, George Hazelrigg said, "The purpose of this framework is to enable the assessment of a value for every design option so that options can be rationally compared and a choice taken." At the whole-system level, the objective function which performs this assessment of value is called a "value model."

The value model distinguishes value-driven design from Multi-Attribute Utility Theory applied to design. Whereas in Multi-Attribute Utility Theory an objective function is constructed from stakeholder assessments, value-driven design employs economic analysis to build a value model. The basis for the value model is often an expression of profit for a business, but economic value models have also been developed for other organizations, such as government.

To design a system, engineers first take system attributes that would traditionally be assigned performance requirements, like the range and fuel consumption of an aircraft, and build a system value model that uses all these attributes as inputs. Next, the conceptual design is optimized to maximize the output of the value model. Then, when the system is decomposed into components, an objective function for each component is derived from the system value model through a sensitivity analysis, as sketched below. A workshop exercise implementing value-driven design for a GPS satellite was conducted in 2006, and may serve as an example of the process.
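The following minimal sketch illustrates the process just described; the attribute names, economic coefficients, and the finite-difference sensitivity are invented for illustration and are not taken from the AIAA committee's or DARPA's actual value models:

# Hypothetical sketch of a system value model used as the design
# objective function; all coefficients are assumptions, not real data.

def value_model(fuel_burn_kg_per_km: float, unit_cost_usd: float) -> float:
    """Toy economic value of an aircraft design: lifetime revenue
    minus lifetime fuel cost and acquisition cost."""
    revenue_per_km = 8.0    # USD earned per km flown (assumed)
    fuel_price = 1.2        # USD per kg of fuel (assumed)
    lifetime_km = 2.0e7     # km flown over the service life (assumed)
    revenue = revenue_per_km * lifetime_km
    fuel_cost = fuel_price * fuel_burn_kg_per_km * lifetime_km
    return revenue - fuel_cost - unit_cost_usd

# Conceptual design: prefer the candidate with the higher value score,
# with no performance requirement imposed on either attribute.
print(value_model(fuel_burn_kg_per_km=3.0, unit_cost_usd=5.0e7))

# Component objectives come from a sensitivity analysis of the system
# value model, e.g. the marginal system value of reducing fuel burn:
base = value_model(3.0, 5.0e7)
better = value_model(2.9, 5.0e7)
print((better - base) / 0.1)  # USD of system value per (kg/km) saved

The design choice here is that subsystem teams can trade attributes against one another in common economic units, rather than negotiating fixed requirements.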
History

The dichotomy between designing to performance requirements versus objective functions was raised by Herbert Simon in an essay called "The Science of Design" in 1969. Simon argued both sides: ideally, engineered systems should be optimized according to an objective function, but realistically this is often too hard, so that attributes would need to be satisficed, which amounted to setting performance requirements. Nevertheless, he included optimization techniques in his recommended curriculum for engineers, and endorsed "utility theory and statistical decision theory as a logical framework for rational choice among given alternatives".

Utility theory was given most of its current mathematical formulation by von Neumann and Morgenstern, but it was the economist Kenneth Arrow who proved the Expected Utility Theorem most broadly, which says in essence that, given a choice among a set of alternatives, one should choose the alternative that provides the greatest probabilistic expectation of utility, where utility is value adjusted for risk aversion. Ralph Keeney and Howard Raiffa extended utility theory in support of decision making, and Keeney developed the idea of a value model to encapsulate the calculation of utility. Keeney and Raiffa also used "attributes" to describe the inputs to an evaluation process or value model.

George Hazelrigg first put engineering design, business plan analysis, and decision theory together in a single framework, in a paper written in 1995 and published in 1998. Meanwhile, Paul Collopy independently developed a similar framework in 1997, and Harry Cook developed the S-Model for incorporating product price and demand into a profit-based objective function for design decisions. The MIT Engineering Systems Division produced a series of papers from 2000 on, many co-authored by Daniel Hastings, in which many utility formulations were used to address various forms of uncertainty in making engineering design decisions. Saleh et al. is a good example of this work.

The term value-driven design was coined by James Sturges at Lockheed Martin while he was organizing a workshop that would become the Value-Driven Design Program Committee at the American Institute of Aeronautics and Astronautics (AIAA) in 2006. Meanwhile, value-centric design was coined independently by Owen Brown and Paul Eremenko of DARPA in the Phase 1 Broad Agency Announcement for the DARPA F6 satellite design program in 2007. Castagne et al. provides an example where value-driven design was used to design fuselage panels for a regional jet.

Value-based acquisition

Implementation of value-driven design on large government systems, such as NASA or European Space Agency spacecraft or weapon systems, will require a government acquisition system that directs or incentivizes the contractor to employ a value model. Such a system is proposed in some detail in an essay by Michael Lippitz, Sean O'Keefe, and John White. They suggest that "A program office can offer a contract in which price is a function of value", where the function is derived from a value model. The price function is structured so that, in optimizing the product design in accordance with the value model, the contractor will maximize its own profit. They call this system Value Based Acquisition.

See also
Systems engineering
Decision theory
Multidisciplinary optimization

References

Systems engineering
Value-driven design
Engineering
1,219
2,020,508
https://en.wikipedia.org/wiki/Elektron%20%28alloy%29
Elektron is the registered trademark of a wide range of magnesium alloys manufactured by a British company, Magnesium Elektron Limited. There are about 100 alloys in the Elektron range, containing from 0% to 9.5% of some of the following elements in varying proportions: aluminium (< 9.5%), yttrium (5.25%), neodymium (2.7%), silver (2.5%), gadolinium (1.3%), zinc (0.9%), zirconium (0.6%), manganese (0.5%) and other rare-earth metals.

Varying amounts of alloying elements (up to 9.5%) added to the magnesium result in changes to mechanical properties such as increased tensile strength, creep resistance, thermal stability or corrosion resistance.

Elektron is unusually light, with a specific gravity of about 1.8 compared with 2.8 for aluminium alloy or 7.9 for steel. Magnesium's relatively low density makes its alloy variants suitable for use in auto racing and aerospace engineering applications.
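To put those figures in concrete terms, here is a small illustrative sketch; the one-litre component is hypothetical, and the densities are simply the specific gravities quoted above read as g/cm3:

# Compare the part masses implied by the quoted specific gravities.
densities = {"Elektron": 1.8, "aluminium alloy": 2.8, "steel": 7.9}

volume_cm3 = 1000.0  # a hypothetical one-litre component
for name, rho in densities.items():
    mass_kg = rho * volume_cm3 / 1000.0
    ratio = rho / densities["Elektron"]
    print(f"{name}: {mass_kg:.1f} kg ({ratio:.1f}x the Elektron mass)")
# Output: Elektron 1.8 kg, aluminium alloy 2.8 kg (1.6x), steel 7.9 kg
# (4.4x) -- i.e. an Elektron part is about 36% lighter than the same
# part in aluminium alloy and about 77% lighter than in steel.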
History

Elektron or Elektronmetall was first developed in 1908 by Gustav Pistor and Wilhelm Moschel at the Bitterfeld works of Chemische Fabrik Griesheim-Elektron (CFGE or CFG), whose headquarters was in Griesheim am Main, Germany. The composition of the initial Elektron alloy was approximately Mg 90%, Al 9%, other 1%.

At its pavilion at the International Aviation Fair (Internationale Luftschiffahrt-Ausstellung, ILA) in Frankfurt am Main in 1909, CFG exhibited an Adler 75 hp engine with a cast magnesium alloy crankcase. Also exhibiting at the 1909 Frankfurt Air Exhibition was August Euler (1868–1957) – owner of German pilot's licence No. 1 – who manufactured Voisin biplanes under licence in Griesheim am Main. His Voisins with Adler 50 hp engines flew in October 1909.

CFG joined the newly created IG Farben as an associate company in 1916. During the Allied occupation after World War I, Major Charles J. P. Ball, DSO, MC, of the Royal Horse Artillery was stationed in Germany. He later joined F. A. Hughes and Co. Ltd., which began manufacturing elektron in the UK under licence from IG Farben from around 1923. CFG merged fully with the IG Farben conglomerate in 1925, along with Versuchsbau Hellmuth Hirth (a copper alloy manufacturer), forming another company, Elektronmetall Bad Cannstatt, Stuttgart. In 1935, IG Farben, ICI and F. A. Hughes and Co. (22% of shares) founded Magnesium Elektron Ltd. of Clifton, Greater Manchester, which was still manufacturing alloys as of 2017.

Uses

Elektron has been used in Zeppelin airships, aircraft, and motor racing applications.

Incendiary bombs using elektron were developed towards the end of the First World War by both Germany (the B-1E Elektronbrandbombe or Stabbrandbombe) and the UK. Although neither side used this type of bomb operationally during the conflict, Erich Ludendorff mentions in his memoirs a plan to bomb Paris with a new type of incendiary bomb, with the aim of overwhelming the city's fire services; this planned raid was also reported in Le Figaro on 21 December 1918. The lightness of elektron meant that a large aeroplane like the Zeppelin-Staaken R-type bomber could carry hundreds of bomblets.

British and German incendiary bombs used extensively during World War II weighed about 1 kg and consisted of an outer casing made of elektron alloy, filled with thermite pellets and fitted with a fuse. The fuse ignited the thermite, which in turn ignited the magnesium casing; it burned for about 15 minutes. Trying to douse the fire with water only intensified the reaction; the bomb could not be extinguished and burned at such a high temperature that it could penetrate armour plate.

In 1924, magnesium alloys (AZ; 2.5–3.0% Al, 3.0–4.0% Zn) were used in automobile pistons diecast by Elektronmetall Bad Cannstatt, another IG Farben company formed out of Versuchsbau Hellmuth Hirth. The main engine bearers of the Messerschmitt Bf 109 and the Junkers Ju 87 were made from forged elektron, and the air-cooled BMW 801 radial aero engine that powered the Focke-Wulf Fw 190 had a cooling fan made of magnesium alloy, very probably elektron. An advertisement in the German trade paper Flugsport in 1939 claimed that the record-breaking Arado Ar 79 aircraft contained 25% by weight of elektron, mostly in its Hirth HM 504 A2 4-cylinder inline engine, whose crankcase was made of elektron.

The connectors for the fuel pipes in the engine compartment of Tiger II tanks were originally made of elektron, but they distorted when clamped and were replaced with steel ones. Siemens-Halske used elektron casings for its Hellschreiber military teleprinter during World War II.

The prototype 4-seater 1948 Planet Satellite had a monocoque fuselage of elektron, a solid elektron keel and wings skinned in elektron, but the keel suffered from stress failures and the aircraft never reached production. The bodywork of certain racing cars also utilized elektron, including the Mercedes-Benz 300 SLR that infamously crashed in the 1955 Le Mans race, a disaster that highlighted the alloy's flammability.

See also
List of alloys
Magnesium Elektron

References

External links
Website of Magnesium Elektron Ltd.
"ZTi" a 1952 Flight advertisement for Magnesium Elektron's new alloy
ASTM International standards

Magnesium alloys Aluminium–magnesium alloys
Elektron (alloy)
Chemistry
1,253
62,414,339
https://en.wikipedia.org/wiki/Phaeocollybia%20festiva
Phaeocollybia festiva is a species of fungus in the family Cortinariaceae.

References

Cortinariaceae Fungi described in 1942 Fungus species
Phaeocollybia festiva
Biology
33