formalized, an incorrect spelling variant was "Crysomallon squamiferum". Chrysomallon squamiferum is the type species and the sole species within the genus Chrysomallon. The generic name Chrysomallon is from the Ancient Greek language, and means "golden haired", because pyrite (a compound occurring in its shell) is golden in color. The specific name squamiferum is from the Latin language and means "scale-bearing", a reference to its sclerites. At first it was not known to which family this species belonged. Warén et al. classified this species in the family Peltospiridae, within the Neomphalina, in 2003. Molecular analyses based on sequences of cytochrome-c oxidase I (COI) genes confirmed the placement of this species within the Peltospiridae. Morphotypes from two localities are dark; a morphotype from a third locality is white (see the next section for an explanation of the localities). These differently colored snails appear to be simply "varieties" of the same species, according to the results of genetic analysis.

== Distribution ==

The scaly-foot gastropod is a vent-endemic gastropod known only from the deep-sea hydrothermal vents of the Indian Ocean, which are around 2,780 metres (1.73 mi) in depth. The species was discovered in 2001, living on the bases of black smokers in the Kairei hydrothermal vent field, 25°19.239′S 70°02.429′E, on the Central Indian Ridge, just north of the Rodrigues Triple Point. The species has subsequently also been found in the Solitaire field, 19°33.413′S 65°50.888′E, Central Indian Ridge, within the Exclusive Economic Zone of Mauritius, and in the Longqi field (Longqi means "Dragon flag" in Chinese), 37°47.027′S 49°38.963′E, Southwest Indian Ridge. The Longqi field was designated as the type locality; all type material originated from this vent field. The distance between Kairei and Solitaire is about 700 km (430 mi). The distance between Solitaire and Longqi is about 2,500 km (1,600 mi). These three sites belong to the Indian Ocean biogeographic
|
{
"page_id": 17367123,
"source": null,
"title": "Scaly-foot gastropod"
}
|
province of hydrothermal vent systems sensu Rogers et al. (2012). The distance between sites is large, but the total distribution area is very small, less than 0.02 square kilometres (0.0077 sq mi). Peltospiridae snails are mainly known to live in Eastern Pacific vent fields. Nakamura et al. hypothesized that the occurrence of the scaly-foot gastropod in the Indian Ocean suggests a relationship between the hydrothermal vent faunas of these two areas. Research expeditions have included:

2000 – an expedition of the Japan Agency for Marine-Earth Science and Technology using the ship RV Kairei and ROV Kaikō discovered the Kairei vent field, but scaly-foot gastropods were not found at that time. This was the first vent field discovered in the Indian Ocean.
2001 – an expedition of the U.S. research vessel RV Knorr with ROV Jason discovered scaly-foot gastropods in the Kairei vent field.
2007 – an expedition of RV Da Yang Yi Hao discovered the Longqi vent field.
2009 – an expedition of RV Yokosuka with DSV Shinkai 6500 discovered the Solitaire field and sampled scaly-foot gastropods there.
2009 – an expedition of RV Da Yang Yi Hao visually observed scaly-foot gastropods at the Longqi vent field.
2011 – an expedition of the British Royal Research Ship RRS James Cook with ROV Kiel 6000 sampled the Longqi vent field.

== Description ==

=== Sclerites ===

In this species, the sides of the snail's foot are extremely unusual, being armoured with hundreds of iron-mineralised sclerites, which are composed of the iron sulfides greigite and pyrite. Each sclerite has a soft epithelial tissue core, a conchiolin cover, and an uppermost layer containing pyrite and greigite. Prior to the discovery of the scaly-foot gastropod, it was thought that the only extant molluscs possessing scale-like structures were in the classes Caudofoveata, Solenogastres and Polyplacophora. Sclerites are
not homologous to a gastropod operculum. The sclerites of the scaly-foot gastropod are also not homologous to the sclerites found in chitons (Polyplacophora). It has been hypothesized that the sclerites of Cambrian halwaxiids such as Halkieria may be more analogous to the sclerites of this snail than are the sclerites of chitons or aplacophorans. As recently as 2015, detailed morphological analysis to test this hypothesis had not been carried out. The sclerites of C. squamiferum are mainly proteinaceous (conchiolin is a complex protein); in contrast, the sclerites of chitons are mainly calcareous. There are no visible growth lines of conchiolin in cross-sections of the sclerites. No other extant or extinct gastropods possess dermal sclerites, and no other extant animal is known to use iron sulfides in this way, either in its skeleton or in its exoskeleton. Each sclerite is about 1 × 5 mm in adults. Juveniles have scales in a few rows, while adults have dense and asymmetric scales. The Solitaire population of snails has white sclerites instead of black; this is due to a lack of iron in the sclerites. The sclerites are imbricated (overlapped in a manner reminiscent of roof tiles). The purpose of the sclerites has been speculated to be protection or detoxification. The sclerites may help protect the gastropod from the vent fluid, so that its bacteria can live close to the source of electron donors for chemosynthesis. Alternatively, the sclerites may result from deposition of toxic sulfide waste from the endosymbionts, and therefore represent a novel solution for detoxification. The true function of the sclerites is, as yet, unknown. The sclerites of the Kairei population, which have a layer of iron sulfide, are ferrimagnetic. The non-iron-sulfide-mineralized sclerite from the Solitaire morphotype showed greater mechanical strength of the whole structure in the three-point bending stress
test (12.06 MPa) than did the sclerite from the Kairei morphotype (6.54 MPa). In life, the external surfaces of the sclerites host a diverse array of epibionts: Campylobacterota (formerly Epsilonproteobacteria) and Thermodesulfobacteriota (formerly part of Deltaproteobacteria). These bacteria probably mediate the mineralization of the sclerites. Goffredi et al. (2004) hypothesized that the snail secretes organic compounds that facilitate the attachment of the bacteria.

=== Shell ===

The shell of this species has three whorls. The shape of the shell is globose and the spire is compressed. The shell sculpture consists of ribs and fine growth lines. The shape of the aperture is elliptical. The apex of the shell is fragile and is corroded in adults. This is a very large peltospirid compared to the majority of other species, which are usually below 15 millimetres (3⁄5 in) in shell length. The width of the shell is 9.80–40.02 mm (0.39–1.58 in); the maximum width of the shell reaches 45.5 millimetres (1.79 in). The average width of the shell of adult snails is 32 mm. The average shell width in the Solitaire population was slightly less than that in the Kairei population. The height of the shell is 7.65–30.87 mm (0.30–1.22 in). The width of the aperture is 7.26–32.52 mm (0.29–1.28 in). The height of the aperture is 6.38–27.29 mm (0.25–1.07 in). The shell structure consists of three layers. The outer layer is about 30 μm thick, black, and made of iron sulfides, containing greigite (Fe3S4). This species is the only extant animal known to feature this material in its skeleton. The middle layer (about 150 μm) is equivalent to the organic periostracum also found in other gastropods. The periostracum is thick and brown. The innermost layer is made of aragonite (about 250 μm thick), a form of calcium carbonate that is
commonly found both in the shells of molluscs and in various corals. The color of the aragonite layer is milky white. Each shell layer appears to contribute to the effectiveness of the snail's defence in a different way. The middle organic layer appears to absorb the mechanical strain and energy generated by a squeezing attack (for example by the claws of a crab), making the shell much tougher. The organic layer also acts to dissipate heat. Researchers are studying features of this composite material for possible use in civilian and military protective applications.

=== Operculum ===

In this species, the shape of the operculum changes during growth, from a rounded shape in juveniles to a curved shape in adults. The relative size of the operculum decreases as individuals grow. About half of all adult snails of this species possess an operculum among the sclerites at the rear of the animal. It seems likely that the sclerites gradually grow and fully cover the whole foot for protection, and that the operculum loses its protective function as the animal grows.

=== External anatomy ===

The scaly-foot gastropod has a thick snout, which tapers distally to a blunt end. The mouth is a circular ring of muscles when contracted and closed. The two smooth cephalic tentacles are thick at the base and gradually taper to a fine point at their distal tips. This snail has no eyes. There is no specialised copulatory appendage. The foot is red and large, and the snail cannot withdraw the foot entirely into the shell. There is no pedal gland in the front part of the foot. There are also no epipodial tentacles.

=== Internal anatomy ===

In C. squamiferum, the soft parts of the animal occupy approximately two whorls of the interior of the shell. The shell
muscle is horseshoe-shaped and large, divided into two parts on the left and right, connected by a narrower attachment. The mantle edge is thick but simple, without any distinctive features. The mantle cavity is deep and reaches the posterior edge of the shell. The medial to left side of the cavity is dominated by a very large bipectinate ctenidium. Ventral to the visceral mass, the body cavity is occupied by a huge oesophageal gland, which extends to fill the ventral floor of the mantle cavity. The digestive system is simple, and is reduced to less than 10% of the volume typical in gastropods. The radula is "weak", of the rhipidoglossan type, with a single pair of radular cartilages. The formula of the radula is ~50 + 4 + 1 + 4 + ~50. The radula ribbon is 4 mm long and 0.5 mm wide; the width to length ratio is approximately 1:8. There is no jaw, and there are no salivary glands. A part of the anterior oesophagus rapidly expands into a huge, hypertrophied, blind-ended oesophageal gland, which occupies much of the ventral face of the mantle cavity (an estimated 9.3% of body volume). The oesophageal gland grows isometrically with the snail, consistent with the snail depending on its endosymbiont microbes throughout its settled life. The oesophageal gland has a uniform texture, and is highly vascularised with fine blood vessels. The stomach has at least three ducts at its anterior right, connecting to the digestive gland. There are consolidated pellets in both the stomach and the hindgut. These pellets are probably granules of sulfur produced by the endosymbionts as a way to detoxify hydrogen sulfide. The intestine is reduced, and has only a single loop. The extensive and unconsolidated digestive gland extends to the posterior, filling the apex of the shell. The
rectum does not penetrate the heart, but passes ventral to it. The anus is located on the right side of the snail, above the genital opening. In the excretory system, the nephridium is central, tending to the right side of the body, as a thin dark layer of glandular tissue. The nephridium is anterior and ventral to the digestive gland, and is in contact with the dorsal side of the foregut. The respiratory system and circulatory system consist of a single left bipectinate ctenidium (gill), which is very large (15.5% of the body volume), and is supported by many large and mobile blood sinuses filled with haemocoel. On dissection, the blood sinuses and lumps of haemocoel material are a prominent feature throughout the body cavity. Although the circulatory system in Chrysomallon is mostly closed (meaning that haemocoel mostly does not leave the blood sinuses), the prominent blood sinuses appear to be transient, and occur in different areas of the body in different individuals. There are thin gill filaments on either side of the ctenidium. The bipectinate ctenidium extends far behind the heart into the upper shell whorls; it is much larger than in Peltospira. Although this species has a similar shell shape and general form to other peltospirids, the ctenidium is proportional in size to that of Hirtopelta, which has the largest gill among the peltospirid genera that have been investigated anatomically so far. The ctenidium provides oxygen for the snail, but the circulatory system is enlarged beyond the scope of other similar vent gastropods. There are no endosymbionts in or on the gill of C. squamiferum. The enlargement of the gill probably facilitates extracting oxygen in the low-oxygen conditions that are typical of hydrothermal-vent ecosystems. At the posterior of the ctenidium is a remarkably large and well-developed heart. The heart
is proportionally unusually large for any animal. Based on the volume of the single auricle and ventricle, the heart complex represents approximately 4% of the body volume (for comparison, the human heart is about 1.3% of body volume). The ventricle is 0.64 mm long in juveniles with a shell length of 2.2 mm, and grows to 8 mm long in adults. This proportionally giant heart primarily draws blood through the ctenidium and supplies the highly vascularised oesophageal gland. In C. squamiferum the endosymbionts are housed in the oesophageal gland, where they are isolated from the vent fluid. The host is thus likely to play a major role in supplying the endosymbionts with the necessary chemicals, leading to increased respiratory needs. Detailed investigation of the haemocoel of C. squamiferum is expected to reveal further information about its respiratory pigments. The scaly-foot gastropod is a chemosymbiotic holobiont. It hosts thioautotrophic (sulfur-oxidising) gammaproteobacterial endosymbionts in a much enlarged oesophageal gland, and appears to rely on these symbionts for nutrition. The closest known relative of this endosymbiont is the endosymbiont of Alviniconcha snails. In this species, the size of the oesophageal gland is about two orders of magnitude larger than usual. There is significant branching of blood vessels within the oesophageal gland, where the blood pressure likely decreases to almost zero. The elaborate cardiovascular system most likely evolved to oxygenate the endosymbionts in an oxygen-poor environment, and/or to supply hydrogen sulfide to the endosymbionts. The thioautotrophic gammaproteobacteria have a full set of genes required for aerobic respiration, and are probably capable of switching between the more efficient aerobic respiration and the less efficient anaerobic respiration, depending on oxygen availability. In 2014, the endosymbiont of the scaly-foot gastropod became the first endosymbiont of any gastropod for which the complete genome was known. C. squamiferum was previously thought to
be the only species of Peltospiridae that has an enlarged oesophageal gland, but it was later discovered that both species of Gigantopelta also have an enlarged oesophageal gland. Chrysomallon and Gigantopelta are the only vent animals, apart from siboglinid tubeworms, that house endosymbionts within an enclosed part of the body not in direct contact with the vent fluid. The nervous system is large, and the brain is a solid neural mass without ganglia. The nervous system is reduced in complexity and enlarged in size compared to other neomphaline taxa. As is typical of gastropods, the nervous system is composed of an anterior oesophageal nerve ring and two pairs of longitudinal nerve cords, the ventral pair innervating the foot and the dorsal pair forming a twist via streptoneury. The frontal part of the oesophageal nerve ring is large, connecting two lateral swellings. The huge fused neural mass is directly adjacent to, and passes through, the oesophageal gland, where the bacteria are housed. There are large tentacular nerves projecting into the cephalic tentacles. The sensory organs of the scaly-foot gastropod include statocysts surrounded by the oesophageal gland, each statocyst with a single statolith. There are also sensory ctenidial bursicles on the tips of the gill filaments; these are known to be present in most vetigastropods, and are present in some neomphalines. The reproductive system has some unusual features. The gonads of adult snails are not inside the shell; they are in the head-foot region on the right side of the body. There are no gonads present in juveniles with a shell length of 2.2 mm. Adults possess both testis and ovary, at different levels of development. The testis is placed ventrally; the ovary is placed dorsally, and the nephridium lies between them. There is a "spermatophore packaging organ" next to the testis. Gonoducts from the testis
and ovary are initially separate, but apparently fuse into a single duct, which emerges as a single genital opening on the right of the mantle cavity. The animal has no copulatory organ. It is hypothesized that the derived strategy of housing endosymbiotic microbes in an oesophageal gland has been the catalyst for anatomical innovations that serve primarily to improve the fitness of the bacteria, over and above the needs of the snail. The great enlargement of the oesophageal gland, the snail's protective dermal sclerites, its highly enlarged respiratory and circulatory systems, and its high fecundity are all considered to be adaptations that benefit its endosymbiont microbes. These adaptations appear to be a result of specialisation to meet energetic needs in an extreme chemosynthetic environment.

== Ecology ==

=== Habitat ===

This species inhabits the hydrothermal vent fields of the Indian Ocean. It lives adjacent to both acidic and reducing vent fluid, on the walls of black-smoker chimneys, or directly on diffuse-flow sites. The depth of the Kairei field varies from 2,415 to 2,460 m (7,923 to 8,071 ft), and its dimensions are approximately 30 by 80 m (98 by 262 ft). The slope of the field is 10° to 30°. The substrate rock is troctolite and depleted mid-ocean ridge basalt. The Kairei-field scaly-foot gastropods live in the low-temperature diffuse fluids of a single chimney. The transitional zone where these gastropods were found is about 1–2 m (3–7 ft) in width, with temperatures of 2–10 °C. The preferred water temperature for this species is about 5 °C. These snails live in an environment with high concentrations of hydrogen sulfide and low concentrations of oxygen. The abundance of scaly-foot gastropods was lower in the Kairei field than in the Longqi field. The Kairei hydrothermal-vent community consists of 35
taxa, including sea anemones Marianactis sp., crustaceans Austinograea rodriguezensis, Rimicaris kairei, Mirocaris indica, Munidopsis sp., Neolepadidae gen. et sp., Eochionelasmus sp., bivalves Bathymodiolus marisindicus, gastropods Lepetodrilus sp., Pseudorimula sp., Eulepetopsis sp., Shinkailepas sp., Alviniconcha marisindica, Desbruyeresia marisindica, Bruceiella wareni, Phymorhynchus sp., Sutilizona sp., slit limpet sp. 1, slit limpet sp. 2, Iphinopsis boucheti, solenogastres Helicoradomenia? sp., annelids Amphisamytha sp., Archinome jasoni, Capitellidae sp. 1, Ophryotrocha sp., Hesionidae sp. 1, Hesionidae sp. 2, Branchinotogluma sp., Branchipolynoe sp., Harmothoe? sp., Levensteiniella? sp., Prionospio sp., unidentified Nemertea and unidentified Platyhelminthes. Scaly-foot gastropods live in colonies with Alviniconcha marisindica snails, and there are colonies of Rimicaris kairei above them. The Solitaire field is at a depth of 2,606 m (8,550 ft), and its dimensions are approximately 50 by 50 m (160 by 160 ft). The substrate rock is enriched mid-ocean ridge basalt. Scaly-foot gastropods live near the high-temperature diffuse fluids of chimneys in the vent field. The abundance of scaly-foot gastropods was lower in the Solitaire field than in the Longqi field. The Solitaire hydrothermal-vent community comprises 22 taxa, including: sea anemones Marianactis sp., crustaceans Austinograea rodriguezensis, Rimicaris kairei, Mirocaris indica, Munidopsis sp., Neolepadidae gen. et sp., Eochionelasmus sp., bivalves Bathymodiolus marisindicus, gastropods Lepetodrilus sp., Eulepetopsis sp., Shinkailepas sp., Alviniconcha sp. type 3, Desbruyeresia sp., Phymorhynchus sp., annelids Alvinellidae gen. et sp., Archinome jasoni, Branchinotogluma sp., echinoderm holothurians Apodacea gen. et sp., fish Macrouridae gen. et sp., unidentified Nemertea, and unidentified Platyhelminthes.
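The separations between the three vent fields quoted in the Distribution section can be roughly reproduced from the coordinates given there. The sketch below uses the standard haversine great-circle formula, assuming a spherical Earth of radius 6,371 km (the helper names are ours, for illustration only); it recovers the quoted ~700 km and ~2,500 km figures to within roughly 10%:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two points, assuming a spherical Earth."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

def dm_to_deg(degrees, minutes, south_or_west=False):
    """Convert degrees + decimal minutes to signed decimal degrees."""
    value = degrees + minutes / 60.0
    return -value if south_or_west else value

# Coordinates from the Distribution section (southern latitudes are negative)
kairei    = (dm_to_deg(25, 19.239, south_or_west=True), dm_to_deg(70, 2.429))
solitaire = (dm_to_deg(19, 33.413, south_or_west=True), dm_to_deg(65, 50.888))
longqi    = (dm_to_deg(37, 47.027, south_or_west=True), dm_to_deg(49, 38.963))

print(round(haversine_km(*kairei, *solitaire)))  # Kairei-Solitaire, on the order of 700-800 km
print(round(haversine_km(*solitaire, *longqi)))  # Solitaire-Longqi, on the order of 2,500 km
```

The small discrepancy against the article's rounded figures comes from the spherical-Earth simplification and rounding in the published distances.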
The Longqi vent field is at a depth of 2,780 m (9,120 ft), and its dimensions are approximately 100 by 150 m (330 by 490 ft). C. squamiferum was found in dense populations in the areas immediately surrounding the diffuse-flow venting. The Longqi hydrothermal-vent community includes 23 macro- and megafauna taxa: sea anemones Actinostolidae sp., annelids Polynoidae n. gen. n. sp. "655", Branchipolynoe
n. sp. "Dragon", Peinaleopolynoe n. sp. "Dragon", Hesiolyra cf. bergi, Hesionidae sp. indet., Ophryotrocha n. sp. "F-038/1b", Prionospio cf. unilamellata, Ampharetidae sp. indet., mussels Bathymodiolus marisindicus, gastropods Gigantopelta aegis, Dracogyra subfuscus, Lirapex politus, Phymorhynchus n. sp. "SWIR", Lepetodrilus n. sp. "SWIR", crustaceans Neolepas sp. 1, Rimicaris kairei, Mirocaris indica, Chorocaris sp., Kiwa n. sp. "SWIR", Munidopsis sp. and echinoderm holothurians Chiridota sp. The combined density of Lepetodrilus n. sp. "SWIR" and scaly-foot gastropods exceeds 100 snails per m2 close to vent fluid sources at the Longqi vent field.

=== Feeding habits ===

The scaly-foot gastropod is an obligate symbiotroph throughout its post-settlement life. Throughout its post-larval life, the scaly-foot gastropod obtains all of its nutrition from the chemoautotrophy of its endosymbiotic bacteria. The scaly-foot gastropod neither filter-feeds nor uses any other feeding mechanism. The radula and radular cartilage are small, constituting only 0.4% and 0.8% respectively of juveniles' body volume, compared to 1.4% and 2.6% in the mixotrophic juveniles of Gigantopelta chessoia. In habitats where direct observation of feeding is difficult, trophic interactions can be identified by measuring carbon and nitrogen stable-isotope compositions. The δ13C values in the oesophageal gland are depleted relative to photosynthetically derived organic carbon, implicating the chemoautotrophic symbionts as the source of this carbon; this chemoautotrophic origin was confirmed experimentally.

=== Life cycle ===

This gastropod is a simultaneous hermaphrodite. It is the only species in the family Peltospiridae so far known to be a simultaneous hermaphrodite. It has a high fecundity. It lays eggs that are probably lecithotrophic. Eggs of the scaly-foot gastropod are negatively buoyant under atmospheric pressure.
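The stable-isotope values mentioned under Feeding habits use delta notation: δ13C is the per-mil deviation of a sample's 13C/12C ratio from a reference standard (VPDB). A minimal sketch of the calculation, where the numeric VPDB ratio used here is an assumed reference value for illustration:

```python
R_VPDB = 0.011180  # assumed 13C/12C ratio of the VPDB standard (illustrative value)

def delta13C(r_sample, r_standard=R_VPDB):
    """Per-mil (per thousand) deviation of a sample's 13C/12C ratio from the standard.

    Negative values mean the sample is depleted in 13C, as reported for the
    oesophageal gland of the scaly-foot gastropod relative to photosynthetic carbon."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A sample whose ratio is 3% below the standard is depleted by about -30 per mil,
# in the range typical of chemoautotrophically fixed carbon.
print(round(delta13C(R_VPDB * 0.97), 1))  # -30.0
```

Chemoautotrophic fixation discriminates against the heavier 13C isotope, which is why strongly negative δ13C values point to symbiont-derived rather than photosynthetically derived carbon.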
Neither the larva nor the protoconch was known as of 2016, but the species is thought to have a planktonic
dispersal stage. The smallest C. squamiferum juveniles ever collected had a shell length of 2.2 mm. The results of statistical analyses revealed no genetic differentiation between the two populations in the Kairei and Solitaire fields, suggesting potential connectivity between the two vent fields. The Kairei population represents a potential source population for the two populations on the Central Indian Ridge. These snails are difficult to keep alive in an artificial environment; however, they have survived in aquaria at atmospheric pressure for more than three weeks.

== Conservation measures and threats ==

The scaly-foot gastropod is not protected. Its potential habitat across all Indian Ocean hydrothermal vent fields has been estimated to be at most 0.27 square kilometres (67 acres), while the three known sites at which it has been found, between which only negligible migration occurs, add up to 0.0177 square kilometres (4.4 acres). The population at the Longqi vent field may be of particular concern. The Southwest Indian Ridge, within which it is located, is one of the slowest-spreading mid-ocean ridges, and the low rate of natural disturbance is associated with ecological communities that are likely more sensitive to disruptions, and slower to recover from them. Slow-spreading centers may also create larger mineral deposits, making these sensitive areas primary targets for deep-sea mining. Furthermore, by genetic measures the population at Longqi is poorly connected to those at the Kairei and Solitaire vent fields, over 2,000 km away within the Central Indian Ridge. The Solitaire Vent Field falls within the exclusive economic zone of Mauritius, while the other two sites are within Areas Beyond National Jurisdiction (commonly known as the high seas) under the authority of the International Seabed Authority, which has granted commercial mining exploration licenses for both. The Kairei Vent Field is
under a license to Germany (2015–2030), and the Longqi Vent Field to China (2011–2026). As of 2017, no conservation measures had been proposed or put in place for any of the three sites. The scaly-foot gastropod has been listed as an endangered species on the IUCN Red List of Threatened Species since July 4, 2019.

== See also ==

Iron in biology

== Notes ==

== References ==

== External links ==

Media related to Chrysomallon squamiferum at Wikimedia Commons
Important ecological areas (IEAs) are habitat areas which, either by themselves or in a network, contribute significantly to an ecosystem’s productivity, biodiversity, and resilience. Appropriate management of key ecological features delineates the management boundaries of an IEA. The identification and protection of IEAs is an element of an ecosystem-based management approach. Important ecological areas may have varying levels of management of extractive activities, from monitoring up to and including marine reserve status. IEAs have management measures tailored to the ecological features within the area, with consideration of socioeconomic factors, whereas marine reserves generally have a fixed management policy of no extraction, or ‘no-take’. Nonetheless, a marine reserve may be the appropriate management policy for an IEA. The identification and management of IEAs is a form of ocean zoning. In the event that there is a series of linked IEAs within a large marine ecosystem, collective action to manage the network, such as a marine sanctuary or national monument, may be warranted. Examples of the ecosystems involved include tropical rainforests, oceans, and forests.

== References ==
|
{
"page_id": 10485844,
"source": null,
"title": "Important ecological areas"
}
|
This page consists of a list of wastewater treatment technologies: == See also == Agricultural wastewater treatment Industrial wastewater treatment List of solid waste treatment technologies Waste treatment technologies Water purification Sewage sludge treatment == References ==
|
{
"page_id": 5963858,
"source": null,
"title": "List of wastewater treatment technologies"
}
|
Clinical chemistry (also known as chemical pathology, clinical biochemistry or medical biochemistry) is a division of medical laboratory sciences focusing on qualitative tests of important compounds, referred to as analytes or markers, in bodily fluids and tissues, using analytical techniques and specialized instruments. This interdisciplinary field draws on knowledge from medicine, biology, chemistry, biomedical engineering, informatics, and an applied form of biochemistry (not to be confused with medicinal chemistry, which involves basic research for drug development). The discipline originated in the late 19th century with the use of simple chemical reaction tests for various components of blood and urine. Many decades later, clinical chemists use automated analyzers in many clinical laboratories. These instruments perform tasks ranging from pipetting and labelling specimens to advanced measurement techniques such as spectrometry, chromatography, photometry, and potentiometry. They help identify uncommon analytes and measure changes in the optical and electrical properties of naturally occurring analytes such as enzymes, ions, and electrolytes, as well as their concentrations, all of which are important for diagnosing diseases. Blood and urine are the most common test specimens clinical chemists or medical laboratory scientists collect for routine clinical tests, with a main focus on serum and plasma in blood. There are now many blood tests and clinical urine tests with extensive diagnostic capabilities. Some clinical tests require clinical chemists to process the specimen before testing. Clinical chemists and medical laboratory scientists serve as the interface between the laboratory and clinical practice, advising physicians on which test panel to order and interpreting any irregularities in test results that reflect the patient's health status and organ system functionality.
This allows healthcare providers to evaluate a patient's health more accurately, to diagnose disease, to predict the progression of a disease (prognosis), to screen for disease, and to monitor the treatment's
|
{
"page_id": 65622,
"source": null,
"title": "Clinical chemistry"
}
|
efficiency in a timely manner. The type of test required dictates what type of sample is used.

== Common Analytes ==

Some common analytes that clinical chemistry tests analyze include:

== Panel tests ==

A physician may order many laboratory tests on one specimen, referred to as a test panel, when a single test cannot provide sufficient information to make a swift and accurate diagnosis and treatment plan. A test panel is a group of tests that a clinical chemist performs on one sample to look for changes in many analytes that may be indicative of specific medical concerns or of the health status of an organ system. Panel tests thus provide a more extensive evaluation of a patient's health, have higher predictive values for confirming or ruling out a disease, and are quick and cost-effective.

=== Metabolic Panel ===

A Metabolic Panel (MP) is a routine group of blood tests commonly used for health screenings, disease detection, and monitoring vital signs of hospitalized patients with specific medical conditions. An MP analyzes common analytes in the blood to assess the functions of the kidneys and liver, as well as electrolyte and acid-base balance. There are two types of MPs: the Basic Metabolic Panel (BMP) and the Comprehensive Metabolic Panel (CMP).

==== Basic Metabolic Panel ====

The BMP is a panel of tests that measures eight analytes in the blood's fluid portion (plasma). The results of the BMP provide valuable information about a patient's kidney function, blood sugar level, electrolyte levels, and acid-base balance. Abnormal changes in one or more of these analytes can be a sign of serious health issues: Sodium, Potassium, Chloride, and Carbon Dioxide: these are electrolytes whose electrical charges help manage the body’s water level, the acid-base balance in the blood, and kidney function. Calcium: This charged electrolyte is essential
for proper nerve function, muscle function, blood clotting, and bone health. Changes in the calcium level can be a sign of bone disease, muscle cramps or spasms, thyroid disease, or other conditions. Glucose: This measures blood sugar levels; glucose is a crucial energy source for the body and brain. High glucose levels can be a sign of diabetes or insulin resistance. Urea and Creatinine: These are waste products that the kidneys filter out of the blood. Urea measurements are helpful in detecting and treating kidney failure and related metabolic disorders, whereas creatinine measurements give information on kidney health, help track renal dialysis treatment, and are used to monitor hospitalized patients on diuretics. ==== Comprehensive Metabolic Panel ==== The comprehensive metabolic panel (CMP) comprises 14 tests: the eight BMP analytes plus total protein, albumin, alkaline phosphatase (ALP), alanine aminotransferase (ALT), aspartate aminotransferase (AST), and bilirubin. == Specimen Processing == For blood tests, clinical chemists must process the specimen to obtain plasma or serum before testing for targeted analytes. This is most easily done by centrifugation, which packs the denser blood cells and platelets to the bottom of the centrifuge tube, leaving the liquid serum fraction resting above the packed cells. This initial step before analysis has recently been included in instruments that operate on the "integrated system" principle. Plasma is obtained by centrifugation before clotting occurs. == Instruments == Most current medical laboratories now have highly automated analyzers to accommodate the high workload typical of a hospital laboratory, and accept samples for up to about 700 different kinds of tests. Even the largest of laboratories rarely do all these tests themselves, and some must be referred to other labs. Tests performed are closely monitored and quality controlled. == Specialties == The large array of tests can be categorised into sub-specialities of: General or routine chemistry –
commonly ordered blood chemistries (e.g., liver and kidney function tests). Special chemistry – elaborate techniques such as electrophoresis, and manual testing methods. Clinical endocrinology – the study of hormones, and diagnosis of endocrine disorders. Toxicology – the study of drugs of abuse and other chemicals. Therapeutic Drug Monitoring – measurement of therapeutic medication levels to optimize dosage. Urinalysis – chemical analysis of urine for a wide array of diseases, along with other fluids such as CSF and effusions. Fecal analysis – mostly for detection of gastrointestinal disorders. == See also == Reference ranges for common blood tests Medical technologist Clinical Biochemistry (journal) == Notes and references == == Bibliography == Burtis, Carl A.; Ashwood, Edward R.; Bruns, David E. (2006). Tietz textbook of clinical chemistry (4th ed.). Saunders. p. 2448. ISBN 978-0-7216-0189-2. == External links == American Association of Clinical Chemistry Association for Mass Spectrometry: Applications to the Clinical Lab (MSACL)
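The panel logic described under Panel tests can be sketched as a lookup against reference intervals. The interval values, analyte names, and function names below are illustrative placeholders, not clinical reference ranges:

```python
# Sketch of panel-style flagging of BMP analytes against reference
# intervals. All intervals here are illustrative placeholders, NOT
# clinical reference ranges.

REFERENCE_INTERVALS = {          # analyte -> (low, high); units in comments
    "sodium":     (135, 145),    # mmol/L (illustrative)
    "potassium":  (3.5, 5.1),    # mmol/L (illustrative)
    "chloride":   (98, 107),     # mmol/L (illustrative)
    "co2":        (22, 29),      # mmol/L (illustrative)
    "calcium":    (8.6, 10.2),   # mg/dL  (illustrative)
    "glucose":    (70, 99),      # mg/dL  (illustrative)
    "urea":       (7, 20),       # mg/dL  (illustrative)
    "creatinine": (0.6, 1.2),    # mg/dL  (illustrative)
}

def flag_panel(results):
    """Return {analyte: 'low' | 'normal' | 'high'} for one specimen."""
    flags = {}
    for analyte, value in results.items():
        low, high = REFERENCE_INTERVALS[analyte]
        if value < low:
            flags[analyte] = "low"
        elif value > high:
            flags[analyte] = "high"
        else:
            flags[analyte] = "normal"
    return flags

specimen = {"sodium": 133, "potassium": 4.2, "glucose": 210, "creatinine": 0.9}
print(flag_panel(specimen))
```

A real laboratory information system would additionally track units, age- and sex-specific intervals, and critical-value alerting; the dictionary lookup above only illustrates the flagging step.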
A socially assistive robot (SAR) aids users through social engagement and support rather than through physical tasks and interactions. == Background == The field of socially assistive robotics emerged in the early 2000s, following the emergence of the field of social robots. In contrast to social robots, SARs aid users with specific goals related to behavior change rather than serving as purely social entities. The term "Socially assistive robot" was initially defined by Maja Matarić and David Feil-Seifer in 2005. Since its inception, the field has gained substantial recognition, featuring numerous research projects, a wealth of global research publications, startup companies, and a growing array of products on the consumer market. The COVID-19 pandemic has underscored the immense potential of socially assistive robots, particularly in addressing the needs of large user populations, including children engaged in remote learning, elderly individuals grappling with loneliness, and those affected by social isolation and its associated negative consequences. == Characteristics of interaction == SARs rely on artificial intelligence (AI) to generate real-time, responsive, natural, and meaningful robot behaviors during interactions with humans. The robots employ various forms of communication, such as facial expressions, gestures, body movements, and speech. In contrast to robots intended for physical tasks, SARs are designed to support and motivate users to perform their own tasks. The tasks a user engages in can be physical (e.g., rehabilitation exercises for post-stroke users), cognitive (e.g., dementia screening for elderly users), or social (e.g., turn-taking for users with autism spectrum disorders). This complex interaction involves detecting and interpreting the user's movement, behavior, intent, goals, speech, and preferences. 
Machine learning and robot learning techniques are frequently employed to enhance the robot's understanding of the user, predict user preferences, and provide effective assistance. The effectiveness of socially assistive robots is assessed based on objective measurements of
|
{
"page_id": 75038808,
"source": null,
"title": "Socially assistive robot"
}
|
user performance and improvement resulting from the robot's assistance and support. Unlike other branches of robotics, where effectiveness depends on the robot's physical task completion, SAR measures the success of the robot based on the user's progress and achievements. This evaluation is carried out using quantitative objective metrics, such as time spent on tasks, accuracy, retention, and verbalization, as well as quantitative subjective metrics, such as user survey tools. SAR is based on a large body of evidence showing that users tend to respond more positively to interactions with physical robots than to interactions with screens. Interaction with physical robots also encourages users to learn and retain more information than screen-based interaction. This fundamental insight underlines why physical robots in SAR applications are more effective than interactions solely involving screens, tablets, or computers. == Uses and applications == SARs have been developed and validated in a wide array of applications, including healthcare, elder care, education, and training. For example, SARs have been developed to support children on the autism spectrum in acquiring and practicing social and cognitive skills, to motivate and coach stroke patients throughout their rehabilitation exercises, to monitor individuals' health (e.g., fall detection), and to encourage elderly users to be more physically and socially active. There is a concern that technophobia and lack of trust in robots will pose a barrier to the effectiveness of SARs in older adults. == References ==
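The quantitative objective metrics mentioned above (time on task, accuracy, retention) can be aggregated with a short sketch. The session-record field names are hypothetical, chosen only for illustration:

```python
# Toy aggregation of objective SAR evaluation metrics across sessions.
# The record fields (time_on_task_s, correct, attempted, retained_items,
# taught_items) are hypothetical names, not from any specific study.

def summarize_sessions(sessions):
    """Average per-session metrics across an interaction study."""
    n = len(sessions)
    return {
        "mean_time_on_task_s": sum(s["time_on_task_s"] for s in sessions) / n,
        "mean_accuracy": sum(s["correct"] / s["attempted"] for s in sessions) / n,
        "retention_rate": sum(s["retained_items"] for s in sessions)
                          / sum(s["taught_items"] for s in sessions),
    }

sessions = [
    {"time_on_task_s": 300, "correct": 8, "attempted": 10,
     "retained_items": 6, "taught_items": 10},
    {"time_on_task_s": 420, "correct": 9, "attempted": 10,
     "retained_items": 8, "taught_items": 10},
]
print(summarize_sessions(sessions))
```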
Fumiquinazolines are bio-active isolates of Aspergillus. == References == == Further reading == Cheng, Zhongbin; Lou, Lanlan; Liu, Dong; Li, Xiaodan; Proksch, Peter; Yin, Sheng; Lin, Wenhan (10 November 2016). "Versiquinazolines A–K, Fumiquinazoline-Type Alkaloids from the Gorgonian-Derived Fungus Aspergillus versicolor LZD-14-1". Journal of Natural Products. 79 (11): 2941–2952. Bibcode:2016JNAtP..79.2941C. doi:10.1021/acs.jnatprod.6b00801. PMID 27933898. Ames, BD; Haynes, SW; Gao, X; Evans, BS; Kelleher, NL; Tang, Y; Walsh, CT (11 October 2011). "Complexity generation in fungal peptidyl alkaloid biosynthesis: oxidation of fumiquinazoline A to the heptacyclic hemiaminal fumiquinazoline C by the flavoenzyme Af12070 from Aspergillus fumigatus". Biochemistry. 50 (40): 8756–69. doi:10.1021/bi201302w. PMC 3194008. PMID 21899262. Magotra, A; Kumar, M; Kushwaha, M; Awasthi, P; Raina, C; Gupta, AP; Shah, BA; Gandhi, SG; Chaubey, A (December 2017). "Epigenetic modifier induced enhancement of fumiquinazoline C production in Aspergillus fumigatus (GA-L7): an endophytic fungus from Grewia asiatica L". AMB Express. 7 (1): 43. doi:10.1186/s13568-017-0343-z. PMC 5315648. PMID 28213885. == External links == New metabolites from the marine-derived fungus Aspergillus fumigatus pubchem.ncbi
|
{
"page_id": 40763481,
"source": null,
"title": "Fumiquinazoline"
}
|
A growing self-organizing map (GSOM) is a growing variant of a self-organizing map (SOM). The GSOM was developed to address the issue of identifying a suitable map size in the SOM. It starts with a minimal number of nodes (usually four) and grows new nodes on the boundary based on a heuristic. Through a value called the spread factor (SF), the data analyst can control the growth of the GSOM. All the starting nodes of the GSOM are boundary nodes, i.e. each node has the freedom to grow in its own direction at the beginning. (Fig. 1) New nodes are grown from the boundary nodes. Once a node is selected for growing, new nodes are grown in all of its free neighboring positions. The figure shows the three possible node growth options for a rectangular GSOM. == The algorithm == The GSOM process is as follows: Initialization phase: Initialize the weight vectors of the starting nodes (usually four) with random numbers between 0 and 1. Calculate the growth threshold \(GT\) for the given data set of dimension \(D\) according to the spread factor \(SF\) using the formula \(GT=-D\times \ln(SF)\). Growing phase: Present input to the network. Determine the weight vector that is closest to the input vector mapped to the current feature map (the winner), using Euclidean distance (as in the SOM). This step can be summarized as: find \(q'\) such that \(\left|v-w_{q'}\right|\leq \left|v-w_{q}\right|\;\forall q\in \mathbb {N}\), where \(v\), \(w\) are the input and
|
{
"page_id": 30867546,
"source": null,
"title": "Growing self-organizing map"
}
|
weight vectors respectively, \(q\) is the position vector for nodes, and \(\mathbb {N}\) is the set of natural numbers. The weight vector adaptation is applied only to the neighborhood of the winner and the winner itself. The neighborhood is a set of neurons around the winner, but in the GSOM the starting neighborhood selected for weight adaptation is smaller than in the SOM (localized weight adaptation). The amount of adaptation (the learning rate) is also reduced exponentially over the iterations. Even within the neighborhood, weights that are closer to the winner are adapted more than those further away. The weight adaptation can be described by \[ w_{j}(k+1)={\begin{cases}w_{j}(k)&{\text{if }}j\notin N_{k+1}\\w_{j}(k)+LR(k)\times (x_{k}-w_{j}(k))&{\text{if }}j\in N_{k+1}\end{cases}} \] where the learning rate \(LR(k)\), \(k\in \mathbb {N}\), is a sequence of positive parameters converging to zero as \(k\to \infty\); \(w_{j}(k)\) and \(w_{j}(k+1)\) are the weight vectors of node \(j\) before and after the adaptation; and \(N_{k+1}\) is the neighbourhood of the winning neuron at the \((k+1)\)th iteration. The decreasing value of \(LR(k)\) in the GSOM depends on the number of nodes existing in the map at time \(k\). Increase the error
value of the winner (the error value is the difference between the input vector and the winner's weight vector). When \(TE_{i}>GT\) (where \(TE_{i}\) is the total error of node \(i\) and \(GT\) is the growth threshold): grow new nodes if \(i\) is a boundary node, or distribute weights to neighbors if \(i\) is a non-boundary node. Initialize the new node weight vectors to match the neighboring node weights. Initialize the learning rate (\(LR\)) to its starting value. Repeat steps 2–7 until all inputs have been presented and node growth is reduced to a minimum level. Smoothing phase: Reduce the learning rate and fix a small starting neighborhood. Find the winner and adapt the weights of the winner and its neighbors in the same way as in the growing phase. == Applications == The GSOM can be used for many preprocessing tasks in data mining, for nonlinear dimensionality reduction, for approximation of principal curves and manifolds, and for clustering and classification. It often gives a better representation of the data geometry than the SOM (see the classical benchmark for principal curves on the left). == References == == Bibliography == Liu, Y.; Weisberg, R.H.; He, R. (2006). "Sea surface temperature patterns on the West Florida Shelf using growing hierarchical self-organizing maps". Journal of Atmospheric and Oceanic Technology. 23 (2): 325–338. Bibcode:2006JAtOT..23..325L. doi:10.1175/JTECH1848.1. hdl:1912/4186. Hsu, A.; Tang, S.; Halgamuge, S. K. (2003). "An unsupervised hierarchical dynamic self-organizing approach to cancer class discovery and marker gene identification in microarray data". Bioinformatics. 19 (16): 2131–2140. doi:10.1093/bioinformatics/btg296. PMID 14594719. Alahakoon, D.; Halgamuge, S.K.; Sirinivasan, B. (2000). "Dynamic Self Organizing Maps With Controlled Growth for Knowledge Discovery". IEEE Transactions on Neural Networks. 11 (3): 601–614. doi:10.1109/72.846732.
PMID 18249788. == See
also == Self-organizing map Time Adaptive Self-Organizing Map Elastic map Artificial intelligence Machine learning Data mining Nonlinear dimensionality reduction
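The growth threshold, winner selection, localized adaptation, and boundary growth described in the algorithm section can be sketched as follows. The dict-based grid, the Manhattan-distance neighborhood rule, and the learning-rate value are simplifications for illustration, not a full GSOM implementation:

```python
# Minimal sketch of the GSOM quantities described above: the growth
# threshold GT = -D * ln(SF), winner selection by Euclidean distance,
# localized weight adaptation, and growth at boundary nodes. The grid
# is simplified to a dict keyed by integer (x, y) positions.
import math

def growth_threshold(dim, spread_factor):
    return -dim * math.log(spread_factor)

def find_winner(nodes, v):
    """nodes: {(x, y): weight_vector}; return position of the closest node."""
    return min(nodes, key=lambda p: math.dist(nodes[p], v))

def adapt(nodes, winner, v, lr):
    """Move the winner and its direct grid neighbours toward input v."""
    wx, wy = winner
    for (x, y), w in nodes.items():
        if abs(x - wx) + abs(y - wy) <= 1:      # winner + direct neighbours
            nodes[(x, y)] = [wi + lr * (vi - wi) for wi, vi in zip(w, v)]

def grow_if_needed(nodes, errors, winner, gt):
    """Grow new nodes in all free neighbouring positions of a boundary node."""
    if errors[winner] <= gt:
        return
    x, y = winner
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if (nx, ny) not in nodes:
            nodes[(nx, ny)] = list(nodes[winner])   # inherit winner's weights

# Four starting nodes, 2-D data, SF = 0.5
nodes = {(0, 0): [0.1, 0.2], (0, 1): [0.3, 0.4],
         (1, 0): [0.5, 0.6], (1, 1): [0.7, 0.8]}
gt = growth_threshold(dim=2, spread_factor=0.5)
w = find_winner(nodes, [0.0, 0.0])
adapt(nodes, w, [0.0, 0.0], lr=0.3)
errors = {pos: 0.0 for pos in nodes}
errors[w] = gt + 1.0                  # pretend the winner's error exceeded GT
grow_if_needed(nodes, errors, w, gt)  # winner is on the boundary: grows nodes
```

Initializing a new node to its parent's weights is one simple choice; published GSOM variants interpolate from several neighbors.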
In artificial intelligence, eager learning is a learning method in which the system tries to construct a general, input-independent target function during training of the system, as opposed to lazy learning, where generalization beyond the training data is delayed until a query is made to the system. The main advantage gained in employing an eager learning method, such as an artificial neural network, is that the target function will be approximated globally during training, thus requiring much less space than using a lazy learning system. Eager learning systems also deal much better with noise in the training data. Eager learning is an example of offline learning, in which post-training queries to the system have no effect on the system itself, and thus the same query to the system will always produce the same result. The main disadvantage with eager learning is that it is generally unable to provide good local approximations in the target function. == References ==
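The eager/lazy contrast above can be made concrete with a toy sketch. The data, the least-squares model, and the 1-nearest-neighbour lookup are illustrative choices, not from the article:

```python
# Contrast sketch: an eager learner fits a global model during training
# and can discard the data; a lazy learner (here 1-nearest-neighbour)
# stores the data and defers all work to query time. Toy 1-D data.

def train_eager(xs, ys):
    """Least-squares line y = a*x + b; returns only the global model."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return lambda x: a * x + b          # training data no longer needed

def train_lazy(xs, ys):
    """1-NN: keeps the data; generalization happens per query."""
    data = list(zip(xs, ys))
    return lambda x: min(data, key=lambda p: abs(p[0] - x))[1]

xs, ys = [0, 1, 2, 3], [0.1, 1.9, 4.1, 5.9]
eager, lazy = train_eager(xs, ys), train_lazy(xs, ys)
print(eager(1.5), lazy(1.5))
```

The eager model interpolates globally between training points (illustrating both the smaller memory footprint and the weaker local approximation), while the lazy model simply returns the label of the nearest stored example.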
|
{
"page_id": 10747995,
"source": null,
"title": "Eager learning"
}
|
== Best selling pharmaceuticals of U.S. market == The top 5 best selling pharmaceuticals 2015–2019. Sales in billion USD. == Best selling pharmaceuticals of 2017/18 == The top 16 best selling pharmaceuticals of 2017/18. == Largest selling pharmaceutical products of 2015 == Drugs with sales above $5 billion in 2015 included: == Best selling pharmaceuticals of 2013 == For the fourth quarter of 2013, the largest selling drugs were: == See also == List of drugs Lists about the pharmaceutical industry == References ==
|
{
"page_id": 12976220,
"source": null,
"title": "List of largest selling pharmaceutical products"
}
|
The molecular formula C30H46NO7P (molar mass: 563.66 g/mol, exact mass: 563.3012 u) may refer to: Ceronapril, a phosphonate ACE inhibitor that was never marketed Fosinopril, an angiotensin converting enzyme (ACE) inhibitor
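The stated molar mass can be checked by summing standard atomic weights over the formula. The weights below are rounded to three decimals, so the result matches the quoted 563.66 g/mol only to within that rounding:

```python
# Check of the stated molar mass: parse the molecular formula with a
# regular expression and sum standard atomic weights (rounded values).
import re

ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "P": 30.974}

def molar_mass(formula):
    mass = 0.0
    for symbol, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if symbol:                                   # skip empty matches
            mass += ATOMIC_WEIGHT[symbol] * int(count or 1)
    return mass

mass = molar_mass("C30H46NO7P")
print(mass)   # agrees with the stated 563.66 g/mol within rounding
```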
|
{
"page_id": 40960096,
"source": null,
"title": "C30H46NO7P"
}
|
Danielle S. McNamara is an educational researcher known for her theoretical and empirical work with reading comprehension and the development of game-based literacy technologies. She is professor of psychology and senior research scientist at Arizona State University. She has previously held positions at University of Memphis, Old Dominion University, and University of Colorado, Boulder. In 2015, McNamara received the Distinguished Cognitive Scientist Award from the University of California, Merced. She was selected by the American Educational Research Association (AERA) as a 2018 AERA Fellow in acknowledgement of her theoretical and research contributions to the field of literacy and learning. McNamara is the founding editor of Technology, Mind, and Behavior, an open-access, peer-reviewed journal published by the American Psychological Association (APA). She has also previously served as president of the Society for Text and Discourse, and serves on the editorial board of Discourse Processes, a multidisciplinary journal published by Taylor & Francis. == Biography == McNamara received her B.A. in Linguistics from the University of Kansas in 1982, and her M.A. in Clinical Psychology from Wichita State University, Kansas, in 1989. In 1992, she earned her Ph.D. in Cognitive Psychology at the University of Colorado, Boulder. During her Ph.D., McNamara conducted research on learning theories with Alice F. Healy and reading comprehension with Walter Kintsch. She moved into educational research after receiving two grants from the James S. McDonnell Foundation to apply cognitive psychology principles to education. McNamara is the director of the Science of Learning and Educational Technology (SoLET) Lab, where she and her team research and develop intelligent tutoring systems and natural language processing software. 
SoLET learning technologies like iSTART, a game-based tool to help readers develop self-explanation strategies, and Writing Pal, an intelligent writing tutor with game-based writing guides and automatic feedback, are free to access online through
|
{
"page_id": 62193764,
"source": null,
"title": "Danielle S. McNamara"
}
|
McNamara's Adaptive Literacy website. iSTART and Writing Pal are funded by the U.S. Department of Education through the Institute of Education Sciences. == Research == McNamara's research focuses on the development of intelligent tutoring systems that use game-based exercises to increase learner motivation when practicing reading and writing strategies. She developed the intelligent tutoring system Interactive Strategy Training for Active Reading and Thinking (iSTART), an online application based on the idea of Self-Explanation Reading Training (SERT), which coaches learners to use active reading strategies. iSTART has been found to be as effective as live, one-on-one human tutoring of SERT in improving students' quality of self-explanation when reading. With Arthur Graesser, McNamara developed Coh-Metrix, a computational tool for evaluating text readability based on measuring levels of cohesion, world knowledge, language and discourse characteristics. Coh-Metrix has made it easier for researchers and publishers to assess text difficulty and cohesion without relying on previous methods that focused primarily on word and sentence length. McNamara has authored and edited five books spanning the topics of reading comprehension, linguistics, educational technologies, and cognition. These include Reading Comprehension Strategies: Theories, Interventions, and Technologies; Automated Evaluation of Text and Discourse with Coh-Metrix with Arthur Graesser, Philip M. McCarthy, and Zhiqiang Cai; Handbook of Latent Semantic Analysis with Thomas K. Landauer, Simon Dennis, and Walter Kintsch; Adaptive Educational Technologies for Literacy Instruction with Scott A. Crossley; and Cognition in Education (Ed Psych Insights) with Matthew T. McCrudden. == Representative Publications == McNamara, D.S. (2004). SERT: Self-explanation reading training. Discourse Processes, 38(1), 1–30. McNamara, D.S. (2007). Reading comprehension strategies: Theories, interventions, and technologies. 
Psychology Press. McNamara, D.S., & Kintsch, W. (1996). Learning from texts: Effects of prior knowledge and text coherence. Discourse Processes, 22(3), 247–288. McNamara, D.S., Kintsch, E., Songer, N.B., & Kintsch, W. (1996). Are good texts always
better? Interactions of text coherence, background knowledge, and levels of understanding in learning from text. Cognition and Instruction, 14(1), 1–43. McNamara, D.S., & Magliano, J. (2009). Toward a comprehensive model of comprehension. Psychology of Learning and Motivation, 51, 297–384. == References == == External links == Science of Learning and Educational Technology (SoLET) Lab Adaptive Literacy Danielle S. McNamara publications indexed by Google Scholar
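Coh-Metrix computes many validated indices; as a deliberately simplified stand-in for just one idea behind referential cohesion (lexical overlap between adjacent sentences), a toy metric might look like the following. This is not how Coh-Metrix itself is implemented:

```python
# Toy illustration of one idea behind cohesion metrics: lexical overlap
# between adjacent sentences. A deliberately simplified stand-in, not
# the actual Coh-Metrix computation.

def adjacent_overlap(sentences):
    """Mean Jaccard overlap of word types between consecutive sentences."""
    tokenized = [set(s.lower().split()) for s in sentences]
    overlaps = [len(a & b) / len(a | b)
                for a, b in zip(tokenized, tokenized[1:])]
    return sum(overlaps) / len(overlaps)

high = ["the cell divides", "the cell then grows"]
low = ["the cell divides", "rainfall varied widely"]
print(adjacent_overlap(high), adjacent_overlap(low))
```

A higher score indicates more word reuse across sentence boundaries, the kind of surface cue that cohesion-based readability measures build on (alongside syntax, word frequency, and discourse features).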
Exact diagonalization (ED) is a numerical technique used in physics to determine the eigenstates and energy eigenvalues of a quantum Hamiltonian. In this technique, a Hamiltonian for a discrete, finite system is expressed in matrix form and diagonalized using a computer. Exact diagonalization is only feasible for systems with a few tens of particles, due to the exponential growth of the Hilbert space dimension with the size of the quantum system. It is frequently employed to study lattice models, including the Hubbard model, Ising model, Heisenberg model, t-J model, and SYK model. == Expectation values from exact diagonalization == After determining the eigenstates \(|n\rangle\) and energies \(\epsilon _{n}\) of a given Hamiltonian, exact diagonalization can be used to obtain expectation values of observables. For example, if \({\mathcal {O}}\) is an observable, its thermal expectation value is \[ \langle {\mathcal {O}}\rangle ={\frac {1}{Z}}\sum _{n}e^{-\beta \epsilon _{n}}\langle n|{\mathcal {O}}|n\rangle , \] where \(Z=\sum _{n}e^{-\beta \epsilon _{n}}\) is the partition function. If the observable can be written down in the initial basis for the problem, then this sum can be evaluated after transforming to the basis of eigenstates. Green's functions may be evaluated similarly. For example, the retarded Green's function \(G^{R}(t)=-i\theta (t)\langle [A(t),B(0)]\rangle\) can be written \[ G^{R}(t)=-{\frac {i\theta (t)}{Z}}\sum _{n,m}\left(e^{-\beta \epsilon _{n}}-e^{-\beta \epsilon _{m}}\right)\langle n|A(0)|m\rangle \langle m|B(0)|n\rangle e^{-i(\epsilon _{m}-\epsilon _{n})t/\hbar }. \]
|
{
  "page_id": 61341798,
  "source": null,
  "title": "Exact diagonalization"
}
|
Exact diagonalization can also be used to determine the time evolution of a system after a quench. Suppose the system has been prepared in an initial state \(|\psi \rangle\), and then for time \(t>0\) evolves under a new Hamiltonian \({\mathcal {H}}\). The state at time \(t\) is \[ |\psi (t)\rangle =\sum _{n}e^{-i\epsilon _{n}t/\hbar }\langle n|\psi (0)\rangle |n\rangle . \] == Memory requirements == The dimension of the Hilbert space describing a quantum system scales exponentially with system size. For example, consider a system of \(N\) spins localized on fixed lattice sites. The dimension of the on-site basis is 2, because the state of each spin can be described as a superposition of spin-up and spin-down, denoted \(\left|\uparrow \right\rangle\) and \(\left|\downarrow \right\rangle\). The full system has dimension \(2^{N}\), and the Hamiltonian represented as a matrix has size \(2^{N}\times 2^{N}\). This implies that computation time and memory requirements scale very unfavorably in exact diagonalization. In practice, the memory requirements can be reduced by taking advantage of the symmetry of the problem, imposing conservation laws, working with sparse matrices, or using other techniques. == Comparison with other
techniques == Exact diagonalization is useful for extracting exact information about finite systems. However, small systems are often studied to gain insight into infinite lattice systems. If the diagonalized system is too small, its properties will not reflect the properties of the system in the thermodynamic limit, and the simulation is said to suffer from finite size effects. Unlike some other exact theory techniques, such as auxiliary-field Monte Carlo, exact diagonalization obtains Green's functions directly in real time, as opposed to imaginary time. Consequently, exact diagonalization results do not need to be numerically analytically continued. This is an advantage, because numerical analytic continuation is an ill-posed and difficult optimization problem. == Applications == Exact diagonalization can be used: as an impurity solver for dynamical mean-field theory techniques; combined with finite-size scaling, to estimate the ground state energy and critical exponents of the 1D transverse-field Ising model; to study various properties of the 2D Heisenberg model in a magnetic field, including antiferromagnetism and spin-wave velocity; to study the Drude weight of the 2D Hubbard model; to study out-of-time-order correlations (OTOCs) and scrambling in the SYK model; and to simulate resonant x-ray spectra of strongly correlated materials. == Implementations == Numerous software packages implementing exact diagonalization of quantum Hamiltonians exist. These include ALPS, DoQo, EdLib, edrixs, Quanty and many others. == Generalizations == Exact diagonalization results from many small clusters can be combined to obtain more accurate information about systems in the thermodynamic limit using the numerical linked cluster expansion. == See also == Lanczos algorithm == References == == External links == Quantum Simulation/Exact diagonalization ALPS full diagonalization tutorial Archived 2019-07-23 at the Wayback Machine Exact Diagonalization and Lanczos Method in E.
Pavarini, E. Koch and S. Zhang (eds.): Many-Body Methods for Real Materials, Jülich 2019, ISBN 978-3-95806-400-3
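As a self-contained illustration of the procedure (not tied to any particular package above), the following sketch diagonalizes the two-site spin-1/2 Heisenberg Hamiltonian with a plain Jacobi rotation routine and then evaluates the thermal average from the resulting spectrum. The Jacobi implementation is a minimal textbook version, not production code:

```python
# Minimal exact-diagonalization sketch: diagonalize the two-site
# spin-1/2 Heisenberg Hamiltonian H = J S1.S2 (J = 1, hbar = 1) with a
# cyclic Jacobi eigenvalue routine, then evaluate the thermal average
# <H> = (1/Z) sum_n e^{-beta*eps_n} eps_n from the spectrum.
import math

def jacobi_eigenvalues(A, sweeps=50):
    """Eigenvalues of a real symmetric matrix via cyclic Jacobi rotations."""
    n = len(A)
    A = [row[:] for row in A]
    for _ in range(sweeps):
        off = sum(A[i][j] ** 2 for i in range(n) for j in range(n) if i != j)
        if off < 1e-24:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if A[p][q] == 0.0:
                    continue
                # rotation angle that zeroes the (p, q) entry
                theta = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
                c, s = math.cos(theta), math.sin(theta)
                for k in range(n):            # rotate rows p and q
                    Apk, Aqk = A[p][k], A[q][k]
                    A[p][k], A[q][k] = c * Apk - s * Aqk, s * Apk + c * Aqk
                for k in range(n):            # rotate columns p and q
                    Akp, Akq = A[k][p], A[k][q]
                    A[k][p], A[k][q] = c * Akp - s * Akq, s * Akp + c * Akq
    return sorted(A[i][i] for i in range(n))

def thermal_average(energies, beta):
    """<H> from the eigenvalues, as in the expectation-value formula above."""
    weights = [math.exp(-beta * e) for e in energies]
    return sum(w * e for w, e in zip(weights, energies)) / sum(weights)

# H = J S1.S2 in the {uu, ud, du, dd} basis, J = 1
J = 1.0
H = [[J / 4, 0.0, 0.0, 0.0],
     [0.0, -J / 4, J / 2, 0.0],
     [0.0, J / 2, -J / 4, 0.0],
     [0.0, 0.0, 0.0, J / 4]]
energies = jacobi_eigenvalues(H)      # singlet -3J/4, triplet +J/4
print(energies, thermal_average(energies, beta=2.0))
```

For realistic Hilbert-space dimensions one would use an optimized library routine (or the Lanczos algorithm for low-lying states) rather than dense Jacobi sweeps, but the workflow is the same: build the matrix, diagonalize, then sum over the spectrum.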
The tetrad formalism is an approach to general relativity that generalizes the choice of basis for the tangent bundle from a coordinate basis to the less restrictive choice of a local basis, i.e. a locally defined set of four linearly independent vector fields called a tetrad or vierbein. It is a special case of the more general idea of a vielbein formalism, which is set in (pseudo-)Riemannian geometry. This article as currently written makes frequent mention of general relativity; however, almost everything it says is equally applicable to (pseudo-)Riemannian manifolds in general, and even to spin manifolds. Most statements hold simply by substituting arbitrary \(n\) for \(n=4\). In German, "vier" translates to "four", "viel" to "many", and "bein" to "leg". The general idea is to write the metric tensor as the product of two vielbeins, one on the left, and one on the right. The effect of the vielbeins is to change the coordinate system used on the tangent manifold to one that is simpler or more suitable for calculations. It is frequently the case that the vielbein coordinate system is orthonormal, as that is generally the easiest to use. Most tensors become simple or even trivial in this coordinate system; thus the complexity of most expressions is revealed to be an artifact of the choice of coordinates, rather than an innate property or physical effect. That is, as a formalism, it does not alter predictions; it is rather a calculational technique. The advantage of the tetrad formalism over the standard coordinate-based approach to general relativity lies in the ability to choose the tetrad basis to reflect important physical aspects of the spacetime. The abstract index notation denotes tensors as if they were represented by their coefficients with respect to a fixed local
|
{
"page_id": 11141222,
"source": null,
"title": "Tetrad formalism"
}
|
tetrad. Compared to a completely coordinate-free notation, which is often conceptually clearer, it allows an easy and computationally explicit way to denote contractions. The significance of the tetradic formalism appears in the Einstein–Cartan formulation of general relativity. The tetradic formalism of the theory is more fundamental than its metric formulation, as one cannot convert between the tetradic and metric formulations of the fermionic actions, despite this being possible for bosonic actions. This is effectively because Weyl spinors can be very naturally defined on a Riemannian manifold, and their natural setting leads to the spin connection. Those spinors take form in the vielbein coordinate system, and not in the manifold coordinate system. The privileged tetradic formalism also appears in the deconstruction of higher dimensional Kaluza–Klein gravity theories and massive gravity theories, in which the extra dimension(s) is/are replaced by a series of N lattice sites such that the higher dimensional metric is replaced by a set of interacting metrics that depend only on the 4D components. Vielbeins commonly appear in other general settings in physics and mathematics. Vielbeins can be understood as solder forms. == Mathematical formulation == The tetrad formulation is a special case of a more general formulation, known as the vielbein or n-bein formulation, with n=4. "Vielbein" is spelt with an "l", not an "r": in German, "viel" means "many", not to be confused with "vier", meaning "four". In the vielbein formalism, an open cover of the spacetime manifold \(M\) is chosen, along with a local basis for each of those open sets: a set of \(n\) independent vector fields \(e_{a}=e_{a}{}^{\mu }\partial _{\mu }\) for \(a=1,\ldots ,n\) that together span the \(n\)-dimensional tangent bundle
at each point in the set. Dually, a vielbein (or tetrad in 4 dimensions) determines (and is determined by) a dual co-vielbein (co-tetrad) — a set of \(n\) independent 1-forms \(e^{a}=e^{a}{}_{\mu }dx^{\mu }\) such that \[ e^{a}(e_{b})=e^{a}{}_{\mu }e_{b}{}^{\mu }=\delta _{b}^{a}, \] where \(\delta _{b}^{a}\) is the Kronecker delta. A vielbein is usually specified by its coefficients \(e^{\mu }{}_{a}\) with respect to a coordinate basis, despite the choice of a set of (local) coordinates \(x^{\mu }\) being unnecessary for the specification of a tetrad. Each covector is a solder form. From the point of view of the differential geometry of fiber bundles, the \(n\) vector fields \(\{e_{a}\}_{a=1\dots n}\) define a section of the frame bundle, i.e. a parallelization of \(U\subset M\), which is equivalent to an isomorphism \(TU\cong U\times \mathbb {R} ^{n}\). Since not every manifold is parallelizable, a vielbein can generally only be chosen locally (i.e. only on a coordinate chart \(U\) and not all of \(M\)). All tensors of the theory can be expressed in the vector and covector basis, by expressing them as linear combinations of members of the (co)vielbein. For example, the spacetime metric tensor can be transformed from a coordinate basis to the tetrad basis. Popular tetrad bases in general relativity include orthonormal tetrads and null tetrads. Null tetrads are composed of four null vectors, so are used frequently in problems dealing with radiation, and are the basis
|
{
"page_id": 11141222,
"source": null,
"title": "Tetrad formalism"
}
|
of the Newman–Penrose formalism and the GHP formalism. == Relation to standard formalism == The standard formalism of differential geometry (and general relativity) consists of using the coordinate tetrad in the tetrad formalism. The coordinate tetrad is the canonical set of vectors associated with the coordinate chart. The coordinate tetrad is commonly denoted { ∂ μ } {\displaystyle \{\partial _{\mu }\}} whereas the dual cotetrad is denoted { d x μ } {\displaystyle \{dx^{\mu }\}} . These tangent vectors are usually defined as directional derivative operators: given a chart φ = ( φ 1 , … , φ n ) {\displaystyle {\varphi =(\varphi ^{1},\ldots ,\varphi ^{n})}} which maps a subset of the manifold into coordinate space R n {\displaystyle \mathbb {R} ^{n}} , and any scalar field f {\displaystyle f} , the coordinate vectors are such that: ∂ μ [ f ] ≡ ∂ ( f ∘ φ − 1 ) ∂ x μ . {\displaystyle \partial _{\mu }[f]\equiv {\frac {\partial (f\circ \varphi ^{-1})}{\partial x^{\mu }}}.} The definition of the cotetrad uses the usual abuse of notation d x μ = d φ μ {\displaystyle dx^{\mu }=d\varphi ^{\mu }} to define covectors (1-forms) on M {\displaystyle M} . The involvement of the coordinate tetrad is not usually made explicit in the standard formalism. In the tetrad formalism, instead of writing tensor equations out fully (including tetrad elements and tensor products ⊗ {\displaystyle \otimes } as above) only components of the tensors are mentioned. For example, the metric is written as " g a b {\displaystyle g_{ab}} ". When the tetrad is left unspecified, this becomes a matter of specifying only the type of the tensor; this approach is called abstract index notation. It allows contractions between tensors to be specified easily by repeating indices, as in the Einstein summation convention. Changing tetrads is a routine operation
|
{
"page_id": 11141222,
"source": null,
"title": "Tetrad formalism"
}
|
in the standard formalism, as it is involved in every coordinate transformation (i.e., changing from one coordinate tetrad basis to another). Switching between multiple coordinate charts is necessary because, except in trivial cases, it is not possible for a single coordinate chart to cover the entire manifold. Changing to and between general tetrads is much the same and equally necessary (except for parallelizable manifolds). Any tensor can locally be written in terms of this coordinate tetrad or a general (co)tetrad. For example, the metric tensor g {\displaystyle \mathbf {g} } can be expressed as: g = g μ ν d x μ d x ν where g μ ν = g ( ∂ μ , ∂ ν ) . {\displaystyle \mathbf {g} =g_{\mu \nu }dx^{\mu }dx^{\nu }\qquad {\text{where}}~g_{\mu \nu }=\mathbf {g} (\partial _{\mu },\partial _{\nu }).} (Here we use the Einstein summation convention). Likewise, the metric can be expressed with respect to an arbitrary (co)tetrad as g = g a b e a e b where g a b = g ( e a , e b ) . {\displaystyle \mathbf {g} =g_{ab}e^{a}e^{b}\qquad {\text{where}}~g_{ab}=\mathbf {g} \left(e_{a},e_{b}\right).} Here, the choice of alphabet (Latin or Greek) for the index variables distinguishes the applicable basis. We can translate from a general co-tetrad to the coordinate co-tetrad by expanding the covector e a = e a μ d x μ {\displaystyle e^{a}=e^{a}{}_{\mu }dx^{\mu }} . We then get g = g a b e a e b = g a b e a μ e b ν d x μ d x ν = g μ ν d x μ d x ν {\displaystyle \mathbf {g} =g_{ab}e^{a}e^{b}=g_{ab}e^{a}{}_{\mu }e^{b}{}_{\nu }dx^{\mu }dx^{\nu }=g_{\mu \nu }dx^{\mu }dx^{\nu }} from which it follows that g μ ν = g a b e a μ e b ν
|
{
"page_id": 11141222,
"source": null,
"title": "Tetrad formalism"
}
|
{\displaystyle g_{\mu \nu }=g_{ab}e^{a}{}_{\mu }e^{b}{}_{\nu }} . Likewise expanding d x μ = e μ a e a {\displaystyle dx^{\mu }=e^{\mu }{}_{a}e^{a}} with respect to the general tetrad, we get g = g μ ν d x μ d x ν = g μ ν e μ a e ν b e a e b = g a b e a e b {\displaystyle \mathbf {g} =g_{\mu \nu }dx^{\mu }dx^{\nu }=g_{\mu \nu }e^{\mu }{}_{a}e^{\nu }{}_{b}e^{a}e^{b}=g_{ab}e^{a}e^{b}} which shows that g a b = g μ ν e μ a e ν b {\displaystyle g_{ab}=g_{\mu \nu }e^{\mu }{}_{a}e^{\nu }{}_{b}} . === Manipulation of indices === Manipulation of tetrad coefficients shows that abstract index formulas can, in principle, be obtained from tensor formulas with respect to a coordinate tetrad by "replacing Greek with Latin indices". However, care must be taken to ensure that a coordinate tetrad formula defines a genuine tensor when differentiation is involved. Since the coordinate vector fields have vanishing Lie bracket (i.e. commute: ∂ μ ∂ ν = ∂ ν ∂ μ {\displaystyle \partial _{\mu }\partial _{\nu }=\partial _{\nu }\partial _{\mu }} ), naive substitutions of formulas that correctly compute tensor coefficients with respect to a coordinate tetrad may not correctly define a tensor with respect to a general tetrad because the Lie bracket is non-vanishing: [ e a , e b ] ≠ 0 {\displaystyle [e_{a},e_{b}]\neq 0} . Thus, it is sometimes said that tetrad coordinates provide a non-holonomic basis. For example, the Riemann curvature tensor is defined for general vector fields X , Y {\displaystyle X,Y} by R ( X , Y ) = ( ∇ X ∇ Y − ∇ Y ∇ X − ∇ [ X , Y ] ) {\displaystyle R(X,Y)=\left(\nabla _{X}\nabla _{Y}-\nabla _{Y}\nabla _{X}-\nabla _{[X,Y]}\right)} . In a coordinate tetrad this gives tensor coefficients R
|
{
"page_id": 11141222,
"source": null,
"title": "Tetrad formalism"
}
|
ν σ τ μ = d x μ ( ( ∇ σ ∇ τ − ∇ τ ∇ σ ) ∂ ν ) . {\displaystyle R_{\ \nu \sigma \tau }^{\mu }=dx^{\mu }\left((\nabla _{\sigma }\nabla _{\tau }-\nabla _{\tau }\nabla _{\sigma })\partial _{\nu }\right).} The naive "Greek to Latin" substitution of the latter expression R b c d a = e a ( ( ∇ c ∇ d − ∇ d ∇ c ) e b ) (wrong!) {\displaystyle R_{\ bcd}^{a}=e^{a}\left((\nabla _{c}\nabla _{d}-\nabla _{d}\nabla _{c})e_{b}\right)\qquad {\text{(wrong!)}}} is incorrect because for fixed c and d, ( ∇ c ∇ d − ∇ d ∇ c ) {\displaystyle \left(\nabla _{c}\nabla _{d}-\nabla _{d}\nabla _{c}\right)} is, in general, a first order differential operator rather than a zeroth order operator which defines a tensor coefficient. Substituting a general tetrad basis in the abstract formula we find the proper definition of the curvature in abstract index notation, however: R b c d a = e a ( ( ∇ c ∇ d − ∇ d ∇ c − f c d e ∇ e ) e b ) {\displaystyle R_{\ bcd}^{a}=e^{a}\left((\nabla _{c}\nabla _{d}-\nabla _{d}\nabla _{c}-f_{cd}{}^{e}\nabla _{e})e_{b}\right)} where [ e a , e b ] = f a b c e c {\displaystyle [e_{a},e_{b}]=f_{ab}{}^{c}e_{c}} . Note that the expression ( ∇ c ∇ d − ∇ d ∇ c − f c d e ∇ e ) {\displaystyle \left(\nabla _{c}\nabla _{d}-\nabla _{d}\nabla _{c}-f_{cd}{}^{e}\nabla _{e}\right)} is indeed a zeroth order operator, hence (the (c d)-component of) a tensor. Since it agrees with the coordinate expression for the curvature when specialised to a coordinate tetrad it is clear, even without using the abstract definition of the curvature, that it defines the same tensor as the coordinate basis expression. == Example: Lie groups == Given a vector (or covector) in the tangent
|
{
"page_id": 11141222,
"source": null,
"title": "Tetrad formalism"
}
|
(or cotangent) manifold, the exponential map describes the corresponding geodesic of that tangent vector. Writing X ∈ T M {\displaystyle X\in TM} , the parallel transport of a differential corresponds to e − X d e X = d X − 1 2 ! [ X , d X ] + 1 3 ! [ X , [ X , d X ] ] − 1 4 ! [ X , [ X , [ X , d X ] ] ] + ⋯ {\displaystyle e^{-X}de^{X}=dX-{\frac {1}{2!}}\left[X,dX\right]+{\frac {1}{3!}}[X,[X,dX]]-{\frac {1}{4!}}[X,[X,[X,dX]]]+\cdots } The above can be readily verified simply by taking X {\displaystyle X} to be a matrix. For the special case of a Lie algebra, the X {\displaystyle X} can be taken to be an element of the algebra, the exponential is the exponential map of a Lie group, and group elements correspond to the geodesics of the tangent vector. Choosing a basis e i {\displaystyle e_{i}} for the Lie algebra and writing X = X i e i {\displaystyle X=X^{i}e_{i}} for some functions X i , {\displaystyle X^{i},} the commutators can be explicitly written out. One readily computes that e − X d e X = d X i e i − 1 2 ! X i d X j f i j k e k + 1 3 ! X i X j d X k f j k l f i l m e m − ⋯ {\displaystyle e^{-X}de^{X}=dX^{i}e_{i}-{\frac {1}{2!}}X^{i}dX^{j}{f_{ij}}^{k}e_{k}+{\frac {1}{3!}}X^{i}X^{j}dX^{k}{f_{jk}}^{l}{f_{il}}^{m}e_{m}-\cdots } for [ e i , e j ] = f i j k e k {\displaystyle [e_{i},e_{j}]={f_{ij}}^{k}e_{k}} the structure constants of the Lie algebra. The series can be written more compactly as e − X d e X = e i W i j d X j {\displaystyle e^{-X}de^{X}=e_{i}{W^{i}}_{j}dX^{j}} with the infinite series W = ∑
|
{
"page_id": 11141222,
"source": null,
"title": "Tetrad formalism"
}
|
n = 0 ∞ ( − 1 ) n M n ( n + 1 ) ! = ( I − e − M ) M − 1 . {\displaystyle W=\sum _{n=0}^{\infty }{\frac {(-1)^{n}M^{n}}{(n+1)!}}=(I-e^{-M})M^{-1}.} Here, M {\displaystyle M} is a matrix whose matrix elements are M j k = X i f i j k {\displaystyle {M_{j}}^{k}=X^{i}{f_{ij}}^{k}} . The matrix W {\displaystyle W} is then the vielbein; it expresses the differential d X j {\displaystyle dX^{j}} in terms of the "flat coordinates" (orthonormal, at that) e i {\displaystyle e_{i}} . Given some map N → G {\displaystyle N\to G} from some manifold N {\displaystyle N} to some Lie group G {\displaystyle G} , the metric tensor on the manifold N {\displaystyle N} becomes the pullback of the metric tensor B m n {\displaystyle B_{mn}} on the Lie group G {\displaystyle G} : g i j = W i m B m n W n j {\displaystyle g_{ij}={W_{i}}^{m}B_{mn}{W^{n}}_{j}} The metric tensor B m n {\displaystyle B_{mn}} on the Lie group is the Cartan metric, aka the Killing form. Note that, as a matrix, the second W is the transpose. For N {\displaystyle N} a (pseudo-)Riemannian manifold, the metric is a (pseudo-)Riemannian metric. The above generalizes to the case of symmetric spaces. These vielbeins are used to perform calculations in sigma models, of which the supergravity theories are a special case. == See also == == Notes == == Citations == == References == De Felice, F.; Clarke, C.J.S. (1990), Relativity on Curved Manifolds (first published 1990 ed.), Cambridge University Press, ISBN 0-521-26639-4 Benn, I.M.; Tucker, R.W. (1987), An introduction to Spinors and Geometry with Applications in Physics (first published 1987 ed.), Adam Hilger, ISBN 0-85274-169-3 == External links == General Relativity with Tetrads
|
{
"page_id": 11141222,
"source": null,
"title": "Tetrad formalism"
}
|
Stilbene may refer to one of the two stereoisomers of 1,2-diphenylethene: (E)-Stilbene (trans isomer) (Z)-Stilbene (cis isomer) == See also == Stilbenoids, a class of molecules found in plants 1,1-Diphenylethylene
|
{
"page_id": 17760360,
"source": null,
"title": "Stilbene"
}
|
Ecosystem diversity deals with the variations in ecosystems within a geographical location and their overall impact on human existence and the environment. Ecosystem diversity addresses the combined characteristics of biotic properties, which are living organisms (biodiversity), and abiotic properties, such as nonliving things like water or soil (geodiversity). It is the variation in the ecosystems found in a region, or the variation in ecosystems over the whole planet. Ecological diversity includes the variation in both terrestrial and aquatic ecosystems. Ecological diversity can also take into account the variation in the complexity of a biological community, including the number of different niches, the number of trophic levels, and other ecological processes. An example of ecological diversity on a global scale would be the variation in ecosystems, such as deserts, forests, grasslands, wetlands and oceans. Ecological diversity is the largest scale of biodiversity, and within each ecosystem, there is a great deal of both species and genetic diversity. == Impact == Diversity in the ecosystem is significant to human existence for a variety of reasons. Ecosystem diversity boosts the availability of oxygen via the process of photosynthesis amongst plant organisms domiciled in the habitat. Diversity in an aquatic environment helps in the purification of water by plant varieties for use by humans. Diversity increases plant varieties, which serve as a good source of medicines and herbs for human use. A lack of diversity in the ecosystem produces the opposite result. 
== Examples == Some examples of ecosystems that are rich in diversity are: Deserts Forests Large marine ecosystems Marine ecosystems Old-growth forests Rainforests Tundra Coral reefs == Ecosystem diversity as a result of evolutionary pressure == Ecological diversity around the world can be directly linked to the evolutionary and selective pressures that constrain the diversity outcome of the ecosystems within different niches. Tundras,
|
{
"page_id": 524396,
"source": null,
"title": "Ecosystem diversity"
}
|
Rainforests, coral reefs and deciduous forests all are formed as a result of evolutionary pressures. Even seemingly small evolutionary interactions can have large impacts on the diversity of the ecosystems throughout the world. One of the best-studied cases of this is the honeybee's interaction with angiosperms on every continent in the world except Antarctica. In 2010, Robert Brodschneider and Karl Crailsheim conducted a study on the health and nutrition of honeybee colonies. The study focused on overall colony health, adult nutrition, and larva nutrition as a function of the effect of pesticides, monocultures, and genetically modified crops, to see if these anthropogenically created problems can have an effect on pollination levels. The results indicate that human activity does have a role in the destruction of the fitness of the bee colony. The extinction or near extinction of these pollinators would result in many plants that feed humans on a wide scale needing alternative pollination methods. Crop-pollinating insects are worth $14.6 billion annually to the US economy, and hand pollination is estimated to cost $5,715–$7,135 more per hectare than insect pollination. Not only will there be a cost increase but also a decrease in colony fitness, leading to a decrease in genetic diversity, which studies have shown has a direct link to the long-term survival of honeybee colonies. According to a study, there are over 50 plants that are dependent on bee pollination, many of these being key staples to feeding the world. Another study states that a lack of plant diversity will lead to a decline in bee population fitness, and low bee colony fitness has impacts on the fitness of plant ecosystem diversity. By working to reduce anthropogenically harmful footprints and allowing bees to thrive, bee pollination can increase genetic
|
{
"page_id": 524396,
"source": null,
"title": "Ecosystem diversity"
}
|
diversity of flora growth and create a unique ecosystem that is highly diverse and can provide a habitat and niche for many other organisms to thrive. With bees present on six of the seven continents, there can be no denying the impact of pollinators on ecosystem diversity. The pollen collected by the bees is harvested and used as an energy source for wintertime; this act of collecting pollen from local plants also has the more important effect of facilitating the movement of genes between organisms. The new evolutionary pressures that are largely anthropogenically catalyzed can potentially cause widespread collapse of ecosystems. In the North Atlantic, a study followed the effects of human interaction on surrounding ocean habitats. It found no habitat or trophic level that was not in some way negatively affected by human interaction, and that much of the diversity of life was being stunted as a result. == See also == Bioregion Disparity (ecology) Ecology Evolutionary biology Genetic diversity Nature Natural environment Species diversity Sustainable development == References ==
|
{
"page_id": 524396,
"source": null,
"title": "Ecosystem diversity"
}
|
Pyrolysis is a process involving the separation of covalent bonds in organic matter by thermal decomposition within an inert environment without oxygen. == Etymology == The word pyrolysis is coined from the Greek-derived elements pyro- (from Ancient Greek πῦρ : pûr - "fire, heat, fever") and lysis (λύσις : lúsis - "separation, loosening"). == Applications == Pyrolysis is most commonly used in the treatment of organic materials. It is one of the processes involved in the charring of wood or pyrolysis of biomass. In general, pyrolysis of organic substances produces volatile products and leaves char, a carbon-rich solid residue. Extreme pyrolysis, which leaves mostly carbon as the residue, is called carbonization. Pyrolysis is considered one of the steps in the processes of gasification or combustion. Laypeople often confuse pyrolysis gas with syngas. Pyrolysis gas has a high percentage of heavy tar fractions, which condense at relatively high temperatures, preventing its direct use in gas burners and internal combustion engines, unlike syngas. The process is used heavily in the chemical industry, for example, to produce ethylene, many forms of carbon, and other chemicals from petroleum, coal, and even wood, or to produce coke from coal. It is used also in the conversion of natural gas (primarily methane) into hydrogen gas and solid carbon char, recently introduced on an industrial scale. Aspirational applications of pyrolysis would convert biomass into syngas and biochar, waste plastics back into usable oil, or waste into safely disposable substances. == Terminology == Pyrolysis is one of the various types of chemical degradation processes that occur at higher temperatures (above the boiling point of water or other solvents). It differs from other processes like combustion and hydrolysis in that it usually does not involve the addition of other reagents such as oxygen (O2, in combustion) or water (in
|
{
"page_id": 262252,
"source": null,
"title": "Pyrolysis"
}
|
hydrolysis). Pyrolysis produces solids (char), condensable liquids (light and heavy oils and tar), and non-condensable gases. Pyrolysis is different from gasification. In the chemical process industry, pyrolysis refers to a partial thermal degradation of carbonaceous materials that takes place in an inert (oxygen-free) atmosphere and produces gases, liquids, and solids. Pyrolysis can be extended to full gasification, which produces mainly gaseous output, often with the addition of steam to gasify residual carbonaceous solids; see Steam reforming. === Types === Specific types of pyrolysis include: Carbonization, the complete pyrolysis of organic matter, which usually leaves a solid residue that consists mostly of elemental carbon. Methane pyrolysis, the direct conversion of methane to hydrogen fuel and separable solid carbon, sometimes using molten metal catalysts. Hydrous pyrolysis, in the presence of superheated water or steam, producing hydrogen and substantial atmospheric carbon dioxide. Dry distillation, as in the original production of sulfuric acid from sulfates. Destructive distillation, as in the manufacture of charcoal, coke and activated carbon. Charcoal burning, the production of charcoal. Tar production by destructive distillation of wood in tar kilns. Caramelization of sugars. High-temperature cooking processes such as roasting, frying, toasting, and grilling. Cracking of heavier hydrocarbons into lighter ones, as in oil refining. Thermal depolymerization, which breaks down plastics and other polymers into monomers and oligomers. Ceramization, involving the formation of polymer-derived ceramics from preceramic polymers under an inert atmosphere. Catagenesis, the natural conversion of buried organic matter to fossil fuels. Flash vacuum pyrolysis, used in organic synthesis. 
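The stoichiometry of methane pyrolysis from the list above, CH4 → C + 2 H2, makes a quick mass balance possible. The following sketch is an illustrative addition (standard molar masses; the one-kilogram feed basis is an arbitrary choice) showing the hydrogen and solid-carbon yields per kilogram of methane:

```python
# Mass balance for methane pyrolysis: CH4 -> C + 2 H2.
# Molar masses in g/mol (standard values).
M_CH4, M_C, M_H2 = 16.043, 12.011, 2.016

basis_kg = 1.0                        # 1 kg of methane feed (arbitrary basis)
mol = basis_kg * 1000 / M_CH4         # moles of CH4 in the feed
h2_kg = mol * 2 * M_H2 / 1000         # two moles of H2 per mole of CH4
c_kg = mol * M_C / 1000               # one mole of solid carbon per mole of CH4

print(f"per kg CH4: {h2_kg:.3f} kg H2, {c_kg:.3f} kg solid carbon")
# -> per kg CH4: 0.251 kg H2, 0.749 kg solid carbon

# Mass is conserved: the two products account for the whole feed.
assert abs(h2_kg + c_kg - basis_kg) < 1e-9
```

So roughly a quarter of the feed mass leaves as hydrogen and three quarters as solid carbon, which is why the fate of the carbon byproduct dominates the economics of the process.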
Other pyrolysis types come from a different classification that focuses on the pyrolysis operating conditions and heating system used, which have an impact on the yield of the pyrolysis products. == History == Pyrolysis has been used for turning wood into charcoal since ancient times. The
|
{
"page_id": 262252,
"source": null,
"title": "Pyrolysis"
}
|
ancient Egyptians used the liquid fraction obtained from the pyrolysis of cedar wood, in their embalming process. The dry distillation of wood remained the major source of methanol into the early 20th century. Pyrolysis was instrumental in the discovery of many chemical substances, such as phosphorus from ammonium sodium hydrogen phosphate NH4NaHPO4 in concentrated urine, oxygen from mercuric oxide, and various nitrates. == General processes and mechanisms == Pyrolysis generally consists in heating the material above its decomposition temperature, breaking chemical bonds in its molecules. The fragments usually become smaller molecules, but may combine to produce residues with larger molecular mass, even amorphous covalent solids. In many settings, some amounts of oxygen, water, or other substances may be present, so that combustion, hydrolysis, or other chemical processes may occur besides pyrolysis proper. Sometimes those chemicals are added intentionally, as in the burning of firewood, in the traditional manufacture of charcoal, and in the steam cracking of crude oil. Conversely, the starting material may be heated in a vacuum or in an inert atmosphere to avoid chemical side reactions (such as combustion or hydrolysis). Pyrolysis in a vacuum also lowers the boiling point of the byproducts, improving their recovery. When organic matter is heated at increasing temperatures in open containers, the following processes generally occur, in successive or overlapping stages: Below about 100 °C, volatiles, including some water, evaporate. Heat-sensitive substances, such as vitamin C and proteins, may partially change or decompose already at this stage. At about 100 °C or slightly higher, any remaining water that is merely absorbed in the material is driven off. This process consumes a lot of energy, so the temperature may stop rising until all water has evaporated. Water trapped in crystal structure of hydrates may come off at somewhat higher temperatures. 
Some solid substances,
|
{
"page_id": 262252,
"source": null,
"title": "Pyrolysis"
}
|
like fats, waxes, and sugars, may melt and separate. Between 100 and 500 °C, many common organic molecules break down. Most sugars start decomposing at 160–180 °C. Cellulose, a major component of wood, paper, and cotton fabrics, decomposes at about 350 °C. Lignin, another major wood component, starts decomposing at about 350 °C, but continues releasing volatile products up to 500 °C. The decomposition products usually include water, carbon monoxide CO and/or carbon dioxide CO2, as well as a large number of organic compounds. Gases and volatile products leave the sample, and some of them may condense again as smoke. Generally, this process also absorbs energy. Some volatiles may ignite and burn, creating a visible flame. The non-volatile residues typically become richer in carbon and form large disordered molecules, with colors ranging between brown and black. At this point the matter is said to have been "charred" or "carbonized". At 200–300 °C, if oxygen has not been excluded, the carbonaceous residue may start to burn, in a highly exothermic reaction, often with no or little visible flame. Once carbon combustion starts, the temperature rises spontaneously, turning the residue into a glowing ember and releasing carbon dioxide and/or monoxide. At this stage, some of the nitrogen still remaining in the residue may be oxidized into nitrogen oxides like NO2 and N2O3. Sulfur and other elements like chlorine and arsenic may be oxidized and volatilized at this stage. Once combustion of the carbonaceous residue is complete, a powdery or solid mineral residue (ash) is often left behind, consisting of inorganic oxidized materials of high melting point. Some of the ash may have left during combustion, entrained by the gases as fly ash or particulate emissions. Metals present in the original matter usually remain in the ash as oxides or carbonates, such as
|
{
"page_id": 262252,
"source": null,
"title": "Pyrolysis"
}
|
potash. Phosphorus, from materials such as bone, phospholipids, and nucleic acids, usually remains as phosphates. == Safety challenges == Because pyrolysis takes place at high temperatures that exceed the autoignition temperature of the produced gases, an explosion risk exists if oxygen is present. Pyrolysis systems require careful temperature control, which can be accomplished with an open-source pyrolysis controller. Pyrolysis also produces various toxic gases, mainly carbon monoxide. The greatest risk of fire, explosion, and release of toxic gases comes when the system is starting up and shutting down, operating intermittently, or during operational upsets. Inert gas purging is essential to manage inherent explosion risks. The procedure is not trivial and failure to keep oxygen out has led to accidents. == Occurrence and uses == === Clandestine chemistry === Conversion of CBD to THC can be brought about by pyrolysis. === Cooking === Pyrolysis has many applications in food preparation. Caramelization is the pyrolysis of sugars in food (often after the sugars have been produced by the breakdown of polysaccharides). The food goes brown and changes flavor. The distinctive flavors are used in many dishes; for instance, caramelized onion is used in French onion soup. The temperatures needed for caramelization lie above the boiling point of water. Frying oil can easily rise above the boiling point. Putting a lid on the frying pan keeps the water in, and some of it re-condenses, keeping the temperature too cool to brown for a longer time. Pyrolysis of food can also be undesirable, as in the charring of burnt food (at temperatures too low for the oxidative combustion of carbon to produce flames and burn the food to ash). === Coke, carbon, charcoals, and chars === Carbon and carbon-rich materials have desirable properties but are nonvolatile, even at
|
{
"page_id": 262252,
"source": null,
"title": "Pyrolysis"
}
|
high temperatures. Consequently, pyrolysis is used to produce many kinds of carbon; these can be used for fuel, as reagents in steelmaking (coke), and as structural materials. Charcoal is a less smoky fuel than unpyrolyzed wood. Some cities ban, or used to ban, wood fires; when residents only use charcoal (and similarly treated rock coal, called coke), air pollution is significantly reduced. In cities where people do not generally cook or heat with fires, this is not needed. In the mid-20th century, "smokeless" legislation in Europe required cleaner-burning techniques, such as coke fuel and smoke-burning incinerators, as an effective measure to reduce air pollution. The coke-making or "coking" process consists of heating the material in "coking ovens" to very high temperatures (up to 900 °C or 1,700 °F) so that the molecules are broken down into lighter volatile substances, which leave the vessel, and a porous but hard residue that is mostly carbon and inorganic ash. The amount of volatiles varies with the source material, but is typically 25–30% of it by weight. High-temperature pyrolysis is used on an industrial scale to convert coal into coke. This is useful in metallurgy, where the higher temperatures are necessary for many processes, such as steelmaking. Volatile by-products of this process are also often useful, including benzene and pyridine. Coke can also be produced from the solid residue left from petroleum refining. The original vascular structure of the wood and the pores created by escaping gases combine to produce a light and porous material. By starting with a dense wood-like material, such as nutshells or peach stones, one obtains a form of charcoal with particularly fine pores (and hence a much larger pore surface area), called activated carbon, which is used as an adsorbent for a wide range of chemical substances. Biochar
|
{
"page_id": 262252,
"source": null,
"title": "Pyrolysis"
}
|
is the residue of incomplete organic pyrolysis, e.g., from cooking fires. It is a key component of the terra preta soils associated with ancient indigenous communities of the Amazon basin. Terra preta is much sought by local farmers for its superior fertility and capacity to promote and retain an enhanced suite of beneficial microbiota, compared to the typical red soil of the region. Efforts are underway to recreate these soils through biochar, the solid residue of pyrolysis of various materials, mostly organic waste. Carbon fibers are filaments of carbon that can be used to make very strong yarns and textiles. Carbon fiber items are often produced by spinning and weaving the desired item from fibers of a suitable polymer, and then pyrolyzing the material at a high temperature (from 1,500–3,000 °C or 2,730–5,430 °F). The first carbon fibers were made from rayon, but polyacrylonitrile has become the most common starting material. For their first workable electric lamps, Joseph Wilson Swan and Thomas Edison used carbon filaments made by pyrolysis of cotton yarns and bamboo splinters, respectively. Pyrolysis is the reaction used to coat a preformed substrate with a layer of pyrolytic carbon. This is typically done in a fluidized bed reactor heated to 1,000–2,000 °C or 1,830–3,630 °F. Pyrolytic carbon coatings are used in many applications, including artificial heart valves. === Liquid and gaseous biofuels === Pyrolysis is the basis of several methods for producing fuel from biomass, i.e. lignocellulosic biomass. Crops studied as biomass feedstock for pyrolysis include native North American prairie grasses such as switchgrass and bred versions of other grasses such as Miscanthus giganteus. Other sources of organic matter as feedstock for pyrolysis include greenwaste, sawdust, waste wood, leaves, vegetables, nut shells, straw, cotton trash, rice hulls, and orange peels. Animal waste including poultry litter, dairy manure,
|
{
"page_id": 262252,
"source": null,
"title": "Pyrolysis"
}
|
and potentially other manures are also under evaluation. Some industrial byproducts are also suitable feedstock including paper sludge, distillers grain, and sewage sludge. In the biomass components, the pyrolysis of hemicellulose happens between 210 and 310 °C. The pyrolysis of cellulose starts from 300 to 315 °C and ends at 360–380 °C, with a peak at 342–354 °C. Lignin starts to decompose at about 200 °C and continues until 1000 °C. Synthetic diesel fuel by pyrolysis of organic materials is not yet economically competitive. Higher efficiency is sometimes achieved by flash pyrolysis, in which finely divided feedstock is quickly heated to between 350 and 500 °C (660 and 930 °F) for less than two seconds. Syngas is usually produced by pyrolysis. The low quality of oils produced through pyrolysis can be improved by physical and chemical processes, which might drive up production costs, but may make sense economically as circumstances change. There is also the possibility of integrating with other processes such as mechanical biological treatment and anaerobic digestion. Fast pyrolysis is also investigated for biomass conversion. Fuel bio-oil can also be produced by hydrous pyrolysis. === Methane pyrolysis for hydrogen === Methane pyrolysis is an industrial process for "turquoise" hydrogen production from methane by removing solid carbon from natural gas. This one-step process produces hydrogen in high volume at low cost (less than steam reforming with carbon sequestration). No greenhouse gas is released. No deep well injection of carbon dioxide is needed. Only water is released when hydrogen is used as the fuel for fuel-cell electric heavy truck transportation, gas turbine electric power generation, and hydrogen for industrial processes including producing ammonia fertilizer and cement. Methane pyrolysis is the process operating around 1065 °C for producing hydrogen from natural gas that allows removal of carbon easily (solid carbon is
|
{
"page_id": 262252,
"source": null,
"title": "Pyrolysis"
}
|
a byproduct of the process). The industrial-quality solid carbon can then be sold or landfilled and is not released into the atmosphere, avoiding emission of greenhouse gas (GHG) or groundwater pollution from a landfill. In 2015, a company called Monolith Materials built a pilot plant in Redwood City, CA to study scaling methane pyrolysis using renewable power in the process. A successful pilot project then led to a larger commercial-scale demonstration plant in Hallam, Nebraska in 2016. As of 2020, this plant is operational and can produce around 14 metric tons of hydrogen per day. In 2021, the US Department of Energy backed Monolith Materials' plans for major expansion with a $1B loan guarantee. The funding will help produce a plant capable of generating 164 metric tons of hydrogen per day by 2024. Pilots with gas utilities and biogas plants are underway with companies like Modern Hydrogen. Volume production is also being evaluated in the BASF "methane pyrolysis at scale" pilot plant, by the chemical engineering team at the University of California, Santa Barbara, and in research laboratories such as the Karlsruhe Liquid-metal Laboratory (KALLA). The power consumed for process heat is only one-seventh of that consumed by the water electrolysis method for producing hydrogen. The Australian company Hazer Group was founded in 2010 to commercialise technology originally developed at the University of Western Australia. The company was listed on the ASX in December 2015. It is completing a commercial demonstration project to produce renewable hydrogen and graphite from wastewater, using iron ore as a process catalyst, with technology created by the University of Western Australia (UWA). The Commercial Demonstration Plant project is an Australian first, and is expected to produce around 100 tonnes of fuel-grade hydrogen and 380 tonnes of graphite each year starting in 2023. It was scheduled to
|
{
"page_id": 262252,
"source": null,
"title": "Pyrolysis"
}
|
commence in 2022. "10 December 2021: Hazer Group (ASX: HZR) regret to advise that there has been a delay to the completion of the fabrication of the reactor for the Hazer Commercial Demonstration Project (CDP). This is expected to delay the planned commissioning of the Hazer CDP, with commissioning now expected to occur after our current target date of 1Q 2022." The Hazer Group has signed a collaboration agreement with Engie for a facility in France (May 2023), a memorandum of understanding with Chubu Electric and Chiyoda in Japan (April 2023), and an agreement with Suncor Energy and FortisBC (April 2022) to develop the 2,500-tonne-per-annum Burrard-Hazer hydrogen production plant in Canada. The American company C-Zero's technology converts natural gas into hydrogen and solid carbon. The hydrogen provides clean, low-cost energy on demand, while the carbon can be permanently sequestered. C-Zero announced in June 2022 that it closed a $34 million financing round led by SK Gas, a subsidiary of South Korea's second-largest conglomerate, the SK Group. SK Gas was joined by two other new investors, Engie New Ventures and Trafigura, one of the world's largest physical commodities trading companies, in addition to participation from existing investors including Breakthrough Energy Ventures, Eni Next, Mitsubishi Heavy Industries, and AP Ventures. Funding was for C-Zero's first pilot plant, which was expected to be online in Q1 2023. The plant may be capable of producing up to 400 kg of hydrogen per day from natural gas with no CO2 emissions. One of the world's largest chemical companies, BASF, has been researching methane pyrolysis for hydrogen for more than 10 years. === Ethylene === Pyrolysis is used to produce ethylene, the chemical compound produced on the largest scale industrially (>110 million tons/year in 2005). In this process, hydrocarbons from petroleum are heated to around 600 °C
|
{
"page_id": 262252,
"source": null,
"title": "Pyrolysis"
}
|
(1,112 °F) in the presence of steam; this is called steam cracking. The resulting ethylene is used to make antifreeze (ethylene glycol), PVC (via vinyl chloride), and many other polymers, such as polyethylene and polystyrene. === Semiconductors === The process of metalorganic vapour-phase epitaxy (MOCVD) entails pyrolysis of volatile organometallic compounds to give semiconductors, hard coatings, and other applicable materials. The reactions entail thermal degradation of precursors, with deposition of the inorganic component and release of the hydrocarbons as gaseous waste. Since it is an atom-by-atom deposition, these atoms organize themselves into crystals to form the bulk semiconductor. Raw polycrystalline silicon is produced by the chemical vapor deposition of silane gas: SiH4 → Si + 2 H2 Gallium arsenide, another semiconductor, forms upon co-pyrolysis of trimethylgallium and arsine. === Waste management === Pyrolysis can also be used to treat municipal solid waste and plastic waste. The main advantage is the reduction in volume of the waste. In principle, pyrolysis will regenerate the monomers (precursors) to the polymers that are treated, but in practice the process is neither a clean nor an economically competitive source of monomers. In tire waste management, tire pyrolysis is a well-developed technology. Other products from car tire pyrolysis include steel wires, carbon black and bitumen. The area faces legislative, economic, and marketing obstacles. Oil derived from tire rubber pyrolysis has a high sulfur content, which gives it high potential as a pollutant; consequently it should be desulfurized. Alkaline pyrolysis of sewage sludge at a low temperature of 500 °C can enhance H2 production with in-situ carbon capture. The use of NaOH (sodium hydroxide) has the potential to produce H2-rich gas that can be used directly for fuel cells. In early November 2021, the U.S. State of Georgia announced a joint effort with Igneo Technologies to build an
|
{
"page_id": 262252,
"source": null,
"title": "Pyrolysis"
}
|
$85 million large electronics recycling plant in the Port of Savannah. The project will focus on lower-value, plastics-heavy devices in the waste stream using multiple shredders and furnaces using pyrolysis technology. Waste from pyrolysis itself can also be used for useful products. For example, contaminant-rich retentate from liquid-fed pyrolysis of postconsumer multilayer packaging waste can be used as a novel building composite material, with higher compression strengths (10–12 MPa) than construction bricks and brickworks (7 MPa), as well as a 57% lower density (0.77 g/cm3). ==== One-stepwise and two-stepwise pyrolysis of tobacco waste ==== Pyrolysis has also been explored as a way to mitigate tobacco waste. In one method, tobacco waste was separated into two categories: TLW (tobacco leaf waste) and TSW (tobacco stick waste). TLW was defined as any waste from cigarettes and TSW as any waste from electronic cigarettes. Both TLW and TSW were dried at 80 °C for 24 hours and stored in a desiccator. Samples were ground so that the contents were uniform. Tobacco waste (TW) also contains inorganic (metal) content, which was determined using an inductively coupled plasma–optical emission spectrometer. Thermogravimetric analysis was used to thermally degrade four samples (TLW, TSW, glycerol, and guar gum), monitored under specific dynamic temperature conditions. About one gram each of TLW and TSW was used in the pyrolysis tests. During these tests, CO2 and N2 were used as atmospheres inside a tubular reactor built from quartz tubing. For both CO2 and N2 atmospheres the flow rate was 100 mL min−1. External heating was supplied by a tubular furnace. The pyrogenic products were classified into three phases. The first phase was biochar, a solid residue produced by the reactor at 650 °C. The second phase, liquid hydrocarbons, was collected by
|
{
"page_id": 262252,
"source": null,
"title": "Pyrolysis"
}
|
a cold solvent trap and sorted using chromatography. The third and final phase, gaseous pyrolysates, was analyzed using an online micro-GC unit. Two different types of experiments were conducted: one-stepwise pyrolysis and two-stepwise pyrolysis. One-stepwise pyrolysis consisted of a constant heating rate (10 °C min−1) from 30 to 720 °C. In the second step of the two-stepwise pyrolysis test, the pyrolysates from the one-stepwise pyrolysis were pyrolyzed in a second heating zone controlled isothermally at 650 °C. The two-stepwise pyrolysis focused primarily on how CO2 affects carbon redistribution when heat is added through the second heating zone. First noted were the thermolytic behaviors of TLW and TSW in both the CO2 and N2 environments. For both TLW and TSW the thermolytic behaviors were identical at temperatures at or below 660 °C in the CO2 and N2 environments. Differences between the environments appear as temperatures increase above 660 °C, where the residual mass percentages decrease significantly in the CO2 environment compared with the N2 environment. This observation is likely due to the Boudouard reaction, in which spontaneous gasification occurs when temperatures exceed 710 °C. Although these observations were seen at temperatures below 710 °C, this is most likely due to the catalytic capabilities of inorganics in TLW. Further investigation by ICP-OES measurements found that a fifth of the residual mass was Ca species. CaCO3 is used in cigarette papers and filter material, suggesting that degradation of CaCO3 causes pure CO2 to react with CaO in a dynamic equilibrium, which explains the mass decay observed between 660 and 710 °C. Differential thermogram (DTG) peaks for TLW were compared with those for TSW. TLW
|
{
"page_id": 262252,
"source": null,
"title": "Pyrolysis"
}
|
had four distinctive peaks at 87, 195, 265, and 306 °C, whereas TSW had two major drop-offs at 200 and 306 °C with one spike in between. The four peaks indicated that TLW contains more diverse types of additives than TSW. The residual mass percentages of TLW and TSW were further compared: the residual mass of TSW was less than that of TLW in both the CO2 and N2 environments, leading to the conclusion that TSW has higher quantities of additives than TLW. The one-stepwise pyrolysis experiment showed different results for the CO2 and N2 environments. During this process the evolution of five notable gases was observed: hydrogen, methane, ethane, carbon dioxide, and ethylene were all produced as the thermolytic rate of TLW began to be retarded at or above 500 °C. Thermolysis begins at the same temperatures in both the CO2 and N2 environments, but hydrogen, ethane, ethylene, and methane are produced in higher concentrations in the N2 environment than in the CO2 environment. The concentration of CO in the CO2 environment is significantly greater as temperatures increase past 600 °C, due to CO2 being liberated from CaCO3 in TLW. This significant increase in CO concentration is why lower concentrations of the other gases are produced in the CO2 environment: a dilution effect. Since pyrolysis redistributes the carbon in carbon substrates into three pyrogenic products, the CO2 environment is more effective because the reduction of CO2 into CO allows for the oxidation of pyrolysates to form CO. In conclusion, the CO2 environment gives a higher yield of gases than of oil and biochar. When the same process is applied to TSW the trends are almost identical, so the same explanations can be applied to
|
{
"page_id": 262252,
"source": null,
"title": "Pyrolysis"
}
|
the pyrolysis of TSW. Harmful chemicals were reduced in the CO2 environment because CO formation reduced the tar. One-stepwise pyrolysis was not very effective at activating CO2 for carbon rearrangement, owing to the high quantities of liquid pyrolysates (tar). Two-stepwise pyrolysis in the CO2 environment allowed greater concentrations of gases thanks to the second heating zone, which was held isothermally at 650 °C. More reactions between CO2 and gaseous pyrolysates over a longer residence time meant that CO2 could further convert pyrolysates into CO. The results showed that the two-stepwise pyrolysis was an effective way to decrease tar content and increase gas concentration by about 10 wt.% for both TLW (64.20 wt.%) and TSW (73.71 wt.%). === Thermal cleaning === Pyrolysis is also used for thermal cleaning, an industrial application to remove organic substances such as polymers, plastics and coatings from parts, products or production components like extruder screws, spinnerets and static mixers. During the thermal cleaning process, at temperatures from 310 to 540 °C (600 to 1,000 °F), organic material is converted by pyrolysis and oxidation into volatile organic compounds, hydrocarbons and carbonized gas. Inorganic elements remain. Several types of thermal cleaning systems use pyrolysis: Molten Salt Baths are among the oldest thermal cleaning systems; cleaning with a molten salt bath is very fast but carries the risk of dangerous splatters and other potential hazards connected with the use of salt baths, like explosions or highly toxic hydrogen cyanide gas. Fluidized Bed Systems use sand or aluminium oxide as the heating medium; these systems also clean very fast, and the medium does not melt or boil, nor emit any vapors or odors; the cleaning process takes one to two hours. Vacuum Ovens use pyrolysis in a vacuum, avoiding uncontrolled combustion inside the
|
{
"page_id": 262252,
"source": null,
"title": "Pyrolysis"
}
|
cleaning chamber; the cleaning process takes 8 to 30 hours. Burn-Off Ovens, also known as Heat-Cleaning Ovens, are gas-fired and used in the painting, coatings, electric motors and plastics industries for removing organics from heavy and large metal parts. === Fine chemical synthesis === Pyrolysis is used in the production of chemical compounds, mainly, but not only, in the research laboratory. The area of boron-hydride clusters started with the study of the pyrolysis of diborane (B2H6) at ca. 200 °C. Products include the clusters pentaborane and decaborane. These pyrolyses involve not only cracking (to give H2), but also recondensation. Nanoparticles, zirconia and oxides can be synthesized using an ultrasonic nozzle in a process called ultrasonic spray pyrolysis (USP). === Other uses and occurrences === Pyrolysis is used to turn organic materials into carbon for the purpose of carbon-14 dating. Pyrolysis liquids from slow pyrolysis of bark and hemp have been tested for their antifungal activity against wood-decaying fungi, showing potential to substitute for current wood preservatives, although further tests are still required. However, their ecotoxicity is very variable: while some are less toxic than current wood preservatives, other pyrolysis liquids have shown high ecotoxicity, which may cause detrimental effects in the environment. Pyrolysis of tobacco, paper, and additives, in cigarettes and other products, generates many volatile products (including nicotine, carbon monoxide, and tar) that are responsible for the aroma and negative health effects of smoking. Similar considerations apply to the smoking of marijuana and the burning of incense products and mosquito coils. Pyrolysis occurs during the incineration of trash, potentially generating volatiles that are toxic or contribute to air pollution if not completely burned.
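The Boudouard crossover cited earlier (spontaneous gasification of CO2 + C(s) → 2 CO above roughly 710 °C) can be estimated from standard thermodynamic data. A minimal sketch, assuming textbook values of ΔH° ≈ +172 kJ/mol and ΔS° ≈ +176 J/(mol·K) for the reaction and neglecting their temperature dependence:

```python
# Estimate the temperature above which the Boudouard reaction
#   CO2 + C(s) -> 2 CO
# becomes spontaneous (delta_G < 0), using assumed textbook values
# for the standard reaction enthalpy and entropy.
dH = 172_000.0   # J/mol, standard reaction enthalpy (assumed value)
dS = 176.0       # J/(mol*K), standard reaction entropy (assumed value)

# delta_G = dH - T*dS crosses zero at T = dH/dS
T_cross_K = dH / dS
T_cross_C = T_cross_K - 273.15
print(f"crossover ~ {T_cross_C:.0f} degC")  # roughly 700 degC, near the cited 710 degC
```

The rough agreement with the 710 °C threshold quoted in the text is expected, since that threshold is itself a thermodynamic estimate; the exact crossover shifts with gas partial pressures and the form of the carbon.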
Laboratory or industrial equipment sometimes gets fouled by carbonaceous residues that result from coking, the pyrolysis of organic products that come into contact with
|
{
"page_id": 262252,
"source": null,
"title": "Pyrolysis"
}
|
hot surfaces. == PAHs generation == Polycyclic aromatic hydrocarbons (PAHs) can be generated from the pyrolysis of different solid waste fractions, such as hemicellulose, cellulose, lignin, pectin, starch, polyethylene (PE), polystyrene (PS), polyvinyl chloride (PVC), and polyethylene terephthalate (PET). PS, PVC, and lignin generate significant amounts of PAHs. Naphthalene is the most abundant of the PAHs. When the temperature is increased from 500 to 900 °C, the yields of most PAHs increase. With increasing temperature, the percentage of light PAHs decreases and the percentage of heavy PAHs increases. == Study tools == === Thermogravimetric analysis === Thermogravimetric analysis (TGA) is one of the most common techniques to investigate pyrolysis with no limitations of heat and mass transfer. The results can be used to determine mass loss kinetics. Activation energies can be calculated using the Kissinger method or the peak analysis–least squares method (PA-LSM). TGA can be coupled with Fourier-transform infrared spectroscopy (FTIR) and mass spectrometry. As the temperature increases, the volatiles generated from pyrolysis can be measured. === Macro-TGA === In TGA, the sample is loaded before the temperature is increased, and the heating rate is low (less than 100 °C min−1). Macro-TGA can use gram-scale samples to investigate the effects of pyrolysis with mass and heat transfer. === Pyrolysis–gas chromatography–mass spectrometry === Pyrolysis–gas chromatography–mass spectrometry (Py-GC-MS) is an important laboratory procedure to determine the structure of compounds. === Machine learning === In recent years, machine learning has attracted significant research interest in predicting yields, optimizing parameters, and monitoring pyrolytic processes. == See also == == References == == External links == Biddy, Mary; Dutta, Abhijit; Jones, Susanne; Meyer, Aye (2013). In-Situ Catalytic Fast Pyrolysis Technology Pathway (Report). doi:10.2172/1076660.
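The Kissinger method mentioned in the study-tools section extracts an activation energy from the shift of the DTG peak temperature with heating rate: plotting ln(β/Tp²) against 1/Tp gives a straight line of slope −Ea/R. A minimal sketch; the heating rates and peak temperatures below are illustrative made-up values, not measured data:

```python
# Kissinger method: ln(beta / Tp^2) = const - Ea / (R * Tp)
# A linear fit of ln(beta/Tp^2) vs 1/Tp gives slope = -Ea/R.
import math

R = 8.314  # J/(mol*K), gas constant

# Hypothetical TGA runs: (heating rate beta in K/min, DTG peak temperature Tp in K)
runs = [(5.0, 600.0), (10.0, 612.0), (20.0, 625.0), (40.0, 638.0)]

x = [1.0 / Tp for _, Tp in runs]
y = [math.log(beta / Tp**2) for beta, Tp in runs]

# Ordinary least-squares slope
n = len(runs)
xm, ym = sum(x) / n, sum(y) / n
slope = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / sum((xi - xm) ** 2 for xi in x)

Ea = -slope * R  # activation energy, J/mol
print(f"Ea ~ {Ea / 1000:.0f} kJ/mol")
```

With real data, the quality of the linear fit indicates how well a single-step kinetic model describes the mass-loss event.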
|
{
"page_id": 262252,
"source": null,
"title": "Pyrolysis"
}
|
A molecular shuttle in supramolecular chemistry is a special type of molecular machine capable of shuttling molecules or ions from one location to another. This field is of relevance to nanotechnology in its quest for nanoscale electronic components and also to biology, where many biochemical functions are based on molecular shuttles. Academic interest also exists in synthetic molecular shuttles; the first prototype, based on a rotaxane, was reported in 1991. This device is based on a molecular thread composed of an ethylene glycol chain interrupted by two arene groups acting as so-called stations. The terminal units (or stoppers) on this wire are bulky triisopropylsilyl groups. The bead is a tetracationic cyclophane based on two bipyridine groups and two para-phenylene groups. The bead is locked to one of the stations by pi–pi interactions, but since the activation energy for migration from one station to the other is only 13 kcal/mol (54 kJ/mol), the bead shuttles between them. The stoppers prevent the bead from slipping off the thread. Chemical synthesis of this device is based on molecular self-assembly from a preformed thread and two bead fragments (32% chemical yield). In certain molecular switches the two stations are non-degenerate. == References ==
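The 13 kcal/mol barrier quoted above corresponds to fast shuttling at room temperature. A rough sketch using the Eyring equation, assuming a transmission coefficient of 1 and treating the barrier as a free energy of activation:

```python
# Eyring equation: k = (kB * T / h) * exp(-dG_act / (R * T))
import math

kB = 1.380649e-23    # J/K, Boltzmann constant
h  = 6.62607015e-34  # J*s, Planck constant
R  = 8.314           # J/(mol*K), gas constant

dG = 13 * 4184.0     # 13 kcal/mol (barrier from the text) converted to J/mol
T  = 298.15          # K, room temperature (assumed)

k = (kB * T / h) * math.exp(-dG / (R * T))
print(f"shuttling rate ~ {k:.1e} per second")  # on the order of 1e3 s^-1
```

A rate of roughly a thousand station-to-station hops per second is consistent with the barrier being easily surmounted at ambient conditions, which is why the bead is described as shuttling rather than being fixed.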
|
{
"page_id": 12124272,
"source": null,
"title": "Molecular shuttle"
}
|
Solid mechanics (also known as mechanics of solids) is the branch of continuum mechanics that studies the behavior of solid materials, especially their motion and deformation under the action of forces, temperature changes, phase changes, and other external or internal agents. Solid mechanics is fundamental for civil, aerospace, nuclear, biomedical and mechanical engineering, for geology, and for many branches of physics and chemistry such as materials science. It has specific applications in many other areas, such as understanding the anatomy of living beings, and the design of dental prostheses and surgical implants. One of the most common practical applications of solid mechanics is the Euler–Bernoulli beam equation. Solid mechanics extensively uses tensors to describe stresses, strains, and the relationship between them. Solid mechanics is a vast subject because of the wide range of solid materials available, such as steel, wood, concrete, biological materials, textiles, geological materials, and plastics. == Fundamental aspects == A solid is a material that can support a substantial amount of shearing force over a given time scale during a natural or industrial process or action. This is what distinguishes solids from fluids, because fluids also support normal forces (forces directed perpendicular to the material plane across which they act); normal stress is the normal force per unit area of that material plane. Shearing forces, in contrast with normal forces, act parallel rather than perpendicular to the material plane, and the shearing force per unit area is called shear stress. Therefore, solid mechanics examines the shear stress, deformation and failure of solid materials and structures.
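The distinction between normal and shear stress described above can be illustrated by resolving a force on a plane into components perpendicular and parallel to it. A minimal sketch; the force vector, plane normal, and area are made-up values:

```python
# Resolve a force acting on a material plane into normal and shear stress.
import math

F = (300.0, 400.0, 0.0)   # N, applied force (hypothetical)
n = (0.0, 1.0, 0.0)       # unit normal of the material plane
A = 2.0e-4                # m^2, plane area (hypothetical)

# Normal component: projection of F onto the unit normal n
Fn = sum(f * c for f, c in zip(F, n))
# Shear component: the remaining in-plane part of F
Fs = math.sqrt(sum(f * f for f in F) - Fn**2)

sigma = Fn / A  # normal stress, Pa
tau   = Fs / A  # shear stress, Pa
print(f"normal stress = {sigma/1e6:.1f} MPa, shear stress = {tau/1e6:.1f} MPa")
# -> normal stress = 2.0 MPa, shear stress = 1.5 MPa
```

A fluid at rest would carry only the sigma part of such a load; the ability to sustain a nonzero tau over time is what the text identifies as the defining property of a solid.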
The most common topics covered in solid mechanics include: stability of structures - examining whether structures can return to a given equilibrium after disturbance or partial/complete failure (see structural mechanics) dynamical systems and chaos
|
{
"page_id": 458866,
"source": null,
"title": "Solid mechanics"
}
|
- dealing with mechanical systems highly sensitive to their given initial position thermomechanics - analyzing materials with models derived from principles of thermodynamics biomechanics - solid mechanics applied to biological materials e.g. bones, heart tissue geomechanics - solid mechanics applied to geological materials e.g. ice, soil, rock vibrations of solids and structures - examining vibration and wave propagation from vibrating particles and structures, vital in mechanical, civil, mining, aeronautical, maritime/marine, and aerospace engineering fracture and damage mechanics - dealing with crack-growth mechanics in solid materials composite materials - solid mechanics applied to materials made up of more than one compound e.g. reinforced plastics, reinforced concrete, fiber glass variational formulations and computational mechanics - numerical solutions to mathematical equations arising from various branches of solid mechanics e.g. finite element method (FEM) experimental mechanics - design and analysis of experimental methods to examine the behavior of solid materials and structures == Relationship to continuum mechanics == As shown in the following table, solid mechanics inhabits a central place within continuum mechanics. The field of rheology presents an overlap between solid and fluid mechanics. == Response models == A material has a rest shape, and its shape departs from the rest shape under stress. The amount of departure from the rest shape is called deformation, and the proportion of deformation to original size is called strain. If the applied stress is sufficiently low (or the imposed strain is small enough), almost all solid materials behave in such a way that the strain is directly proportional to the stress; the coefficient of proportionality is called the modulus of elasticity. This region of deformation is known as the linearly elastic region.
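In the linearly elastic region just described, strain follows directly from stress through the modulus of elasticity. A minimal one-dimensional sketch; the Young's modulus is a typical textbook value for steel, and the load and geometry are made up:

```python
# 1-D Hooke's law: strain = stress / E, elongation = strain * length
E = 200e9      # Pa, Young's modulus of steel (typical textbook value)
F = 10_000.0   # N, axial load (hypothetical)
A = 1.0e-4     # m^2, cross-sectional area (hypothetical)
L = 2.0        # m, bar length (hypothetical)

stress = F / A        # axial stress, Pa
strain = stress / E   # dimensionless, valid only below the yield stress
dL = strain * L       # elongation, m

print(f"stress = {stress/1e6:.0f} MPa, strain = {strain:.1e}, elongation = {dL*1000:.2f} mm")
# -> stress = 100 MPa, strain = 5.0e-04, elongation = 1.00 mm
```

Note the built-in limitation: the calculation is meaningful only while the stress stays below the material's yield value, the boundary at which the plasticity models discussed next take over.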
It is most common for analysts in solid mechanics to use linear material models, due to ease of computation. However, real materials often
|
{
"page_id": 458866,
"source": null,
"title": "Solid mechanics"
}
|
exhibit non-linear behavior. As new materials are used and old ones are pushed to their limits, non-linear material models are becoming more common. These are basic models that describe how a solid responds to an applied stress: Elasticity – When an applied stress is removed, the material returns to its undeformed state. Linearly elastic materials, those that deform proportionally to the applied load, can be described by the linear elasticity equations such as Hooke's law. Viscoelasticity – These are materials that behave elastically, but also have damping: when the stress is applied and removed, work has to be done against the damping effects and is converted into heat within the material, resulting in a hysteresis loop in the stress–strain curve. This implies that the material response has time-dependence. Plasticity – Materials that behave elastically generally do so when the applied stress is less than a yield value. When the stress is greater than the yield stress, the material behaves plastically and does not return to its previous state. That is, deformation that occurs after yield is permanent. Viscoplasticity – Combines theories of viscoelasticity and plasticity and applies to materials like gels and mud. Thermoelasticity – There is coupling of mechanical with thermal responses. In general, thermoelasticity is concerned with elastic solids under conditions that are neither isothermal nor adiabatic. The simplest theory involves Fourier's law of heat conduction, as opposed to advanced theories with physically more realistic models. == Timeline == 1452–1519: Leonardo da Vinci made many contributions 1638: Galileo Galilei published the book "Two New Sciences" in which he examined the failure of simple structures 1660: Hooke's law by Robert Hooke 1687: Isaac Newton published "Philosophiae Naturalis Principia Mathematica" which contains Newton's laws of motion 1750: Euler–Bernoulli beam equation 1700–1782: Daniel Bernoulli introduced the principle of virtual work
|
{
"page_id": 458866,
"source": null,
"title": "Solid mechanics"
}
|
1707–1783: Leonhard Euler developed the theory of buckling of columns 1826: Claude-Louis Navier published a treatise on the elastic behaviors of structures 1873: Carlo Alberto Castigliano presented his dissertation "Intorno ai sistemi elastici", which contains his theorem for computing displacement as partial derivative of the strain energy. This theorem includes the method of least work as a special case 1874: Otto Mohr formalized the idea of a statically indeterminate structure. 1922: Timoshenko corrects the Euler–Bernoulli beam equation 1936: Hardy Cross' publication of the moment distribution method, an important innovation in the design of continuous frames. 1941: Alexander Hrennikoff solved the discretization of plane elasticity problems using a lattice framework 1942: R. Courant divided a domain into finite subregions 1956: J. Turner, R. W. Clough, H. C. Martin, and L. J. Topp's paper on the "Stiffness and Deflection of Complex Structures" introduces the name "finite-element method" and is widely recognized as the first comprehensive treatment of the method as it is known today == See also == Strength of materials - Specific definitions and the relationships between stress and strain. Applied mechanics Materials science Continuum mechanics Fracture mechanics Impact (mechanics) Solid-state physics Rigid body == References == === Notes === === Bibliography === L.D. Landau, E.M. Lifshitz, Course of Theoretical Physics: Theory of Elasticity Butterworth-Heinemann, ISBN 0-7506-2633-X J.E. Marsden, T.J. Hughes, Mathematical Foundations of Elasticity, Dover, ISBN 0-486-67865-2 P.C. Chou, N. J. Pagano, Elasticity: Tensor, Dyadic, and Engineering Approaches, Dover, ISBN 0-486-66958-0 R.W. Ogden, Non-linear Elastic Deformation, Dover, ISBN 0-486-69648-0 S. Timoshenko and J.N. Goodier," Theory of elasticity", 3d ed., New York, McGraw-Hill, 1970. G.A. Holzapfel, Nonlinear Solid Mechanics: A Continuum Approach for Engineering, Wiley, 2000 A.I. 
Lurie, Theory of Elasticity, Springer, 1999. L.B. Freund, Dynamic Fracture Mechanics, Cambridge University Press, 1990. R. Hill, The Mathematical Theory of Plasticity, Oxford University,
|
{
"page_id": 458866,
"source": null,
"title": "Solid mechanics"
}
|
1950. J. Lubliner, Plasticity Theory, Macmillan Publishing Company, 1990. J. Ignaczak, M. Ostoja-Starzewski, Thermoelasticity with Finite Wave Speeds, Oxford University Press, 2010. D. Bigoni, Nonlinear Solid Mechanics: Bifurcation Theory and Material Instability, Cambridge University Press, 2012. Y. C. Fung, Pin Tong and Xiaohong Chen, Classical and Computational Solid Mechanics, 2nd Edition, World Scientific Publishing, 2017, ISBN 978-981-4713-64-1.
|
{
"page_id": 458866,
"source": null,
"title": "Solid mechanics"
}
|
The constants listed here are known values of physical constants expressed in SI units; that is, physical quantities that are generally believed to be universal in nature and thus are independent of the unit system in which they are measured. Many of these are redundant, in the sense that they obey a known relationship with other physical constants and can be determined from them. == Table of physical constants == == Uncertainties == While the values of the physical constants are independent of the system of units in use, each uncertainty as stated reflects our lack of knowledge of the corresponding value as expressed in SI units, and is strongly dependent on how those units are defined. For example, the atomic mass constant {\displaystyle m_{\text{u}}} is exactly known when expressed using the dalton (its value is exactly 1 Da), but the kilogram is not exactly known when expressed in these units; the situation is reversed when the same quantities are expressed in kilograms. == Technical constants == Some of these constants are of a technical nature and do not give any true physical property, but they are included for convenience. Such a constant gives the correspondence ratio of a technical dimension with its corresponding underlying physical dimension. These include the Boltzmann constant {\displaystyle k_{\text{B}}}, which gives the correspondence of the dimension temperature to the dimension of energy per degree of freedom, and the Avogadro constant {\displaystyle N_{\text{A}}}, which gives the correspondence of the dimension of amount of substance with the dimension of count of entities (the latter formally regarded in the SI as being dimensionless). By implication, any product of powers of such constants is also such a constant, such as the molar gas constant {\displaystyle R}. == See also == List
|
{
"page_id": 3932275,
"source": null,
"title": "List of physical constants"
}
|
of mathematical constants Mathematical constant Physical constant List of particles == Notes == == References ==
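The remark above, that any product of powers of such constants is itself such a constant, can be checked directly: since the 2019 SI redefinition fixes both the Avogadro and Boltzmann constants exactly, the molar gas constant R = NA·kB is also exact.

```python
# Molar gas constant from the two exactly-defined SI constants (2019 redefinition)
N_A = 6.02214076e23   # 1/mol, Avogadro constant (exact by definition)
k_B = 1.380649e-23    # J/K, Boltzmann constant (exact by definition)

R = N_A * k_B
print(f"R = {R:.10f} J/(mol K)")  # R = 8.3144626182 J/(mol K)
```

The only uncertainty in the printed value is floating-point rounding; as a defined quantity, R carries no measurement uncertainty at all.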
|
{
"page_id": 3932275,
"source": null,
"title": "List of physical constants"
}
|
In general relativity, the Newtonian gauge is a perturbed form of the Friedmann–Lemaître–Robertson–Walker line element. The gauge freedom of general relativity is used to eliminate two scalar degrees of freedom of the metric, so that it can be written as {\displaystyle ds^{2}=-(1+2\Phi )dt^{2}+a^{2}(t)(1-2\Psi )\delta _{ab}dx^{a}dx^{b},} where the Latin indices a and b are summed over the spatial directions and {\displaystyle \delta _{ab}} is the Kronecker delta. We can instead use conformal time as the time coordinate, yielding the longitudinal or conformal Newtonian gauge: {\displaystyle ds^{2}=a^{2}(\tau )[-(1+2\Phi )d\tau ^{2}+(1-2\Psi )\delta _{ab}dx^{a}dx^{b}],} which is related by the simple transformation {\displaystyle dt=a(t)d\tau }. These are called Newtonian gauges because {\displaystyle \Psi } is the Newtonian gravitational potential of classical Newtonian gravity, which satisfies the Poisson equation {\displaystyle \nabla ^{2}\Psi =4\pi G\rho } for non-relativistic matter and on scales where the expansion of the universe may be neglected. The gauge includes only scalar perturbations of the metric: by the scalar–vector–tensor decomposition these evolve independently of the vector and tensor perturbations and are the predominant ones affecting the growth of structure in the universe in cosmological perturbation theory. The vector perturbations vanish in cosmic inflation and the tensor perturbations are gravitational waves, which have a negligible effect on physics
|
{
"page_id": 3997813,
"source": null,
"title": "Newtonian gauge"
}
|
except for the so-called B-modes of the cosmic microwave background polarization. The tensor perturbation is truly gauge independent, since it is the same in all gauges. In a universe without anisotropic stress (that is, where the stress–energy tensor is invariant under spatial rotations, or the three principal pressures are identical) the Einstein equations set Φ = Ψ. == References == C.-P. Ma & E. Bertschinger (1995). "Cosmological perturbation theory in the synchronous and conformal Newtonian gauges". The Astrophysical Journal. 455: 7–25. arXiv:astro-ph/9401007. Bibcode:1995ApJ...455....7M. doi:10.1086/176550. S2CID 14570491. V. F. Mukhanov; H. A. Feldman & R. H. Brandenberger (1992). "Theory of cosmological perturbations". Physics Reports. 215 (5–6): 203–333. Bibcode:1992PhR...215..203M. doi:10.1016/0370-1573(92)90044-Z.
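The relation between the cosmic-time and conformal-time forms of the line element can be verified symbolically. A minimal SymPy sketch (one spatial direction only, with the scale factor treated as a positive symbol at a fixed instant) substitutes dt = a dτ into the first form and checks that the conformal form results:

```python
import sympy as sp

Phi, Psi, dtau, dx = sp.symbols('Phi Psi dtau dx')
a = sp.Symbol('a', positive=True)  # scale factor at a fixed instant

# Cosmic-time form ds^2 = -(1+2Φ)dt^2 + a^2(1-2Ψ)dx^2, with dt = a*dtau substituted
ds2_cosmic = -(1 + 2*Phi) * (a * dtau)**2 + a**2 * (1 - 2*Psi) * dx**2

# Conformal (longitudinal) Newtonian form ds^2 = a^2[-(1+2Φ)dτ^2 + (1-2Ψ)dx^2]
ds2_conformal = a**2 * (-(1 + 2*Phi) * dtau**2 + (1 - 2*Psi) * dx**2)

print(sp.simplify(ds2_cosmic - ds2_conformal))  # 0: the two forms agree
```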
|
{
"page_id": 3997813,
"source": null,
"title": "Newtonian gauge"
}
|
Gravitational time dilation is a form of time dilation, an actual difference of elapsed time between two events, as measured by observers situated at varying distances from a gravitating mass. The lower the gravitational potential (the closer the clock is to the source of gravitation), the slower time passes, speeding up as the gravitational potential increases (the clock moving away from the source of gravitation). Albert Einstein originally predicted this in his theory of relativity, and it has since been confirmed by tests of general relativity. This effect has been demonstrated by noting that atomic clocks at differing altitudes (and thus different gravitational potential) will eventually show different times. The effects detected in such Earth-bound experiments are extremely small, with differences being measured in nanoseconds. Relative to Earth's age in billions of years, Earth's core is in effect 2.5 years younger than its surface. Demonstrating larger effects would require measurements at greater distances from the Earth, or a larger gravitational source. Gravitational time dilation was first described by Albert Einstein in 1907 as a consequence of special relativity in accelerated frames of reference. In general relativity, it is considered to be a difference in the passage of proper time at different positions as described by a metric tensor of spacetime. The existence of gravitational time dilation was first confirmed directly by the Pound–Rebka experiment in 1959, and later refined by Gravity Probe A and other experiments. Gravitational time dilation is closely related to gravitational redshift, in which the closer a body emitting light of constant frequency is to a gravitating body, the more its time is slowed by gravitational time dilation, and the lower (more "redshifted") would seem to be the frequency of the emitted light, as measured by a fixed observer. == Definition == Clocks that are far from
|
{
"page_id": 852089,
"source": null,
"title": "Gravitational time dilation"
}
|
massive bodies (or at higher gravitational potentials) run more quickly, and clocks close to massive bodies (or at lower gravitational potentials) run more slowly. For example, considered over the total time-span of Earth (4.6 billion years), a clock set in a geostationary position at an altitude of 9,000 meters above sea level, such as perhaps at the top of Mount Everest (prominence 8,848 m), would be about 39 hours ahead of a clock set at sea level. This is because gravitational time dilation is manifested in accelerated frames of reference or, by virtue of the equivalence principle, in the gravitational field of massive objects. According to general relativity, inertial mass and gravitational mass are the same, and all accelerated reference frames (such as a uniformly rotating reference frame with its proper time dilation) are physically equivalent to a gravitational field of the same strength. Consider a family of observers along a straight "vertical" line, each of whom experiences a distinct constant g-force directed along this line (e.g., a long accelerating spacecraft, a skyscraper, a shaft on a planet). Let g(h) be the dependence of g-force on "height", a coordinate along the aforementioned line. The equation with respect to a base observer at h = 0 is T_d(h) = exp[(1/c²) ∫₀ʰ g(h′) dh′], where T_d(h) is the total time dilation at a distant position h, g(h) is the dependence of g-force on "height" h, c is the speed of light, and exp denotes exponentiation by e. For simplicity, in
|
{
"page_id": 852089,
"source": null,
"title": "Gravitational time dilation"
}
|
a Rindler's family of observers in a flat spacetime, the dependence would be g(h) = c²/(H + h) with constant H, which yields T_d(h) = e^(ln(H + h) − ln H) = (H + h)/H. On the other hand, when g is nearly constant and gh is much smaller than c², the linear "weak field" approximation T_d = 1 + gh/c² can also be used. See Ehrenfest paradox for application of the same formula to a rotating reference frame in flat spacetime. == Outside a non-rotating sphere == A common equation used to determine gravitational time dilation is derived from the Schwarzschild metric, which describes spacetime in the vicinity of a non-rotating massive spherically symmetric object. The equation is t₀ = t_f √(1 − 2GM/(rc²)) = t_f √(1 − r_s/r) = t_f √(1 − v_e²/c²) = t_f √(1 − β_e²) < t_f, where t₀ is the proper time between two events for an observer close to the massive sphere, i.e. deep within the gravitational field, t_f is the coordinate time between the events for an observer at an arbitrarily large distance from the massive object (this assumes the far-away observer is using Schwarzschild coordinates, a coordinate system where a clock at infinite distance from the massive sphere would tick at one second per second of coordinate time, while closer
|
{
"page_id": 852089,
"source": null,
"title": "Gravitational time dilation"
}
|
clocks would tick at less than that rate), G is the gravitational constant, M is the mass of the object creating the gravitational field, r is the radial coordinate of the observer within the gravitational field (this coordinate is analogous to the classical distance from the center of the object, but is actually a Schwarzschild coordinate; the equation in this form has real solutions for r > r_s), c is the speed of light, r_s = 2GM/c² is the Schwarzschild radius of M, v_e = √(2GM/r) is the escape velocity, and β_e = v_e/c is the escape velocity expressed as a fraction of the speed of light c. To illustrate then, without accounting for the effects of rotation, proximity to Earth's gravitational well will cause a clock on the planet's surface to accumulate around 0.0219 fewer seconds over a period of one year than would a distant observer's clock. In comparison, a clock on the surface of the Sun will accumulate around 66.4 fewer seconds in one year. == Circular orbits == In the Schwarzschild metric, free-falling objects can be in circular orbits if the orbital radius is larger than (3/2) r_s (the radius of the photon sphere). The formula for a clock at rest is given above; the formula below gives the general relativistic time dilation for a clock in a circular orbit: t₀ = t_f √(1 − (3/2)·r_s/r). Both dilations are shown in the figure below. == Important
|
{
"page_id": 852089,
"source": null,
"title": "Gravitational time dilation"
}
|
features of gravitational time dilation == According to the general theory of relativity, gravitational time dilation is copresent with the existence of an accelerated reference frame. Additionally, all physical phenomena in similar circumstances undergo time dilation equally according to the equivalence principle used in the general theory of relativity. The speed of light in a locale is always equal to c according to the observer who is there. That is, every infinitesimal region of spacetime may be assigned its own proper time, and the speed of light according to the proper time at that region is always c. This is the case whether or not a given region is occupied by an observer. A time delay can be measured for photons which are emitted from Earth, bend near the Sun, travel to Venus, and then return to Earth along a similar path. There is no violation of the constancy of the speed of light here, as any observer observing the speed of photons in their region will find the speed of those photons to be c, while the speed at which we observe light travel finite distances in the vicinity of the Sun will differ from c. If an observer is able to track the light in a remote, distant locale which intercepts a remote, time dilated observer nearer to a more massive body, that first observer tracks that both the remote light and that remote time dilated observer have a slower time clock than other light which is coming to the first observer at c, like all other light the first observer really can observe (at their own location). If the other, remote light eventually intercepts the first observer, it too will be measured at c by the first observer. Gravitational time dilation T in a gravitational
|
{
"page_id": 852089,
"source": null,
"title": "Gravitational time dilation"
}
|
well is equal to the velocity time dilation for a speed that is needed to escape that gravitational well (given that the metric is of the form g = (dt/T(x))² − g_space, i.e. it is time invariant and there are no "movement" terms dx dt). To show that, one can apply Noether's theorem to a body that freely falls into the well from infinity. Then the time invariance of the metric implies conservation of the quantity g(v, dt) = v⁰/T², where v⁰ is the time component of the 4-velocity v of the body. At infinity g(v, dt) = 1, so v⁰ = T², or, in coordinates adjusted to the local time dilation, v⁰_loc = T; that is, time dilation due to acquired velocity (as measured at the falling body's position) equals the gravitational time dilation in the well the body fell into. Applying this argument more generally, one gets that (under the same assumptions on the metric) the relative gravitational time dilation between two points equals the time dilation due to the velocity needed to climb from the lower point to the higher. == Experimental confirmation == Gravitational time dilation has been experimentally measured using atomic clocks on airplanes, such as the Hafele–Keating experiment. The clocks aboard the airplanes were slightly faster than clocks on the ground. The effect is significant enough that the Global Positioning System's artificial satellites had their atomic clocks permanently corrected. Additionally, time dilations
|
{
"page_id": 852089,
"source": null,
"title": "Gravitational time dilation"
}
|
due to height differences of less than one metre have been experimentally verified in the laboratory. Gravitational time dilation in the form of gravitational redshift has also been confirmed by the Pound–Rebka experiment and observations of the spectra of the white dwarf Sirius B. Gravitational time dilation has been measured in experiments with time signals sent to and from the Viking 1 Mars lander. == See also == Clock hypothesis Gravitational redshift Hafele–Keating experiment Relative velocity time dilation Twin paradox Barycentric Coordinate Time == References == == Further reading == Grøn, Øyvind; Næss, Arne (2011). Einstein's Theory: A Rigorous Introduction for the Mathematically Untrained. Springer. ISBN 9781461407058.
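The ~0.0219 fewer seconds per year quoted earlier for a clock on Earth's surface can be reproduced directly from the Schwarzschild formula t₀ = t_f √(1 − r_s/r). A quick numerical sketch, using rounded values for Earth's mass and mean radius:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
M = 5.972e24    # mass of Earth, kg
r = 6.371e6     # mean radius of Earth, m

r_s = 2 * G * M / c**2            # Schwarzschild radius of Earth, ~9 mm
factor = math.sqrt(1 - r_s / r)   # t_0 / t_f for a clock at rest on the surface

seconds_per_year = 365.25 * 24 * 3600
lag = (1 - factor) * seconds_per_year   # seconds "lost" per year vs. a distant clock
print(lag)   # ≈ 0.022 s, matching the ~0.0219 s figure above
```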
|
{
"page_id": 852089,
"source": null,
"title": "Gravitational time dilation"
}
|
Computer Atlas of Surface Topography of Proteins (CASTp) aims to provide a comprehensive and detailed quantitative characterization of the topographic features of proteins, and is now updated to version 3.0. Since its release in 2006, the CASTp server has received ≈45,000 visits and fulfilled ≈33,000 calculation requests annually. CASTp has proven to be a reliable tool for a wide range of research, including investigations of signaling receptors, discovery of cancer therapeutics, understanding of mechanisms of drug action, studies of immune disorders, analysis of protein–nanoparticle interactions, inference of protein function, and development of high-throughput computational tools. The server is maintained by Jie Liang's lab at the University of Illinois at Chicago. == Geometric Modeling Principles == CASTp's calculation strategy applies alpha-shape and discrete-flow methods to the protein binding site, together with measurement of pocket size by the CAST program of Liang et al. (1998), updated by Tian et al. (2018). First, CAST identifies the atoms that form a protein pocket, then calculates its volume and area, identifies the atoms forming the rims of the pocket mouth, computes the number of mouth openings for each pocket, predicts the area and circumference of the mouth openings, and finally locates cavities and calculates their sizes. Secondary structures are calculated by DSSP. Single amino acid annotations are fetched from the UniProt database and mapped to PDB structures using residue-level information from the SIFTS database. == Instructions of Protein Pocket Calculation == Input Protein structures in PDB format, and a probe radius. Searching Users can either search for pre-computed results by 4-letter PDB ID, or upload their own PDB file for customized computation. The core algorithm finds pockets or cavities capable of housing a solvent probe of a default or adjusted diameter.
Output CASTp identifies all surface pockets, interior cavities and cross channels, provides
|
{
"page_id": 39059578,
"source": null,
"title": "Computer Atlas of Surface Topography of Proteins"
}
|
detailed delineation of all atoms participating in their formation, including the area and volume of each pocket or void, as well as the number of mouth openings of a particular pocket ID, by the solvent-accessible surface model (Richards' surface) and by the molecular surface model (Connolly surface), all calculated analytically. The core algorithm finds pockets or cavities capable of housing a solvent probe with a diameter of 1.4 Å. The online tool also supports PyMOL and UCSF Chimera plugins for molecular visualization. == Why CASTp is Useful == Protein science, from an amino acid to sequences and structures Proteins are large, complex molecules that play critical roles in maintaining the normal functioning of the human body. They are essential not just for the structure and function, but also for the regulation, of the body's tissues and organs. Proteins are made up of hundreds of smaller units called amino acids, attached to one another by peptide bonds to form a long chain. Protein active sites Usually, the active site of a protein is located at its center of action and is the key to its function. The first step is the detection of active sites on the protein surface and an exact description of their features and boundaries. These specifications are vital inputs for subsequent target druggability prediction or target comparison. Most algorithms for active site detection are based on geometric modeling or on calculations of energetic features. The role of protein pockets The shape and properties of the protein surface determine what interactions are possible with ligands and other macromolecules. Pockets are an important yet ambiguous feature of this surface. During the drug discovery process, the first step in screening for lead compounds and potential drug molecules is usually a selection based on the shape of the binding pocket. Shape
|
{
"page_id": 39059578,
"source": null,
"title": "Computer Atlas of Surface Topography of Proteins"
}
|
plays a role in many computational pharmacological methods. Based on existing results, most features important for predicting drug binding depend on the size and shape of the binding pocket, with chemical properties of secondary importance. The surface shape is also important for interactions between protein and water. However, defining discrete pockets or possible interaction sites remains unclear, because the shape and location of nearby pockets affect the promiscuity and diversity of binding sites. Since most pockets are open to solvent, defining the border of a pocket is the primary difficulty. Pockets closed to solvent are referred to as buried cavities. With the benefit of a well-defined extent, area, and volume, buried cavities are more straightforward to locate. In contrast, the border of an open pocket defines its mouth, and this provides the cut-off for determining surface area and volume. Even defining the pocket as a set of residues does not define the volume or the mouth of the pocket. Druggability role prediction In the pharmaceutical industry, the current priority strategy for target assessment is high-throughput screening (HTS). NMR screenings are applied against large compound datasets. Chemical characteristics of compounds binding against specific targets are measured, so how well the compound sets cover the chemical space decides the binding efficiency. Success rates of virtually docking drug-like ligands into the active sites of target proteins are used for prioritization, and most active sites are located in pockets. With the benefit of large amounts of structural data, computational methods for druggability prediction from different perspectives have been introduced over the last 30 years with positive results, as a vital instrument to accelerate prediction. Many candidates have already been integrated into the drug discovery pipeline since then. == New Features in CASTp
|
{
"page_id": 39059578,
"source": null,
"title": "Computer Atlas of Surface Topography of Proteins"
}
|
3.0 == Pre-computed results for biological assemblies For many proteins deposited in the Protein Data Bank, the asymmetric unit may differ from the biological unit, which can make computational results biologically irrelevant. The new CASTp 3.0 therefore computes topological features for biological assemblies, overcoming the mismatch between asymmetric units and biological assemblies. Imprints of negative volumes of topological features In the first release of the CASTp server in 2006, only the geometric and topological features of the surface atoms participating in the formation of protein pockets, cavities, and channels were reported. The new CASTp adds the "negative volume" of the space, referring to the space encompassed by the atoms forming these geometric and topological features. Comprehensive annotation on single amino-acid polymorphism The latest CASTp integrates protein annotations aligned with the sequence, including brief features, positions, descriptions, and references of domains, motifs, and single amino-acid polymorphisms. Improved user interface & convenient visualization The new CASTp incorporates 3Dmol.js for structural visualization, enabling users to browse and interact with the protein 3D model and examine the computational results in current web browsers, including Chrome, Firefox, and Safari. Users can pick their own representation style for the atoms that form each topographic feature, and edit the colors to their own preferences. == References ==
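The notions of a solvent probe, open pockets, and buried cavities can be illustrated with a toy grid-based detector. This is not CASTp's analytic alpha-shape/discrete-flow algorithm, only a hypothetical sketch of the underlying idea: grid points within a probe-inflated atomic radius are blocked, empty regions connected to the box boundary count as "outside", and any empty region left over is a buried cavity.

```python
import numpy as np
from scipy import ndimage

def buried_cavity_volume(atoms, radii, probe=1.4, spacing=0.5, pad=3.0):
    """Toy cavity detector: grid points within (atom radius + probe radius)
    of any atom are blocked; empty regions connected to the box boundary
    are 'outside'; whatever empty space remains is a buried cavity."""
    atoms = np.asarray(atoms, dtype=float)
    lo, hi = atoms.min(axis=0) - pad, atoms.max(axis=0) + pad
    axes = [np.arange(lo[d], hi[d], spacing) for d in range(3)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)

    blocked = np.zeros(grid.shape[:3], dtype=bool)
    for a, r in zip(atoms, radii):
        blocked |= ((grid - a) ** 2).sum(axis=-1) <= (r + probe) ** 2

    # label connected empty regions; regions touching the boundary are outside
    labels, _ = ndimage.label(~blocked)
    boundary = np.concatenate([
        labels[0].ravel(), labels[-1].ravel(),
        labels[:, 0].ravel(), labels[:, -1].ravel(),
        labels[:, :, 0].ravel(), labels[:, :, -1].ravel()])
    cavity = (~blocked) & ~np.isin(labels, np.unique(boundary))
    return cavity.sum() * spacing ** 3   # rough cavity volume, Å^3

# eight "atoms" at the corners of a cube, enclosing a central void
corners = [(x, y, z) for x in (0.0, 4.0) for y in (0.0, 4.0) for z in (0.0, 4.0)]
vol = buried_cavity_volume(corners, [2.5] * 8, probe=0.5)
print(vol > 0)   # a buried cavity is detected at the cube's center
```

With smaller atomic radii the probe can reach the center from outside, so the same arrangement yields no buried cavity, only an open pocket.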
|
{
"page_id": 39059578,
"source": null,
"title": "Computer Atlas of Surface Topography of Proteins"
}
|
Josef Herzig (25 September 1853 – 4 July 1924) was an Austrian chemist. Herzig was born in Sanok, Galicia, which at that time was part of Austria-Hungary. Herzig went to school in Breslau until 1874, started studying chemistry at the University of Vienna but joined August Wilhelm von Hofmann at the University of Berlin in the second semester. He worked with Robert Bunsen at the University of Heidelberg and received his PhD for work with Ludwig Barth at the University of Vienna. He later became lecturer and, in 1897, professor at the University of Vienna. He died in Vienna in 1924. == Work == Herzig was active in the chemistry of natural products. He succeeded in determining the structure of flavonoids quercetin, fisetin and rhamnetin as well as several alkaloids. == See also == Jacobsen rearrangement == References ==
|
{
"page_id": 17367163,
"source": null,
"title": "Josef Herzig"
}
|
For chemical reactions, the iron oxide cycle (Fe3O4/FeO) is the original two-step thermochemical cycle proposed for hydrogen production. It is based on the reduction and subsequent oxidation of iron ions, particularly the reduction and oxidation between Fe3+ and Fe2+. The ferrite, or iron oxide, begins in the form of a spinel and, depending on the reaction conditions, dopant metals, and support material, forms either wüstites or different spinels. == Process description == The thermochemical two-step water-splitting process uses two redox steps. The steps of solar hydrogen production by the iron-based two-step cycle are:
M(II)Fe2(III)O4 → M(II)O + 2 Fe(II)O + 1/2 O2 (Reduction)
M(II)O + 2 Fe(II)O + H2O → M(II)Fe2(III)O4 + H2 (Oxidation)
where M can be any of a number of metals, often Fe itself, Co, Ni, Mn, Zn, or mixtures thereof. The endothermic reduction step (1) is carried out at high temperatures greater than 1400 °C, though the "hercynite cycle" is capable of temperatures as low as 1200 °C. The oxidative water-splitting step (2) occurs at a lower temperature of ~1000 °C, regenerating the original ferrite material and producing hydrogen gas. The temperature level is realized by using geothermal heat from magma, or a solar power tower and a set of heliostats to collect the solar thermal energy. == Hercynite cycle == Like the traditional iron oxide cycle, the hercynite cycle is based on the oxidation and reduction of iron atoms. However, unlike the traditional cycle, the ferrite material reacts with a second metal oxide,
|
{
"page_id": 20775036,
"source": null,
"title": "Iron oxide cycle"
}
|
aluminum oxide, rather than simply decomposing. The reactions take place via the following two steps:
M(II)Fe2(III)O4 + 3 Al2O3 → M(II)Al2(III)O4 + 2 Fe(II)Al2(III)O4 + 1/2 O2 (Reduction)
M(II)Al2(III)O4 + 2 Fe(II)Al2(III)O4 + H2O → M(II)Fe2(III)O4 + 3 Al2O3 + H2 (Oxidation)
The reduction step of the hercynite reaction takes place at a temperature ~200 °C lower than that of the traditional water-splitting cycle (1200 °C). This leads to lower radiation losses, which scale as temperature to the fourth power. == Advantages and disadvantages == The advantages of the ferrite cycles are: they have lower reduction temperatures than other two-step systems, no metallic gases are produced, they have a high specific H2 production capacity, the elements used are non-toxic, and the constituent elements are abundant. The disadvantages of the ferrite cycles are: the similar reduction and melting temperatures of the spinels (except for the hercynite cycle, as aluminates have very high melting temperatures), and the slow rate of the oxidation, or water-splitting, reaction. == See also == Cerium(IV) oxide-cerium(III) oxide cycle Copper-chlorine cycle Hybrid sulfur cycle Hydrosol-2 Sulfur-iodine cycle Zinc zinc-oxide cycle == References == == External links == Solar hydrogen from iron oxide based thermochemical cycles
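Both hercynite-cycle steps can be checked for mass balance by counting atoms on each side. A short Python sketch, taking M = Fe (so M(II)Fe2(III)O4 is the Fe3O4 spinel and both aluminate products are hercynite, FeAl2O4):

```python
from collections import Counter

# Element counts for the species in the hercynite-cycle steps, with M = Fe
species = {
    "Fe3O4":   Counter({"Fe": 3, "O": 4}),
    "Al2O3":   Counter({"Al": 2, "O": 3}),
    "FeAl2O4": Counter({"Fe": 1, "Al": 2, "O": 4}),   # hercynite
    "H2O":     Counter({"H": 2, "O": 1}),
    "O2":      Counter({"O": 2}),
    "H2":      Counter({"H": 2}),
}

def side(terms):
    """Total element counts for a list of (coefficient, species) pairs."""
    total = Counter()
    for coeff, name in terms:
        for elem, n in species[name].items():
            total[elem] += coeff * n
    return total

# Reduction:  Fe3O4 + 3 Al2O3 -> 3 FeAl2O4 + 1/2 O2   (M = Fe merges the aluminates)
red_lhs = side([(1, "Fe3O4"), (3, "Al2O3")])
red_rhs = side([(3, "FeAl2O4"), (0.5, "O2")])

# Oxidation: 3 FeAl2O4 + H2O -> Fe3O4 + 3 Al2O3 + H2
ox_lhs = side([(3, "FeAl2O4"), (1, "H2O")])
ox_rhs = side([(1, "Fe3O4"), (3, "Al2O3"), (1, "H2")])

print(red_lhs == red_rhs, ox_lhs == ox_rhs)   # True True: both steps balance
```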
|
{
"page_id": 20775036,
"source": null,
"title": "Iron oxide cycle"
}
|
Offline learning is a machine learning training approach in which a model is trained on a fixed dataset that is not updated during the learning process. This dataset is collected beforehand, and the learning typically occurs in a batch mode (i.e., the model is updated using batches of data, rather than a single input-output pair at a time). Once the model is trained, it can make predictions on new, unseen data. In online learning, only the set of possible elements is known, whereas in offline learning, the learner also knows the order in which they are presented. == See also == Online machine learning Incremental learning == References ==
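The batch-mode idea can be made concrete with a minimal sketch: a fixed dataset is collected once, and the model is trained by repeatedly sweeping over that same batch (in online learning, by contrast, each input-output pair would be seen once, in arrival order). The true slope of 3 below is an assumption of the toy example:

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed dataset, collected beforehand: y = 3x + noise
X = rng.uniform(-1, 1, size=200)
y = 3.0 * X + rng.normal(0.0, 0.1, size=200)

# Offline (batch) learning: the model repeatedly sweeps the same fixed batch;
# the dataset is never extended or reordered during training.
w, lr = 0.0, 0.5
for _ in range(100):
    grad = -2.0 * np.mean((y - w * X) * X)   # dMSE/dw over the whole dataset
    w -= lr * grad
print(round(w, 2))   # ≈ 3.0: the slope recovered from the fixed batch
```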
|
{
"page_id": 10748030,
"source": null,
"title": "Offline learning"
}
|
The transformer is a deep learning architecture that was developed by researchers at Google and is based on the multi-head attention mechanism, which was proposed in the 2017 paper "Attention Is All You Need". Text is converted to numerical representations called tokens, and each token is converted into a vector via lookup from a word embedding table. At each layer, each token is then contextualized within the scope of the context window with other (unmasked) tokens via a parallel multi-head attention mechanism, allowing the signal for key tokens to be amplified and less important tokens to be diminished. Transformers have the advantage of having no recurrent units, therefore requiring less training time than earlier recurrent neural architectures (RNNs) such as long short-term memory (LSTM). Later variations have been widely adopted for training large language models (LLM) on large (language) datasets. Transformers were first developed as an improvement over previous architectures for machine translation, but have found many applications since. They are used in large-scale natural language processing, computer vision (vision transformers), reinforcement learning, audio, multimodal learning, robotics, and even playing chess. It has also led to the development of pre-trained systems, such as generative pre-trained transformers (GPTs) and BERT (bidirectional encoder representations from transformers). == History == === Predecessors === For many years, sequence modelling and generation was done by using plain recurrent neural networks (RNNs). A well-cited early example was the Elman network (1990). In theory, the information from one token can propagate arbitrarily far down the sequence, but in practice the vanishing-gradient problem leaves the model's state at the end of a long sentence without precise, extractable information about preceding tokens. 
A key breakthrough was LSTM (1995), an RNN which used various innovations to overcome the vanishing gradient problem, allowing efficient learning of long-sequence modelling. One key innovation
|
{
"page_id": 61603971,
"source": null,
"title": "Transformer (deep learning architecture)"
}
|
was the use of an attention mechanism which used neurons that multiply the outputs of other neurons, so-called multiplicative units. Neural networks using multiplicative units were later called sigma-pi networks or higher-order networks. LSTM became the standard architecture for long sequence modelling until the 2017 publication of Transformers. However, LSTM still used sequential processing, like most other RNNs. Specifically, RNNs operate one token at a time from first to last; they cannot operate in parallel over all tokens in a sequence. Modern Transformers overcome this problem, but unlike RNNs, they require computation time that is quadratic in the size of the context window. The linearly scaling fast weight controller (1992) learns to compute a weight matrix for further processing depending on the input. One of its two networks has "fast weights" or "dynamic links" (1981). A slow neural network learns by gradient descent to generate keys and values for computing the weight changes of the fast neural network which computes answers to queries. This was later shown to be equivalent to the unnormalized linear Transformer. === Attention with seq2seq === The idea of encoder-decoder sequence transduction had been developed in the early 2010s (see previous papers). The papers most commonly cited as the originators that produced seq2seq are two concurrently published papers from 2014. A 380M-parameter model for machine translation uses two long short-term memories (LSTM). Its architecture consists of two parts. The encoder is an LSTM that takes in a sequence of tokens and turns it into a vector. The decoder is another LSTM that converts the vector into a sequence of tokens. Similarly, another 130M-parameter model used gated recurrent units (GRU) instead of LSTM. Later research showed that GRUs are neither better nor worse than LSTMs for seq2seq. These early seq2seq models had no attention mechanism, and the
|
{
"page_id": 61603971,
"source": null,
"title": "Transformer (deep learning architecture)"
}
|
state vector is accessible only after the last word of the source text was processed. Although in theory such a vector retains the information about the whole original sentence, in practice the information is poorly preserved. This is because the input is processed sequentially by one recurrent network into a fixed-size output vector, which is then processed by another recurrent network into an output. If the input is long, then the output vector would not be able to contain all relevant information, degrading the output. As evidence, reversing the input sentence improved seq2seq translation. The RNNsearch model introduced an attention mechanism to seq2seq for machine translation to solve the bottleneck problem (of the fixed-size output vector), allowing the model to process long-distance dependencies more easily. The name is because it "emulates searching through a source sentence during decoding a translation". The relative performances were compared between global (that of RNNsearch) and local (sliding window) attention model architectures for machine translation, finding that mixed attention had higher quality than global attention, while local attention reduced translation time. In 2016, Google Translate was revamped to Google Neural Machine Translation, which replaced the previous model based on statistical machine translation. The new model was a seq2seq model where the encoder and the decoder were both 8 layers of bidirectional LSTM. It took nine months to develop, and it outperformed the statistical approach, which took ten years to develop. === Parallelizing attention === Seq2seq models with attention (including self-attention) still suffered from the same issue with recurrent networks, which is that they are hard to parallelize, which prevented them from being accelerated on GPUs. 
In 2016, decomposable attention applied a self-attention mechanism to feedforward networks, which are easy to parallelize, and achieved SOTA result in textual entailment with an order of magnitude fewer parameters
|
{
"page_id": 61603971,
"source": null,
"title": "Transformer (deep learning architecture)"
}
|
than LSTMs. One of its authors, Jakob Uszkoreit, suspected that attention without recurrence is sufficient for language translation, hence the title "attention is all you need". That hypothesis went against conventional wisdom at the time, and even his father Hans Uszkoreit, a well-known computational linguist, was skeptical. In the same year, self-attention (called intra-attention or intra-sentence attention) was proposed for LSTMs. In 2017, the original (100M-sized) encoder-decoder transformer model was proposed in the "Attention is all you need" paper. At the time, the focus of the research was on improving seq2seq for machine translation by removing its recurrence so as to process all tokens in parallel, while preserving its dot-product attention mechanism to keep its text-processing performance. This led to the introduction of a multi-head attention model that was easier to parallelize due to the use of independent heads and the lack of recurrence. Its parallelizability was an important factor in its widespread use in large neural networks.

=== AI boom era ===
Already in spring 2017, even before the "Attention is all you need" preprint was published, one of the co-authors applied the "decoder-only" variation of the architecture to generate fictitious Wikipedia articles. The Transformer architecture is now used alongside many generative models that contribute to the ongoing AI boom. In language modelling, ELMo (2018) was a bi-directional LSTM that produces contextualized word embeddings, improving upon the line of research from bag of words and word2vec. It was followed by BERT (2018), an encoder-only Transformer model. In October 2019, Google started using BERT to process search queries. In 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model with a Transformer-encoder–RNN-decoder model. Starting in 2018, the OpenAI GPT series of decoder-only Transformers became state of the art in natural language generation.
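The multi-head attention mentioned above runs several independent attention "heads" on the same input and concatenates their outputs; since the heads share no state, they can all be computed at once. An illustrative sketch (toy dimensions, not the original paper's sizes):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores) @ V

def multi_head_attention(X, heads, Wo):
    """heads: list of (Wq, Wk, Wv) triples, one per head.

    Each head attends independently; their outputs are concatenated
    and mixed by the output projection Wo. The heads have no data
    dependency on each other, so they parallelize trivially.
    """
    outs = [attention(X @ Wq, X @ Wk, X @ Wv) for Wq, Wk, Wv in heads]
    return np.concatenate(outs, axis=-1) @ Wo

rng = np.random.default_rng(1)
d_model, d_head, n_heads, seq_len = 8, 2, 4, 5
X = rng.normal(size=(seq_len, d_model))
heads = [tuple(rng.normal(size=(d_model, d_head)) for _ in range(3))
         for _ in range(n_heads)]
Wo = rng.normal(size=(n_heads * d_head, d_model))
out = multi_head_attention(X, heads, Wo)
print(out.shape)  # (5, 8): output has the same shape as the input
```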
In 2022, ChatGPT, a chatbot based on GPT-3, became unexpectedly popular, triggering
a boom around large language models. Since 2020, Transformers have been applied in modalities beyond text, including the vision transformer, speech recognition, robotics, and multimodal learning. The vision transformer, in turn, stimulated new developments in convolutional neural networks. Image and video generators such as DALL-E (2021), Stable Diffusion 3 (2024), and Sora (2024) use Transformers to analyse input data (such as text prompts) by breaking it down into "tokens" and then calculating the relevance between each token using self-attention, which helps the model understand the context and relationships within the data.

== Training ==

=== Methods for stabilizing training ===
The plain transformer architecture had difficulty converging. In the original paper the authors recommended using learning rate warmup: the learning rate linearly scales up from 0 to its maximal value over the first part of training (usually recommended to be 2% of the total number of training steps) before decaying again. A 2020 paper found that applying layer normalization before (instead of after) the multi-headed attention and feedforward layers stabilizes training without requiring learning rate warmup.

=== Pretrain-finetune ===
Transformers are typically first pretrained by self-supervised learning on a large generic dataset, followed by supervised fine-tuning on a small task-specific dataset. The pretraining dataset is typically an unlabeled large corpus, such as The Pile. Tasks for pretraining and fine-tuning commonly include:

language modeling
next-sentence prediction
question answering
reading comprehension
sentiment analysis
paraphrasing

The T5 transformer report documents a large number of natural language pretraining tasks. Some examples are: restoring or repairing incomplete or corrupted text. For example, the input "Thank you ~~ me to your party ~~ week" might generate the output "Thank you for inviting me to your party last week".
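The corrupted-text restoration example can be illustrated with a toy pair builder; the `~~` sentinel follows the example above, while the span-selection logic here is a simplification of T5's actual span-corruption objective:

```python
def corrupt(words, spans):
    """Build a (corrupted input, target) pair for a denoising task.

    words: list of tokens; spans: list of (start, end) index pairs to
    drop. Dropped spans are replaced by the "~~" sentinel, and the
    target is the list of dropped spans the model must restore.
    """
    corrupted, targets, prev = [], [], 0
    for start, end in spans:
        corrupted += words[prev:start] + ["~~"]
        targets.append(words[start:end])
        prev = end
    corrupted += words[prev:]
    return " ".join(corrupted), targets

inp, tgt = corrupt(
    "Thank you for inviting me to your party last week".split(),
    [(2, 4), (8, 9)])
print(inp)  # Thank you ~~ me to your party ~~ week
print(tgt)  # [['for', 'inviting'], ['last']]
```

The model is trained to map the corrupted input back to the dropped spans (or to the full restored sentence, as in the example above).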
translation between natural languages (machine translation)
judging the pragmatic acceptability of natural language. For example, the following sentence might
be judged "not acceptable" because, even though it is syntactically well-formed, it is improbable in ordinary human usage: The course is jumping well. Note that while these tasks are trivial or obvious for human native speakers of the language (or languages), they have typically proved challenging for previous generations of machine learning architectures.

=== Tasks ===
In general, there are 3 classes of language modelling tasks: "masked", "autoregressive", and "prefixLM". These classes are independent of any specific modeling architecture such as the Transformer, but they are often discussed in the context of the Transformer. In a masked task, one or more of the tokens is masked out, and the model produces a probability distribution predicting what the masked-out tokens are, based on the context. The loss function for the task is typically the sum of log-perplexities for the masked-out tokens: {\displaystyle {\text{Loss}}=-\sum _{t\in {\text{masked tokens}}}\ln({\text{probability of }}t{\text{ conditional on its context}})} and the model is trained to minimize this loss. The BERT series of models are trained for masked token prediction and another task. In an autoregressive task, the entire sequence is masked at first, and the model produces a probability distribution for the first token. Then the first token is revealed, the model predicts the second token, and so on. The loss function is still typically the same. The GPT series of models are trained by autoregressive tasks. In a prefixLM task, the sequence is divided into two parts. The first part is presented as context, and the model predicts the first token of the second part. That token is then revealed, the model predicts the second token, and so on. The loss function
for the task is still typically the same. The T5 series of models are trained by prefixLM tasks. Note that "masked" as in "masked language modelling" is not "masked" as in "masked attention", and "prefixLM" (prefix language modeling) is not "prefixLM" (prefix language model).

== Architecture ==

All transformers have the same primary components:

Tokenizers, which convert text into tokens.
Embedding layer, which converts tokens and positions of the tokens into vector representations.
Transformer layers, which carry out repeated transformations on the vector representations, extracting more and more linguistic information. These consist of alternating attention and feedforward layers. There are two major types of transformer layers: encoder layers and decoder layers, with further variants.
Un-embedding layer, which converts the final vector representations back to a probability distribution over the tokens.

The following description follows exactly the Transformer as described in the original paper. There are variants, described in the following section. By convention, we write all vectors as row vectors. This, for example, means that pushing a vector through a linear layer means multiplying it by a weight matrix on the right, as {\displaystyle xW}.

=== Tokenization ===
As the Transformer architecture natively processes numerical data, not text, there must be a translation between text and tokens. A token is an integer that represents a character or a short segment of characters. On the input side, the input text is parsed into a token sequence. Similarly, on the output side, the output tokens are parsed back to text. The module that converts between texts and token sequences is a tokenizer. The set of all tokens is the vocabulary of the tokenizer, and its size is the vocabulary size {\displaystyle n_{\text{vocabulary}}}. When faced with tokens outside the vocabulary, typically a special token is used,
written as "[UNK]" for "unknown". Some commonly used tokenizers are byte pair encoding, WordPiece, and SentencePiece.

=== Embedding ===
Each token is converted into an embedding vector via a lookup table. Equivalently stated, it multiplies a one-hot representation of the token by an embedding matrix {\displaystyle M}. For example, if the input token is {\displaystyle 3}, then the one-hot representation is {\displaystyle [0,0,0,1,0,0,\dots ]}, and its embedding vector is {\displaystyle \mathrm {Embed} (3)=[0,0,0,1,0,0,\dots ]M}. The token embedding vectors are added to their respective positional encoding vectors (see below), producing the sequence of input vectors. The number of dimensions in an embedding vector is called the hidden size or embedding size and is written as {\displaystyle d_{\text{emb}}}. (This size is written as {\displaystyle d_{\text{model}}} in the original Transformer paper.)

=== Un-embedding ===
An un-embedding layer is almost the reverse of an embedding layer. Whereas an embedding layer converts a token into a vector, an un-embedding layer converts a vector into a probability distribution over tokens. The un-embedding layer is a linear-softmax layer: {\displaystyle \mathrm {UnEmbed} (x)=\mathrm {softmax} (xW+b)} The matrix {\displaystyle W} has shape {\displaystyle (d_{\text{emb}},n_{\text{vocabulary}})}. The embedding matrix {\displaystyle M} and the un-embedding matrix {\displaystyle W} are sometimes required to be transposes of each other, a practice called weight tying.

=== Positional encoding ===
A positional encoding is a fixed-size
vector representation of the relative positions of tokens within a sequence: it provides the transformer model with information about where the words are in the input sequence. This induces a bias towards the order of the input sequence, so that, for example, the input sequence "man bites dog" is processed differently from "dog bites man". The positional encoding is defined as a function of type {\displaystyle f:\mathbb {R} \to \mathbb {R} ^{d};d\in \mathbb {Z} ,d>0}, where {\displaystyle d} is a positive even integer. The full positional encoding defined in the original paper is: {\displaystyle (f(t)_{2k},f(t)_{2k+1})=(\sin(\theta ),\cos(\theta ))\quad \forall k\in \{0,1,\ldots ,d/2-1\}} where {\displaystyle \theta ={\frac {t}{r^{k}}},r=N^{2/d}}. Here, {\displaystyle N} is a free parameter that should be significantly larger than the biggest {\displaystyle t} that would be input into the positional encoding function. The original paper uses {\displaystyle N=10000}. The function takes a simpler form when written as a complex function of type {\displaystyle f:\mathbb {R} \to \mathbb {C} ^{d/2}}: {\displaystyle f(t)=\left(e^{it/r^{k}}\right)_{k=0,1,\ldots ,{\frac {d}{2}}-1}} where {\displaystyle r=N^{2/d}}. The main reason for using this positional encoding function is that, for any fixed offset, the encoding of a shifted position is a fixed linear transformation (a rotation) of the encoding of the original position, which was hypothesized to let the model easily learn to attend by relative position.
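The sinusoidal encoding above translates directly into code (using the paper's N = 10000; the sequence length and dimension below are illustrative):

```python
import numpy as np

def positional_encoding(t, d, N=10000.0):
    """Sinusoidal positional encoding f(t) for one position t.

    Returns a length-d vector with f(t)[2k] = sin(t / r**k) and
    f(t)[2k+1] = cos(t / r**k), where r = N**(2/d), matching the
    formula in the text.
    """
    assert d % 2 == 0, "d must be a positive even integer"
    r = N ** (2.0 / d)
    k = np.arange(d // 2)
    theta = t / r ** k
    enc = np.empty(d)
    enc[0::2] = np.sin(theta)   # even indices get the sine terms
    enc[1::2] = np.cos(theta)   # odd indices get the cosine terms
    return enc

# The encoding of every position in a sequence, stacked into a matrix
# that is added elementwise to the token embeddings:
seq = np.stack([positional_encoding(t, d=8) for t in range(5)])
print(seq.shape)  # (5, 8)
```

At position t = 0 the vector is simply alternating 0s and 1s, since sin(0) = 0 and cos(0) = 1.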