id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
54,076,100 | https://en.wikipedia.org/wiki/The%20Quantum%20Vacuum | The Quantum Vacuum: An Introduction to Quantum Electrodynamics is a physics textbook authored by Peter W. Milonni in 1993. The book provides a careful and thorough treatment of zero-point energy, spontaneous emission, the Casimir and van der Waals forces, the Lamb shift and the anomalous magnetic moment of the electron at a level of detail not found in other introductory texts on quantum electrodynamics.
The first chapter, Zero‐Point Energy in Early Quantum Theory, was originally published in 1991 in the American Journal of Physics.
In 2008 Milonni received the Max Born Award "For exceptional contributions to the fields of theoretical optics, laser physics and quantum mechanics, and for dissemination of scientific knowledge through authorship of a series of outstanding books".
References
Physics textbooks
Quantum electrodynamics
Quantum electronics
Electrodynamics
Quantum field theory | The Quantum Vacuum | [
"Physics",
"Materials_science",
"Mathematics"
] | 167 | [
"Quantum field theory",
"Quantum electronics",
"Quantum mechanics",
"Condensed matter physics",
"Electrodynamics",
"Nanotechnology",
"Dynamical systems"
] |
54,076,293 | https://en.wikipedia.org/wiki/Paludiculture | Paludiculture is wet agriculture and forestry on peatlands. Paludiculture combines the reduction of greenhouse gas emissions from drained peatlands through rewetting with continued land use and biomass production under wet conditions. “Paludi” comes from the Latin “palus” meaning “swamp, morass” and "paludiculture" as a concept was developed at Greifswald University. Paludiculture is a sustainable alternative to drainage-based agriculture, intended to maintain carbon storage in peatlands. This differentiates paludiculture from forms of agriculture, such as rice paddies, that involve draining, and therefore degrading, wetlands.
Characteristics
Impact of peatland drainage and rewetting
Peatlands store an enormous amount of carbon. Covering only 3% of the Earth's land surface, they store more than 450 gigatonnes of carbon - more than is stored by forests, which cover 30% of the land surface. Drained peatlands cause numerous negative environmental impacts such as greenhouse gas emissions, nutrient leaching, subsidence and loss of biodiversity. Although drained peatlands cover only about 0.3% of the Earth's land surface, peatland drainage is estimated to be responsible for 6% of all human greenhouse gas emissions. When peatlands are re-wetted and their soils become waterlogged again, decomposition of organic matter (~50% carbon) almost ceases, and hence carbon no longer escapes into the atmosphere as carbon dioxide. Peatland rewetting can significantly reduce the environmental impacts caused by drainage by restoring hydrological buffering and reducing the water table's sensitivity to atmospheric evaporative demand. Due to the drainage of soils for agriculture in many areas, peat soil depth and water quality have declined significantly over the years. These problems are mitigated by re-wetting peatlands, which can also make installations against rising sea levels (levees, pumps) unnecessary. Wet bogs act as nitrogen sinks, whereas mineralisation and fertilisation from agriculture on drained bogs produce nitrogen run-off into nearby waters.
Arguments for cultivating crops on restored peatlands
Cultivating peatland products sustainably can incentivise the rewetting of drained peatlands, while maintaining similar land use in previously drained agricultural areas.
Raw materials can be grown on peatlands without competing with food production for land in other areas.
The growing of crops extracts phosphate from the land, which is important in wetlands; it also helps to extract other nutrients from water, making it suitable for post-water treatment purposes.
In many tropical countries, cultivating semi-wild native crops in peat swamp forests is a traditional livelihood which can be sustainable.
Restored reed beds can obstruct nitrogen and phosphorus run-off from agriculture higher up in the river system and so protect lower waters.
Paludiculture areas can act as habitat corridors and ecological buffer zones between traditional agriculture and intact peatlands.
Debates around the sustainability of paludiculture
The application of the term "paludiculture" is debated, as it is contingent on whether different peatland agricultural practices are considered sustainable. In terms of greenhouse gas emissions, how sustainable a paludiculture practice is deemed to be depends on the greenhouse gas measured, the species of plant and the water table level of the peatland. "Paludiculture" has been used to refer to cultivating native and non-native crops on intact or re-wetted peatlands. In the EU's Common Agricultural Policy, it is defined as the productive land use of wet and rewetted peatlands that preserves the peat soil and thereby minimizes CO2 emissions and subsidence. A 2020 review of tropical peatland paludiculture from the National University of Singapore evaluated wet and re-wetted management pathways in terms of greenhouse gas emissions and carbon sequestration and concluded that commercial paludiculture is only suited to re-wetted peatlands, where it is carbon negative or neutral, as opposed to intact peatlands, where it increases emissions. Even after decades of re-wetting, such peatlands can still contribute to global warming to a greater extent than intact peatlands. Exceptions where paludiculture on intact peatlands may be sustainable are some traditions of cultivating native crops semi-wild in intact peat swamp forest, or gathering peatland products without active cultivation. The review also suggests that, to be sustainable, paludiculture should only use native vegetation to restore peatlands whilst producing biomass, as opposed to any wetland plants that happen to survive. This is because using non-native species may create unfavourable peatland conditions for native plants, and non-native plants tend to have a lower yield and lifespan in undrained or re-wetted peatlands than when grown in their native habitats or in drained wetlands.
Paludiculture and ecosystem services
Assessments of the sustainability of paludiculture should take into account ecosystem services besides carbon sequestration, and how paludiculture can be integrated with traditional farming practices. Peatlands can provide a number of other ecosystem services, e.g. biodiversity conservation and water regulation. It is therefore important to protect these areas and restore degraded ones. Conserving, restoring and improving the management of peatlands is a cost-efficient and relatively easy way to maintain ecosystem services. However, these ecosystem services are not priced in a market and do not produce economic profit for local communities. Drainage and cultivation, grazing, and peat mining, on the other hand, give local communities short-term economic profits. It has therefore been argued that conservation and restoration, which have a significant and common value, need to be subsidized by the state or the world at large.
Paludiculture is focused on production rather than nature conservation, but paludiculture and conservation may complement each other in a number of ways. 1) Paludiculture can be the starting point and an intermediate stage in the process of restoring a drained peatland. 2) Paludiculture can lower the cost of a conservation project, e.g. by decreasing the costs of biomass removal and establishment. 3) Areas under paludiculture can provide buffer zones around conserved peat areas. 4) Areas with paludiculture in between conservation areas can provide corridors facilitating species migration. 5) Paludiculture may increase acceptance by affected stakeholders of rewetting once-drained peatland; the support of local communities in rewetting projects is often crucial.
The effect of paludiculture on greenhouse gas emissions is complex. On the one hand, a higher water table reduces the aerobic decomposition of peat and therefore carbon dioxide emissions. On the other hand, the raised water table may increase anaerobic decomposition of organic matter (methanogenesis) and therefore increase emissions of methane (CH4), a short-lived but more potent greenhouse gas than CO2. The emissions from rewetted peatland under paludiculture are also affected by the type of land use (agriculture, forestry, grazing etc.), as well as by the species used and the intensity of use. Traditional use of peatland often has less environmental impact than industrial use, but it need not be sustainable in the long run or if practised at a larger scale.
Management
The most obvious way to maintain the ecosystem services that peatland provides is conservation of intact peatlands. This is even more true given the limited success of restoration projects, especially in tropical peatlands. A conserved peatland still holds value for humans, providing a number of ecosystem services, e.g. carbon storage, water storage and discharge. Conserving peatlands also avoids costly investments, and conservation is suggested to be a very cost-effective management practice for peatlands. However, the most obvious ecosystem services that conservation provides - i.e. carbon storage and water storage - are not easily priced on the market. Therefore, peatland conservation may need to be subsidised.
Rewetting peatland, and thereby restoring the water table, is the first step in restoration. The intention is to recreate the hydrological functions and processes of the peatland. This takes longer than may be expected: studies have found that previously drained, rewetted peatland had hydrological functions - e.g. water storage and discharge capacity - somewhere between those of a drained and an intact peatland six years after restoration.
Undrained peatlands are recommended to be left for conservation and not used for paludiculture. Drained peatlands, on the other hand, can be rewetted and used for paludiculture, often combining traditional knowledge with new science. However, local communities, especially in the tropics, maintain their livelihoods by draining and using peatland in various ways, e.g. agriculture, grazing, and peat mining. Paludiculture can be a way to restore degraded and drained peatlands while maintaining an income for the local community. For example, studies of Sphagnum cultivation on re-wetted peat bogs in Germany show a significant decrease in greenhouse gas emissions compared to a control with irrigated ditches. The economic feasibility of Sphagnum cultivation on peat bogs is, however, still unclear. The basis for paludiculture is also very different in the Global South, among other things because of higher population and economic pressure on peatlands.
Locations
Tropical Peatlands
Tropical peatlands occur extensively in Southeast Asia, mainland East Asia, the Caribbean and Central America, South America, and southern Africa. Often located in lowlands, tropical peatlands are characterised by rapid rates of peat soil formation under high precipitation and high temperature regimes. At the same time, the high temperatures accelerate decomposition rates, causing degraded tropical peatlands to contribute more substantially to global greenhouse gas emissions. Although tropical peatlands cover only 587,000 km², they store 119.2 gigatonnes of carbon at a density of 203,066 tonnes C per km². For decades, these large carbon stores have succumbed to draining in order to cater for humanity's socio-economic needs. Between 1990 and 2015, cultivation (including industrial and small-holder agriculture) increased from 11 to 50% of forested peatlands in Peninsular Malaysia, Sumatra, and Borneo. In Malaysia and Indonesia in the last twenty years, peat swamp forests have retreated from covering 77% of peatlands to 36%, endangering many mammals and birds in the region. In 2010, industrial agriculture covered about 3-3.1 million hectares, with oil palm accounting for 2.15 million hectares of this area. The conversion of natural tropical peatlands to other land uses leads to peat fires and their associated health effects, soil subsidence that increases flood risks, substantial greenhouse gas emissions and loss of biodiversity. Today, efforts are being made to restore degraded tropical peatlands through paludiculture. Paludiculture is researched as a sustainable solution to reduce and reverse the degradation of peat swamp forests, and includes traditional local agricultural practices which predate the use of the term. Commercial paludiculture has not been trialled to the extent that it has in northern peatlands. Below are examples of paludiculture practices in tropical peatlands.
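As a quick consistency check of the figures just quoted (this small Python snippet is ours; it only re-derives the stated density from the stated carbon stock and area):

```python
# Density = carbon stock / area, using the tropical peatland figures above.
carbon_stock_tonnes = 119.2e9   # 119.2 gigatonnes of carbon, in tonnes
area_km2 = 587_000              # tropical peatland extent in km^2

density = carbon_stock_tonnes / area_km2
print(round(density))           # ~203066 t C per km^2, matching the quoted value
```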
Congo Basin
The Bantu people in Cuvette Central use peatlands for fishing, hunting and gathering, as well as small-scale agriculture near terra firme forests.
Indonesia
In Indonesia, three examples of paludiculture practices are the beje system of the Kutai and Banjar tribes in East Kalimantan, nut plantations in Segedong, West Kalimantan, and sago farming in the Meranti Islands district of Riau Province. Sago is cultivated semi-wild near rivers in Riau. Jelutong is grown in monocultures and mixed plantings in Central Kalimantan and in South Sumatra and Jambi, and has been traded since the mid-1800s. This trade has been stifled by 2006 tariffs and sanctions, and growing jelutong in monocultures is considered less efficient than crops like smallholder oil palm.
Besides commercial production, peatland communities in Indonesia have developed less impactful practices for extracting resources. For example, Dayak communities only cultivate peatlands shallower than three meters for small-scale farming of sago and jelutong, in coastal areas where the sea inputs nutrients. In Sumatra, timber harvested in peat swamp forests is transported with wooden sleighs, rails and small canals in a traditional method called ongka, which is less destructive than commercial logging transport. Peat subsidence and CO2 emissions have still been found in agroforestry small-holdings on re-wetted peatlands in Jambi and Central Kalimantan, even those with native species.
Malaysia
In Malaysia, sago plantations are mostly semi-wild, situated near rivers such as in Sarawak, although Malaysia also imports sago from Sumatra to make noodles. Peatlands are also used by the Jakun people in South East Pahang for hunting, gathering and fishing.
Peru
Mestizo communities in Loreto, Peru use peatlands for hunting and gathering, and sustainably cultivating native palms, which they replant to restore the resource. They are conscious of the limits to the resource and the need to avoid wasteful felling during harvest.
Northern Peatlands
The greater part of the world's peatlands occur in the northern hemisphere, encompassing both boreal and temperate regions. Global estimates indicate that northern peatlands cover 3,794,000 km², storing about 450 Gt of carbon at a density of approximately 118,318 t C per km². Peatlands form in poorly drained areas under conditions of high precipitation and low temperature. 66% of northern peatlands are found in Eurasia and 34% in North America. About 60% of these peatlands (2,718×10³ km²) are perennially frozen, with approximately 2,152×10³ km² occurring in Eurasia and 565×10³ km² in North America. In the European Union (25 countries in Europe), peatlands cover approximately 291×10³ km², of which nearly 55% are in Finland and Sweden. Peatlands are more common in Belarus and Ukraine, where they occupy approximately 497×10³ km². Both boreal and temperate peatlands are primarily formed from bryophytes and graminoids, displaying slower rates of accumulation and decomposition compared to the tropics. Northern peatlands have been drained for agriculture, forestry, and peat mining for fuel and horticulture. Historical uses of intact northern peatlands include fishing, hunting, grazing and gathering berries. Paludiculture is not widely established commercially in northern peatlands and most research projects identified below are ongoing; many have not yet published peer-reviewed results. Most are focused on Sphagnum and reed farming. Rather than excavating decomposed Sphagnum as peat, non-decomposed Sphagnum fibres are harvested in cycles as a renewable source of biomass. Sphagnum fibres can be used as a growing substrate, as packaging to protect plants in transport, or to reintroduce moss when restoring other peatlands.
Belarus
The University of Greifswald and Belarusian State University are researching reed beds in Naroch National Park as filters to reduce nitrogen and phosphorus run-off from agriculture on degraded peatlands into the Baltic. With research scheduled from January 2019 to September 2021, they aim to investigate the potential for harvesting reeds in the area to incentivise reed bed management.
Canada
Paludiculture practices include cultivating Sphagnum and cattail. One of the largest research projects was carried out between 2006 and 2012 by researchers from Université Laval in Quebec, trialling Sphagnum farming in eastern Canada. Their bog site, on the Acadian Peninsula, was previously used for block-cutting peat for fuel and so consisted of ditches of Sphagnum and raised areas of other vegetation. They found that Sphagnum farming could be practiced large-scale in the ditches, although they recommend active irrigation management for more consistent harvests.
Finland
The Finnish Forest Research Institute and Vapo Oy, Finland's largest peat mining company, manage around 10 hectares for experiments in cultivating Sphagnum for restoration and to produce substrates.
Germany
The Greifswald Mire Center lists six research projects for cultivating Sphagnum as a raw material for substrates and for restoring moors in Germany: Hankhausen, Drenth, Provinzialmoor, Ramsloh, Sedelsberg and Südfeld. The Drenth and Provinzialmoor projects, running from 2015 to 2019, included testing varying irrigation and drainage methods; they found that peat moss can be grown on black peat. In Sedelsberg, researchers found cultivating Sphagnum on black peat to be "expensive and time-consuming". Researchers at the Südfeld project in 2002 observed a small increase in peat moss, and increasing reeds, cattails, and willows. Researchers are also investigating reed and cattail cultivation.
In Mecklenburg-West Pomerania, Greifswald University's ongoing Paludi-Pellets-Project aims to create an efficient biofuel source from sedges, reeds and canary grass in the form of dry pellets.
Ireland
Renewable energy company Bord na Móna began peat moss trials in 2012 to restore Sphagnum in raised bogs for potential horticulture.
Lithuania
Lithuania's first peat moss cultivation trial was in 2011, in Aukštumala Moor in Nemunas Delta Regional Park. Researchers from Vilnius Institute of Botany transplanted sections of Sphagnum from a neighbouring degraded raised bog to the exposed peat surface. They found that 94% of the patches survived and expanded to the exposed peat.
The ongoing "DESIRE" project is investigating peatland restoration and paludiculture in the Neman River catchment area to reduce nutrient run-off into the Baltic.
The Netherlands
In the ongoing "Omhoog met het Veen - AddMire in the Netherlands" research project, Landscape Noord-Holland aims to investigate the restoration of reed beds and wet heathlands on moors previously converted for agriculture, as well as to raise awareness about peatland degradation. The project is intended to promote paludiculture as an alternative source of income to conventional agriculture. Researchers have rewetted 8 hectares, including a water storage buffer area for the peat moss experiments. They are measuring the effects of soil erosion and atmospheric nitrogen on the growth of peat moss, and the resulting greenhouse gas emissions and soil chemistry.
Russia
Russia has the largest area of peatlands of all the northern circumpolar countries, with the world's largest peatland being the West Siberian mire massif and the largest in Europe the Polistovo-Lovatsky mire in northern Russia. An estimate derived from the digital soil database of Russia, at a geographical scale of 1:5 million, indicates that the area of soils with a peat depth of more than 30 cm is nearly 2,210×10³ km². Approximately 28% occurs in the zone of seasonally frozen soils, nearly 30% in the zone of sporadic and discontinuous permafrost, and 42% in the zone of continuous permafrost. Peat with a depth of more than 50 cm tends to be dominant in the Northern and Middle Taiga zones, but is uncommon in the Tundra zone.
Ongoing restoration does not seem to include paludiculture. Wetlands International, together with the Institute of Forest Science of the Russian Academy of Sciences and the Michael Succow Foundation, implemented a major peatland restoration project in response to the extensive peat fires in the summer of 2010 in the Moscow region. The project was initiated within the framework of co-operation between the Russian Federation and the Federal Republic of Germany to spearhead the ecological rewetting of peatlands, and represents one of the largest peatland ecosystem restoration projects in the world. To date, over 35,000 ha of drained peatlands have been restored using ecological methods, with another 10,000 ha currently underway.
Examples of potential crops for cultivation on wet and rewetted peatlands
The Database of Potential Paludiculture plants (DPPP) lists more than 1,000 wetland plants, but only a minor fraction is suitable for paludiculture. Examples for potential and tested paludicultures are provided in the table below.
References
Agriculture by type
Wetlands
Environmental engineering | Paludiculture | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 4,129 | [
"Hydrology",
"Chemical engineering",
"Wetlands",
"Civil engineering",
"Environmental engineering"
] |
38,352,768 | https://en.wikipedia.org/wiki/Uranium%28IV%29%20hydride | Uranium(IV) hydride is a chemical compound with the chemical formula UH₄, a metal hydride.
In 1997, Souter et al. reported the production of UH₄ by reacting laser-ablated uranium atoms with dihydrogen and capturing the product in solid argon. The assignment of the structure was made using infrared spectroscopic evidence supported by DFT calculations. Uranium(IV) hydride has a quasi-tetrahedral (C2v) structure. UH₄ is formed by the successive insertion of uranium into two hydrogen molecules:
U + H₂ → UH₂
UH₂ + H₂ → UH₄
Further reaction with hydrogen only produces dihydrogen complexes: UH₄(H₂)ₙ (1 ≤ n ≤ 6).
References
Uranium(IV) compounds
Metal hydrides | Uranium(IV) hydride | [
"Chemistry"
] | 156 | [
"Reducing agents",
"Metal hydrides",
"Inorganic compounds",
"Inorganic compound stubs"
] |
38,353,569 | https://en.wikipedia.org/wiki/Trioctagonal%20tiling | In geometry, the trioctagonal tiling is a semiregular tiling of the hyperbolic plane, representing a rectified order-3 octagonal tiling. There are two triangles and two octagons alternating on each vertex. It has Schläfli symbol of r{8,3}.
Symmetry
Related polyhedra and tilings
From a Wythoff construction there are eight hyperbolic uniform tilings that can be based on the regular octagonal tiling.
Drawing the tiles colored as red on the original faces, yellow at the original vertices, and blue along the original edges, there are 8 forms.
It can also be generated from the (4 3 3) hyperbolic tilings:
The trioctagonal tiling can be seen in a sequence of quasiregular polyhedrons and tilings:
See also
Trihexagonal tiling - 3.6.3.6 tiling
Rhombille tiling - dual V3.6.3.6 tiling
Tilings of regular polygons
List of uniform tilings
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Hyperbolic tilings
Isogonal tilings
Isotoxal tilings
Semiregular tilings
Quasiregular polyhedra | Trioctagonal tiling | [
"Physics"
] | 322 | [
"Isotoxal tilings",
"Semiregular tilings",
"Isogonal tilings",
"Tessellation",
"Hyperbolic tilings",
"Symmetry"
] |
38,353,577 | https://en.wikipedia.org/wiki/Truncated%20octagonal%20tiling | In geometry, the truncated octagonal tiling is a semiregular tiling of the hyperbolic plane. There is one triangle and two hexakaidecagons on each vertex. It has Schläfli symbol of t{8,3}.
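As an aside (an illustrative check of our own, not from the article), the vertex configuration 3.16.16 can be seen to need the hyperbolic plane because the Euclidean interior angles meeting at a vertex would sum to more than 360 degrees:

```python
# Quick check that one triangle and two hexakaidecagons (3.16.16) cannot
# close up in the Euclidean plane: the Euclidean interior angles around a
# vertex sum to more than 360 degrees, so the tiling is hyperbolic.

def euclidean_interior_angle(n_sides):
    return (n_sides - 2) * 180 / n_sides

vertex_configuration = (3, 16, 16)
angle_sum = sum(euclidean_interior_angle(n) for n in vertex_configuration)
print(angle_sum)                                         # 375.0
print("hyperbolic" if angle_sum > 360 else "not hyperbolic")
```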
Dual tiling
The dual tiling has face configuration V3.16.16.
Related polyhedra and tilings
This hyperbolic tiling is topologically related as a part of a sequence of uniform truncated polyhedra with vertex configurations (3.2n.2n), and [n,3] Coxeter group symmetry.
From a Wythoff construction there are ten hyperbolic uniform tilings that can be based on the regular octagonal tiling.
Drawing the tiles colored as red on the original faces, yellow at the original vertices, and blue along the original edges, there are 8 forms.
See also
Truncated hexagonal tiling
Octagonal tiling
Tilings of regular polygons
List of uniform tilings
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Hyperbolic tilings
Isogonal tilings
Semiregular tilings
Truncated tilings
Octagonal tilings | Truncated octagonal tiling | [
"Physics"
] | 299 | [
"Semiregular tilings",
"Truncated tilings",
"Isogonal tilings",
"Tessellation",
"Hyperbolic tilings",
"Symmetry"
] |
38,353,605 | https://en.wikipedia.org/wiki/Truncated%20order-8%20triangular%20tiling | In geometry, the truncated order-8 triangular tiling is a semiregular tiling of the hyperbolic plane. There are two hexagons and one octagon on each vertex. It has Schläfli symbol of t{3,8}.
Uniform colors
Symmetry
The dual of this tiling represents the fundamental domains of *443 symmetry. It only has one subgroup 443, replacing mirrors with gyration points.
This symmetry can be doubled to 832 symmetry by adding a bisecting mirror to the fundamental domain.
Related tilings
From a Wythoff construction there are ten hyperbolic uniform tilings that can be based on the regular octagonal tiling.
It can also be generated from the (4 3 3) hyperbolic tilings:
This hyperbolic tiling is topologically related as a part of a sequence of uniform truncated polyhedra with vertex configurations (n.6.6), and [n,3] Coxeter group symmetry.
See also
Triangular tiling
Order-3 octagonal tiling
Order-8 triangular tiling
Tilings of regular polygons
List of uniform tilings
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Hyperbolic tilings
Isogonal tilings
Order-8 tilings
Semiregular tilings
Triangular tilings
Truncated tilings | Truncated order-8 triangular tiling | [
"Physics"
] | 333 | [
"Semiregular tilings",
"Truncated tilings",
"Isogonal tilings",
"Tessellation",
"Hyperbolic tilings",
"Symmetry"
] |
38,353,616 | https://en.wikipedia.org/wiki/Rhombitrioctagonal%20tiling | In geometry, the rhombitrioctagonal tiling is a semiregular tiling of the hyperbolic plane. At each vertex of the tiling there is one triangle and one octagon, alternating between two squares. The tiling has Schläfli symbol rr{8,3}. It can be seen as constructed as a rectified trioctagonal tiling, r{8,3}, as well as an expanded octagonal tiling or expanded order-8 triangular tiling.
Symmetry
This tiling has [8,3], (*832) symmetry. There is only one uniform coloring.
Similar to the Euclidean rhombitrihexagonal tiling, by edge-coloring there is a half-symmetry form with (3*4) orbifold notation. The octagons can be considered as truncated squares, t{4}, with two types of edges. It has Schläfli symbol s2{3,8}. The squares can be distorted into isosceles trapezoids. In the limit, where the trapezoids degenerate into edges, an order-8 triangular tiling results, constructed as a snub tritetratrigonal tiling.
Related polyhedra and tilings
From a Wythoff construction there are ten hyperbolic uniform tilings that can be based on the regular octagonal tiling.
Drawing the tiles colored as red on the original faces, yellow at the original vertices, and blue along the original edges, there are 8 forms.
Symmetry mutations
This tiling is topologically related as a part of a sequence of cantellated polyhedra with vertex figure (3.4.n.4), and continues as tilings of the hyperbolic plane. These vertex-transitive figures have (*n32) reflectional symmetry.
See also
Rhombitrihexagonal tiling
Order-3 octagonal tiling
Tilings of regular polygons
List of uniform tilings
Kagome lattice
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Hyperbolic tilings
Isogonal tilings
Semiregular tilings | Rhombitrioctagonal tiling | [
"Physics"
] | 505 | [
"Semiregular tilings",
"Isogonal tilings",
"Tessellation",
"Hyperbolic tilings",
"Symmetry"
] |
38,358,381 | https://en.wikipedia.org/wiki/Kathryn%20Moler | Kathryn Ann Moler (born 1966) is an American physicist, and current dean of research at Stanford University. She received her BSc (1988) and Ph.D. (1995) from Stanford University. After working as a visiting scientist at IBM T.J. Watson Research Center in 1995, she held a postdoctoral position at Princeton University from 1995 to 1998. She joined the faculty of Stanford University in 1998, and became an Associate in CIFAR's Superconductivity Program (now called the Quantum Materials Program) in 2000. She became an associate professor (with tenure) at Stanford in 2002 and is currently a professor of applied physics and of Physics at Stanford. She currently works in the Geballe Laboratory for Advanced Materials (GLAM), and is the director of the Center for Probing the Nanoscale (CPN), a National Science Foundation-funded center where Stanford and IBM scientists continue to improve scanning probe methods for measuring, imaging, and controlling nanoscale phenomena. She lists her scientific interests and main areas of research and experimentation as:
Single vortex dynamics in classical and high temperature superconductors,
Spontaneous currents and vortex effects in highly correlated electron systems, and
Mesoscopic superconductors and currents in normal metal rings, with an increasing interest in the spin properties of such small structures.
Career
Early in her career, she and John Kirtley of IBM demonstrated that one of the predictions of a popular theory for high-temperature superconductivity was inaccurate by a factor of 10. In 2011 her research group placed two non-magnetic materials (complex oxides) together and discovered an unexpected result: the layer where the two materials meet has both magnetic and superconducting regions. These are two properties that are normally incompatible, since "superconducting materials, which conduct electricity with no resistance and 100 percent efficiency, normally expel any magnetic field that comes near them." Exploration of this phenomenon will be aimed at discovering whether the properties co-exist uneasily, or whether this marks the discovery of an exotic new form of superconductivity that actively interacts with magnetism.
In May 2018, Moler was named vice provost and dean of research at Stanford University, effective September 1, 2018.
Awards
Member of the U. S. National Academy of Sciences.
Carrington Award for Excellence in Research and Teaching
Stanford Centennial Teaching Assistant
William L. McMillan Award for outstanding contributions in condensed matter physics
Packard Fellowship
Leigh Page Prize Lecturer at Yale University
R.H. Dicke Postdoctoral Fellowship at Princeton University
Frederick Terman Fellowship
Alfred P. Sloan Research Fellowship
Richtmyer Memorial Award 2011
NSF CAREER Award
Publications
"Scanning Probe Manipulation of Magnetism at the LaAlO3/SrTiO3 Heterointerface" — Beena Kalisky: Julie A. Bert, Christopher Bell, Yanwu Xie, Hiroki K. Sato, Masayuki Hosoda, Yasuyuki Hikita, Harold Y. Hwang, and Kathryn A. Moler;
"Critical thickness for ferromagnetism in LaAlO3/SrTiO3 heterostructures" — Beena Kalisky: Julie A. Bert, Brannon B. Klopfer, Christopher Bell, Hiroki K. Sato, Masayuki Hosoda, Yasuyuki Hikita, Harold Y. Hwang & Kathryn A. Moler;
"Scanning SQUID susceptometry of a paramagnetic superconductor" — J. R. Kirtley: B. Kalisky, J. A. Bert, C. Bell, M. Kim, Y. Hikita, H. Y. Hwang, J. H. Ngai, Y. Segal, F. J. Walker, C. H. Ahn, and K. A. Moler;
"Calculation of the effect of random superfluid density on the temperature dependence of the penetration depth" — Thomas M. Lippman: Kathryn A. Moler;
"Direct imaging of the coexistence of ferromagnetism and superconductivity at the LaAlO3/SrTiO3interface" — Julie A. Bert: Beena Kalisky, Christopher Bell, Minu Kim, Yasuyuki Hikita, Harold Y. Hwang & Kathryn A. Moler;
"Behavior of vortices near twin boundaries in underdoped Ba(Fe1-xCox)2As2" — B. Kalisky: J. R. Kirtley, J. G. Analytis, J.-H. Chu, I. R. Fisher, and K. A. Moler;
"Local Measurement of the Superfluid Density in the Pnictide Superconductor Ba(Fe1-xCox)2As2across the Superconducting Dome" — Lan Luan: Thomas M. Lippman, Clifford W. Hicks, Julie A. Bert, Ophir M. Auslaender, Jiun-Haw Chu, James G. Analytis, Ian R. Fisher, and Kathryn A. Moler;
Papers listed at Stanford
Evidence for a Nodal Energy Gap in the Iron-Pnictide Superconductor LaFePO from Penetration Depth Measurements by Scanning SQUID Susceptometry
Terraced Scanning SQUID Susceptometer with Sub-Micron Pickup Loops
Temperature dependence of the half-flux effect
Fluctuation Superconductivity in Mesoscopic Aluminum Rings
Mechanics of Individual, Isolated Vortices in a Cuprate Superconductor
Enhanced superfluid density on twin boundaries in Ba(Fe1-xCox)2As2
A limit on spin-charge separation in high-Tc superconductors from the absence of a vortex-memory effect
Persistent Currents in Normal Metal Rings
Images of interlayer Josephson vortices in Tl2Ba2CuO6+d
Magnetic field dependence of the density of states of YBa2Cu3O6.95 as determined from the specific heat
References
Living people
21st-century American physicists
Stanford University faculty
American women physicists
Superconductivity
Year of birth missing (living people)
1960s births
Members of the United States National Academy of Sciences
21st-century American women scientists
Fellows of the American Physical Society
Recipients of the Presidential Early Career Award for Scientists and Engineers | Kathryn Moler | [
"Physics",
"Materials_science",
"Engineering"
] | 1,307 | [
"Physical quantities",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
38,358,615 | https://en.wikipedia.org/wiki/Order-5%20pentagonal%20tiling | In geometry, the order-5 pentagonal tiling is a regular tiling of the hyperbolic plane. It has Schläfli symbol of {5,5}, constructed from five pentagons around every vertex. As such, it is self-dual.
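As an aside (our own illustrative check, not part of the article), a regular tiling {p, q} is spherical, Euclidean or hyperbolic according to whether (p − 2)(q − 2) is less than, equal to or greater than 4; a minimal Python sketch:

```python
# Classify a regular tiling {p, q} by whether q Euclidean regular p-gons
# fit around a vertex: angle sum < 360 degrees is spherical, = 360 Euclidean,
# > 360 hyperbolic. This is equivalent to comparing (p - 2)(q - 2) with 4.

def tiling_geometry(p, q):
    s = (p - 2) * (q - 2)
    if s > 4:
        return "hyperbolic"
    if s == 4:
        return "Euclidean"
    return "spherical"

print(tiling_geometry(5, 5))   # hyperbolic: the order-5 pentagonal tiling {5,5}
print(tiling_geometry(4, 4))   # Euclidean: the square tiling {4,4}
print(tiling_geometry(5, 3))   # spherical: the dodecahedron {5,3}
```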
Related tilings
This tiling is topologically related as a part of a sequence of regular polyhedra and tilings with vertex figure (5^n).
See also
Square tiling
Uniform tilings in hyperbolic plane
List of regular polytopes
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Hyperbolic tilings
Isogonal tilings
Isohedral tilings
Order-5 tilings
Pentagonal tilings
Regular tilings
Self-dual tilings | Order-5 pentagonal tiling | [
"Physics"
] | 212 | [
"Isogonal tilings",
"Tessellation",
"Self-dual tilings",
"Hyperbolic tilings",
"Isohedral tilings",
"Symmetry"
] |
38,360,352 | https://en.wikipedia.org/wiki/Truncated%20order-5%20pentagonal%20tiling | In geometry, the truncated order-5 pentagonal tiling is a uniform tiling of the hyperbolic plane. It has Schläfli symbol of t0,1{5,5}, constructed from one pentagon and two decagons around every vertex.
Related tilings
See also
Square tiling
Uniform tilings in hyperbolic plane
List of regular polytopes
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Hyperbolic tilings
Isogonal tilings
Order-5 tilings
Pentagonal tilings
Truncated tilings
Uniform tilings | Truncated order-5 pentagonal tiling | [
"Physics"
] | 179 | [
"Truncated tilings",
"Isogonal tilings",
"Tessellation",
"Hyperbolic tilings",
"Uniform tilings",
"Symmetry"
] |
38,360,375 | https://en.wikipedia.org/wiki/Snub%20pentapentagonal%20tiling | In geometry, the snub pentapentagonal tiling is a uniform tiling of the hyperbolic plane. It has Schläfli symbol of sr{5,5}, constructed from two regular pentagons and three equilateral triangles around every vertex.
Images
Drawn in chiral pairs, with edges missing between black triangles:
Symmetry
A double symmetry coloring can be constructed from [5,4] symmetry with only one color of pentagon. It has Schläfli symbol s{5,4}.
Related tilings
See also
Square tiling
Uniform tilings in hyperbolic plane
List of regular polytopes
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Hyperbolic tilings
Isogonal tilings
Snub tilings | Snub pentapentagonal tiling | [
"Physics"
] | 225 | [
"Snub tilings",
"Isogonal tilings",
"Tessellation",
"Hyperbolic tilings",
"Symmetry"
] |
42,570,934 | https://en.wikipedia.org/wiki/Lev%20Sandakchiev | Lev Stepanovich Sandakchiev (11 January 1937, Rostov-on-Don – 29 June 2006, Koltsovo) was a Soviet and Russian scientist, specialist in molecular biology and virology, Doctor of Biology, Professor, and Academician of the Russian Academy of Sciences. He was the founder and first head of the State Research Center of Virology and Biotechnology VECTOR, and its director from 1982 to 2005.
Sources
1. Obituary. In memory of Lev Stepanovich Sandakchiev, Journal "Science in Siberia" http://www.sbras.ru/HBC/article.phtml?nid=381&id=23
2. In memory of Lev Stepanovich Sandakchiev, Russian Academy of Science http://www.ras.ru/win/db/show_per.asp?P=.id-1495.ln-ru
1937 births
2006 deaths
20th-century Russian biologists
Scientists from Rostov-on-Don
Corresponding Members of the USSR Academy of Sciences
D. Mendeleev University of Chemical Technology of Russia alumni
Full Members of the Russian Academy of Sciences
Recipients of the Order of Lenin
Recipients of the Order of the Red Banner of Labour
Recipients of the USSR State Prize
Molecular biologists
Russian microbiologists
Soviet microbiologists | Lev Sandakchiev | [
"Chemistry"
] | 272 | [
"Molecular biologists",
"Biochemists",
"Molecular biology"
] |
42,574,504 | https://en.wikipedia.org/wiki/Plasmid%20partition%20system | A plasmid partition system is a mechanism that ensures the stable inheritance of plasmids during bacterial cell division. Each plasmid has its independent replication system which controls the number of copies of the plasmid in a cell. The higher the copy number, the more likely the two daughter cells will contain the plasmid. Generally, each molecule of plasmid diffuses randomly, so the probability of having a plasmid-less daughter cell is 2^(1−N), where N is the number of copies. For instance, if there are 2 copies of a plasmid in a cell, there is a 50% chance of having one plasmid-less daughter cell. However, high-copy number plasmids have a cost for the hosting cell. This metabolic burden is lower for low-copy plasmids, but those have a higher probability of plasmid loss after a few generations. To control vertical transmission of plasmids, in addition to controlled-replication systems, bacterial plasmids use different maintenance strategies, such as multimer resolution systems, post-segregational killing systems (addiction modules), and partition systems.
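A quick sketch of where that figure comes from (the helper name is ours, purely illustrative): each of the N copies independently ends up in one of the two daughter cells, so one specific daughter misses all copies with probability (1/2)^N, and either daughter can be the empty one.

```python
# Probability that a division produces a plasmid-free daughter cell when
# N plasmid copies segregate randomly: 2 * (1/2)**N = 2**(1 - N).

def plasmid_free_daughter_probability(n_copies):
    return 2 ** (1 - n_copies)

for n in (1, 2, 5, 10):
    print(n, plasmid_free_daughter_probability(n))
# n = 2 gives 0.5, the 50% chance quoted in the text
```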
General properties of partition systems
Plasmid copies are paired around a centromere-like site and then separated in the two daughter cells. Partition systems involve three elements, organized in an auto-regulated operon:
A centromere-like DNA site
Centromere binding proteins (CBP)
The motor protein
The centromere-like DNA site is required in cis for plasmid stability. It often contains one or more inverted repeats which are recognized by multiple CBPs. This forms a nucleoprotein complex termed the partition complex. This complex recruits the motor protein, which is a nucleotide triphosphatase (NTPase). The NTPase uses energy from NTP binding and hydrolysis to directly or indirectly move and attach plasmids to specific host locations (e.g. opposite bacterial cell poles).
The partition systems are divided in four types, based primarily on the type of NTPases:
Type I : Walker type P-loop ATPase
Type II : Actin-like ATPase
Type III : tubulin-like GTPase
Type IV : No NTPase
Type I partition system
This system is also used by most bacteria for chromosome segregation.
Type I partition systems are composed of an ATPase which contains Walker motifs and a CBP which is structurally distinct in types Ia and Ib. ATPases and CBPs from type Ia are longer than those from type Ib, but both CBPs contain an arginine finger in their N-terminal part.
ParA proteins from different plasmids and bacterial species show 25 to 30% of sequence identity to the protein ParA of the plasmid P1.
The partition of type I system uses a "diffusion-ratchet" mechanism. This mechanism works as follows:
Dimers of ParA-ATP dynamically bind to nucleoid DNA
ParA in its ATP-bound state interacts with ParB bound to parS
ParB bound to parS stimulates the release of ParA from the nucleoid region surrounding the plasmid
The plasmid then chases the resulting ParA gradient on the perimeter of the ParA depleted region of the nucleoid
The ParA that was released from the nucleoid behind the plasmid's movement redistributes to other regions of the nucleoid after a delay
After plasmid replication, the sister copies segregate to opposite cell halves as they chase ParA on the nucleoid in opposite directions
There are likely to be differences in the details of type I mechanisms.
Type I partition has been mathematically modelled with variations in the mechanism described above.
Type Ia
The CBP of this type consists in three domains:
N-terminal NTPase binding domain
Central Helix-Turn-Helix (HTH) domain
C-terminal dimer-domain
Type Ib
The CBP of this type, also known as parG, is composed of:
N-terminal NTPase binding domain
Ribbon-Helix-Helix (RHH) domain
For this type, the parS site is called parC.
Type II partition system
This system is the best understood of the plasmid partition systems.
It is composed of an actin-like ATPase, ParM, and a CBP called ParR. The centromere-like site, parC, contains two sets of five 11-base-pair direct repeats separated by the parMR promoter.
The amino-acid sequence identity can go down to 15% between ParM and other actin-like ATPases.
The mechanism of partition involved here is a pushing mechanism:
ParR binds to parC and pairs plasmids which form a nucleoprotein complex, or partition complex
The partition complex serves as a nucleation point for the polymerization of ParM; the ParM-ATP complex inserts at this point and pushes plasmids apart
The insertion leads to hydrolysis of the ParM-ATP complex, leading to depolymerization of the filament
At cell division, plasmid copies are at each cell extremity, and will end up in the future daughter cells
The ParM filament is regulated by polymerization, allowed by the presence of the partition complex (ParR-parC), and by depolymerization, controlled by the ATPase activity of ParM.
Type III partition system
The type III partition system is the most recently discovered partition system. It is composed of a tubulin-like GTPase termed TubZ, and the CBP is termed TubR.
Amino-acid sequence identity can go down to 21% for TubZ proteins.
The mechanism is similar to a treadmill mechanism:
Multiple TubR dimers bind to the centromere-like region stbDRs of the plasmid.
Contact between TubR and filament of treadmilling TubZ polymer. TubZ subunits are lost from the - end and are added to the + end.
TubR-plasmid complex is pulled along the growing polymer until it reaches the cell pole.
Interaction with membrane is likely to trigger the release of the plasmid.
The net result is transport of the partition complex to the cell pole.
Other partition systems
R388 partition system
The partition system of the plasmid R388 has been found within the stb operon. This operon is composed of three genes, stbA, stbB and stbC.
StbA protein is a DNA-binding protein (identical to ParM) and is strictly required for the stability and intracellular positioning of plasmid R388 in E. coli. StbA binds a cis-acting sequence, the stbDRs.
The StbA-stbDRs complex may be used to pair the plasmid with the host chromosome, indirectly using the bacterial partitioning system.
StbB protein has a Walker-type ATPase motif; it favors conjugation but is not required for plasmid stability over generations.
StbC is an orphan protein of unknown function. StbC does not seem to be implicated in either partitioning or conjugation.
StbA and StbB have opposite but connected effects related to conjugation.
This system has been proposed to be the type IV partition system. It is thought to be a derivative of the type I partition system, given the similar operon organization.
This system represents the first evidence for a mechanistic interplay between plasmid segregation and conjugation processes.
pSK1 partition system
pSK1 is a plasmid from Staphylococcus aureus. This plasmid has a partition system determined by a single gene, par, previously known as orf245. This gene does not affect the plasmid copy number or the growth rate (excluding its implication in a post-segregational killing system). A centromere-like binding sequence is present upstream of the par gene, and is composed of seven direct repeats and one inverted repeat.
References
Molecular biology
Mobile genetic elements
Plasmids | Plasmid partition system | [
"Chemistry",
"Biology"
] | 1,680 | [
"Mobile genetic elements",
"Plasmids",
"Molecular genetics",
"Bacteria",
"Molecular biology",
"Biochemistry"
] |
42,575,531 | https://en.wikipedia.org/wiki/Brick-lined%20well | A brick-lined well is a hand-dug water well whose walls are lined with bricks, sometimes called "Dutch bricks" if they are trapezoidal or made on site. The technique is ancient, but is still appropriate in developing countries where labor costs are low and material costs are high.
Antiquity
Hand-dug wells are mentioned in the Bible.
Inscriptions in Mesopotamia tell of construction of brick-lined wells in the period before the rule of Sargon of Akkad (c. 2334 – 2279 BC).
Brick-lined wells have been excavated at Mohenjo-daro and Harappa in the Indus Valley.
Mature Harappan (2600–1900 BC) technology included brick-lined wells, perhaps derived from earlier designs.
One well would have served a neighborhood.
The clay bricks are trapezoidal in shape, with one end smaller than the other.
The bricks are arranged in circles pointing inward. The smaller ends form the inside walls.
In the settlement of Lothal a brick-lined building on an elevated mound included a well lined with baked bricks, a bathing facility and a drain.
Brick-lined wells of more recent date have been found around the world.
They have been found in Sanjan, Gujarat, India, built around the 11th century AD.
Archeological excavations in Virginia, USA, have found what appears to be a brick-lined well from the 17th century.
Brick-lined wells were typical of 19th century farmsteads in rural Illinois.
In the Shijiazhuang area of Hebei, China, irrigation using wells was highly developed before the Revolution.
Five or six men could dig a brick-lined well with a depth of in a week. This could irrigate crops over an area of up to 20 mu.
The same men could dig an unlined well in one day, basically a pit in the ground, but the irrigation capacity was only one fifth of that of the brick-lined well.
Comparison to other linings
In West Africa branches were traditionally used to line hand-dug wells, but this requires use of forest resources that are now often scarce. Old steel barrels can be used to make linings. These can be lowered from the surface as the well is dug and reduce risk when the well is sunk in sand, gravel or some other unstable formation. However, they corrode and deform easily. Cement brick linings are stronger, unlikely to deform, and the courses can be linked structurally.
They are generally cost-effective, although more expensive than barrel linings.
Steel-reinforced wells are stronger again and can be sunk much deeper, but in developing countries their cost is usually prohibitive.
Pre-cast concrete pipe is also an excellent liner, particularly if it has tongue-in-groove joints and a smooth exterior, since it can be used as a crib as the well is deepened.
Again, cost may prohibit its use.
Design
The brick lining will typically rest on a circular concrete well curb.
The lining may have open joints to allow water to enter. In this case the well is often plugged at the bottom and the water enters from the sides. Ballast of diameter is packed around the outside of the lining to prevent sand from flowing into the well. This design may be used in gravel or coarse sand where the water table is shallow.
Impervious wells are made using masonry with cement or lime mortar. They may be sunk deep. Water seeps into a cavity in the open bottom, or comes up from a pipe sunk down from the center of the well into the water-bearing sand.
The top part of the well should prevent foreign matter or surface water from entering the well, so should be impervious.
The top of the well should be protected and the area around the well drained.
The brick lining can greatly improve sanitation if it rises above ground level, preventing contamination of the well water by animal feces.
Peace Corps experience
By 2007 the U.S. Peace Corps had been promoting use of Dutch bricks to build soak pits and wells for many years. The Peace Corps uses the term "Dutch brick" to describe a trapezoidal (as opposed to rectangular) concrete brick used to line a well or soak pit. The brick may be made of a 1:2:3 mix of cement, sand and gravel.
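Purely for illustration (the batch size and helper name below are made up, not taken from the Peace Corps material), splitting a batch of dry ingredients by the 1:2:3 cement:sand:gravel ratio looks like this:

```python
# Split a batch of dry ingredients according to the 1:2:3 mix quoted above.
def mix_volumes(total_volume_litres, ratio=(1, 2, 3)):
    parts = sum(ratio)
    return {name: total_volume_litres * r / parts
            for name, r in zip(("cement", "sand", "gravel"), ratio)}

print(mix_volumes(60))   # {'cement': 10.0, 'sand': 20.0, 'gravel': 30.0}
```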
USAID has supported these efforts, for example providing funds to purchase materials such as cement and rebar for construction of Dutch brick wells in Mali and Mauritania.
The Dutch bricks are used to reinforce the sides of the wells, with the concrete mixed onsite and packed into brick molds.
Dutch bricks made for well lining have a trapezoidal shape, with sloping sides so that they can be fitted into a ring.
The slope can be adjusted for larger or smaller rings.
Lining wells with Dutch bricks in this way allows the well to be dug deeper without fear of the walls collapsing.
Problems may however be encountered with incorrectly shaped molds and inexperienced volunteers.
References
Notes
Citations
Sources
Water wells
Brick buildings and structures
Archaeological features | Brick-lined well | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,023 | [
"Hydrology",
"Water wells",
"Environmental engineering"
] |
42,579,971 | https://en.wikipedia.org/wiki/Inductive%20probability | Inductive probability attempts to give the probability of future events based on past events. It is the basis for inductive reasoning, and gives the mathematical basis for learning and the perception of patterns. It is a source of knowledge about the world.
There are three sources of knowledge: inference, communication, and deduction. Communication relays information found using other methods. Deduction establishes new facts based on existing facts. Inference establishes new facts from data. Its basis is Bayes' theorem.
Information describing the world is written in a language. For example, a simple mathematical language of propositions may be chosen. Sentences may be written down in this language as strings of characters. But in the computer it is possible to encode these sentences as strings of bits (1s and 0s). Then the language may be encoded so that the most commonly used sentences are the shortest. This internal language implicitly represents probabilities of statements.
Occam's razor says that the "simplest theory consistent with the data is most likely to be correct". The "simplest theory" is interpreted as the representation of the theory written in this internal language. The theory with the shortest encoding in this internal language is most likely to be correct.
History
Probability and statistics were focused on probability distributions and tests of significance. Probability was formal, well defined, but limited in scope. In particular, its application was limited to situations that could be defined as an experiment or trial, with a well defined population.
Bayes' theorem is named after Rev. Thomas Bayes (1701–1761). Bayesian inference broadened the application of probability to many situations where a population was not well defined. But Bayes' theorem always depended on prior probabilities to generate new probabilities. It was unclear where these prior probabilities should come from.
Ray Solomonoff developed algorithmic probability circa 1964, which gave an explanation for what randomness is and how patterns in the data may be represented by computer programs that give shorter representations of the data.
Chris Wallace and D. M. Boulton developed minimum message length circa 1968. Later Jorma Rissanen developed the minimum description length circa 1978. These methods allow information theory to be related to probability, in a way that can be compared to the application of Bayes' theorem, but which give a source and explanation for the role of prior probabilities.
Marcus Hutter combined decision theory with the work of Ray Solomonoff and Andrey Kolmogorov to give a theory for the Pareto optimal behavior of an intelligent agent, circa 1998.
Minimum description/message length
The program with the shortest length that matches the data is the most likely to predict future data. This is the thesis behind the minimum message length and minimum description length methods.
At first sight Bayes' theorem appears different from the minimum message/description length principle. At closer inspection it turns out to be the same. Bayes' theorem is about conditional probabilities, and states the probability that event B happens if firstly event A happens:
P(B|A) = P(A ∧ B) / P(A)
which becomes, in terms of message length L,
L(B|A) = L(A ∧ B) − L(A), i.e. L(A ∧ B) = L(A) + L(B|A)
This means that if all the information is given describing an event then the length of the information may be used to give the raw probability of the event. So if the information describing the occurrence of A is given, along with the information describing B given A, then all the information describing A and B has been given.
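A small numerical illustration of this equivalence (the probability values below are made-up assumptions, not from the text): converting probabilities to message lengths with L = −log2 P turns Bayes' division into a subtraction of lengths.

```python
import math

# Bayes in probability form: P(B|A) = P(A and B) / P(A).
# In message-length form:    L(B|A) = L(A and B) - L(A), with L = -log2(P).
# The two probabilities below are illustrative assumptions only.

p_a = 0.25        # probability of event A
p_a_and_b = 0.10  # joint probability of A and B

def length_bits(p):
    """Message length in bits of an event with probability p."""
    return -math.log2(p)

p_b_given_a = p_a_and_b / p_a                              # probability form
l_b_given_a = length_bits(p_a_and_b) - length_bits(p_a)    # message-length form

print(p_b_given_a)          # 0.4
print(2 ** -l_b_given_a)    # ~0.4 again: both forms agree
```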
Overfitting
Overfitting occurs when the model matches the random noise and not the pattern in the data. For example, take the situation where a curve is fitted to a set of points. If a polynomial with many terms is fitted then it can more closely represent the data. Then the fit will be better, and the information needed to describe the deviations from the fitted curve will be smaller. Smaller information length means higher probability.
However, the information needed to describe the curve must also be considered. The total information for a curve with many terms may be greater than for a curve with fewer terms, that has not as good a fit, but needs less information to describe the polynomial.
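As a rough illustration of this trade-off, the sketch below compares the total description length of a one-parameter constant model against a two-parameter straight-line model on synthetic data. The cost of 32 bits per coefficient and the idealized Gaussian code for the residuals are assumptions made for the example; the model with the smaller total is the one preferred.

```cpp
// Minimal MDL comparison: a one-parameter constant model versus a
// two-parameter straight-line model fitted to the same data.
// Total description length = parameter cost + residual cost.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Bits needed to encode the residuals under an idealized Gaussian code
// with variance equal to the empirical mean-square error.
double residual_bits(double mse, std::size_t n) {
    const double pi = 3.14159265358979, e = 2.71828182845905;
    return 0.5 * n * std::log2(2.0 * pi * e * std::max(mse, 1e-12));
}

int main() {
    // Synthetic data: a straight line plus a small wiggle standing in for noise.
    std::vector<double> x, y;
    for (int i = 0; i < 20; ++i) {
        x.push_back(i);
        y.push_back(2.0 * i + 1.0 + 0.3 * std::sin(7.0 * i));
    }
    const std::size_t n = x.size();

    // Model 1: a constant, the mean of y.
    double mean = 0.0;
    for (double v : y) mean += v;
    mean /= n;
    double mse_const = 0.0;
    for (std::size_t i = 0; i < n; ++i) mse_const += (y[i] - mean) * (y[i] - mean);
    mse_const /= n;

    // Model 2: least-squares line y = a*x + b.
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (std::size_t i = 0; i < n; ++i) {
        sx += x[i]; sy += y[i]; sxx += x[i] * x[i]; sxy += x[i] * y[i];
    }
    double a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    double b = (sy - a * sx) / n;
    double mse_line = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        double r = y[i] - (a * x[i] + b);
        mse_line += r * r;
    }
    mse_line /= n;

    const double bits_per_param = 32.0;  // assumed cost of describing one coefficient
    double mdl_const = 1 * bits_per_param + residual_bits(mse_const, n);
    double mdl_line  = 2 * bits_per_param + residual_bits(mse_line, n);
    std::printf("constant model: %.1f bits, line model: %.1f bits\n", mdl_const, mdl_line);
    // The smaller total description length identifies the preferred model.
}
```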
Inference based on program complexity
Solomonoff's theory of inductive inference is also inductive inference. A bit string x is observed. Then consider all programs that generate strings starting with x. Cast in the form of inductive inference, the programs are theories that imply the observation of the bit string x.
The method used here to give probabilities for inductive inference is based on Solomonoff's theory of inductive inference.
Detecting patterns in the data
If all the bits are 1, then people infer that there is a bias in the coin and that it is more likely that the next bit is also 1. This is described as learning from, or detecting a pattern in, the data.
Such a pattern may be represented by a computer program. A short computer program may be written that produces a series of bits which are all 1. If the length of the program is K bits then its prior probability is,

P = 2^(−K)
The length of the shortest program that represents the string of bits is called the Kolmogorov complexity.
Kolmogorov complexity is not computable. This is related to the halting problem. When searching for the shortest program some programs may go into an infinite loop.
Considering all theories
The Greek philosopher Epicurus is quoted as saying "If more than one theory is consistent with the observations, keep all theories".
As in a crime novel all theories must be considered in determining the likely murderer, so with inductive probability all programs must be considered in determining the likely future bits arising from the stream of bits.
Programs that are already longer than n have no predictive power. The raw (or prior) probability that the pattern of bits is random (has no pattern) is 2^(−n).
Each program that produces the sequence of bits, but is shorter than n, is a theory/pattern about the bits with a probability of 2^(−k), where k is the length of the program.
The probability of receiving a sequence of bits y after receiving a series of bits x is then the conditional probability of receiving y given x, which is the probability of x with y appended, divided by the probability of x.
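The following toy sketch illustrates this style of prediction. The four hand-written "programs" and their description lengths are invented stand-ins for the programs of a universal machine; each program consistent with the observed bits is weighted by 2 raised to minus its length, and the weights are combined to estimate the probability of the next bit.

```cpp
#include <cmath>
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

// A toy "program": a rule generating an infinite bit string, plus an
// assumed description length (in bits) for that rule.
struct Program {
    std::string name;
    int length;                                  // assumed description length in bits
    std::function<int(std::size_t)> bit;         // bit produced at position i
};

int main() {
    std::vector<Program> programs = {
        {"all ones",       3, [](std::size_t)   { return 1; }},
        {"all zeros",      3, [](std::size_t)   { return 0; }},
        {"alternating",    5, [](std::size_t i) { return int(i % 2); }},
        {"ones then zeros",9, [](std::size_t i) { return i < 7 ? 1 : 0; }},
    };

    std::string observed = "1111111";            // the bits seen so far

    double weight_total = 0.0, weight_next_one = 0.0;
    for (const Program& p : programs) {
        bool consistent = true;
        for (std::size_t i = 0; i < observed.size(); ++i)
            if (p.bit(i) != observed[i] - '0') { consistent = false; break; }
        if (!consistent) continue;               // program ruled out by the data
        double w = std::pow(2.0, -p.length);     // prior weight 2^(-length)
        weight_total += w;
        if (p.bit(observed.size()) == 1) weight_next_one += w;
    }

    if (weight_total > 0.0)
        std::printf("P(next bit = 1 | observed) ~ %.3f\n", weight_next_one / weight_total);
}
```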
Universal priors
The programming language affects the predictions of the next bit in the string. The language acts as a prior probability. This is particularly a problem where the programming language codes for numbers and other data types. Intuitively we think that 0 and 1 are simple numbers, and that prime numbers are somehow more complex than numbers that may be composite.
Using the Kolmogorov complexity gives an unbiased estimate (a universal prior) of the prior probability of a number. As a thought experiment an intelligent agent may be fitted with a data input device giving a series of numbers, after applying some transformation function to the raw numbers. Another agent might have the same input device with a different transformation function. The agents do not see or know about these transformation functions. Then there appears no rational basis for preferring one function over another. A universal prior ensures that although two agents may have different initial probability distributions for the data input, the difference will be bounded by a constant.
So universal priors do not eliminate an initial bias, but they reduce and limit it. Whenever we describe an event in a language, whether a natural language or another, the language has encoded in it our prior expectations. So some reliance on prior probabilities is inevitable.
A problem arises where an intelligent agent's prior expectations interact with the environment to form a self-reinforcing feedback loop. This is the problem of bias or prejudice. Universal priors reduce but do not eliminate this problem.
Universal artificial intelligence
The theory of universal artificial intelligence applies decision theory to inductive probabilities. The theory shows how the best actions to optimize a reward function may be chosen. The result is a theoretical model of intelligence.
It is a fundamental theory of intelligence, which optimizes the agent's behavior in,
Exploring the environment; performing actions to get responses that broaden the agent's knowledge.
Competing or co-operating with another agent; games.
Balancing short and long term rewards.
In general no agent will always provide the best actions in all situations. A particular choice made by an agent may be wrong, and the environment may provide no way for the agent to recover from an initial bad choice. However the agent is Pareto optimal in the sense that no other agent will do better than this agent in this environment, without doing worse in another environment. No other agent may, in this sense, be said to be better.
At present the theory is limited by incomputability (the halting problem). Approximations may be used to avoid this. Processing speed and combinatorial explosion remain the primary limiting factors for artificial intelligence.
Probability
Probability is the representation of uncertain or partial knowledge about the truth of statements. Probabilities are subjective and personal estimates of likely outcomes based on past experience and inferences made from the data.
This description of probability may seem strange at first. In natural language we refer to "the probability" that the sun will rise tomorrow. We do not refer to "your probability" that the sun will rise. But in order for inference to be correctly modeled probability must be personal, and the act of inference generates new posterior probabilities from prior probabilities.
Probabilities are personal because they are conditional on the knowledge of the individual. Probabilities are subjective because they always depend, to some extent, on prior probabilities assigned by the individual. Subjective should not be taken here to mean vague or undefined.
The term intelligent agent is used to refer to the holder of the probabilities. The intelligent agent may be a human or a machine. If the intelligent agent does not interact with the environment then the probability will converge over time to the frequency of the event.
If however the agent uses the probability to interact with the environment there may be a feedback, so that two agents in the identical environment starting with only slightly different priors, end up with completely different probabilities. In this case optimal decision theory as in Marcus Hutter's Universal Artificial Intelligence will give Pareto optimal performance for the agent. This means that no other intelligent agent could do better in one environment without doing worse in another environment.
Comparison to deductive probability
In deductive probability theories, probabilities are absolutes, independent of the individual making the assessment. But deductive probabilities are based on,
Shared knowledge.
Assumed facts, that should be inferred from the data.
For example, in a trial the participants are aware of the outcomes of all previous trials. They also assume that each outcome is equally probable. Together this allows a single unconditional value of probability to be defined.
But in reality each individual does not have the same information. And in general the probability of each outcome is not equal. The dice may be loaded, and this loading needs to be inferred from the data.
Probability as estimation
The principle of indifference has played a key role in probability theory. It says that if N statements are symmetric so that one condition cannot be preferred over another then all statements are equally probable.
Taken seriously, in evaluating probability this principle leads to contradictions. Suppose there are three bags of gold in the distance and one is asked to select one. Because of the distance one cannot see the bag sizes, so using the principle of indifference one estimates that each bag has an equal amount of gold: each bag has one third of the gold.
Now, while one is not looking, someone takes one of the bags and divides it into three bags. Now there are five bags of gold. The principle of indifference now says each bag has one fifth of the gold. A bag that was estimated to have one third of the gold is now estimated to have one fifth of the gold.
Taken as a value associated with the bag the values are different therefore contradictory. But taken as an estimate given under a particular scenario, both values are separate estimates given under different circumstances and there is no reason to believe they are equal.
Estimates of prior probabilities are particularly suspect. Estimates will be constructed that do not follow any consistent frequency distribution. For this reason prior probabilities are considered as estimates of probabilities rather than probabilities.
A full theoretical treatment would associate with each probability,
The statement
Prior knowledge
Prior probabilities
The estimation procedure used to give the probability.
Combining probability approaches
Inductive probability combines two different approaches to probability.
Probability and information
Probability and frequency
Each approach gives a slightly different viewpoint. Information theory is used in relating probabilities to quantities of information. This approach is often used in giving estimates of prior probabilities.
Frequentist probability defines probabilities as objective statements about how often an event occurs. This approach may be stretched by defining the trials to be over possible worlds. Statements about possible worlds define events.
Probability and information
Whereas logic represents only two values, true and false, as the values of a statement, probability associates a number in [0, 1] with each statement. If the probability of a statement is 0, the statement is false. If the probability of a statement is 1, the statement is true.
In considering some data as a string of bits, the prior probabilities of a 1 and of a 0 are equal. Therefore, each extra bit halves the probability of a sequence of bits.
This leads to the conclusion that,

P(x) = 2^(−L(x))

where P(x) is the probability of the string of bits x and L(x) is its length.
The prior probability of any statement is calculated from the number of bits needed to state it. See also information theory.
Combining information
Two statements A and B may be represented by two separate encodings. Then the length of the encoding is,

L(A ∧ B) = L(A) + L(B)

or in terms of probability,

P(A ∧ B) = P(A) · P(B)

But this law is not always true because there may be a shorter method of encoding B if we assume A. So the above probability law applies only if A and B are "independent".
The internal language of information
The primary use of the information approach to probability is to provide estimates of the complexity of statements. Recall that Occam's razor states that "All things being equal, the simplest theory is the most likely to be correct". In order to apply this rule, first there needs to be a definition of what "simplest" means. Information theory defines simplest to mean having the shortest encoding.
Knowledge is represented as statements. Each statement is a Boolean expression. Expressions are encoded by a function that takes a description (as against the value) of the expression and encodes it as a bit string.
The length of the encoding of a statement gives an estimate of the probability of a statement. This probability estimate will often be used as the prior probability of a statement.
Technically this estimate is not a probability because it is not constructed from a frequency distribution. The probability estimates given by it do not always obey the law of total probability. Applying the law of total probability to various scenarios will usually give a more accurate estimate of the prior probability than the estimate from the length of the statement.
Encoding expressions
An expression is constructed from sub expressions,
Constants (including function identifier).
Application of functions.
quantifiers.
A Huffman code must distinguish the 3 cases. The length of each code is based on the frequency of each type of sub expression.
Initially constants are all assigned the same length/probability. Later constants may be assigned a probability using the Huffman code based on the number of uses of the function id in all expressions recorded so far. In using a Huffman code the goal is to estimate probabilities, not to compress the data.
The length of a function application is the length of the function identifier constant plus the sum of the sizes of the expressions for each parameter.
The length of a quantifier is the length of the expression being quantified over.
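A minimal sketch of how such code lengths might be derived is given below. The sub-expression kinds and their usage counts are invented for illustration; the Huffman construction itself is standard, and the resulting code length of each kind serves as its probability estimate (2 raised to minus that length), in line with the goal of estimating probabilities rather than compressing data.

```cpp
#include <cstdio>
#include <queue>
#include <string>
#include <vector>

// Build Huffman code lengths from usage counts of each sub-expression kind.
struct Node {
    long count;
    int symbol;              // index into names, or -1 for an internal node
    Node* left;
    Node* right;
};

struct ByCount {
    bool operator()(const Node* a, const Node* b) const { return a->count > b->count; }
};

// Walk the tree and record the depth of each leaf as its code length.
void assign_lengths(const Node* n, int depth, std::vector<int>& lengths) {
    if (!n) return;
    if (n->symbol >= 0) { lengths[n->symbol] = depth == 0 ? 1 : depth; return; }
    assign_lengths(n->left, depth + 1, lengths);
    assign_lengths(n->right, depth + 1, lengths);
}

int main() {
    std::vector<std::string> names = {"constant", "function application", "quantifier"};
    std::vector<long> counts = {90, 40, 10};     // invented usage counts

    std::priority_queue<Node*, std::vector<Node*>, ByCount> heap;
    for (int i = 0; i < (int)names.size(); ++i)
        heap.push(new Node{counts[i], i, nullptr, nullptr});
    while (heap.size() > 1) {
        Node* a = heap.top(); heap.pop();
        Node* b = heap.top(); heap.pop();
        heap.push(new Node{a->count + b->count, -1, a, b});
    }

    std::vector<int> lengths(names.size(), 0);
    assign_lengths(heap.top(), 0, lengths);
    for (std::size_t i = 0; i < names.size(); ++i)
        std::printf("%-22s length %d bits (implied probability 2^-%d)\n",
                    names[i].c_str(), lengths[i], lengths[i]);
    // Memory is intentionally not freed in this short demo.
}
```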
Distribution of numbers
No explicit representation of natural numbers is given. However natural numbers may be constructed by applying the successor function to 0, and then applying other arithmetic functions. A distribution of natural numbers is implied by this, based on the complexity of constructing each number.
Rational numbers are constructed by the division of natural numbers. The simplest representation has no common factors between the numerator and the denominator. This allows the probability distribution of natural numbers to be extended to rational numbers.
Probability and frequency
The probability of an event may be interpreted as the frequency of outcomes where the statement is true divided by the total number of outcomes. If the outcomes form a continuum the frequency may need to be replaced with a measure.
Events are sets of outcomes. Statements may be related to events. A Boolean statement B about outcomes defines a set of outcomes b,

b = { x : B(x) is true }
Conditional probability
Each probability is always associated with the state of knowledge at a particular point in the argument. Probabilities before an inference are known as prior probabilities, and probabilities after are known as posterior probabilities.
Probability depends on the facts known. The truth of a fact limits the domain of outcomes to the outcomes consistent with the fact. Prior probabilities are the probabilities before a fact is known. Posterior probabilities are the probabilities after a fact is known. The posterior probabilities are said to be conditional on the fact. The probability that a statement A is true given that a statement B is true is written as P(A|B).
All probabilities are in some sense conditional. The prior probability of A is written P(A).
The frequentist approach applied to possible worlds
In the frequentist approach, probabilities are defined as the ratio of the number of outcomes within an event to the total number of outcomes. In the possible world model each possible world is an outcome, and statements about possible worlds define events. The probability of a statement being true is the number of possible worlds where the statement is true divided by the total number of possible worlds. The probability of a statement A being true about possible worlds is then,

P(A) = (number of worlds in which A is true) / (total number of worlds)

For a conditional probability,

P(A|B) = (number of worlds in which both A and B are true) / (number of worlds in which B is true)

then

P(A ∧ B) = P(A|B) · P(B) = P(B|A) · P(A)

Using symmetry this equation may be written out as Bayes' law,

P(A|B) = P(B|A) · P(A) / P(B)

This law describes the relationship between prior and posterior probabilities when new facts are learnt.
Written as quantities of information Bayes' theorem becomes,

L(A|B) = L(B|A) + L(A) − L(B)
Two statements A and B are said to be independent if knowing the truth of A does not change the probability of B. Mathematically this is,

P(B|A) = P(B)

then Bayes' theorem reduces to,

P(A|B) = P(A)
The law of total probability
For a set of mutually exclusive possibilities Ai, the sum of the posterior probabilities must be 1,

Σi P(Ai|B) = 1

Substituting using Bayes' theorem gives the law of total probability,

Σi P(B|Ai) · P(Ai) = P(B)

This result is used to give the extended form of Bayes' theorem,

P(Ai|B) = P(B|Ai) · P(Ai) / Σj P(B|Aj) · P(Aj)

This is the usual form of Bayes' theorem used in practice, because it guarantees the sum of all the posterior probabilities for the Ai is 1.
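The sketch below applies this extended form to a concrete, invented example: three mutually exclusive hypotheses about a possibly loaded coin, updated on an observed run of mostly heads. The denominator computed in the last loop is exactly the law of total probability; all the numbers used are illustrative.

```cpp
#include <cstdio>
#include <vector>

// Extended form of Bayes' theorem applied to a set of mutually exclusive
// hypotheses about a coin: fair, biased towards heads, biased towards tails.
int main() {
    std::vector<const char*> names = {"fair", "heads-biased", "tails-biased"};
    std::vector<double> p_heads   = {0.5, 0.9, 0.1};   // P(heads) under each hypothesis
    std::vector<double> prior     = {0.90, 0.05, 0.05};// prior over the hypotheses

    // Observed data: 8 heads, 2 tails.
    int heads = 8, tails = 2;

    // Likelihood of the data under each hypothesis.
    std::vector<double> likelihood(3);
    for (int i = 0; i < 3; ++i) {
        double l = 1.0;
        for (int h = 0; h < heads; ++h) l *= p_heads[i];
        for (int t = 0; t < tails; ++t) l *= 1.0 - p_heads[i];
        likelihood[i] = l;
    }

    // P(data) from the law of total probability, then the posterior for each hypothesis.
    double p_data = 0.0;
    for (int i = 0; i < 3; ++i) p_data += likelihood[i] * prior[i];
    for (int i = 0; i < 3; ++i)
        std::printf("P(%s | data) = %.4f\n", names[i], likelihood[i] * prior[i] / p_data);
}
```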
Alternate possibilities
For mutually exclusive possibilities, the probabilities add: if A ∧ B is false then,

P(A ∨ B) = P(A) + P(B)

Using

A ∨ B = (A ∧ B) ∨ (A ∧ ¬B) ∨ (¬A ∧ B)

Then the alternatives

A ∧ B, A ∧ ¬B, ¬A ∧ B

are all mutually exclusive. Also,

P(A) = P(A ∧ B) + P(A ∧ ¬B)
P(B) = P(A ∧ B) + P(¬A ∧ B)

so, putting it all together,

P(A ∨ B) = P(A) + P(B) − P(A ∧ B)
Negation
As,

P(A ∨ ¬A) = P(A) + P(¬A) = 1

then

P(¬A) = 1 − P(A)
Implication and conditional probability
Implication is related to conditional probability by the following equation,

A → B if and only if P(B|A) = 1

Derivation,
If A → B is true then A ∧ ¬B is false, so P(A ∧ B) = P(A), and hence P(B|A) = P(A ∧ B) / P(A) = 1.
Bayesian hypothesis testing
Bayes' theorem may be used to estimate the probability of a hypothesis or theory H, given some facts F. The posterior probability of H is then

P(H|F) = P(F|H) · P(H) / P(F)

or in terms of information,

L(H|F) = L(F|H) + L(H) − L(F)

By assuming the hypothesis is true, a simpler representation of the statement F may be given. The length of the encoding of this simpler representation is L(F|H).
L(F|H) represents the amount of information needed to represent the facts F, if H is true. L(F) is the amount of information needed to represent F without the hypothesis H. The difference, L(F) − L(F|H), is how much the representation of the facts has been compressed by assuming that H is true. This is the evidence that the hypothesis H is true.
If the probability is estimated from encoding lengths in this way then the value obtained will not be between 0 and 1. The value obtained is proportional to the probability, without being a good probability estimate. The number obtained is sometimes referred to as a relative probability, being how much more probable the theory is than not holding the theory.
If a full set of mutually exclusive hypotheses that provide evidence is known, a proper estimate may be given for the prior probability P(F).
Set of hypothesis
Probabilities may be calculated from the extended form of Bayes' theorem. Given all mutually exclusive hypotheses Hi which give evidence, such that,
and also the hypothesis R, that none of the hypotheses is true, then,
In terms of information,
In most situations it is a good approximation to assume that is independent of , which means giving,
Boolean inductive inference
Abductive inference starts with a set of facts F which is a statement (Boolean expression). Abductive reasoning is of the form,
A theory T implies the statement F. As the theory T is simpler than F, abduction says that there is a probability that the theory T is implied by F.
The theory T, also called an explanation of the condition F, is an answer to the ubiquitous factual "why" question. For example, for the condition F the question is "Why do apples fall?". The answer is a theory T that implies that apples fall;
Inductive inference is of the form,
All observed objects in a class C have a property P. Therefore there is a probability that all objects in a class C have a property P.
In terms of abductive inference, all objects in a class C or set have a property P is a theory that implies the observed condition, All observed objects in a class C have a property P.
So inductive inference is a general case of abductive inference. In common usage the term inductive inference is often used to refer to both abductive and inductive inference.
Generalization and specialization
Inductive inference is related to generalization. Generalizations may be formed from statements by replacing a specific value with membership of a category, or by replacing membership of a category with membership of a broader category. In deductive logic, generalization is a powerful method of generating new theories that may be true. In inductive inference generalization generates theories that have a probability of being true.
The opposite of generalization is specialization. Specialization is used in applying a general rule to a specific case. Specializations are created from generalizations by replacing membership of a category by a specific value, or by replacing a category with a sub category.
The Linnaean classification of living things and objects forms the basis for generalization and specialization. The ability to identify, recognize and classify is the basis for generalization. Perceiving the world as a collection of objects appears to be a key aspect of human intelligence. It is the object oriented model, in the non computer science sense.
The object oriented model is constructed from our perception. In particular, vision is based on the ability to compare two images and calculate how much information is needed to morph or map one image into another. Computer vision uses this mapping to construct 3D images from stereo image pairs.
Inductive logic programming is a means of constructing theory that implies a condition. Plotkin's "relative least general generalization (rlgg)" approach constructs the simplest generalization consistent with the condition.
Newton's use of induction
Isaac Newton used inductive arguments in constructing his law of universal gravitation. Starting with the statement,
The center of an apple falls towards the center of the Earth.
Generalizing by replacing the apple with an object, and the Earth with another object, gives, in a two body system,
The center of an object falls towards the center of another object.
The theory explains all objects falling, so there is strong evidence for it. The second observation,
The planets appear to follow an elliptical path.
After some complicated mathematical calculus, it can be seen that if the acceleration follows the inverse square law then objects will follow an ellipse. So induction gives evidence for the inverse square law.
Using Galileo's observation that all objects drop with the same speed,
where and vectors towards the center of the other object. Then using Newton's third law
Probabilities for inductive inference
Implication determines conditional probability as,

T → F implies P(F|T) = 1

So,

L(F|T) = 0

This result may be used in the probabilities given for Bayesian hypothesis testing. For a single theory, H = T and,

P(T|F) = P(T) / P(F)

or in terms of information, the relative probability is,

L(T|F) = L(T) − L(F)

Note that this estimate for P(T|F) is not a true probability. If L(T) < L(F) then the theory has evidence to support it. Then for a set of theories Ti, such that each Ti → F,
giving,
Derivations
Derivation of inductive probability
Make a list of all the shortest programs that each produce a distinct infinite string of bits, and satisfy the relation,
where is the result of running the program and truncates the string after n bits.
The problem is to calculate the probability that the source is produced by program given that the truncated source after n bits is x. This is represented by the conditional probability,
Using the extended form of Bayes' theorem
The extended form relies on the law of total probability. This means that the programs must be distinct possibilities, which is given by the condition that each produces a different infinite string. Also one of the conditions must be true. This must be true, as in the limit there is always at least one program that produces the observed string.
As are chosen so that then,
The a priori probability of the string being produced from the program, given no information about the string, is based on the size of the program,
giving,
Programs that are the same or longer than the length of x provide no predictive power. Separate them out giving,
Then identify the two probabilities as,
But the prior probability that x is a random set of bits is . So,
The probability that the source is random, or unpredictable is,
A model for inductive inference
A model of how worlds are constructed is used in determining the probabilities of theories,
A random bit string is selected.
A condition is constructed from the bit string.
A world is constructed that is consistent with the condition.
If w is the bit string then the world is created such that is true. An intelligent agent has some facts about the world, represented by the bit string c, which gives the condition,
The set of bit strings identical with any condition x is .
A theory is a simpler condition that explains (or implies) C. The set of all such theories is called T,
Applying Bayes' theorem
extended form of Bayes' theorem may be applied
where,
To apply Bayes' theorem the following must hold: is a partition of the event space.
For to be a partition, no bit string n may belong to two theories. To prove this assume they can and derive a contradiction,
Secondly prove that T includes all outcomes consistent with the condition. As all theories consistent with C are included then must be in this set.
So Bayes theorem may be applied as specified giving,
Using the implication and condition probability law, the definition of implies,
The probability of each theory in T is given by,
so,
Finally the probabilities of the events may be identified with the probabilities of the condition which the outcomes in the event satisfy,
giving
This is the probability of the theory t after observing that the condition C holds.
Removing theories without predictive power
Theories that are less probable than the condition C have no predictive power. Separate them out giving,
The probability of the theories without predictive power on C is the same as the probability of C. So,
So the probability
and the probability of no prediction for C, written as ,
The probability of a condition was given as,
Bit strings for theories that are more complex than the bit string given to the agent as input have no predictive power. Their probabilities are better included in the random case. To implement this a new definition is given as F in,
Using F, an improved version of the abductive probabilities is,
Key people
William of Ockham
Thomas Bayes
Ray Solomonoff
Andrey Kolmogorov
Chris Wallace
D. M. Boulton
Jorma Rissanen
Marcus Hutter
See also
Abductive reasoning
Algorithmic probability
Algorithmic information theory
Bayesian inference
Information theory
Inductive inference
Inductive logic programming
Inductive reasoning
Learning
Minimum message length
Minimum description length
Occam's razor
Solomonoff's theory of inductive inference
Universal artificial intelligence
References
External links
Rathmanner, S and Hutter, M., "A Philosophical Treatise of Universal Induction" in Entropy 2011, 13, 1076–1136: A very clear philosophical and mathematical analysis of Solomonoff's Theory of Inductive Inference.
C.S. Wallace, Statistical and Inductive Inference by Minimum Message Length, Springer-Verlag (Information Science and Statistics), , May 2005 – chapter headings, table of contents and sample pages.
Philosophy of statistics
Inductive reasoning
Inference
Machine learning
Probability theory | Inductive probability | [
"Mathematics",
"Engineering"
] | 5,828 | [
"Artificial intelligence engineering",
"Philosophy of statistics",
"Machine learning"
] |
42,579,992 | https://en.wikipedia.org/wiki/Quasi-fibration | In algebraic topology, a quasifibration is a generalisation of fibre bundles and fibrations introduced by Albrecht Dold and René Thom. Roughly speaking, it is a continuous map p: E → B having the same behaviour as a fibration regarding the (relative) homotopy groups of E, B and p−1(x). Equivalently, one can define a quasifibration to be a continuous map such that the inclusion of each fibre into its homotopy fibre is a weak equivalence. One of the main applications of quasifibrations lies in proving the Dold-Thom theorem.
Definition
A continuous surjective map of topological spaces p: E → B is called a quasifibration if it induces isomorphisms
for all x ∈ B, y ∈ p−1(x) and i ≥ 0. For i = 0,1 one can only speak of bijections between the two sets.
By definition, quasifibrations share a key property of fibrations, namely that a quasifibration p: E → B induces a long exact sequence of homotopy groups

⋯ → πi(p−1(x), y) → πi(E, y) → πi(B, x) → πi−1(p−1(x), y) → ⋯
as follows directly from the long exact sequence for the pair (E, p−1(x)).
This long exact sequence is also functorial in the following sense: Any fibrewise map f: E → E′ induces a morphism between the exact sequences of the pairs (E, p−1(x)) and (E′, p′−1(x)) and therefore a morphism between the exact sequences of a quasifibration. Hence, the diagram
commutes with f0 being the restriction of f to p−1(x) and x′ being an element of the form p′(f(e)) for an e ∈ p−1(x).
An equivalent definition is saying that a surjective map p: E → B is a quasifibration if the inclusion of the fibre p−1(b) into the homotopy fibre Fb of p over b is a weak equivalence for all b ∈ B. To see this, recall that Fb is the fibre of q under b where q: Ep → B is the usual path fibration construction. Thus, one has

Ep = {(e, γ) ∈ E × B^I : γ(0) = p(e)}, where B^I is the space of paths in B,
and q is given by q(e, γ) = γ(1). Now consider the natural homotopy equivalence φ : E → Ep, given by φ(e) = (e, p(e)), where p(e) denotes the corresponding constant path. By definition, p factors through Ep such that one gets a commutative diagram
Applying πn yields the alternative definition.
Examples
Every Serre fibration is a quasifibration. This follows from the Homotopy lifting property.
The projection of the letter L onto its base interval is a quasifibration, but not a fibration. More generally, the projection Mf → I of the mapping cylinder of a map f: X → Y between connected CW complexes onto the unit interval is a quasifibration if and only if πi(Mf, p−1(b)) = 0 = πi(I, b) holds for all i ∈ I and b ∈ B. But by the long exact sequence of the pair (Mf, p−1(b)) and by Whitehead's theorem, this is equivalent to f being a homotopy equivalence. For topological spaces X and Y in general, it is equivalent to f being a weak homotopy equivalence. Furthermore, if f is not surjective, non-constant paths in I starting at 0 cannot be lifted to paths starting at a point of Y outside the image of f in Mf. This means that the projection is not a fibration in this case.
The map SP(p) : SP(X) → SP(X/A) induced by the projection p: X → X/A is a quasifibration for a CW pair (X, A) consisting of two connected spaces. This is one of the main statements used in the proof of the Dold-Thom theorem. In general, this map also fails to be a fibration.
Properties
The following is a direct consequence of the alternative definition of a fibration using the homotopy fibre:
Theorem. Every quasifibration p: E → B factors through a fibration whose fibres are weakly homotopy equivalent to the ones of p.
A corollary of this theorem is that all fibres of a quasifibration are weakly homotopy equivalent if the base space is path-connected, as this is the case for fibrations.
Checking whether a given map is a quasifibration tends to be quite tedious. The following two theorems are designed to make this problem easier. They will make use of the following notion: Let p: E → B be a continuous map. A subset U ⊂ p(E) is called distinguished (with respect to p) if p: p−1(U) → U is a quasifibration.
Theorem. If the open subsets U,V and U ∩ V are distinguished with respect to the continuous map p: E → B, then so is U ∪ V.
Theorem. Let p: E → B be a continuous map where B is the inductive limit of a sequence B1 ⊂ B2 ⊂ ... All Bn are moreover assumed to satisfy the first separation axiom. If all the Bn are distinguished, then p is a quasifibration.
To see that the latter statement holds, one only needs to bear in mind that continuous images of compact sets in B already lie in some Bn. That way, one can reduce it to the case where the assertion is known.
These two theorems mean that it suffices to show that a given map is a quasifibration on certain subsets. Then one can patch these together in order to see that it holds on bigger subsets and finally, using a limiting argument, one sees that the map is a quasifibration on the whole space. This procedure has e.g. been used in the proof of the Dold-Thom theorem.
Notes
References
External links
Quasifibrations and homotopy pullbacks on MathOverflow
Quasifibrations from the Lehigh University
Algebraic topology | Quasi-fibration | [
"Mathematics"
] | 1,305 | [
"Fields of abstract algebra",
"Topology",
"Algebraic topology"
] |
58,971,171 | https://en.wikipedia.org/wiki/Adjustable%20pressure-limiting%20valve | An adjustable pressure-limiting valve (commonly abbreviated to APL valve, and also referred to as an expiratory valve, relief valve or spill valve) is a type of flow control valve used in anaesthesiology as part of a breathing system. It allows excess fresh gas flow and exhaled gases to leave the system while preventing ambient air from entering.
Mechanism
Such valves were first described by the American dentist Jay Heidbrink, who used a thin disc that was held in place by a spring. The valve is adjustable and spring-loaded, allowing the opening pressure of the valve to be controlled by screwing the valve top which modifies the pressure on the spring. A very light spring is used, so that at its minimum setting the valve can be opened by the patient's breathing alone using low pressures. In contemporary APL valves, three orifices or "ports" are present: one for intake of gas, one for return of gas to the patient, and an exhaust port for waste gas which can be connected to a scavenging system.
References
Anesthetic equipment
History of anesthesia
Mechanical ventilation
Safety valves
Valves | Adjustable pressure-limiting valve | [
"Physics",
"Chemistry",
"Engineering"
] | 230 | [
"Physical systems",
"Valves",
"Hydraulics",
"Piping",
"Industrial safety devices",
"Safety valves"
] |
58,977,312 | https://en.wikipedia.org/wiki/Blasius%20theorem | In fluid dynamics, Blasius theorem states that the force experienced by a two-dimensional fixed body in a steady irrotational flow is given by
and the moment about the origin experienced by the body is given by
Here,
F = (Fx, Fy) is the force acting on the body,
ρ is the density of the fluid,
C is the contour flush around the body,
w = φ + iψ is the complex potential (φ is the velocity potential, ψ is the stream function),
dw/dz = vx − i vy is the complex velocity (v = (vx, vy) is the velocity vector),
z = x + iy is the complex variable (r = (x, y) is the position vector),
Re is the real part of the complex number, and
M0 is the moment about the coordinate origin acting on the body.
The first formula is sometimes called Blasius–Chaplygin formula.
The theorem is named after Paul Richard Heinrich Blasius, who derived it in 1911. The Kutta–Joukowski theorem directly follows from this theorem.
References
Fluid dynamics | Blasius theorem | [
"Chemistry",
"Engineering"
] | 182 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
41,149,453 | https://en.wikipedia.org/wiki/Twister%20ribozyme | The twister ribozyme is a catalytic RNA structure capable of self-cleavage. The nucleolytic activity of this ribozyme has been demonstrated both in vivo and in vitro and has one of the fastest catalytic rates of naturally occurring ribozymes with similar function. The twister ribozyme is considered to be a member of the small self-cleaving ribozyme family which includes the hammerhead, hairpin, hepatitis delta virus (HDV), Varkud satellite (VS), and glmS ribozymes.
Discovery
In contrast to in vitro selection methods, which have aided in identifying several classes of catalytic RNA motifs, the twister ribozyme was discovered by a bioinformatics approach as a conserved RNA structure of unknown function. The hypothesis that it functions as a self-cleaving ribozyme was suggested by the similarity between genes nearby to twister ribozymes and genes nearby to hammerhead ribozymes, Indeed, the genes located nearby to these two self-cleaving ribozyme classes overlap significantly. Researchers were inspired to name the newly found twister motif due to its resemblance to the Egyptian hieroglyph 'twisted flax'.
Structure
The basic structure of the Oryza sativa twister ribozyme was crystallographically determined at atomic resolution in 2014. The active site of the twister ribozyme is centered in a double-pseudoknot, facilitating a compact fold structure through two long-range tertiary interactions, in partnership with a helical junction. Magnesium is important for secondary structure stabilization of the ribozyme.
Catalytic Mechanism
Similar to other nucleolytic ribozymes, the twister ribozyme selectively cleaves phosphodiester bonds, through an SN2-related mechanism, into a 2',3'-cyclic phosphate and 5' hydroxyl product. Both experimental and modelling evidence have supported a concerted general-acid-base catalysis involving highly conserved adenine (A1) and guanine (G33) bases, where N3 of A1 acts as a proton donor and G33 the general base. The twister ribozyme generates catalytic activity by specifically orienting the to-be-cleaved P-O bond for in-line nucleophilic attack within the active site. Currently, it is known that the rate of reaction of the twister ribozyme is dependent on both pH and temperature. Replacement of the pro-S nonbridging oxygen of the scissile phosphate with a thiol group leads to reduced self-cleavage rates, suggesting that the mechanism is not reliant on bound magnesium. Rescue of the thiol-derivative by cadmium cations indicates that divalent metal ions play a role in rate enhancement. A likely mechanism for this is the stabilization of the transition state by reducing electrostatic strain on the substrate strand from the growing negative charge during cleavage.
Prevalence in Nature
The twister ribozyme motif is relatively common in nature with 2,700 examples observed across bacteria, fungi, plants, and animals. Similarly to hammerhead ribozymes, some eukaryotes contain large numbers of twister ribozymes. In the most extreme known example, there are 1051 predicted twister ribozymes in Schistosoma mansoni, an organism that also contains many hammerhead ribozymes. In bacteria, twister ribozymes are near to gene classes that are also commonly associated with bacterial hammerhead ribozymes. Currently, there is no understood biological function associated with the twister ribozyme.
References
External links
RNA
Ribozymes | Twister ribozyme | [
"Chemistry"
] | 755 | [
"Catalysis",
"Ribozymes"
] |
41,150,682 | https://en.wikipedia.org/wiki/C15H12N2O3 | {{DISPLAYTITLE:C15H12N2O3}}
The molecular formula C15H12N2O3 (molar mass: 268.27 g/mol, exact mass: 268.0848 g/mol) may refer to:
Disperse Red 11
Hydrofuramide
Molecular formulas | C15H12N2O3 | [
"Physics",
"Chemistry"
] | 67 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
41,154,780 | https://en.wikipedia.org/wiki/Snaith%27s%20theorem | In algebraic topology, a branch of mathematics, Snaith's theorem, introduced by Victor Snaith, identifies the complex K-theory spectrum with the localization of the suspension spectrum of away from the Bott element.
References
For a proof, see http://people.fas.harvard.edu/~amathew/snaith.pdf
Victor Snaith, Algebraic Cobordism and K-theory, Mem. Amer. Math. Soc. no 221 (1979)
External links
Theorems in algebraic topology
K-theory | Snaith's theorem | [
"Mathematics"
] | 118 | [
"Topology stubs",
"Topology",
"Theorems in algebraic topology",
"Theorems in topology"
] |
52,744,500 | https://en.wikipedia.org/wiki/Tetrahedron%20Computer%20Methodology | The Tetrahedron Computer Methodology was a short lived journal that was published by Pergamon Press (now Elsevier) to experiment with electronic submission of articles in the ChemText format, and the sharing source code to enable reproducibility. It was the first chemical journal to be published electronically, with issues distributed in print and on floppy disks. It is likely it was also the first journal to accept submissions in a non-paper format (on floppy disks). The journal ceased publication owing to technical and non-technical reasons, and may have lacked sufficient institutional support. The last issue appeared in 1992 but was dated 1990.
References
External links
Computer science journals
Cheminformatics
Chemistry journals
Academic journals established in 1988
English-language journals
Elsevier academic journals
Publications disestablished in 1990 | Tetrahedron Computer Methodology | [
"Chemistry"
] | 161 | [
"Computational chemistry",
"nan",
"Cheminformatics"
] |
52,745,729 | https://en.wikipedia.org/wiki/Ghost%20%28physics%29 | In the terminology of quantum field theory, a ghost, ghost field, ghost particle, or gauge ghost is an unphysical state in a gauge theory. Ghosts are necessary to keep gauge invariance in theories where the local fields exceed a number of physical degrees of freedom.
If a given theory is self-consistent by the introduction of ghosts, these states are labeled "good". Good ghosts are virtual particles that are introduced for regularization, like Faddeev–Popov ghosts. Otherwise, "bad" ghosts admit undesired non-virtual states in a theory, like Pauli–Villars ghosts that introduce particles with negative kinetic energy.
An example of the need of ghost fields is the photon, which is usually described by a four component vector potential , even if light has only two allowed polarizations in the vacuum. To remove the unphysical degrees of freedom, it is necessary to enforce some restrictions; one way to do this reduction is to introduce some ghost field in the theory. While it is not always necessary to add ghosts to quantize the electromagnetic field, ghost fields are strictly needed to consistently and rigorously quantize non-Abelian Yang–Mills theory, such as done with BRST quantization.
A field with a negative ghost number (the number of ghosts excitations in the field) is called an anti-ghost.
Good ghosts
Faddeev–Popov ghosts
Faddeev–Popov ghosts are extraneous anticommuting fields which are introduced to maintain the consistency of the path integral formulation. They are named after Ludvig Faddeev and Victor Popov.
Goldstone bosons
Goldstone bosons are sometimes referred to as ghosts, mainly when speaking about the vanishing bosons of the spontaneous symmetry breaking of the electroweak symmetry through the Higgs mechanism. These good ghosts are artifacts of gauge fixing. The longitudinal polarization components of the W and Z bosons correspond to the Goldstone bosons of the spontaneously broken part of the electroweak symmetry SU(2)⊗U(1), which, however, are not observable. Because this symmetry is gauged, the three would-be Goldstone bosons, or ghosts, are "eaten" by the three gauge bosons (W± and Z) corresponding to the three broken generators; this gives these three gauge bosons a mass, and the associated necessary third polarization degree of freedom.
Bad ghosts
"Bad ghosts" represent another, more general meaning of the word "ghost" in theoretical physics: states of negative norm, or fields with the wrong sign of the kinetic term, such as Pauli–Villars ghosts, whose existence allows the probabilities to be negative thus violating unitarity.
Ghost condensate
A ghost condensate is a speculative proposal in which a ghost, an excitation of a field with a wrong sign of the kinetic term, acquires a vacuum expectation value. This phenomenon breaks Lorentz invariance spontaneously. Around the new vacuum state, all excitations have a positive norm, and therefore the probabilities are positive definite.
We have a real scalar field φ with the following action
where a and b are positive constants and
The theories of ghost condensate predict specific non-Gaussianities of the cosmic microwave background. These theories have been proposed by Nima Arkani-Hamed, Markus Luty, and others.
Unfortunately, this theory allows for superluminal propagation of information in some cases and has no lower bound on its energy. This model doesn't admit a Hamiltonian formulation (the Legendre transform is multi-valued because the momentum function isn't convex) because it is acausal. Quantizing this theory leads to problems.
Landau ghost
The Landau pole is sometimes referred as the Landau ghost. Named after Lev Landau, this ghost is an inconsistency in the renormalization procedure in which there is no asymptotic freedom at large energy scales.
See also
No-ghost theorem, related to bad ghosts
BRST quantization, scheme to deal with ghosts
Quantum scar (sometimes called ghosts)
References
External links
Quantum field theory | Ghost (physics) | [
"Physics"
] | 849 | [
"Quantum field theory",
"Quantum mechanics"
] |
52,748,480 | https://en.wikipedia.org/wiki/Plexciton | Plexcitons are polaritonic modes that result from coherently coupled plasmons and excitons. Plexcitons aid direct energy flows in exciton energy transfer (EET). Plexcitons travel for 20 μm, similar to the width of a human hair.
History
Plasmons are quanta of collective electron oscillations. Excitons are excited electrons bound to the hole produced by their excitation.
Molecular crystal excitons were combined with the collective excitations within metals to create plexcitons. This allowed EET to reach distances of around 20,000 nanometers, an enormous increase over the some 10 nanometers possible previously. However, the transfer direction was uncontrolled.
Topological insulators (TI) act as insulators below their surface, but have conductive surfaces, constraining electrons to move only along that surface. Even materials with moderately flawed surfaces do not impede current flow. Topological plexcitons make use of the properties of TIs to achieve similar control over the direction of current flow.
Plexcitons were found to emerge from an organic molecular layer (excitons) and a metallic film (plasmons). Dirac cones appeared in the plexcitons' two-dimensional band-structure. An external magnetic field created a gap between the cones when the system was interfaced to a magneto-optical layer. The resulting energy gap became populated with topologically protected one-way modes, which traveled only at the system interface.
Potential applications
Plexcitons potentially offer an appealing platform for exploring exotic matter phases and for controlling nanoscale energy flows.
References
External links
Quasiparticles
Plasmonics | Plexciton | [
"Physics",
"Chemistry",
"Materials_science"
] | 355 | [
"Plasmonics",
"Matter",
"Surface science",
"Nanotechnology",
"Condensed matter physics",
"Quasiparticles",
"Solid state engineering",
"Subatomic particles"
] |
51,215,300 | https://en.wikipedia.org/wiki/Electronic%20specific%20heat | In solid state physics the electronic specific heat, sometimes called the electron heat capacity, is the specific heat of an electron gas. Heat is transported by phonons and by free electrons in solids. For pure metals, however, the electronic contributions dominate in the thermal conductivity. In impure metals, the electron mean free path is reduced by collisions with impurities, and the phonon contribution may be comparable with the electronic contribution.
Introduction
Although the Drude model was fairly successful in describing the electron motion within metals, it has some erroneous aspects: it predicts the Hall coefficient with the wrong sign compared to experimental measurements, and the assumed additional electronic heat capacity to the lattice heat capacity, namely (3/2)kB per electron at elevated temperatures, is also inconsistent with experimental values, since measurements of metals show no deviation from the Dulong–Petit law. The observed electronic contribution of electrons to the heat capacity is usually less than one percent of this expected value. This problem seemed insoluble prior to the development of quantum mechanics. This paradox was solved by Arnold Sommerfeld after the discovery of the Pauli exclusion principle, who recognised that the replacement of the Boltzmann distribution with the Fermi–Dirac distribution was required and incorporated it in the free electron model.
Derivation within the free electron model
Internal energy
When a metallic system is heated from absolute zero, not every electron gains an energy of (3/2)kBT as equipartition dictates. Only those electrons in atomic orbitals within an energy range of kBT of the Fermi level are thermally excited. Electrons, in contrast to a classical gas, can only move into free states in their energetic neighbourhood.
The one-electron energy levels are specified by the wave vector k through the relation ε(k) = ħ²k²/2m, with m the electron mass. This relation separates the occupied energy states from the unoccupied ones and corresponds to the spherical surface in k-space. As T → 0 the ground state distribution becomes:
where
is the Fermi–Dirac distribution
is the energy of the energy level corresponding to the ground state
is the ground state energy in the limit , which thus still deviates from the true ground state energy.
This implies that the ground state is the only occupied state for electrons in the limit , the takes the Pauli exclusion principle into account. The internal energy of a system within the free electron model is given by the sum over one-electron levels times the mean number of electrons in that level:
where the factor of 2 accounts for the spin up and spin down states of the electron.
Reduced internal energy and electron density
Using the approximation that a sum of a smooth function F(k) over all allowed values of k for a finite large system is given by:

Σk F(k) = (V / (2π)³) ∫ F(k) dk

where V is the volume of the system.
For the reduced internal energy the expression for can be rewritten as:
and the expression for the electron density can be written as:
The integrals above can be evaluated using the fact that the dependence of the integrals on can be changed to dependence on through the relation for the electronic energy when described as free particles, , which yields for an arbitrary function :
with
which is known as the density of levels or density of states per unit volume such that is the total number of states between and . Using the expressions above the integrals can be rewritten as:
These integrals can be evaluated for temperatures that are small compared to the Fermi temperature by applying the Sommerfeld expansion and using the approximation that differs from for by terms of order . The expressions become:
For the ground state configuration the first terms (the integrals) of the expressions above yield the internal energy and electron density of the ground state. The expression for the electron density reduces to . Substituting this into the expression for the internal energy, one finds the following expression:
Final expression
The contribution of electrons within the free electron model is given by:

cv = (π²/3) kB² T g(εF), and for free electrons: cv = (π²/2) n kB (T / TF)

Compared to the classical result (cv = (3/2) n kB), it can be concluded that this result is depressed by a factor of (π²/3)(T/TF), which is at room temperature of order of magnitude 10⁻². This explains the absence of an electronic contribution to the heat capacity as measured experimentally.
Note that in this derivation the chemical potential μ is often denoted by εF, which is known as the Fermi energy. In this notation, the electron heat capacity becomes:

cel = (π²/3) kB² T g(εF)

and for free electrons: cel = (π²/2) n kB (T / TF), using the definition εF = kB TF for the Fermi energy, with TF the Fermi temperature.
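For a rough numerical feel, the sketch below evaluates the free-electron expression with round, assumed values for the conduction electron density and Fermi energy (loosely based on copper) and compares it with the classical equipartition value; the suppression factor of order 10⁻² at room temperature falls out directly.

```cpp
#include <cstdio>

// Rough numerical estimate of the free-electron (Sommerfeld) heat capacity,
// c_el = (pi^2/2) n kB (T/TF) per unit volume, for assumed copper-like values.
int main() {
    const double kB = 1.380649e-23;      // Boltzmann constant, J/K
    const double eV = 1.602176634e-19;   // J per electronvolt
    const double pi = 3.14159265358979;

    const double n  = 8.5e28;            // conduction electron density, m^-3 (assumed)
    const double EF = 7.0 * eV;          // Fermi energy, J (assumed)
    const double TF = EF / kB;           // Fermi temperature, K
    const double T  = 300.0;             // temperature of interest, K

    double c_classical  = 1.5 * n * kB;                        // Drude / equipartition value
    double c_sommerfeld = 0.5 * pi * pi * n * kB * (T / TF);   // degenerate electron gas

    std::printf("Fermi temperature   : %.3g K\n", TF);
    std::printf("classical (3/2) n kB: %.3g J/(K m^3)\n", c_classical);
    std::printf("Sommerfeld estimate : %.3g J/(K m^3)\n", c_sommerfeld);
    std::printf("suppression factor  : %.3g\n", c_sommerfeld / c_classical);
}
```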
Comparison with experimental results for the heat capacity of metals
For temperatures below both the Debye temperature and the Fermi temperature the heat capacity of metals can be written as a sum of electron and phonon contributions that are linear and cubic respectively: . The coefficient can be calculated and determined experimentally. We report this value below:
The free electrons in a metal do not usually lead to a strong deviation from the Dulong–Petit law at high temperatures. Since is linear in and is linear in , at low temperatures the lattice contribution vanishes faster than the electronic contribution and the latter can be measured. The deviation of the approximated and experimentally determined electronic contribution to the heat capacity of a metal is not too large. A few metals deviate significantly from this approximated prediction. Measurements indicate that these errors are associated with the electron mass being somehow changed in the metal, for the calculation of the electron heat capacity the effective mass of an electron should be considered instead. For Fe and Co the large deviations are attributed to the partially filled d-shells of these transition metals, whose d-bands lie at the Fermi energy.
The alkali metals are expected to have the best agreement with the free electron model since these metals have only one s-electron outside a closed shell. However even sodium, which is considered to be the closest to a free electron metal, is determined to have an electronic heat capacity coefficient more than 25 per cent higher than expected from the theory.
Certain effects influence the deviation from the approximation:
The interaction of the conduction electrons with the periodic potential of the rigid crystal lattice is neglected.
The interaction of the conduction electrons with phonons is also neglected. This interaction causes changes in the effective mass of the electron and therefore it affects the electron energy.
The interaction of the conduction electrons with themselves is also ignored. A moving electron causes an inertial reaction in the surrounding electron gas.
Superconductors
Superconductivity occurs in many metallic elements of the periodic system and also in alloys, intermetallic compounds, and doped semiconductors. This effect occurs upon cooling the material. The entropy decreases on cooling below the critical temperature for superconductivity which indicates that the superconducting state is more ordered than the normal state. The entropy change is small, this must mean that only a very small fraction of electrons participate in the transition to the superconducting state but, the electronic contribution to the heat capacity changes drastically. There is a sharp jump of the heat capacity at the critical temperature while for the temperatures above the critical temperature the heat capacity is linear with temperature.
Derivation
The calculation of the electron heat capacity for superconductors can be done in the BCS theory. The entropy of a system of fermionic quasiparticles, in this case Cooper pairs, is:
where is the Fermi–Dirac distribution with
and
is the particle energy with respect to the Fermi energy
the energy gap parameter where and represents the probability that a Cooper pair is occupied or unoccupied respectively.
The heat capacity is given by .
The last two terms can be calculated:
Substituting this in the expression for the heat capacity and again applying that the sum over in the reciprocal space can be replaced by an integral in multiplied by the density of states this yields:
Characteristic behaviour for superconductors
To examine the typical behaviour of the electron heat capacity for species that can transition to the superconducting state, three regions must be defined:
Above the critical temperature
At the critical temperature
Below the critical temperature
Superconductors at T > Tc
For T > Tc it holds that Δ = 0 and the electron heat capacity becomes:
This is just the result for a normal metal derived in the section above, as expected since a superconductor behaves as a normal conductor above the critical temperature.
Superconductors at T < Tc
For T < Tc the electron heat capacity of superconductors exhibits an exponential decay of the form:
Superconductors at T = Tc
At the critical temperature the heat capacity is discontinuous. This discontinuity in the heat capacity indicates that the transition for a material from normal conducting to superconducting is a second order phase transition.
See also
Drude model
Fermi–Dirac statistics
Thermal effective mass
Effective mass
Superconductivity
BCS theory
References
General references:
Condensed matter physics
Thermodynamic properties | Electronic specific heat | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,736 | [
"Thermodynamic properties",
"Physical quantities",
"Quantity",
"Phases of matter",
"Materials science",
"Thermodynamics",
"Condensed matter physics",
"Matter"
] |
51,217,945 | https://en.wikipedia.org/wiki/Pars%20Rocketry | Pars Rocketry Group or Pars Rocketry Team is a high power rocketry organization founded in June 2012.
It is formed by students from Istanbul Technical University's various engineering majors. By means of owning Tripoli Level 2 Rocketry Certification, Pars is the only civilian organization that has been allowed to launch the most powerful rocket engines in Turkey. Goals of the Pars stated as raising awareness for rocketry among educational structures together with designing and producing unique rocket subsystems for all manner of missions.
Pars has developed original designs and produced them for its own use, such as engine shells, a launch pad and a rocket engine test stand. The team has constructed numerous rockets and stored blueprints for further use. Pars has compiled Turkish-language resources of over 450 pages about rocketry regulations and delivered them to the Grand National Assembly of Turkey. The first hybrid rocket engine fired in Turkey was drafted and manufactured by Pars Rocketry. In addition to attending several fairs, Pars has participated in the Intercollegiate Rocket Engineering Competition, the world's largest university rocket engineering competition, every year since 2014. In June 2016, Pars Rocketry Group's rocket "Istiklal" placed 6th among 44 teams from all around the world. The team is still working on hybrid rocket engines. The team took part in the Teknofest organization with its rockets. At such events, it organizes training sessions for amateur rocketeers.
References
Rocketry
Istanbul Technical University | Pars Rocketry | [
"Engineering"
] | 295 | [
"Rocketry",
"Aerospace engineering"
] |
51,218,774 | https://en.wikipedia.org/wiki/SystemC%20AMS | SystemC AMS is an extension to SystemC for analog, mixed-signal and RF functionality. The SystemC AMS 2.0 standard was released on April 6, 2016 as IEEE Std 1666.1-2016.
Language specification
ToDo: description
Language features
ToDo: description
MoC - Model of Computation
A model of computation (MoC) is a set of rules defining the behavior and interaction between SystemC AMS primitive
modules. SystemC AMS defines the following models of computation: timed data flow (TDF), linear signal flow
(LSF) and electrical linear networks (ELN).
TDF - Timed Data Flow
In the timed data flow (TDF) model, components exchange analogue values with each other
on a periodic basis at a chosen sampling rate, such as every 10 microseconds.
By the sampling theorem, this would be sufficient to convey
signals of up to 50 kHz bandwidth without aliasing artefacts.
A TDF model defines a method called processing() that is invoked
at the appropriate rate as simulation time advances.
A so-called cluster of models shares a static schedule of when they should communicate.
This sets the relative ordering of the calls to the processing() methods
of each TDF instance in the cluster.
The periodic behaviour of TDF allows it to operate independently of the main
SystemC event-driven kernel used for digital logic.
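As a minimal sketch of how a TDF module declares its sampling period, the following example (not part of the original article) uses the set_attributes() callback defined by the standard; the module name, the 10 microsecond timestep and the 1 kHz sine are arbitrary assumptions, and <cmath> is assumed to be included in addition to the SystemC AMS header:

SCA_TDF_MODULE(tdf_sine_source)
{
    sca_tdf::sca_out<double> out;          // TDF output port

    void set_attributes()
    {
        // one output sample every 10 microseconds -> 100 kHz sampling rate
        set_timestep(10.0, sc_core::SC_US);
    }

    void processing()
    {
        // emit a 1 kHz sine wave based on the current module time
        out = std::sin(2.0 * M_PI * 1.0e3 * get_time().to_seconds());
    }

    SCA_CTOR(tdf_sine_source) {}
};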
ELN - Electrical Linear Networks
The SystemC electrical linear networks (ELN) library provides a set of
standard electrical components that enable SPICE-like
simulations to be run. The three basic components, resistors, capacitors and
inductors are, of course, available. Further voltage-controlled variants, such as a transconductance
amplifier (voltage-controlled current generator) enable most FET and other
semiconductor models to be readily created.
Current flowing in ELN networks of resistors can be solved with a suitable simultaneous equation solver.
These are called the nodal equations.
Where time-varying components, such as capacitors and inductors are included, Euler's method is
typically implemented to model them.
Euler's method is a simple approach to solving finite-difference time-domain (FDTD) problems. For instance,
to simulate a capacitor being charged through a resistor, a timestep delta_t is selected that is typically
about one percent of the time constant, and the update v := v + delta_t*(Vin - v)/(R*C) is executed at each step, as sketched below.
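A minimal stand-alone sketch of that iteration in plain C++ (component values are arbitrary assumptions; this illustrates the numerical scheme only and is not SystemC AMS code):

#include <cstdio>

int main()
{
    const double R   = 1.0e3;    // series resistance in ohms (assumed)
    const double C   = 1.0e-6;   // capacitance in farads (assumed), so tau = R*C = 1 ms
    const double Vin = 1.0;      // constant charging voltage in volts
    const double dt  = 1.0e-5;   // time step, about one percent of the time constant
    double v = 0.0;              // capacitor voltage state variable

    for (int n = 0; n <= 500; ++n) {
        if (n % 100 == 0)
            std::printf("t = %g s   v = %g V\n", n * dt, v);
        // forward-Euler update of the ODE C*dv/dt = (Vin - v)/R
        v += dt * (Vin - v) / (R * C);
    }
    return 0;
}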
The local truncation error of Euler's method decreases quadratically with smaller time steps (the accumulated error only linearly), but an overly small time
step results in a slow simulation for a complex finite-element simulation. But this
is not a problem in many situations where part of a complex SoC or
plant controller is run alongside a plant model that has just a few state variables, such as a car transmission system, because
there are orders of magnitude difference in time constants (e.g. a 100 MHz clock versus a 1 ms shortest inertial time constant).
Simulating the analogue subsystem inside the RTL simulator then makes sense.
Moreover, most plant control situations use closed-loop negative feedback with the controller being just as good at managing
a slightly errored plant model as the real model.
Under the ELN formalism, the SystemC initialisation and simulation cycles are extended to support solving nodal flow
equations. Nodal equation solving is generally solved iteratively
rather than using direct methods such as Gaussian Elimination or based
on matrix inverses. Iterative methods tend to have greater stability
and are fast when the state has only advanced slightly from the
previous time step. When the kernel de-queues a time-advancing event
from the event queue, the simulation time is advanced. The analogue
part of the simulator maintains a time quantum beyond which the nodal
equations need to be re-computed. This quantum is dynamically adjusted
depending on the behaviour of the equations. If the equations are
"bendy", meaning that linear extrapolation using Euler's method over
the quantum will lead to too much error, the time step is reduced,
otherwise it can be gradually enlarged at each step. Overall, two
forms of iteration are needed: the first is iteration at a time step to solve
the nodal equations to a sufficient accuracy. The second is between
time steps. In a simple implementation, once simulation time has
advanced beyond the Euler quantum, the analogue sub-system is
re-solved. If the extrapolation errors are too great, the simulator
must go back to the last time step and simulate forward again using a
smaller analogue quantum. This mechanism is also the basis for SPICE
simulations.
Each analogue variable that is the argument to a "cross",
or other analogue sensitivity, is then examined to see if new digital
domain work has been triggered. If so, new events are injected on the discrete event
queue for the current simulation time.
LSF - Linear Signal Flow
The SystemC linear signal flow (LSF) library provides a set of primitive
analogue operators, such as adders and differentiators that enable
all basic structures found in differential equations to be
constructed in a self-documenting and executable form. The
advantage of constructing the system from a standard operator library
is that "reflection" is possible: other code can analyse the structure
and perform analytic differentiation, summation, integration
and other forms of analysis, such as sensitivity analysis to determine
a good time step.
This would not be possible for an implementation using ad-hoc coding.
In general programming, reflection refers to
a program being able to read its own source code.
Ports
TDF in/outport definition:
sca_tdf::sca_in<PortType>
sca_tdf::sca_out<PortType>
TDF converter in/outport definition:
sca_tdf::sc_in<PortType> // DE → TDF inport
sca_tdf::sc_out<PortType> // TDF → DE outport
ELN terminal definition:
sca_eln::sca_terminal
Nodes
sca_eln::sca_node // ELN node
sca_eln::sca_node_ref // ELN reference node
Cluster
A cluster is the set of connected TDF module instances that share one static schedule, as described in the TDF section above.
Tracing
sca_trace_file *tf = sca_create_tabular_trace_file("trace_file_name.dat");
sca_trace(tf, <PORT|SIGNAL|NODE>, "name");
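A sketch of how these tracing calls are typically embedded in sc_main is shown below; the signal and instance names are placeholders, tdf_low_pass is the module defined in the example section that follows, and sca_util::sca_close_tabular_trace_file is assumed to be the matching close call:

int sc_main(int argc, char* argv[])
{
    sca_tdf::sca_signal<double> sig_in, sig_out;

    tdf_low_pass lp("lp");                  // TDF module from the example below
    lp.inp(sig_in);
    lp.outp(sig_out);
    // a complete design also needs a TDF source driving sig_in,
    // which would define the cluster timestep in its set_attributes()

    sca_util::sca_trace_file *tf = sca_util::sca_create_tabular_trace_file("trace_file_name.dat");
    sca_util::sca_trace(tf, sig_in,  "sig_in");
    sca_util::sca_trace(tf, sig_out, "sig_out");

    sc_core::sc_start(10.0, sc_core::SC_MS);

    sca_util::sca_close_tabular_trace_file(tf);
    return 0;
}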
Example code
TDF
Timed-Data-Flow 1st order low pass model:
#include <systemc-ams>
using namespace sca_util; // introduced for convenience: sca_util::sca_vector<TYPE> → sca_vector<TYPE>
using namespace sca_core; // introduced for convenience: sca_core::sca_time() → sca_time()
using namespace sca_ac_analysis; // introduced for convenience: sca_ac_analysis::sca_ac() → sca_ac()
SCA_TDF_MODULE(tdf_low_pass)
{
// TDF ports
sca_tdf::sca_in<double> inp;
sca_tdf::sca_out<double> outp;
// parameters
double fcut; // cut-off frequency
// methods
void initialize(); // simulator callback for initialization purpose
void ac_processing(); // simulator callback for AC behavior implementation
void processing(); // simulator callback for time implementation
// constructor
SCA_CTOR(tdf_low_pass) {
fcut = 1.0e3; // cut-off frequency 1kHz
}
private:
sca_vector<double > num; // numerator coefficients
sca_vector<double > den; // denominator coefficients
sca_vector<double > state; // state vector
sca_tdf::sca_ltf_nd ltf_nd; // linear transfer function (numerator/denominator type)
};
linear transfer function:
// initialize linear transfer function coefficients
void tdf_low_pass::initialize(){
num(0) = 1.0;
den(0) = 1.0;
den(1) = 1.0/(2.0*M_PI*fcut);
}
The frequency-domain (AC) behaviour reuses the same numerator and denominator coefficients through the sca_ac_ltf_nd helper:
// AC implementation
void tdf_low_pass::ac_processing(){
sca_ac(outp) = sca_ac_ltf_nd(num, den, sca_ac(inp));
}
The time-domain behaviour applies the same transfer function to each input sample through the sca_tdf::sca_ltf_nd object:
// time domain implementation
void tdf_low_pass::processing(){
outp = ltf_nd(num, den, state, inp);
}
ELN
Electrical-Linear-Networks 1st order low pass netlist:
SC_MODULE(eln_low_pass_netlist)
{
// sca eln terminals
sca_eln::sca_terminal n1;
sca_eln::sca_terminal n2;
// internal nodes
sca_eln::sca_node_ref gnd;
// eln modules
sca_eln::sca_r i_r;
sca_eln::sca_c i_c;
SC_CTOR(eln_low_pass_netlist) : i_r("i_r"), i_c("i_c")
{
i_r.value = 1.0;
i_r.p.bind(n1);
i_r.n.bind(n2);
i_c.value = 1.0/(2.0*M_PI*1.0e3);
i_c.p.bind(n2);
i_c.n.bind(gnd);
}
};
LSF
Linear-Signal-Flow netlist:
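The article leaves this example empty, so the following is only a rough sketch of a comparable 1st order low pass built from LSF primitives, implementing y' = 2*pi*fcut*(x - y). The primitive module names sca_lsf::sca_sub, sca_lsf::sca_gain and sca_lsf::sca_integ and their port and parameter names (x, x1, x2, y, k) are taken from the IEEE 1666.1 definitions as understood here and should be checked against the standard:

SC_MODULE(lsf_low_pass_netlist)
{
    // LSF signals: input stimulus, error, scaled error, output
    sca_lsf::sca_signal sig_in, sig_err, sig_scaled, sig_out;

    // lsf primitive modules
    sca_lsf::sca_sub   i_sub;    // sig_err    = sig_in - sig_out
    sca_lsf::sca_gain  i_gain;   // sig_scaled = (2*pi*fcut) * sig_err
    sca_lsf::sca_integ i_integ;  // sig_out    = integral of sig_scaled

    SC_CTOR(lsf_low_pass_netlist)
      : i_sub("i_sub"), i_gain("i_gain"), i_integ("i_integ")
    {
        // sig_in is left undriven here; in a complete design it would be
        // driven by an LSF source or a TDF-to-LSF converter module

        i_sub.x1(sig_in);
        i_sub.x2(sig_out);
        i_sub.y(sig_err);

        i_gain.k = 2.0 * M_PI * 1.0e3;   // cut-off frequency 1 kHz
        i_gain.x(sig_err);
        i_gain.y(sig_scaled);

        i_integ.x(sig_scaled);
        i_integ.y(sig_out);
    }
};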
History
The SystemC AMS study group was founded in 2002 to develop and maintain analog and mixed-signal extensions to SystemC, and to initiate an OSCI (Open SystemC Initiative) SystemC-AMS working group. The study group made initial investigations and specified and implemented a SystemC extension to demonstrate the feasibility of the approach. In 2006, a SystemC AMS working group was founded which continued the work of the study group inside OSCI, and now goes on to work on SystemC AMS within the Accellera Systems Initiative, resulting in the AMS 1.0 standard in 2010. After the release of the Accellera SystemC AMS 2.0 standard in 2013, the standard was transferred to the IEEE Standards Association in 2014 for further industry adoption and maintenance. The SystemC AMS standard was released on April 6, 2016 as IEEE Std 1666.1-2016. COSEDA Technologies provides, with COSIDE, the first commercially available design environment based on the SystemC AMS standard.
SystemC AMS-Standard IEEE 1666.1-2016
SystemC AMS Proof-of-Concept Download
References
External links
Phase Locked Loop simulator in SystemC AMS - Américo Dias - Keywords: Phase Locked Loop, PLL, SystemC-AMS
SystemC AMS examples - Wolfgang Scherr - Keywords: Filter, ADC, SystemC-AMS
Articles with example C++ code
Hardware description languages
Hardware verification languages
System description languages | SystemC AMS | [
"Engineering"
] | 2,432 | [
"Hardware verification languages",
"Electronic engineering",
"Hardware description languages"
] |
51,221,648 | https://en.wikipedia.org/wiki/RPL%20character%20set | The RPL character set is an 8-bit character set and encoding used by most RPL calculators manufactured by Hewlett-Packard as well as by the HP 82240B thermal printer. It is sometimes referred to simply as "ECMA-94" in documentation, although it is for the most part a superset of ISO/IEC 8859-1 / ECMA-94 in terms of printable characters, and it differs from ISO/IEC 8859-1 by using displayable characters rather than control characters in the 0x80 to 0x9F range of code points.
Overview
In 1986, the original series of RPL calculators (HP-28 series) as well as the HP 82240A thermal printer used a modified variant of the HP Roman-8 character set, of which characters above 147 could not be displayed on the calculator, only be printed.
This changed with the introduction of the HP 82240B printer in 1989 and the HP 48 series in 1990, which came with a new character set now based on ECMA 94 / ISO 8859-1 instead of HP Roman-8, but with the control codes in the range 128 to 159 (0x80 to 0x9F) being replaced by additional displayable characters. Compared to ISO 8859-1, code point 127 (0x7F) showed a medium shaded gray box like in the former HP Roman-8 based character set. Code points 131 (0x83) to 142 (0x8E) were also taken over from the former HP Roman-8 based character set. In addition to this, code point 31 (0x1F) was used for ellipsis (…) and code points 169 (0xA9) and 174 (0xAE) showed ambiguous glyphs which could be viewed as inverse circled number ❸ or copyright symbol (©) and as ❷ or registered trademark symbol (®), respectively. This first version of the character set also had a non-breaking space at position 160 (0xA0).
Translation from HP-48 to HP-28 character set:
In a revision of this character set in 1999, code point 160 (0xA0) was redefined to hold the euro sign (€) in the HP 49/50 series (including the HP 48gII), now deviating from ISO 8859-1. Code points 169 (0xA9) and 174 (0xAE) were now clearly defined as holding the copyright (©) and registered trademark (®) symbols in compliance with ISO 8859-1, whereas the corresponding glyphs still resembled the inverse circled numbers more. The last calculator supporting this variant of the character set was the HP 50g introduced in 2006 and discontinued in 2015.
In a parallel development, the HP 38G also used the HP 48 series' character set internally. Starting with the HP 39G in 2000, the superscript 3 (³) at code point 179 (0xB3) was replaced by a superscript -1 (−1) in the HP 39/40 series (except for the HP 39gII, which started to use Unicode). Code point 160 (0xA0) was also changed to the euro sign (€) in this third variant of the character set. The last calculator supporting this variant of the character set was the HP 40gs introduced in 2006 and discontinued around 2011.
Hewlett-Packard never defined an official Unicode translation, hence several variants evolved in the community, differing in code points 31 (0x1F), 127 (0x7F), 128 (0x80), 129 (0x81), 133 (0x85), 134 (0x86), 158 (0x9E), 160 (0xA0), 169 (0xA9), 174 (0xAE), 178 (0xB3), 181 (0xB5) and 223 (0xDF).
The fact that the Unicode equivalent for x-bar at code point 129 (0x81) is a combination of two characters (x̅) could cause problems in translations, therefore it was suggested to use U+0101 (ā) instead.
Characters which cannot be reasonably transcoded should be mapped to code point 127 (0x7F), similar to what the calculators do when communicating with older printers like the HP 82240A.
Since the calculators allow fonts to be redefined (using FONT→, →FONT, MINIFONT→, →MINIFONT) other codepages can be emulated for as long as symbols which are available on the keyboard or are otherwise associated with specific functionality by the calculator aren't replaced by unrelated symbols.
Code page layout
The following table shows the HP RPL character set. Each character is shown with a potential Unicode equivalent in the tooltip. Where special HP TIO codes are defined to enter the character, they are given as well. The other characters can be entered using the \nnn TIO code syntax with nnn being a three-digit decimal number.
See also
HP trigraphs
Western Latin character sets (computing)
Hewlett-Packard calculator character sets
Notes
References
Further reading
Calculator character sets | RPL character set | [
"Mathematics"
] | 1,096 | [
"Calculators",
"Calculator character sets"
] |
32,763,780 | https://en.wikipedia.org/wiki/Harish-Chandra%27s%20c-function | In mathematics, Harish-Chandra's c-function is a function related to the intertwining operator between two principal series representations that appears in the Plancherel measure for semisimple Lie groups. Harish-Chandra introduced a special case of it defined in terms of the asymptotic behavior of a zonal spherical function of a Lie group, and later introduced a more general c-function called Harish-Chandra's (generalized) C-function. Gindikin and Karpelevich introduced the Gindikin–Karpelevich formula, a product formula for Harish-Chandra's c-function.
Gindikin–Karpelevich formula
The c-function has a generalization cw(λ) depending on an element w of the Weyl group.
The unique element of greatest length, s0, is the unique element that carries the positive Weyl chamber onto its negative. By Harish-Chandra's integral formula, cs0 is Harish-Chandra's c-function:
The c-functions are in general defined by the equation
where ξ0 is the constant function 1 in L2(K/M). The cocycle property of the intertwining operators implies a similar multiplicative property for the c-functions:
provided
This reduces the computation of cs to the case when s = sα, the reflection in a (simple) root α, the so-called
"rank-one reduction" of . In fact the integral involves only the closed connected subgroup Gα corresponding to the Lie subalgebra generated by where α lies in Σ0+. Then Gα is a real semisimple Lie group with real rank one, i.e. dim Aα = 1,
and cs is just the Harish-Chandra c-function of Gα. In this case the c-function can be computed directly and is given by
where
and α0=α/〈α,α〉.
The general Gindikin–Karpelevich formula for c(λ) is an immediate consequence of this formula and the multiplicative properties of cs(λ), as follows:
where the constant c0 is chosen so that c(–iρ) = 1.
Plancherel measure
The c-function appears in the Plancherel theorem for spherical functions, and the Plancherel measure is 1/c2 times Lebesgue measure.
p-adic Lie groups
There is a similar c-function for p-adic Lie groups.
An analogous product formula for the c-function of a p-adic Lie group was found subsequently.
References
Lie groups | Harish-Chandra's c-function | [
"Mathematics"
] | 522 | [
"Lie groups",
"Mathematical structures",
"Algebraic structures"
] |
32,766,375 | https://en.wikipedia.org/wiki/Zener%20effect | In electronics, the Zener effect (employed most notably in the appropriately named Zener diode) is a type of electrical breakdown, discovered by Clarence Melvin Zener. It occurs in a reverse biased p-n diode when the electric field enables tunneling of electrons from the valence to the conduction band of a semiconductor, leading to numerous free minority carriers which suddenly increase the reverse current.
Mechanism
Under a high reverse-bias voltage, the p-n junction's depletion region widens which leads to a high-strength electric field across the junction. Sufficiently strong electric fields enable tunneling of electrons across the depletion region of a semiconductor, leading to numerous free charge carriers. This sudden generation of carriers rapidly increases the reverse current and gives rise to the high slope conductance of the Zener diode.
Relationship to the avalanche effect
The Zener effect is distinct from avalanche breakdown. Avalanche breakdown involves minority carrier electrons in the transition region being accelerated, by the electric field, to energies sufficient for freeing electron-hole pairs via collisions with bound electrons. The Zener and the avalanche effect may occur simultaneously or independently of one another. In general, diode junction breakdowns occurring below 5 volts are caused by the Zener effect, whereas breakdowns occurring above 5 volts are caused by the avalanche effect. Breakdowns occurring at voltages close to 5 V are usually caused by some combination of the two effects.
Zener breakdown is found to occur at electric field intensity of about .
Zener breakdown occurs in heavily doped junctions (p-type semiconductor moderately doped and n-type heavily doped), which produces a narrow depletion region. The avalanche breakdown occurs in lightly doped junctions, which produce a wider depletion region. Temperature increase in the junction increases the contribution of the Zener effect to breakdown, and decreases the contribution of the avalanche effect.
References
Electrical breakdown | Zener effect | [
"Physics"
] | 386 | [
"Physical phenomena",
"Electrical phenomena",
"Electrical breakdown"
] |
32,767,589 | https://en.wikipedia.org/wiki/Polysilicon%20halide | Polysilicon halides are silicon-backbone polymeric solids. At room temperature, the polysilicon fluorides are colorless to yellow solids while the chlorides, bromides, and iodides are, respectively, yellow, amber, and red-orange. Polysilicon dihalides (perhalo-polysilenes) have the general formula (SiX2)n while the polysilicon monohalides (perhalo-polysilynes) have the formula (SiX)n, where X is F, Cl, Br, or I and n is the number of monomer units in the polymer.
Macromolecular structure
The polysilicon halides can be considered structural derivatives of the polysilicon hydrides, in which the side-group hydrogen atoms are substituted with halogen atoms. In the monomeric silicon dihalide (aka dihalo-silylene and dihalosilene) molecule, which is analogous to carbene molecules, the silicon atom is divalent (forms two bonds). By contrast, in both the polysilicon dihalides and the polysilicon monohalides, as well as the polysilicon hydrides, the silicon atom is tetravalent with a local coordination geometry that is tetrahedral, even though the stoichiometry of the monohalides ([SiX]n = SinXn) might erroneously imply a structural analogy between perhalopolysilynes and [linear] polyacetylenes with the similar formula (C2H2)n. The carbon atoms in the polyacetylene polymer are sp2-hybridized and thus have a local coordination geometry that is trigonal planar. However, this is not observed in the polysilicon halides or hydrides because the Si=Si double bond in disilene compounds are much more reactive than C=C double bonds. Only when the substituent groups on silicon are very large are disilene compounds kinetically non-labile.
Synthesis
The first indication that the reaction of SiX4 and Si yields a higher halide SinX2n+2 (n > 1) was in 1871 for the comproportionation reaction of SiCl4 vapor and Si at white heat to give Si2Cl6. This was discovered by the French chemists Louis Joseph Troost (1825–1911) and Paul Hautefeuille (1836–1902). Since that time, it has been shown that gaseous silicon dihalide molecules (SiX2) are formed as intermediates in the Si/SiX4 reactions. The silicon dihalide gas molecules can be condensed at low temperatures. For example, if the gaseous SiF2 (difluorosilylene) produced from SiF4 (g) and Si (s) at 1100-1400°C is condensed at temperatures below -80°C and subsequently allowed to warm to room temperature, (SiF2)n is obtained. That reaction was first observed by Donald C. Pease, a DuPont scientist, in 1958. The polymerization is believed to occur via paramagnetic di-radical oligomeric intermediates like Si2F4 (•SiF2-F2Si•) and Si3F6 (•SiF2-SiF2-F2Si•).
The polysilicon dihalides also form from the thermally-induced disproportionation of perhalosilanes (according to: x SinX2n+2 → x SiX4 + (n-1) (SiX2)x where n ≥ 2). For example, SiCl4 and Si forms SinCl2n cyclic oligomers (with n = 12-16) at 900-1200°C. Under conditions of high vacuum and fast pumping, SiCl2 may be isolated by rapidly quenching the reaction products or, under less stringent vacuum conditions, (SiCl2)n polymer is deposited just beyond the hot zone while the perchlorosilanes SinCl2n+2 are trapped farther downstream. The infrared multiphoton dissociation of trichlorosilane (HSiCl3) also yields polysilicon dichloride, (SiCl2)n, along with HCl. SiBr4 and SiI4 react with Si at high temperatures to produce SiBr2 and SiI2, which polymerize on quenching.
Reactivity
The polysilicon dihalides are generally stable under vacuum up to about 150-200°C, after which they decompose to perhalosilanes, SinX2n+2 (where n = 1 to 14), and to polysilicon monohalides. However, they are sensitive to air and moisture. Polysilicon difluoride is more reactive than the heavier polysilicon dihalides. In stark contrast to its carbon analog, polytetrafluoroethylene, (SiF2)n ignites spontaneously in air, whereas (SiCl2)n inflames in dry air only when heated to 150°C. The halogen atoms in polysilicon dihalides can be substituted with organic groups. For example, (SiCl2)n undergoes substitution by alcohols to give poly(dialkoxysilylene)s. The polysilicon monohalides are all stable to 400°C, but are also water and air sensitive. Polysilicon monofluoride reacts more vigorously than the heavier polysilicon monohalides. For example, (SiF)n decomposes [to SiF4 and Si] above 400°C explosively.
See also
Polysilicon hydride
References
External links
Inorganic Chemistry (Holleman and Wiberg)
Inorganic silicon compounds
Halides | Polysilicon halide | [
"Chemistry"
] | 1,201 | [
"Inorganic silicon compounds",
"Inorganic compounds"
] |
32,769,826 | https://en.wikipedia.org/wiki/Radiant%20exposure | In radiometry, radiant exposure or fluence is the radiant energy received by a surface per unit area, or equivalently the irradiance of a surface, integrated over time of irradiation, and spectral exposure is the radiant exposure per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. The SI unit of radiant exposure is the joule per square metre (), while that of spectral exposure in frequency is the joule per square metre per hertz () and that of spectral exposure in wavelength is the joule per square metre per metre ()—commonly the joule per square metre per nanometre ().
Mathematical definitions
Radiant exposure
Radiant exposure of a surface, denoted He ("e" for "energetic", to avoid confusion with photometric quantities), is defined as He = ∂Qe/∂A = ∫0T Ee dt,
where
∂ is the partial derivative symbol;
Qe is the radiant energy;
A is the area;
T is the duration of irradiation;
Ee is the irradiance.
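As a simple illustration of the definition (the numbers are chosen arbitrarily): a surface held under a constant irradiance of 100 W/m2 for 10 s receives a radiant exposure of

H_e = \int_0^T E_e \, \mathrm{d}t = E_e T = 100\ \mathrm{W\,m^{-2}} \times 10\ \mathrm{s} = 1000\ \mathrm{J\,m^{-2}}.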
Spectral exposure
Spectral exposure in frequency of a surface, denoted He,ν, is defined as He,ν = ∂He/∂ν,
where ν is the frequency.
Spectral exposure in wavelength of a surface, denoted He,λ, is defined as He,λ = ∂He/∂λ,
where λ is the wavelength.
SI radiometry units
See also
Exposure (photography)
Irradiance
Radiant energy
References
Physical quantities
Radiometry | Radiant exposure | [
"Physics",
"Mathematics",
"Engineering"
] | 274 | [
"Physical phenomena",
"Telecommunications engineering",
"Physical quantities",
"Quantity",
"Physical properties",
"Radiometry"
] |
48,870,431 | https://en.wikipedia.org/wiki/Hyperuniformity | Hyperuniform materials are characterized by an anomalous suppression of density fluctuations at large scales. More precisely, the vanishing of density fluctuations in the long-wavelength limit (like for crystals) distinguishes hyperuniform systems from typical gases, liquids, or amorphous solids. Examples of hyperuniformity include all perfect crystals, perfect quasicrystals, and exotic amorphous states of matter.
Quantitatively, a many-particle system is said to be hyperuniform if the variance of the number of points within a spherical observation window grows more slowly than the volume of the observation window. This definition is equivalent to a vanishing of the structure factor in the long-wavelength limit, and it has been extended to include heterogeneous materials as well as scalar, vector, and tensor fields. Disordered hyperuniform systems were shown to be poised at an "inverted" critical point. They can be obtained via equilibrium or nonequilibrium routes, and are found in both classical physical and quantum-mechanical systems. Hence, the concept of hyperuniformity now connects a broad range of topics in physics, mathematics, biology, and materials science.
The concept of hyperuniformity generalizes the traditional notion of long-range order and thus defines an exotic state of matter. A disordered hyperuniform many-particle system can be statistically isotropic like a liquid, with no Bragg peaks and no conventional type of long-range order. Nevertheless, at large scales, hyperuniform systems resemble crystals, in their suppression of large-scale density fluctuations. This unique combination is known to endow disordered hyperuniform materials with novel physical properties that are, e.g., both nearly optimal and direction independent (in contrast to those of crystals that are anisotropic).
History
The term hyperuniformity (also independently called super-homogeneity in the context of cosmology) was coined and studied by Salvatore Torquato and Frank Stillinger in a 2003 paper, in which they showed that, among other things, hyperuniformity provides a unified framework to classify and structurally characterize crystals, quasicrystals, and exotic disordered varieties. In that sense, hyperuniformity is a long-range property that can be viewed as generalizing the traditional notion of long-range order (e.g., translational / orientational order of crystals or orientational order of quasicrystals) to also encompass exotic disordered systems.
Hyperuniformity was first introduced for point processes and later generalized to two-phase materials (or porous media) and random scalar or vector fields. It has been observed in theoretical models, simulations, and experiments; see the list of examples below.
Definition
A many-particle system in d-dimensional Euclidean space is said to be hyperuniform if the number of points in a spherical observation window with radius R has a variance σ²(R) that scales more slowly than the volume of the observation window, i.e. σ²(R)/R^d → 0 as R → ∞. This definition is (essentially) equivalent to the vanishing of the structure factor at the origin, S(k) → 0, for wave vectors |k| → 0.
Similarly, a two-phase medium consisting of a solid and a void phase is said to be hyperuniform if the volume of the solid phase inside the spherical observation window has a variance that scales slower than the volume of the observation window. This definition is, in turn, equivalent to a vanishing of the spectral density at the origin.
An essential feature of hyperuniform systems is the scaling of the number variance for large radii or, equivalently, of the structure factor for small wave numbers. If we consider hyperuniform systems that are characterized by a power-law behavior of the structure factor close to the origin, S(k) ~ |k|^α, with a constant α > 0, then there are three distinct scaling behaviors that define three classes of hyperuniformity, summarized below. Examples are known for all three classes of hyperuniformity.
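The three classes referred to above follow the standard classification from the literature, with \sigma^2(R) the number variance in a spherical window of radius R in d dimensions:

\text{Class I:}\quad \alpha > 1, \qquad \sigma^2(R) \sim R^{d-1}
\text{Class II:}\quad \alpha = 1, \qquad \sigma^2(R) \sim R^{d-1} \ln R
\text{Class III:}\quad 0 < \alpha < 1, \qquad \sigma^2(R) \sim R^{d-\alpha}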
Examples
Examples of disordered hyperuniform systems in physics are disordered ground states, jammed disordered sphere packings, amorphous ices, amorphous speckle patterns, certain fermionic systems, random self-organization, perturbed lattices, and avian photoreceptor cells.
In mathematics, disordered hyperuniformity has been studied in the context of probability theory, geometry, and number theory, where the prime numbers have been found to be effectively limit periodic and hyperuniform in a certain scaling limit. Further examples include certain random walks and stable matchings of point processes.
Ordered hyperuniformity
Examples of ordered, hyperuniform systems include all crystals, all quasicrystals, and limit-periodic sets. While weakly correlated noise typically preserves hyperuniformity, correlated excitations at finite temperature tend to destroy hyperuniformity.
Hyperuniformity was also reported for fermionic quantum matter in correlated electron systems as a result of cramming.
Disordered hyperuniformity
Torquato (2014) gives an illustrative example of the hidden order found in a "shaken box of marbles", which fall into an arrangement, called maximally random jammed packing. Such hidden order may eventually be used for self-organizing colloids or optics with the ability to transmit light with an efficiency like a crystal but with a highly flexible design.
It has been found that disordered hyperuniform systems possess unique optical properties. For example, disordered hyperuniform photonic networks have been found to exhibit complete photonic band gaps that are comparable in size to those of photonic crystals but with the added advantage of isotropy, which enables free-form waveguides not possible with crystal structures. Moreover, in stealthy hyperuniform systems, light of any wavelength longer than a value specific to the material is able to propagate forward without loss (due to the correlated disorder) even for high particle density.
By contrast, in conditions where light is propagated through an uncorrelated, disordered material of the same density, the material would appear opaque due to multiple scattering. “Stealthy” hyperuniform materials can be theoretically designed for light of any wavelength, and the applications of the concept cover a wide variety of fields of wave physics and materials engineering.
Disordered hyperuniformity was recently discovered in amorphous 2‑D materials, including amorphous silica as well as amorphous graphene, which was shown to enhance electronic transport in the material. It was shown that the Stone-Wales topological defects, which transform two-pair of neighboring hexagons to a pair of pentagons and a pair of heptagons by flipping a bond, preserves the hyperuniformity of the parent honeycomb lattice.
Disordered hyperuniformity in biology
Disordered hyperuniformity was found in the photoreceptor cell patterns in the eyes of chickens. This is thought to be the case because the light-sensitive cells in chicken or other bird eyes cannot easily attain an optimal crystalline arrangement but instead form a disordered configuration that is as uniform as possible. Indeed, it is the remarkable property of "multihyperuniformity" of the avian cone patterns that enables birds to achieve acute color sensing.
It may also emerge in the mysterious biological patterns known as fairy circles - circle and patterns of circles that emerge in arid places. It is believed such vegetation patterns can optimize the efficiency of water utility, which is crucial for the survival of the plants.
A universal hyperuniform organization was observed in the looped vein network of tree leaves, including ficus religiosa, ficus caulocarpa, ficus microcarpa, smilax indica, populus rotundifolia, and yulania denudate, etc. It was shown the hyperuniform network optimizes the diffusive transport of water and nutrients from the vein to the leaf cells. The hyperuniform vein network organization was believed to result from a regulation of growth factor uptake during vein network development.
Making disordered, but highly uniform, materials
The challenge of creating disordered hyperuniform materials is partly attributed to the inevitable presence of imperfections, such as defects and thermal fluctuations. For example, the fluctuation-compressibility relation dictates that any compressible one-component fluid in thermal equilibrium cannot be strictly hyperuniform at finite temperature.
Recently Chremos & Douglas (2018) proposed a design rule for the practical creation of hyperuniform materials at the molecular level. Specifically, effective hyperuniformity as measured by the hyperuniformity index is achieved by specific parts of the molecules (e.g., the core of the star polymers or the backbone chains in the case of bottlebrush polymers). The combination of these features leads to molecular packings that are highly uniform at both small and large length scales.
Non-equilibrium hyperuniform fluids and length scales
Disordered hyperuniformity implies a long-ranged direct correlation function (the Ornstein–Zernike equation). In an equilibrium many-particle system, this requires delicately designed effectively long-ranged interactions, which are not necessary for the dynamic self-assembly of non-equilibrium hyperuniform states. In 2019, Ni and co-workers theoretically predicted a non-equilibrium strongly hyperuniform fluid phase that exists in systems of circularly swimming active hard spheres, which was confirmed experimentally in 2022.
This new hyperuniform fluid features a special length scale, i.e., the diameter of the circular trajectory of active particles, below which large density fluctuations are observed. Moreover, based on a generalized random organising model, Lei and Ni (2019) formulated a hydrodynamic theory for non-equilibrium hyperuniform fluids, and the length scale above which the system is hyperuniform is controlled by the inertia of the particles. The theory generalizes the mechanism of fluidic hyperuniformity as the damping of the stochastic harmonic oscillator, which indicates that the suppressed long-wavelength density fluctuation can exhibit as either acoustic (resonance) mode or diffusive (overdamped) mode. In the Lei-Ni reactive hard-sphere model, it was found that the discontinuous absorbing transition of metastable hyperuniform fluid into an immobile absorbing state does not have the kinetic pathway of nucleation and growth, and the transition rate decreases with increasing the system size. This challenges the common understanding of metastability in discontinuous phase transitions and suggests that non-equilibrium hyperuniform fluid is fundamentally different from conventional equilibrium fluids.
See also
Crystal
Quasicrystal
Amorphous solid
State of matter
References
External links
Liquids
Concepts in physics
Materials science
Statistical mechanics | Hyperuniformity | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,184 | [
"Applied and interdisciplinary physics",
"Phases of matter",
"Materials science",
"nan",
"Statistical mechanics",
"Matter",
"Liquids"
] |
48,875,879 | https://en.wikipedia.org/wiki/Senotherapy | Senotherapeutics refers to therapeutic agents and strategies that specifically target cellular senescence. Senotherapeutics include emerging senolytic/senoptotic small molecules that specifically induce cell death in senescent cells, and agents that inhibit the pro-inflammatory senescent secretome. Senescent cells can be targeted for immune clearance, but an ageing immune system likely impairs senescent cell clearance, leading to their accumulation. Therefore, agents which can enhance immune clearance of senescent cells can also be considered senotherapeutic.
References
Senescence | Senotherapy | [
"Chemistry",
"Biology"
] | 126 | [
"Senescence",
"Anti-aging substances",
"Metabolism",
"Cellular processes"
] |
60,455,889 | https://en.wikipedia.org/wiki/Solid%20fats%20and%20added%20sugars | Solid fats and added sugars (SoFAS) is a dietary education program of the USDA regarding overconsumption of saturated fats, trans fats (which are both solid at room temperature) and artificially added sugars, especially in highly processed foods.
References
Lipids
Nutrition
United States Department of Agriculture | Solid fats and added sugars | [
"Chemistry"
] | 65 | [
"Organic compounds",
"Biomolecules by chemical classification",
"Lipids"
] |
60,456,204 | https://en.wikipedia.org/wiki/Oxford%20Photovoltaics | Oxford Photovoltaics Limited (Oxford PV) is a British solar technology company and Oxford University spin-off specialising in perovskite photovoltaics and solar cells.
History
The company was founded in 2010 by Henry Snaith and Kevin Arthur. The company has raised $100 million in investment with support from Oxford University Innovation, Goldwind, the University of Oxford, Innovate UK, the European Investment Bank (EIB), Legal & General, the Engineering and Physical Sciences Research Council (EPSRC) and Equinor. The largest shareholder is the Swiss cell and module production equipment manufacturer Meyer Burger.
In January 2024, the company set a new record of 25% efficiency for industrial-sized solar modules. In June 2024, it set another world record for residential solar panels, achieving 26.9% efficiency from perovskite-on-silicon tandem solar cells. This will allow consumers to benefit from upwards of 20% more power from the same footprint.
Operation
The company exploits solid-state physics using metal halide high efficiency perovskite solar cells and was among MIT Technology Review’s top 50 most innovative companies of 2017. Oxford PV is headquartered in Yarnton, Oxfordshire with an industrial pilot line in Brandenburg an der Havel, near Berlin, Germany.
References
Photovoltaics manufacturers
Solar energy companies of the United Kingdom | Oxford Photovoltaics | [
"Engineering"
] | 271 | [
"Photovoltaics manufacturers",
"Engineering companies"
] |
60,458,006 | https://en.wikipedia.org/wiki/Self-cultivation | Self-cultivation or personal cultivation is the development of one's mind or capacities through one's own efforts. Self-cultivation is the cultivation, integration, and coordination of mind and body. Although self-cultivation may be practiced and implemented as a form of cognitive therapy in psychotherapy, it goes beyond healing and self-help to also encompass self-development, self-improvement and self-realisation. It is associated with attempts to go beyond and understand normal states of being, enhancing and polishing one's capacities and developing or uncovering innate human potential.
Self-cultivation also alludes to philosophical models in Mohism, Confucianism, Taoism and other Chinese philosophies, as well as in Epicureanism, and is an essential component of well-established East-Asian ethical values. Although this term applies to cultural traditions in Confucianism and Taoism, the goals and aspirations of self-cultivation in these traditions differ greatly.
Theoretical background
Purposes and applications
Self-cultivation is an essential component of a person's life context; it enhances individuality, personal growth, and human agency. Self-cultivation is a process that cultivates one's mind and body in an attempt to transcend ordinary habitual states of being, enhancing a person's coordination and integration of congruent thoughts, beliefs and actions. It aims to polish or enlighten a person's capacities and inborn potentials.
Self-cultivation: cultural and philosophical psychotherapies
Confucianism, Taoism, and Buddhism have adopted elements of doctrine from one another to form new branches and sects. Some of these have disseminated to East Asian regions including Taiwan, Japan, and Korea.
Confucianism and the relational self
Confucius believed that one's life is the continuation of one's parents' life. Therefore, followers of Confucianism teach their children in such way that the younger generation is educated to cultivate themselves to live with a satisfactory level of self-discipline. Even though individuals see a clear-cut boundary between themselves and others, each person in a dyadic relationship is seen embedded in a particular social network. By respecting the parents—the elder and the superior—a child is raised to be morally upright according to the expectations of others. This can be a social burden that causes stressful interpersonal relationships, and can cause disturbance and conflicts.
Taoism and the authentic self
Taoism tends to focus on linking the body and mind to nature. Taoism advocates the authentic self that is free from legal, social, or political restrictions. It seeks to cultivate an individual's self by healing and emancipating them from the ethical bounds of human society. Taoism interprets the fortune or misfortune in one's life in terms of one's destiny, which is determined by the person's birth date and time. By avoiding the interference of personal desires and by relating everything to the system of the opposing elements of yin and yang, the cosmology of Taoism aims to keep individuals and everything in harmonious balance. The explanation of self-cultivation in Taoism also corresponds to the equilibrium of the Five Transformative Phases (Wu Xing): metal, wood, water, fire, and earth.
Buddhism and the non-self
After the introduction of Buddhism to China, "spiritual self-cultivation" became one of the terms used to translate Buddhist concepts of cultivation. The ultimate life goal in Buddhism is nirvana. People are encouraged to practice self-cultivation by detaching themselves from their desires and egos, and by attaining a mindful awareness of the non-self.
Chán and Zen Buddhist scholars emphasise that the key in self-cultivation is a "beginner's mind" which can allow the uncovering of the "luminous mind" and the realisation of innate Buddha-nature through the experience of sudden enlightenment.
In Japan, the Buddhist practice is equated with the notion of personal cultivation.
Influences of self-cultivation on Chinese philosophy
Confucian self-cultivation as a psychological process
Self-cultivation in the Confucian tradition refers to keeping the balance between inner and outer selves, and between self and others. The Chinese term for self-cultivation is an abbreviation of a longer phrase which literally translates to "rectifying one's mind and nurturing one's character (in particular through art, music and philosophy)".
Confucianism embodies metaphysics of self. It develops a complex model of self-cultivation. The cohering key concept is 'intellectual intuition', which is explained as a direct insight and cognition of present knowledge of reality, with no inference of bias toward discernment or logical reasoning. Confucianism has a large emphasis as its foundation the incorporation, application and implementation of filial piety.
Self-cultivation aims to achieve a harmonious society that is dependent on personal noble cultivation. The process entails the pursuit of moral perfection through knowledge and application.
In the Analects of Confucius there are two types of persons. One is the "profound person" (junzi), and the other is the "petty person" (xiaoren). These two types are opposed to one another in terms of developed potential. Confucius takes something of a blank-slate perspective: "all human beings are alike at birth" (Analects 17.2), but eventually "the profound person understands what is moral. The petty person understands what is profitable" (4.16).
The junzi is the person who always manifests the quality of ren ("humaneness", "co-humanity" in an interdependent, hierarchical universe) in themselves and who displays the quality of yi ("rightness", "righteousness") in their actions (4.5). Confucius highlights his fundamentally elitist, hierarchical model of relations by describing how the junzi relates to their fellows:
According to D. C. Lau, yi is an attribute of actions, and ren is an attribute of agents. There are conceptual links between yi, li ("ritual propriety"), de ("virtue"), and the junzi. In accordance with what is yi, the junzi exerts moral force, which is de, and thus demonstrates li.
The following passages from the Analects point out the pathway towards self-cultivation that Confucius taught, with the ultimate goal of becoming the junzi:
In the first passage, "self-reflection" is explained as "Do not do to others what you do not desire for yourself" (15.24). Confucius considers it extremely important for one to realise the necessity of concern and empathy for others, which can be achieved by reflecting upon oneself. The deeply relational self can then respond to inner reflection with outer virtue.
The second passage indicates the life-long timescale of the process of self-cultivation. It can begin during one's early teenage years, then extend well into more-mature age. The process includes the transformation of the individual, in which they realise that they should be able to distinguish and choose from what is right and what is desired.
Self-cultivation, Confucius expects, is an essential philosophical process for one to become a junzi by maximising ren. Confucius does not suffer from the Cartesian "mind-body problem". In Confucianism, there is no division between inner and outer self, thus the cumulative effect brought by Confucian self-cultivation is not just limited to one's self or person, but extends rather to the social and even cosmic.
Cultural and Ethical Values involved
Self-cultivation is one of the key principles of Confucianism, and may be considered the core of Chinese philosophy. The latter can be seen as the disciplined reflections on the insights of self-cultivation. While Étienne Balazs asserted that all Chinese philosophy is social philosophy and that the idea of the group takes precedence over conceptions of the individual self as the social dimension of the human condition features so prominently in the Chinese world of thought, Wing-Tsit Chan suggests a more comprehensive characterisation of Chinese philosophy as humanism: not the humanism that denies or slights a Supreme Power, but one that professes the unity of man and Heaven.
Similar to the Western sense of guilt, the sense of shame plays a central role in Chinese ethics.
Cultivation of self in East Asian philosophy of education
In East Asian cultures, to help students and the younger generation understand the meaning of being a person, philosophers (mostly scholars) tried to explain their definitions of self with various theoretical approaches.
The legacy of Chinese philosopher Confucius, among others (for example, Laozi, Zhuangzi, and Mencius), has provided a rich domain of Chinese philosophical heritage in East Asia. Firstly, the goal of education, and one's most noble goal in life, is to properly develop oneself in order to become a "profound person" (junzi). Young people were taught that it was shameful to become a "petty person" (xiaoren), as that was the exact opposite of a "sage". However, as both Confucian and Daoist philosophers adopted the term, there has been divergence that led to differences in educational concepts and practices. Besides Confucianism and Daoism, the Hundred Schools of Thought in Ancient China also included Buddhist and other varieties of philosophy, each of which offered different thoughts on the ideal conception of self.
In the modern era, some East Asian cultures have abandoned some of the archaic conceptions, or have replaced traditional humanistic education with a more common modern approach of self-cultivation that adapts the influences of globalisation. Nevertheless, the East Asian descendants and followers of Confucius still consider an ideal human being essential for their life-time education, with their cultural heritage deeply influenced by radical Confucian values.
Modern practices
The "self"-concept in Western culture
The "self" concept in western psychology originated from views of a number of empiricists and rationalists. Hegel (1770–1831) established a view of self-consciousness in which, by observation, our subject-object consciousness stimulates our rationale and reasoning, which then guides human behaviour. Freud (1856–1939) developed a three-part model of the psyche comprising the Id (), the Ego (), and the superego (). Freud's self-concept influenced Erikson (1902–1994), who emphasized self-identity crisis and self-development. Following Erikson, J. Marcia described the continuum of identity development and the nature of our self-identity.
The concept of self-consciousness derives from self-esteem, self-regulation, and self-efficacy.
Morita therapy
Through case-based research, Japanese psychologist Morita Masatake (1874–1938) introduced Morita therapy. It is based on Masatake's theory of consciousness and his four-stage therapeutic method, and is described as an ecological therapy method. Morita therapy resembles the rational-emotive therapy of American psychologist Albert Ellis, as well as existential and cognitive behavioral therapy.
Naikan therapy
("", , self-reflection) is a Japanese psychotherapeutic method introduced and developed decades ago by Japanese businessman and Buddhist monk (Jōdo Shinshū) Yoshimoto Ishin (1916–1988).
Initially, Naikan therapy was more often used in correctional settings; however, it has since been adapted to situational and psychoneurotic disorders.
Similar to Morita therapy, Naikan requires subordination to a carefully structured period of "retreat" that is compassionately supervised by the practitioner. Contrary to Morita therapy, Naikan is shorter (seven days) and utilizes long, regulated periods of daily meditation in which introspection is directed toward the resolution of contemporary conflicts and problems.
"In contrast to Western psychoanalytic psychotherapy, both and Morita tend to keep transference issues simplified and positive, while resistance is dealt with procedurally rather than interpretively."
The theory of constructive living
Based largely on adaptations of two Japanese structured methods of self-reflection, Naikan therapy and Morita therapy, constructive living is a Western approach to mental health education. Purpose-centered and response-oriented, constructive living (sometimes abbreviated as CL) focuses on the mindfulness and purposes of one's life. It is considered a process of action to approach reality thoughtfully. It also emphasizes the ability to understand one's self by recognizing the past, which it reflects upon in the present. Constructive living highlights the importance of accepting the world we live in, as well as the emotions and feelings individuals have in unique situations.
D. Reynolds, author of Constructive Living and director of the Constructive Living Center in Oregon, USA, argues that before taking the actions which may potentially bring positive changes, people are often held back by the belief that they must "deal with negative emotions first". According to Reynolds, the most crucial component of the process of effectuating affirmations is not getting the mind right first; rather, one's mind and emotions are effectively adjusted during the process of self-reflection, which indicates that behavioural change should take place beforehand.
Epicurean meleta
At the closing of his Letter to Menoeceus, Epicurus instructs his disciple to practice (meleta) "both by yourself and with others of like mind". The first field of practice shares semantic roots with, and is related to, the Hellenistic philosophical concept of "epimeleia heauton" (self-care), which involves methods of self-cultivation. In addition to the study of philosophy, this may include other techniques for living (techne biou) or technologies of the soul, like the visualizing technique known as "placing before the eyes", a cognitive therapy technique known as "relabeling", moral portraiture, and other didactic and ethical methods. We find examples of these techniques in Philodemus of Gadara, the poet Lucretius, and other Epicurean guides.
Nietzsche's ethics of self-cultivation
"If you incorporate this thought within you, amongst your other thoughts" he maintains "It will transform you. If for everything you wish to do you begin by asking yourself: 'Am I certain I want to do this an infinite number of times?' this will become for you the greatest weight." (KSA 9:11 [143])
Nietzsche worked on the project of reviving Self-cultivation, an ancient ethics. "I hate everything that merely instructs me without augmenting or directly invigorating my own activity"(HL 2:1) "It follows therefore that he must conceive eternal recurrence among other things as a practice that stimulates self-cultivation. In fact in one of his characteristically grandiose moments he identified it as 'the great cultivating thought' in the sense that it might weed out those too weak to bear the thought of living again (WP 1053). In a more tempered fashion, however, he framed the thought of recurrence as part of an ethics of self-cultivation and self-transformation."
See also
Self
Neo-Confucianism
Eastern philosophy
References
Bibliography
Confucian Self-Cultivation and Daoist Personhood, H.Wang
Gramsci, A. (1992). Prison notebooks, Vol. 2. New York, NY: Columbia University Press.
Heidegger, M. (1969). Identity and difference (J. Stambaugh, Trans. with an introduction). New York, NY: Harper & Row Publishers.
Heidegger, M. (1977). The question concerning technology and other essays (W. Lovitt, Trans. with an introduction). New York: Harper Torchbooks.
Heidegger, M. (1978). Letter on humanism. In D. F. Krell (Ed.), Basic writings (2nd ed., pp. 213–265). London: Routledge.
Huang, C.-C. (2010). Humanism in East Asian Confucian Contexts. Bielefeld: Transcript Verlag.
Legge, J. (Trans.). (1861). Confucian analects. The Chinese classics, volume 1. (D. Sturgeon, Ed.). Chinese Text Project. Retrieved 21 March 2017, from http://ctext.org/analects
Wittgenstein, L. (1997). Philosophical investigations (2nd ed.). (G. E. M. Anscombe, Trans.). Malden, MA: Blackwell.
Wittgenstein, L. (2001). Tractatus Logico-philosophicus (D. F. Pears & B. F. McGuinness, Trans.). New York, NY: Routledge.
Yu, K. P. (2013). The hows and whys of the classics of filial piety (Xiaojing de dao yu li). Hong Kong: InfoLink.
External links
Stanford Encyclopedia of Philosophy Entry: Confucius
Interfaith Online: Confucianism
Confucian Documents at the Internet Sacred Texts Archive.
Oriental Philosophy, "Topic:Confucianism"
Institutional
China Confucian Philosophy
China Confucian Religion
China Confucian Temples
China Kongzi Network
Chinese philosophy
Concepts in ethics
Confucian ethics
Taoist philosophy
Taoist practices
Buddhist practices
Buddhist philosophy
Psychotherapy
Self-care
Personal development
Philosophy of life
Concepts in Chinese philosophy | Self-cultivation | [
"Biology"
] | 3,655 | [
"Personal development",
"Behavior",
"Human behavior"
] |
60,458,133 | https://en.wikipedia.org/wiki/Gummed%20film | Gummed film refers to a technique used to measure nuclear fallout. It involves the use of a sheet of plastic (cellulose acetate) or paper substrate coated on one side with an adhesive (e.g., rubber cement). The sheet is exposed (adhesive-side up) to the environment to be monitored, where fallout particles land on (and thus adhere to) the gummed film. After some period, the films are collected and analyzed for radioactivity.
References
Nuclear fallout | Gummed film | [
"Physics",
"Chemistry",
"Technology"
] | 104 | [
"Radioactive contamination",
"Nuclear fallout",
"Nuclear chemistry stubs",
"Nuclear and atomic physics stubs",
"Environmental impact of nuclear power",
"Nuclear physics"
] |
60,463,810 | https://en.wikipedia.org/wiki/Aiolosite | Aiolosite is a rare sodium bismuth sulfate mineral with the chemical formula Na4Bi(SO4)3Cl. Its type locality is Vulcano, Sicily, Italy. Its name comes from the Greek name Aeolus.
References
External links
Aiolosite data sheet
Sodium minerals
Bismuth minerals
Sulfate minerals
Chloride minerals
Mixed anion compounds | Aiolosite | [
"Physics",
"Chemistry"
] | 71 | [
"Ions",
"Matter",
"Mixed anion compounds"
] |
60,464,196 | https://en.wikipedia.org/wiki/Gravity%20laser | A gravity laser, also sometimes referred to as a Gaser, Graser, or Glaser, is a hypothetical device for stimulated emission of coherent gravitational radiation or gravitons, much in the same way that a standard laser produces coherent electromagnetic radiation.
Principle of function
While photons exist as excitations of a vector potential and so contain an oscillating dipole term, gravitons are a spin-2 field and so have an oscillating quadrupole term. For efficient lasing to occur, there are several conditions that must be met:
There must be particles in an excited state capable of emitting radiation at the desired frequency. In a normal laser, these would be valence electrons in an excited state. For a gaser, the more straightforward analog would be a binary system of massive bodies; an illustrative expression for the power radiated by such a source is given after this list.
These particles must couple to supplied radiation, in order to provide stimulated emission. This could be possible in a gaser by a stimulated analog of the Penrose process.
The particles must be in an inverted population, where more are in the excited state than the ground state. This typically requires some type of pumping, such as optical pumping.
The lasing medium must be long enough for the radiation to persist and excite more of the same. In optical systems this can typically be created by mirrors, effectively making a larger optical path length. For a gaser, a large-scale, slowly spatially varying gravitational potential could act as a mirror (by the WKB approximation). Alternately, a hypothetical gaser could simply be built with sufficient length to begin with.
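Regarding the first condition above, a binary of masses $m_1$ and $m_2$ in a circular orbit of separation $a$ radiates gravitational waves with a power given by the standard quadrupole formula of general relativity (quoted here only as an illustrative point of reference, not as a result from the gaser proposals discussed in this article):

$$P_{\mathrm{GW}} = \frac{32}{5}\,\frac{G^{4}}{c^{5}}\,\frac{(m_1 m_2)^{2}\,(m_1+m_2)}{a^{5}}.$$

The prefactor $G^{4}/c^{5}$ makes this power extraordinarily small for laboratory-scale masses, which illustrates how weakly ordinary matter couples to gravitational radiation compared with the electric-dipole emission exploited in conventional lasers.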
Alternate design proposals involve free undulators akin to a free-electron laser. Several proposals involve exploiting the momentum transport properties of superconductors, where s-waves and d-waves couple distinctly to gravitational radiation.
As of 2024, gravity lasers have begun to attract research interest.
Use in science fiction
The idea of gravity lasers has been popularized by science fiction works such as David Brin's Earth (1990). While attempting to remove micro singularities inadvertently introduced into the planetary mantle, it is found they can serve as mirrors. With the necessary energy levels found in gravitational potentials of the planet's core and mantle, the resulting 'graser' beams are initially employed to nudge the singularities somewhere safer. Other uses are soon found, such as propelling objects into space and for weaponry of various levels of sophistication.
Other works, such as the RPG Star Ocean (1996) use them as a hypothetical weapon. They are also commonly employed as a proposed mechanism for tractor beams, antigravity, and space propulsion.
Earth Unaware (2012) uses 'glasers' as a plot device to enable planetary-scale manipulation of matter, akin to gravity guns.
In Alastair Reynolds' novel Redemption Ark (2002), a graser is utilised by the Inhibitors to bore into, and puncture, Resurgam's sun.
See also
Gamma-ray laser
External links
Discussion on Physics StackExchange.
References
Laser types
Theory of relativity
Video game objects
Fictional energy weapons | Gravity laser | [
"Physics"
] | 637 | [
"Theory of relativity"
] |
60,464,777 | https://en.wikipedia.org/wiki/DXZ4 | DXZ4 is a variable number tandemly repeated DNA sequence. In humans it is composed of 3kb monomers containing a highly conserved CTCF binding site. CTCF is a transcription factor protein and the main insulator responsible for partitioning of chromatin domains in the vertebrate genome.
In addition to being enriched in CpG-islands, DXZ4 transcribes long non-coding RNAs (lncRNAs) and small RNAs of unknown function. Repeat copy number of DXZ4 is highly polymorphic in human populations (varying between 50 and 100 copies). DXZ4 is one of many large tandem repeat loci defined as macrosatellites. Several macrosatellites have been described in humans and share similar features, such as high GC content, large repeat monomers, and high variability for repeat copy number within populations. DXZ4 plays an important role in the unique structural conformation of the inactive X chromosome (Xi) in female somatic cells by acting as a hinge point between two large “super domains”.
In addition to acting as the primary division between domains, DXZ4 forms long-range interactions with a number of other repeat-rich regions along the inactive X chromosome. Knockout of the DXZ4 locus revealed loss of this structural conformation on the Xi, with chromosome-wide silencing being maintained.
References
Gene expression
Repetitive DNA sequences
Transcription factors | DXZ4 | [
"Chemistry",
"Biology"
] | 297 | [
"Gene expression",
"Signal transduction",
"Molecular genetics",
"Cellular processes",
"Induced stem cells",
"Molecular biology",
"Biochemistry",
"Repetitive DNA sequences",
"Transcription factors"
] |
39,797,214 | https://en.wikipedia.org/wiki/Rendezvous%20in%20Space | Rendezvous in Space is a 1964 documentary film about the future of space exploration, directed by Frank Capra. It is notable for being the final film that Frank Capra directed. The film was funded by Martin Marietta and was shown at the Hall of Science Pavilion of the 1964-1965 New York World's Fair. Animated sections illustrate the invention of gunpowder, a space shuttle resupplying a space station, and the problems to be overcome living for long periods in space.
References
External links
1964 films
American short documentary films
Documentary films about outer space
Films directed by Frank Capra
World's fair films
1964 short documentary films
1964 New York World's Fair
American animated documentary films
New York Hall of Science | Rendezvous in Space | [
"Astronomy"
] | 143 | [
"Space art",
"Documentary films about outer space"
] |
39,797,382 | https://en.wikipedia.org/wiki/Nuclear%20pasta | In astrophysics and nuclear physics, nuclear pasta is a theoretical type of degenerate matter that is postulated to exist within the crusts of neutron stars. If it exists, nuclear pasta would be the strongest material in the universe. Between the surface of a neutron star and the quark–gluon plasma at the core, at matter densities of 10¹⁴ g/cm³, nuclear attraction and Coulomb repulsion forces are of comparable magnitude. The competition between the forces leads to the formation of a variety of complex structures assembled from neutrons and protons. Astrophysicists call these types of structures nuclear pasta because the geometry of the structures resembles various types of pasta.
Formation
Neutron stars form as remnants of massive stars after a supernova event. Unlike their progenitor star, neutron stars do not consist of a gaseous plasma. Rather, the intense gravitational attraction of the compact mass overcomes the electron degeneracy pressure and causes electron capture to occur within the star. The result is a compact ball of nearly pure neutron matter with sparse protons and electrons interspersed, filling a space several thousand times smaller than the progenitor star.
At the surface, the pressure is low enough that conventional nuclei, such as helium and iron, can exist independently of one another and are not crushed together due to the mutual Coulomb repulsion of their nuclei. At the core, the pressure is so great that this Coulomb repulsion cannot support individual nuclei, and some form of ultra-dense matter, such as the theorized quark–gluon plasma, should exist.
The presence of a small population of protons is essential to the formation of nuclear pasta. The nuclear attraction between protons and neutrons is greater than the nuclear attraction of two protons or two neutrons. Similar to how neutrons act to stabilize heavy nuclei of conventional atoms against the electric repulsion of the protons, the protons act to stabilize the pasta phases. The competition between the electric repulsion of the protons, the attractive force between nuclei, and the pressure at different depths in the star leads to the formation of nuclear pasta.
Phases
While nuclear pasta has not been observed in a neutron star, its phases are theorized to exist in the inner crust of neutron stars, forming a transition region between the conventional matter at the surface and the ultra-dense matter at the core. All phases are expected to be amorphous, with a heterogeneous charge distribution. Towards the top of this transition region, the pressure is great enough that conventional nuclei will be condensed into much more massive semi-spherical collections. These formations would be unstable outside the star, due to their high neutron content and size, which can vary between tens and hundreds of nucleons. This semispherical phase is known as the gnocchi phase.
When the gnocchi phase is compressed, as would be expected in deeper layers of the crust, the electric repulsion of the protons in the gnocchi is not fully sufficient to support the existence of the individual spheres, and they are crushed into long rods, which, depending on their length, can contain many thousands of nucleons. These rods are known as the spaghetti phase. Further compression causes the spaghetti phase rods to fuse and form sheets of nuclear matter called the lasagna phase. Further compression of the lasagna phase yields the uniform nuclear matter of the outer core. Progressing deeper into the inner crust, those holes in the nuclear pasta change from being cylindrical, called by some the bucatini phase or antispaghetti phase, into scattered spherical holes, which can be called the Swiss cheese phase. The nuclei disappear at the crust–core interface, transitioning into the liquid neutron core of the star.
The pasta phases also have interesting topological properties characterized by homology groups.
For a typical neutron star of 1.4 solar masses (M☉) and 12 km radius, the nuclear pasta layer in the crust can be about 100 m thick and have a mass of about 0.01 M☉. In terms of mass, this is a significant portion of the crust of a neutron star.
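As an order-of-magnitude check (an illustrative estimate, not a figure from the cited literature), approximating the pasta layer as a thin spherical shell of thickness $t \approx 100\ \mathrm{m}$ at radius $R \approx 12\ \mathrm{km}$ with density $\rho \approx 10^{14}\ \mathrm{g/cm^{3}} = 10^{17}\ \mathrm{kg/m^{3}}$ gives

$$M \approx 4\pi R^{2} t \rho \approx 4\pi\,(1.2\times10^{4}\ \mathrm{m})^{2}\,(10^{2}\ \mathrm{m})\,(10^{17}\ \mathrm{kg/m^{3}}) \approx 2\times10^{28}\ \mathrm{kg} \approx 0.01\,M_{\odot},$$

consistent with the mass quoted above.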
See also
Neutron star merger
References
Exotic matter
Neutron stars
Phases of matter
Amorphous solids
Metaphors referring to food and drink | Nuclear pasta | [
"Physics",
"Chemistry"
] | 860 | [
"Unsolved problems in physics",
"Phases of matter",
"Exotic matter",
"Amorphous solids",
"Matter"
] |
39,797,766 | https://en.wikipedia.org/wiki/Hypoelastic%20material | In continuum mechanics, a hypoelastic material is an elastic material that has a constitutive model independent of finite strain measures except in the linearized case. Hypoelastic material models are distinct from hyperelastic material models (or standard elasticity models) in that, except under special circumstances, they cannot be derived from a strain energy density function.
Overview
A hypoelastic material can be rigorously defined as one that is modeled using a constitutive equation satisfying the following two criteria:
The Cauchy stress at time $t$ depends only on the order in which the body has occupied its past configurations, but not on the time rate at which these past configurations were traversed. As a special case, this criterion includes a Cauchy elastic material, for which the current stress depends only on the current configuration rather than the history of past configurations.
There is a tensor-valued function $G$ such that $\dot{\boldsymbol{\sigma}} = G(\boldsymbol{\sigma}, \boldsymbol{L})$, in which $\dot{\boldsymbol{\sigma}}$ is the material rate of the Cauchy stress tensor, and $\boldsymbol{L}$ is the spatial velocity gradient tensor.
If only these two original criteria are used to define hypoelasticity, then hyperelasticity would be included as a special case, which prompts some constitutive modelers to append a third criterion that specifically requires a hypoelastic model to not be hyperelastic (i.e., hypoelasticity implies that stress is not derivable from an energy potential). If this third criterion is adopted, it follows that a hypoelastic material might admit nonconservative adiabatic loading paths that start and end with the same deformation gradient but do not start and end at the same internal energy.
Note that the second criterion requires only that the function $G$ exists. As explained below, specific formulations of hypoelastic models typically employ a so-called objective stress rate so that the function $G$ exists only implicitly.
Hypoelastic material models frequently take the form
$$\overset{\circ}{\boldsymbol{\tau}} = \boldsymbol{\mathsf{C}} : \boldsymbol{d}$$
where $\overset{\circ}{\boldsymbol{\tau}}$ is an objective rate of the Kirchhoff stress ($\boldsymbol{\tau} = J\boldsymbol{\sigma}$), $\boldsymbol{d}$ is the deformation rate tensor, and $\boldsymbol{\mathsf{C}}$ is the so-called elastic tangent stiffness tensor, which varies with stress itself and is regarded as a material property tensor. In hyperelasticity, the tangent stiffness generally must also depend on the deformation gradient in order to properly account for distortion and rotation of anisotropic material fiber directions.
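The rate form above can be integrated numerically. The following is a minimal illustrative sketch (not taken from this article) of an explicit hypoelastic update that uses the Jaumann objective rate of the Cauchy stress with a constant isotropic tangent stiffness; the material constants and the loading are assumed values chosen only for demonstration.

```python
import numpy as np

# Illustrative rate-form hypoelastic update with the Jaumann objective rate:
#   (objective rate) = C : d,   sigma_dot = (objective rate) + w.sigma - sigma.w
# using a constant isotropic tangent stiffness for simplicity (an assumption).

E, nu = 200e9, 0.3                       # assumed elastic constants (Pa)
lam = E * nu / ((1 + nu) * (1 - 2 * nu))  # Lamé parameters
mu = E / (2 * (1 + nu))

def jaumann_update(sigma, L, dt):
    """Advance the Cauchy stress by one explicit step given velocity gradient L."""
    d = 0.5 * (L + L.T)                  # deformation rate (symmetric part)
    w = 0.5 * (L - L.T)                  # spin tensor (antisymmetric part)
    sigma_obj = lam * np.trace(d) * np.eye(3) + 2 * mu * d   # C : d
    sigma_dot = sigma_obj + w @ sigma - sigma @ w            # material stress rate
    return sigma + dt * sigma_dot

# Example: simple shear applied in small increments.
sigma = np.zeros((3, 3))
L = 0.1 * np.array([[0.0, 1.0, 0.0],
                    [0.0, 0.0, 0.0],
                    [0.0, 0.0, 0.0]])    # constant velocity gradient (1/s)
for _ in range(100):
    sigma = jaumann_update(sigma, L, dt=0.01)
print(sigma)
```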
Hypoelasticity and objective stress rates
In many practical problems of solid mechanics, it is sufficient to characterize material deformation by the small (or linearized) strain tensor
$$\varepsilon_{ij} = \tfrac{1}{2}\left(u_{i,j} + u_{j,i}\right)$$
where $u_i$ are the components of the displacements of continuum points, the subscripts refer to Cartesian coordinates $x_i$, and the subscripts preceded by a comma denote partial derivatives (e.g., $u_{i,j} = \partial u_i / \partial x_j$). But there are also many problems where the finiteness of strain must be taken into account. These are of two kinds:
large nonlinear elastic deformations possessing a potential energy $W$ (exhibited, e.g., by rubber), in which the stress tensor components are obtained as the partial derivatives of $W$ with respect to the finite strain tensor components; and
inelastic deformations possessing no potential, in which the stress-strain relation is defined incrementally.
In the former kind, the total strain formulation described in the article on finite strain theory is appropriate. In the latter kind an incremental (or rate) formulation is necessary and must be used in every load or time step of a finite element computer program using updated Lagrangian procedure. The absence of a potential raises intricate questions due to the freedom in the choice of finite strain measure and characterization of the stress rate.
For a sufficiently small loading step (or increment), one may use the deformation rate tensor (or velocity strain)
$$d_{ij} = \dot{\varepsilon}_{ij} = \tfrac{1}{2}\left(v_{i,j} + v_{j,i}\right)$$
or increment
$$\Delta\varepsilon_{ij} = d_{ij}\,\Delta t$$
representing the linearized strain increment from the initial (stressed and deformed) state in the step. Here the superior dot represents the material time derivative (following a given material particle), $\Delta$ denotes a small increment over the step, $t$ = time, and $v_i$ = material point velocity or displacement rate.
However, it would not be objective to use the time derivative of the Cauchy (or true) stress $\boldsymbol{\sigma}$. This stress, which describes the forces on a small material element imagined to be cut out from the material as currently deformed, is not objective because it varies with rigid body rotations of the material. The material points must be characterized by their initial coordinates (called Lagrangian) because different material particles are contained in the element that is cut out (at the same location) before and after the incremental deformation.
Consequently, it is necessary to introduce the so-called objective stress rate $\overset{\circ}{\boldsymbol{\sigma}}$, or the corresponding increment $\Delta\overset{\circ}{\boldsymbol{\sigma}} = \overset{\circ}{\boldsymbol{\sigma}}\,\Delta t$. The objectivity is necessary for $\overset{\circ}{\boldsymbol{\sigma}}$ to be functionally related to the element deformation. It means that $\overset{\circ}{\boldsymbol{\sigma}}$ must be invariant with respect to coordinate transformations (particularly rotations) and must characterize the state of the same material element as it deforms.
See also
Stress measures
Hyperelastic material
Objective stress rates
Principle of material objectivity
Finite strain theory
Infinitesimal strain theory
Notes
Bibliography
Continuum mechanics
Elasticity (physics) | Hypoelastic material | [
"Physics",
"Materials_science"
] | 1,036 | [
"Physical phenomena",
"Elasticity (physics)",
"Continuum mechanics",
"Deformation (mechanics)",
"Classical mechanics",
"Physical properties"
] |
39,800,443 | https://en.wikipedia.org/wiki/Intermittent%20control | Intermittent control is a feedback control method which not only explains some human control systems but also has applications to control engineering.
In the context of control theory, intermittent control provides a spectrum of possibilities between the two extremes of continuous-time and discrete-time control: the control signal consists of a sequence of (continuous-time) parameterised trajectories whose parameters are adjusted intermittently. It is different from discrete-time control in that the control is not constant between samples; it is different from continuous-time control in that the trajectories are reset intermittently. As a class of control theory, intermittent predictive control is more general than continuous control and provides a new paradigm incorporating continuous predictive and optimal control with intermittent, open loop (ballistic) control.
There are at least three areas where intermittent control is relevant. Firstly, continuous-time model-based predictive control where the intermittency is associated with on-line optimisation. Secondly, event-driven control systems where the intersample interval is time varying and determined by the event times. Thirdly, explanation of physiological control systems which, in some cases, have an intermittent character. This intermittency may be due to the “computation” in the central nervous system.
Conventional sampled-data control uses a zero-order hold, which produces a piecewise-constant control signal and can be used to give a
sampled-data implementation which approximates a previously designed continuous-time controller. In contrast to conventional sampled-data control, intermittent control explicitly embeds the underlying continuous-time closed-loop system in a system-matched hold, which generates an open-loop intersample control trajectory based on that underlying continuous-time closed-loop control system.
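The following is a minimal illustrative sketch of this idea (the plant, gain, and interval are assumptions chosen for demonstration, not values from the cited work): between intermittent sampling instants the controller evolves its own copy of the underlying continuous-time closed-loop system and applies the resulting open-loop control trajectory; at each sampling instant that copy is resynchronised to the measured plant state.

```python
import numpy as np

# Intermittent control with a system-matched hold (illustrative sketch).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])        # double-integrator plant: x' = A x + B u
B = np.array([[0.0],
              [1.0]])
K = np.array([[2.0, 3.0]])        # underlying continuous-time design u = -K x

dt = 0.001                        # integration step (s)
steps_per_sample = 250            # intermittent interval = 0.25 s

x = np.array([[1.0], [0.0]])      # true plant state
x_hold = x.copy()                 # controller's internal (hold) state

for k in range(5000):
    if k % steps_per_sample == 0:
        x_hold = x.copy()                        # intermittent feedback event

    u = -K @ x_hold                              # open-loop intersample control
    x = x + dt * (A @ x + B @ u)                 # forward-Euler plant update
    x_hold = x_hold + dt * (A - B @ K) @ x_hold  # hold evolves the closed-loop model

print(x.ravel())                  # state after 5 s; decays towards the origin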
History
Intermittent control initially evolved separately in the engineering and physiological literature.
Physiological literature
The concept of intermittent control appeared in a posthumous paper by Kenneth Craik which states “The human operator behaves basically as an intermittent correction servo”. A colleague of Kenneth Craik, Margaret Vince, related the concept of intermittency to the Psychological refractory period and provided experimental verification of intermittency. Fernando Navas and James Stark showed experimentally that human hand movements were synchronised to input signals rather than to an internal clock: in other words the hand control system is event-driven not clock-driven. The first detailed mathematical model of intermittency was presented by Peter Neilson, Megan Neilson, and Nicholas O’Dwyer.
A more recent mathematical model of intermittency is given by Peter Gawthrop, Ian Loram, Martin Lakie and Henrik Gollee.
Engineering literature
In the context of Control Engineering, the term intermittent control was used by Eric Ronco, Taner Arsan and Peter Gawthrop.
They stated that “A conceptual, and practical difficulty with the continuous-time generalised predictive controller is solved by replacing the continuously moving horizon by an intermittently moving horizon. This allows slow optimisation to occur concurrently with a fast control action.” The concept of intermittent model predictive control was refined by Peter Gawthrop working with Liuping Wang, who also looked at event-driven intermittent control.
In a separate line of development Tomas Estrada, Hai Lin and Panos Antsaklis developed the concept of model-based control with intermittent feedback in the context of a networked control system.
References
Control theory | Intermittent control | [
"Mathematics"
] | 693 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
39,800,770 | https://en.wikipedia.org/wiki/Integrated%20modification%20methodology | Integrated modification methodology (IMM) is a procedure encompassing an open set of scientific techniques for morphologically analyzing the built environment in a multiscale manner and evaluating its performance in actual states or under specific design scenarios.
The methodology is structured around a nonlinear phasing process aiming for delivering a systemic understanding of any given urban settlement, formulating the modification set-ups for improving its performance, and examining the modification strategies to transform that system. The basic assumption in IMM is the recognition of the built environment as a Complex Adaptive System.
IMM has been developed by IMMdesignlab, a research lab based at Politecnico di Milano at the Department of Architecture, Built Environment and Construction Engineering (DABC).
History
IMM began in 2010 as an academic research project at Politecnico di Milano. That research criticized the analytical approach frequently used by most sustainable development methods to study and evaluate the built environment. By recognizing the built environment as a Complex Adaptive System (CAS), IMM aims at holistic simulation rather than reductively simplifying the complex mechanisms within cities.
In 2013, Massimo Tadi established the IMMdesignlab at the Department of Architecture, Built Environment and Construction Engineering (DABC) of the Politecnico di Milano. The purpose of the mentioned laboratory is to develop IMM through research and education.
In 2015, Integrated Modification Methodology for the Sustainable Built Environment was approved as an academic course in the curriculum of Architectural Engineering, an international master's programme at Politecnico di Milano.
Background
At its theoretical background, Integrated Modification Methodology refers to the contemporary urban development as a highly paradoxical context arisen from the social and economic significance of the cities on the one hand and their arguably negative environmental impacts on the other. Asserting the inevitably of urbanization, IMM declares that the only way to overcome that paradox for the cities is to develop in a profound integration with the ecology. According to IMM, the fundamental prerequisite of ecologically sustainable development is to have a comprehensive systemic understanding of the built environment.
IMM suggests that the advancement in construction techniques, building materials quality and transportation technologies alone have not solved the complex problems of the urban life simply because such improvements are not necessarily dealing with the systemic integration. The core argument of IMM is that the performance of the city is being chiefly driven by the complex relationships subsystems rather than the independent qualities of the urban elements. Thus, it aims at portraying the systemic structure of the built environment by introducing a logical framework for modeling the linkage between the city's static and dynamic elements.
Methodology
Integrated Modification Methodology is based on an iterative process involving the following four major phases:
Investigation
Formulation
Modification
Retrofitting and Optimization
The first phase, Investigation, is a synthesis-based inquiry into the systemic structure of the urban form. It begins with Horizontal Investigation, in which the area under study is dismantled into its morphology-generating elements, namely Urban Built-ups, Urban Voids, Types of Uses, and Links. It continues with Vertical Investigation, a study of the integral relationships between these elements. The output of Vertical Investigation is a set of quantitative descriptions and qualitative illustrations of certain attributes named Key Categories. In a nutshell, Key Categories are types of emergence that show how elements come to self-organize or to synchronize their states into forming a new level of organization. Hence, in IMM, Key Categories are the result of an emergent process of interaction between the elementary parts (Urban Built-ups, Urban Voids, Types of Uses, and Links) that forms a synergy able to add value to the combined organization; they are a new organization that emerges not simply as an additive result of the properties of the elementary parts.
IMM declares that the city's functioning manner is chiefly driven from the Key Categories, hence, they have the most fundamental role in understanding the architecture of the city as a Complex Adaptive System. The Investigation phase concludes with the Evaluation step which is basically an examination of the system's performance by referring to a list of verified indicators associated with ecological sustainability. The same indicators are later used in the CAS retrofitting process necessary for the final evaluation of the system performance, after the transformation design process occurred.
The Formulation phase is the technical identification of the most critical Key Category and urban element within the area, deduced from the Investigation phase. These critical attributes are interpreted as the catalysts of transformation and are used by the designer to set a contextual priority list of Design Ordering Principles.
The third phase is the introduction of the modification/design scenarios to the project and advances with examining them by the exact procedure of the Investigation phase in a repetitive manner until the transformed context is predicted to be acceptable in arrangement and evaluation.
The fourth phase, Retrofitting and Optimization, is a testing process of the outcomes of the modification phase, then a local optimization by technical strategies (e.g. installing photovoltaic panels, designing green roofs, studying building orientations etc.) is initiated.
See also
Analysis
Center for the Built Environment
Chaos theory
Circles of Sustainability
Cognitive science
Collaboration
Complex system
Design
Design education
Design Impact Measures
Design research
Design strategy
Design thinking
Ecology
Ecological footprint
Energy conservation
Conceptual framework
Heuristic
Holistic
Innovation
Interaction design
Intuition (knowledge)
Method
Observation
Participatory design
Principles of intelligent urbanism
Programming paradigm
Renewable energy
Simulation
Sustainable architecture
Sustainable design
Sustainable development
Sustainable landscape architecture
Sustainable preservation
Sustainable refurbishment
Wicked problem
World Green Building Council
References
Further reading
Ahern, J. (2006). "Green Infrastructure for Cities: The spatial Dimension". In Cities of the Future Towards Integrated Sustainable Water and Landscape Management, edited by Vladimir Novotny and Paul Brown, 267–283. London: WA publishing.
Anderson, P. (1999). Complexity Theory and Organization Science Organization Science. 10(3): 216–232.
Batty, M. (2009). Cities as Complex Systems: Scaling, Interaction,Networks, Dynamics and Urban Morphologies. In Encyclopedia of Complexity and Systems Science. Springer.
Bennett, S., (2009), A Case of Complex Adaptive Systems Theory- Sustainable Global Governance: The Singular Challenge of the Twenty-first Century. RISC-Research Paper No.5: p. 38
Brownlee, J., (2007), Complex Adaptive Systems. CIS Technical Report: p. 1–6.
Backlund, A. (2000), "The definition of system". In: Kybernetes Vol. 29 nr. 4, pp. 444–451.
Clarke, C. and P. Anzalone, Architectural Applications of Complex Adaptive Systems, XO (eXtended Office). p. 19.
Crotti, S., (1991), Metafora, Morfogenesi e Progetto, E.D'alfonso and E.Franzini, Editors. 1991: Milano.
Hildebrand, F. (1999), Designing the city towards a more sustainable urban form. Routledge.
Hough, Michael. (2004). Cities and Natural Processes: A Basis for Sustainability. London: Routledge.
Jenks, M., E. Burton, and K. Williams, (1996), The compact city, a sustainable form?: F a FN Spon, an imprint of Chapman & Hall. 288
Ratti C., Baker N., (2005) Steemers K., Energy consumption and urban texture, Energy and buildings, Elsevier.
Salat, S. and L. Bourdic, Urban complexity, scale hierarchy, energy efficiency and economic value creation. WIT Transactions on Ecology and The Environment, 2012. Vol 155: p. 11.
Steel, C. (2009), Hungry City: How Food Shapes Our Lives, Random House UK.
Tadi, M. Vahabzadeh Manesh, S. A.Daysh, G. Kahraman, I. Ursu (2013) The case study of Timișoara (Romania). IMM design for a more sustainable, livable and responsible city. AST Management Pty Ltd, Nerang, QLD, Australia.
Tadi, M. & Bogunovich, D. (2017). New Lynn - Auckland IMM Case Study: Low-density urban morphology and energy performance optimisation. Auckland, New Zealand. Retrieved from http://unitec.ac.nz/epress/
Thom, R., (1975), Stabilite Structurelle et Morphogenese. Massachusetts: W.A.Benjamin, Inc. 348.
Vahabzadeh Manesh, S. M. Tadi, (2013) Neighborhood Design and Urban Morphological Transformation through Integrated Modification Methodology (IMM) part 1. The Designer Architectural Magazine Vol.8. IRAN.
External links
European Environment Agency – Air Pollution
European Environment Agency – Sustainability Transition
Energy Recovery Council
Transit Oriented Development Institute
UNHabitat for a better Urban Future
World Green Building Council
Urban population (% of total) – World Bank website based on UN data.
Degree of urbanization (percentage of urban population in total population) by continent in 2016 – Statista, based on Population Reference Bureau data.
Sustainable architecture
Sustainable building
Sustainable design
Sustainable development
Environmental social science
Sustainable urban planning
Academic disciplines | Integrated modification methodology | [
"Engineering",
"Environmental_science"
] | 1,896 | [
"Sustainable building",
"Sustainable architecture",
"Building engineering",
"Construction",
"Environmental social science",
"Architecture"
] |
39,801,199 | https://en.wikipedia.org/wiki/Nettenchelys%20erroriensis | Nettenchelys erroriensis is an eel in the family Nettastomatidae (duckbill/witch eels). It was described by Emma Stanislavovna Karmovskaya in 1994. It is a marine, deep water-dwelling eel which is known from Error Seamount (from which its species epithet is derived), in the western Indian Ocean. It dwells at a depth range of . Females can reach a maximum total length of .
References
erroriensis
Fish described in 1994
Species known from a single specimen
Fauna of Socotra | Nettenchelys erroriensis | [
"Biology"
] | 112 | [
"Individual organisms",
"Species known from a single specimen"
] |
38,364,055 | https://en.wikipedia.org/wiki/Linear%20parameter-varying%20control | Linear parameter-varying control (LPV control) deals with the control of linear parameter-varying systems, a class of nonlinear systems which can be modelled as parametrized linear systems whose parameters change with their state.
Gain scheduling
In designing feedback controllers for dynamical systems a variety of modern, multivariable controllers are used. In general, these controllers are often designed at various operating points using linearized models of the system dynamics and are scheduled as a function of a parameter or parameters for operation at intermediate conditions. It is an approach for the control of non-linear systems that uses a family of linear controllers, each of which provides satisfactory control for a different operating point of the system. One or more observable variables, called the scheduling variables, are used to determine the current operating region of the system and to enable the appropriate linear controller. For example, in case of aircraft control, a set of controllers are designed at different gridded locations of corresponding parameters such as AoA, Mach, dynamic pressure, CG etc. In brief, gain scheduling is a control design approach that constructs a nonlinear controller for a nonlinear plant by patching together a collection of linear controllers. These linear controllers are blended in real-time via switching or interpolation.
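As a minimal illustrative sketch of the interpolation variant of classical gain scheduling (the gains, operating points, and scheduling variable below are assumptions chosen only for demonstration), two state-feedback gains designed at two operating points can be blended in real time as follows.

```python
import numpy as np

# Classical gain scheduling by interpolation (illustrative values only):
# two linear state-feedback gains designed at two operating points of a
# scheduling variable rho are blended linearly at run time.

rho_points = np.array([0.0, 1.0])              # design operating points
K_points = [np.array([[2.0, 1.0]]),            # gain designed at rho = 0
            np.array([[6.0, 2.5]])]            # gain designed at rho = 1

def scheduled_gain(rho):
    """Interpolate the controller gain for the current scheduling variable."""
    r = np.clip(rho, rho_points[0], rho_points[-1])
    w = (r - rho_points[0]) / (rho_points[-1] - rho_points[0])
    return (1.0 - w) * K_points[0] + w * K_points[1]

def control(x, rho):
    """Gain-scheduled state feedback u = -K(rho) x."""
    return -scheduled_gain(rho) @ x

x = np.array([[1.0], [0.0]])
print(control(x, rho=0.3))
```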
Scheduling multivariable controllers can be a very tedious and time-consuming task. A new paradigm is the linear parameter-varying (LPV) techniques which synthesize of automatically scheduled multivariable controller.
Drawbacks of classical gain scheduling
An important drawback of classical gain scheduling approach is that adequate performance and in some cases even stability is not guaranteed at operating conditions other than the design points.
Scheduling multivariable controllers is often a tedious and time-consuming task, especially in the field of aerospace control, where the parameter dependency of controllers is large due to increased operating envelopes and more demanding performance requirements.
It is also important that the selected scheduling variables reflect changes in plant dynamics as operating conditions change. It is possible in gain scheduling to incorporate linear robust control methodologies into nonlinear control design; however the global stability, robustness and performance properties are not addressed explicitly in the design process.
Though the approach is simple and the computational burden of linearization scheduling approaches is often much less than for other nonlinear design approaches, its inherent drawbacks sometimes outweigh its advantages and necessitate a new paradigm for the control of dynamical systems. New methodologies such as adaptive control based on artificial neural networks (ANN), fuzzy logic, reinforcement learning, etc. try to address such problems, but the lack of proofs of stability and performance for such approaches over the entire operating parameter regime requires the design of a parameter-dependent controller with guaranteed properties, for which a linear parameter-varying controller could be an ideal candidate.
Linear parameter-varying systems
LPV systems are a very special class of nonlinear systems which appears to be well suited for control of dynamical systems with parameter variations. In general, LPV techniques provide a systematic design procedure for gain-scheduled multivariable controllers. This methodology allows performance, robustness and bandwidth limitations to be incorporated into a unified framework. A brief introduction on the LPV systems and the explanation of terminologies are given below.
Parameter dependent systems
In control engineering, a state-space representation is a mathematical model of a physical system as a set of input, output, and state variables, related by first-order differential equations. The dynamic evolution of a nonlinear, non-autonomous system is represented by
$$\dot{x}(t) = f\big(x(t), u(t)\big).$$
If the system is time variant,
$$\dot{x}(t) = f\big(x(t), u(t), t\big).$$
The state variables describe the mathematical "state" of a dynamical system and in modeling large complex nonlinear systems if such state variables are chosen to be compact for the sake of practicality and simplicity, then parts of dynamic evolution of system are missing. The state space description will involve other variables called exogenous variables whose evolution is not understood or is too complicated to be modeled but affect the state variables evolution in a known manner and are measurable in real-time using sensors.
When a large number of sensors are used, some of these sensors measure outputs in the system theoretic sense as known, explicit nonlinear functions of the modeled states and time, while other sensors are accurate estimates of the exogenous variables. Hence, the model will be a time varying, nonlinear system, with the future time variation unknown, but measured by the sensors in real-time.
In this case, if $\theta(t)$ denotes the exogenous variable vector and $x(t)$ denotes the modeled state, then the state equations are written as
$$\dot{x}(t) = f\big(x(t), u(t), \theta(t)\big).$$
The parameter $\theta(t)$ is not known in advance, but its evolution is measured in real time and used for control. If the above parameter-dependent system is linear in the state and input, it is called a linear parameter-dependent system. Such systems are written in a form similar to the linear time-invariant form, albeit with the inclusion of the time-varying parameter:
$$\dot{x}(t) = A\big(\theta(t)\big)\,x(t) + B\big(\theta(t)\big)\,u(t), \qquad y(t) = C\big(\theta(t)\big)\,x(t) + D\big(\theta(t)\big)\,u(t).$$
Parameter-dependent systems are linear systems, whose state-space descriptions are known functions of time-varying parameters. The time variation of each of the parameters is not known in advance, but is assumed to be measurable in real time. The controller is restricted to be a linear system, whose state-space entries depend causally on the parameter’s history. There exist three different methodologies to design an LPV controller, namely:
Linear fractional transformations which relies on the small gain theorem for bounds on performance and robustness.
Single Quadratic Lyapunov Function (SQLF)
Parameter Dependent Quadratic Lyapunov Function (PDQLF) to bound the achievable level of performance.
These problems are solved by reformulating the control design into finite-dimensional, convex feasibility problems which can be solved exactly, and infinite-dimensional convex feasibility problems which can be solved approximately.
This formulation constitutes a type of gain scheduling problem but, in contrast to classical gain scheduling, it addresses the effect of parameter variations with assured stability and performance.
References
Further reading
Control theory | Linear parameter-varying control | [
"Mathematics"
] | 1,175 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
38,364,510 | https://en.wikipedia.org/wiki/Arching%20or%20compressive%20membrane%20action%20in%20reinforced%20concrete%20slabs | Arching or compressive membrane action (CMA) in reinforced concrete slabs occurs as a result of the great difference between the tensile and compressive strength of concrete. Cracking of the concrete causes a migration of the neutral axis which is accompanied by in-plane expansion of the slab at its boundaries. If this natural tendency to expand is restrained, the development of arching action enhances the strength of the slab.
The term arching action is normally used to describe the arching phenomenon in one-way spanning slabs and compressive membrane action is normally used to describe the arching phenomenon in two-way spanning slabs.
Background
The strength enhancing effects of arching action in reinforced concrete floors were first recognised near the beginning of last century. However, it was not until the full scale destructive load tests by Ockleston on the Old Dental Hospital in Johannesburg that the extent of strength enhancement caused by arching action was really appreciated. In these tests, collapse loads of between 3 and 4 times those predicted by yield-line theory were obtained.
Approaches to treatment of arching action (CMA)
Since the 1950s there have been several attempts to develop theories for arching action in both one and two-way slabs. One of the principal approaches to membrane action was that due to Park which has been used as a basis for many studies into arching action in slabs. Park's approach was based on rigid plastic slab strip theory, and required the assumption of a critical deflection of one half of the slab depth at failure. Park's approach was later extended by Park and Gamble in their method for predicting the plastic load-deformation response of laterally restrained slabs.
In 1971, the American Concrete Institute produced a special publication which presented the most recent research, to that time, on arching and compressive membrane action in reinforced concrete slabs.
A comprehensive review of the literature and studies of both rigid-plastic and elastic-plastic approaches to arching have been compiled by Braestrup and Braestrup and Morley. Lahlouh and Waldron were some of the earliest researchers to achieve a degree of success in finite element modelling of the phenomenon. In 1993, Kuang and Morley presented a plasticity approach which included the effect of compressive membrane action on the punching shear strength of laterally restrained concrete slabs.
United Kingdom approach to CMA in bridge deck design
In the United Kingdom, the method developed by Kirkpatrick, Rankin & Long in 1984 and substantiated by testing a full-scale bridge in 1986 first led to the introduction of new rules for the economic design of reinforced concrete beam and slab bridge decks in Northern Ireland. The concept and method were later incorporated, by the United Kingdom Highways Agency, into the UK design manual for roads and bridges, BD 81/02, 'Use of Compressive Membrane Action in Bridge Decks'. Use of this CMA methodology normally results in substantial savings in reinforcement in the slab of a beam and slab bridge deck, provided certain limitations and boundary conditions are satisfied.
Kirkpatrick, Rankin & Long's approach to the prediction of the enhanced punching strength of bridge deck slabs was based on the punching shear prediction equation derived by Long for the shear mode of punching failure, combined with an effective reinforcement ratio, which represented the arching action strength enhancement. The effective reinforcement ratio was determined from the maximum arching moment of resistance in a rigidly restrained concrete slab, which Rankin had derived for laterally restrained concrete slabs from McDowell, McKee and Sevin's arching action deformation theory for masonry walls. The derivation of the maximum arching moment of resistance of laterally restrained concrete bridge deck slabs utilised Rankin's idealised elastic-plastic stress-strain criterion for concrete, valid for concrete cylinder strengths up to at least 70N/mm2, which he had derived on the basis of Hognestad, Hanson and McHenry's ultimate parabolic stress block coefficients for concrete.
The adaptation of Kirkpatrick, Rankin & Long's punching strength prediction method for laterally restrained bridge deck slabs, given in BD 81/02, is summarised as follows:
The concrete equivalent cylinder strength, , is given by:
The plastic strain value, , of an idealised elastic-plastic concrete is given by:
The non-dimensional parameter, , for the arching moment of resistance is given by:
In order to treat the slab as restrained, must be less than 0.26. If is greater than 0.26, the deck slab shall be treated as if it were unrestrained.
The non-dimensional arching moment coefficient, , is given by:
The effective reinforcement ratio, , is given by:
The predicted ultimate punching load for a single wheel, (N), is given by:
where:
= average effective depth to tensile reinforcement (mm)
= characteristic concrete cube strength (N/mm2)
= overall slab depth (mm)
= half span of slab strip with boundary restraint (mm)
= diameter of loaded area (mm)
= partial safety factor for strength
Further details on the derivation of the method and how to deal with situations of less than rigid lateral restraint are given by Rankin and Rankin & Long. Long and Rankin claim that the concepts of arching or compressive membrane action in beam and slab bridge decks are also applicable to flat slab and cellular reinforced concrete structures where considerable strength enhancements over design code predictions can also be achieved.
Research into arching or compressive membrane action has continued over the years at Queen's University Belfast, with the work of Niblock, who investigated the effects of CMA in uniformly loaded laterally restrained slabs; Skates, who researched CMA in cellular concrete structures; Ruddle, who researched arching action in laterally restrained rectangular and Tee-beams; Peel-Cross, who researched CMA in composite floor slab construction; Taylor who researched CMA in high strength concrete bridge deck slabs, and Shaat who researched CMA using Finite Element Analysis (FEA) techniques. A comprehensive guide to compressive membrane action in concrete bridge decks, was compiled by Taylor, Rankin and Cleland in 2002.
North American approach to CMA in bridge-deck design
In North America, a more pragmatic approach has been adopted and research into compressive membrane action has primarily stemmed from the work of Hewitt and Batchelor and Batchelor and Tissington in the 1970s. They carried out an extensive series of field tests, which led to the introduction of an empirical method of design into the Ontario Highway Bridge Design Code in 1979. This required minimum isotropic reinforcement (0.3%) in bridge deck slabs, provided certain boundary conditions were satisfied. In the 1990s Mufti et al. extended this research and showed that significant enhancements in the durability of laterally restrained slabs can be achieved by utilising fibre reinforced deck slabs without steel reinforcement. Later, Mufti and Newhook adapted Hewitt and Batchelor's model to develop a method for evaluating the ultimate capacity of fibre reinforced deck slabs using external steel straps for the provision of lateral restraint.
References
Structural engineering | Arching or compressive membrane action in reinforced concrete slabs | [
"Engineering"
] | 1,412 | [
"Structural engineering",
"Civil engineering",
"Construction"
] |
38,374,014 | https://en.wikipedia.org/wiki/Pakistan%20Council%20for%20Architects%20and%20Town%20Planners | The Pakistan Council of Architects and Town Planners (abbreviated as PCATP), () established in 1983, is a federal regulatory authority for architects and town planners based in Pakistan. Its headquarters is located in Islamabad.
The Pakistan Council of Architects and Town Planners Ordinance 1983 has been promulgated with a view to give recognition and protection to the profession of architecture and town planning in Pakistan. The council has wide-ranging powers and is authorized to perform all functions and to take steps connected with or ancillary to all aspects of the two professions including laying down standards of conduct, safeguarding interests of its members, assisting the government and national institutions in solving national problems relating to the professions, promotion of reforms in the professions, promotion of education of these professions, reviewing and advising the government in the matter of architecture and town planning education, etc.
In March 2021, Arif Changezi was elected as new chairman of this organization.
Pakistan Council for Architects and Town Planners (PCATP) is a regulatory authority and acts as an accreditation council of Higher Education Commission of Pakistan.
References
External links
PCATP official website
Professional certification in architecture
Pakistan federal departments and agencies
Professional associations based in Pakistan
1983 establishments in Pakistan
Architectural education
Architecture in Pakistan
Government agencies established in 1983
Science and technology in Pakistan | Pakistan Council for Architects and Town Planners | [
"Engineering"
] | 257 | [
"Architectural education",
"Architecture"
] |
44,019,830 | https://en.wikipedia.org/wiki/Hospi | HOSPI is a hospital delivery robot manufactured by Panasonic. HOSPI service robots were originally developed to be used in healthcare amid Japan's rapidly aging society. It features autonomous navigation capabilities, which allow it to navigate using onboard sensors instead of obtrusive rail systems or delineated routes.
Development
The HOSPI robot was launched in 2004. It was built to move autonomously through the pre-installed mapping information within them. It is installed with an on-board sensor and an advanced collision-avoidance system that helps it to move around avoiding obstacles, and stop if a person suddenly runs in front of it. At IREX in 2013, Panasonic introduced a new version of the robot and began to conduct hospital trials of it. Those trials were declared successful and Panasonic began to sell the robots.
In January 2017, the Crowne Plaza ANA Narita hotel began using HOSPI robots primarily for serving drinks and clearing the tables. Panasonic also announced in the same year that there are four hospitals in Japan that use the HOSPI technology. In the same period, Narita International Airport also became the location of the first demonstration experiment of a HOSPI variant designed as an autonomous signage robot. This experiment is designed to evaluate the value of the signage robot in such a setting.
Capabilities
Hospi has security features that prevent the robot itself, or its contents, from being stolen.
Hospi can make timely deliveries and carry loads that humans are incapable of carrying.
Hospi is programmed and equipped with sensors to navigate a hospital layout efficiently and flexibly, and it does so fully autonomously.
Real-world implementations
Changi General Hospital uses HOSPI as part of the hospital's porter management system.
In 2017, HOSPI was tested at the ANA Crowne Plaza Narita hotel and at Narita International Airport.
See also
Pharmacy automation
Robotic surgery
Rehabilitation robotics
Biorobotics
References
Medical robotics
Pharmacy
Panasonic | Hospi | [
"Chemistry",
"Biology"
] | 402 | [
"Pharmacology",
"Medical robotics",
"Medical technology",
"Pharmacy"
] |
44,021,915 | https://en.wikipedia.org/wiki/Molecular%20gyroscope | Molecular gyroscopes are chemical compounds or supramolecular complexes containing a rotor that moves freely relative to a stator, and therefore act as gyroscopes. Though any single bond or triple bond permits a chemical group to freely rotate, the compounds described as gyroscopes may protect the rotor from interactions, such as in a crystal structure with low packing density or by physically surrounding the rotor avoiding steric contact. A qualitative distinction can be made based on whether the activation energy needed to overcome rotational barriers is higher than the available thermal energy. If the activation energy required is higher than the available thermal energy, the rotor undergoes "site exchange", jumping in discrete steps between local energy minima on the potential energy surface. If there is thermal energy sufficiently higher than that needed to overcome the barrier to rotation, the molecular rotor can behave more like a macroscopic freely rotating inertial mass.
For example, several studies in 2002 with a p-phenylene rotor found that some structures using variable-temperature (VT) solid-state ¹³C CPMAS and quadrupolar echo ²H NMR were able to detect a two-site exchange rate of 1.6 MHz (over 10⁶/second at 65 °C), described as "remarkably fast for a phenylene group in a crystalline solid", with steric barriers of 12–14 kcal/mol. However, tert-butyl modification of the rotor increased the exchange rate to over 10⁸ per second at room temperature, and the rate for inertially rotating p-phenylene without barriers is estimated to be approximately 2.4 × 10¹² revolutions per second.
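As a rough illustration of how barrier height separates discrete site exchange from nearly free rotation (an Eyring-equation sketch with assumed inputs, not a calculation from the cited studies; measured rates also depend on the pre-exponential dynamics, so this is only an order-of-magnitude guide):

```python
import math

# Transition-state-theory (Eyring) estimate of a two-site exchange rate
# from an activation barrier: k = (kB*T/h) * exp(-dG/(R*T)).
kB = 1.380649e-23      # J/K
h  = 6.62607015e-34    # J*s
R  = 8.314462618       # J/(mol*K)

def eyring_rate(barrier_kcal_per_mol, temperature_K):
    """Estimated exchange rate in s^-1 for a given barrier (kcal/mol)."""
    dG = barrier_kcal_per_mol * 4184.0          # kcal/mol -> J/mol
    return (kB * temperature_K / h) * math.exp(-dG / (R * temperature_K))

# An assumed ~12 kcal/mol barrier at 65 degC gives rates of order 1e5 s^-1
# (slow site exchange), while an assumed ~5 kcal/mol barrier at the same
# temperature already gives a few times 1e9 s^-1 (approaching free rotation).
print(f"{eyring_rate(12.0, 338.15):.2e}")
print(f"{eyring_rate(5.0, 338.15):.2e}")
```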
References
Supramolecular chemistry
Chemical physics | Molecular gyroscope | [
"Physics",
"Chemistry",
"Materials_science"
] | 354 | [
"Applied and interdisciplinary physics",
"nan",
"Nanotechnology",
"Chemical physics",
"Supramolecular chemistry"
] |
44,022,080 | https://en.wikipedia.org/wiki/Traian%20V.%20Chiril%C4%83 | Traian V. Chirilă (born 14 February 1948 in Arad, Romania) is a Romanian-Australian polymer and organic chemist who is the inventor of AlphaCor, an artificial cornea in current clinical use throughout the world.
His past and current research has contributed in several areas of biomaterials, polymer science and bioengineering, especially in the understanding of biomaterials and biocompatibility, in the development of polymers, hydrophilic sponges, artificial cornea, artificial vitreous substitutes and in topics such as interaction of laser radiation with polymers, photoresponsive polymers, supramolecular polymers, sustained release of bioactive agents, tissue engineering and the use of polymers in genetic therapies.
Education
Chirilă was born and educated in Romania, where he obtained a BEng in polymer technology (1972) and a PhD in organic chemistry (1981) from the Polytechnic University of Timișoara.
After ten years of research in polymers and organic chemistry, he relocated to Australia. During 1984 he was a research fellow at the Curtin School of Applied Chemistry. In 1986 he joined the Lions Eye Institute in Perth as a senior scientist with the task of establishing a department for research and development of polymeric biomaterials for ophthalmology. In 2005, he joined the newly founded Queensland Eye Institute in Brisbane, where he was offered a position of senior scientist to continue his research and to establish a department of ophthalmic bioengineering. He was made a fellow of the Royal Australian Chemical Institute (RACI) in 1992. Currently, he holds three adjunct professorships at the School of Physical and Chemical Sciences of Queensland University of Technology, the Australian Institute for Bioengineering and Nanotechnology of the University of Queensland, and the Faculty of Health Sciences of the University of Queensland.
Research
Contributions to the synthetic and structural chemistry of acetals, especially cyclic acetals.
Studies of hydrogels containing UV-absorbing agents. Correlating the concentration, absorptive properties and extractability of the agents.
Interaction of polymers with IR laser radiation – demonstrating that the monomer release following the irradiation of IOL materials with surgical IR lasers is too low to cause deleterious effects in the eye.
First investigation of interaction between poly(2-hydroxyethyl methacrylate) (PHEMA) and UV laser radiation. First use of X-ray photoelectron spectroscopy to investigate the process of ablation of ophthalmic hydrogels with excimer lasers. General studies on the interaction between high-energy laser radiation and polymers.
Invention and development of melanin-containing synthetic hydrogels able to absorb UV and blue radiation and their application as IOL materials. First polymer-biopolymer combinations to be reported as interpenetrating polymer networks (IPNs).
Invention and development of an artificial cornea. Initially known as "Chirila keratoprosthesis", this device has been commercially developed as AlphaCor and received approvals from FDA and other regulatory bodies and is used in human patients.
Development of hydrogels with very high water content as potential substitutes for the vitreous body, including a methodology for their evaluation in vitro.
Evaluation of porous hydrogel scaffolds for nerve repair.
Development and study of polymer matrices for the sustained release of bioactive agents, including therapeutic oligonucleotides.
Development of an orbital implant, currently commercialised as AlphaSphere.
Contributions to the history of ophthalmology and biomaterials.
Development of tissue-engineered corneal constructs for the restoration of ocular surface.
His research has resulted to date in 175 journal publications and 13 patents. He has contributed over 175 presentations at scientific meetings and he has been invited to present lectures in China, the United States, Japan, Romania, Italy, France, Switzerland, Korea, Germany and The Netherlands.
Awards
1993 RACI Polymer Division Citation
1999 Applied Research Award and Don Rivett Medal of RACI
2002 Euro-Asia Promotion and Cultural Foundation (Romanian Branch)Diploma of Excellence
2003 Corresponding Member of the Romanian Academy of Scientists
2014 SRB Excellence Award of the Romanian Society for Biomaterials
2014 Emeritus Member of Politehnica Foundation, Timișoara, Romania
Grants
Since 1987, he has received 30 research grants totalling over A$12 million.
Memberships
Romanian Academy of Scientists
Romanian Academy of Scientists (American Branch)
New York Academy of Sciences
American Chemical Society
Royal Australian Chemical Institute
Australasian Society for Biomaterials and Tissue Engineering
KPro Study Group
Personal life
He spent his childhood in a small town in Transylvania, Chișineu-Criș, graduating from the local high school in 1966. He is nicknamed "Tanu". His mother died in 2019 while still living in Timisoara, Romania where he graduated from the university. Traian is married to Mika, who is from Japan, and they have a son, Sebastian.
References
1948 births
People from Arad, Romania
Organic chemists
Australian chemists
20th-century Australian inventors
Romanian chemists
Romanian inventors
Politehnica University of Timișoara alumni
Academic staff of the Politehnica University of Timișoara
Polymer scientists and engineers
Naturalised citizens of Australia
Romanian expatriates in Libya
Living people | Traian V. Chirilă | [
"Chemistry",
"Materials_science"
] | 1,078 | [
"Organic chemists",
"Polymer scientists and engineers",
"Physical chemists",
"Polymer chemistry"
] |
47,164,545 | https://en.wikipedia.org/wiki/Polynomial%20decomposition | In mathematics, a polynomial decomposition expresses a polynomial f as the functional composition of polynomials g and h, where g and h have degree greater than 1; it is an algebraic functional decomposition. Algorithms are known for decomposing univariate polynomials in polynomial time.
Polynomials which are decomposable in this way are composite polynomials; those which are not are indecomposable polynomials or sometimes prime polynomials (not to be confused with irreducible polynomials, which cannot be factored into products of polynomials). The degree of a composite polynomial is always a composite number, the product of the degrees of the composed polynomials.
The rest of this article discusses only univariate polynomials; algorithms also exist for multivariate polynomials of arbitrary degree.
Examples
In the simplest case, one of the polynomials is a monomial. For example,
decomposes into
since
using the ring operator symbol ∘ to denote function composition.
Less trivially,
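Such a composition can be checked directly with a computer algebra system. The following sketch uses sympy; the particular components g and h are illustrative choices and are not the polynomials of the examples above.

```python
# Illustrative sketch of functional composition of polynomials using sympy.
# The components g and h below are example choices, not the article's examples.
from sympy import symbols, expand, degree

x = symbols('x')
g = x**3            # outer component (a monomial)
h = x**2 + x + 1    # inner component

f = expand(g.subs(x, h))   # f = (g ∘ h)(x) = g(h(x))
print(f)

# The degree of the composite equals the product of the component degrees.
assert degree(f, x) == degree(g, x) * degree(h, x)   # 6 == 3 * 2
```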
Uniqueness
A polynomial may have distinct decompositions into indecomposable polynomials, in which the component polynomials differ. The restriction in the definition to polynomials of degree greater than one excludes the infinitely many decompositions possible with linear polynomials.
Joseph Ritt proved that the number of indecomposable components is the same in any two such decompositions, and that the degrees of the components are the same up to linear transformations, though possibly in a different order; this is Ritt's polynomial decomposition theorem.
Applications
A polynomial decomposition may enable more efficient evaluation of a polynomial. For example,
can be calculated with 3 multiplications and 3 additions using the decomposition, while Horner's method would require 7 multiplications and 8 additions.
A polynomial decomposition enables calculation of symbolic roots using radicals, even for some irreducible polynomials. This technique is used in many computer algebra systems. For example, using the decomposition
the roots of this irreducible polynomial can be calculated as
Even in the case of quartic polynomials, where there is an explicit formula for the roots, solving using the decomposition often gives a simpler form. For example, the decomposition
gives the roots
but straightforward application of the quartic formula gives equivalent results but in a form that is difficult to simplify and difficult to understand; one of the four roots is:
Algorithms
The first algorithm for polynomial decomposition was published in 1985, though it had been discovered in 1976, and implemented in the Macsyma/Maxima computer algebra system. That algorithm takes exponential time in the worst case, but works independently of the characteristic of the underlying field.
A 1989 algorithm runs in polynomial time but with restrictions on the characteristic.
A 2014 algorithm calculates a decomposition in polynomial time and without restrictions on the characteristic.
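For a hands-on view of what such algorithms return, sympy provides a decompose routine. The sketch below builds a composite polynomial, decomposes it, and verifies the result by re-composition; the input polynomial is an illustrative choice, and the call does not correspond to any specific algorithm cited above.

```python
# Sketch: functional decomposition with sympy's decompose().
# The input polynomial is illustrative; decompose() is a generic library routine,
# not an implementation of the particular algorithms discussed above.
from sympy import symbols, decompose, expand

x = symbols('x')
h = x**2 + x          # inner component
g = x**2 - x - 1      # outer component
f = expand(g.subs(x, h))           # composite polynomial g(h(x))

components = decompose(f)           # [g1, g2, ...] with f = g1 ∘ g2 ∘ ...
print(components)

# Verify the decomposition by re-composing the returned components.
recomposed = components[0]
for comp in components[1:]:
    recomposed = recomposed.subs(x, comp)
assert expand(recomposed) == f
```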
Notes
References
Polynomials
Computer algebra | Polynomial decomposition | [
"Mathematics",
"Technology"
] | 529 | [
"Polynomials",
"Computational mathematics",
"Computer algebra",
"Computer science",
"Algebra"
] |
47,165,905 | https://en.wikipedia.org/wiki/Venous%20access | Venous access is any method used to access the bloodstream through the veins, either to administer intravenous therapy (e.g. medication, fluid), parenteral nutrition, to obtain blood for analysis, or to provide an access point for blood-based treatments such as dialysis or apheresis. Access is most commonly achieved via the Seldinger technique, and guidance tools such as ultrasound and fluoroscopy can also be used to assist with visualizing access placement.
Methods
Peripheral
The most common form of venous access is a peripheral venous cannula which is generally inserted into veins of the hands, forearms, and occasionally feet. Healthcare providers may use a number of different techniques in order to improve the chances of successful access. Some techniques include using a tourniquet, tapping over the vein, warming the area to dilate the vein, or using an ultrasound to directly visualize the target vein. Near-infrared illumination devices can also be used to help identify superficial veins that are not easily felt or seen with the naked eye. These devices emit infrared light which is absorbed by hemoglobin in the blood, allowing for easier visualization of the vasculature.
Peripheral access is usually indicated when short-term access of up to 7 days is needed. Complication rates from these peripheral access points (such as inflammation of the veins) increase quickly the longer they remain in place, so they are routinely removed and replaced every 3–4 days if possible.
Central
In some situations, venous access is obtained by inserting catheters into the large central veins of the trunk of the body such as the internal jugular, subclavian, or femoral veins. This type of venous access is performed with central venous catheters (CVCs), and is required in certain situations where peripheral access is inadequate. Such situations include, but are not limited to, the need for long-term venous access (for weeks or months, not days), administering of medications that can damage smaller veins (e.g. chemotherapy), measuring central venous pressure, obtaining certain blood tests (specifically central venous oxygen saturation), or performing dialysis. Types of CVCs include non-tunneled and tunneled catheters, peripherally inserted central catheter (PICC lines), and implanted ports.
Midline
Midline access is a type of peripheral venous access inserted into peripheral veins and that extends further than standard peripheral catheters but does not yet reach the large central veins of the thorax. They are used when intermediate-term access (one month) is needed or when administering medications that are highly irritating to smaller veins. However, their use is declining in favor of PICC lines which have the added benefit of more central access and longer potential dwell-times.
In children
In children, the most common form of venous access is also peripheral access, although the dwell time in children is much shorter than in adults, at 1–4 days. Accessing veins in the legs in children can promote immobilization, but is used if there is no other option. In neonates, scalp veins can also be used if other peripheral veins are not accessible. Umbilical veins are also an option in neonates, but this is by definition a central access.
When accessing veins in children, certain other factors are considered such as their smaller caliber veins and anatomical variations. Gaining venous access in children can thus present a number of different challenges than in adults. For example, certain antiseptic cleaners are avoided because they may irritate the skin of young children. Children also have thinner connective tissues than adults and thus some techniques used to illuminate veins may have a risk of causing burns.
Complications
The most common complications of venous access are catheter-related infections, thrombophlebitis and venous thrombosis. If thrombophlebitis or thrombosis develops, pain when using the access is a further complication. Peripheral venous access is least prone to thrombosis, followed by midline catheters and then centrally placed catheters. Central venous access is the most common cause of venous thrombosis in children.
Thrombosis and blockage
Long-term central venous catheters for dialysis and apheresis are often locked (injection of a limited volume of liquid to prevent malfunctioning when the catheter is not in use) with high-concentration heparin (5000 units per ml) to prevent catheter malfunction due to clot formation. In addition, flushing catheters with normal saline before and after administration of medications, parenteral nutrition, blood components, contrast media and fluids, and before and after blood sampling, reduces the likelihood of later catheter blockage.
Emergency situations
In emergency situations when peripheral access cannot be easily achieved, such as in arrest scenarios, intraosseous methods can be used to gain rapid access to the venous system. These methods usually involve inserting an access device into the tibia or femur bones in the legs, humerus in the upper arm, or sometimes the sternum in the chest.
Venous cutdown can also be done to gain immediate emergency access to the venous system. Venous cutdown procedures most commonly target the great saphenous vein in the leg because it is superficial, easily accessible, and consistently in the same anatomical location. This procedure is used in certain populations such as critically ill patients or patients in hypovolemic shock or when less invasive methods such as peripheral catheters or CVCs have failed. However, in many cases the use of intraosseous access has replaced the need for venous cutdown procedures.
References
External links
Venous access, Society for Vascular Surgery
Medical equipment | Venous access | [
"Biology"
] | 1,184 | [
"Medical equipment",
"Medical technology"
] |
47,166,226 | https://en.wikipedia.org/wiki/The%20Moving%20Museum | The Moving Museum is a not-for-profit organisation that runs a nomadic programme of contemporary art exhibitions. It has held projects in Dubai, Istanbul, and London comprising large-scale exhibitions, artist residencies, public programming, publishing, artwork commissions, and digital programming.
Artists are invited through a collaborative curatorial model composed of contributors from various disciplines and are included in diverse ways: as producers, collaborators, curators, and advisors. Over 50 new projects have been commissioned across a wide range of media including works by Amalia Ulman, Broomberg and Chanarin, Clunie Reid, Hannah Perry, Hito Steyerl, Jeremy Deller, Jon Rafman, Jeremy Bailey, James Bridle, Michael Rakowitz, Tom Sachs, Ryan Gander, Mai-Thu Perret, Slavs and Tatars, Zach Blas, Anne de Vries, Ben Schumacher, Ming Wong and Lucky PDF.
The Moving Museum is an independent and non-political organization founded by Aya Mousawi and Simon Sakhai in 2012; a registered Community Interest Company (CIC) in England and Wales; and a registered 501(c)(3) Charity in the United States of America. The organisation's website was designed by new media artist Jeremy Bailey with Harm van den Dorpel, Joe Hamilton, and Jonas Lund.
References
Art exhibitions in London
Art museums and galleries in the United Arab Emirates
Art museums and galleries in Istanbul
Internet art
Art and design organizations
International cultural organizations
Art museums and galleries established in 2012
Community interest companies | The Moving Museum | [
"Engineering"
] | 315 | [
"Design",
"Art and design organizations"
] |
47,166,388 | https://en.wikipedia.org/wiki/Lysophosphatidic%20acid%20receptor | The lysophosphatidic acid receptors (LPARs) are a group of G protein-coupled receptors for lysophosphatidic acid (LPA) that include:
Lysophosphatidic acid receptor 1 (LPAR1; formerly known as EDG2, GPR26)
Lysophosphatidic acid receptor 2 (LPAR2; formerly known as EDG4)
Lysophosphatidic acid receptor 3 (LPAR3; formerly known as EDG7)
Lysophosphatidic acid receptor 4 (LPAR4; formerly known as GPR23, P2RY9)
Lysophosphatidic acid receptor 5 (LPAR5; formerly known as GPR92)
Lysophosphatidic acid receptor 6 (LPAR6; formerly known as GPR87, P2RY5)
See also
Lysophospholipid receptor
Sphingosine-1-phosphate receptor
P2Y receptor
References
G protein-coupled receptors | Lysophosphatidic acid receptor | [
"Chemistry",
"Biology"
] | 218 | [
"Biotechnology stubs",
"Signal transduction",
"Biochemistry stubs",
"G protein-coupled receptors",
"Biochemistry"
] |
55,635,955 | https://en.wikipedia.org/wiki/Dropel%20Fabrics | Dropel Fabrics is an American technology company that develops, manufactures, and licenses sustainable treatments for natural fabrics to make spill proof and stain proof threads. The company is known for creating the world's first water and stain repellent naturals fabrics that maintain their softness and breathability.
History
Dropel was founded in 2015 by Sim Gulati following his research in materials science and innovative textile processes. In 2014, after observing a broader need for innovation in natural fabrics for apparel, Gulati developed sustainable nanotechnology treatments for cotton in an effort to supplant less durable and less environmentally friendly polyester and other synthetics in clothing applications. In 2015 the company consulted with Amanda Parkes, Ph.D., of the Massachusetts Institute of Technology, termed a "fashion scientist" by Industry magazine.
Dropel incubated in New York based fashion accelerator, New York Fashion Tech Lab, and launched at the incubator's June 2015 demonstration day.
The New York Times reported that Dropel “patented a nanotechnology process that bonds hydrophobic polymers with natural fibers on the molecular level to make them water- and stain-repellent, a process that can be licensed by clothing brands.” The company has integrated its technology with brands AREA NYC, CEAM and Mister French. Dropel was part of the inaugural class of Fashion For Good, a sustainable fashion accelerator led by Kering, Plug and Play Ventures, Galleries Lafayette and the C&A Foundation. Fashion Tech Lab, a venture-capital accelerator led by Russian retail entrepreneur Miroslava Duma, Gaetan Bonhomme, Alex Moore, Cybernaut Venture Capital, and Full Tilt Capital invested in Dropel's seed round of funding. In 2015, Business Insider named Dropel Fabrics one of the “100 most exciting startups in New York City.”
References
External links
Textile companies of the United States
Biodegradable materials | Dropel Fabrics | [
"Physics",
"Chemistry"
] | 389 | [
"Biodegradation",
"Biodegradable materials",
"Materials",
"Matter"
] |
55,637,071 | https://en.wikipedia.org/wiki/Enceladus%20Icy%20Jet%20Analyzer | The Enceladus Icy Jet Analyzer (ENIJA) is a time-of-flight mass spectrometer developed to search for prebiotic molecules like amino acids and biosignatures in the plumes of Saturn's moon Enceladus.
Most of the ice particles in Enceladus' plume have been shown to be direct samples of subsurface waters, offering an opportunity to assess its internal ocean's geochemical and habitability potential without having to land and drill through the ice.
The ENIJA instrument has been formally proposed to fly on two missions: the Enceladus Life Finder (ELF), and on the Explorer of Enceladus and Titan (E2T).
Description
The instrument is based on the principle of impact ionization and is optimized for the analysis of high dust fluxes and number densities as typically occur during Enceladus plume crossings. Impact ionization shows an excellent sensitivity for compounds embedded in a water ice matrix. Ice particles as small as 0.1 μm at an impact speed of 5 km/s can be analyzed.
The mass resolution is > 970 (m/Δm) for typical plume particles in the size range 0.01 to 100 μm. Detection of elemental and molecular species over such a wide mass range permits clear characterization of particle chemistry, simultaneously covering individual ions such as H+, C+ and O+ as well as complex organics with masses of many hundred u. ENIJA records time-of-flight mass spectra in the range between 1 and 2000 u. Up to 50 spectra are recorded per second. The instrument has a mass of 3.5 kg, and peak power is 14.2 W.
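As a rough back-of-the-envelope illustration (the example masses below are arbitrary and not taken from the instrument specification), a resolution of m/Δm ≈ 970 corresponds to a smallest separable mass difference of about m/970 at mass m:

```python
# Rough illustration of what a mass resolution of m/delta_m ~ 970 implies.
# The example masses are arbitrary and not taken from the ENIJA specification.
def min_resolvable_delta(mass_u, resolution=970.0):
    """Smallest mass difference (in u) separable at the given mass."""
    return mass_u / resolution

for m in (18.0, 100.0, 500.0):   # e.g. water, a mid-size organic, a heavy organic
    print(f"at m = {m:6.1f} u: delta_m ~ {min_resolvable_delta(m):.3f} u")
```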
References
Spacecraft instruments
Mass spectrometry | Enceladus Icy Jet Analyzer | [
"Physics",
"Chemistry"
] | 354 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
55,638,696 | https://en.wikipedia.org/wiki/Small%20boundary%20property | In mathematics, the small boundary property is a property of certain topological dynamical systems. It is dynamical analog of the inductive definition of Lebesgue covering dimension zero.
Definition
Consider the category of topological dynamical system (system in short) consisting of a compact metric space and a homeomorphism . A set is called small if it has vanishing orbit capacity, i.e., . This is equivalent to: where denotes the collection of -invariant measures on .
The system is said to have the small boundary property (SBP) if has a basis of open sets whose boundaries are small, i.e., for all .
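For concreteness, a commonly used formulation of orbit capacity from the literature is sketched below; normalizations vary between authors, so this should be read as a reference statement rather than the exact convention intended here.

```latex
% A standard formulation of orbit capacity (reference sketch; normalizations
% differ slightly between authors).
\[
  \operatorname{ocap}(E) \;=\; \lim_{n\to\infty} \frac{1}{n}\,
  \sup_{x\in X} \sum_{k=0}^{n-1} \mathbf{1}_{E}\bigl(T^{k}x\bigr),
\]
so a set $E \subseteq X$ is small when $\operatorname{ocap}(E)=0$; for closed $E$ this is
equivalent to $\mu(E)=0$ for every $T$-invariant Borel probability measure $\mu$ on $X$.
```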
Can one always lower topological entropy?
Small sets were introduced by Michael Shub and Benjamin Weiss while investigating the question "can one always lower topological entropy?" Quoting from their article:
"For measure theoretic entropy, it is well known and quite easy to see that a positive entropy transformation always has factors of smaller entropy. Indeed the factor generated by a two-set partition with one of the sets having very small measure will always have small entropy. It is our purpose here to treat the analogous question for topological entropy... We will exclude the trivial factor, where it reduces to one point."
Recall that a system is called a factor of , alternatively is called an extension of , if there exists a continuous surjective mapping which is equivariant, i.e. for all .
Thus Shub and Weiss asked: Given a system and , can one find a non-trivial factor so that ?
Recall that a system is called minimal if it has no proper non-empty closed -invariant subsets. It is called infinite if .
Lindenstrauss introduced SBP and proved:
Theorem: Let be an extension of an infinite minimal system. The following are equivalent:
has the small-boundary property.
, where denotes mean dimension.
For every , , there exists a factor so and .
where is an inverse limit of systems with finite topological entropy for all .
Later this theorem was generalized to the context of several commuting transformations by Gutman, Lindenstrauss and Tsukamoto.
Systems with no non-trivial finite entropy factors
Let and be the shift homeomorphism
This is the Baker's map, formulated as a two-sided shift. It can be shown that has no non-trivial finite entropy factors. One can also find minimal systems with the same property.
References
Topological dynamics | Small boundary property | [
"Mathematics"
] | 497 | [
"Topology",
"Topological dynamics",
"Dynamical systems"
] |
55,641,006 | https://en.wikipedia.org/wiki/Cyaphide | Cyaphide, P≡C−, is the phosphorus analogue of cyanide. It is not known as a discrete salt; however, in silico measurements reveal that the −1 charge in this ion is located mainly on carbon (0.65), as opposed to phosphorus.
The word "cyaphide" was first coined in 1992, by analogy with cyanide.
Preparation
Organometallic complexes of cyaphide were first reported in 1992. More recent preparations use two other routes:
From SiR3-functionalised phosphaalkynes
Treatment of the η1-coordinated phosphaalkyne complex trans– with an alkoxide resulted in desilylation, followed by subsequent rearrangement to the corresponding carbon-bound cyaphide complex. Cyaphide-alkynyl complexes are prepared similarly.
From 2-phosphaethynolate anion (−OC≡P)
An actinide cyaphide complex can be prepared by C−O bond cleavage of the phosphaethynolate anion, the phosphorus analogue of cyanate. Reaction of the uranium complex [] with [ in the presence of 2.2.2-cryptand results in the formation of a dinuclear, oxo-bridged uranium complex featuring a C≡P ligand.
See also
phosphaalkyne (P≡CR)
Methylidynephosphane
Cyaarside
References
Anions | Cyaphide | [
"Physics",
"Chemistry"
] | 299 | [
"Ions",
"Matter",
"Anions"
] |
55,643,195 | https://en.wikipedia.org/wiki/El%20poder%20brutal | El poder brutal ("Brutal Power"), also known as La cara del diablo ("The Face of the Devil") or El diablo de Tandapi ("The devil of Tandapi"), is a colossal sculpture located in Mejía Canton, Pichincha Province, Ecuador. It is carved into the living rock of a mountain on Ecuador Highway 20, about 5 kilometers from the town of Tandapi. It is famous for its size and because it is located on the most traveled route between Quito and Guayaquil.
The figure is located 30 meters above the ground. It is 20 meters high and protrudes from the side of a hill around which the road curves. The massive face has a pair of horns on its forehead; a pointed nose; and a mouth which is half-open to reveal small fangs. On the pedestal below the face, in capital letters, is carved the phrase "EL PODER BRUTAL."
History
El poder brutal was sculpted between 1985 and 1987 by César Octaviano Cristóbal Buenaño Núñez, a tractor driver employed by the Ecuadorean Ministry of Public Works. Buenaño was born in Ambato and lived in Santo Domingo de los Colorados. He died of leukemia in 2001. Although he had only a primary-school education and lacked formal artistic training, he was an autodidact and managed to gather several million sucres in order to sculpt "El poder brutal" over the course of more than a year. Before he commenced on the sculpture, only his family and close friends were aware of his artistic tendencies.
The Ministry of Public Works tasked Buenaño with the demolition of a rocky hill along a curve of the Alóag–Santo Domingo road, at approximately the 50-km marker. This rock was blocking the view of drivers and causing accidents along the road. Buenaño began his work with a Payloader tractor. In this work he uncovered a gigantic solid rock, which he decided to sculpt.
First, he sketched the sculpture. In addition to this design work, the sculptor made his own tools, such as chisels, sledgehammers and hammers, and a system of pulleys to navigate the surface of the rock while he carved it. Buenaño worked the mountain with his tractor between 6 AM and 3 PM every day; from 3 PM to 9 PM he worked on the sculpture. He angled his tractor work with great care so that the colossal sculpture would not be visible from the road until it was finished. Finally, the Ministry demanded that Buenaño finish the demolition; the final dynamite blast revealed the sculpture to view.
During the construction, Buenaño commented to his friends that "by sculpting a figure that represented the Devil, the Devil would leave the drivers alone." Referring to a small Catholic shrine 10 km away, he also said, "If the Virgin and the saints have their sculptures, why cannot the Devil have his own?"
Symbolism
Some local legends attribute Buenaño's sculpture to Satanic inspiration. In fact, the sculptor was very religious.
According to his son Luis, Octaviano Buenaño's motivation was to leave a message of wisdom to humanity through sculpture. El poder brutal does not merely represent the Devil by his physical traits; its greater meaning is the brutal power of our interior, the free will of humankind. Physically it represents the rational man and the unconscious man: the sculpture combines physical traits of man (eyes and nose) and of animal (ears and fangs).
Location
El poder brutal is located on the boundary between the coastal "Costa" region of Ecuador and the mountainous "Sierra" region; thus at dusk it is often foggy, and precipitation is frequent and violent. Until the 1990s, the road past the statue was the only road between the Costa and the Sierra; and to this day it remains the most traveled road in the country.
See also
List of colossal sculptures in situ
References
External links
Cara del diablo, image
Destino Ecuador: Cara del Diablo (Spanish)
Mil Enigmas: El Poder Brutal, de Octaviano Buenaño. 26 de octubre de 2015 (Spanish)
1987 sculptures
Colossal statues
Ecuadorian art
Tourist attractions in Ecuador
Outdoor structures in Ecuador | El poder brutal | [
"Physics",
"Mathematics"
] | 863 | [
"Quantity",
"Colossal statues",
"Physical quantities",
"Size"
] |
55,644,301 | https://en.wikipedia.org/wiki/Car%20elevator | A car elevator or vehicle elevator is an elevator designed for the vertical transportation of vehicles inside buildings, so increasing the number of vehicles that can be parked in parking lots and parking garages. Where real estate is costly, these car parking systems can reduce overall costs by using less land to park the same number of cars.
Vehicle lifts, which lift a car at its center of gravity, are used in garages and repair shops and are designed to allow access to a car's undercarriage for repair.
Examples
American politician and former presidential candidate Mitt Romney included a car elevator in his 2008 proposal for rebuilding his beach house in La Jolla, San Diego. The elevator is intended to transport cars between floors in a planned split-level, four-vehicle garage. Romney received final approval for the project in October 2013, after an appeal against San Diego's approval of the project was dismissed.
The Porsche Design Tower, a high-rise residential building with 132 units in Sunny Isles Beach, Florida, near Miami, contains three car elevators. The elevators, named "Dezervators" after building developer Gil Dezer, transport cars up to parking spaces directly connected to each apartment unit. The elevators are in circular glass structures and rotate to align with the correct car parking space, allowing residents to exit directly from their cars to their apartments. The building opened in 2017. Another nearby building in Sunny Isles Beach, the Bentley Residences, uses the same technology in a slightly larger building. It will have more automobile capacity with four car elevators and be the tallest building in Sunny Isles Beach.
The Boring Company, a company owned by entrepreneur Elon Musk, built a prototype car elevator in 2017. In 2018, the company received permission from the Hawthorne, California city council to construct a car elevator designed to connect an above-ground garage to the Boring Test Tunnel, an underground test tunnel. The Boring Company intends to use the test tunnel and elevator for research and development of a proposed underground Hyperloop system designed to solve traffic congestion in Los Angeles.
See also
Automated parking system
Car parking system
Car ramp
References
External links
Elevators
Automotive technologies | Car elevator | [
"Engineering"
] | 425 | [
"Building engineering",
"Elevators"
] |
55,648,201 | https://en.wikipedia.org/wiki/EURO%20Journal%20on%20Computational%20Optimization | The EURO Journal on Computational Optimization (EJCO) is a peer-reviewed academic journal that was established in 2012 and is now published by Elsevier.
It is an official journal of the Association of European Operational Research Societies, promoting the use of computers for the solution of optimization problems. Coverage includes both methodological contributions and innovative applications, typically validated through convincing computational experiments.
The editor-in-chief is Immanuel Bomze; the past editor-in-chief was Martine Labbé (2012–2020).
Abstracting and indexing
The journal is abstracted and indexed in the following databases:
EBSCO Information Services
Emerging Sources Citation Index
Google Scholar
International Abstracts in Operations Research
Mathematical Reviews
OCLC
Research Papers in Economics
Scopus
Summon by ProQuest
Zentralblatt Math
External links
Operations research
English-language journals
Academic journals established in 2012 | EURO Journal on Computational Optimization | [
"Mathematics"
] | 172 | [
"Applied mathematics",
"Operations research"
] |
42,585,591 | https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Hajnal%20conjecture | In graph theory, a branch of mathematics, the Erdős–Hajnal conjecture states that families of graphs defined by forbidden induced subgraphs have either large cliques or large independent sets. It is named for Paul Erdős and András Hajnal, who first posed it as an open problem in a paper from 1977.
More precisely, for an arbitrary undirected graph , let be the family of graphs that do not have as an induced subgraph. Then, according to the conjecture, there exists a constant such that the -vertex graphs in have either a clique or an independent set of size . In other words, for any hereditary family of graphs that is not the family of all graphs, there exists a constant such that the -vertex graphs in have either a clique or an independent set of size .
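Written out symbolically, one common way to state the conjecture is the following (with the constant usually denoted δ(H)):

```latex
% A standard symbolic statement of the Erdős–Hajnal conjecture.
\[
  \forall H \;\; \exists\, \delta(H) > 0 :\quad
  \text{every } H\text{-free graph } G \text{ on } n \text{ vertices satisfies }\;
  \max\bigl(\omega(G),\,\alpha(G)\bigr) \;\ge\; n^{\delta(H)},
\]
where $\omega(G)$ denotes the clique number and $\alpha(G)$ the independence number of $G$.
```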
A convenient and symmetric reformulation of the Erdős–Hajnal conjecture is that for every graph , the -free graphs necessarily contain a perfect induced subgraph of polynomial size. This is because every perfect graph necessarily has either a clique or independent set of size proportional to the square root of their number of vertices, and conversely every clique or independent set is itself perfect.
Background on the conjecture can be found in two surveys, one of András Gyárfás and the other of Maria Chudnovsky.
Graphs without large cliques or independent sets
In contrast, for random graphs in the Erdős–Rényi model with edge probability 1/2, both the maximum clique and the maximum independent set are much smaller: their size is proportional to the logarithm of , rather than growing polynomially. Ramsey's theorem proves that no graph has both its maximum clique size and maximum independent set size smaller than logarithmic. Ramsey's theorem also implies the special case of the Erdős–Hajnal conjecture when itself is a clique or independent set.
Partial results
This conjecture is due to Paul Erdős and András Hajnal, who proved it to be true when is a cograph. They also showed, for arbitrary , that the size of the largest clique or independent set grows superlogarithmically. More precisely, for every there is a constant such that the -vertex -free graphs have cliques or independent sets containing at least vertices. The graphs for which the conjecture is true also include those with at most four vertices, all five-vertex graphs, and any graph that can be obtained from these and the cographs by modular decomposition.
As of 2024, however, the full conjecture has not been proven, and remains an open problem.
An earlier formulation of the conjecture, also by Erdős and Hajnal, concerns the special case when is a 5-vertex cycle graph. This case has been resolved by Maria Chudnovsky, Alex Scott, Paul Seymour, and Sophie Spirkl.
Relation to the chromatic number of tournaments
Alon et al. showed that the following statement concerning tournaments is equivalent to the Erdős–Hajnal conjecture.
Conjecture. For every tournament , there exists and such that for every -free tournament with vertices .
Here denotes the chromatic number of , which is the smallest such that there is a -coloring for . A coloring of a tournament is a mapping such that the color classes are transitive for all .
The class of tournaments with the property that every -free tournament has for some constant satisfies this equivalent Erdős–Hajnal conjecture (with ). Such tournaments , called heroes, were considered by Berger et al. There it is proven that a hero has a special structure which is as follows:
Theorem. A tournament is a hero if and only if all its strong components are heroes. A strong tournament with more than one vertex is a hero if and only if it equals or for some hero and some integer .
Here denotes the tournament with the three components , the transitive tournament of size and a single node . The arcs between the three components are defined as follows: . The tournament is defined analogously.
References
External links
The Erdös-Hajnal Conjecture, The Open Problem Garden
Ramsey theory
Conjectures
Unsolved problems in graph theory
Hajnal conjecture | Erdős–Hajnal conjecture | [
"Mathematics"
] | 853 | [
"Unsolved problems in mathematics",
"Combinatorics",
"Unsolved problems in graph theory",
"Conjectures",
"Mathematical problems",
"Ramsey theory"
] |
42,591,967 | https://en.wikipedia.org/wiki/Monolith%20%28catalyst%20support%29 | Monolithic catalyst supports are extruded structures that are the core of many catalytic converters, most diesel particulate filters, and some catalytic reactors. Most catalytic converters are used for vehicle emissions control. Stationary catalytic converters can reduce air pollution from fossil fuel power stations.
Properties
Monoliths for automotive catalytic converters are made of a ceramic that contains a large proportion of synthetic cordierite, 2MgO•2Al2O3•5SiO2, which has a low coefficient of thermal expansion.
Each monolith contains thousands of parallel channels or holes, which are defined by many thin walls, in a honeycomb structure. The channels can be square, hexagonal, round, or other shapes. The hole density may be from 30 to 200 per cm2, and the separating walls can be 0.05 to 0.3 mm. The many small holes have a much larger surface area than one large hole. High surface area facilitates catalytic reaction or filtration. The open spaces in the cross-sectional area are 72 to 87% of the frontal area, so resistance to the flow of gases through the holes is low, which minimizes energy consumed forcing gases through the structure.
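To make the honeycomb geometry concrete, the following sketch computes the cell pitch, channel width and open frontal area of an idealized square-cell monolith; the input values are illustrative picks from the ranges quoted above, not data for any particular product.

```python
# Idealized square-cell honeycomb geometry; input values are illustrative
# picks from the ranges quoted in the text, not data for a specific monolith.
import math

def square_cell_honeycomb(cells_per_cm2, wall_mm):
    """Return (cell pitch, channel width, open frontal area fraction)."""
    pitch_mm = 10.0 / math.sqrt(cells_per_cm2)   # cell repeat distance [mm]
    channel_mm = pitch_mm - wall_mm              # open channel width [mm]
    open_area = (channel_mm / pitch_mm) ** 2     # open fraction of frontal area
    return pitch_mm, channel_mm, open_area

for density, wall in [(62, 0.16), (140, 0.10)]:
    pitch, channel, ofa = square_cell_honeycomb(density, wall)
    print(f"{density} cells/cm^2, {wall} mm wall: "
          f"pitch {pitch:.2f} mm, channel {channel:.2f} mm, open area {ofa:.0%}")
```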
The monolith is a substrate that supports a catalyst. After the monolith is complete, a washcoat is applied that deposits oxides and catalyst(s) (most commonly platinum, palladium, and/or rhodium) on the walls of the holes.
Alternative structures include corrugated metal and a packed bed of coated pellets or other shapes.
Uses
Diesel particulate filters (DPF)
Catalytic incineration
Catalyst support for chemical processes
Vehicle emissions control
References
Catalysis | Monolith (catalyst support) | [
"Chemistry"
] | 344 | [
"Catalysis",
"Chemical reaction stubs",
"Chemical kinetics",
"Chemical process stubs"
] |
57,406,245 | https://en.wikipedia.org/wiki/Peptide%20receptor%20radionuclide%20therapy | Peptide receptor radionuclide therapy (PRRT) is a type of radionuclide therapy, using a radiopharmaceutical that targets peptide receptors to deliver localised treatment, typically for neuroendocrine tumours (NETs).
Mechanism
A key advantage of PRRT over other methods of radiotherapy is the ability to target delivery of therapeutic radionuclides directly to the tumour or target site. This works because some tumours have an abundance (overexpression) of peptide receptors, compared to normal tissue. A radioactive substance can be combined with a relevant peptide (or its analogue) so that it preferentially binds to the tumour. With a gamma emitter as the radionuclide, the technique can be used for imaging with a gamma camera or PET scanner to locate tumours. When paired with alpha or beta emitters, therapy can be achieved, as in PRRT.
The current generation of PRRT targets somatostatin receptors, with a range of analogue materials such as octreotide and other DOTA compounds. These are combined with indium-111, lutetium-177 or yttrium-90 for treatment. 111In is primarily used for imaging alone; however, in addition to its gamma emission, Auger electrons are also emitted, which can have a therapeutic effect in high doses.
PRRT radiopharmaceuticals are constructed with three components; the radionuclide, chelator, and somatostatin analogue (peptide). The radionuclide delivers the actual therapeutic effect (or emission, such as photons, for imaging). The chelator is the essential link between the radionuclide and peptide. For 177Lu and 90Y this is typically DOTA (tetracarboxylic acid, and its variants) and DTPA (pentetic acid) for 111In. Other chelators known as NOTA (triazacyclononane triacetic acid) and HYNIC (hydrazinonicotinamide) have also been experimented with, albeit more for imaging applications. The somatostatin analogue affects biodistribution of the radionuclide, and therefore how effectively any treatment effect can be targeted. Changes affect which somatostatin receptor is most strongly targeted. For example, DOTA-lanreotide (DOTALAN) has a lower affinity for receptor 2 and a higher affinity for receptor 5 compared to DOTA-octreotide (DOTATOC).
Applications
The body of research on the effectiveness of current PRRT is promising, but limited. Complete or partial treatment response has been seen in 20-30% of patients in trials treated with 177Lu-DOTATATE or 90Y-DOTATOC, among the most widely used PRRT drugs. When comparing these two PRRTs, the Y-labeled and the Lu-labeled, it appears that the Y-labeled form is more effective for larger tumors, while the Lu-labeled form is better for smaller and primary tumors. The lack of γ-emission with Y-labeled PRRTs is another important difference between the Lu and Y peptides. In particular, with Y-labeled PRRT it becomes difficult to set up a radiation dose specific to the patient's needs. In most cases PRRT is used for cancers of the gastroenteropancreatic and bronchial tracts, and in some cases phaeochromocytoma, paraganglioma, neuroblastoma or medullary thyroid carcinoma. Various approaches to improve effectiveness and limit side effects are being investigated, including radiosensitising drugs, fractionation regimes and new radionuclides. Alpha emitters, which have much shorter ranges in tissue (limiting the effect on nearby healthy tissue), such as bismuth-213 or actinium-225 labelled DOTATOC, are of particular interest.
A comparative cohort study of 1051 neuroendocrine tumor patients undergoing 90Y-DOTATOC (n=910) or 177Lu-DOTATOC (n=141) reported no significant difference in overall survival between the groups. However, patients with high tumor accumulation and multiple lesions seemed to benefit from 90Y-DOTATOC, while patients with low tumor burden, solitary lesions and extra-hepatic disease experienced more favorable outcome on 177Lu-DOTATOC. There were significantly fewer cases of transitory hematotoxicity in the 177Lu-DOTATOC group compared with the 90Y-DOTATOC group (1.4% versus 10.1%, p=0.001).
The randomized controlled phase III Neuroendocrine Tumors Therapy (NETTER-1) trial evaluated the efficacy and safety of 177Lu-DOTATATE as compared with high-dose octreotide long-acting repeatable (LAR) in patients with advanced progressive somatostatin-receptor positive midgut neuroendocrine tumors. Patients were randomly assigned to receive either 177Lu-DOTATATE and octreotide LAR at a dose of 30 mg every four weeks for symptom control (n=116) or to only receive octreotide LAR at a dose of 60 mg every four weeks (n=113, control group). In total, 200 out of the 231 patients entered long-term follow-up. Final overall survival in the intention-to-treat population was a median of 48.0 months in the 177Lu-DOTATATE group versus a median of 36.3 months in the control group (p=0.30). In other words, there was a numerical difference of 11.7 months, which did not reach statistical significance. 177Lu-DOTATATE was associated with limited acute toxic effects. In neuroendocrine tumor patients with advanced well-differentiated disease and progression on somatostatin analogs, 177Lu-DOTATATE is likely to reduce the risk of disease progression and be associated with quality-of-life benefits.
Dosimetry
Therapeutic PRRT treatments typically involve several gigabecquerels (GBq) of activity. Several radiopharmaceuticals allow simultaneous imaging and therapy, enabling precise dosimetric estimates to be made. For example, the bremsstrahlung emission from 90Y and gamma emissions from 177Lu can be detected by a gamma camera. In other cases, imaging can be performed by labelling a suitable radionuclide to the same peptide as used for therapy. Radionuclides that can be used for imaging include gallium-68, technetium-99m and fluorine-18.
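As a simple numerical illustration of the activities involved, the sketch below applies exponential decay to a single administration; the 7.4 GBq figure is the per-administration activity cited for Lutathera below, while the 177Lu half-life used (about 6.6 days) is the commonly quoted literature value and is an assumption, not taken from this text.

```python
# Simple exponential-decay illustration for a single 177Lu administration.
# The half-life below is the commonly quoted literature value (an assumption
# for this sketch); 7.4 GBq is the per-administration activity cited for Lutathera.
import math

LU177_HALF_LIFE_DAYS = 6.65

def remaining_activity(a0_gbq, days, half_life=LU177_HALF_LIFE_DAYS):
    """Activity (GBq) remaining after `days` of simple exponential decay."""
    return a0_gbq * math.exp(-math.log(2) * days / half_life)

for t in (0, 7, 14, 28):
    print(f"day {t:2d}: {remaining_activity(7.4, t):5.2f} GBq")
```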
Currently used peptides can result in high kidney doses, as the radiopharmaceutical is retained for relatively long periods. Renal protection is therefore used in some cases, taking the form of alternative substances that reduce the uptake of the kidneys.
Availability
PRRT is not yet widely available, with various radiopharmaceuticals at different stages of clinical trials. The cost of small volume production of the relevant radionuclides is high. The cost of Lutathera, a commercial 177Lu-DOTATATE product, has been quoted by the manufacturer as £71,500 (€80,000 or $94,000 in July 2018) for 4 administrations of 7.4 GBq.
United States
177Lu-DOTATATE (international nonproprietary name: lutetium (177Lu) oxodotreotide) was approved by the FDA in early 2018, for treatment of gastroenteropancreatic neuroendocrine tumors (GEP-NETs).
Europe
Marketing authorisation for 177Lu-DOTATATE was granted by the European Medicines Agency on 26 September 2017. 90Y-DOTATOC (international nonproprietary name: yttrium (90Y) edotreotide) and 177Lu-DOTATOC are designated as orphan drugs, but have not yet received marketing authorisation.
United Kingdom
In guidance published in August 2018, lutetium (177Lu) oxodotreotide was recommended by NICE for treating unresectable or metastatic neuroendocrine tumours.
Turkey
The first therapies in Turkey using 177Lu-DOTATATE PRRT were carried out in early 2014, for treatment of gastroenteropancreatic neuroendocrine tumors (GEP-NETs) at the Istanbul University-Cerrahpaşa.
Australia
Research in Australia into the use of lutetium-177-labelled antibodies for various cancers began in the Department of Nuclear Medicine at Fremantle Hospital and Health Service (FHHS), Fremantle, Australia in the late 1990s. The first therapies in Australia using 177Lu-DOTATATE PRRT for NET began in February 2005 on a trial basis under the Therapeutic Goods Administration's (TGA) Special Access Scheme (SAS) and compassionate usage of unapproved therapeutic goods. Shortly after this, 177Lu-DOTATATE PRRT was provided to Western Australian NET patients on a routine basis under the SAS, as well as under various on-going research trials.
In Australia, most centres synthesise the lutetium-177 peptide on-site from lutetium-177 chloride and the appropriate peptide.
Side effects
Like any form of radiotherapy, ionising radiation can harm healthy tissue as well as the intended treatment target. Radiation from lutetium (177Lu) oxodotreotide can cause damage when the medicine passes through tubules in the kidney. Arginine/lysine can be used to reduce renal radiation exposure during peptide receptor radionuclide therapy with lutetium (177Lu) oxodotreotide.
See also
Nuclear medicine
Targeted alpha-particle therapy
References
Cancer treatments
Medical physics
Nuclear medicine procedures
Nuclear technology | Peptide receptor radionuclide therapy | [
"Physics"
] | 2,016 | [
"Nuclear technology",
"Applied and interdisciplinary physics",
"Medical physics",
"Nuclear physics"
] |
57,409,796 | https://en.wikipedia.org/wiki/Stuart%E2%80%93Landau%20equation | The Stuart–Landau equation describes the behavior of a nonlinear oscillating system near the Hopf bifurcation, named after John Trevor Stuart and Lev Landau. In 1944, Landau proposed an equation for the evolution of the magnitude of the disturbance, which is now called as the Landau equation, to explain the transition to turbulence based on a phenomenological argument and an attempt to derive this equation from hydrodynamic equations was done by Stuart for plane Poiseuille flow in 1958. The formal derivation to derive the Landau equation was given by Stuart, Watson and Palm in 1960. The perturbation in the vicinity of bifurcation is governed by the following equation
where
is a complex quantity describing the disturbance,
is the complex growth rate,
is a complex number and is the Landau constant.
The evolution of the actual disturbance is given by the real part of i.e., by . Here the real part of the growth rate is taken to be positive, i.e., because otherwise the system is stable in the linear sense, that is to say, for infinitesimal disturbances ( is a small number), the nonlinear term in the above equation is negligible in comparison to the other two terms in which case the amplitude grows in time only if . The Landau constant is also taken to be positive, because otherwise the amplitude will grow indefinitely (see below equations and the general solution in the next section). The Landau equation is the equation for the magnitude of the disturbance,
which can also be re-written as
Similarly, the equation for the phase is given by
For non-homogeneous systems, i.e., when depends on spatial coordinates, see Ginzburg–Landau equation. Due to the universality of the equation, the equation finds its application in many fields such as hydrodynamic stability, Belousov–Zhabotinsky reaction, etc.
General solution
The Landau equation is linear when it is written for the dependent variable ,
The general solution for of the above equation is
As , the magnitude of the disturbance approaches a constant value that is independent of its initial value, i.e., when . The above solution implies that does not have a real solution if and . The associated solution for the phase function is given by
As , the phase varies linearly with time,
It is instructive to consider a hydrodynamic stability case where it is found that, according to the linear stability analysis, the flow is stable when and unstable otherwise, where is the Reynolds number and is the critical Reynolds number; a familiar example that is applicable here is the critical Reynolds number, , corresponding to the transition to the Kármán vortex street in the problem of flow past a cylinder. The growth rate is negative when and is positive when and therefore in the neighbourhood , it may be written as wherein the constant is positive. Thus the limiting amplitude is given by
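A minimal numerical sketch, assuming the standard Stuart–Landau form dA/dt = (sigma_r + i sigma_i)A - ((l_r + i l_i)/2)|A|^2 A with positive real parts of the growth rate and of the Landau constant; all coefficient values are arbitrary illustrative choices. Regardless of the initial condition, the amplitude saturates at the limiting value (2 sigma_r / l_r)^(1/2).

```python
# Minimal sketch of a Stuart-Landau oscillator, assuming the standard form
#   dA/dt = (sigma_r + i*sigma_i)*A - 0.5*(l_r + i*l_i)*|A|**2 * A
# with sigma_r > 0 and l_r > 0.  All coefficient values are illustrative.
import math

sigma = 0.2 + 1.0j     # complex growth rate (positive real part: linearly unstable)
landau = 0.8 + 0.3j    # Landau constant (positive real part: saturating nonlinearity)

def integrate(a0, dt=1e-3, t_end=80.0):
    """Explicit Euler integration of the Stuart-Landau equation."""
    a = complex(a0)
    for _ in range(int(t_end / dt)):
        a += dt * (sigma * a - 0.5 * landau * abs(a) ** 2 * a)
    return a

predicted = math.sqrt(2.0 * sigma.real / landau.real)   # limiting amplitude
for a0 in (1e-3, 0.1, 2.0):
    print(f"|A(0)| = {a0:5.3f} -> |A(t_end)| = {abs(integrate(a0)):.4f}"
          f"   (predicted {predicted:.4f})")
```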
Negative Landau constant
When the Landau constant is negative, , we must include a negative term of higher order to arrest the unbounded increase of the perturbation. In this case, the Landau equation becomes
The limiting amplitude then becomes
where the plus sign corresponds to the stable branch and the minus sign to the unstable branch. There exists a critical value at which the above two roots are equal (), such that , indicating that the flow in the region is metastable; that is to say, in the metastable region the flow is stable to infinitesimal perturbations, but not to finite-amplitude perturbations.
See also
Landau's phase transition theory
Ginzburg–Landau theory
References
Fluid dynamics
Mechanics
Lev Landau | Stuart–Landau equation | [
"Physics",
"Chemistry",
"Engineering"
] | 734 | [
"Chemical engineering",
"Mechanics",
"Mechanical engineering",
"Piping",
"Fluid dynamics"
] |
57,017,151 | https://en.wikipedia.org/wiki/List%20of%20gases | This is a list of gases at standard conditions, which means substances that boil or sublime at or below and 1 atm pressure and are reasonably stable.
List
This list is sorted by boiling point of gases in ascending order. "sub" and "triple" refer to the sublimation point and the triple point, which are given in the case of a substance that sublimes at 1 atm; "dec" refers to decomposition. "~" means approximately.
Known as gas
The following list has substances known to be gases, but with an unknown boiling point.
Fluoroamine
Trifluoromethyl trifluoroethyl trioxide CF3OOOCF2CF3 boils between 10 and 20°
Bis-trifluoromethyl carbonate boils between −10 and +10° possibly +12, freezing −60°
Difluoroaminosulfinyl fluoride F2NS(O)F is a gas but decomposes over several hours
Trifluoromethylsulfinyl chloride CF3S(O)Cl
Nitrosyl cyanide ?−20° blue-green gas 4343-68-4
Thiazyl chloride NSCl greenish yellow gas; trimerises.
Possible
This list includes substances that may be gases. However reliable references are not available.
cis-1-Fluoro-1-propene
trans-1-Chloropropene ?
cis-1-Chloropropene ?
Perfluoro-1,2-butadiene
Perfluoro-1,2,3-butatriene −5 polymerizes
Perfluoropent-2-ene
Perfluoropent-1-ene 29-30°
Trifluoromethanesulfenylfluoride CF3SF
Difluorocarbamyl fluoride F2NCOF −52°
N-Sulfinyltrifluoromethaneamine CF3NSO 18°
(Chlorofluoromethyl)silane 373-67-1
Difluoromethylsilane 420-34-8
Trifluoromethyl sulfenic trifluoromethyl ester
Pentafluoro(penta-fluorethoxy)sulfur 900001-56-6 15°
Ethenol 557-75-5 10.5° = vinyl alcohol (tautomerizes)
1,1,1,2,2,3,4,4,4-nonafluorobutane 2-10° melt −129°
trans-2H-Heptafluoro-2-butene
Pentafluoroethylhypochlorite around −10°
Trifluoromethyl pentafluoroethyl sulfide 6° 33547-10-3
1,1,1-Trifluoro-N-(trifluoromethoxy)methanamine 671-63-6 0.6°
1-Chloro-1,1,2,2,3,3-hexafluoropropane 422-55-9 16.7
1-Chloro-1,1,2,3,3,3-hexafluoropropane 359-58-0 17.15
2-Chloro-1,1,1,2,3,3-hexafluoropropane 51346-64-6 16.7°
3-Chloro-1,1,1,2,2,3-hexafluoropropane 422-57-1 16.7°
Trifluormethyl 1,2,2,2-tetrafluoroethyl ether 2356-62-9 11°
2-Chloro-1,1,1,3,3-pentafluoropropane HFC-235da 134251-06-2 8°
1,1,2,3,3-Pentafluoropropane 24270-66-4 −3.77
2,2,3,3,4,5,5-Heptafluoro oxolane
(Heptafluoropropyl)carbonimidic difluoride 378-00-7
Pentafluoroethyl carbonimidic difluoride 428-71-7
(Trifluoromethyl)carbonimidic difluoride 371-71-1 CF3N=CF2
Perfluoro[N-methyl-(propylenamine)] 680-23-9
Perfluoro-N,N-dimethylvinylamine 13821-49-3
3,3,4-Trifluoro-2,4-bis-trifluoromethyl-[1,2]oxazetidine 714-52-3
Bis(trifluoromethyl) 2,2-difluoro-vinylamine 13747-23-4
Bis(trifluoromethyl) 1,2-difluoro-vinylamine 13747-24-5
1,1,2-Trifluoro-3-(trifluoromethyl)cyclopropane 2967-53-5
Bis(trifluoromethyl) 2-fluoro-vinylamine 25211-47-6
2-Fluoro-1,3-butadiene 381-61-3
Trifluormethylcyclopropane 381-74-8
cis-1-Fluoro-1-butene 66675-34-1
trans-1-Fluoro-1-butene 66675-35-2
2-Fluoro-1-butene
3-Fluoro-1-butene
trans-1-Fluoro-2-butene
cis-2-fluoro-2-butene
trans-2-fluoro-2-butene
1-Fluoro-2-methyl-1-propene
3-Fluoro-2-methyl-1-propene
Perfluoro-2-methyl-1,3-butadiene 384-04-3
1,1,3,4,4,5,5,5-Octafluoro-1,2-pentadiene 21972-01-0
Near misses
This list includes substances that boil just above standard condition temperatures. Numbers are boiling temperatures in °C.
1,1,2,2,3-Pentafluoropropane 25–26 °C
Dimethoxyborane 25.9 °C
1,4-Pentadiene 25.9 °C
2-Bromo-1,1,1-trifluoroethane 26 °C
1,2-Difluoroethane 26 °C
Hydrogen cyanide 26 °C
Trimethylgermane 26.2 °C
1,H-Pentafluorocyclobut-1-ene
1,H:2,H-hexafluorocyclobutane
Tetramethylsilane 26.7 °C
Chlorosyl trifluoride 27 °C
2,2-Dichloro-1,1,1-trifluoroethane 27.8 °C
Perfluoroethyl 2,2,2-trifluoroethyl ether 27.89 °C
Perfluoroethyl ethyl ether 28 °C
Perfluorocyclopentadiene C5F6 28 °C
2-Butyne 29 °C
Digermane 29 °C
Perfluoroisopropyl methyl ether 29 °C
Trifluoromethanesulfonyl chloride 29–32 °C
Perfluoropentane 29.2 °C
Rhenium(VI) fluoride 33.8 °C
Chlorodimethylsilane 34.7 °C
1,2-Difluoropropane 43 °C
1,3-Difluoropropane 40-42 °C
Dimethylarsine 36 °C
Spiro[2.2]pentane 39 °C
Ruthenium(VIII) oxide 40 °C
Nickel carbonyl 42.1 °C
Trimethylphosphine 43 °C
Unstable substances
Gallane liquid decomposes at 0 °C.
Nitroxyl and diazene are simple nitrogen compounds known to be gases but they are too unstable and short lived to be condensed.
Methanetellurol CH3TeH 25284-83-7 unstable at room temperature.
Sulfur pentafluoride isocyanide isomerises to sulfur pentafluoride cyanide.
References
Gases
Gases | List of gases | [
"Physics",
"Chemistry"
] | 1,827 | [
"Matter",
"Phases of matter",
"nan",
"Statistical mechanics",
"Gases"
] |
36,938,881 | https://en.wikipedia.org/wiki/Torque%20ripple | Torque ripple is an effect seen in many electric motor designs, referring to a periodic increase or decrease in output torque as the motor shaft rotates. It is measured as the difference in maximum and minimum torque over one complete revolution, generally expressed as a percentage.
Examples
A common example is "cogging torque", due to slight asymmetries in the magnetic field generated by the motor windings, which cause variations in the reluctance depending on the rotor position. This effect can be reduced by careful selection of the winding layout of the motor, or through real-time control of the power delivery.
References
"Torque ripple", Emetor.
External links
Electric motors
Torsional vibration
Ripple | Torque ripple | [
"Physics",
"Technology",
"Engineering"
] | 141 | [
"Force",
"Physical quantities",
"Engines",
"Electric motors",
"Electrical engineering",
"Wikipedia categories named after physical quantities",
"Torque"
] |
36,942,251 | https://en.wikipedia.org/wiki/Ricci%20scalars%20%28Newman%E2%80%93Penrose%20formalism%29 | In the Newman–Penrose (NP) formalism of general relativity, independent components of the Ricci tensors of a four-dimensional spacetime are encoded into seven (or ten) Ricci scalars which consist of three real scalars , three (or six) complex scalars and the NP curvature scalar . Physically, Ricci-NP scalars are related with the energy–momentum distribution of the spacetime due to Einstein's field equation.
Definitions
Given a complex null tetrad and with the convention , the Ricci-NP scalars are defined by (where overline means complex conjugate)
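For reference, the scalars in one widely used set of conventions take the forms below; signs and normalizations differ between authors and with the signature choice discussed in Remark III, so this should be read as a representative statement rather than the definitive one.

```latex
% Ricci-NP scalars in one common convention (reference sketch; signs depend
% on the metric signature, as noted in Remark III).
\begin{align}
  \Phi_{00} &= \tfrac{1}{2} R_{ab}\, l^a l^b , &
  \Phi_{11} &= \tfrac{1}{4} R_{ab}\bigl(l^a n^b + m^a \bar m^b\bigr) , &
  \Phi_{22} &= \tfrac{1}{2} R_{ab}\, n^a n^b , \\
  \Phi_{01} &= \tfrac{1}{2} R_{ab}\, l^a m^b , &
  \Phi_{12} &= \tfrac{1}{2} R_{ab}\, n^a m^b , &
  \Phi_{02} &= \tfrac{1}{2} R_{ab}\, m^a m^b , \\
  \Lambda   &= \tfrac{1}{24} R , &
  \Phi_{10} &= \overline{\Phi_{01}} , &
  \Phi_{21} &= \overline{\Phi_{12}} , \quad \Phi_{20} = \overline{\Phi_{02}} .
\end{align}
```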
Remark I: In these definitions, could be replaced by its trace-free part or by the Einstein tensor because of the normalization (i.e. inner product) relations that
Remark II: Specifically for electrovacuum, we have , thus
and therefore is reduced to
Remark III: If one adopts the convention , the definitions of should take the opposite values; that is to say, after the signature transition.
Alternative derivations
According to the definitions above, one should find out the Ricci tensors before calculating the Ricci-NP scalars via contractions with the corresponding tetrad vectors. However, this method fails to fully reflect the spirit of Newman–Penrose formalism and alternatively, one could compute the spin coefficients and then derive the Ricci-NP scalars via relevant NP field equations that
while the NP curvature scalar could be directly and easily calculated via with being the ordinary scalar curvature of the spacetime metric .
Electromagnetic Ricci-NP scalars
According to the definitions of Ricci-NP scalars above and the fact that could be replaced by in the definitions, are related with the energy–momentum distribution due to Einstein's field equations . In the simplest situation, i.e. vacuum spacetime in the absence of matter fields with , we will have . Moreover, for electromagnetic field, in addition to the aforementioned definitions, could be determined more specifically by
where denote the three complex Maxwell-NP scalars which encode the six independent components of the Faraday-Maxwell 2-form (i.e. the electromagnetic field strength tensor)
Remark: The equation for electromagnetic field is however not necessarily valid for other kinds of matter fields.
For example, in the case of Yang–Mills fields there will be where are Yang–Mills-NP scalars.
See also
Newman–Penrose formalism
Weyl scalar
References
General relativity | Ricci scalars (Newman–Penrose formalism) | [
"Physics"
] | 503 | [
"General relativity",
"Theory of relativity"
] |
36,945,036 | https://en.wikipedia.org/wiki/Vibration%20fatigue | Vibration fatigue is a mechanical engineering term describing material fatigue, caused by forced vibration of random nature. An excited structure responds according to its natural-dynamics modes, which results in a dynamic stress load in the material points. The process of material fatigue is thus governed largely by the shape of the excitation profile and the response it produces. As the profiles of excitation and response are preferably analyzed in the frequency domain it is practical to use fatigue life evaluation methods, that can operate on the data in frequency-domain, s power spectral density (PSD).
A crucial part of a vibration fatigue analysis is the modal analysis, that exposes the natural modes and frequencies of the vibrating structure and enables accurate prediction of the local stress responses for the given excitation. Only then, when the stress responses are known, can vibration fatigue be successfully characterized.
The more classical approach of fatigue evaluation consists of cycle counting, using the rainflow algorithm and summation by means of the Palmgren-Miner linear damage hypothesis, that appropriately sums the damages of respective cycles. When the time history is not known, because the load is random (e.g. a car on a rough road or a wind driven turbine), those cycles can not be counted. Multiple time histories can be simulated for a given random process, but such procedure is cumbersome and computationally expensive.
Vibration-fatigue methods offer a more effective approach, which estimates fatigue life based on moments of the PSD. This way, a value is estimated, that would otherwise be calculated with the time-domain approach. When dealing with many material nodes, experiencing different responses (e.g. a model in a FEM package), time-histories need not be simulated. It then becomes viable, with the use of vibration-fatigue methods, to calculate fatigue life in many points on the structure and successfully predict where the failure will most probably occur.
Vibration-fatigue-life estimation
Random load description
In a random process, the amplitude cannot be described as a function of time, because of its probabilistic nature. However, certain statistical properties can be extracted from a signal sample, representing a realization of a random process, provided the latter is ergodic. An important characteristic for the field of vibration fatigue is the amplitude probability density function, which describes the statistical distribution of peak amplitudes. Ideally, the probability of cycle amplitudes, describing the load severity, could then be deduced directly. However, as this is not always possible, the sought-after probability is often estimated empirically.
Effects of structural dynamics
Random excitation of the structure produces different responses, depending on the natural dynamics of the structure in question. Different natural modes get excited and each greatly affects the stress distribution in material. The standard procedure is to calculate frequency response functions for the analyzed structure and then obtain the stress responses, based on given loading or excitation. By exciting different modes, the spread of vibration energy over a frequency range directly affects the durability of the structure. Thus the structural dynamics analysis is a key part of vibration-fatigue evaluation.
Vibration-fatigue methods
Calculation of damage intensity is straightforward once the cycle amplitude distribution is known. This distribution can be obtained from a time-history simply by counting cycles. To obtain it from the PSD another approach must be taken.
Various vibration-fatigue methods estimate damage intensity based on moments of the PSD, which characterize the statistical properties of the random process. The formulas for calculating such an estimate are empirical (with very few exceptions) and are based on numerous simulations of random processes with known PSD. As a consequence, the accuracy of those methods varies, depending on the analyzed response spectra, the material parameters and the method itself - some are more accurate than others.
The most commonly used method is the one developed by T. Dirlik in 1985. Recent research on frequency-domain methods of fatigue-life estimation compared well-established methods as well as more recent ones; the conclusion was that the methods by Zhao and Baker, developed in 1992, and by Benasciutti and Tovo, developed in 2004, are also very suitable for vibration-fatigue analysis. For the narrow-band approximation of a random process, an analytical expression for damage intensity is given by Miles. There are several approaches that adapt the narrow-band approximation: Wirsching and Light proposed an empirical correction factor in 1980, and Benasciutti presented the α0.75 method in 2004. In 2008, Gao and Moan published a spectral method that combines three narrow-band processes. Implementations of these methods are given in the open-source Python package FLife.
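A minimal sketch of the frequency-domain route, using only the narrow-band (Miles) estimate built from spectral moments of the PSD; the flat band-limited PSD and the S-N constants C and k are illustrative assumptions, and the Dirlik or Tovo-Benasciutti corrections discussed above are not included.

```python
import numpy as np
from scipy.special import gamma

def spectral_moment(f, psd, order):
    """i-th spectral moment m_i = integral of f**i * PSD(f) df (trapezoidal rule)."""
    return np.trapz(f ** order * psd, f)

def narrowband_damage_intensity(f, psd, C, k):
    """Narrow-band (Miles) damage intensity per unit time for an S-N curve N = C * S**(-k):
    D = nu0 / C * (sqrt(2*m0))**k * Gamma(1 + k/2),
    with nu0 = sqrt(m2/m0) the expected rate of zero up-crossings."""
    m0 = spectral_moment(f, psd, 0)
    m2 = spectral_moment(f, psd, 2)
    nu0 = np.sqrt(m2 / m0)
    return nu0 / C * (np.sqrt(2.0 * m0)) ** k * gamma(1.0 + k / 2.0)

# Illustrative flat (band-limited) stress PSD between 20 and 80 Hz
f = np.linspace(1.0, 200.0, 2000)
psd = np.where((f > 20.0) & (f < 80.0), 100.0, 0.0)   # (stress unit)^2 / Hz
D_per_second = narrowband_damage_intensity(f, psd, C=1e17, k=5.0)
print(f"estimated fatigue life ~ {1.0 / D_per_second:.3g} s")
```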
Applications
Vibration-fatigue methods find use wherever the structure experiences loading caused by a random process. These can be the forces that bumps on the road exert on a car chassis, the wind blowing on a wind turbine, or waves hitting an offshore construction or a marine vessel. Such loads are first characterized statistically, by measurement and analysis. The data is then used in the product design process.
The computational effectiveness of vibration-fatigue methods, in contrast to the classical approach, enables their use in combination with FEM software packages to evaluate fatigue once the loading is known and the dynamic analysis has been performed. Vibration-fatigue methods are well suited to this workflow, as the structural analysis is already carried out in the frequency domain.
A common practice in the automotive industry is the use of accelerated vibration tests. During the test, a part or a product is exposed to vibrations that correlate with those expected during the service life of the product. To shorten the testing time, the amplitudes are amplified. The excitation spectra used are broad-band and can be evaluated most effectively using vibration-fatigue methods.
See also
Fatigue (material)
Structural failure
Vibration
Structural dynamics
Modal analysis
Random vibration
Rainflow-counting algorithm
Seismic analysis
Solder Fatigue
References
Solid mechanics
Mechanical failure modes
Mechanical vibrations
Fracture mechanics
Materials degradation | Vibration fatigue | [
"Physics",
"Materials_science",
"Technology",
"Engineering"
] | 1,184 | [
"Structural engineering",
"Solid mechanics",
"Mechanical failure modes",
"Fracture mechanics",
"Technological failures",
"Materials science",
"Mechanics",
"Mechanical vibrations",
"Materials degradation",
"Mechanical failure"
] |
61,607,382 | https://en.wikipedia.org/wiki/Cohomology%20of%20a%20stack | In algebraic geometry, the cohomology of a stack is a generalization of étale cohomology. In a sense, it is a theory that is coarser than the Chow group of a stack.
The cohomology of a quotient stack (e.g., classifying stack) can be thought of as an algebraic counterpart of equivariant cohomology. For example, Borel's theorem states that the cohomology ring of a classifying stack is a polynomial ring.
See also
l-adic sheaf
smooth topology
References
Algebraic geometry
Cohomology theories | Cohomology of a stack | [
"Mathematics"
] | 124 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
61,609,022 | https://en.wikipedia.org/wiki/Antibody-oligonucleotide%20conjugate | Antibody-oligonucleotide conjugates or AOCs belong to a class of chimeric molecules combining in their structure two important families of biomolecules: monoclonal antibodies and oligonucleotides.
The combination of the exceptional targeting capabilities of monoclonal antibodies with the numerous functional modalities of oligonucleotides has proven fruitful for a variety of AOC applications, including imaging, detection and targeted therapeutics.
Cell uptake/internalisation still represents the biggest hurdle towards successful ON therapeutics. Straightforward uptake, as for most small-molecule drugs, is hindered by the polyanionic backbone and the molecular size of ONs. Adapting the approach of the broad and successful class of antibody-drug conjugates, researchers increasingly use antibodies and antibody analogues to overcome hurdles related to the delivery and internalisation of ON therapeutics. By exploiting bioconjugation methodology, several conjugates have been obtained.
Development of therapeutic AOCs
The first AOC was reported in 1995, where the lysines of a transferrin antibody were connected using an SMCC bifunctional linker (NHS ester and maleimide moiety) to radiolabelled, cys-bearing ASOs targeting HIV mRNA. In 2011, Marcin and colleagues developed a different construct using the same chemistry, but utilized an siRNA instead of an ASO. In 2013, Myers and coworkers non-specifically labelled an anti-CD19 antibody with N-succinimidyl 3-(2-pyridyl-dithio) propionate to form disulphide bonds with a cys-modified ASO targeting the mRNA of the oncoprotein E2A–PBX1. Ultimately, they could demonstrate in vivo antitumour effects which, in contrast, were not obtained with the single entities. In the same timeframe, several antibodies were exploited for ON delivery in combination with nanoparticles and in non-covalent strategies.
Only recently were the first examples of a site-selective conjugation between an ON therapeutic and a mAb published: in 2015, Genentech exploited the SMCC linker to conjugate siRNA to several engineered mAbs based on their proprietary Thiomab technology, which allows site-specific introduction of a cysteine into the antibody sequence. They could prove the functionality of both entities in the construct and, by screening different antibodies, they validated their importance for an effective antisense effect. The main obstacle encountered was limited endosomal escape, but ultimately a functional construct showing an antisense effect in vivo was reported. After development of the SMCC-based conjugates, two constructs based on strain-promoted alkyne-azide cycloadditions were reported in the literature: an MXD3 mRNA-targeting gapmer (cEt and PS modified) linked to an anti-CD22 antibody targeting pre-B cells leads to apoptosis of targeted cells in vitro and increased mouse survival in vivo in xenograft models. Notably, the dose required for the same therapeutic effect was 20 times lower for the developed conjugate (vs. the naked mAb). Another reported conjugate, exploiting the same unselective conjugation chemistry, employs a CD44- or EphA2-targeting antibody which covalently carries a therapeutically irrelevant “sense-carrier” oligonucleotide. This oligonucleotide base-pairs with the actual antisense oligonucleotide (a gapmer bearing phosphorothioate linkages and 2’-deoxy-2’-fluoro-beta-D-arabinonucleic acid modifications and a terminal fluorophore), aiming for increased RNase H activity.
Antibody Analogue-Oligonucleotide Conjugate
Despite their tremendous potential, ADCs and AOCs suffer from the physical size of the antibody (mAb) entity (150 kDa), which limits solid tumour penetration (at least at low concentrations). Moreover, site-selective modification of the antibody is hard to achieve: because the production of mAbs is difficult, the selective introduction of an unnatural amino acid into the protein is not easily possible.
This is why there is intensive research into antibody analogues and antibody fragments which retain high target specificity combined with a smaller size and a greater scope for modification. Nanobodies, for example, are natural single-domain antibodies found in camelids with an average mass of 15 kDa. They offer increased stability, solubility and tissue penetration compared to mAbs.
One conjugate, consisting of an EGFR Nanobody and an siRNA combined through maleimide bioconjugation, demonstrates the possibility of successful delivery of ONs by nanobodies.
Another example consists of an anti-CD71 Fab fragment which was conjugated to a maleimide-bearing siRNA (itself having 2’OMe/2’F modifications and phosphorothioate linkages). Several (cleavable and uncleavable) linkers between the maleimide moiety and the siRNA were screened, revealing only a small influence on silencing efficacy (uncleavable linkers leading to the best results). To exploit the small size of the Fab fragment, subcutaneous administration was investigated in mouse models, leading to silencing results equivalent to intravenous administration. By comparison with other mAb-siRNA conjugates, the authors even speculate that endosomal escape is largely facilitated by the smaller size of the Fab (vs. mAb).
Moreover, Nanobody-ON conjugates are intensively used for imaging purposes exploiting the small nanobody size to reduce imaging displacement.
See also
Antibody-drug conjugate
Conjugated protein
Immune stimulating antibody conjugate
Bioconjugation
References
Nucleic acids
Monoclonal antibodies
Biotechnology | Antibody-oligonucleotide conjugate | [
"Chemistry",
"Biology"
] | 1,236 | [
"Biomolecules by chemical classification",
"nan",
"Biotechnology",
"Nucleic acids"
] |
61,610,247 | https://en.wikipedia.org/wiki/Go%206976 | Go 6976 (also known as Go-6976 and Goe 6976) is an organic protein kinase inhibitor. It has some specificity for protein kinase C alpha and beta, and through their inhibition it is thought to induce the formation of cell junctions, and hence inhibit the invasion of urinary bladder carcinoma cells.
References
Protein kinase inhibitors
Indolocarbazoles
Nitriles | Go 6976 | [
"Chemistry"
] | 86 | [
"Nitriles",
"Functional groups"
] |
61,611,328 | https://en.wikipedia.org/wiki/BX-912 | BX-912 is a small molecule that inhibits 3-phosphoinositide dependent protein kinase-1. The phosphoinositide 3-kinase/3-phosphoinositide-dependent kinase 1 (PDK1)/AKT signaling pathway plays a role in cancer cell growth, and tumor angiogenesis, and could be a new target for anti-cancer drugs.
References
EC 2.7.11
Imidazoles
Bromoarenes
Aminopyrimidines
Ureas
Kinase inhibitors | BX-912 | [
"Chemistry"
] | 115 | [
"Organic compounds",
"Ureas"
] |
61,613,630 | https://en.wikipedia.org/wiki/Deterministic%20Networking | Deterministic Networking (DetNet) is an effort by the IETF DetNet Working Group to study implementation of deterministic data paths for real-time applications with extremely low data loss rates, packet delay variation (jitter), and bounded latency, such as audio and video streaming, industrial automation, and vehicle control.
DetNet operates at the IP Layer 3 routed segments using a software-defined networking layer to provide IntServ and DiffServ integration, and delivers service over lower Layer 2 bridged segments using technologies such as MPLS and IEEE 802.1 Time-Sensitive Networking. Deterministic Networking aims to migrate time-critical, high-reliability industrial control and audio-video applications from special-purpose Fieldbus networks (HDMI, CAN bus, PROFIBUS, RS-485, RS-422/RS-232, and I²C) to packet networks and IP in particular. DetNet will support both the new applications and existing IT applications on the same physical network.
To support real-time applications, DetNet implements reservation of data plane resources in intermediate nodes along the data flow path, calculation of explicit routes that do not depend on network topology, and redistribute data packets over time and/or space to deliver data even with the loss of one path.
Rationale
Standard IT infrastructure cannot efficiently handle latency-sensitive data. Switches and routers use fundamentally non-deterministic algorithms for processing packets/frames, which may result in sporadic data flow.
A common solution for smoothing out these flows is to increase buffer sizes, but this has a negative effect on delivery latency because data has to fill the buffers before transmission to the next switch or router can start.
IEEE Time-Sensitive Networking (TSN) task group has defined deterministic algorithms for queuing, shaping and scheduling which allow each node to allocate bandwidth and latency according to requirements of each data flow, by computing the buffer size at the network switch. The same algorithms can be employed at higher network layers to improve delivery of IP packets and provide interoperability with TSN hardware when available.
Requirements
Applications from different fields often have fundamentally similar requirements, which may include:
Time synchronization at each node (router/bridge) across the entire network, with accuracy from nanoseconds to microseconds.
Deterministic data flow, which shall support:
unicast or multicast packets;
guaranteed minimum and maximum latency endpoint-to-endpoint across the entire network, with tight jitter when required;
Ethernet packet loss ratios from 10⁻⁹ to 10⁻¹², wireless mesh networks around 10⁻⁵;
high utilization of the available network bandwidth (no need for massive over-provisioning);
flow processing without throttling, congestion feedback, or other network-defined transmission delay;
a fixed transmission schedule, or a maximum bandwidth and packet size.
Scheduling, shaping, limiting, and controlling transmission at each node.
Protection against misbehaving nodes (in both the data and the control planes): a flow cannot affect other flows even under high load.
Reserving resources in nodes that carry the flow.
Operation
Resource allocation
To reduce contention related packet loss, resources such as buffer space or link bandwidth can be assigned to the flow along the path from source to destination. Maintaining adequate buffer storage at each node also limits maximum end-to-end latency.
The maximum transmission rate and maximum packet size have to be explicitly defined for each flow.
Each network node along the path shall not exceed these data rates, as any packet sent out of scheduled time requires additional buffering on the next node, which may exceed its allocated resources.
To limit data rates, traffic policing and shaping functions are applied at the ingress ports. This also protects regular IT traffic from misbehaving DetNet sources.
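The sketch below illustrates the general idea of a token-bucket policer of the kind applied at ingress ports; the rate and burst values are illustrative assumptions and the code is not taken from any DetNet or TSN specification.

```python
class TokenBucketPolicer:
    """Minimal token-bucket policer: a packet conforms only if enough byte credit
    has accumulated at the configured rate; non-conforming packets are dropped
    (policing) rather than delayed (shaping)."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last_time = 0.0

    def allow(self, packet_bytes, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last_time) * self.rate)
        self.last_time = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True   # conforming packet, forward it
        return False      # exceeds the reserved rate, drop it

# A flow reserved at 1 Mbit/s with a 3000-byte burst allowance (illustrative numbers)
policer = TokenBucketPolicer(rate_bytes_per_s=125_000, burst_bytes=3000)
for t, size in [(0.000, 1500), (0.001, 1500), (0.002, 1500), (0.020, 1500)]:
    print(t, "forward" if policer.allow(size, t) else "drop")
```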
Time-of-execution fields in the packets and sub-microsecond time synchronization across all nodes are used to ensure minimum end-to-end latency and eliminate irregular delivery (jitter). Jitter reduces the perceived quality of audiovisual applications, and control network applications built around serial communication protocols cannot handle jitter at all.
Service protection
Packet loss can also result from media errors and equipment failures. Packet replication and elimination and packet encoding provide service protection from these failures.
Replication and elimination work by spreading the data across several explicit paths and reassembling it in order near the destination. A sequence number or timestamp is added to the DetNet flow or transport protocol packet; duplicate packets are then eliminated and out-of-order packets are reordered, based on the sequencing information and transmission logs.
Adhering to the flow latency constraints also imposes constraints on misordering, as out-of-order packets impact the jitter and require additional buffering.
Different path lengths also require additional buffering to equalize the delays and ensure bandwidth constraints after failure recovery.
Replication and elimination may be used by multiple DetNet nodes to improve protection against multiple failures.
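A toy sketch of the elimination side of this mechanism: packets from the replicated member paths carry a sequence number, duplicates are discarded and in-order packets are released. This is only a conceptual illustration; a real implementation (e.g. per IEEE 802.1CB) also needs history windows, timeouts and sequence-number wrap-around handling.

```python
class EliminationFunction:
    """Toy duplicate-elimination and reordering stage for a replicated flow."""

    def __init__(self):
        self.next_seq = 0   # next sequence number expected by the application
        self.pending = {}   # out-of-order packets waiting to be released

    def receive(self, seq, payload):
        released = []
        if seq < self.next_seq or seq in self.pending:
            return released                     # duplicate from another path: eliminate it
        self.pending[seq] = payload
        while self.next_seq in self.pending:    # release any contiguous run in order
            released.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return released

elim = EliminationFunction()
# The same packets arrive over two paths, with the second path delayed and reordered
for seq, payload in [(0, "a"), (1, "b"), (0, "a"), (3, "d"), (2, "c"), (1, "b")]:
    print(seq, "->", elim.receive(seq, payload))
```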
Packet encoding uses multiple transmission units for each packet, adding redundancy and error correction information from multiple packets to each transmission unit.
Explicit routes
In mesh networks, topology events such as failure or recovery can impact data flow even in remote network segments. A side effect of route changes is out-of-order packet delivery.
Real-time networks are often based on physical rings with a simple control protocol and two ports per device for redundant paths, though at a cost of increased hop count and latency.
DetNet routes are typically explicitly defined and do not change (at least immediately) in response to network topology events, so there are no interruptions from routing or bridging protocol negotiations.
Explicit routes can be established with RSVP-TE, Segment Routing, IS-IS, MPLS-TE label-switched path (LSP), or a software-defined networking layer.
Traffic engineering
IETF Traffic Engineering Architecture and Signaling (TEAS) work group maintains MPLS-TE LSP and RSVP-TE protocols. These traffic Engineering (TE) routing protocols translate DetNet flow specification to IEEE 802.1 TSN controls for queuing, shaping, and scheduling algorithms, such as IEEE 802.1Qav credit-based shaper, IEEE802.1Qbv time-triggered shaper with a rotating time scheduler, IEEE802.1Qch synchronized double and triple buffering, 802.1Qbu/802.3br Ethernet packet pre-emption, and 802.1CB frame replication and elimination for reliability. Protocol interworking defined by IEEE 802.1CB is used to advertise TSN sub-network capabilities to DetNet flows via the Active Destination MAC and VLAN Stream identification functions. DetNet flows are matched by destination MAC address, VLAN ID and priority parameters to Stream ID and QoS requirements for talkers and listeners in the AVB/TSN sub-network.
Use cases
IETF foresees the following use cases:
pro audio and video (Audio Video Bridging);
electrical generation and distribution;
building automation systems (BAS);
wireless industrial mesh networks;
cellular radio (fronthaul/backhaul);
industrial machine to machine (M2M) networks;
mining industry (remote vehicle control);
private blockchain;
network slicing.
See also
Audio over Ethernet
Audio over IP
Internet standards
References
External links
Deterministic Networking (detnet) Working Group
Internet Standards
Industrial Ethernet
Control engineering
Audio engineering
Automotive electronics
Network protocols | Deterministic Networking | [
"Engineering"
] | 1,526 | [
"Industrial Ethernet",
"Electrical engineering",
"Audio engineering",
"Control engineering"
] |
61,614,725 | https://en.wikipedia.org/wiki/Type%20IV%20secretion%20system | The bacterial type IV secretion system, also known as the type IV secretion system or the T4SS, is a secretion protein complex found in gram negative bacteria, gram positive bacteria, and archaea. It is able to transport proteins and DNA across the cell membrane. The type IV secretion system is just one of many bacterial secretion systems. Type IV secretion systems are related to conjugation machinery which generally involve a single-step secretion system and the use of a pilus. Type IV secretion systems are used for conjugation, DNA exchange with the extracellular space, and for delivering proteins to target cells. The type IV secretion system is divided into type IVA and type IVB based on genetic ancestry.
Notable instances of the type IV secretion system include the plasmid insertion into plants of Agrobacterium tumefaciens, the toxin delivery methods of Bordetella pertussis (whooping cough) and Legionella pneumophila (Legionnaires' disease), the translocation of effector proteins into host cells by bacteria from the Brucella genus (Brucellosis), and the F sex pilus.
Function
The type IV secretion system is a protein complex found in prokaryotes used to transport DNA, proteins, or effector molecules from the cytoplasm to the extracellular space beyond the cell. The type IV secretion system is related to prokaryotic conjugation machinery. Type IV secretion systems are a highly versatile group, present in Gram positive bacteria, Gram negative bacteria, and archaea. They usually involve a single step which utilizes a pilus, though exceptions exist.
Type IV secretion systems are highly diverse, with a variety of functions and types due to different evolutionary paths. Primarily, type IV secretion systems are grouped based on structural and genetic similarity and are only distantly related to each other. Type IVA systems are similar to the VirB/D4 system of Agrobacterium tumefaciens. Type IVB systems are similar to the Dot/Icm systems found in intracellular pathogens such as Legionella pneumophila. The “other” type systems resemble neither IVA nor IVB. The types are genetically distinct and use separate sets of proteins; however, proteins between the sets have strong homologies to each other, which leads them to function similarly.
Type IV secretion systems are also classified by function into three main types. Conjugative systems: used for DNA transfer via cell-to-cell contact (a process called conjugation); DNA release and uptake systems: used to exchange DNA with the extracellular environment (a process called transformation); and effector systems: used to transfer proteins to target cells. Conjugative as well as DNA release and uptake systems play an important role in horizontal gene transfer, which allows prokaryotes to adapt to their environment, for example by developing antibiotic resistance. Effector systems allow for the interaction between microbes and larger organisms. The effector systems are used as a toxin delivery method by many human pathogens such as Helicobacter pylori (stomach ulcers), whooping cough, and Legionnaires' disease.
Structure
Currently, only the structure of type IVA secretion systems, which occur in gram-negative bacteria, is well described. It is composed of 12 protein subunits, VirB1 - VirB11 and VirD4, analogues of which exist in all type IVA systems. The type IV secretion system’s components can be separated into 3 groups: the translocation channel scaffold, the ATPases, and the pilus.
The translocation channel scaffold is the portion of the machinery that creates the channel between extracellular space and the cytoplasm through the inner and outer membranes, and contains VirB6 - VirB10. The core complex of the scaffold is composed of 14 copies of VirB7, VirB9, and VirB10 which form a cylindrical channel that spans both membranes and connects the cytoplasm to the extracellular space.
A single protein, VirB10 is integral in both the inner and outer membranes. It inserts into the outer membrane using an α-helical barrel structure which helps form a channel between the two membranes. There is an opening on the cytoplasmic end of the channel which is followed by a large chamber and a second opening. The second opening requires a conformational change to allow substrate passage from the cytoplasm into the channel. Either VirB6 or VirB8 is believed to form the inner membrane pore, as they are integral proteins on the inner membrane and have direct contact with the substrate.
The ATPases consist of VirB4, VirB11, and VirD4, which drive the substrate motion through the channel and provide the system with energy. VirB11 belongs to a class of transmembrane transporters called “traffic ATPases”. VirB4 is not well characterized.
The pilus is composed of VirB2 and VirB5, with VirB2 being the major component. In A. tumefaciens, the pilus is 8-12 nm in diameter, and less than one μm in length. F pili, another commonly examined type of pilus, are much longer with a length of 2-20 μm.
Mechanism
Due to the wide variety of type IV secretion systems in both origin and function, it is difficult to state much mechanistically about the group as a whole.
In general, after DNA is packaged in a conjugative system, it is recruited by ATPase analogues to the VirD4 coupling protein and then translocated through the pilus. In A. tumefaciens specifically, the DNA passes through a characterized chain of enzymes before reaching the pilus. The DNA is recruited by VirD4, then by VirB11, then passed to the intermembrane proteins (VirB6 and VirB8), moved to VirB9, and finally sent to the pilus (VirB2).
References
Secretion
Cellular processes
Membrane biology | Type IV secretion system | [
"Chemistry",
"Biology"
] | 1,283 | [
"Membrane biology",
"Cellular processes",
"Molecular biology"
] |
41,160,090 | https://en.wikipedia.org/wiki/Gromov%20boundary | In mathematics, the Gromov boundary of a δ-hyperbolic space (especially a hyperbolic group) is an abstract concept generalizing the boundary sphere of hyperbolic space. Conceptually, the Gromov boundary is the set of all points at infinity. For instance, the Gromov boundary of the real line is two points, corresponding to positive and negative infinity.
Definition
There are several equivalent definitions of the Gromov boundary of a geodesic and proper δ-hyperbolic space. One of the most common uses equivalence classes of geodesic rays.
Pick some point O of a hyperbolic metric space X to be the origin. A geodesic ray is a path given by an isometry γ : [0, ∞) → X such that each segment γ([0, t]) is a path of shortest length from γ(0) to γ(t).
Two geodesic rays γ₁ and γ₂ are defined to be equivalent if there is a constant K such that d(γ₁(t), γ₂(t)) ≤ K for all t. The equivalence class of γ is denoted [γ].
The Gromov boundary of a geodesic and proper hyperbolic metric space X is the set ∂X = {[γ] : γ is a geodesic ray in X}.
Topology
It is useful to use the Gromov product of three points. The Gromov product of three points x, y, z in a metric space is (y, z)ₓ = ½ (d(x, y) + d(x, z) − d(y, z)). In a tree (graph theory), this measures how long the paths from x to y and from x to z stay together before diverging. Since hyperbolic spaces are tree-like, the Gromov product measures how long geodesics from x to y and from x to z stay close before diverging.
Given a point p in the Gromov boundary, we define the sets V(p, r) = {q ∈ ∂X : there are geodesic rays γ₁ and γ₂ with [γ₁] = p, [γ₂] = q and lim inf over s, t → ∞ of (γ₁(s), γ₂(t))_O ≥ r}. These open sets form a basis for the topology of the Gromov boundary.
These open sets are just the sets of geodesic rays which follow one fixed geodesic ray up to a distance r before diverging.
This topology makes the Gromov boundary into a compact metrizable space.
The number of ends of a hyperbolic group is the number of components of the Gromov boundary.
Gromov boundary of a group
The Gromov boundary is a quasi-isometry invariant; that is, if two Gromov-hyperbolic metric spaces are quasi-isometric, then the quasi-isometry between them induces a homeomorphism between their boundaries. This is important because homeomorphisms of compact spaces are much easier to understand than quasi-isometries of spaces.
This invariance allows one to define the Gromov boundary of a Gromov-hyperbolic group: if G is such a group, its Gromov boundary is by definition that of any proper geodesic space on which G acts properly discontinuously and cocompactly (for instance its Cayley graph). This is well-defined as a topological space by the invariance under quasi-isometry and the Milnor-Schwarz lemma.
Examples
The Gromov boundary of a regular tree of degree d≥3 is a Cantor space.
The Gromov boundary of hyperbolic n-space is an (n-1)-dimensional sphere.
The Gromov boundary of the fundamental group of a compact hyperbolic Riemann surface is the unit circle.
The Gromov boundary of most hyperbolic groups is a Menger sponge.
Variations
Visual boundary of CAT(0) space
For a complete CAT(0) space X, the visual boundary of X, like the Gromov boundary of a δ-hyperbolic space, consists of equivalence classes of asymptotic geodesic rays. However, the Gromov product cannot be used to define a topology on it. For example, in the case of a flat plane, any two geodesic rays issuing from a point not heading in opposite directions will have infinite Gromov product with respect to that point. The visual boundary is instead endowed with the cone topology. Fix a point o in X. Any boundary point can be represented by a unique geodesic ray γ issuing from o. Given such a ray and positive numbers t > 0 and r > 0, a neighborhood basis at the boundary point is given by the sets of points whose distance from o exceeds t and whose projection to the closed ball of radius t around o lies within distance r of γ(t).
The cone topology as defined above is independent of the choice of o.
If X is proper, then the visual boundary with the cone topology is compact. When X is both CAT(0) and proper geodesic δ-hyperbolic space, the cone topology coincides with the topology of Gromov boundary.
Cannon's Conjecture
Cannon's conjecture concerns the classification of groups with a 2-sphere at infinity:
Cannon's conjecture: Every Gromov hyperbolic group with a 2-sphere at infinity acts geometrically on hyperbolic 3-space.
The analog of this conjecture is known to be true for 1-spheres and false for spheres of all dimensions greater than 2.
Notes
References
Geometric group theory
Properties of groups | Gromov boundary | [
"Physics",
"Mathematics"
] | 976 | [
"Geometric group theory",
"Mathematical structures",
"Group actions",
"Properties of groups",
"Algebraic structures",
"Symmetry"
] |
41,162,191 | https://en.wikipedia.org/wiki/Richard%20B.%20Norgaard | Richard B. Norgaard (born August 18, 1943) is a professor emeritus of ecological economics in the Energy and Resources Group at the University of California, Berkeley, the first chair and a continuing member of the independent science board of CALFED (California Bay-Delta Authority), and a founding member and former president of the International Society for Ecological Economics. He received the Kenneth E. Boulding Memorial Award in 2006 for recognition of advancements in research combining social theory and the natural sciences. He is considered one of the founders of and a continuing leader in the field of ecological economics.
Personal life
Norgaard was born on August 18, 1943, in Washington D. C., and raised in Montclair, an East Bay neighborhood in the San Francisco Bay Area of California.
At an early age, he was interested in white water rafting, and was introduced to the sport by a friend, whose father, Lou Elliott, worked for the Sierra Club coordinating river trips. When he was 15 Norgaard started working for H.A.T.C.H River Expeditions as a pot washer, and was based in Vernal, Utah, near the confluence of the Green River (Colorado River) and Yampa Rivers. Norgaard continued in the business of white water rafting, quickly becoming a head boatman, and bounced around many guiding companies including one that Lou Elliott eventually founded after his career at The Sierra Club. His commitment to and involvement in the environmental movement began when he served as a river guide to David Brower, then executive director of the Sierra Club, for the Glen Canyon stretch of the Colorado River in the early 1960s. Norgaard also worked shortly as a professional photographer prior to his career in academics.
Since 2004, following the election of George W. Bush to a second term, Norgaard has been seen wearing only black-colored attire, a silent yet visible protest against the folly of the American electorate and the rise of anti-government sentiment, market fundamentalism, and "know-nothingism". He has four children and is married to Nancy A. Rader, the Executive Director of the California Wind Energy Association (CALWEA). Norgaard continues river rafting every summer with his family.
Academic career
Norgaard received his B.A. in economics from University of California, Berkeley, a M.S. in agricultural economics from Oregon State University, and a Ph.D. in economics from the University of Chicago in 1971. That same year, at the age of 27, he was an advisor for President Richard Nixon as part of the President’s Council on Environmental Quality. During the 1970s Dr. Norgaard was one of the nation's leading experts on the leasing of petroleum rights, especially on the outer continental shelf, as well as a leading expert on the economics of pesticide use and biological control of pests. He published an influential paper in 1975 that showed that farmers who hired an independent pest-control expert had higher profits and used half as much pesticide as those who relied on the advice of agribusiness representatives.
Dr. Norgaard became a Professor at U.C. Berkeley at the age of 27 in the Department of Agricultural and Resource Economics. Thereafter he helped found the field of Ecological Economics, and also helped initiate the interdisciplinary Energy and Resources Group at U.C. Berkeley as a graduate program in the early 1970s; he was later fully integrated as a member of its core faculty in the 1980s. He was a professor at the University of California, Berkeley for over 40 years before his retirement in 2013, most recently having taught courses in ecological economics; history of economics; and the history, science, and politics of California's water. His field experience was primarily in Alaska, Brazil, California, and Vietnam with minor forays in other parts of the globe.
Dr. Norgaard is the author of one book, co-author or editor of three additional books, and has over 100 other publications spanning the fields of environment and development, tropical forestry and agriculture, environmental epistemology, energy economics, and ecological economics. Although his research scholarship has been an eclectic mix of sociology, economics, philosophy, and the natural sciences, and he is well known for his iconoclast perspectives of conventional economics, stemming from a strong commitment to inter-disciplinarity and social justice, Professor Norgaard is also among the 1000 economists in the world most cited by other economists (Millennium Editions of Who's Who in Economics, 2000) and was one of ten American economists interviewed in The Changing Face of Economics: Conversations with Cutting Edge Economists (Colander, Holt, and Rosser, University of Michigan Press, 2004).
He is frequently recognized within the field of economics (Who’s Who in Economics, Millennium Edition, and The Changing Face of Economics: Conversations with Cutting Edge Economists 2004) and the field of ecological economics (Kenneth E. Boulding Award, 2006) for both his critiques of and contributions to economics even while he has dedicated most of his time working across disciplinary ways of understanding. The American Association for the Advancement of Science elected Norgaard to the status of “Fellow” in 2007. His research emphasizes how the resolution of complex socio-environmental problems challenges modern beliefs about science and policy and explores development as a process of coevolution between social and environmental systems. His writing is informed through work on energy, environment, and development issues around the globe with different periods of his efforts emphasizing Alaska, Brazil, and California.
Norgaard is a lead author of the 5th Assessment of the Intergovernmental Panel on Climate Change, and serves on the International Panel on Sustainable Resource Management of the United Nations Environment Programme. In 2006, Norgaard was awarded the Kenneth Boulding Memorial Award for "expanding transdisciplinary approaches to knowledge, promoting pluralism, and forging a coevolutionary approach to economy, society, and the environment in the spirit of the open and inquisitive mind that was the hallmark of Boulding's work." He was selected as a fellow of the American Association for the Advancement of Science in 2007. Norgaard also is continuing to lead the Bay Delta Conservation Plan of the Independent Science Board of CALFED (California Bay-Delta Authority).
Norgaard serves on the board of directors of the New Economics Institute, on scientific advisory boards to Tsinghua and Beijing Normal University, and on the board of EcoEquity. He has also served on the board of directors of the American Institute of Biological Sciences (2000–2009), in the position of treasurer (2003–2009). He served as president of the International Society for Ecological Economics (1998–2001). He served as the founding chair of the board of Redefining Progress (1994–97) and as a member of its board until 2007. Norgaard was a project specialist with the Ford Foundation in Brazil (1978 and 1979) and a visiting research fellow at the World Bank (1992). Norgaard also has previously served on the science advisory board of the U.S. EPA (2000–2004), as a member of the U.S. committee of the Scientific Committee on Problems of the Environment (SCOPE), and on numerous panels of the National Research Council and the former Office of Technology Assessment.
Selected publications
Books
Norgaard, Richard B. 1994. Development Betrayed: The End of Progress and a Coevolutionary Revisioning of the Future. London and New York. Routledge.
Costanza, Robert, John Cumberland, Herman Daly, Robert Goodland, and Richard B. Norgaard. 1997. An Introduction to Ecological Economics (intermediate level college text). International Society for Ecological Economics and St. Lucie Press, Florida.
Dryzek, John S., David Schlosberg, and Richard B. Norgaard. (eds). 2011. The Oxford Handbook of Climate Change and Society. Oxford University Press. Oxford.
Dryzek, John S., Richard B. Norgaard, and David Schlosberg. 2013. Climate-Challenged Society. Oxford University Press.
Selected journal articles
Hall, Darwin C., and Richard B. Norgaard. "On the timing and application of pesticides." American Journal of Agricultural Economics 55.2 (1973): 198–201.
Norgaard, Richard B. "Coevolutionary development potential." Land economics 60.2 (1984): 160–173.
Norgaard, Richard B. "Environmental economics: an evolutionary critique and a plea for pluralism." Journal of Environmental Economics and Management 12.4 (1985): 382–394.
Howarth, Richard B., and Richard B. Norgaard. "Intergenerational resource rights, efficiency, and social optimality." Land economics 66.1 (1990): 1–11.
Mcneely, Jeffrey A., and Richard B. Norgaard. "Developed country policies and biological diversity in developing countries." Agriculture, ecosystems & environment 42.1 (1992): 194–204.
Howarth, Richard B., and Richard B. Norgaard. "Environmental valuation under sustainable development." The American economic review 82.2 (1992): 473–477.
Norgaard, R. B. "Ecology, politics, and economics: finding the common ground for decision making in conservation." Principles of conservation biology. Sinauer Associates, Sunderland, Massachusetts, USA (1994): 439–465.
Norgaard, Richard B., and Thomas O. Sikor. "The methodology and practice of agroecology." Agroecology, the Science of Sustainable Agriculture (1995): 53–62.
Lélé, Sharachchandra, and Richard B. Norgaard. "Sustainability and the scientist’s burden." Conservation Biology 10.2 (1996): 354–365.
Lélé, Sharachchandra, and Richard B. Norgaard. "Practicing interdisciplinarity." BioScience 55.11 (2005): 967–975.
Norgaard, Richard B., and Paul Baer. "Collectively seeing complex systems: The nature of the problem." BioScience 55.11 (2005): 953–960.
Norgaard, Richard B., and Paul Baer. "Collectively seeing climate change: The limits of formal models." BioScience 55.11 (2005): 961–966.
Norgaard, Richard B. "Bubbles in a back eddy: a commentary on “the origin, diagnostic attributes and practical application of coevolutionary theory”." Ecological Economics 54.4 (2005): 362–365.
Sneddon, Christopher, Richard B. Howarth, and Richard B. Norgaard. 2006. Sustainable Development in a Post-Brundtland World. Ecological Economics 57(2):253–68.
Norgaard, Richard B. and Xuemei Liu. 2007. Market Governance Failure. Ecological Economics. 60(3):634–641.
Norgaard, Richard B. "Deliberative economics." Ecological Economics 63.2-3 (2007): 375–82.
Norgaard, Richard B. "Finding hope in the millennium ecosystem assessment." Conservation Biology 22.4 (2008): 862–869.
Norgaard, Richard B., and Ling Jin. "Trade and the governance of ecosystem services." Ecological Economics 66.4 (2008): 638–652.
Norgaard, Richard B., Giorgos Kallis, and Michael Kiparsky. "Collectively engaging complex socio-ecological systems: re-envisioning science, governance, and the California Delta." environmental science & policy 12.6 (2009): 644–652.
Norgaard, Richard B. "Ecosystem services: From eye-opening metaphor to complexity blinder." Ecological Economics 69.6 (2010): 1219–1227.
Kallis, Giorgos, and Richard B. Norgaard. "Coevolutionary ecological economics." Ecological Economics 69.4 (2010): 690–699.
Gual, Miguel A., and Richard B. Norgaard. "Bridging ecological and social systems coevolution: A review and proposal." Ecological economics 69.4 (2010): 707–717.
Articles about
1992. The Price of Green. Economics Focus. The Economist (May 9): 87.
1992. Warsh, David. Economics, Ecology: Twin sciences of the 21st century. Economic Principles. The Boston Sunday Globe (May 24): 29–30.
1992. Interview titled: Wirtschaften für unsere Enkelkinder? WEINER BLÆTTER 05/92 pages 19–21.
1992. Interview titled: Richard B. Norgaard. Options International Institute for Applied Systems Analysis. (September):14-15.
2004. “Richard B. Norgaard”. Chapter 8 in The Changing Face of Economics: Conversations with Cutting Edge Economists. Dave Colander, Ric Holt, and J. Barkley Rosser. Ann Arbor. University of Michigan Press.
2005. “Return to a lost world of upside-down mountains”. Barry Bergman. Berkeleyan 34(6):8 (September 22).
1992. Taking Future Generations into Account. Lynn Atwood. Berkeleyan 20(12).
2010. “Co-Evolutionary Economics (main originator: Richard Norgaard)”. Chapter 9 in Integral Economics: Releasing the Economic Genius of Society.London. Gower Ashgate.
2011. “Richard Norgaard”. Chapter 6 in The Wildness Within: Remembering David Brower. Kenneth Brower. Berkeley. Heyday Books.
References
American non-fiction environmental writers
Ecological economists
Living people
1943 births
Academics from Washington, D.C.
Environmental social scientists | Richard B. Norgaard | [
"Environmental_science"
] | 2,834 | [
"Environmental social scientists",
"Environmental social science"
] |
41,162,315 | https://en.wikipedia.org/wiki/Stanene | Stanene is a topological insulator, theoretically predicted by Shoucheng Zhang's group at Stanford, which may display dissipationless currents at its edges near room temperature. It is composed of tin atoms arranged in a single layer, in a manner similar to graphene. Stanene got its name by combining stannum (the Latin name for tin) with the suffix -ene used by graphene. Research is ongoing in Germany and China, as well as at laboratories at Stanford and UCLA.
The addition of fluorine atoms to the tin lattice could extend the critical temperature up to 100 °C. This would make it practical for use in integrated circuits to make smaller, faster and more energy efficient computers.
See also
Graphene
Silicene
Boron
Stannenes (similar name to stanene)
Stannane (similar name to stanene)
Semiconductors
Topological insulator
Superconductivity
Superconductors
References
External links
Superconductors
Tin
Nanomaterials | Stanene | [
"Chemistry",
"Materials_science"
] | 203 | [
"Nanotechnology",
"Superconductivity",
"Nanomaterials",
"Superconductors"
] |
41,165,083 | https://en.wikipedia.org/wiki/Structural%20channel | The structural channel, C-channel or parallel flange channel (PFC), is a type of (usually structural steel) beam, used primarily in building construction and civil engineering. Its cross section consists of a wide "web", usually but not always oriented vertically, and two "flanges" at the top and bottom of the web, only sticking out on one side of the web. It is distinguished from I-beam or H-beam or W-beam type steel cross sections in that those have flanges on both sides of the web.
Uses
The structural channel is not used as much in construction as symmetrical beams, in part because its bending axis is not centered on the width of the flanges. If a load is applied equally across its top, the beam will tend to twist away from the web. This may not be a weak point or problem for a particular design, but is a factor to be considered.
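As a rough illustration of this offset, the sketch below uses the classic thin-walled, uniform-thickness approximation for the shear-centre position of a channel; the dimensions are made-up examples, and actual design work should rely on published section tables or a full section-property calculation.

```python
def channel_shear_center_offset(flange_width, web_depth):
    """Approximate shear-centre offset of a thin-walled channel of uniform thickness.

    Classic thin-walled result e = 3*b**2 / (h + 6*b), with b the flange width and
    h the web depth (mid-line dimensions); for uniform thickness the offset does
    not depend on the thickness. The offset is measured from the web mid-line, on
    the side away from the flanges - loads applied elsewhere tend to twist the section.
    """
    b, h = flange_width, web_depth
    return 3.0 * b ** 2 / (h + 6.0 * b)

# Illustrative channel: 75 mm flanges, 200 mm deep web
e = channel_shear_center_offset(flange_width=75.0, web_depth=200.0)
print(f"shear centre lies about {e:.1f} mm behind the web mid-line")
```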
Channels or C-beams are often used where the flat, back side of the web can be mounted to another flat surface for maximum contact area. They are also sometimes welded together back-to-back to form a non-standard I-beam.
See also
Hollow structural section
References
Further reading
M. F. Ashby, 2005, Materials Selection in Mechanical Design, Elsevier.
External links
Canadian Institute of Steel Construction website
American Institute of Steel Construction website
Wood I-joists
British Constructional Steelwork Association website
Structural engineering
Structural steel | Structural channel | [
"Engineering"
] | 297 | [
"Construction",
"Civil engineering",
"Structural engineering",
"Structural steel"
] |
41,167,140 | https://en.wikipedia.org/wiki/Schur%20product%20theorem | In mathematics, particularly in linear algebra, the Schur product theorem states that the Hadamard product of two positive definite matrices is also a positive definite matrix.
The result is named after Issai Schur (Schur 1911, p. 14, Theorem VII) (note that Schur signed as J. Schur in Journal für die reine und angewandte Mathematik.)
The converse of the theorem holds in the following sense: if M is a symmetric matrix and the Hadamard product M ∘ N is positive definite for all positive definite matrices N, then M itself is positive definite.
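As a quick numerical sanity check of the statement (not a proof), the sketch below generates random positive definite matrices and verifies that the smallest eigenvalue of their entrywise product is positive.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_positive_definite(n):
    """A random symmetric positive definite matrix A = B @ B.T + I."""
    B = rng.standard_normal((n, n))
    return B @ B.T + np.eye(n)

# The Hadamard (entrywise) product of two positive definite matrices should
# again have strictly positive eigenvalues, as the theorem asserts.
for _ in range(5):
    M, N = random_positive_definite(6), random_positive_definite(6)
    eigenvalues = np.linalg.eigvalsh(M * N)   # '*' on ndarrays is the entrywise product
    assert eigenvalues.min() > 0
    print(f"smallest eigenvalue of the Hadamard product: {eigenvalues.min():.4f}")
```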
Proof
Proof using the trace formula
For any matrices M and N, the Hadamard product M ∘ N considered as a bilinear form acts on vectors x, y as x^T (M ∘ N) y = tr(M^T D_x N D_y), where tr is the matrix trace and D_x is the diagonal matrix having as diagonal entries the elements of x.
Suppose M and N are positive definite, and so Hermitian. We can consider their square-roots M^(1/2) and N^(1/2), which are also Hermitian, and write tr(M^T D_x N D_x) = tr((M^(1/2))^T D_x N^(1/2) · N^(1/2) D_x (M^(1/2))^T).
Then, for x = y, this is written as tr(A^T A) for A = N^(1/2) D_x (M^(1/2))^T and thus is strictly positive for A ≠ 0, which occurs if and only if x ≠ 0. This shows that M ∘ N is a positive definite matrix.
Proof using Gaussian integration
Case of M = N
Let X be an n-dimensional centered Gaussian random variable with covariance ⟨X_i X_j⟩ = M_ij. Then the covariance matrix of the squared components X_i² and X_j² is Cov(X_i², X_j²) = ⟨X_i² X_j²⟩ − ⟨X_i²⟩⟨X_j²⟩.
Using Wick's theorem to develop ⟨X_i² X_j²⟩ = 2⟨X_i X_j⟩² + ⟨X_i²⟩⟨X_j²⟩, we have Cov(X_i², X_j²) = 2 M_ij².
Since a covariance matrix is positive definite, this proves that the matrix with elements M_ij² is a positive definite matrix.
General case
Let X and Y be n-dimensional centered Gaussian random variables with covariances ⟨X_i X_j⟩ = M_ij and ⟨Y_i Y_j⟩ = N_ij, independent from each other so that we have
⟨X_i Y_j⟩ = 0 for any i, j.
Then the covariance matrix of the products X_i Y_i and X_j Y_j is
Cov(X_i Y_i, X_j Y_j) = ⟨X_i Y_i X_j Y_j⟩ − ⟨X_i Y_i⟩⟨X_j Y_j⟩.
Using Wick's theorem to develop ⟨X_i Y_i X_j Y_j⟩, and also using the independence of X and Y, we have Cov(X_i Y_i, X_j Y_j) = ⟨X_i X_j⟩⟨Y_i Y_j⟩ = M_ij N_ij.
Since a covariance matrix is positive definite, this proves that the matrix with elements M_ij N_ij is a positive definite matrix.
Proof using eigendecomposition
Proof of positive semidefiniteness
Let M = Σ_i μ_i m_i m_iᵀ and N = Σ_j ν_j n_j n_jᵀ be the eigendecompositions of M and N, with all μ_i, ν_j > 0. Then M ∘ N = Σ_{i,j} μ_i ν_j (m_i m_iᵀ) ∘ (n_j n_jᵀ) = Σ_{i,j} μ_i ν_j (m_i ∘ n_j)(m_i ∘ n_j)ᵀ.
Each matrix (m_i ∘ n_j)(m_i ∘ n_j)ᵀ is positive semidefinite (but, except in the 1-dimensional case, not positive definite, since they are rank-1 matrices). Also, μ_i ν_j > 0, thus the sum is also positive semidefinite.
Proof of definiteness
To show that the result is positive definite requires even further proof. We shall show that for any vector , we have . Continuing as above, each , so it remains to show that there exist and for which corresponding term above is nonzero. For this we observe that
Since is positive definite, there is a for which (since otherwise for all ), and likewise since is positive definite there exists an for which However, this last sum is just . Thus its square is positive. This completes the proof.
References
External links
Bemerkungen zur Theorie der beschränkten Bilinearformen mit unendlich vielen Veränderlichen at EUDML
Linear algebra
Matrix theory
Issai Schur | Schur product theorem | [
"Mathematics"
] | 583 | [
"Theorems in algebra",
"Theorems in linear algebra"
] |
62,627,023 | https://en.wikipedia.org/wiki/Interfacial%20rheology | Interfacial rheology is a branch of rheology that studies the flow of matter at the interface between a gas and a liquid or at the interface between two immiscible liquids. The measurement is done while having surfactants, nanoparticles or other surface active compounds present at the interface. Unlike in bulk rheology, the deformation of the bulk phase is not of interest in interfacial rheology and its effect is kept as small as possible. Instead, the flow of the surface active compounds is of interest.
The interface can be deformed either by changing its size or its shape. Therefore, interfacial rheological methods can be divided into two categories: dilational and shear rheology methods.
Interfacial dilational rheology
In dilatational interfacial rheology, the size of the interface changes over time. The change in the surface stress or surface tension of the interface is measured during this deformation. Based on the response, the interfacial viscoelasticity is calculated according to well-established theories as E = dγ / d(ln A) = E′ + iE″, with E′ = |E| cos δ and E″ = |E| sin δ,
where
|E| is the complex surface dilatational modulus
γ is the surface tension or interfacial tension of the interface
A is the interfacial area
δ is the phase angle difference between the surface tension and area
E′ is the elastic (storage) modulus
E″ is the viscous (loss) modulus
Most commonly, the measurement of dilational interfacial rheology is conducted with an optical tensiometer combined to a pulsating drop module. A pendant droplet with surface active molecules in it is formed and pulsated sinusoidally. The changes in the interfacial area causes changes in the molecular interactions which then changes the surface tension. Typical measurements include performing a frequency sweep for the solution to study the kinetics of the surfactant.
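A sketch of how the storage and loss moduli might be extracted from one such oscillation experiment by least-squares fitting; the synthetic signals stand in for an instrument output, the function name and signal model are assumptions for illustration, and the phasor-ratio relation used corresponds to E′ = |E| cos δ, E″ = |E| sin δ from the section above.

```python
import numpy as np

def dilatational_moduli(t, area, tension, freq_hz):
    """Estimate the storage (E') and loss (E'') dilatational moduli from sinusoidal
    area and surface-tension signals at a known oscillation frequency.

    Both signals are least-squares fitted to a0 + a_c*cos(wt) + a_s*sin(wt); the
    complex modulus is the ratio of the tension and ln(area) phasors, so its real
    and imaginary parts are the storage and loss moduli."""
    w = 2.0 * np.pi * freq_hz
    basis = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    coef_gamma = np.linalg.lstsq(basis, tension, rcond=None)[0]
    coef_lnA = np.linalg.lstsq(basis, np.log(area), rcond=None)[0]
    E = (coef_gamma[1] - 1j * coef_gamma[2]) / (coef_lnA[1] - 1j * coef_lnA[2])
    return E.real, E.imag

# Synthetic 0.1 Hz oscillation: 5 % area amplitude, tension responding with a 20 degree phase shift
t = np.linspace(0.0, 50.0, 5001)
phase = 2.0 * np.pi * 0.1 * t
area = 30.0 * (1.0 + 0.05 * np.cos(phase))                      # mm^2, for example
tension = 50.0 + 1.0 * np.cos(phase + np.deg2rad(20.0))         # mN/m, for example
E_storage, E_loss = dilatational_moduli(t, area, tension, freq_hz=0.1)
print(f"E' = {E_storage:.2f} mN/m, E'' = {E_loss:.2f} mN/m")
```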
In another measurement method suitable especially for insoluble surfactants, a Langmuir trough is used in an oscillating barrier mode. In this case, two barriers that limit the interfacial area are being oscillated sinusoidally and the change in surface tension measured.
Interfacial shear rheology
In interfacial shear rheology, the interfacial area remains the same throughout the measurement. Instead, the interfacial area is sheared in order to be able to measure the surface stress present. The equations are similar to dilatational interfacial rheology but shear modulus is often marked with G instead of E like in dilational methods. In a general case, G and E are not equal.
Since interfacial rheological properties are relatively weak, this causes challenges for the measurement equipment. For high sensitivity, it is essential to maximize the contribution of the interface while minimizing the contribution of the bulk phase. The Boussinesq number, Bo, the ratio of the surface drag to the drag from the bulk phase, depicts how sensitive a measurement method is for detecting the interfacial viscoelasticity.
The commercialized measurement techniques for interfacial shear rheology include magnetic needle method, rotating ring method and rotating bicone method. The magnetic needle method, developed by Brooks et al., has the highest Boussinesq number of the commercialized methods. In this method, a thin magnetic needle is oscillated at the interface using a magnetic field. By following the movement of the needle with a camera, the viscoelastic properties of the interface can be detected. This method is often used in combination with a Langmuir trough in order to be able to conduct the experiment as a function of the packing density of the molecules or particles.
Applications
When surfactants are present in a liquid, they tend to adsorb at the liquid-air or liquid-liquid interface. Interfacial rheology deals with the response of the adsorbed interfacial layer to deformation. The response depends on the layer composition, and thus interfacial rheology is relevant in many applications in which adsorbed layers play a crucial role, for example in the development of surfactants, foams and emulsions. Many biological systems like pulmonary surfactant and meibum are dependent on interfacial viscoelasticity for their functionality. Interfacial rheology has been employed to understand the structure-function relationship of these physiological interfaces and how compositional deviations cause diseases such as infant respiratory distress syndrome or dry eye syndrome, and it has helped to develop therapies like artificial pulmonary surfactant replacements and eye drops.
Interfacial rheology enables the study of surfactant kinetics, and the viscoelastic properties of the adsorbed interfacial layer correlate well with emulsion and foam stability. Surfactants and surface active polymers are used for stabilising emulsions and foams in the food and cosmetic industries. Proteins are surface active and adsorb at the interface, where they can change conformation and influence the interfacial properties. Natural surfactants like asphaltenes and resins stabilize water-oil emulsions in crude oil applications, and by understanding their behavior the crude oil separation process can be enhanced. Enhanced oil recovery efficiency can also be optimized.
Specialized setups that allow bulk exchange during interfacial rheology measurements are used to investigate the response of adsorbed proteins or surfactants upon changes in pH or salinity. These setups can also be used to mimic more complex conditions like the gastric environment, to investigate the in vitro displacement or enzymatic hydrolysis of polymers adsorbed at oil-water interfaces and to understand how the respective emulsions are digested in the stomach.
Interfacial rheology allows the probing of bacterial adsorption and biofilm formation at liquid-air or liquid-liquid interfaces.
In food science, interfacial rheology has been used to understand the stability of emulsions like mayonnaise, the stability of espresso foam, the film formed on black tea, and the formation of kombucha biofilms.
See also
Rheology
Langmuir trough
Tensiometer
Surface tension
References
Rheology
Surface science | Interfacial rheology | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,268 | [
"Rheology",
"Condensed matter physics",
"Surface science",
"Fluid dynamics"
] |
52,757,053 | https://en.wikipedia.org/wiki/Mine%20Kafon%20Drone | The Mine Kafon Drone is a drone for demining, led by Afghanistan-born Massoud Hassani. The drone is designed to map an area for land mines, detect the mines, and then detonate them remotely. It has been field-tested with the Dutch Ministry of Defence. The use of a drone is safer and less expensive than typical methods for mine removal, which endanger trained mine disposal experts and dogs. The Mine Kafon Foundation, established by Hassani in 2013, is based in Eindhoven, Netherlands.
Background
Massoud Hassani
Massoud Hassani was born in Afghanistan, where there are an estimated 10 million mines buried in about . He and his brother, Mahmud, in fear of the landmines, took a special path to school. Massoud says, that knowing that there are buried landmines "becomes like a mental disorder... The fear is on your mind all the time." As children, the boys made wind-driven toys to play with around the Kabul deserts, but they would get stuck in the middle of minefields.
His mother arranged for smugglers to get him out of the country when he was 14 years of age. The Hassani family settled in the Netherlands. Massoud studied Industrial Design at Design Academy Eindhoven, and inspired by homemade wind-powered toys he made during his childhood, he and his brother created the Mine Kafon wind-powered landmine machine—Kafon means "explode" in Dari. The machine, which looks like a giant dandelion puff ball that rolls across areas of land and detonates landmines, was created for his 2011 graduation project. Made of bamboo, iron, and plastic, the design that was inspired by a starburst was a finalist in London's Design Museum's 2012 Design of the Year Award. Called a visual poem by the New York Times, it was exhibited in 2013 at the Museum of Modern Art in New York City and the following year at "The Fab Mind: Hints of the Future in a Shifting World" design exhibition as one of the "socially and politically engaged designs".
The concept for demining using the dandelion-shaped machine works in theory, particularly in open desert areas where the wind blows freely, but it could cause more problems—in terms of retrieval and maintenance—once it was damaged in the middle of a minefield, says Henk van der Slik of the Dutch Explosive Ordnance Disposal organization. While it is not an effective tool for demining, it could be used to identify potential areas where mines were placed.
Landmines worldwide
There are about 100 million buried landmines in 60 countries. The United Nations states that there are 20,000 individuals—mostly the elderly, women, and children— that are maimed by landmines annually. According to Ingenieur, civilians make up the largest portion of victims of landmines, at an estimate of 79% of the total victims. The military are estimated to be 18% of the victims, and professional mine sweepers are 3% of the victims.
Typically, mines are removed using mine disposal experts, dogs, and wheeled vehicles, which is dangerous. Further, the mines become more unstable over time. It costs between $300–1,000 to remove each mine, according to the World Economic Forum. It is also a lengthy process. There were 171,000 American and Russian mines laid in Mozambique during their revolution, which have been said to have killed up to 15,000 people, according to Human Rights Watch. It took 22 years to clear the mines from the country. The effort was completed in 2015.
The project
Subsequently, the project rapidly gained media interest. In 2012, Massoud and his brother Mahmud organised a Kickstarter campaign to raise funds for the development of the Mine Kafon tumbleweed mine detonator ball. The campaign's goal was set at £100,000, and it received £119,456. After the successful fundraising campaign, Massoud established the Mine Kafon Foundation, a research and development organization, in 2013 in Eindhoven, Netherlands.
Prototyping and field testing of the drone was conducted with the support of the Dutch Ministry of Defence. They also crowd-sourced globally for designers and engineers to collaborate on the project. The team, led by Massoud, currently optimises the Mine Kafon to safely and efficiently operate across all landmine contaminated terrains.
The drone
The unmanned airborne de-mining system uses a three-step process to autonomously map, detect, and detonate land mines. It flies above potentially dangerous areas, generating a 3D map using its 3D camera, GPS, and a computer. It then uses a metal detector, held close above the ground by sensors and a retractable arm, to pinpoint and geotag the location of mines. The drone can then place a detonator above the mines using its robotic gripping arm, before retreating to a safe distance and detonating the mine. The firm claims its drone is safer, 20 times faster, and up to 200 times cheaper than current technologies and might clear mines globally in 10 years. Some of the challenges are that it is difficult to rely on GPS for precise locations and difficult to identify mines that have been buried for decades.
In terms of the mechanics, the goal now is to optimize the drone and create base stations. The team will explore using external antennas to triangulate locations, to improve the results of using GPS alone. In addition, the plan is to train pilots to use the drone and carry out tests in different countries. Another Kickstarter campaign was established in July 2016 to help fund these efforts; its goal was set at €70,000, and it received €177,456, more than €100,000 above the goal.
References
External links
Mine warfare
Kickstarter-funded products
Unmanned aerial vehicles of the Netherlands
Eindhoven
Mine action | Mine Kafon Drone | [
"Engineering"
] | 1,218 | [
"Military engineering",
"Mine warfare"
] |
52,758,243 | https://en.wikipedia.org/wiki/Kinetic%20energy%20metamorphosis | Kinetic energy metamorphosis (KEM) is a tribological process of gradual crystal re-orientation and foliation of component minerals in certain rocks. It is caused by very high, localized application of kinetic energy. The required energy may be provided by prolonged battery of fluvially propelled bed load of cobbles, by glacial abrasion, tectonic deformation, and even by human action. It can result in the formation of laminae on specific metamorphic rocks that, while being chemically similar to the protolith, differ significantly in appearance and in their resistance to weathering or deformation. These tectonite layers are of whitish color and tend to survive granular or mass exfoliation much longer than the surrounding protolith.
KEM in cupules
The products of KEM were first identified in 2015 in cupules, a form of rock art consisting of spherical-cap or dome-shaped depressions created by percussion with hammer-stones. KEM laminae, caused by solid-state re-metamorphosis of metamorphic rock, have been observed in cupules on the following rock types:
On quartzite at Indragarh Hill, Bhanpura, India; Nchwaneng, Korannaberg site complex, South Africa; and Inca Huasi, Mizque, central Bolivia.
On sandstone at Jabal al-Raat, Shuwaymis site complex, northern Saudi Arabia; Umm Singid and Jebel as-Suqur, Sudan; Tabrakat, Acacus site complex, Libya; and Inca Huasi, Mizque, central Bolivia.
On schist at Condor Mayu 2, Santivañez site complex, Cochabamba, Bolivia.
On granite at Wushigou 1, Fangcheng, Henan Province, China.
Replication has established that cupules produced on very hard rocks, such as quartzite, require many tens of thousands of hammer-stone blows to create. Therefore, the cumulative force applied to very small surface areas (<15 cm2) is on the order of tens of kN (kilonewtons). In one extreme case, the KEM lamina has developed to a thickness of c. 10 mm, but the most commonly observed thickness is about 1–2 mm. The tectonite layer is always thickest in the central part of the cupule, i.e. where the greatest amount of energy was applied.
Geological KEM phenomena
These phenomena have since also been observed in geological contexts, generally of three types:
On the bedrock of paleochannels (geologically ancient river courses) that has been heavily impacted by battering with fluvial detrital loads in places of high kinetic energy, such as ancient rapids. It can even occur on the transported cobbles and boulders found deposited in such palaeochannels.
On glacially abraded pavements of quartzite, caused by the tribological action of the lithic load of ancient glaciers.
In the form of whitish sheets of planar or curvi-planar tectonite contained in sandstone that has been subjected to tectonic foliation.
Kinetic energy metamorphosis products are tribological phenomena, caused by the very focused, localized, cumulative effect of kinetic energy on the syntaxial silica (and the voids it contains) that forms the cement of such rocks as sandstones and quartzites. The conversion to tectonite does not appear to be reversible, and the high resistance of that product to weathering processes protects the parent rock it conceals from both granular and mass exfoliation. Its susceptibility to dating techniques needs to be explored.
References
Tribology
Kinetic energy | Kinetic energy metamorphosis | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 758 | [
"Tribology",
"Mechanical quantities",
"Physical quantities",
"Kinetic energy",
"Materials science",
"Surface science",
"Mechanical engineering"
] |
60,466,332 | https://en.wikipedia.org/wiki/Sinusoidal%20plane%20wave | In physics, a sinusoidal plane wave is a special case of plane wave: a field whose value varies as a sinusoidal function of time and of the distance from some fixed plane. It is also called a monochromatic plane wave, with constant frequency (as in monochromatic radiation).
Basic representation
For any position in space and any time , the value of such a field can be written as
where is a unit-length vector, the direction of propagation of the wave, and "" denotes the dot product of two vectors. The parameter , which may be a scalar or a vector, is called the amplitude of the wave; the coefficient , a positive scalar, its spatial frequency; and the adimensional scalar , an angle in radians, is its initial phase or phase shift.
The scalar quantity gives the (signed) displacement of the point from the plane that is perpendicular to and goes through the origin of the coordinate system. This quantity is constant over each plane perpendicular to .
At time , the field varies with the displacement as a sinusoidal function
The spatial frequency is the number of full cycles per unit of length along the direction .
For any other value of , the field values are displaced by the distance in the direction . That is, the whole field seems to travel in that direction with velocity .
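Assuming the conventional symbols F (field value), A (amplitude), \hat{n} (unit propagation direction), \nu (spatial frequency), c (propagation speed) and \varphi (initial phase), none of which are fixed by the text above, the wave described in this section can be sketched as

F(\vec{x}, t) = A \cos\!\left( 2\pi\nu \, (\vec{x}\cdot\hat{n} - c\,t) + \varphi \right),

so that at fixed t the field is sinusoidal in the displacement \vec{x}\cdot\hat{n}, and the whole pattern translates along \hat{n} with speed c.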
For each displacement , the moving plane perpendicular to at distance from the origin is called a wavefront. This plane lies at distance from the origin when , and travels in the direction also with speed ; and the value of the field is then the same, and constant in time, at every one of its points.
A sinusoidal plane wave could be a suitable model for a sound wave within a volume of air that is small compared to the distance from the source (provided that there are no echoes from nearby objects). In that case, the field would be a scalar field, the deviation of air pressure at each point and time away from its normal level.
At any fixed point , the field will also vary sinusoidally with time; it will be a scalar multiple of the amplitude , between and
When the amplitude is a vector orthogonal to , the wave is said to be transverse. Such waves may exhibit polarization, if can be oriented along two non-collinear directions. When is a vector collinear with , the wave is said to be longitudinal. These two possibilities are exemplified by the S (shear) waves and P (pressure) waves studied in seismology.
The formula above gives a purely "kinematic" description of the wave, without reference to whatever physical process may be causing its motion. In a mechanical or electromagnetic wave that is propagating through an isotropic medium, the vector of the apparent propagation of the wave is also the direction in which energy or momentum is actually flowing. However, the two directions may be different in an anisotropic medium.(See also: Wave vector#Direction of the wave vector.)
Alternative representations
The same sinusoidal plane wave above can also be expressed in terms of sine instead of cosine using the elementary identity
where . Thus the value and meaning of the phase shift depends on whether
the wave is defined in terms of sine or cosine.
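A natural candidate for the elementary identity mentioned above, in the same assumed notation, is the quarter-period shift between cosine and sine:

\cos\theta = \sin\!\left(\theta + \tfrac{\pi}{2}\right),
\qquad\text{so}\qquad
A\cos\!\left(2\pi\nu(\vec{x}\cdot\hat{n} - c\,t) + \varphi\right) = A\sin\!\left(2\pi\nu(\vec{x}\cdot\hat{n} - c\,t) + \varphi + \tfrac{\pi}{2}\right),

meaning a sine-based definition carries an initial phase larger by \pi/2 than the cosine-based one.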
Adding any integer multiple of to the initial phase has no effect on the field. Adding an odd multiple of has the same effect as negating the amplitude . Assigning a negative value for the spatial frequency has the effect of reversing the direction of propagation, with a suitable adjustment of the initial phase.
The formula of a sinusoidal plane wave can be written in several other ways:
Complex exponential form
A plane sinusoidal wave may also be expressed in terms of the complex exponential function
where is the base of the natural exponential function, and is the imaginary unit, defined by the equation . With those tools, one defines the complex exponential plane wave as
where are as defined for the (real) sinusoidal plane wave.
This equation gives a field whose value is a complex number, or a vector with complex coordinates. The original wave expression is now simply the real part,
To appreciate this equation's relationship to the earlier ones, below is this same equation expressed using sines and cosines. Observe that the first term equals the real form of the plane wave just discussed.
The complex form of the plane wave introduced above can be simplified by using a complex-valued amplitude in place of the real-valued amplitude.
Specifically, since the complex form
one can absorb the phase factor into a complex amplitude by letting , resulting in the more compact equation
While the complex form has an imaginary component, after the necessary calculations are performed in the complex plane, its real value (which corresponds to the wave one would actually physically observe or measure) can be extracted giving a real valued equation representing an actual plane wave.
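A minimal sketch of this compact form, using the same assumed symbols plus a complex amplitude C and a complex-valued field U (both names introduced here only for illustration):

C = A\, e^{i\varphi}, \qquad U(\vec{x}, t) = C\, e^{\,i\, 2\pi\nu (\vec{x}\cdot\hat{n} - c\,t)}, \qquad F(\vec{x}, t) = \operatorname{Re}\, U(\vec{x}, t),

and expanding the real part recovers the cosine expression A\cos\!\left(2\pi\nu(\vec{x}\cdot\hat{n} - c\,t) + \varphi\right).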
The main reason one would choose to work with complex exponential form of plane waves is that complex exponentials are often algebraically easier to handle than the trigonometric sines and cosines. Specifically, the angle-addition rules are extremely simple for exponentials.
Additionally, when using Fourier analysis techniques for waves in a lossy medium, the resulting attenuation is easier to deal with using complex Fourier coefficients. If a wave is traveling through a lossy medium, the amplitude of the wave is no longer constant, and therefore the wave is strictly speaking no longer a true plane wave.
In quantum mechanics the solutions of the Schrödinger wave equation are by their very nature complex-valued and in the simplest instance take a form identical to the complex plane wave representation above. The imaginary component in that instance however has not been introduced for the purpose of mathematical expediency but is in fact an inherent part of the “wave”.
In special relativity, one can utilize an even more compact expression by using four-vectors.
Thus,
becomes
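One conventional four-vector shorthand, assuming a four-wavevector K, a four-position X and the metric signature (+, -, -, -) (notational choices made here, not taken from the text), is

K^{\mu} = \left(\tfrac{\omega}{c}, \vec{k}\right), \quad X^{\mu} = (c\,t, \vec{x}), \quad \vec{k} = 2\pi\nu\,\hat{n}, \quad \omega = 2\pi\nu c, \qquad K_{\mu}X^{\mu} = \omega t - \vec{k}\cdot\vec{x},

so that the whole plane wave can be written as F = \operatorname{Re}\!\left[ C\, e^{-i K_{\mu} X^{\mu}} \right] with C = A e^{i\varphi}.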
Applications
The equations describing electromagnetic radiation in a homogeneous dielectric medium admit as special solutions that are sinusoidal plane waves. In electromagnetism, the field is typically the electric field, magnetic field, or vector potential, which in an isotropic medium is perpendicular to the direction of propagation . The amplitude is then a vector of the same nature, equal to the maximum-strength field. The propagation speed will be the speed of light in the medium.
The equations that describe vibrations in a homogeneous elastic solid also admit solutions that are sinusoidal plane waves, both transverse and longitudinal. These two types have different propagation speeds, that depend on the density and the Lamé parameters of the medium.
The fact that the medium imposes a propagation speed means that the parameters and must satisfy a dispersion relation characteristic of the medium. The dispersion relation is often expressed as a function, . The ratio gives the magnitude of the phase velocity, and the derivative gives the group velocity. For electromagnetism in an isotropic medium with index of refraction , the phase velocity is , which equals the group velocity if the index is not frequency-dependent.
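As a brief illustration of the quantities just named, writing the dispersion relation as an assumed function \omega(k) of angular frequency \omega and wavenumber k:

v_{p} = \frac{\omega}{k}, \qquad v_{g} = \frac{d\omega}{dk}, \qquad v_{p} = \frac{c}{n} \text{ for light in a medium of refractive index } n,

with v_{p} = v_{g} when n does not depend on frequency.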
In linear uniform media, a general solution to the wave equation can be expressed as a superposition of sinusoidal plane waves. This approach is known as the angular spectrum method. The form of the planewave solution is actually a general consequence of translational symmetry. More generally, for periodic structures having discrete translational symmetry, the solutions take the form of Bloch waves, most famously in crystalline atomic materials but also in photonic crystals and other periodic wave equations. As another generalization, for structures that are only uniform along one direction (such as a waveguide along the direction), the solutions (waveguide modes) are of the form multiplied by some amplitude function . This is a special case of a separable partial differential equation.
Polarized electromagnetic plane waves
Represented in the first illustration toward the right is a linearly polarized, electromagnetic wave. Because this is a plane wave, each blue vector, indicating the perpendicular displacement from a point on the axis out to the sine wave, represents the magnitude and direction of the electric field for an entire plane that is perpendicular to the axis.
Represented in the second illustration is a circularly polarized, electromagnetic plane wave. Each blue vector indicating the perpendicular displacement from a point on the axis out to the helix, also represents the magnitude and direction of the electric field for an entire plane perpendicular to the axis.
In both illustrations, along the axes is a series of shorter blue vectors which are scaled down versions of the longer blue vectors. These shorter blue vectors are extrapolated out into the block of black vectors which fill a volume of space. Notice that for a given plane, the black vectors are identical, indicating that the magnitude and direction of the electric field is constant along that plane.
In the case of the linearly polarized light, the field strength from plane to plane varies from a maximum in one direction, down to zero, and then back up to a maximum in the opposite direction.
In the case of the circularly polarized light, the field strength remains constant from plane to plane but its direction steadily changes in a rotary type manner.
Not indicated in either illustration is the electric field’s corresponding magnetic field which is proportional in strength to the electric field at each point in space but is at a right angle to it. Illustrations of the magnetic field vectors would be virtually identical to these except all the vectors would be rotated 90 degrees about the axis of propagation so that they were perpendicular to both the direction of propagation and the electric field vector.
The ratio of the amplitudes of the electric and magnetic field components of a plane wave in free space is known as the free-space wave-impedance, equal to 376.730313 ohms.
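Assuming E and H denote the amplitudes of the electric and magnetic fields of the plane wave, the quoted impedance is conventionally written as

Z_{0} = \frac{E}{H} = \mu_{0} c = \sqrt{\frac{\mu_{0}}{\varepsilon_{0}}} \approx 376.73\ \Omega,

where \mu_{0} and \varepsilon_{0} are the vacuum permeability and permittivity.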
See also
Angular spectrum method
Collimated beam
Plane waves in a vacuum
Plane-wave expansion
Rectilinear propagation
Wave equation
References
J. D. Jackson, Classical Electrodynamics (Wiley: New York, 1998).
L. M. Brekhovskikh, "Waves in Layered Media", Series: Applied Mathematics and Mechanics, Vol. 16 (Academic Press, 1980).
Wave mechanics | Sinusoidal plane wave | [
"Physics"
] | 2,056 | [
"Wave mechanics",
"Waves",
"Physical phenomena",
"Classical mechanics"
] |
60,472,185 | https://en.wikipedia.org/wiki/Dual%20circadian%20oscillator%20model | In the field of chronobiology (the study of circadian rhythms), the dual circadian oscillator model refers to a model of entrainment (where rhythmic events in an organism match oscillation in the environment) initially proposed by Colin Pittendrigh and Serge Daan. The dual oscillator model suggests the presence of two coupled circadian oscillators: E (evening) and M (morning). The E oscillator is responsible for entraining the organism’s evening activity (activity offset) to dusk cues when the daylight fades, while the M oscillator is responsible for entraining the organism’s morning activity (activity onset) to dawn cues, when daylight increases. The E and M oscillators operate in an antiphase relationship. As the timing of the sun's position fluctuates over the course of the year, the oscillators' periods adjust accordingly. Other oscillators, including seasonal oscillators, have been found to work in conjunction with circadian oscillators in order to time different behaviors in organisms such as fruit flies.
Discovery
In 1966, Jürgen Aschoff, a German chronobiologist, observed that some animals exhibited two bouts of activity per day, one in the morning and one in the evening. These bouts of activity are defined by the animals' anticipation of the lights turning on or off. In 1976, Colin Pittendrigh and Serge Daan, two chronobiologists, first proposed a dual-oscillator model for nocturnal rodents as the mechanism for these E and M bouts of activity. The model hypothesized the presence of two separate oscillators that have opposite dependence on light intensity. Pittendrigh and Daan found that the M oscillator is synchronized to dawn and experiences acceleration from light, meaning that the period decreases with each subsequent cycle. The E oscillator, on the other hand, is synchronized to dusk and experiences deceleration from light, meaning that the period increases for each subsequent cycle. They postulated that the E&M model had an enhanced ability to adjust the circadian rhythm to the season and changes in day length.
Pittendrigh and Daan found several limitations in the model of a single oscillator controlling sleep/wake behavior that led them to develop the dual oscillator model. The first key finding was the splitting behavior in the locomotor activity of hamsters under constant high intensity light conditions. The two separate bouts of activity indicated that more than one oscillator may be controlling locomotor activity. They also observed transient changes in the phase of sleep/wake behavior in Drosophila melanogaster following temperature changes. The lack of a new steady state rhythm suggested the presence of another temperature sensitive oscillator downstream of the known oscillator. These findings and others led Pittendrigh and Daan to propose the dual circadian oscillator model.
Later researchers examined slices of the hamster hypothalamus sectioned horizontally through the optic chiasm, in addition to the standard vertical (coronal) sections. Whereas the coronal sections always gave a single plateau of increased activity that lasted 7 hours and occurred once a day, the horizontal sections generated a completely unexpected output: two peaks, each lasting about 4 hours, that were clearly separable.
Following Pittendrigh and Daan's behavioral characterization of the dual oscillator model, it took several years before scientists discovered the mechanistic basis of it. In 2000, Anita Jagota, Horacio de la Iglesia, and William J. Schwartz were the first group to show two distinct peaks of electrical activity in the mammalian suprachiasmatic nucleus (SCN) after studying horizontally sectioned hamster hypothalamus. Further experiments need to be conducted to validate that the two peaks represent the E and M oscillators. In 2004, Brigette Grima and Dan Stoleru independently investigated E and M activity peaks in Drosophila melanogaster (fruit flies) using different gene expression manipulations. They found two separate circadian neuron groups control the E and M peaks of activity in Drosophila melanogaster. They also found the lateral neurons are responsible for the morning and evening peaks. In 2007, Stoleru found that the M cells dominate the circadian rhythm on short days while the E cells dominate the rhythm on long days. This alternating domination pattern allows the circadian rhythms in animals to persist on both short and long days, respectively. Stoleru's work significantly contributed to the field of chronobiology by revealing the mechanism through which animals are able to adapt to environmental changes such as seasonal variations in daylight length. His research has also provided insights into the circadian clock's role in Seasonal Affective Disorder (SAD) and other related mood disorders that are responsive to light therapy.
Background
Distinctive features of the E&M dual oscillator model include alpha compression and the presence of an intermediate τ value. Each oscillator has a unique τ (tau) which is the period of an organism's sleep/wake cycle when they are in constant conditions with no environmental cues, also known as free-running. When coupled, these oscillators produce a distinct, observed free-running period known as an intermediate τ, which is a function of the E and M oscillator’s respective τ values. Alpha compression, a term coined by Jürgen Aschoff, refers to the observation that under constant light conditions, the length of activity of nocturnal organisms shortens. The organism’s duration of activity is called the alpha phase and typically measured in terms of locomotor activity. Alternatively, the alpha phase of diurnal organisms lengthens under constant light; this phenomenon is known as alpha expansion. Furthermore, alpha compression refers to the decreasing duration of activity before splitting occurs. Splitting is defined as the process by which one bout of activity separates into two, distinct bouts of activity, each free-running at a τ independent of the other. Pittendrigh observed that the M oscillator ran shorter after splitting compared to the E oscillator's relatively longer intrinsic period as indicated in the actogram, a diagram of the organism's daily rest and activity phases. After splitting occurs, the oscillators responsible for the two distinct activity bouts either recouple into one with an intermediate τ or stabilize at their new, separate τ values.
E and M cells possess different capabilities to control behavior and respond to light through either accelerating or decelerating their individual, internal circadian clock speeds. The phase angle of entrainment, or the relationship between the timing of the biological clock and the timing of the external time cue, of each cell varies depending on the amount of light in the environment. Greater amounts of light lead to a greater phase angle of entrainment. The amount of light and pigment dispersing factor (PDF) control the acceleration and deceleration of the speed of M and E cells, respectively. Furthermore, the coupling of the E and M oscillators increases as the phase angle of entrainment decreases, displaying an inverse relationship between the duration of light and the coupling of the two oscillators. This phenomenon also shows the importance of E and M cells for adapting the activity of an organism to different photoperiods.
Evidence in single-celled organisms
In single-celled dinoflagellates such as Gonyaulax polyedra, researchers have found evidence of circadian rhythms in bioluminescence, photosynthetic capacity, time of cell division, and enzyme synthesis rates. Bioluminescence can be expressed through either independent flashing or a continual glow. Both modes of bioluminescent expression are rhythmic and peak at different times. Researchers have hypothesized that the two rhythms operate under distinct pacemakers, as they appear to peak at different times under varying conditions (such as long vs. short days or a 23-hour vs. a 24-hour entraining period). Under constant conditions, the two rhythms in bioluminescence free-run with different periods, suggesting a dual-oscillator model, and they also appear to be coupled. However, the molecular mechanism of that coupling is not yet known.
Evidence in Drosophila melanogaster
Drosophila melanogaster, or fruit flies, show diurnal rhythms in locomotor activity that are crepuscular, meaning they exhibit both morning and evening peaks in activity that align with dawn and dusk. Both these bouts of activity are intrinsic and observed in constant darkness, although the morning peak is more pronounced during a light-dark cycle. Groups of lateral clock neurons in the Drosophila brain have been found to contain neurons responsible for these morning and evening peaks, indicating they could be the source for the M and E oscillators. Independent studies have found that the ventrolateral neurons anticipate lights-on while the dorsolateral neurons anticipate lights-off. Further studies have narrowed down the morning anticipation to four small ventrolateral neurons, which are the master clock during constant darkness and express pigment-dispersing factor (PDF). PDF is involved in the molecular coupling mechanism of M and E oscillators: M oscillator cells express PDF and entrain to dawn, while E cells receive PDF and become phase delayed, entraining to dusk.
However, other studies have shown that flies lacking lateral neurons still show residual morning and evening peaks, indicating that dorsal neurons play a role in evening and morning oscillations. Researchers have shown that rhythms of core clock proteins such as PER are the same in both morning and evening cells. During long days and high temperatures, scientists have observed phase advances in morning cells and phase delays in evening cells of molecular rhythms, potentially explaining how these cells determine different bouts of activity. However, there are other clock neurons found in fruit flies that do not function as E or M cells. In addition, other studies have found results inconsistent with the traditional dual oscillator model, suggesting a network of oscillators instead. These results have led some researchers to propose a plastic oscillator model in which different neurons can assume the role of E or M when needed.
Clock neurons in the fly brain entrain to external light stimuli via a cryptochrome (CRY) response pathway. In response to light exposure, CRY binds to the timeless (TIM) protein, ultimately leading to the degradation of TIM within the clock neuron and delaying the internal circadian oscillation of period (PER) and TIM proteins, meaning their onset and offset of activity occur later in the day.
The E and M oscillators in Drosophila are also theorized to have different temperature sensitivities. A group of chronobiologists found that the Drosophila morning activity peak synchronized to temperature increases in the morning, whereas the evening activity peak synchronized to the decrease in evening temperature. They also showed that the phase of the evening peak depended on temperature level, as the evening peak in activity was delayed at high temperatures. However, the morning peak was not influenced significantly by changes in temperature, suggesting that the E and M oscillators have different sensitivities to changes in temperature levels.
Circadian oscillators also work with seasonal oscillators to time behaviors such as daily activity throughout the year. For example, the expression of dper (the Drosophila per gene) and tim (the timeless gene) varies with temperature and length of day. Colder, shorter days increase the accumulation of mRNA transcripts for dper and tim, affecting the timing of evening activity and midday inactivity.
Lateral neurons
The lateral neurons (LN) are Drosophila's main clock neurons. When circadian oscillation was inhibited in clock neurons other than LNs, flies still maintained rhythmic activity in constant conditions. When this same inhibition was performed in LN cells, however, flies did not show rhythmic activity, demonstrating that LN cells are necessary for synchronized circadian rhythms in flies. LN neurons can be divided into three subgroups (LNd, s-LNv, l-LNv), which each perform different functions. Ablation of the s-LNv cells caused a loss of the morning peak of fly activity, suggesting that this cell group functions as a morning oscillator. Meanwhile, ablation of the LNd cells caused a loss of the evening peak, which suggests that this cell group functions as the evening oscillator. Furthermore, light inhibited s-LNv cell outputs but excited LNd cell outputs. These two cell types regulate circadian control under opposite conditions, providing further evidence for distinct morning and evening oscillator cells.
S-LNv cells play another vital role in maintaining the circadian clock within flies. The majority of these cells produce pigment dispersing factor (PDF), a neurotransmitter that helps coordinate the various clock neurons in the fly brain. These s-LNv cells within the clock network are required for synchronizing the different clock neurons in the absence of light.
Dorsal neurons
Dorsal neurons (DN) are several other groups of clock neurons within the fly brain. While DN cells do contribute to circadian control in light-dark cycles, they are not sufficient to produce rhythmicity in constant conditions. Therefore, these cells are not the primary morning or evening circadian clocks within the fly brain. Research has shown, however, that several subsets of DN cells can contribute to morning and evening peaks in activity.
When in constant dark and manipulated to overexpress the shaggy (sgg) gene, the Drosophila ortholog of GSK3, morning oscillator cells influenced the rate and rhythm of TIM transcription in evening oscillator cells. In constant light conditions, Drosophila overexpressing sgg in E cells remained rhythmic, while M cells became arrhythmic, like their WT counterparts. s-LNv M cells cannot autonomously drive rhythmicity under constant light conditions and the E cells lacking the clock protein CRY that can independently drive rhythmicity in constant light cannot do so in constant darkness. Drosophila’s clock is thought to consist of CRY-positive, s-LNv M cells, and the CRY-negative E cells.
Evidence in Neurospora crassa
Neurospora crassa, a type of fungus, has shown circadian rhythms in conidiation patterns when observed under constant darkness. These rhythms appear to be under the control of a transcription-translation feedback loop (TTFL). The frequency (frq) gene, first discovered by Dr. Jerry Feldman, appears to control a TTFL which uses white collar 1 (WC-1) to respond to light cues. WC-1 then dimerizes with white collar 2 (WC-2) to form the white collar complex (WCC), which is a positive regulator of frq. The WCC binds to the frq promoter to enhance its transcription, increasing levels of FRQ protein. FRQ proteins, once phosphorylated, inhibit the WCC through a negative feedback mechanism. However, researchers have discovered rhythms in Neurospora cells lacking FRQ or WC-1 and WC-2. The oscillators underlying these rhythms are collectively referred to as FLOs (FRQ-less oscillators). One FLO that has been investigated further is the WC-FLO (WC-dependent FLO, specifically the ccg-16 gene). The discovered rhythm in mRNA accumulation required functional WC-1 and WC-2, which researchers suggested might indicate its coupling somehow to the FRQ/WCC oscillator loop. The WC-FLO can function independently, but the dependence of both the FRQ-based oscillator and the WC-FLO on the WC proteins suggested to researchers that the two oscillators might be coupled by the WC proteins. This coupling is analogous to the situation in Drosophila; researchers have proposed the model that the Neurospora M oscillator would be the light-sensitive FRQ/WCC oscillator that controls morning clock genes, while the E oscillator would be the WC-FLO oscillator that controls evening clock genes. This dual oscillator model would include WC-FLO receiving input both directly from the environment via WC-1 and indirectly through the FRQ/WCC oscillator, which is sensitive to both light and temperature.
Evidence in mammals
According to the dual oscillator model, there are two oscillating circadian clocks located in the suprachiasmatic nucleus (SCN) of the mammalian hypothalamus. Their circadian oscillations are regulated by a negative feedback loop. The protein dimer CLOCK/BMAL1 regulates the products of clock genes Per and Cry, which, when present in high quantities, repress their own transcription. Other hypotheses for the existence of E and M oscillators in mammals involve single-cell dual oscillator models. Within a mammalian cell, there exist redundant copies of several clock genes (per1 and cry1; per2 and cry2). The hypothesis states that each set of these genes would be sufficient to produce endogenous oscillation in cell function; however, each gene set responds differently to light and temporal cues. The per1/cry1 oscillator (morning oscillator) is energized by light and tracks dawn. Conversely, the per2/cry2 oscillator is energized by darkness and tracks dusk.
Rodents
Significant progress has been made in chronobiologists’ understanding of the neural and molecular mechanisms underlying the dual oscillator model and function in mice. Mice are nocturnal animals whose activity is compressed under long photoperiods and extended under short photoperiods. The dual oscillator model that has been developed for mice and other nocturnal rodents posits that two separate circadian oscillators drive the organism’s activity in their unique responses to light. One possibility is that each mouse SCN cell contains both an E and an M oscillator. Evidence for this version of the dual oscillator model lie in the respective peaks of Per1, Per2, Cry1, and Cry2 mRNAs, demonstrating different patterns of oscillation. In reference to the Per gene, Per1 mRNA peaks around circadian time (CT) 4, while Per2 mRNA peaks six hours later at CT10. Circadian time (CT) indicates the amount of time after the start of the animal's subjective day. Similarly, Cry1 mRNA has been shown to peak earlier than Cry2 mRNA. These differences in oscillation support the interpretation that the Per1/Cry1 negative feedback loop represents the timing of the M oscillator, while the Per2/Cry2 feedback loop represents the timing of the E oscillator. Furthermore, the dual oscillator model predicts that upon illumination, the M oscillator will accelerate while the E oscillator decelerates. This proposed pattern of oscillation, as measured in Per and Cry mRNA levels, has been observed in multiple experiments in mice, and suggests that both E and M oscillators are present in each SCN cell.
Another possibility is that the mixture of neurons that make up the SCN each contain either an E oscillator or an M oscillator. Evidence for this model comes from an experiment conducted in Syrian hamsters in which slices of SCN cut in the horizontal plane oscillated with distinct morning and evening peaks in electrical activity. These results suggest that E and M oscillators may be located in the rostrocaudal plane of the SCN. The distinct Per2 mRNA oscillations in sections from both the rostral and caudal regions of the SCN (caudal Per2 peaks around lights on, rostral Per2 peaks around lights off) indicate that an M oscillator may be present in the caudal SCN and an E oscillator may be present in the rostral SCN. Similar phase differences in Per1 mRNA oscillations have been observed between the rostral and caudal SCN in mice, suggesting the presence of separate E oscillator neurons and M oscillator neurons in the mouse SCN. In addition, rats exposed to a 22-hour light-dark cycle show two distinct locomotor rhythms with distinct periods. In these rats, the dorsal and ventral SCN had different periods in the expression of clock genes, suggesting two oscillators in different regions of the SCN.
Humans
Evidence for the dual oscillator model in humans is related to changes in melatonin secretion. A mechanism previously proposed for rodents posits that scotoperiod, the duration of night, can induce changes in nocturnal melatonin secretion, and that this results from an adjustment in the timing of two circadian oscillators. Similarly, duration of nocturnal melatonin secretion in humans has been shown to respond to changes in scotoperiod, and changes in nocturnal secretion duration result mainly from the time of morning secretion offset. These results also suggest that the dual oscillator model may explain the human regulation of melatonin secretion, as well as other functions. Furthermore, bimodal patterns of melatonin levels have been observed, but mostly in women with seasonal pathology. These observed morning and evening peaks in plasma melatonin levels provide physical substrates for, and adds to the plausibility of, the dual oscillator model in humans. Additional work with human melatonin secretion has shown that its onset and offset (occurring in the evening and morning, respectively) have opposing effects on phase following melatonin administration; morning melatonin secretion enhanced morning light exposure's effect on advancing secretion onset.
The alternating domination by the E and M oscillators depending on daylight duration produces seasonal changes in internal, biological processes like reproduction. Human conception rates increase at certain times of the year, a pattern that also varies with how developed the country is. Melatonin secretion levels, previously shown to potentially be affected by the dual oscillator, can have behavioral impacts as well. Research on seasonal affective disorder (SAD) has shown that men with SAD have longer melatonin secretion in the winter than healthy men; however, women with SAD vs. without SAD showed opposite trends. While there have been conflicting findings from circadian research on SAD, reliable studies have found evidence for circadian phase delays in SAD. The corresponding phase-delay hypothesis suggests that manipulating timing of light exposure could counteract the phase delay, impacting the dual oscillator system and producing a therapeutic effect.
Evidence in other organisms
There is no substantial evidence for distinct morning and evening oscillator cells in plants, fungi, or cyanobacteria. However, several single-cell dual oscillator models exist, providing alternative models to explain responses to changes in light stimuli. In systems of multiple oscillators, there are often "pacemaker" and "slave" oscillators in which the slave oscillator is entrained by the pacemaker and does not necessarily have all the circadian features of a central oscillator. For example, a proposed alternative to the traditional dual-coupled oscillator model in cyanobacteria's Kai protein system is a damped oscillator containing an autonomous post-transcriptional oscillator (PTO). While the damped oscillator regulates the TTFL, the PTO would act as a central circadian oscillator.
Other alternatives to the dual oscillator model include oscillators which contain feedback loops. Studies in Arabidopsis thaliana have shown that its plant circadian clock is composed of multiple interlocking TTFLs which include transcription factors whose expressions peak in the evening and morning.
Dual coupled oscillators have been discovered in the Leucophaea maderae (cockroach) optic lobes and the Aplysia or Bulla (marine molluscs) eyes.
Limitations of the Dual Oscillator model
The E&M oscillator model is one of the most prominent models in chronobiology. While useful to explain flies' adjustments between short and long days, the model is limited by its simplicity.
Some studies have shown that E cells can each drive multiple activity components without M cells. In 2009, experiments were performed in Drosophila with period gene expression restricted to the 5th s-LNv and three LNd lateral neurons, cells thought to belong to the E oscillator. Ablation of PDF-positive s-LNv cells did not remove the M peak as expected. Despite limited Period protein-expressing cells, under low light conditions, the flies still expressed normal bimodal activity patterns, with up to 3 free-running components. They differed only in the phase of the E and M peaks. Two LNds advanced upon moonlight, acting as an M oscillator, and the 5th s-LNv and one LNd delayed upon moonlight, acting as E. The researchers suggested that M and E characteristics could be flexible to environmental conditions and should not be interpreted strictly or restricted to certain clock neurons.
In certain conditions, M cells have also been found to drive both M and E activity peaks at high light intensity and temperature. Researchers reasoned that the cells studied were not solely M oscillators or varying environmental conditions influence their behavior to resemble either M or E cells. Other more complex models being developed include a multi-oscillator system composed of flexible M and E cells or a clock neuron network without specific M and E assignments.
See also
Chronobiology
Circadian rhythm
Colin Pittendrigh
Cryptochrome
Drosophila melanogaster
Period (gene)
Suprachiasmatic nucleus (SCN)
Jürgen Aschoff
Pigment dispersing factor (PDF)
References
Circadian rhythm | Dual circadian oscillator model | [
"Biology"
] | 5,461 | [
"Behavior",
"Sleep",
"Circadian rhythm"
] |
60,474,207 | https://en.wikipedia.org/wiki/Plastid%20evolution | A plastid is a membrane-bound organelle found in plants, algae, and other eukaryotic organisms that contributes to the production of pigment molecules. Most plastids are photosynthetic, thus leading to color production and to energy storage or production. There are many types of plastids in plants alone, but all plastids can be classified based on the number of times they have undergone endosymbiotic events. Currently, there are three types of plastids: primary, secondary, and tertiary. Endosymbiosis is thought to have led to the evolution of today's eukaryotic organisms, although the timeline is highly debated.
Primary endosymbiosis
It is widely accepted within the scientific community that the first plastid derived from the engulfment of a cyanobacterial ancestor by a eukaryotic organism. Evidence supporting this belief is found in many morphological similarities, such as the presence of two plasma membranes. It is thought that the first membrane belonged to the cyanobacterial ancestor. During phagocytosis, a vesicle engulfs a particle with its plasma membrane to allow safe import. When the cyanobacterium was engulfed, it avoided digestion, which led to the double membrane found in primary plastids. However, in order to live in symbiosis, the eukaryotic cell that engulfed the cyanobacterium must now provide proteins and metabolites to maintain the functions of the bacterium in exchange for energy. Thus, an engulfed cyanobacterium must give up some of its genetic material to allow for endosymbiotic gene transfer to the eukaryote, a phenomenon that is thought to be extremely rare due to the "learned nature" of the interactions that must occur between the cells to allow for processes such as gene transfer, protein localization, excretion of highly reactive metabolites, and DNA repair. This would mean a reduction in genome size for the cyanobacterium, but also an increase in cyanobacterial genes within the eukaryotic genome. Synechocystis sp. strain PCC6803 is a unicellular freshwater cyanobacterium with a 3.9 Mb genome encoding 3,725 genes. However, most plastids rarely exceed 200 protein-coding genes. It has been proposed that the closest living relative of the ancestral engulfed cyanobacterium is Gloeomargarita lithophora.
Separately, roughly 90–140 million years ago, primary endosymbiosis happened again in the amoeboid Paulinella with a cyanobacterium in the genus Prochlorococcus. This independently acquired plastid is often called a chromatophore rather than a chloroplast.
A 2010 study sequenced the genome of a cyanobacterium living extracellularly in symbiosis with the water fern Azolla filiculoides. Endosymbiosis was supported by the fact that the cyanobacterium was unable to grow autonomously, and by the observation that the cyanobacterium is vertically transferred between succeeding generations. Analysis of the cyanobacterial genome revealed that over 30% of the genome was made up of pseudogenes. In addition, roughly 600 transposable elements were found within the genome. The pseudogenes included dnaA, DNA repair genes, and glycolysis and nutrient uptake genes. dnaA is essential to the initiation of DNA replication in prokaryotic organisms; thus Azolla filiculoides is thought to provide nutrients and factors required for DNA replication in exchange for fixed nitrogen that is not readily available in water. Although the cyanobacterium had not been completely engulfed by the eukaryotic organism, the relationship is thought to demonstrate a precursor to endosymbiotic primary plastids.
Secondary endosymbiosis
Secondary endosymbiosis results from the engulfment of an organism that has already undergone primary endosymbiosis. Thus, four plasma membranes are formed: the first originating from the cyanobacterium, the second from the eukaryote that engulfed the cyanobacterium, the third from the plasma membrane of that primary endosymbiotic eukaryote, and the fourth from the eukaryote that engulfed it. Chloroplasts contain 16S rRNA and 23S rRNA, which by definition are found only in prokaryotes. Chloroplasts and mitochondria also replicate semi-autonomously, outside of the cell cycle replication system, via binary fission. Consistent with the theory, decreased genome size within the organelle and gene integration into the nucleus occurred. Chloroplast genomes encode 50–200 proteins, compared to the thousands encoded by cyanobacteria. Furthermore, in Arabidopsis, nearly 20% of the nuclear genome originates from cyanobacteria, the widely recognized ancestors of chloroplasts. Recent studies have been able to identify the speed and scale at which chloroplast genes incorporate themselves into the host genome. Using chloroplast transformation, genes encoding spectinomycin and kanamycin resistance were inserted into the DNA of chloroplasts found in tobacco plants. After subjecting the plants to spectinomycin and kanamycin selection, some plants began to tolerate both antibiotics. Roughly 1 in every 5 million cells on the tobacco leaves highly expressed the spectinomycin and kanamycin resistance genes. The cells expressing resistance were grown into mature tobacco plants. Once mature, the plants were mated with wild-type plants, and 50% of the progeny expressed the spectinomycin and kanamycin resistance genes. Pollen was thought to be unable to transfer chloroplast DNA in tobacco (which later turned out to be less true than was thought at the time), leading researchers to believe that the genes had been incorporated into the tobacco nuclear genome. Furthermore, 11 kb of integrated chloroplast DNA was introduced into the host genome, transferring more DNA than previously predicted, and at a faster rate than previously predicted.
Tertiary endosymbiosis
Although previous endosymbiotic events resulted in the increase in the number of membranes, tertiary plastids can have 3-4 membranes. The most largely studied tertiary plastids are found in dinoflagellates, where several independent tertiary endosymbiosis events have occurred.
In the groups that contain a haptophyte plastid, these tertiary plastids are believed to have derived from a red-algal lineage that replaced the original secondary plastids. Consistent with the pattern described above of genome-size reduction and incorporation of genes into the host genome, the tertiary plastid genome consists of about 14 genes. These genes are broken down further into small minicircles that contain 1–3 genes. These genomes are circular, like prokaryotic genomes. Further, they only encode atpA, atpB, petB, petD, psaA, psaB, psbA-E, psbI, and 16S and 23S rRNA. These genes encode proteins vital to photosystems I and II, further indicating their cyanobacterial origin. Unusually, the three lineages that contain a haptophyte plastid each acquired their plastid independently.
"Dinotoms" (Durinskia and Kryptoperidinium) have plastids derived from diatoms. These are highly unusual among tertiary endosymbioants as the symbioant is not reduced to a mere plastid: instead, it still has a DNA-containing nucleus, a large volume of cytoplasm, and even its own DNA-containing mitochondria.
Two previously undescribed dinoflagellates ("MGD" and "TGD") contain a green algal endosymbiont that has a nucleus and is most closely related to Pedinomonas.
References
Endosymbiotic events
Photosynthesis
Plastids | Plastid evolution | [
"Chemistry",
"Biology"
] | 1,704 | [
"Symbiosis",
"Endosymbiotic events",
"Photosynthesis",
"Plastids",
"Biochemistry"
] |
58,982,979 | https://en.wikipedia.org/wiki/Strand%20sort | Strand sort is a recursive sorting algorithm that sorts items of a list into increasing order. It has O(n²) worst-case time complexity, which occurs when the input list is reverse sorted: every strand then contains a single element, so n strands must each be merged into the growing solution list. It has a best-case time complexity of O(n), which occurs when the input is already sorted and a single strand captures the whole list.
The algorithm first moves the first element of a list into a sub-list. It then compares the last element in the sub-list to each subsequent element in the original list. Once there is an element in the original list that is greater than the last element in the sub-list, the element is removed from the original list and added to the sub-list. This process continues until the last element in the sub-list is compared to the remaining elements in the original list. The sub-list is then merged into a new list. Repeat this process and merge all sub-lists until all elements are sorted. This algorithm is called strand sort because there are strands of sorted elements within the unsorted elements that are removed one at a time. This algorithm is also used in J Sort for fewer than 40 elements.
Example
This example is based on the description of the algorithm provided in the book IT Enabled Practices and Emerging Management Paradigms.
Step 1: Start with a list of numbers: {5, 1, 4, 2, 0, 9, 6, 3, 8, 7}.
Step 2: Next, move the first element of the list into a new sub-list: sub-list contains {5}.
Step 3: Then, iterate through the original list and compare each number to 5 until there is a number greater than 5.
1 < 5, so 1 is not added to the sub-list.
4 < 5, so 4 is not added to the sub-list.
2 < 5, so 2 is not added to the sub-list.
0 < 5, so 0 is not added to the sub-list.
9 > 5, so 9 is added to the sub-list and removed from the original list.
Step 4: Now compare 9 with the remaining elements in the original list until there is a number greater than 9.
6 < 9, so 6 is not added to the sub-list.
3 < 9, so 3 is not added to the sub-list.
8 < 9, so 8 is not added to the sub-list.
7 < 9, so 7 is not added to the sub-list.
Step 5: Now there are no more elements to compare 9 to, so merge the sub-list into a new list, called solution-list.
After step 5, the original list contains {1, 4, 2, 0, 6, 3, 8, 7}.
The sub-list is empty, and the solution list contains {5, 9}.
Step 6: Move the first element of the original list into sub-list: sub-list contains {1}.
Step 7: Iterate through the original list and compare each number to 1 until there is a number greater than 1.
4 > 1, so 4 is added to the sub-list and 4 is removed from the original list.
Step 8: Now compare 4 with the remaining elements in the original list until there is a number greater than 4.
2 < 4, so 2 is not added to the sub-list.
0 < 4, so 0 is not added to the sub-list.
6 > 4, so 6 is added to the sub-list and is removed from the original list.
Step 9: Now compare 6 with the remaining elements in the original list until there is a number greater than 6.
3 < 6, so 3 is not added to the sub-list.
8 > 6, so 8 is added to the sub-list and is removed from the original list.
Step 10: Now compare 8 with the remaining elements in the original list until there is a number greater than 8.
7 < 8, so 7 is not added to the sub-list.
Step 11: Since there are no more elements in the original list to compare {8} to, the sub-list is merged with the solution list. Now the original list contains {2, 0, 3, 7}, the sub-list is empty, and the solution-list contains {1, 4, 5, 6, 8, 9}.
Step 12: Move the first element of the original list into sub-list. Sub-list contains {2}.
Step 13: Iterate through the original list and compare each number to 2 until there is a number greater than 2.
0 < 2, so 0 is not added to the sub-list.
3 > 2, so 3 is added to the sub-list and is removed from the original list.
Step 14: Now compare 3 with the remaining elements in the original list until there is a number greater than 3.
7 > 3, so 7 is added to the sub-list and is removed from the original list.
Step 15: Since there are no more elements in the original list to compare {7} to, the sub-list is merged with the solution list. The original list now contains {0}, the sub-list is empty, and solution list contains {1, 2, 3, 4, 5, 6, 7, 8, 9}.
Step 16: Move the first element of the original list into sub-list. Sub-list contains {0}.
Step 17: Since the original list is now empty, the sub-list is merged with the solution list. The solution list now contains {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}. There are now no more elements in the original list, and all of the elements in the solution list have successfully been sorted into increasing numerical order.
Implementation
Since Strand Sort requires many insertions and deletions, it is best to use a linked list when implementing the algorithm. Linked lists require constant time for both insertions and removals of elements using iterators. The time to traverse through the linked list is directly related to the input size of the list. The following implementation is done in Java 8 and is based on the description of the algorithm from the book IT Enabled Practices and Emerging Management Paradigms.
package strandSort;
import java.util.*;
public class strandSort {
static LinkedList<Integer> solList = new LinkedList<Integer>();
static int k = 0;
/**
* This is a recursive Strand Sort method. It takes in a linked list of
* integers as its parameter. It first checks the base case to see if the
* linked list is empty. Then proceeds to the Strand sort algorithm until
* the linked list is empty.
*
* @param origList:
* a linked list of integers
*/
public static void strandSortIterative(LinkedList<Integer> origList) {
// Base Case
if (origList.isEmpty()) {
return;
}
else {
// Create the subList and add the first element of
// The original linked list to the sublist.
// Then remove the first element from the original list.
LinkedList<Integer> subList = new LinkedList<Integer>();
subList.add(origList.getFirst());
origList.removeFirst();
// Iterate through the original list, checking if any elements are
// Greater than the element in the sub list.
int index = 0;
for (int j = 0; j < origList.size(); j++) {
if (origList.get(j) > subList.get(index)) {
subList.add(origList.get(j));
origList.remove(j);
j = j - 1;
index = index + 1;
}
}
// Merge sub-list into solution list.
// There are two cases for this step/
// Case 1: The first recursive call, add all of the elements to the
// solution list in sequential order
if (k == 0) {
for (int i = 0; i < subList.size(); i++) {
solList.add(subList.get(i));
k = k + 1;
}
}
// Case 2: After the first recursive call,
// merge the sub-list with the solution list.
// This works by comparing the greatest element in the sublist (which is always the last element)
// with the first element in the solution list.
else {
int subEnd = subList.size() - 1;
int solStart = 0;
while (!subList.isEmpty()) {
if (subList.get(subEnd) > solList.get(solStart)) {
solStart++;
} else {
solList.add(solStart, subList.get(subEnd));
subList.remove(subEnd);
subEnd--;
solStart = 0;
}
}
}
strandSortIterative(origList);
}
}
public static void main(String[] args) {
// Create a new linked list of Integers
LinkedList<Integer> origList = new LinkedList<Integer>();
// Add the following integers to the linked list: {5, 1, 4, 2, 0, 9, 6, 3, 8, 7}
origList.add(5);
origList.add(1);
origList.add(4);
origList.add(2);
origList.add(0);
origList.add(9);
origList.add(6);
origList.add(3);
origList.add(8);
origList.add(7);
strandSortIterative(origList);
// Print out the solution list
for (int i = 0; i < solList.size(); i++) {
System.out.println(solList.get(i));
}
}
}
References
Sorting algorithms | Strand sort | [
"Mathematics"
] | 2,118 | [
"Order theory",
"Sorting algorithms"
] |
58,983,750 | https://en.wikipedia.org/wiki/Copperas%20works | Copperas works are manufactories where copperas (iron(II) sulfate) is produced from pyrite, often obtained as a byproduct during coal mining, and iron. The history of producing green vitriol, as it was known, goes back hundreds of years in Scotland. In 1814 the wool-producing city of Steubenville, Ohio had seven copperas-producing manufacturers.
Pyrite has been used since classical times to manufacture copperas. Iron pyrite was heaped up and allowed to weather (an example of an early form of heap leaching). The acidic runoff from the heap was then boiled with iron to produce iron sulfate.
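The overall chemistry can be sketched as follows; these are the generally accepted reactions for pyrite weathering and for dissolving scrap iron in the acidic liquor, given here for illustration rather than taken from the sources on copperas works:
2 FeS2 + 7 O2 + 2 H2O → 2 FeSO4 + 2 H2SO4 (weathering of the pyrite heap)
Fe + H2SO4 → FeSO4 + H2 (boiling the acidic runoff with iron)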
Containment of leachate is important due to its toxicity; a fish kill that occurred in the 1890s in the Kanawha River was attributed to copperas solution release from the mines in Cannelton, West Virginia.
The "vitriolic waters of Fahlun" (Falun, Sweden), according to Murray (1844), annually produced "about 600 quintals of green vitriol" (sulfate of iron), as well as a "small quantity of blue vitriol" (sulfate of copper). These may have been obtained through evaporation of the groundwater associated with mines in order to yield the crystalline form of copperas.
References
Iron compounds
Sulfates
Metallurgical processes | Copperas works | [
"Chemistry",
"Materials_science"
] | 284 | [
"Metallurgical processes",
"Metallurgy",
"Sulfates",
"Salts"
] |
58,988,012 | https://en.wikipedia.org/wiki/Methyl%20p-toluate | Methyl p-toluate is the organic compound with the formula CH3C6H4CO2CH3. It is a waxy white solid that is soluble in common organic solvents. It is the methyl ester of p-toluic acid. Methyl p-toluate per se is not particularly important but is an intermediate in some routes to dimethyl terephthalate, a commodity chemical.
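As the methyl ester of p-toluic acid, it corresponds formally to the esterification shown below; this equation illustrates the relationship between the acid and the ester and is not a description of the industrial route:
CH3C6H4CO2H + CH3OH ⇌ CH3C6H4CO2CH3 + H2O (acid-catalyzed esterification)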
References
Methyl esters
Benzoate esters
Commodity chemicals | Methyl p-toluate | [
"Chemistry"
] | 100 | [
"Commodity chemicals",
"Products of chemical industry"
] |
58,990,339 | https://en.wikipedia.org/wiki/Small%20RNA%20sequencing | Small RNA sequencing (Small RNA-Seq) is a type of RNA sequencing, based on next-generation sequencing (NGS) technologies, that isolates and characterizes noncoding RNA molecules in order to evaluate and discover new forms of small RNA and to predict their possible functions. By using this technique, it is possible to discriminate small RNAs from the larger RNA family to better understand their functions in the cell and in gene expression. Small RNA-Seq can analyze thousands of small RNA molecules with high throughput and specificity. Its greatest advantage is the possibility of generating libraries of RNA fragments starting from the whole RNA content of a cell.
Introduction
Small RNAs are noncoding RNA molecules between 20 and 200 nucleotides in length. The term "small RNA" is somewhat arbitrary and is defined loosely by length in comparison with regular RNAs such as messenger RNA (mRNA). Bacterial short regulatory RNAs have previously also been referred to as small RNAs, but they are not related to eukaryotic small RNAs.
Small RNAs include several different classes of noncoding RNAs, depending on their sizes and functions: snRNA, snoRNA, scRNA, piRNA, miRNA, YRNA, tsRNA, rsRNA, and siRNA. Their functions range from RNAi (specific for endogenously expressed miRNA and exogenously derived siRNA) to RNA processing and modification, gene silencing (e.g. X chromosome inactivation by Xist RNA), epigenetic modifications, and protein stability and transport.
Small RNA sequencing
Purification
This step is critical for any molecular technique, since it ensures that the small RNA fragments in the samples to be analyzed have a good level of purity and quality. Different purification methods can be used, depending on the purposes of the experiment:
acid guanidinium thiocyanate-phenol-chloroform extraction: it is based on the use of a guanidinium-thiocyanate solution combined with acid phenol, which disrupts cell membranes, brings the nucleic acids into solution, and inactivates cellular ribonucleases (the guanidinium salt acts as a chaotropic agent). After this step, an aliquot of chloroform is added in order to separate the aqueous phase (containing the RNA molecules) from the organic phase (cellular debris and other contaminants).
spin column chromatography: universally used method to purify nucleic acids that exploits a spin column containing a special resin that, after a first step of cell lysis, allows the binding of the RNA molecules, eluting unbound particles (several proteins and rRNA). The protocol includes two separate chromatographic runs: the first one is required to isolate the whole RNA content from the sample, while the second one is specific for the isolation of small RNA by adding a small RNA enriched matrix to the column and by using a specific buffer to finally elute them. This method can separate small RNA molecules without the need of adding phenol.
Once small RNAs have been isolated, it is important to quantify them and to evaluate the quality of the purification. There are two different methods to do this:
analysis of the absorbances and gel electrophoresis: this practical approach exploits a spectrophotometer to evaluate the absorbance of RNA molecules at 260 nm (1 OD ≈ 40 μg/mL for RNA) in order to estimate their concentration and to reveal possible contamination (e.g. by proteins or carbohydrates); this can be coupled with an electrophoretic run performed under denaturing conditions (8 M urea) to analyze the quality of the purification extracts (low-quality extracts will be degraded and appear as smears in the gel). A simple version of this conversion is sketched after this list.
Agilent bioanalyzer: fully automated technique based on an apparatus with a chip that performs capillary electrophoresis (CE) on small aliquots of the starting samples, producing an electropherogram that is used to estimate the quality of the extracts through a score (ranging from 1 to 10) assigned by the system.
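As a minimal illustration of the absorbance-based quantification mentioned above, the following Java sketch converts an A260 reading into an approximate RNA concentration. The 40 µg/mL conversion factor is the standard value for single-stranded RNA; the A260/A280 purity heuristic is a common rule of thumb added here for illustration, and the class and method names are invented:
public class RnaQuantification {
    // For single-stranded RNA, an absorbance of 1.0 at 260 nm corresponds to roughly 40 micrograms per millilitre.
    private static final double UG_PER_ML_PER_A260 = 40.0;

    // Estimated RNA concentration in micrograms per millilitre, given the measured A260 and the dilution factor.
    public static double concentration(double a260, double dilutionFactor) {
        return a260 * UG_PER_ML_PER_A260 * dilutionFactor;
    }

    // Crude purity check: an A260/A280 ratio close to 2.0 suggests little protein contamination.
    public static boolean looksPure(double a260, double a280) {
        double ratio = a260 / a280;
        return ratio > 1.8 && ratio < 2.2;
    }

    public static void main(String[] args) {
        System.out.println(concentration(0.25, 10)); // prints 100.0 (µg/mL) for a 1:10 dilution reading 0.25
        System.out.println(looksPure(0.25, 0.125)); // ratio of 2.0, prints true
    }
}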
Library preparation and amplification
Many NGS sequencing protocols rely on the production of a library that contains thousands of fragments of the target nucleic acids, which will then be sequenced by the appropriate technology. Libraries can be created differently according to the sequencing method to be used: in the Ion Torrent technology, RNA fragments are attached directly to a magnetic bead through an adapter, while for Illumina sequencing the RNA fragments are first ligated to the adapters and then attached to the surface of a plate. Generally, universal adapters A and B are ligated to the 5' and 3' ends of the RNA fragments by the activity of T4 RNA ligase 2, truncated; the adapters contain well-known sequences that include Unique Molecular Identifiers (UMIs), used to quantify small RNAs in a sample, and sample indexes, which allow discrimination between RNA molecules deriving from different samples. After the adapters are ligated to both ends of the small RNAs, reverse transcription produces complementary DNA molecules (cDNAs), which are then amplified by different techniques depending on the sequencing protocol being followed (Ion Torrent exploits emulsion PCR, while Illumina requires bridge PCR) in order to obtain up to billions of amplicons to be sequenced. Besides the regular PCR mix, masking oligonucleotides targeting 5.8S rRNA are added to increase sensitivity to small RNA targets and to improve the amplification results. Caution has to be used, as RNA samples are prone to degradation, and further improvement of this technique should be oriented towards the elimination of adapter dimers. Some specific RNA modifications (such as 5′ hydroxyl (5′-OH), 3′-phosphate (3′-P) and 2′,3′-cyclic phosphate (2′3′-cP)) can block the adapter ligation process, while some other RNA modifications (such as m1A, m3C, m1G and m22G) can interfere with the reverse transcription process. Small RNAs bearing one or more of these modifications are often inefficiently and incompletely converted into cDNAs, leading to challenges with their detection and quantitation by deep sequencing; this can be overcome by pre-treatment with enzymes such as PNK and AlkB.
Sequencing
Depending on the purpose of the analysis, RNA-seq can be performed using different approaches:
Ion Torrent sequencing: an NGS technology based on a semiconductor chip, onto which the sample is loaded, integrated with an ion-sensitive field-effect transistor that sensitively detects the reduction in pH caused by the release of one or more protons when one or more dNTPs are incorporated during sequencing by synthesis. The signal is then transmitted to a machine composed of an electronic reading board to interface with the chip, a microprocessor for signal processing, and a fluidics system to control the flow of reagents over the chip.
Illumina sequencing: it offers a good method for small RNA sequencing and is the most widely used approach. After the library preparation and amplification steps, sequencing (based on the use of reversible dye terminators) can be performed on different systems, such as the MiSeq System, MiSeq Series, NextSeq Series and many others, depending on the application.
Data analysis and storage
The final step concerns data analysis and storage: after the sequencing reads are obtained, UMI and index sequences are automatically removed from the reads and their quality is analyzed with PHRED (software able to evaluate the quality of the sequencing process); reads can then be mapped or aligned to a reference genome in order to extract information about their similarity. Reads having the same length, sequence and UMI are considered equal, and the duplicates are removed from the hit list. Indeed, the number of different UMIs observed for a given small RNA sequence reflects its copy number.
The small RNAs are finally quantified by assigning molecules to transcript annotations from different databases (Mirbase, GtRNAdb and Gencode).
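A minimal sketch of the UMI-based counting idea described above is given below in Java; the class and method names are invented for illustration, and real pipelines additionally correct sequencing errors in the UMIs and handle mapping ambiguity:
import java.util.*;

public class UmiCounter {
    // Maps each distinct small-RNA sequence to the set of UMIs observed for it.
    private final Map<String, Set<String>> umisPerSequence = new HashMap<>();

    // Record one read; duplicate (sequence, UMI) pairs collapse automatically because a set is used.
    public void addRead(String sequence, String umi) {
        umisPerSequence.computeIfAbsent(sequence, s -> new HashSet<>()).add(umi);
    }

    // The number of distinct UMIs seen for a sequence is taken as its copy-number estimate.
    public int copyNumber(String sequence) {
        return umisPerSequence.getOrDefault(sequence, Collections.emptySet()).size();
    }

    public static void main(String[] args) {
        UmiCounter counter = new UmiCounter();
        counter.addRead("TGAGGTAGTAGGTTGTATAGTT", "ACGT"); // hypothetical read and UMI
        counter.addRead("TGAGGTAGTAGGTTGTATAGTT", "ACGT"); // PCR duplicate: same UMI, not counted twice
        counter.addRead("TGAGGTAGTAGGTTGTATAGTT", "GGCA"); // distinct molecule with a different UMI
        System.out.println(counter.copyNumber("TGAGGTAGTAGGTTGTATAGTT")); // prints 2
    }
}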
Applications
Small RNA sequencing can be useful for:
studying the expression profile of miRNA and other small RNAs
increasing the understanding of how cells are regulated or misregulated under pathological conditions
small RNA clustering
novel small RNA discovery
small RNA prediction
differential expression of all small RNAs in any sample
References
Barquist L, Vogel J (2015). "Accelerating Discovery and Functional Analysis of Small RNAs with New Technologies". Annual Review of Genetics. 49:367-394. doi:10.1146/annurev-genet-112414-054804.
Hrdlickova R, Toloue M, Tian B (2017 January). "RNA-Seq methods for transcriptome analysis". Wiley Interdisciplinary Reviews: RNA. 8(1). doi:10.1002/wrna.1364.
Faridani OR, Abdullayev I, Hagemann-Jensen M, Schell JP, Lanner F, Sandberg R (2016 December). "Single-cell sequencing of the small-RNA transcriptome". Nature Biotechnology. 34(12):1264-1266. doi:10.1038/nbt.3701.
Shi J, Zhang Y, Tan D, Zhang X, Yan M, Zhang Y, Franklin R, et al. (2021 April). "PANDORA-seq expands the repertoire of regulatory small RNAs by overcoming RNA modifications". Nature Cell Biology. 23(4):424-436. doi:10.1038/s41556-021-00652-7.
Ozsolak F, Milos PM (2011 February). "RNA sequencing: advances, challenges and opportunities". Nature Reviews Genetics. 12(2):87-98. doi:10.1038/nrg2934.
Veneziano D, Di Bella S, Nigita G, Laganà A, Ferro A, Croce CM (2016 December). "Noncoding RNA: Current Deep Sequencing Data Analysis Approaches and Challenges". Human Mutation. 37(12):1283-1298. doi:10.1002/humu.23066.
Raabe CA, Tang TH, Brosius J, Rozhdestvensky TS (2014 February). "Biases in small RNA deep sequencing data". Nucleic Acids Research. 42(3):1414-1426. doi:10.1093/nar/gkt1021.
't Hoen PA, Friedländer MR, Almlöf J, Sammeth M, Pulyakhina I, Anvar SY, Laros JF, Buermans HP, Karlberg O, Brännvall M; GEUVADIS Consortium, den Dunnen JT, van Ommen GJ, Gut IG, Guigó R, Estivill X, Syvänen AC, Dermitzakis ET, Lappalainen T (2013 November). "Reproducibility of high-throughput mRNA and small RNA sequencing across laboratories". Nature Biotechnology. 31(11):1015-1022. doi:10.1038/nbt.2702.
Byron SA, Van Keuren-Jensen KR, Engelthaler DM, Carpten JD, Craig DW (2016 May). "Translating RNA sequencing into clinical diagnostics: opportunities and challenges". Nature Reviews Genetics. 17(5):257-271. doi:10.1038/nrg.2016.10.
Cieślik M, Chinnaiyan AM (2018 February). "Cancer transcriptome profiling at the juncture of clinical translation". Nature Reviews Genetics. 19(2):93-109. doi:10.1038/nrg.2017.96.
Martin JA, Wang Z (2011 September). "Next-generation transcriptome assembly". Nature Reviews Genetics. 12(10):671-82. doi:10.1038/nrg3068.
Molecular biology
RNA
Gene expression
RNA sequencing | Small RNA sequencing | [
"Chemistry",
"Biology"
] | 2,584 | [
"Genetics techniques",
"Gene expression",
"RNA sequencing",
"Molecular biology techniques",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
50,365,786 | https://en.wikipedia.org/wiki/Sources%20of%20electrical%20energy | This article provides information on the following six methods of producing electric power.
Friction: Energy produced by rubbing two materials together.
Heat: Energy produced by heating the junction where two unlike metals are joined.
Light: Energy produced by light being absorbed by photoelectric cells, or solar power.
Chemical: Energy produced by chemical reaction in a voltaic cell, such as an electric battery.
Pressure: Energy produced by compressing or decompressing specific crystals.
Magnetism: Energy produced in a conductor that cuts or is cut by magnetic lines of force.
Friction
Friction is the least-used of the six methods of producing energy. If a cloth rubs against an object, the object will display an effect called friction electricity. The object becomes charged due to the rubbing process, and now possesses a static electrical charge, hence it is also called static electricity. There are two main types of electrical charge: positive and negative. Each type of charge attracts the opposite type and repels the same type. This can be stated in the following way: Like charges repel and unlike charges attract. Static electricity has several applications. Its main application is in Van de Graaff generators, used to produce high voltages in order to test the dielectric strength of insulating materials. Other uses are in electrostatic painting and sandpaper manufacturing. The coarse grains acquire a negative charge as they move across the negative plate. As unlike charges attract, the positive plate attracts the coarse grains and their impact velocity enables them to be embedded into the adhesive.
Heat
In 1821 Thomas Seebeck discovered that the junction between two metals generates a voltage that is a function of temperature. If a closed circuit consists of conductors of two different metals, and if one junction of the two metals is at a higher temperature than the other, an electromotive force is created in a specific polarity. An example of this is in the case of copper and iron, the electrons first flow along the iron from the hot junction to the cold one. The electrons cross from the iron to the copper at the hot junction, and from the copper to the iron at the cold junction. This property of electromotive force production is known as the Seebeck effect. This effect is utilized in the most widely employed method of thermometry.
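For a thermocouple made from metals A and B, the voltage described above is usually written in terms of the metals' Seebeck coefficients; this is the general textbook relation rather than a formula quoted from this article:
V = \int_{T_{\text{cold}}}^{T_{\text{hot}}} \left( S_A(T) - S_B(T) \right) dT \approx (S_A - S_B)(T_{\text{hot}} - T_{\text{cold}})
where S_A and S_B are the Seebeck coefficients of the two metals.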
Light
The sun's rays can be used to produce electrical energy. The direct user of sunlight is the solar cell or photovoltaic cell, which converts sunlight directly into electrical energy without the incorporation of a mechanical device. This technology is simpler than the fossil-fuel-driven systems of producing electrical energy. A solar cell is formed by a light-sensitive p-n junction semiconductor, which when exposed to sunlight is excited to conduction by the photons in light. When light, in the form of photons, hits the cell and strikes an atom, photo-ionisation creates electron-hole pairs. The electrostatic field causes separation of these pairs, establishing an electromotive force in the process. The electric field sweeps the electrons to the n-type material and the holes to the p-type material. If an external current path is provided, electrical energy will be available to do work. The electron flow provides the current, and the cell's electric field creates the voltage. With both current and voltage the silicon cell has power. The greater the amount of light falling on the cell's surface, the greater is the probability of photons releasing electrons, and hence more electric energy is produced.
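Two standard relations summarize the behaviour sketched above; both are general textbook expressions rather than figures specific to this article. A photon can excite an electron-hole pair only if its energy reaches the band gap E_g of the semiconductor, and the electrical power delivered by the cell is the product of its current and voltage:
E_{\text{photon}} = \frac{hc}{\lambda} \geq E_g, \qquad P = IV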
Chemical
When a zinc electrode and copper electrode are placed in a dilute solution of sulfuric acid, the two metals react to each other's presence within the electrolyte and develop a potential difference of about 1 volt between them. When a conducting path joins the electrodes externally, the zinc electrode dissolves slowly into the acid electrolyte, The zinc molecule goes into the electrolyte in the form of positive ions while its electrons are left on the electrode. The copper electrode on the other hand does not dissolve in the electrolyte. Instead, it gives up its electrons to the positively charged ions of hydrogen in the electrolyte, turning them into molecules of hydrogen gas that bubble up around the electrode. The zinc ion combines with the sulfate ion to form zinc sulfate, and this salt falls to the bottom of the cell. The effect of all this is that the dissolving zinc electrode becomes negatively charged, the copper electrode is left with a positive charge, and electrons from the zinc pass through the external circuit to the copper electrode.
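The electrode processes described above correspond to the following half-reactions and overall reaction for the zinc-copper cell in dilute sulfuric acid (standard electrochemistry, consistent with the description in the text):
Zn → Zn2+ + 2e− (at the dissolving zinc electrode)
2H+ + 2e− → H2 (at the copper electrode)
Overall: Zn + H2SO4 → ZnSO4 + H2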
Pressure
The molecules of some crystals and ceramics are permanently polarised: some parts of the molecule are positively charged, while other parts are negatively charged. These materials produce an electric charge when the material changes dimension as a result of an imposed external force. The charge produced is referred to as piezoelectricity. Many crystalline materials such as the natural crystals of quartz and Rochelle salt, together with manufactured polycrystalline ceramics such as lead titanate zirconate and barium titanate, exhibit piezoelectric effects. Piezoelectric materials are used as buzzers inside pagers, ultrasonic cleaners and mobile phones, and in gas igniters. In addition, these piezoelectric sensors are able to convert pressure, force, vibration, or shock into electrical energy. Being capable only of measuring active events, they are also used in flow meters, accelerometers and level detectors, as well as motor vehicles, to sense changes in the transmission, fuel injection and coolant pressure. When a voltage or an applied electric field stresses a piezo element electrically, its dimensions change. This phenomenon is known as electrostriction, or the reverse piezoelectric effect. This effect enables the element to act as a translating device called an actuator. Piezoelectric materials are used in power actuators, converting electrical energy into mechanical energy, and in acoustic transducers, converting electric fields into sound waves.
Magnetism
The most useful and widely employed application of magnetism is in the production of electrical energy. The mechanical power needed to assist in this production is provided by a number of different sources. These sources are called prime movers, and include diesel, petrol and natural gas engines. Coal, oil, natural gas, biomass and nuclear energy are energy sources that are used to heat water to produce super-heated steam. Non-mechanical prime movers include water, steam, wind, wave motion and tidal current. These non-mechanical prime movers engage a turbine that is coupled to a generator. Generators that employ the principle of electro-magnetic induction carry out the final conversion of these energy sources. In order to do this, three necessary conditions must exist before a voltage is created by magnetism: movement, conductors and a magnetic field.
In accordance with these conditions, when a conductor or conductors move through a magnetic field to cut the lines of force, electrons are enabled to enter the conduction band thereby inducing an electric pressure for the production of alternating current in an external circuit. This may be referred to as an elementary alternator, consisting of a single wire loop called an armature with each end being attached to slip-rings and arranged so as to revolve midway between the magnetic poles. Two copper-graphite brushes connect with the external circuit on the slip-rings in order to collect the alternating current, generated in the conductor when the alternator is in operation. Another machine used for converting mechanical energy into electrical energy by means of electromagnetic induction is called a dynamo or direct current generator.
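The induction described above is usually summarized by Faraday's law of electromagnetic induction; this is the standard form for a coil of N turns, not a formula quoted from the article:
\mathcal{E} = -N \frac{d\Phi_B}{dt}, \qquad \Phi_B = \int \mathbf{B} \cdot d\mathbf{A}
where \mathcal{E} is the induced electromotive force and \Phi_B is the magnetic flux linked by the conductor.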
The key difference between an alternator and a generator is that the alternator delivers AC (alternating current) to the external circuit, while the generator delivers DC (direct current). In both machines alternating current is induced in the armature, but the type of current delivered to the external circuit depends on the way in which the induced current is collected. In an alternator, the current is collected by brushes bearing against slip-rings; in a generator, a form of rotating switch called the commutator is placed between the armature and the external circuit. The commutator is designed to reverse the connections with the external circuit at the instant of each reversal of induced current in the armature, producing rectified current or direct current. This rectified current is not pure like the current of a voltaic cell but is instead a pulsating current that is constant in direction and varying in intensity.
Electric power | Sources of electrical energy | [
"Physics",
"Engineering"
] | 1,719 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
50,369,883 | https://en.wikipedia.org/wiki/Pesticide%20detection%20kit | A pesticide detection kit is a scientific test kit that detects the presence of pesticide residues. Various organizations create them, among them the Defence Food Research Laboratory of India.
References
Pesticides
Chemical tests | Pesticide detection kit | [
"Chemistry",
"Biology",
"Environmental_science"
] | 42 | [
"Pesticides",
"Toxicology",
"Biotechnology stubs",
"Chemical tests",
"Biochemistry stubs",
"Biochemistry",
"Biocides"
] |
50,376,988 | https://en.wikipedia.org/wiki/Anaerobic%20membrane%20bioreactor | Anaerobic membrane bioreactor (AnMBR) is the name of a technology utilized in wastewater treatment. It is a membrane filtration technology for biomass retention. An AnMBR works by using a membrane bioreactor (MBR) in an anaerobic environment. Anaerobic bacteria (mesophilic or thermophilic) and archaea convert organic materials into carbon dioxide (CO2) and methane (CH4). The sewage is filtered and separated by membranes, leaving the effluent and sludge apart. The produced biogas can later be combusted to generate heat or electricity. It can also be upgraded (purified) into renewable natural gas of household quality. AnMBR is considered to be a sustainable alternative for sewage treatment because the energy that can be generated by methane combustion can exceed the energy required for maintaining the process.
Process
The AnMBR technology goes through two stages to ensure maximum solid-liquid separation, adhering to increasing standards for effluent. First, the wastewater enters the anaerobic bioreactor unit, where the organic load goes through the anaerobic process to be transformed into biogas. Subsequently, the remaining liquid, which still has small amounts of solids, goes into the membrane unit, to separate the remaining, smaller solid particles from the anaerobically treated wastewater. This wastewater, otherwise known as effluent, can now either directly be recycled, or can further be treated by Reverse osmosis. The remaining solid particles are then cycled back to the anaerobic bioreactor unit where they can go through the biogas production process. Overall, this process removes 99% of the organic load contained within wastewater, and also produces biogas with a 70% purity.
Anaerobic Process
The anaerobic aspect of this process, i.e. the part carried out without oxygen, is performed by anaerobic microorganisms degrading organic substances in the wastewater. An integral process within this part of the treatment is hydrolysis, which decomposes the organic compounds into much simpler compounds that can then pass through microbial cell membranes. Hydrolysis is the first of a four-step process that completes the transformation of organic matter into biogas:
Hydrolysis: Specifically, enzymatic hydrolysis is used to release proteins from the microorganisms. This process subsequently breaks down complex compounds with a large molecular mass, such as lipids, proteins, and polysaccharides, into simple compounds such as:
Lipids → fatty acids and glycerol
Proteins → amino acids
Polysaccharides → glucose, fructose, and galactose
Acidogenesis: Takes the products of hydrolysis, mentioned above, and utilizes acidogenic bacteria to transform them into:
Short-chain fatty acids: lactic acid, propionic acid, and butyric acids
Ethanol
Hydrogen gas
Carbon dioxide
Acetogenesis: Anaerobic bacteria are used to convert the products of acidogenesis to:
Acetic acid
Hydrogen gas
Carbon dioxide
Small organic molecules
Methanogenesis: In the final step, methanogenic bacteria transform the aforementioned intermediate products into biogas (methane and carbon dioxide); representative overall reactions are sketched below.
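Representative overall reactions for this final, methane-forming step are shown below; these are the commonly cited acetoclastic and hydrogenotrophic routes, given here for illustration rather than quoted from the sources above:
CH3COOH → CH4 + CO2 (acetoclastic methanogenesis, from acetic acid)
CO2 + 4 H2 → CH4 + 2 H2O (hydrogenotrophic methanogenesis, from carbon dioxide and hydrogen)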
This process has to be carried out under a specific range of temperatures and pH. Most of the time, an anaerobic process will be slowed down if the temperature of the process goes below 35 °C (mesophilic and thermophilic conditions). Since a distinguishing characteristic of anaerobic processes is their slow development, many other factors can further slow down this process such as the organic matter composition, nutrient concentration, and the presence of toxic substances. All of these factors can fully veer the treatment off its course if not analyzed properly.
Membrane Process
The membrane process utilizes biofilms, naturally occurring microbial layers found in lakes, rivers, on rocks, and on other natural surfaces. They are used by causing the necessary biomass/organic matter to attach to the desired area. The solid particles are too big to permeate the membrane, so only liquid is able to get through. This allows for a high retention rate, making the treated wastewater reusable.
Variants of the AnMBR
The first known variant of the AnMBR was developed by Dorr-Oliver in 1980 to treat wastewater with high organic loads, specifically dairy wastewater. Due to the high cost of membranes, the technology was never applied on a larger commercial scale, only going through laboratory and pilot scale trials.
There are three main configurations of the AnMBR, each with a different location of the membrane unit. The variants all have their own advantages and disadvantages both in terms of cost and operability.
Crossflow/ External AnMBR
This variant keeps the membrane unit outside of the main reactor unit, hence its name. In this configuration, the wastewater goes through the anaerobic process. After this step is complete, the remaining mixed liquor, put under a high amount of pressure, flows into the external membrane unit. Keeping the same pressure, crossflow filtration is utilized to separate the permeate and retentate, effluent and organic load respectively. Ultimately, the two end up settling on opposite sides of the membrane filtration system. From here, the effluent is released and the organic load cycles back to the main reactor unit where it can go through the anaerobic process to create more biogas.
Submerged AnMBR
This variant integrates the membrane unit directly into the bioreactor unit. This configuration varies from the other two in the fact that the raw influent enters the membrane unit instead of first going through the anaerobic process. In the membrane unit, a low negative pressure separates the retentate and permeate. The permeate, otherwise known as the effluent, leaves the system while the retentate goes through the anaerobic process to become biogas.
Externally Submerged AnMBR
This variant of the AnMBR combines the previous two variants, keeping the membrane unit external, but submerged within an external chamber. The anaerobic process takes place first, and then subsequently enters the membrane unit for filtration. Here, the influent(wastewater) is pumped into the externally submerged chamber where it is then filtered into the permeate(effluent) and the retentate(organic load). This variant, similar to the submerged AnMBR, also utilizes low negative pressure to separate the permeate and retentate. Following this process, the effluent leaves the system while the organic load recirculates into the bioreactor unit to then turn into biogas.
Advantages and Disadvantages of Each Variant
Between the three variants of the AnMBR, there are many factors that weigh into their industrial use.
Cost: The submerged AnMBR variant is the most cost-effective due to the low negative pressure requirements. In addition, the liquid does not need to be pumped into an external chamber to go through the filtration process. Due to both of these characteristics of the submerged AnMBR, it has a lower energy requirement, therefore costing less to operate than the external AnMBR variants.
Operability: The external AnMBR/ external submerged AnMBR variants are the most advantageous in terms of operability. In these variants, less membrane fouling occurs, and therefore these variants are functional for long periods of time. Additionally, due to the two units being separated, they are much simpler to clean when compared with the submerged AnMBR.
Size: While one of the main overall advantages of the AnMBR is its relatively compact size, the submerged AnMBR variant is the most compact of the three, keeping all of its operations within a single unit.
Despite the disadvantages that the submerged AnMBR harbors in terms of operability, its cost and size are both desirable traits to companies, subsequently making it the frequented variant in the industry.
Environmental Impacts
This technology can be used to diminish the effect of droughts by effectively treating the wastewater in such a way that it can be reused, specifically for agricultural purposes.
In addition, the AnMBR properly treats the organic load in wastewater such that it is not being released into the environment. The technology also produces less sludge due to the conversion into biogas, which provides more of an opportunity for recycling.
When compared to its counterparts, the traditional Membrane Bioreactor and the Aerobic Membrane Bioreactor, the AnMBR comes up ahead due to the higher quality of effluent that it discharges as well as the lesser amount of sludge, due to biogas production.
Shortcomings
While the AnMBR technology has many benefits for revolutionizing wastewater treatment, it does not come without its drawbacks. The AnMBR is prone to membrane fouling by aggregation of bacteria. This proves to be quite dangerous for the technology as it would drastically reduce the efficiency of filtration, in turn also increasing energy consumption, making the entire process more expensive. Membrane fouling also leads to the technology having to be replaced much more often, which is also expensive. In addition, the anaerobic bacteria are susceptible to entering the effluent, which leads to their loss in the reactor unit.
Industrial Applications
While the AnMBR has not yet been applied at an industrial scale, a few companies produce the technology and market it as a viable alternative to current wastewater treatment technologies. Two prominent companies marketing the AnMBR system are Aquatech and Evoqua. Currently, Aquatech produces the external AnMBR configuration while Evoqua produces the submerged AnMBR configuration.
References
Water treatment
Environmental engineering
Sanitation
Sewerage
Bioreactors
Membrane technology | Anaerobic membrane bioreactor | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 1,995 | [
"Bioreactors",
"Biological engineering",
"Bioengineering stubs",
"Separation processes",
"Water treatment",
"Chemical reactors",
"Chemical engineering",
"Biotechnology stubs",
"Biochemical engineering",
"Water pollution",
"Sewerage",
"Microbiology equipment",
"Membrane technology",
"Civil ... |
50,376,989 | https://en.wikipedia.org/wiki/Flood%20embankment | A flood embankment is traditionally an earth wall used to hold back flood waters.
Most flood embankments are between 1 metre and 3 metres high; embankments much taller than this are rare.
Modern improvements to this design include constructing an internal central core made from an impermeable material such as clay or concrete; some designs even use metal pilings.
Some authorities call such man-made structures levees.
Problems
Examples
Clifton, Rawcliffe, Poppleton and Leeman ings in York
River Gowan, Cumbria
River Trent
Animation
The original article includes an animation showing a flood event overwhelming neighbouring properties, the subsequent construction of a flood embankment, and the resulting flood warning and protection status.
References
Flood control | Flood embankment | [
"Chemistry",
"Engineering"
] | 130 | [
"Flood control",
"Environmental engineering"
] |
32,779,191 | https://en.wikipedia.org/wiki/Hahn%E2%80%93Exton%20q-Bessel%20function | In mathematics, the Hahn–Exton q-Bessel function, or the third Jackson q-Bessel function, is a q-analog of the Bessel function that satisfies the Hahn–Exton q-difference equation. The function was introduced by Hahn in a special case and by Exton in general.
The Hahn–Exton q-Bessel function is defined by a series that can be written in terms of the basic hypergeometric function.
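A form commonly used in the literature for this definition, reconstructed here and therefore to be checked against the cited sources, expresses the function through the basic hypergeometric series ₁φ₁:
J_\nu^{(3)}(x;q) = x^{\nu} \frac{(q^{\nu+1};q)_\infty}{(q;q)_\infty} \, {}_1\phi_1\!\left(0;\, q^{\nu+1};\, q,\, qx^{2}\right) = x^{\nu} \frac{(q^{\nu+1};q)_\infty}{(q;q)_\infty} \sum_{k=0}^{\infty} \frac{(-1)^{k}\, q^{k(k+1)/2}}{(q^{\nu+1};q)_{k}\,(q;q)_{k}}\, x^{2k}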
Properties
Zeros
Koelink and Swarttouw proved that the Hahn–Exton q-Bessel function has an infinite number of real zeros.
They also proved that all of its non-zero roots are real. Zeros of the Hahn–Exton q-Bessel function appear in a discrete analog of Daniel Bernoulli's problem about the free vibrations of a lump-loaded chain.
Derivatives
Expressions for the (usual) derivative and the q-derivative of the Hahn–Exton q-Bessel function, as well as for its symmetric q-derivative, are given in the references.
Recurrence Relation
The Hahn–Exton q-Bessel function satisfies a recurrence relation; its explicit form can be found in the references.
Alternative Representations
Integral Representation
The Hahn–Exton q-Bessel function also admits an integral representation, which is given in the references.
Hypergeometric Representation
The Hahn–Exton q-Bessel function has a hypergeometric representation in terms of the basic hypergeometric function. This representation converges fast in part of the domain and also provides an asymptotic expansion; the precise statements are given in the references.
References
Special functions
Q-analogs | Hahn–Exton q-Bessel function | [
"Mathematics"
] | 288 | [
"Special functions",
"Q-analogs",
"Combinatorics"
] |
32,781,954 | https://en.wikipedia.org/wiki/Backus%E2%80%93Gilbert%20method | In mathematics, the Backus–Gilbert method, also known as the optimally localized average (OLA) method is named for its discoverers, geophysicists George E. Backus and James Freeman Gilbert. It is a regularization method for obtaining meaningful solutions to ill-posed inverse problems. Where other regularization methods, such as the frequently used Tikhonov regularization method, seek to impose smoothness constraints on the solution, Backus–Gilbert instead seeks to impose stability constraints, so that the solution would vary as little as possible if the input data were resampled multiple times. In practice, and to the extent that is justified by the data, smoothness results from this.
Given a data array X, the basic Backus-Gilbert inverse is:
where C is the covariance matrix of the data, and Gθ is an a priori constraint representing the source θ for which a solution is sought. Regularization is implemented by "whitening" the covariance matrix:
with C′ replacing C in the equation for Hθ. Then,
is an estimate of the activity of the source θ.
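For concreteness, the quantities referred to above can be written out as follows; this is a standard presentation of the optimally localized average estimator, with ε denoting a small regularization parameter (an assumed notation), and should be checked against the Backus and Gilbert papers cited below:
H_\theta = \frac{C^{-1} G_\theta}{G_\theta^{\mathsf{T}} C^{-1} G_\theta}, \qquad C' = C + \varepsilon I, \qquad \hat{S}_\theta = H_\theta^{\mathsf{T}} X
where \hat{S}_\theta is the estimated activity of the source θ.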
References
Backus, G.E., and Gilbert, F. 1968, "The Resolving power of Gross Earth Data", Geophysical Journal of the Royal Astronomical Society, vol. 16, pp. 169–205.
Backus, G.E., and Gilbert, F. 1970, "Uniqueness in the Inversion of inaccurate Gross Earth Data", Philosophical Transactions of the Royal Society of London A, vol. 266, pp. 123–192.
Inverse problems
Linear algebra | Backus–Gilbert method | [
"Mathematics"
] | 328 | [
"Inverse problems",
"Applied mathematics",
"Linear algebra",
"Algebra"
] |
54,088,882 | https://en.wikipedia.org/wiki/Theoretical%20strength%20of%20a%20solid | The theoretical strength of a solid is the maximum possible stress a perfect solid can withstand. It is often much higher than what current real materials can achieve. The lowered fracture stress is due to defects, such as interior or surface cracks. One of the goals for the study of mechanical properties of materials is to design and fabricate materials exhibiting strength close to the theoretical limit.
Definition
When a solid is in tension, its atomic bonds stretch elastically. Once a critical strain is reached, all the atomic bonds on the fracture plane rupture and the material fails mechanically. The stress at which the solid fractures is the theoretical strength, often denoted as σ_max. After fracture, the stretched atomic bonds return to their initial state, except that two surfaces have formed.
The theoretical strength is often approximated as:
where
σ_max is the maximum theoretical stress the solid can withstand.
E is the Young's Modulus of the solid.
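The approximation referred to above is usually quoted in textbooks as roughly one tenth of the Young's modulus; the exact prefactor varies between treatments (E/2π also appears), so this should be read as an order-of-magnitude estimate rather than a value taken from this article:
\sigma_{\max} \approx \frac{E}{10}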
Derivation
The stress-displacement (σ versus x) relationship during fracture can be approximated by a sine curve, σ = σ_max sin(2πx/λ), up to λ/4. The initial slope of the σ versus x curve can be related to Young's modulus through the following relationship: (dσ/dx)₀ = E (dε/dx)₀,
where
σ is the stress applied.
E is the Young's Modulus of the solid.
ε is the strain experienced by the solid.
x is the displacement.
The strain can be related to the displacement x by ε = x/a₀, where a₀ is the equilibrium inter-atomic spacing. The strain derivative is therefore given by dε/dx = 1/a₀.
The relationship of the initial slope of the σ versus x curve with Young's modulus thus becomes (dσ/dx)₀ = E/a₀.
The sinusoidal relationship of stress and displacement gives the derivative (dσ/dx)₀ = (2π/λ) σ_max.
By setting the two expressions equal, the theoretical strength becomes σ_max = Eλ/(2π a₀).
The theoretical strength can also be approximated using the fracture work per unit area, which result in slightly different numbers. However, the above derivation and final approximation is a commonly used metric for evaluating the advantages of a material's mechanical properties.
See also
Strength of materials
Fracture mechanics
Solid mechanics
Stress (mechanics)
Ultimate tensile strength
Fracture
Creep (deformation)
Material Failure Theory
References
Solid mechanics
Materials science | Theoretical strength of a solid | [
"Physics",
"Materials_science",
"Engineering"
] | 413 | [
"Solid mechanics",
"Applied and interdisciplinary physics",
"Materials science",
"Mechanics",
"nan"
] |