Dataset columns: id (int64, 39 to 79M) · url (string, 32 to 168 chars) · text (string, 7 to 145k chars) · source (string, 2 to 105 chars) · categories (list, 1 to 6 items) · token_count (int64, 3 to 32.2k) · subcategories (list, 0 to 27 items)
78,735,305
https://en.wikipedia.org/wiki/Gildeuretinol
Gildeuretinol is an investigational new drug being developed by Alkeus Pharmaceuticals for the treatment of retinal diseases, particularly Stargardt disease and geographic atrophy secondary to age-related macular degeneration (AMD). Stargardt disease is caused by a defect in the ABCA4 gene, whose product normally clears the toxic byproducts that result from the dimerization of vitamin A. Gildeuretinol is a deuterated derivative of vitamin A that is designed to reduce this dimerization without affecting the visual cycle. Gildeuretinol has received breakthrough therapy and orphan drug designations from the U.S. Food and Drug Administration. References Vitamin A Deuterated compounds
Gildeuretinol
[ "Chemistry" ]
147
[ "Pharmacology", "Vitamin A", "Medicinal chemistry stubs", "Biomolecules", "Pharmacology stubs" ]
78,737,233
https://en.wikipedia.org/wiki/Serge%20Belamant
Serge Belamant is a South African inventor and entrepreneur known for his contributions to cryptography technologies and financial systems. He is the founder of NET1 Technologies (now Lesaka Technologies) and played a significant role in the development of the Chip Offline Pre-authorized Card (COPAC) and the Universal Electronic Payment System (UEPS). Early life and education Serge Belamant was born in 1953 and moved to South Africa at the age of 14. In 1972, he enrolled at Witwatersrand University, initially studying engineering before switching to computer science and applied mathematics. After two years, he discontinued his studies and pursued courses in information systems through UNISA (University of South Africa). Career In 1989, Serge Belamant developed the Universal Electronic Payment System (UEPS), enabling secure, real-time transactions even in areas with limited connectivity. In the same year, he founded NET1 UEPS Technologies Inc., serving as its CEO and Director. In 1995, VISA tasked Belamant with designing the Chip Offline Pre-authorized Card (COPAC), a technology still widely used in chip-enabled credit and debit cards. A year later, he listed his company APLITEC (Applied Technology Holdings Limited) on the Johannesburg Stock Exchange. In 1999, Belamant acquired Cash Payment Services (CPS) from First National Bank of South Africa, modernizing its welfare payment system to serve millions in rural areas. In 2005, he led NET1 Technologies to an IPO, listing it as NET1 UEPS Technologies Inc. on the Nasdaq. A secondary listing on the Johannesburg Stock Exchange (JSE) followed in 2008. Under Belamant's leadership, NET1 managed welfare payments for the South African Social Security Agency (SASSA), handling payments for over 10 million beneficiaries monthly. Despite criticism over handling the SASSA contract, investigations by the U.S. Department of Justice and the South African Constitutional Court found no wrongdoing. Belamant retired in 2017, leaving NET1 with over 10,000 pay points and annual transaction volumes of 160 billion rand—15 to 20% of South Africa's national budget. In 2018, he co-founded Zilch Technology, a direct-to-consumer ad-subsidized payments network, alongside Philip Belamant and Sean O’Connor. References 1953 births Living people South African inventors South African businesspeople Businesspeople in information technology University of South Africa alumni South African emigrants Cryptography
Serge Belamant
[ "Mathematics", "Engineering" ]
517
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
78,737,317
https://en.wikipedia.org/wiki/Imocitrelvir
Imocitrelvir is an investigational new drug that is being evaluated for the treatment of viral infections. It is an inhibitor of the 3C protease of picornaviruses. Originally developed by Pfizer for treating human rhinovirus infections, this small molecule has shown promise against a broader range of viruses, including polioviruses. References Antiviral drugs Amides Enoic acids Esters Isoxazoles Propargyl compounds Pyridones Pyrrolidones
Imocitrelvir
[ "Chemistry", "Biology" ]
102
[ "Pharmacology", "Antiviral drugs", "Biocides", "Esters", "Functional groups", "Medicinal chemistry stubs", "Organic compounds", "Pharmacology stubs", "Amides" ]
78,738,102
https://en.wikipedia.org/wiki/Myerson%20value
The Myerson value is a solution concept in cooperative game theory. It is a generalization of the Shapley value to communication games on networks. The solution concept and the class of cooperative communication games it applies to were introduced by Roger Myerson in 1977. Preliminaries Cooperative games A (transferable utility) cooperative game is defined as a pair $(N, v)$, where $N$ is a set of players and $v : 2^N \to \mathbb{R}$ is a characteristic function, with $2^N$ the power set of $N$. Intuitively, $v(S)$ gives the "value" or "worth" of coalition $S \subseteq N$, and we have the normalization restriction $v(\emptyset) = 0$. The set of all such games for a fixed $N$ is denoted as $\Gamma_N$. Solution concepts and the Shapley value A solution concept – or imputation – in cooperative game theory is an allocation rule $\varphi : \Gamma_N \to \mathbb{R}^{|N|}$, with its $i$-th component $\varphi_i(v)$ giving the value that player $i$ receives. A common solution concept is the Shapley value $\varphi^{Sh}$, defined component-wise as $\varphi_i^{Sh}(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\big(v(S \cup \{i\}) - v(S)\big)$. Intuitively, the Shapley value allocates to each player $i$ how much they contribute in value (defined via the characteristic function $v$) to every possible coalition $S \subseteq N \setminus \{i\}$. Communication games Given a cooperative game $(N, v)$, suppose the players in $N$ are connected via a graph – or network – $g \subseteq \{\{i,j\} : i, j \in N,\, i \neq j\}$. This network represents the idea that some players can communicate and coordinate with each other (but not necessarily with all players), imposing a restriction on which coalitions can be formed. Such an overall structure can be represented by a communication game $(N, v, g)$. The graph $g$ can be partitioned into its connected components, which in turn induces a unique partition $S/g$ on any subset $S \subseteq N$, given by $S/g = \big\{\,\{\, j \in S : i \text{ and } j \text{ are connected in } S \text{ by } g \,\} : i \in S \,\big\}$. Intuitively, if the coalition $S$ were to break up into smaller coalitions in which players could only communicate with each other through the network $g$, then $S/g$ is the family of such coalitions. The communication game induces a cooperative game $(N, v^g)$ with characteristic function given by $v^g(S) = \sum_{T \in S/g} v(T)$. Definition Main definition Given a communication game $(N, v, g)$, its Myerson value $\mu(N, v, g)$ is simply defined as the Shapley value of its induced cooperative game: $\mu(N, v, g) = \varphi^{Sh}(N, v^g)$. Extensions Beyond the main definition above, it is possible to extend the Myerson value to networks with directed graphs. It is also possible to define allocation rules which are efficient (see below) and coincide with the Myerson value for communication games with connected graphs. Properties Existence and uniqueness Being defined as the Shapley value of an induced cooperative game, the Myerson value inherits both existence and uniqueness from the Shapley value. Efficiency In general, the Myerson value is not efficient in the sense that the total worth of the grand coalition is distributed among all the players: $\sum_{i \in N} \mu_i(N, v, g)$ need not equal $v(N)$. The Myerson value will coincide with the Shapley value (and be an efficient allocation rule) if the network $g$ is connected. (Component) efficiency For every coalition $C \in N/g$, the Myerson value allocates the total worth of the coalition to its members: $\sum_{i \in C} \mu_i(N, v, g) = v(C)$. Fairness For any pair of agents $i, j$ such that $\{i, j\} \in g$ – i.e., they are able to communicate through the network – the Myerson value ensures that they have equal gains from their bilateral agreement under its allocation rule: $\mu_i(N, v, g) - \mu_i(N, v, g \setminus \{i,j\}) = \mu_j(N, v, g) - \mu_j(N, v, g \setminus \{i,j\})$, where $g \setminus \{i,j\}$ represents the graph $g$ with the link $\{i, j\}$ removed. Axiomatic characterization Indeed, the Myerson value is the unique allocation rule that satisfies both (component) efficiency and fairness. Notes References Cooperative games Network theory
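The construction above (restrict the game to the coalitions that are internally connected, then take the Shapley value of the induced game) is mechanical enough to code directly. Below is a minimal brute-force Python sketch; it is illustrative rather than taken from the article, the game, network and function names are invented for the example, and the enumeration is exponential, so it is only suitable for a handful of players.

```python
from itertools import combinations
from math import factorial

def components(players, graph):
    """Connected components of the subgraph induced on `players` by edge list `graph`."""
    players = set(players)
    edges = [e for e in graph if set(e) <= players]
    comps, seen = [], set()
    for p in players:
        if p in seen:
            continue
        comp, stack = {p}, [p]
        while stack:
            u = stack.pop()
            for a, b in edges:
                for u2, v2 in ((a, b), (b, a)):
                    if u2 == u and v2 not in comp:
                        comp.add(v2)
                        stack.append(v2)
        seen |= comp
        comps.append(frozenset(comp))
    return comps

def shapley(players, v):
    """Shapley value of the game (players, v) by direct summation over subsets."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (v(frozenset(S) | {i}) - v(frozenset(S)))
        phi[i] = total
    return phi

def myerson(players, v, graph):
    """Myerson value = Shapley value of the graph-restricted game v^g."""
    def v_g(S):
        return sum(v(C) for C in components(S, graph))
    return shapley(players, v_g)

# Toy example: three players on a line network 1-2-3; pairs are worth 0.5,
# the grand coalition is worth 1. Player 2 (the cut vertex) gains from fairness.
players = [1, 2, 3]
v = lambda S: 1.0 if len(S) == 3 else (0.5 if len(S) == 2 else 0.0)
print(myerson(players, v, graph=[(1, 2), (2, 3)]))
```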
Myerson value
[ "Mathematics" ]
644
[ "Graph theory", "Cooperative games", "Network theory", "Game theory", "Mathematical relations" ]
78,738,320
https://en.wikipedia.org/wiki/Thurston%27s%2024%20questions
Thurston's 24 questions are a set of mathematical problems in differential geometry posed by American mathematician William Thurston in his influential 1982 paper Three-dimensional manifolds, Kleinian groups and hyperbolic geometry published in the Bulletin of the American Mathematical Society. These questions significantly influenced the development of geometric topology and related fields over the following decades. History The questions appeared following Thurston's announcement of the geometrization conjecture, which proposed that all compact 3-manifolds could be decomposed into geometric pieces. This conjecture, later proven by Grigori Perelman in 2003, represented a complete classification of 3-manifolds and included the famous Poincaré conjecture as a special case. By 2012, 22 of Thurston's 24 questions had been resolved. Table of problems Thurston's 24 questions are: See also Geometrization conjecture Hilbert's problems Taniyama's problems List of unsolved problems in mathematics Poincaré conjecture Smale's problems References Geometric topology Unsolved problems in mathematics
Thurston's 24 questions
[ "Mathematics" ]
213
[ "Mathematical problems", "Unsolved problems in mathematics", "Topology", "Geometric topology" ]
78,739,502
https://en.wikipedia.org/wiki/Pieter%20Abraham%20van%20de%20Velde
Pieter Abraham van de Velde (22 November 1913 – 10 May 2001) was a Dutch civil engineer and professor of road and hydraulic engineering. He contributed to several major water engineering projects in the Netherlands, notably the drainage of Walcheren at the end of the Second World War, dike restorations following the 1953 North Sea flood, and the Deltaplan. A proponent of integrating statistical methods into engineering, van de Velde advocated for probabilistic approaches to assess safety and manage uncertainty in the design of flood defences. In his 1980 farewell lecture at Delft University of Technology, he emphasised the limitations of deterministic safety factors and underscored the importance of using probabilistic techniques, such as Monte Carlo simulations, to model risks and failure probabilities in complex systems. Early life and education Van de Velde was born on 22 November 1913 in Utrecht. He attended the Hogere Burgerschool and graduated as a civil engineer in 1937 from the Technische Hogeschool Delft. After completing his military service he worked for two years at the Waterloopkundig Laboratorium in Delft, followed by positions at Rijkswaterstaat until 1967. Career at Rijkswaterstaat Van de Velde served in various roles at Rijkswaterstaat, including the last six years (1961–1966) as chief engineering director of the (English: Delta Department, North) and the (English: Directorate of Closure Works). During this period, he worked on major projects such as the design and construction of closure dams in Walcheren, the Zuiderzee, and the Delta Works. Walcheren reclamation (1944–1945) During the drainage of Walcheren at the end of World War II, van de Velde was part of the engineering team of the (English: Walcheren Reclamation Service), which was led by Pieter Philippus Jansen. Their task involved sealing breaches in the dikes created by Allied bombing. The event was later chronicled in the novel Het verjaagde water by A. den Doolaard, in which van de Velde is portrayed as the character “Schoonebloem". Repairs following the 1953 North Sea flood In the aftermath of the North Sea flood of 1953, van de Velde oversaw work to close large dike breaches near the Schelphoek and Ouwerkerk. Van de Velde was appointed as lead engineer for the Schelphoek breach on 27 April 1953, which had become so deep due to strong ebb and flood currents that it could not be closed using traditional methods. Under van de Velde's leadership, a strategy combining innovative engineering and adaptive management was employed. Initial studies examined the hydrodynamic and geological conditions of the site, with extensive tidal calculations and laboratory experiments conducted to assess the forces acting on the breach and predict the behaviour of water flow during the closure process. The dynamic nature of the breach required real-time measurements and adjustments to the proposed solutions. The construction of a replacement dike was approached in stages, starting with preparatory works to stabilize the surrounding area. The final closure was executed using massive Phoenix caissons, pre-fabricated concrete structures that had previously been used by the Allies in World War II during the Normandy landings, and had been used at Walcheren. The caissons were carefully positioned on a prepared bed of stone and sand, forming a watertight barrier that allowed for the gradual re-establishment of the dike. 
Van de Velde also played a significant role in the closure of the breach at Ouwerkerk, which involved the use of 11,500 workers, 4 Phoenix caissons, as well as a number of tugboats and stone dumping vessels. The final caisson was placed on the evening of 6 November 1953, in the presence of Queen Juliana and the Dutch prime minister Willem Drees. Involvement in the Delta Works Van de Velde's contributions to the Delta Works included the design of the Haringvlietdam between 1958 and 1970, where he was chief structural engineer for the design of the sluices. For the construction of the Grevelingendam, van de Velde came up with the idea of using a 1.9 kilometre-long cable car system. The advantage of this system in the required gradual vertical closure was that flow velocities were limited, resulting in a reduction in the magnitude of scour holes either side of the dam. Another advantage of the cable car solution was that only a single pylon was required in the centre of the channel. The cable car system was designed by van de Velde and staff at Rijkswaterstaat, in combination with the French company Neyrpic, and used self-propelled cars and a one-way system to optimise capacity. He later advised on plans for the closure of the Eastern Scheldt, which was accomplished by constructing the (Eastern Scheldt Storm Surge Barrier) between the islands of Schouwen-Duiveland and Noord-Beveland. Spanning nine kilometres, the dam was the largest component of the entire Delta Works. Originally the dam had been designed, and partly built, as a fully closed structure. However, following public protests from environmental activists and fishing communities, the Den Uyl cabinet decided in 1974 to make major alterations to the project, thereby requiring a partially open design. Such a structure was unprecedented worldwide, with no existing design codes or construction experience to draw upon. An alternative design was subsequently adopted, featuring substantial sluice-gate doors installed along the final four kilometres of the dam. Under normal circumstances, these gates are left open to allow natural tidal movement, but they can be securely closed during adverse weather conditions. Van de Velde advised the contractor, Dijksbouw Oosterschelde, and liaised with the chief engineer Frank Spaargaren and other key hydraulic engineers such as Jan Agema during the construction. Whilst the innovative design safeguarded the saltwater marine ecosystem, enabled continued fishing activities, and provided effective flood control for the land behind the dam, van de Velde expressed public criticism of the alternative design, believing that the safety risks were too great and the cost estimates for construction too optimistic. The Oosterscheldekering was completed in 1986 and officially opened by Beatrix of the Netherlands on 4 October that year. Professor at Delft University of Technology (1966–1980) From 1966 until his retirement in 1980, Van de Velde was appointed as a professor of civil engineering at the Technische Hogeschool Delft, succeeding Pieter Philippus Jansen. He taught courses, supervised doctoral research, and continued to advise on coastal defences, reclamation projects, and major hydraulic engineering projects in the Netherlands and abroad. He also served on technical advisory committees, including the (English: Technical Advisory Committee on Flood Defences), chairing the group on dike coverings that produced guidelines for using asphalt in hydraulic engineering works. 
In 1980, van de Velde delivered his farewell lecture, "Veiligheid en Monte Carlo" (Safety and Monte Carlo), at Delft. The lecture explored the challenges of ensuring safety in civil engineering, particularly in the context of Dutch water management. Van de Velde emphasised the importance of probabilistic methods, such as the Monte Carlo approach, to account for uncertainties in factors like material strength, environmental loads, and design parameters. He reflected on the limitations of deterministic safety factors, advocating for statistical techniques to assess the likelihood of structural failure. Drawing from the history of Dutch flood defences, he highlighted key advances in dike safety after the 1953 disaster, including the integration of statistical insights into design standards. In particular, van de Velde credited the pioneering work of Pieter Jacobus Wemelsfelder as instrumental in shaping modern approaches to flood defence, noting how a 1938 paper by Wemelsfelder had introduced the application of statistical methods to analyse storm surge heights, challenging the then-standard practice of basing dike heights solely on historical maximum water levels. Van de Velde also acknowledged the work of David van Dantzig on integrating statistical probabilities of extreme water levels with economic considerations, and underscored how this statistical framework allowed for more efficient dike improvements, where modest height increases could drastically reduce failure probabilities, marking a turning point in the Dutch approach to water management. Honours Officer of the Order of Orange-Nassau Selected publications See also Delta Works Flood control in the Netherlands Grevelingendam Haringvlietdam Oosterscheldekering Rijkswaterstaat References Further reading Various (1981). Delta Works Dutch civil engineers Academic staff of the Delft University of Technology 1913 births 2001 deaths Delft University of Technology alumni 20th-century Dutch engineers
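The probabilistic approach described in the farewell lecture lends itself to a compact numerical illustration. The sketch below is not taken from van de Velde's work; it is a generic Monte Carlo estimate of a failure probability under assumed (hypothetical) load and strength distributions, of the kind used in modern flood-defence reliability analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical distributions (illustrative numbers only):
# annual maximum water level (load) and crest/strength level of a dike section.
load = rng.gumbel(loc=2.5, scale=0.4, size=n)      # metres above datum
strength = rng.normal(loc=4.5, scale=0.3, size=n)  # metres above datum

# Failure occurs when the load exceeds the strength; the Monte Carlo estimate of
# the annual failure probability is the fraction of sampled years where that happens.
p_fail = np.mean(load > strength)
print(f"estimated annual failure probability: {p_fail:.2e}")
```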
Pieter Abraham van de Velde
[ "Physics" ]
1,801
[ "Physical systems", "Hydraulics", "Delta Works" ]
78,740,968
https://en.wikipedia.org/wiki/Medog%20Hydropower%20Station
The Medog Hydropower Station () is a planned 60,000 megawatt (MW) hydroelectric dam project under development on the Yarlung Tsangpo river in Tibet Autonomous Region, China. Upon completion, it will become the world's largest hydropower facility, with an anticipated annual power generation capacity of 300 billion kilowatt-hours—triple that of the Three Gorges Dam. The Chinese government authorized the dam's construction in December 2024, with an estimated investment exceeding 1 trillion yuan (approximately US$137 billion). The project is intended to be developed as a single-phase installation, with commercial operations planned for 2033. Location The facility is planned to be constructed in Medog County within the Nyingtri Prefecture, situated near the Indian border state of Arunachal Pradesh. The dam site is intended to be located along the lower sections of the Yarlung Tsangpo, which originates in western Tibet's glacial regions. This watercourse continues into India as the Brahmaputra River and into Bangladesh as the Jamuna River, serving as a crucial water source for these regions. Overview The Medog Hydropower Station represents part of China's broader hydroelectric development strategy in Tibet. Since 2000, China has initiated or proposed 193 hydropower projects in the region, with approximately 60% still in planning or preparatory phases. As of late 2024, while construction approval has been granted, specific details regarding the project's commencement and completion timeline remain unpublished. The Chinese government has not yet released comprehensive environmental impact assessments or detailed implementation plans for the project. The project is wholly owned and developed by Power Construction Corporation of China (PowerChina), a state-owned construction enterprise. Commercial operations were planned to begin in 2033. With a projected investment more than quadruple that of the Three Gorges Dam (which cost 250 billion yuan), the Medog Hydropower Station represents one of China's most ambitious infrastructure projects and one of the most expensive infrastructure projects in history. The facility's planned annual power output of 300 billion kilowatt-hours would establish it as the world's most productive hydroelectric installation, significantly surpassing current records. The project intends to harness a 2,000 meter river elevation drop within a 50-kilometer stretch, granting it the ability to generate significant amounts of hydroelectric power. This section flows through the Yarlung Tsangpo Grand Canyon, recognized as Earth's deepest canyon system. The intended construction plan necessitates the excavation of four 20-kilometer tunnels through Namcha Barwa mountain to divert the Yarlung Tsangpo River. Criticism The project has faced resistance from various parties, which include environmental organizations, downstream nations, and Tibetan rights groups. Similar hydroelectric developments in Tibet have previously sparked protests, including recent demonstrations against the Kamtok Dam project on the Drichu/Yangtze River that led to over 1,000 arrests. India and Bangladesh have also voiced apprehension about the project's potential effects on their water resources. Cultural impact and displacement Tibetan rights organizations characterized the project as an example of resource exploitation at the expense of Buddhist cultural heritage and local communities. While specific displacement figures remain undisclosed, the project will necessitate population relocation in the affected area. 
For comparison, the Three Gorges Dam project resulted in approximately 1.4 million relocations, although the Medog region's lower population density suggests fewer displacements may be required. The development threatens to impact culturally significant sites in what Tibetans consider one of their most sacred regions. According to the International Campaign for Tibet, the 193 combined projects in the region could potentially displace over 1.2 million people and affect numerous religious sites if completed. Environmental Environmental organizations have identified several potential ecological consequences of the project. Many expressed concern about project's impact on the Tibetan Plateau's biodiversity. The region that will be impacted by the dam is recognized as one of Tibet's most ecologically diverse areas, leading to fears about ecosystem disruption. The dam's construction is expected to significantly alter downstream water flow patterns and impact local biodiversity. The project site's location in a seismically active zone prone to landslides has raised additional safety concerns, as the reservoir's water mass could potentially influence geological stability. The steep, narrow topography of the gorge caused geological experts to warn about increased landslide risks. In 2022, engineers from the Sichuan provincial geological bureau specifically highlighted the dangers of "earthquake-induced landslides and mud-rock flows" as significant threats to the project's stability. Chinese state media has characterized the project as environmentally conscious, emphasizing its role in advancing Beijing's climate neutrality objectives while promoting regional economic development. Chinese officials maintained that the project will have minimal environmental impact, though specific impact assessments remain unpublished. Water security The project has generated apprehension among downstream nations regarding water security. Hydrological experts have drawn parallels with China's previous dam projects on the Mekong River, where upstream water control has been associated with increased drought frequency and severity in downstream regions over the past twenty years. Critics noted that India and Bangladesh could face compromised water access, biodiversity disruption, and riverbank erosion akin to those faced by Thailand, Vietnam, and Cambodia from earlier Chinese hydroelectric projects. A 2020 analysis by the Lowy Institute indicated that China's control over Tibetan Plateau rivers could potentially provide significant geopolitical leverage over India's economy. Indian authorities responded to the project by exploring countermeasures, including the potential development of their own large-scale hydroelectrical dam and reservoir system to mitigate the dam's impacts. The Chinese Ministry of Foreign Affairs asserted in 2020 that China maintains a "legitimate right" to dam the river, stating they have considered downstream effects in their planning. See also Three Gorges Dam List of dams on the Brahmaputra River References Dams in China Hydroelectric power stations in Tibet Brahmaputra River Dams on the Brahmaputra River Reservoirs and dams in Tibet Dam controversies China–India relations Gravity dams Megaprojects Dams under construction in China
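The figures quoted above (a roughly 2,000 m drop and about 300 billion kWh per year) can be sanity-checked with the standard hydropower relation P = η ρ g Q H. The Python snippet below is an illustrative back-of-the-envelope calculation, not a project specification; the efficiency value is an assumption.

```python
# Rough consistency check: what average discharge would the quoted output imply?
rho, g = 1000.0, 9.81          # water density (kg/m^3), gravitational acceleration (m/s^2)
H = 2000.0                     # usable head in metres (figure quoted in the article)
eta = 0.9                      # assumed overall turbine/generator efficiency

E_year = 300e9 * 3.6e6                     # 300 billion kWh expressed in joules
P_avg = E_year / (365.25 * 24 * 3600)      # average power in watts (~34 GW)

Q = P_avg / (eta * rho * g * H)            # implied average discharge in m^3/s
print(f"average power: {P_avg/1e9:.1f} GW, implied discharge: {Q:.0f} m^3/s")
```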
Medog Hydropower Station
[ "Engineering" ]
1,217
[ "Megaprojects" ]
78,741,791
https://en.wikipedia.org/wiki/Enerisant
Enerisant is an experimental drug under investigation as a potential treatment for sleep-wake disorders, particularly narcolepsy. It belongs to the histamine H3 receptor antagonist/inverse agonist class of medications. Pharmacology Pharmacodynamics Enerisant functions as a potent and highly selective antagonist/inverse agonist of the histamine H3 receptor. This mechanism of action is similar to that of pitolisant, a currently approved H3 receptor antagonist/inverse agonist for narcolepsy; however, enerisant has demonstrated greater affinity and selectivity for the H3 receptor in preclinical studies. By blocking H3 receptors, enerisant increases histamine release from histaminergic neurons, leading to stimulation of postsynaptic histamine H1 receptors, a key mechanism in promoting wakefulness. Pharmacokinetics Enerisant exhibits minimal metabolism in humans and is primarily eliminated unchanged via renal excretion. After oral administration, it is rapidly absorbed and exhibits dose-dependent plasma concentrations. Within 48 hours, 64.5–89.9% of the administered dose is recovered unchanged in urine. Plasma protein binding is approximately 31.0–31.7% in humans. References Antihistamines Experimental drugs Carboxamides Ethers Morpholines Phenols Pyrazoles Pyrrolidines H3 receptor antagonists
Enerisant
[ "Chemistry" ]
289
[ "Organic compounds", "Functional groups", "Ethers" ]
71,383,907
https://en.wikipedia.org/wiki/Seismic%20wide-angle%20reflection%20and%20refraction
Seismic wide-angle reflection and refraction is a technique used in geophysical investigations of Earth's crust and upper mantle. It allows the development of a detailed model of seismic velocities beneath Earth's surface well beyond the reach of exploration boreholes. The velocities can then be used, often in combination with the interpretation of standard seismic reflection data and gravity data, to interpret the geology of the subsurface. Theory In comparison to the typical seismic reflection survey, which is restricted to relatively small incidence angles due to the limited offsets between source and receiver, wide-angle reflection and refraction (WARR) data are acquired with long offsets, allowing the recording of both refracted and wide-angle reflection arrivals. Acquisition The acquisition setup depends on the type of seismic source being used and the target of the investigation. Source The source of the seismic waves may be either "passive", e.g. naturally occurring sources, such as earthquakes, or anthropogenic sources, such as quarry blasts, or "active", sometimes referred to as "controlled source", e.g. explosive charges set off in shallow boreholes or seismic vibrators onshore or air guns offshore. Exceptionally, the sound waves from nuclear explosions have been used to look at the structure of the upper mantle down to the base of the transition zone at 660 km depth. Receiver The sound waves are normally recorded using 3-component seismometers, with ocean-bottom seismometers (OBS) used offshore. The three components allow the recording of S-waves as well as the P-waves that single component instruments can record. The offset range used depends on the depth of the target. For the top few kilometres of the crust, such as when investigating beneath a thick layer of basalt, a range of 10–20 km may be appropriate, while for the lower crust and mantle, offsets greater than 100 km are normally necessary. Modelling The processing approach used in standard seismic reflection profiling is not appropriate for wide-angle data. The main modelling approach used for WARR profiles is to match predicted travel times, based on the geology, with those observed in the data. An initial model of variations in seismic velocity is set up, based on whatever knowledge is available from other sources. A ray tracing algorithm is used to calculate the travel times and the model is adjusted iteratively to reduce the misfit between observed and modelled times. Most modelling uses P-waves, but S-waves are also modelled in some cases. References Geophysics
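To make the travel-time modelling step concrete, here is a minimal Python sketch for the simplest possible case, a single flat refractor between two constant-velocity layers; it computes the direct, wide-angle reflection and head-wave (refracted) arrival times as a function of offset. It is an illustrative toy, not the ray-tracing machinery used for real WARR models, and all velocities, depths and offsets are assumed values.

```python
import numpy as np

def travel_times(offsets, v1, v2, h):
    """Direct, reflected and head-wave times for a flat two-layer model
    (layer of velocity v1 over a half-space of velocity v2, interface at depth h)."""
    x = np.asarray(offsets, dtype=float)
    t_direct = x / v1
    t_reflect = np.sqrt(x**2 + 4 * h**2) / v1
    theta_c = np.arcsin(v1 / v2)                       # critical angle
    t_head = x / v2 + 2 * h * np.cos(theta_c) / v1     # intercept-time formula
    x_crit = 2 * h * np.tan(theta_c)                   # head wave exists beyond this offset
    t_head = np.where(x >= x_crit, t_head, np.nan)
    return t_direct, t_reflect, t_head

# Assumed toy crustal model: 4.0 km/s over 6.5 km/s, interface at 10 km depth.
x = np.linspace(0, 120, 7)   # offsets in km
for name, t in zip(("direct", "reflection", "head wave"),
                   travel_times(x, 4.0, 6.5, 10.0)):
    print(name, np.round(t, 2))
```

In a full WARR workflow these forward-modelled times would be compared with the picked arrivals and the velocity model adjusted iteratively to reduce the misfit, as described above.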
Seismic wide-angle reflection and refraction
[ "Physics" ]
520
[ "Applied and interdisciplinary physics", "Geophysics" ]
72,910,789
https://en.wikipedia.org/wiki/Absolutely%20maximally%20entangled%20state
The absolutely maximally entangled (AME) state is a concept in quantum information science, with applications in quantum error-correcting codes, the discrete AdS/CFT correspondence, the AdS/CMT correspondence, and more. It is the multipartite generalization of the bipartite maximally entangled state. Definition The bipartite maximally entangled state is the one for which the reduced density operators are maximally mixed, i.e., $\rho_A = \rho_B = \mathbb{1}/d$ for local dimension $d$. Typical examples are Bell states. A multipartite state $|\psi\rangle$ of an $n$-party system $S = \{1, \dots, n\}$ with local dimension $d$ is called absolutely maximally entangled if for any bipartition $S = A \cup B$ with $|A| \le |B|$, the reduced density operator is maximally mixed, $\rho_A = \mathbb{1}/d^{|A|}$, where $|A| \le \lfloor n/2 \rfloor$. Property The AME state does not always exist; for some given local dimension and number of parties, there is no AME state. There is a list of AME states in low dimensions created by Huber and Wyderka. The existence of an AME state can be transformed into the existence of a solution to a specific quantum marginal problem. AME states can also be used to build a kind of quantum error-correcting code called a holographic error-correcting code. References Quantum information science Quantum states
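The defining condition is easy to test numerically for small systems. The following NumPy sketch is illustrative and not from the article (the function names are invented); it traces out parties and checks every reduction of up to half the system against the maximally mixed state.

```python
import numpy as np
from itertools import combinations

def reduced_density(psi, keep, n, d):
    """Partial trace of |psi><psi| over all parties not in `keep` (n parties, local dim d)."""
    psi = psi.reshape([d] * n)
    rest = [ax for ax in range(n) if ax not in keep]
    # Put the kept parties first, then flatten into a (kept x traced) matrix.
    m = np.transpose(psi, list(keep) + rest).reshape(d ** len(keep), d ** len(rest))
    return m @ m.conj().T

def is_ame(psi, n, d, tol=1e-10):
    """True if every reduction to floor(n/2) or fewer parties is maximally mixed."""
    for k in range(1, n // 2 + 1):
        for keep in combinations(range(n), k):
            rho = reduced_density(psi, keep, n, d)
            if not np.allclose(rho, np.eye(d ** k) / d ** k, atol=tol):
                return False
    return True

# Example: the 3-qubit GHZ state is AME(3,2); every single-qubit reduction equals I/2.
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
print(is_ame(ghz, n=3, d=2))   # True
```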
Absolutely maximally entangled state
[ "Physics" ]
243
[ "Quantum states", "Quantum mechanics" ]
72,912,536
https://en.wikipedia.org/wiki/Nine-point%20stencil
In numerical analysis, given a square grid in two dimensions, the nine-point stencil of a point in the grid is a stencil made up of the point itself together with its eight "neighbors". It is used to write finite difference approximations to derivatives at grid points. It is an example of numerical differentiation. This stencil is often used to approximate the Laplacian of a function of two variables. Motivation If we discretize the 2D Laplacian by using central-difference methods, we obtain the commonly used five-point stencil, represented by the convolution kernel $\frac{1}{\Delta x^2}\begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}$. Even though it is simple to obtain and computationally lighter, the central-difference kernel possesses an undesired intrinsic anisotropic property, since it doesn't take into account the diagonal neighbours. This intrinsic anisotropy poses a problem when applied to certain numerical simulations or when more accuracy is required, by propagating the Laplacian effect faster in the coordinate-axis directions and slower in the other directions, thus distorting the final result. This drawback calls for finding better methods for discretizing the Laplacian, reducing or eliminating the anisotropy. Implementation The two most commonly used isotropic nine-point stencils can be obtained by blending the five-point cross kernel with the diagonal (rotated) five-point kernel: $\nabla^2_\gamma = \frac{1-\gamma}{\Delta x^2}\begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix} + \frac{\gamma}{2\Delta x^2}\begin{bmatrix} 1 & 0 & 1 \\ 0 & -4 & 0 \\ 1 & 0 & 1 \end{bmatrix}$. The first one is known as the Oono–Puri kernel, and it is obtained when γ = 1/2: $\frac{1}{4\Delta x^2}\begin{bmatrix} 1 & 2 & 1 \\ 2 & -12 & 2 \\ 1 & 2 & 1 \end{bmatrix}$. The second one is known as the Patra–Karttunen or Mehrstellen kernel, and it is obtained when γ = 1/3: $\frac{1}{6\Delta x^2}\begin{bmatrix} 1 & 4 & 1 \\ 4 & -20 & 4 \\ 1 & 4 & 1 \end{bmatrix}$. Both are isotropic forms of the discrete Laplacian, and in the limit of small Δx they become equivalent, with Oono–Puri described as the optimally isotropic form of discretization, displaying reduced overall error, and Patra–Karttunen having been systematically derived by imposing conditions of rotational invariance, displaying the smallest error around the origin. Desired anisotropy On the other hand, if controlled anisotropic effects are a desired feature, for example when solving anisotropic diffusion problems, it is also possible to use the 9-point stencil combined with tensors to generate them. Consider the Laplacian in the form $\nabla \cdot (c\,\nabla u)$, where $c$ is just a constant coefficient. Now replace $c$ by the 2nd-rank tensor $C = \begin{bmatrix} c_1 & 0 \\ 0 & c_2 \end{bmatrix}$, where $c_1$ is the constant coefficient for the principal direction along the x axis, and $c_2$ is the constant coefficient for the secondary direction along the y axis. In order to generate anisotropic effects, $c_1$ and $c_2$ must be different. By multiplying $C$ by the rotation matrix $Q(\theta)$, we obtain $C' = Q\,C\,Q^{T}$, allowing anisotropic propagation in arbitrary directions other than the coordinate axes; this is very similar to the Cauchy stress tensor in 2 dimensions. The angle $\theta$ can be obtained by generating a vector field in order to orient the pattern as desired, and different anisotropic effects can be produced from the same vector field. It is important to note that, regardless of the value of $\theta$, the anisotropic propagation will occur parallel to the secondary direction $c_2$ and perpendicular to the principal direction $c_1$. The resulting nine-point convolution kernel contains a mixed term $c_{xy}$ arising from the off-diagonal entries of $C'$. If, for example, $c_1 = c_2 = 1$, the $c_{xy}$ component will vanish, resulting in the simple five-point stencil, rendering no controlled anisotropy. If $c_2 > c_1$ and $\theta = 0$, the anisotropic effects will be more pronounced in the vertical axis. If $c_2 > c_1$ and $\theta = 45$ degrees, the anisotropic effects will be more pronounced in the upper-right / lower-left diagonal.
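As a concrete check of the isotropic kernels above, here is a short Python/NumPy sketch (illustrative, not part of the article) that builds the blended kernel for an arbitrary γ and applies it to a field whose Laplacian is known exactly.

```python
import numpy as np
from scipy.signal import convolve2d

def nine_point_kernel(gamma, dx=1.0):
    """Blend of the five-point cross stencil and the diagonal stencil
    (gamma = 1/2: Oono-Puri, gamma = 1/3: Patra-Karttunen / Mehrstellen)."""
    cross = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    diag = 0.5 * np.array([[1, 0, 1], [0, -4, 0], [1, 0, 1]], dtype=float)
    return ((1 - gamma) * cross + gamma * diag) / dx**2

oono_puri = nine_point_kernel(0.5)        # 1/4 * [[1,2,1],[2,-12,2],[1,2,1]]
patra_karttunen = nine_point_kernel(1/3)  # 1/6 * [[1,4,1],[4,-20,4],[1,4,1]]

# Apply the discrete Laplacian to u(x, y) = x^2 + y^2, whose true Laplacian is 4
# everywhere; interior values should come out close to 4.
x, y = np.meshgrid(np.linspace(-1, 1, 101), np.linspace(-1, 1, 101))
dx = x[0, 1] - x[0, 0]
u = x**2 + y**2
lap = convolve2d(u, nine_point_kernel(1/3, dx), mode="same")
print(lap[50, 50])   # ~4.0 away from the boundaries
```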
References Numerical analysis Mathematical physics Computational science Finite differences Numerical differential equations Mathematical analysis Linear operators in calculus Non-Newtonian calculus
Nine-point stencil
[ "Physics", "Mathematics" ]
788
[ "Mathematical analysis", "Calculus", "Applied mathematics", "Theoretical physics", "Computational mathematics", "Non-Newtonian calculus", "Finite differences", "Computational science", "Mathematical relations", "Numerical analysis", "Mathematical physics", "Approximations" ]
74,385,749
https://en.wikipedia.org/wiki/Accumulated%20winter%20season%20severity%20index
The accumulated winter season severity index or AWSSI provides a scientific way to compare the severity of a winter relative to its weather history. Points are assigned daily based on the maximum and minimum temperature, snowfall and snow depth for a specific site and accumulated through the winter. The index can be used for historical comparisons, road maintenance and to understand how severe a current winter is. History The AWSSI was originally developed in 2015 by researchers Barbara E. Mayes Boustead, Steven D. Hilberg, Martha D. Shulski and Kenneth G. Hubbard. The index was developed "to examine relationships to teleconnection patterns, determine trends, and create sector-specific applications, as well as to analyze an ongoing winter or any individual winter season to place its severity in context." Calculation Values are assigned on a daily basis based on the maximum and minimum temperature, 24-hour snowfall and depth of snow on the ground. Values start being calculated at the start of winter. The start of winter is defined when any of these conditions are met: 1) daily maximum temperature ≤ 32 °F (0 °C), 2) first measurable snowfall or 3) it is December 1. Likewise, values stop being calculated at the end of winter – when the last of the following four conditions occurs: 1) daily maximum temperature ≤ 32 °F (0 °C) no longer occurs, 2) no daily measurable snowfall, 3) daily snow depth ≥ 1.0 in. (2.5 cm) is no longer observed, or 4) it is March 1. AWSSI climatology See also Regional Snowfall Index References External links Accumulated Winter Season Severity Index (AWSSI) Hazard scales Meteorological quantities National Centers for Environmental Prediction
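The accumulation logic can be sketched in a few lines of Python. The daily scoring function below is a placeholder (the published AWSSI point tables from Boustead et al. are not reproduced in the article), so only the start-of-winter test and the running accumulation reflect the description above; all names and thresholds in the scoring function are illustrative assumptions.

```python
import datetime as dt

def winter_started(date, tmax_f, snowfall_in):
    """Start of the AWSSI season: first day with tmax <= 32 F, first measurable
    snowfall (>= 0.1 in), or December 1, whichever comes first."""
    return tmax_f <= 32 or snowfall_in >= 0.1 or (date.month, date.day) >= (12, 1)

def daily_points(tmax_f, tmin_f, snowfall_in, snow_depth_in):
    """Placeholder scoring, NOT the published AWSSI tables: degrees of max/min
    temperature below freezing plus weighted snowfall and snow-depth terms."""
    pts = max(0, 32 - tmax_f) + max(0, 32 - tmin_f)
    pts += 2 * snowfall_in + snow_depth_in
    return round(pts)

def accumulate(daily_records):
    """daily_records: iterable of (date, tmax_f, tmin_f, snowfall_in, snow_depth_in)."""
    total, started = 0, False
    for date, tmax, tmin, snow, depth in daily_records:
        started = started or winter_started(date, tmax, snow)
        if started:
            total += daily_points(tmax, tmin, snow, depth)
    return total

records = [(dt.date(2023, 11, 28), 33, 30, 0.0, 0),
           (dt.date(2023, 11, 29), 28, 15, 2.5, 2),
           (dt.date(2023, 11, 30), 20, 5, 0.5, 3)]
print(accumulate(records))
```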
Accumulated winter season severity index
[ "Physics", "Mathematics" ]
345
[ "Quantity", "Physical quantities", "Meteorological quantities" ]
74,389,181
https://en.wikipedia.org/wiki/Haldane%E2%80%93Shastry%20model
In quantum statistical physics, the Haldane–Shastry model is a spin chain, defined on a one-dimensional, periodic lattice. Unlike the prototypical Heisenberg spin chain, which only includes interactions between neighboring sites of the lattice, the Haldane–Shastry model has long-range interactions, that is, interactions between any pair of sites, regardless of the distance between them. The model is named after and was defined independently by Duncan Haldane and B. Sriram Shastry. It is an exactly solvable model, and was exactly solved by Shastry. Formulation For a chain with $N$ spin-1/2 sites, the quantum phase space is described by the Hilbert space $\mathcal{H} = (\mathbb{C}^2)^{\otimes N}$. The Haldane–Shastry model is described by the Hamiltonian $H = \sum_{1 \le j < k \le N} \frac{\vec{\sigma}_j \cdot \vec{\sigma}_k}{d_{jk}^2}$, where $\vec{\sigma}_j$ denotes the Pauli vector at the $j$th site (acting nontrivially on the $j$th copy of $\mathbb{C}^2$ in $\mathcal{H}$). Note that the pair potential suppressing the interaction strength at longer distances is an inverse square $1/d_{jk}^2$, with $d_{jk} = 2\left|\sin\frac{\pi(j-k)}{N}\right|$ the chord distance between the $j$th and $k$th sites, viewed as being equispaced on the unit circle. See also Inozemtsev model References Quantum lattice models Spin models
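A dense-matrix construction for very small chains makes the definition concrete. The sketch below is illustrative (not from the article) and uses the unit-circle chord-distance convention written above; it scales as 2^N and is only meant for N of order 10 or less.

```python
import numpy as np

# Pauli matrices and the 2x2 identity.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def site_op(op, j, n):
    """Embed a single-site operator at site j of an n-site chain via Kronecker products."""
    mats = [id2] * n
    mats[j] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def haldane_shastry(n):
    """H = sum_{j<k} (sigma_j . sigma_k) / d_jk^2, with d_jk the chord distance
    between sites equispaced on the unit circle."""
    dim = 2 ** n
    H = np.zeros((dim, dim), dtype=complex)
    for j in range(n):
        for k in range(j + 1, n):
            d = 2 * abs(np.sin(np.pi * (j - k) / n))
            for op in (sx, sy, sz):
                H += site_op(op, j, n) @ site_op(op, k, n) / d**2
    return H

H = haldane_shastry(6)                 # 64 x 64 matrix for N = 6
print(np.allclose(H, H.conj().T))      # Hermitian: True
print(np.linalg.eigvalsh(H)[0])        # ground-state energy in this convention
```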
Haldane–Shastry model
[ "Physics" ]
238
[ "Spin models", "Statistical mechanics", "Quantum mechanics", "Quantum lattice models" ]
77,326,711
https://en.wikipedia.org/wiki/Magnetization%20roasting%20technology
Magnetic roasting technology refers to the process of heating materials or ores under specific atmospheric conditions to induce chemical reactions. This process selectively converts weakly magnetic iron minerals such as hematite (Fe2O3), siderite (FeCO3), and limonite (Fe2O3·nH2O) into strongly magnetic magnetite (Fe3O4) or maghemite (γ-Fe2O3), while the magnetic properties of gangue minerals remain almost unchanged. By artificially increasing the magnetic disparity between iron oxides and gangue minerals through magnetic roasting, the selectivity of iron ore separation is improved, making it the most effective method for separating refractory iron ores. Additionally, the roasting process can eliminate harmful impurities such as crystalline water, sulfur, and arsenic from the ore, loosening the ore structure and enhancing subsequent grinding efficiency. Researchers in mineral processing have been developing magnetic roasting technology for iron ore since the early 20th century. Depending on the type of reactor used, magnetic roasting can be classified into shaft furnace roasting, rotary kiln roasting, fluidized bed roasting, and microwave roasting. Types Shaft furnace magnetic roasting Shaft furnace magnetization roasting is a metallurgical process, mainly used to treat iron ore, in which iron oxides (such as hematite and limonite) react with reducing agents (such as coal, coke or gas) in a high-temperature environment and are reduced to magnetic iron minerals (mainly magnetite). The process is usually carried out in a vertical furnace: the charge moves downward under gravity, is heated layer by layer and undergoes the reduction reaction, finally yielding magnetic iron ore with improved magnetic separation performance, which facilitates the subsequent beneficiation and smelting process. The main steps of magnetizing roasting in a shaft furnace include: Charge preparation: Mix iron ore with reducing agent in a certain proportion. Heating: The charge is added from the top of the shaft furnace and heated layer by layer as it falls to a roasting temperature (usually between 700 °C and 900 °C). Reduction reaction: The iron oxide in the ore reacts with the reducing agent and is reduced to magnetic iron minerals (such as magnetite). Cooling and discharge: The roasted material is cooled and discharged from the bottom of the shaft furnace. Rotary kiln magnetic roasting Magnetization roasting in a rotary kiln reduces iron oxides (such as hematite and limonite) in the ore to magnetic iron minerals (mainly magnetite) by reacting the ore with reducing agents (such as coal, coke or natural gas) in a rotating high-temperature kiln. This process helps to improve the magnetic separation performance of iron ore and facilitates subsequent beneficiation and smelting operations. The main steps of magnetization roasting in a rotary kiln include: Raw material preparation: Mix the iron ore with an appropriate amount of reducing agent, adding a binder if necessary to improve the roasting effect. Feed: The mixture is fed uniformly into the kiln head of the rotary kiln through the feed device. Roasting: The rotary kiln rotates at high temperatures (usually between 700 °C and 900 °C), and the material is continuously tumbled and moved forward in the kiln, in full contact with the reducing atmosphere, so that the iron oxide is reduced to magnetic iron minerals (such as magnetite).
Cooling and discharge: The calcined material is cooled by a cooling system (such as a cooling kiln or cooling cylinder) and discharged from the end of the rotary kiln. Fluidized bed magnetic roasting Fluidized bed magnetic roasting uses a suspension roaster to thoroughly mix and contact fine ore with reducing agents (such as pulverized coal or natural gas) in a high-temperature environment, so that the iron oxides in the ore (such as hematite and limonite) are reduced to magnetic iron minerals (mainly magnetite), thereby improving the magnetic separation performance of the ore and facilitating subsequent beneficiation and smelting operations. The main steps of suspension magnetization roasting include: Raw material preparation: Mix iron ore powder with reducing agent, adding auxiliary agents if necessary to improve the roasting effect. Roasting: The mixed material is suspended in the roasting furnace by the gas flow, and at high temperatures (usually between 700 °C and 900 °C) the material is in full contact with the reducing gas, so that the reduction reaction converts the iron oxide into magnetic iron minerals. Cooling and collection: The roasted material is cooled by a cooling system and collected by equipment such as a cyclone or a cloth bag collector. Microwave magnetic roasting Microwave magnetization roasting uses microwave radiation as an energy source to reduce iron oxides (such as hematite and limonite) in iron ore to magnetic iron minerals (mainly magnetite). In this process the ore is heated rapidly, so that the reduction reaction is completed in a short time, improving the magnetic separation performance of the ore and facilitating subsequent beneficiation and smelting operations. The main steps of microwave magnetization roasting include: Raw material preparation: Mix iron ore powder evenly with a reducing agent (such as carbon powder or pulverized coal). Microwave heating: The mixture is placed in a microwave furnace and heated by microwave radiation. Microwave energy acts directly on the material, rapidly heating it to the desired roasting temperature (usually between 500 °C and 900 °C). Reduction reaction: At high temperatures, iron oxides and reducing agents undergo a reduction reaction to produce magnetic iron minerals (such as magnetite). Cooling and collection: The roasted material is cooled by a cooling system and collected for further treatment. Magnetization roasting method The commonly used magnetization roasting methods can be divided into reduction roasting, neutral roasting, oxidation roasting, redox roasting and reduction-oxidation roasting. Reduction roasting After heating to a certain temperature, hematite, limonite and iron-manganese ore can be transformed into strongly magnetic magnetite by reacting with an appropriate amount of reducing agent. Commonly used reducing agents are C, CO, H2 and so on. The reactions of hematite with the reducing agents are as follows: 3 Fe2O3 + C -> 2 Fe3O4 + CO 3 Fe2O3 + CO -> 2 Fe3O4 + CO2 3 Fe2O3 + H2 -> 2 Fe3O4 + H2O Neutral roasting Carbonate iron ores such as siderite, magnesite and magnesium siderite can be decomposed to produce magnetite after heating to a certain temperature (300–400 °C) without air or with a small amount of injected air. The chemical reactions are as follows: 3 FeCO3 -> Fe3O4 + 2 CO2 + CO 6 FeCO3 + O2 -> 2 Fe3O4 + 6 CO2 Oxidation roasting Pyrite (FeS2) oxidized in oxygen for a short time is converted to pyrrhotite (Fe7S8). If the roasting time is long enough, the pyrrhotite can continue to react to form magnetite.
The chemical reactions are as follows: 7 FeS2 + 6 O2 -> Fe7S8 + 6 SO2 3 Fe7S8 + 38 O2 -> 7 Fe3O4 + 24 SO2 Redox roasting For iron ore containing siderite together with hematite or limonite, when the ratio of siderite to hematite is less than 1, the siderite is first oxidized to hematite to a certain extent in an oxidizing atmosphere, and then reduced to magnetite together with the original hematite in the ore in a reducing atmosphere. Reduction-oxidation roasting The magnetite produced by magnetization roasting of various iron ores can be oxidized into strongly magnetic maghemite (γ-Fe2O3) when cooled to below 400 °C in an oxygen-free atmosphere and then brought into contact with air. The chemical reaction is as follows: 4 Fe3O4 + O2 -> 6 Fe2O3 (as maghemite, γ-Fe2O3) See also Iron Chemical reaction Roasting (metallurgy) Magnetic separation Rotary kiln References External links Metallurgical processes
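The reduction-roasting equations above fix the stoichiometry, so the reductant demand can be estimated directly. The short Python calculation below is illustrative and not from the article; the molar masses are standard atomic weights, and it works through the reaction 3 Fe2O3 + C -> 2 Fe3O4 + CO for one tonne of hematite.

```python
# Molar masses in g/mol (standard atomic weights).
M_Fe, M_O, M_C = 55.845, 15.999, 12.011
M_Fe2O3 = 2 * M_Fe + 3 * M_O          # ~159.7 g/mol
M_Fe3O4 = 3 * M_Fe + 4 * M_O          # ~231.5 g/mol

# 3 Fe2O3 + C -> 2 Fe3O4 + CO : 1 mol of C per 3 mol of Fe2O3.
tonnes_hematite = 1.0
mol_Fe2O3 = tonnes_hematite * 1e6 / M_Fe2O3
mol_C = mol_Fe2O3 / 3
mol_Fe3O4 = 2 * mol_Fe2O3 / 3

print(f"carbon needed:      {mol_C * M_C / 1e3:.1f} kg per tonne Fe2O3")     # ~25 kg
print(f"magnetite produced: {mol_Fe3O4 * M_Fe3O4 / 1e3:.0f} kg per tonne")   # ~967 kg
```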
Magnetization roasting technology
[ "Chemistry", "Materials_science" ]
1,729
[ "Metallurgical processes", "Metallurgy" ]
69,821,827
https://en.wikipedia.org/wiki/Shape%20control%20in%20nanocrystal%20growth
Shape control in nanocrystal growth is the control of the shape of nanocrystals (crystalline nanoparticles) formed in their synthesis by means of varying reaction conditions. This is a concept studied in nanosciences, which is a part of both chemistry and condensed matter physics. There are two processes involved in the growth of these nanocrystals. Firstly, volume Gibbs free energy of the system containing the nanocrystal in solution decreases as the nanocrystal size increases. Secondly, each crystal has a surface Gibbs free energy that can be minimized by adopting the shape that is energetically most favorable. Surface energies of crystal planes are related to their Miller indices, which is why these can help predict the equilibrium shape of a certain nanocrystal. Because of these two different processes, there are two competing regimes in which nanocrystal growth can take place: the kinetic regime, where the crystal growth is controlled by minimization of the volume free energy, and the thermodynamic regime, where growth is controlled by minimization of the surface free energy. High concentration, low temperatures and short aging times favor the kinetic regime, whereas low concentration, high temperatures and long aging times favor the thermodynamic regime. The different regimes lead to different shapes of the nanocrystals: the kinetic regime can give anisotropic shapes which are often connected to the kinetic Wulff construction, whereas the thermodynamic regime gives equilibrium, isotropic shapes, which can be determined using the Wulff construction. The shape of the nanocrystal determines many properties of the nanocrystal, such as the band gap and polarization of emitted light. Miller indices and surface energy The surface energy of a solid is the free energy per unit area of its surface. It equals half the energy per unit area needed for cutting a larger piece of solid in two parts along the surface under examination. This costs energy because chemical bonds are broken. Typically, materials are considered to have one specific surface energy. However, in the case of crystals, the surface energy depends on the orientation of the surface with respect to the unit cell. Different facets of a crystal thus often have different surface energies. This can be understood from the fact that in non-crystalline materials, the building blocks that make up the material (e.g., atoms or molecules) are spread in a homogeneous manner. On average, the same number of bonds needs to be broken, so the same energy per unit area is needed, to create any surface. In crystals, surfaces exhibit a periodic arrangement of particles which is dependent on their orientation. Different numbers of bonds with different bond strengths are broken in the process of creating surfaces along different planes of the material, which causes the surface energies to be different. The type of plane is most easily described using the orientation of the surface with respect to a given unit cell that is characteristic of the material. The orientation of a plane with respect to the unit cell is most conveniently expressed in terms of Miller indices. For example, the set of Miller indices (110) describes the set of parallel planes (family of lattice planes) parallel to the z-axis and cutting the x- and the y-axis once, such that every unit cell is bisected by precisely one of those planes in the x- and y-direction. Generally, a surface with high Miller indices has a high surface energy. 
Qualitatively, this follows from the fact that for higher Miller indices, on average more surface atoms sit at corner positions instead of terrace positions. After all, corner atoms have even fewer neighbours to interact with than terrace atoms; for example, in the case of a 2D square lattice, they have two instead of three neighbours. These additionally broken bonds all cost energy, which is why planes with lower Miller indices generally have lower surface energies and are as a consequence more stable. However, the comparison is in fact somewhat more complex, as the surface energy as a function of the Miller indices also depends on the structure of the crystal lattice (e.g., bcc or fcc), and bonds between non-nearest neighbours play a role as well. Experimental research on the noble metals (copper, gold and silver) shows that for these materials the surface energy is well approximated by taking only the nearest neighbours into account; the next-nearest neighbour interactions apparently do not play a major role in these metals. Also, breaking any of the nearest-neighbour bonds turns out to cost the same amount of energy. Within this approximation, the surface energy of a surface with Miller indices (hkl) is given by $\gamma_{hkl} = B_{hkl}\,\gamma_{111}$, with $B_{hkl}$ the ratio of the number of bonds broken per unit area when making this (hkl) plane with respect to making a (111) plane, and $\gamma_{111}$ the surface energy of the (111) plane. For any surface of an fcc crystal, $B_{hkl}$ is given by $B_{hkl} = \frac{2h + k}{\sqrt{3}\,\sqrt{h^2 + k^2 + l^2}}$, assuming $h \ge k \ge l$. In this model, the surface energy indeed increases with higher Miller indices. This is also visible in computer-simulated surface energies of planes in copper (Cu), silver (Ag) and gold (Au): with $N_{hkl}$ the number of broken bonds between nearest neighbours created when making the surface (being 3 for the (111) plane), the surface energy increases for a larger number of broken bonds and therefore for larger Miller indices. It is also possible for surfaces with high Miller indices to have a low surface energy, mainly if the unit cell contains multiple atoms. After all, Miller indices are based on the unit cell, and it is the atoms, not the unit cell, that are physically present. The choice of unit cell is to some extent arbitrary, as it is a construction of the interpreter. High-Miller-index planes with low surface energy can be found by searching for planes with a high density of atoms. A large density of atoms in a plane implies a large number of in-plane bonds and thus a small number of out-of-plane bonds that would cause the surface energy to be large. If a crystal's unit cell contains only one atom, those planes naturally correspond to the planes with low Miller indices, which is why planes with low Miller indices are usually considered to have a low surface energy. Computer-simulated surface energies of (hk0) planes in a NiO crystal illustrate the opposite case: here the unit cell has a multi-atom basis, as there are two types of atoms that make up the crystal (nickel and oxygen), and the trend between surface energy and Miller indices is not as straightforward as for the noble metals discussed above. Surface energy and equilibrium shape Planes with low surface energies are relatively stable and thus tend to be predominantly present in the thermodynamic equilibrium shape of a crystal. After all, in equilibrium, the free energy is minimized.
However, a crystal's thermodynamic equilibrium shape typically does not only consist of planes with the lowest possible surface energy. The reason for this is that involving planes with a slightly higher surface energy can decrease the total surface area, which lowers the total energy penalty for creating the material's surface. The optimum shape in terms of free energy can be determined by the Wulff construction. Thermodynamic versus kinetic control The growth of crystals can be carried out under two different regimes: the thermodynamic and the kinetic regime. Research on this topic is mainly centered around nanocrystals, as their synthesis is not as straightforward as that of bulk materials and thus requires a deeper understanding of the types of crystal growth. Due to their high surface-to-volume ratio and the resulting instability, nanocrystals most easily show the difference between the thermodynamic and kinetic regime. These concepts can however be generalized further to bulk material. A commonly used production method of nanocrystals is that of growth by monomer addition. A seed is formed or placed in a solution of monomers that are the building blocks of the crystal. The nanocrystal (seed) grows larger by consuming the monomers in solution. The addition of a monomer to the crystal happens at the highest-energy facet of the crystal, since that is the most active site and the monomer deposition thus has the lowest activation energy there. Usually, this facet is situated at a corner of the nanoparticle. These facets however, as explained in the section above, are not the most energetically favorable position for the added monomer. Thus the monomer will, if it gets the chance to, diffuse along the crystal surface to a lower-energy site. The regime in which the monomers have the time to relocate is called the thermodynamic regime, as the product that is formed is the one expected thermodynamically. In the kinetic regime, the addition of monomers happens so rapidly that the crystal continues growing at the corners. In this case, the formed product is not at a global minimum of the free energy, but is in a metastable anisotropic state. Thermodynamic regime The thermodynamic regime is characterized by relatively low growth rates. Because of these, the decrease in Gibbs free energy from incorporating a new monomer is smaller than that from rearranging the surface. The former is associated with the minimization of the volume Gibbs free energy, whereas the latter is associated with minimizing the surface free energy. Thus, the shape evolution is driven by minimization of the surface Gibbs free energy, and therefore the equilibrium shape is the one with the lowest overall surface Gibbs free energy. This corresponds to the shape with a global minimum in Gibbs free energy, which can be obtained via the Wulff construction. From this Wulff construction, it also follows that the thermodynamic product is always symmetrical. The activation energy for the thermodynamic product is higher than the activation energy for the kinetic product. From the Arrhenius equation $k = A\,e^{-E_a/(k_B T)}$, with $k$ the reaction rate, $A$ a constant, $E_a$ the activation energy, $k_B$ the Boltzmann constant and $T$ the temperature, it follows that overcoming a higher activation energy barrier requires a higher temperature. The thermodynamic regime is therefore associated with high-temperature conditions.
The thermodynamic regime can also be characterized by giving the system a sufficiently long time to rearrange its atoms such that the global minimum in Gibbs free energy of the entire system is reached. Raising the temperature has a similar effect because the extra thermal energy increases the mobility of the atoms on the surface, making rearrangements easier. Finally, the thermodynamic product can be obtained by having a low monomer concentration. This too ties into the longer time the system has at hand to rearrange before incorporating the next monomer at a lower monomer concentration, as the speed of diffusion of monomers through the solution to the crystal is strongly dependent on their concentration. Kinetic regime The kinetic control regime is characterized by high growth rates. Due to these, the system is driven by lowering the volume Gibbs free energy, which decreases rapidly upon monomer consumption. Minimization of the surface Gibbs free energy is of less relevance to the system and the shape evolution is controlled by reaction rates instead. Thus the product obtained in this regime is a metastable state, with a local minimum in Gibbs free energy. Kinetic control is obtained when there is not enough time for atoms on the surface to diffuse to an energetically more favorable state. Conditions that favor kinetic control are low temperatures (to ensure thermal energy is smaller than activation energy of the thermodynamic reaction) and high monomer concentration (in order to obtain high growth rates). Because of the high concentrations needed for kinetic control, the diffusion spheres around the nanocrystal are small and have steep concentration gradients. The more extended parts of the crystal that reach out further through the diffusion sphere grow faster, because they reach parts of the solution where the concentration is higher. The extended facets thus grow even faster, which can result in an anisotropic product. Due to this effect, an important factor determining the final shape of the product is the shape of the initial seed. Consequences of shape and size The band gap as well as the density of states of nanoparticles depend significantly on their shape and size. Generally, smaller nanoparticles have a larger band gap. Quantum confinement effects lie at the basis of this. Whereas the density of states is a smooth function for 3D crystals which are large in any direction, it becomes saw-tooth-shaped for 2D nanocrystals (e.g., disks), staircase-shaped for 1D nanocrystals (e.g., wires) and a delta function for 0D nanocrystals (balls, pyramids etc.). Also, the polarization of emitted light and its magnetic anisotropy are affected by the shape of the nanoparticle. Studying different shapes of nanoparticles can improve the understanding of quantum confinement effects. By elongating an axis in certain spherical nanoparticles (quantum dots), degeneracies in the energy levels can be resolved. Also, the energy difference between photon absorption and photon emission can be tuned using shape control. This could possibly be utilized in LED technology, as it helps to prevent re-adsorption. References Nanomaterials
Shape control in nanocrystal growth
[ "Materials_science" ]
2,754
[ "Nanotechnology", "Nanomaterials" ]
69,824,159
https://en.wikipedia.org/wiki/Borak%20%28cosmetic%29
Borak or burak is a cosmetic face powder or paste that is applied on the face for protection from the sun. It is traditionally used by the Sama-Bajau people of the Philippines, Malaysia, and Indonesia. Borak is most commonly used by Sama-Bajau women to protect the face and exposed skin areas from the harsh tropical sun at sea. Ingredients can include talcum powder, rice flour, and turmeric, among others. When dry, borak is in powder form. The powder is first soaked in water to form a paste before being applied on the face. The paste can be a yellowish color or sometimes white. Similar pastes In Myanmar, thanaka, a yellow-white cosmetic paste made of ground tree bark, is traditionally used for sun protection. In Madagascar, a paste of wood called masonjoany is worn for decoration as well as for sun protection. See also Sunscreen Masonjoany Thanaka Lotion References Bajau culture Cosmetics Skin care Powders
Borak (cosmetic)
[ "Physics" ]
203
[ "Materials", "Powders", "Matter" ]
69,826,457
https://en.wikipedia.org/wiki/Ganoderma%20microsporum%20immunomodulatory%20protein
Ganoderma microsporum immunomodulatory protein or GMI is a protein discovered from the mushroom species Ganoderma microsporum. GMI is a pure protein composed of 111 amino acids and exists in nature as a tetramer. Discovery GMI is found in the mycelium of Ganoderma microsporum. During the life cycle of G. microsporum, GMI acts as an important signaling factor in the transition from the fungi's mycelium phase to the fruiting body phase. However, the levels of GMI found in both the mycelium and fruiting body are very low. In 2005, researchers utilized genetic and bio-engineering methods to obtain purified GMI, and proved that the protein is structurally similar to LZ-8, the first fungal immunomodulatory protein discovered in 1989. The name GMI is derived from the fact that when cultured with immune cells, GMI was found to not only increase the cells’ hormone production, but also induce higher levels of cellular activity. References Proteins Ganodermataceae Fungi Fungal proteins Immunoglobulin superfamily
Ganoderma microsporum immunomodulatory protein
[ "Chemistry", "Biology" ]
234
[ "Biomolecules by chemical classification", "Proteins", "Fungi", "Molecular biology" ]
69,828,543
https://en.wikipedia.org/wiki/Cobalt%20arsenide
Cobalt arsenide is a binary inorganic compound of cobalt and arsenic with the chemical formula CoAs. The compound occurs naturally as the mineral modderite. Physical properties Cobalt arsenide crystallizes in the orthorhombic system, space group Pnam, with cell parameters a = 0.515 nm, b = 0.596 nm, c = 0.351 nm, and Z = 4. Cobalt arsenide is isostructural with FeAs. At approximately 6-8 GPa, single crystals of CoAs undergo a transformation to a lower-symmetry phase. Use CoAs is used as a semiconductor and in photo-optic applications. References Arsenides Cobalt(III) compounds Semiconductors
Cobalt arsenide
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
146
[ "Electrical resistance and conductance", "Physical quantities", "Semiconductors", "Materials", "Electronic engineering", "Condensed matter physics", "Solid state engineering", "Matter" ]
69,829,264
https://en.wikipedia.org/wiki/Custom-made%20medical%20device
A custom-made medical device, commonly referred to as a custom-made device (CMD) (Canada, the European Union, the United Kingdom) or a custom device (United States), is a medical device designed and manufactured for the sole use of a particular patient. Examples of custom-made medical devices include auricular splints, dentures, orthodontic appliances, orthotics and prostheses. Definition There is no globally agreed definition, but a custom-made medical device can be broadly defined as a medical device that has been designed and manufactured in accordance with a prescription from an appropriately qualified person for the sole use of a particular patient to meet their specific needs. Mass-produced medical devices that have been adapted for specific patient requirements such as customised wheelchairs, hearing aids, and spectacle frames do not typically fall within the definition of a custom-made medical device. Definitions by jurisdiction Types Depending on the jurisdiction, custom-made medical devices can be prescribed by various healthcare professionals working within numerous medical specialties such as dentists, hearing aid dispensers, ocularists/orbital prosthetists, orthotists, medical practitioners/physicians and prosthetists. Manufacturers of custom-made medical devices include anaplastologists, audiologists, clinical dental technicians/dental prosthetists/denturists, dental assistants/dental nurses, dental technicians, dentists, ocularists/orbital prosthetists, ophthalmologists, optometrists, orthopaedic shoe fitters, orthopedic technicians, orthotists and prosthetists. Legislative requirements Australia In Australia manufacturers of custom-made medical devices are exempt from registering with the Australian Register of Therapeutic Goods (ARTG). Manufacturers of custom-made medical devices cannot advertise such devices directly to patients and are required to: Notify the Therapeutic Goods Administration that they are providing custom-made medical devices. Comply with the ARTG exemption conditions concerning inspection and review. Provide appropriate documentation with devices that they manufacture and/or supply. Maintain records relating to the devices that they have manufactured and/or supplied in Australia for at least five years. Submit an annual report with details of the custom-made medical devices that they have manufactured and/or supplied to the Therapeutic Goods Administration. Canada In Canada, custom-made medical devices are subject to Part 2 of the Medical Devices Regulations (SOR/98-282) under the Food and Drugs Act. Serious adverse incidents with medical devices must be reported to Health Canada within 72 hours. European Union Custom-made devices manufactured in the European Union are subject to Regulation (EU) 2017/745 (Medical Device Regulation [EU MDR]), which replaced and repealed Directive 93/42/EEC (Medical Devices Directive [MDD]). Under the MDD, manufacturers of custom-made devices were required to follow the relevant Essential Requirements set out in Annex I and the procedure set out in Annex VIII. The EU MDR was published on 5 April 2017, came into force on 25 May 2017 and, following a three-year transition period, was expected to replace and repeal the MDD on 26 May 2020. But on 23 April 2020, Regulation (EU) 2020/561 was adopted which deferred the full implementation of the EU MDR for one year until 26 May 2021 so that efforts could be concentrated on the response to the coronavirus disease 2019 (COVID-19) pandemic. 
Under the EU MDR, manufacturers of custom-made devices are required to: Establish, document, implement and maintain, keep up to date and continually improve a quality management system. These requirements are provided in EU MDR Article 10(9) and are aligned with certain clauses of ISO 13485, the International Organization for Standardization (ISO) quality management system requirements for the design and manufacture of medical devices. Comply with the relevant General Safety and Performance Requirements set out in Annex I. These obligations are comparable with the MDD Annex I Essential Requirements but are expanded and include the requirement to establish, implement, document and maintain a risk management system, the requirements of which are in alignment with ISO 14971, the ISO standard for the application of risk management to medical devices. The procedure set out in Annex XIII, which is comparable with MDD Annex VIII but with some enhanced requirements. Review and document experience gained in the post-production phase and report serious incidents and field safety corrective actions. Manufacturers outside the EU who are placing medical devices on the EU market are obligated to appoint a European Authorized Representative. Custom-made devices are not required to carry the CE marking. United Kingdom In the UK manufacturers of custom-made devices are required to register with the Medicines and Healthcare products Regulatory Agency. Until the UK left the European Union on 31 January 2020, custom-made devices were governed by the MDD, which was given effect in UK law by The Medical Devices Regulations 2002 (Statutory Instrument 2002/618 [UK MDR 2002]). Immediately after the UK's departure, the UK entered an 11-month implementation period (IP), during which EU law continued to apply. In preparation for the UK's departure from the EU, the EU MDR was essentially transposed into The Medical Devices (Amendment etc.) (EU Exit) Regulations 2019, (Statutory Instrument 2019/791 [UK MDR 2019]), an amendment of the UK MDR 2002) and was expected to be fully implemented on exit day. The UK MDR 2002 was further amended by The Medical Devices (Amendment etc.) (EU Exit) Regulations 2020 (Statutory Instrument 2020/1478 [UK MDR 2020]), which removed the provisions of the EU MDR and substituted 'exit day' for 'IP completion day'. In Great Britain medical devices can conform to either the UK MDR 2002 (as amended) or the EU MDR until 30 June 2023. Northern Ireland remains in line with EU law under the terms of the Protocol on Ireland/Northern Ireland. Custom-made devices are not required to carry the CE marking or the UK Conformity Assessed (UKCA) marking. United States Custom devices are subject to requirements including labelling (21 CFR Part 801), reporting (21 CFR Part 803), corrections and removals (21 CFR Part 806), registration and listing (21 CFR Part 807) and quality systems regulation (21 CFR 820). Manufacturers of custom devices are obligated to submit an annual report of custom devices to the Food and Drug Administration but are exempt from Premarket Approval (PMA) requirements and conformance to mandatory performance standards. 
See also Medical device Canada: Marketed Health Products Directorate United States: Medical Device Regulation Act References American football equipment Biomedical engineering Biotechnology Dental equipment Ear procedures European Union directives European Union regulations Hearing aids Implants (medicine) Lacrosse equipment Martial arts equipment Medical devices Medical technology Orthodontic appliances Orthopedic braces Plastic surgery Prosthetics Prosthodontology Protective gear Rehabilitation medicine Rehabilitation team Regulation of medical devices Restorative dentistry Rugby league equipment Rugby union equipment Sports equipment
Custom-made medical device
[ "Engineering", "Biology" ]
1,462
[ "Biological engineering", "Biomedical engineering", "Biotechnology", "Medical devices", "nan", "Medical technology" ]
78,744,720
https://en.wikipedia.org/wiki/REFPROP
REFPROP is a software program for the prediction of thermophysical properties of fluids, developed by the National Institute of Standards and Technology (NIST). The primary component of REFPROP is an equation of state for each implemented fluid. For most pure fluids, the equation of state is obtained by fitting an expression for the Helmholtz free energy to experimental data. This formulation allows the computation of all equilibrium properties of the fluid, such as density, temperature, pressure, sound speed, heat capacity, second virial coefficients, vapor pressures, saturated liquid and vapor densities, enthalpy of vaporization, entropy, and the Joule-Thomson coefficient. REFPROP also predicts surface tension, viscosity, and thermal conductivity for many fluids, either using extended corresponding states formulations or fluid-specific equations fit directly to experimental data. Various methods are used to compute the analogous properties of fluid mixtures. The full list of fluid properties implemented in REFPROP v10.0 can be found in Table 2 of Huber et al. (2022). List of implemented fluids REFPROP v10.0 implements equation of state models for 147 pure fluids, listed in Table 1. Except for , R-13, R-123, and R-152a, all of these are Helmholtz free energy formulations. REFPROP v10.0 also predicts surface tension, viscosity, and thermal conductivity for most of the listed fluids. See also Process engineering Prediction of viscosity Prediction of thermal conductivity Viscosity models for mixtures Theorem of corresponding states Departure function The International Association for the Properties of Water and Steam References External links Fortran software Physics software Thermodynamics
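REFPROP itself ships as a Fortran library with bindings for several languages. As a hedged illustration of what such a Helmholtz-energy-based property call looks like in practice, the open-source CoolProp package exposes a similar single-call interface and can optionally route requests to a licensed REFPROP installation; the fluid and state point below are arbitrary examples:

    # pip install CoolProp   (the REFPROP backend additionally needs a licensed NIST REFPROP install)
    from CoolProp.CoolProp import PropsSI

    # Density (kg/m^3) and speed of sound (m/s) of water at 300 K and 1 atm,
    # computed from CoolProp's own Helmholtz-energy equations of state.
    density = PropsSI("D", "T", 300.0, "P", 101325.0, "Water")
    speed_of_sound = PropsSI("A", "T", 300.0, "P", 101325.0, "Water")
    print(density, speed_of_sound)

    # If REFPROP is installed and configured, the same call can be delegated to it:
    # density_refprop = PropsSI("D", "T", 300.0, "P", 101325.0, "REFPROP::Water")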
REFPROP
[ "Physics", "Chemistry", "Mathematics" ]
358
[ "Dynamical systems", "Physics software", "Thermodynamics", "Computational physics" ]
78,747,343
https://en.wikipedia.org/wiki/Yang%E2%80%93Baxter%20operator
Yang–Baxter operators are invertible linear endomorphisms with applications in theoretical physics and topology. They are named after theoretical physicists Yang Chen-Ning and Rodney Baxter. These operators are particularly notable for providing solutions to the quantum Yang–Baxter equation, which originated in statistical mechanics, and for their use in constructing invariants of knots, links, and three-dimensional manifolds. Definition In the category of left modules over a commutative ring $k$, Yang–Baxter operators are $k$-linear mappings $R \colon V \otimes V \to V \otimes V$, where $V$ is a $k$-module. The operator $R$ satisfies the quantum Yang-Baxter equation if $R_{12} R_{13} R_{23} = R_{23} R_{13} R_{12}$, where $R_{12} = R \otimes \mathrm{id}_V$, $R_{23} = \mathrm{id}_V \otimes R$, and $R_{13} = (\mathrm{id}_V \otimes \tau)(R \otimes \mathrm{id}_V)(\mathrm{id}_V \otimes \tau)$. The map $\tau$ represents the "twist" mapping defined for $k$-modules $V$ and $W$ by $\tau(v \otimes w) = w \otimes v$ for all $v \in V$ and $w \in W$. An important relationship exists between the quantum Yang-Baxter equation and the braid equation $B_{12} B_{23} B_{12} = B_{23} B_{12} B_{23}$. If $R$ satisfies the quantum Yang-Baxter equation, then $\tau \circ R$ satisfies the braid equation. Applications Yang–Baxter operators have applications in statistical mechanics and topology. See also Yang–Baxter equation Hopf algebra Lie bialgebra Yangian Braid theory Quantum groups References Geometric topology Morphisms Theoretical physics
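As a small self-contained illustration (not taken from the article), the following Python snippet checks numerically that the twist map itself is a solution of the quantum Yang-Baxter equation on a two-dimensional module over the real numbers, and that composing it with the twist yields an operator satisfying the braid equation:

    import numpy as np

    dim = 2
    identity = np.eye(dim)

    # Twist (flip) map T on V (x) V, defined by T(v (x) w) = w (x) v.
    twist = np.zeros((dim * dim, dim * dim))
    for i in range(dim):
        for j in range(dim):
            twist[j * dim + i, i * dim + j] = 1.0

    def lift(operator):
        """Return the operators O_12, O_13, O_23 acting on V (x) V (x) V."""
        o12 = np.kron(operator, identity)
        o23 = np.kron(identity, operator)
        o13 = np.kron(identity, twist) @ np.kron(operator, identity) @ np.kron(identity, twist)
        return o12, o13, o23

    R = twist  # the twist is itself a (permutation) Yang-Baxter operator
    r12, r13, r23 = lift(R)
    print(np.allclose(r12 @ r13 @ r23, r23 @ r13 @ r12))   # quantum Yang-Baxter equation: True

    B = twist @ R  # then twist composed with R should satisfy the braid relation
    b12, _, b23 = lift(B)
    print(np.allclose(b12 @ b23 @ b12, b23 @ b12 @ b23))   # braid equation: True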
Yang–Baxter operator
[ "Physics", "Mathematics" ]
211
[ "Functions and mappings", "Mathematical structures", "Theoretical physics", "Mathematical objects", "Geometric topology", "Topology", "Category theory", "Mathematical relations", "Morphisms" ]
78,749,948
https://en.wikipedia.org/wiki/Alfv%C3%A9n%20Mach%20number
The Alfvén Mach number (also known as the Alfvén number, Alfvénic Mach number, and magnetic Mach number; often denoted $M_A$) is a dimensionless quantity representing the ratio of the relative velocity of a fluid to the local Alfvén speed. It is used in plasma physics, where it is analogous to the Mach number but based on Alfvén waves rather than sound waves. It is named after the physicists Hannes Alfvén and Ernst Mach. Along with the sonic Mach number, the Alfvén Mach number is frequently used to characterize shock fronts and turbulence in magnetized plasmas. It is defined as $M_A = v / v_A$, where $M_A$ is the Alfvén Mach number, $v$ is the flow velocity, and $v_A$ is the Alfvén speed. When $M_A < 1$, the flow is referred to as sub-Alfvénic; and when $M_A > 1$, the flow is referred to as super-Alfvénic. See also Plasma parameters Magnetic Reynolds number Lundquist number References Dimensionless numbers of fluid mechanics Plasma parameters
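A minimal Python sketch of this definition (the numbers below are arbitrary, solar-wind-like illustrative values; the Alfvén speed is computed from the standard expression v_A = B / sqrt(mu_0 * rho), which is not spelled out in the text above):

    import math

    MU_0 = 4.0e-7 * math.pi  # vacuum permeability, H/m

    def alfven_speed(b_tesla, mass_density_kg_m3):
        """Alfven speed v_A = B / sqrt(mu_0 * rho)."""
        return b_tesla / math.sqrt(MU_0 * mass_density_kg_m3)

    def alfven_mach_number(flow_speed_m_s, b_tesla, mass_density_kg_m3):
        """Ratio of the flow speed to the local Alfven speed."""
        return flow_speed_m_s / alfven_speed(b_tesla, mass_density_kg_m3)

    # Example with solar-wind-like numbers (order of magnitude only).
    m_a = alfven_mach_number(4.0e5, 5.0e-9, 1.0e-20)
    print(f"M_A = {m_a:.1f} -> {'super' if m_a > 1.0 else 'sub'}-Alfvenic flow")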
Alfvén Mach number
[ "Physics" ]
185
[ "Plasma physics stubs", "Plasma physics" ]
78,752,103
https://en.wikipedia.org/wiki/Plunger%20%28hydraulics%29
A plunger is a cylindrical rod used to transmit hydraulic compression force. It is characterized by its length being much greater than its diameter, and it is thus distinguished from a regular piston (where the working surface is larger than the thickness of the rod, i.e. more like a disk). They are mainly used as part of certain types of pumps and hydraulic machines. Plungers are used for fluid-mechanical power transmission in pumps (plunger pumps), hydraulic gearboxes, high-pressure diesel injection pumps, hydraulic workshop presses and jacks, and other equipment, and are distinguished in fluid mechanics by being a piston without moving seals. The seals are instead located in the wall through which the plunger slides (as opposed to piston rings on a piston). Plungers are often supplied with a suitable stationary plunger bushing that fits tightly against the plunger (together they are called a plunger pair), and together these form a seal that can withstand high pressures. Compared to a piston that has to act against a cylinder wall, it is easier to manufacture a plunger to close tolerances against a plunger bushing (since the plunger has a cylindrical shape). Some define a plunger as a type of piston that is also its own piston rod. Plunger pumps are often used to pump slurries such as sludge or liquid cement. An advantage compared to classic pistons is the simplicity of manufacture (since the plunger is a simple rod) and the relatively easy use of a plunger bushing for sealing. Another advantage is resistance to dirt. Thanks to the simple shape, dirt has no place to stick, unlike a classic piston. Unlike a piston (where the seal is on the piston rings), the seal of a plunger is located in the cylinder wall, and when the plunger performs a reciprocating motion, the plunger surface thus moves along the seal. Plungers are mainly used in hydraulic axial piston pumps, radial piston pumps and piston pumps. They have also become widespread in fuel supply systems for diesel engines (injection pumps) in pairs of plungers. Plunger pump Plunger pumps are capable of operating at higher pressures than piston pumps. The reason for this is that plungers require high precision on its outer cylindrical surface, while piston pumps require more precise machining of the inner surface of the cylinder, which is technically more difficult to achieve. The volume of the displaced medium depends directly on the stroke length of the plunger. By changing the pump stroke length, the flow rate will be adjusted. The precision achieved on modern plunger and rotary hydraulic plunger machines is so high that the distance between the inner and outer cylindrical surfaces of plunger pairs reaches 2-3 micrometres (0.002-0.003 mm). The pressure that plunger pairs can withstand is very high. During fuel injection in diesel engines, the pressure in the plunger pair can reach 200 megapascals (MPa). Plunger lift The term "plunger" is also used in pipelines. Here, the plunger is a movable control element in a control valve whose movement changes the volume flow. References Hydraulics Pumps
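As a back-of-the-envelope illustration of the forces such pressures imply (a hedged sketch; the 10 mm plunger diameter below is an arbitrary example, only the 200 MPa figure comes from the text), the axial force transmitted by a plunger is simply the working pressure times its cross-sectional area:

    import math

    def plunger_force_newtons(pressure_pa, diameter_m):
        """Axial force F = p * A for a plunger of circular cross-section."""
        area_m2 = math.pi * (diameter_m / 2.0) ** 2
        return pressure_pa * area_m2

    # Hypothetical 10 mm plunger at the 200 MPa injection pressure cited above.
    force_n = plunger_force_newtons(200e6, 0.010)
    print(f"F = {force_n / 1000.0:.1f} kN")  # about 15.7 kN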
Plunger (hydraulics)
[ "Physics", "Chemistry" ]
647
[ "Pumps", "Turbomachinery", "Physical systems", "Hydraulics", "Fluid dynamics" ]
72,934,089
https://en.wikipedia.org/wiki/57%20Tauri
57 Tauri, also known as h Tauri and V483 Tauri, is a star 148 light years from the Earth, in the constellation Taurus. It is a 5th magnitude star, so it will be visible to the naked eye of an observer located far from city lights. 57 Tauri is a member of the Hyades star cluster. It is a Delta Scuti variable star, whose brightness changes slightly, ranging from magnitude 5.55 to 5.59. In 1908, Lewis Boss listed 57 Tauri as a member of the Hyades cluster based upon its proper motion agreeing with the motions of other cluster members. Its membership in the cluster was firmly established forty-four years later by Hendrik van Bueren, using both proper motion and radial velocity. 57 Tauri is located 10.8 light years from the core of the Hyades cluster. Robert Millis discovered that 57 Tauri is a variable star, in 1967. He reported that the amplitude varied by 0.02 magnitudes with a period of 1.5 hours. In 1972, it was given the variable star designation V483 Tauri. A year 2000 study of 57 Tauri, based on 54 nights of photometric data, identified twelve pulsation frequencies ranging in period from 58.6 minutes to 6.17 days. In 1999, Anthony Kaye discovered that 57 Tauri is a spectroscopic binary by examining 139 high signal-to-noise spectra obtained at Kitt Peak. References Taurus (constellation) 020219 027397 Tauri, V483 1351 Delta Scuti variables F-type subgiants
57 Tauri
[ "Astronomy" ]
335
[ "Taurus (constellation)", "Constellations" ]
72,937,548
https://en.wikipedia.org/wiki/Kovasznay%20flow
Kovasznay flow corresponds to an exact solution of the Navier–Stokes equations and is interpreted to describe the flow behind a two-dimensional grid. The flow is named after Leslie Stephen George Kovasznay, who discovered this solution in 1948. The solution is often used to validate numerical codes solving two-dimensional Navier-Stokes equations. Flow description Let $U$ be the free stream velocity and let $l$ be the spacing of the two-dimensional grid. With lengths scaled by $l$ and velocities by $U$, the velocity field of the Kovasznay flow, expressed in the Cartesian coordinate system, is given by $u = 1 - e^{\lambda x} \cos(2\pi y)$ and $v = \frac{\lambda}{2\pi} e^{\lambda x} \sin(2\pi y)$, where $\lambda$ is the root of the equation $\lambda^2 - R\lambda - 4\pi^2 = 0$, in which $R = Ul/\nu$ represents the Reynolds number of the flow. The root that describes the flow behind the two-dimensional grid is found to be $\lambda = \frac{R}{2} - \sqrt{\frac{R^2}{4} + 4\pi^2}$. The corresponding vorticity field and the stream function are given by $\omega = \left(\frac{\lambda^2}{2\pi} - 2\pi\right) e^{\lambda x} \sin(2\pi y)$ and $\psi = y - \frac{1}{2\pi} e^{\lambda x} \sin(2\pi y)$. Similar exact solutions, extending Kovasznay's, have been noted by Lin and Tobak and C. Y. Wang. References Flow regimes Fluid dynamics
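A short Python sketch (illustrative only, not code from the references) that evaluates the non-dimensional solution above and checks with central differences that it is divergence-free, which is the kind of sanity check used when validating a Navier-Stokes solver against this flow:

    import numpy as np

    def kovasznay_velocity(x, y, reynolds):
        """Non-dimensional Kovasznay velocity field (lengths in grid spacings, velocities in units of U)."""
        lam = reynolds / 2.0 - np.sqrt(reynolds**2 / 4.0 + 4.0 * np.pi**2)
        u = 1.0 - np.exp(lam * x) * np.cos(2.0 * np.pi * y)
        v = lam / (2.0 * np.pi) * np.exp(lam * x) * np.sin(2.0 * np.pi * y)
        return u, v

    reynolds = 40.0
    h = 1.0e-4
    x, y = np.meshgrid(np.linspace(-0.5, 1.0, 31), np.linspace(-0.5, 1.5, 41), indexing="ij")
    u_xp, _ = kovasznay_velocity(x + h, y, reynolds)
    u_xm, _ = kovasznay_velocity(x - h, y, reynolds)
    _, v_yp = kovasznay_velocity(x, y + h, reynolds)
    _, v_ym = kovasznay_velocity(x, y - h, reynolds)
    divergence = (u_xp - u_xm) / (2.0 * h) + (v_yp - v_ym) / (2.0 * h)
    print(np.max(np.abs(divergence)))  # close to zero: the field satisfies continuity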
Kovasznay flow
[ "Chemistry", "Engineering" ]
194
[ "Piping", "Chemical engineering", "Flow regimes", "Fluid dynamics" ]
71,397,452
https://en.wikipedia.org/wiki/Helios%20Dust%20Instrumentation
The Helios 1 and 2 spacecraft each carried two dust instruments to characterize the Zodiacal dust cloud inside the Earth’s orbit down to spacecraft positions 0.3 AU from the sun. The Zodiacal light instrument measured the brightness of light scattered by interplanetary dust along the line of sight. The in situ Micrometeoroid analyzer recorded impacts of meteoroids onto the sensitive detector surface and characterized their composition. The instruments delivered radial profiles of their measured data. Comet or meteoroid streams, and even interstellar dust were identified in the data. Overview The two Helios spacecraft were the result of a joint venture of West Germany's space agency DLR and NASA. The spacecraft were built in Germany and launched from Cape Canaveral Air Force Station, Florida. Helios 1 was launched in December 1974 onto an elliptic orbit between 1 and 0.31 AU. Helios 2 followed in January 1976 and reached 0.29 AU perihelion distance. The orbital periods were about 6 Months. The Helios spacecraft were spinning with the spin axis perpendicular to the ecliptic plane. The Helios 1 spin axis pointed to ecliptic north whereas the Helios 2 orientation was inverted and the spin axis pointed to ecliptic south. The despun high gain antenna beam pointed always to Earth. Because of the orbit the distance between the spacecraft and Earth varied between a few and 300 million km and the data transmission rate varied accordingly. Twice per Helios orbit the spacecraft was in conjunction (in front or behind the Sun) and no data transmission was possible for a few weeks. Helios 1 delivered scientific data for ten years and Helios 2 for five years. The Zodiacal light instrument The primary goal of the Zodiacal light instrument on Helios was to determine the three-dimensional spatial distribution of interplanetary dust. To this end, from all along its orbit, Helios performed precise zodiacal light measurements covering a substantial part of the sky. These partial sky maps, because of the rotation of Helios, consisted of a band 1° wide at ecliptic latitude ß=16° with 32 sectors 5.62°, 11.25° and 22.5° long, a similar band 2° wide at ecliptic latitude ß = 31° and a field of 3° diameter at the ecliptic pole. All fields were in the south for Helios 1, in the north for Helios 2. The width of the sectors was chosen to be smallest for the brightest regions of zodiacal light. This map has been realized by three small (36 mm aperture) photometers, P15, P30, and P90, one for each ecliptic latitude. A stepping motor changed the observing wavelength - with or without polarization - to 360 ± 30 nm, 420 ± 40 nm, 540 ± 70 nm (close to the UBV system) or to dark current and calibration measurements. Each of the 36 resulting different brightness maps represents an average over 512 Helios rotations, leading to a cycle of total length 5.2 hours, which is continually repeated. The sensors were photomultipliers EMR 541 N operating in photon pulse counting mode. Throughout their mission the Helios space probes were exposed to full sunlight, which exceed the typical zodiacal light intensity by factor of 1012 to 1013. For accurate (1%) measurements demanding stray light suppression by a factor of 1015 was required, the main design goal to be met. This could be achieved in three steps: The zodiacal light photometers were fully kept in the shadow of the Helios solar cell cone, giving 3x10−3 stray light reduction. The multiple reflection in the stray light suppressing baffle added 4x10−7. 
The coronograph design of the photometers provided the needed additional 3x10−6 of stray light reduction. The Zodiacal light instrument was developed at the Max Planck Institute for Astronomy in Heidelberg by Christoph Leinert and colleagues and built by Dornier systems. The Micrometeoroid analyzer The goal of the Micrometeoroid Analyzer was 1. to determine the spatial distribution of the dust flux in the inner planetary system, and 2. to search for variations of the compositional and physical properties of micrometeoroids. The instrument consisted of two impact ionization time-of-flight mass spectrometers and was developed by PI Eberhard Grün, Principal Engineer Peter Gammelin, and colleagues at the Max Planck Institute for Nuclear Physics in Heidelberg. Each sensor (Ecliptic sensor and South sensor) was a 1 m long and 0.15 m diameter tube with two grids and a venetian blind type impact target in front, several more grids, a 0.8 m long field-free drift tube and an electron multiplier in the inside. Micrometeoroids hitting the venetian blind type impact target generate an impact plasma. Electrons are collected by the positively biased grid in front of the target while positive ions are drawn inward by a negatively biased grid behind the target. Part of the ions reach the time-lag focusing region from which they fly through the field-free drift tube at -200 V potential. Ions of different masses reach the electron multiplier at different times and generate a mass spectrum at the multiplier output. Impact signals are recorded by charge-sensitive preamplifiers attached to the electron grid in front and the ion grid behind the target. From these signals together with the mass spectrum the mass and energy of the dust particle and the composition of the impact plasma are obtained. The South sensor was shielded by the spacecraft rim from direct sun light, whereas the ecliptic sensor was directly exposed to the intense solar radiation (up to 13 kW/m2). Therefore, the interior of the sensor was protected by a 0.3 micron thick aluminized parylene film which was attached to the first entrance grid. In order to study the effect of micrometeoroids penetrating the film, extensive dust accelerator studies with various materials were performed. It was shown that the penetration limit of the Helios film depends strongly on the density of meteoroids. Impact experiments with a lab version of the Helios micrometeoroid sensor were performed using several materials at the accelerators at the Max Planck Institute for Nuclear Physics in Heidelberg and at the Ames Research Center, ARC, in Moffet Field. The projectile materials included iron (Fe), quartz, glass, aluminium (Al), aluminium oxide (Al2O3), polystyrene, and kaolin. The mass resolution of the mass spectra of the Helios sensors was low , i.e. only ions of atomic mass unit 10 u could be separated from ions of mass 11 u. These mass spectra served as reference for the spectra obtained in space. Spectra were recorded from 10 u to 70 u. The mean calibration spectra are presented in a three phase diagram: low masses (10 to 30 u), medium masses (30 to 50 u), and high masses (50 to 70 u). Micrometeoroid data During ten orbits about the sun from 1974 to 1980 the Helios 1 micrometeoroid analyzer transmitted data of 235 dust impacts to Earth. Since the onboard data storage capability was limited and the data transmission rate varied strongly depending on the distance between spacecraft and Earth not all data recorded by the sensors was received on Earth. 
The effective measuring time ranged from ~30% at perihelion to ~75% at 1 AU distance. Many noise events caused by solar wind plasma and photo electrons were recorded by the sensors as well. Only events within a coincidence time of 12 microseconds between positive and negative signals and, mainly, the measurement of a mass spectrum following the initial trigger were considered dust impacts. Quantities determined for each impact are: the time and position, the azimuth of the sensor viewing at the time of impact, the total positive charge of the impact signal, the rise-time of the charge signal (proxy for the impact speed) and a complete mass spectrum. The micrometeoroid instrument on Helios 2 was much noisier and recorded only a handful of impacts that did not provide additional information. Results The Zodiacal light carries information on those regions of interplanetary space along the line of sight, which contribute significantly to its observed brightness. For Helios this covers the range of 0.09 to about 2 Astronomical Units. Spatial distribution Radial dependencies The zodiacal light instrument observed a strong increase of the zodiacal light brightness inward of the Earth's orbit. The brightness was more than a factor 10 higher at spacecraft position 0.3 AU than at 1 AU. This brightness increase corresponds to a steep increase of the interplanetary dust density toward the Sun. This strong increase requires that there is a source of interplanetary dust inside the Earth's orbit. It was suggested that collisional fragmentation of bigger meteoroids generates the dust observed in the zodiacal light. The radial flux of micrometeoroids recorded by Helios increased by a factor 5 to 10, depending on the mass, from 10−17 kg to 10−13 kg. This information together with the position and azimuth measurements was used in the first dynamical model of the interplanetary dust cloud; also the zodiacal light intensities observed by the Helios Zodiacal light instrument were included in this model. The Helios data defined the core, the inclined, and the eccentric populations of this model. Plane of symmetry From the difference between the measured zodiacal light brightness during inbound and outbound parts of the orbit and between right and left of the Sun the plane of symmetry of the interplanetary dust cloud was determined. With its ascending node of 87 ± 5° and inclination of 3.0 ± 0.3° it lies between the invariable plane of the Solar System and the plane of the solar equator. Orbital distribution Of the 235 impacts in total, 152 were recorded by the South sensor and 83 by the Ecliptic sensor. This excess of impacts on the South sensor had mostly small impact (charge) signals but there was also some excess of big impacts. From the azimuth values of Ecliptic sensor impacts it was concluded that the micrometeoroids moved on orbits of low eccentricity, e < 0.4, whereas South sensor impactors moved mostly on more eccentric orbits. There was even an excess of outward compared to inward trajectories, like the beta-meteoroids which were observed earlier by the Pioneer 8 and 9 dust instruments. Optical, physical, and chemical properties The measurements of zodiacal light color - essentially constant along the Helios orbit - and of polarization - showing a decrease closer toward the Sun - also contain information on properties of interplanetary dust particles.
On the basis of the penetration studies with the Helios film the excess of impacts on the South sensor was interpreted to be due to low density, < 1000 kg/m3, meteoroids that were shielded by the entrance film from entering the Ecliptic sensor. Helios mass spectra range from those with dominant low masses up to 30 u that are compatible with silicates to those with dominant high masses between 50 and 60 u of iron and molecular ion types. The spectra display no clustering of single minerals. The continuous transition from low to high ion masses indicates that individual grains are a mixture of various minerals and carbonaceous compounds. Cometary and interstellar dust streams The Helios zodiacal light measurements show excellent stability. This allows detecting local brightness excesses if they are crossed by the Helios field-of-view, like it happened for comet West or for the Quadrantid meteor shower. Repetition by about 0.2% from orbit to orbit sufficed to detect the dust ring along the orbit of Venus. Inspection of the Helios micrometeoroid data showed a clustering of impacts in the same region of space on different Helios orbits. A search with the Interplanetary Meteoroid Environment for eXploration (IMEX) dust streams in space model identified the trails of comets 45P/Honda-Mrkos-Pajdušáková and 72P/Denning-Fujikawa that Helios traversed multiple times during the first ten orbits around the Sun. After the discovery of interstellar dust passing through the planetary system by the Ulysses spacecraft interstellar dust particles were also found in the Helios micrometeoroid data. Based on the spacecraft position, the azimuth and impact charge 27 impactors are compatible with an interstellar source. The Helios measurements comprise interstellar dust measurements closest to the Sun. References Spacecraft instruments Scientific instruments Space science experiments
Helios Dust Instrumentation
[ "Technology", "Engineering" ]
2,602
[ "Scientific instruments", "Measuring instruments" ]
71,397,747
https://en.wikipedia.org/wiki/Alloy%20601
Alloy 601 is a nickel alloy, mostly made up of nickel and chromium, with small amounts of aluminium, silicon, copper and manganese. This composition gives it a number of desirable properties, including good high-temperature strength, corrosion resistance under oxidizing conditions, and retention of ductility after long service exposure. It is often used in various types of engineering equipment which require heat resistance and corrosion resistance. Composition Properties Alloy 601 can also be identified by the UNS Number N06601. It shares similar properties with Alloy 600, but is more resistant to high-temperature oxidation due to the addition of aluminium. The combination of heat resistance and oxidation resistance means it is often used in the thermal processing industry for applications such as radiant tubes. It is also commonly used in high-temperature applications in the aerospace industry, such as gas turbine blades, combustion can-liners and jet engine igniters. References Nickel–chromium alloys
Alloy 601
[ "Chemistry" ]
189
[ "Alloys", "Alloy stubs" ]
71,402,067
https://en.wikipedia.org/wiki/Kamikatsu%20Zero-waste%20Center
Kamikatsu Zero-waste Center (also known as "WHY") is a waste management and materials recovery facility that recycles over 80 percent of the waste produced in Kamikatsu, which is much higher than the 20 percent average in the rest of Japan. It is at the center of what The Washington Post describes as an "ambitious path toward a zero-waste life". History May 20, 2020 – Opening. April 16, 2021 – Received the Architectural Institute of Japan Award for Best Work. January 24, 2021 – Received Grand Prize for the Ministry of Internal Affairs and Communications' Furusato Zukuri Awards. Facilities Made predominantly using waste materials such as used windows, the facilities are in the shape of a question mark. Waste separation station, stock yard. Kuru Kuru Shop, a reuse shop. Learning center. Laundromat and restrooms. Collaborative laboratory. Hotel WHY, where guests experience the town's recycling system. Zero-waste policy in Kamikatsu Kamikatsu is a "zero waste" town; all household waste is separated into 45 different categories and sent to be recycled. In 2008, a poll showed that 40 percent of residents were still unhappy about the aspect of the policy that required items to be washed. But the town continues the policy as it is cheaper and more environmentally friendly than purchasing an incinerator. The town recycles about 80 percent of its waste, compared to 20 percent in the rest of Japan, which is still relatively high compared to the USA at 9 percent and the Philippines at less than 5 percent, according to a Rappler article. The town had set a goal to become fully zero waste by 2020. Architectural awards 2021 – Architectural Institute of Japan Award for Best Work 2021 – Japan Institute of Architects Environmental Architecture Award 2021 – Dezeen Awards, sustainable building of the year References External links Kamikatsu Zero Waste Center KAMIKATSU ZERO WASTE CENTER – Hiroshi Nakamura & NAP Waste management Recycling in Japan Waste treatment technology 2020 establishments in Japan Kamikatsu, Tokushima
Kamikatsu Zero-waste Center
[ "Chemistry", "Engineering" ]
416
[ "Water treatment", "Waste treatment technology", "Environmental engineering" ]
71,402,863
https://en.wikipedia.org/wiki/Equisingularity
In algebraic geometry, an equisingularity is, roughly, a family of singularities that are equivalent to one another, and it is an important notion in singularity theory. There is no universal definition of equisingularity, but Zariski's equisingularity is the most famous one. Zariski's equisingularity, introduced in 1971 under the name "algebro-geometric equisingularity", gives a stratification that is different from the usual Whitney stratification on a real or complex algebraic variety. See also stratified space References Further reading https://mathoverflow.net/questions/299314/a-general-definition-of-an-equisingular-family-of-singular-varieties algebraic geometry
Equisingularity
[ "Mathematics" ]
167
[ "Fields of abstract algebra", "Algebraic geometry" ]
71,402,929
https://en.wikipedia.org/wiki/Tame%20topology
In mathematics, a tame topology is a hypothetical topology proposed by Alexander Grothendieck in his research program Esquisse d’un programme under the French name topologie modérée (moderate topology). It is a topology in which the theory of dévissage can be applied to stratified structures such as semialgebraic or semianalytic sets, and which excludes some pathological spaces that do not correspond to intuitive notions of spaces. Some authors consider an o-minimal structure to be a candidate for realizing tame topology in the real case. There are also some other suggestions. See also Thom's first isotopy lemma References External links https://ncatlab.org/nlab/show/tame+topology Algebraic analysis Geometry education Stratifications Topology
Tame topology
[ "Physics", "Mathematics" ]
165
[ "Stratifications", "Topology stubs", "Topology", "Space", "Geometry", "Spacetime" ]
71,409,538
https://en.wikipedia.org/wiki/Necrobotics
Necrobotics is the practice of using biotic materials (or dead organisms) as robotic components. In July 2022, researchers in the Preston Innovation Lab at Rice University in Houston, Texas published a paper in Advanced Science introducing the concept and demonstrating its capability by repurposing dead spiders as robotic grippers and applying pressurized air to activate their gripping arms. Necrobotics utilizes the spider's organic hydraulic system and their compact legs to create an efficient and simple gripper system. The necrobotic spider gripper is capable of lifting small and light objects, thereby serving as an alternative to complex and costly small mechanical grippers. Background The main appeal of the spider's body in necrobotics is its compact leg mechanism and use of hydraulic pressure. The spider's anatomy utilizes a simple hydraulic (fluid) pressure system. Spider legs have flexor muscles that naturally constrict their legs when relaxed. A force is required to straighten and extend their legs, which spiders accomplish by pumping hemolymph fluid (blood) through their joints as a means of hydraulic pressure. It takes no external power to curl their legs due to their flexor muscles' natural curled state. In July 2022, researchers in the Preston Innovation Lab at Rice University published a paper detailing their experiments with the gripper. Although dead spiders no longer produce hemolymph, Te Faye Yap (lead author and mechanical engineering graduate) found that pumping air through a needle into the spider's cephalothorax accomplishes the same results as hemolymph. The original hydraulic (fluid) system is essentially converted into a pneumatic (air) system. Fabrication Obtain a spider (preferably a wolf spider) Euthanize the spider using a cold temperature of around -4°C for 5-7 days Insert a 25 gauge hypodermic needle into the spider's cephalothorax (main body) Apply glue around the needle to form a seal and allow it to dry Connect a syringe or pump to the needle Extend the spider's legs by pumping air in Testing and Data Internal Force Versus Gripping Force The typical pressure in a resting spider's legs ranges from 4 kPa to 6.1 kPa. Researchers extended the legs by increasing the spider's internal pressure to 5.5 kPa. Pumping air into the body increases the internal pressure, causing the legs to expand. Pumping air out of the body decreases internal pressure, causing the legs to contract due to their flexor leg muscles. When the internal pressure decreases to 0 kPa, the gripper would be fully closed, allowing for the gripper to grasp objects. This action demonstrates that as internal pressure decreases, the gripping force increases. Inversely, when internal pressure increases, the gripping force decreases. By gripping individual weighted acetate beads, it is found that the necrobotic gripper achieves a maximum gripping force of 0.35 milinewtons. Spider Weight Versus Gripping Force To estimate the gripping forces of smaller and larger spiders, researchers created a plot to predict the gripping force relative to the size of the spider. The wolf spider's body weight is relatively equal to the gripping force of its legs. The mass of the gripper is 33.5 mg and can lift 1.3 times its body weight (43.6 mg or 0.35 mN). However, with larger spiders, the gripping force relative to body weight decreases. For example, a 200-gram goliath birdeater is predicted to lift 10% of its weight (20 grams or 196 mN). 
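For reference, the "lift relative to body weight" figures above can be turned into forces with the weight formula F = m * g; a minimal sketch using only masses quoted in the text:

    GRAVITY = 9.81  # m/s^2

    def weight_millinewtons(mass_kg):
        """Gravitational force on a mass, in millinewtons."""
        return mass_kg * GRAVITY * 1000.0

    # Goliath birdeater example from the text: a 200 g spider predicted to lift 10% of its body weight.
    liftable_mass_kg = 0.200 * 0.10
    print(f"{weight_millinewtons(liftable_mass_kg):.0f} mN")  # about 196 mN, matching the quoted value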
Though there is an inverse relationship between spider mass and gripping force relative to body weight, larger spiders still exert greater absolute gripping forces than smaller spiders. Gripper Lifespan The necrobotic gripper's functionality is entirely reliant on the structural integrity of the spider. If the spider were to break down easily and frequently, the gripper would not be practical. Using cyclic testing, a series of repeated actions, it is found that the necrobotic gripper can actuate 700 to 1000 times. After 1000 cycles, cracks begin forming on the membrane of the leg joints due to dehydration. Weakened and decomposing joints lead to frequent breakage and replacement, thereby serving as an obstacle in applying necrobotics to real-world scenarios. One theorized fix to this issue is applying beeswax or a lubricant to the joints. Researchers found that over 10 days, the mass of an uncoated spider decreased 17 times more than the mass of a spider coated with beeswax. Lubricating joints combats dehydration and slows the loss of organic material. Applications Necrobotics can serve as a fast and precise alternative to mechanical components that are difficult to manufacture. Due to small mechanical grippers being costly and complex, the necrobotic gripper can be used as a replacement. Fabricating these pneumatic spider grippers can be done in under 30 minutes, and the grippers have a relatively long lifespan of about 1000 cycles. The necrobotic gripper is ideal for processes requiring delicate handling of materials and maneuvering light objects into tight spaces. There may also be applications in microelectronics where necrobotic grippers can handle simple pickup and dropping actions. Besides the necrobotic spider gripper, there are no other robotic concepts under the necrobotics subfield. Future necrobotic concepts can utilize soft robotics and electrical stimuli to repurpose biotic material into biohybrid systems. Another application of necrobotics is utilizing preexisting bone structures to house robotic components. Constraints With the usage of organic material, there is a higher chance of the component decomposing and breaking down as opposed to traditional mechanical systems. There may be additional work and management required to replace these grippers if they fail. Additionally, organic inconsistencies with the spiders will yield inaccurate results. Not all wolf spiders develop the same, so gripping force and leg contraction can vary between grippers. There are moral implications behind euthanizing spiders for robotics. The ethical boundaries that necrobotics pushes in the pursuit of biohybrid systems raise concerns, as opponents say it may lead to the hybridization of mammals and is intrusive to nature. Proponents respond that repurposing dead animals has been human practice for millennia and that necrobotics should be pursued to advance science. See also 3D bioprinting Biomedical engineering Blood substitute Remote control animal Soft robotics References Robotics Undead Biorobotics
Necrobotics
[ "Engineering" ]
1,336
[ "Robotics", "Automation" ]
77,332,489
https://en.wikipedia.org/wiki/Bluefors
Bluefors is a Helsinki-based company specializing in cryogenic products used in several high-tech industries. Cryogenic solutions are integral to advancements in quantum computing and scientific research. The company offers cryogenic systems, including dilution refrigerators and cryocoolers, which are essential for applications requiring ultra-low temperatures. For instance, cryocoolers are required for today's superconducting qubits to function. In response to increasing global demand, the company has expanded its operations. Notable acquisitions include Cryomech in the USA and Rockgate in Japan. Bluefors also has a laboratory in Delft, the Netherlands. References Companies based in Helsinki Companies with year of establishment missing Cryogenics
Bluefors
[ "Physics" ]
148
[ "Applied and interdisciplinary physics", "Cryogenics" ]
77,336,294
https://en.wikipedia.org/wiki/Grim%20Reaper%20paradox
In philosophy, the Grim Reaper paradox is a paradox involving an infinite sequence of grim reapers, each tasked with killing a person if no reaper has already killed them. The paradox raises questions about the possibility of continuous time and the infinite past (temporal finitism). The paradox is inspired by J. A. Benardete's paradoxes from the 1964 book Infinity: An Essay in Metaphysics. In fact, various formulations of paradoxes involving beginningless sets, whose members perform a function only if no previous member performs it, are all labelled Benardete Paradoxes. They are examples of supertasks. The paradox The paradox supposes there is an infinite sequence of Reapers, each assigned a time to kill a particular person. Each Reaper will only kill this person if no earlier Reaper has already killed them. It is 12pm, the first Reaper is set to kill the person at 1pm. The second Reaper is set to kill them at 12:30pm, the third at 12:15pm, and so on. As a consequence of these propositions, the person will certainly be killed by a Reaper before 1pm, however, no individual Reaper can kill them, as there is always an earlier Reaper who would do so first. Therefore, it is impossible that the person survive, but also impossible that any Reaper kills them. Resolutions and Implications Discrete time One solution to the paradox is supposing that time must be discrete rather than continuous. If so, an infinite number of Reapers cannot all have a separate time in which they will kill you, as there are only finitely many "moments" in each period of time. A possible issue with this solution is that the Reaper paradox can take different forms which do not rely upon continuous time. One such example appears in Benardete's book, in which a god throws up a wall if a man travels 1/2 mile, another god throws up a wall after 1/4 mile, another at 1/8 mile, ad infinitum. Discrete time would do nothing to prevent this paradox. Causal finitism Another solution is the idea of Causal finitism, which asserts that there cannot be an infinite regress of causes. In other words, every causal chain must have a starting point. Thus, there cannot be an infinite number of Reapers whose actions depend on all previous Reapers. All Benardete paradoxes share this feature of an infinite causal chain, and so are all impossible. Causal finitism could plausibly imply the discreteness of time, temporal finitism, infinitely large spatial regions, and continuously dense spatial regions, all of which are heavy metaphysical commitments. The Unsatisfiable Pair Diagnosis A third potential solution to the Grim Reaper paradox has been suggested, known as the Unsatisfiable Pair Diagnosis (UPD). The UPD asserts that Benardete paradoxes (including the Grim Reaper paradox) are simply logically impossible, and no metaphysical thesis needs to be adopted. In The Form of the Benardete Dichotomy Nickolas Shackel observes that all Benardete Paradoxes involve two conditions: The linearly ordered set S has no first member For all x in S, E at x iff E nowhere before x Shackel shows these statements to be formally inconsistent, they logically cannot both be true. The paradox assumes that some set of items could satisfy both statements, but no set can. Relevance to theism According to Pruss, the Grim Reaper paradox provides grounds for thinking that the past is finite, i.e. that there must be a first period of time. This would support the Kalam cosmological argument, backing up the premise that the universe began to exist. 
In 2018, Pruss provided a more thorough cosmological argument using causal finitism to motivate a necessary uncaused cause. The argument is as follows: Nothing has an infinite causal history. There are no causal loops. Something has a cause. Therefore, there is an uncaused cause. Pruss then adds the following Causal Principle: 5. Every contingent item has a cause. From this the conclusion can be drawn that there is an uncaused cause which exists necessarily. Pruss states that it is still a major task to argue from a necessary first cause to theism. Whilst The Kalam argument opposes sequences that go infinitely backwards in time, this argument denies all causally backwards-infinite sequences. Notes References Supertasks Infinity Philosophical paradoxes
Grim Reaper paradox
[ "Mathematics" ]
911
[ "Physical quantities", "Time", "Mathematical objects", "Infinity", "Philosophy of time", "Spacetime", "Supertasks" ]
77,338,790
https://en.wikipedia.org/wiki/Motonormativity
Motonormativity (also motornormativity, windshield bias, or car brain) is an unconscious cognitive bias in which the assumption is made that motor car ownership and use is an unremarkable social norm. Coinage The term was coined by Swansea University psychologist Ian Walker and colleagues in a 2023 study. Description and significance Motonormativity is not a bias confined just to motorists, but is a feature of car-centric societies. Walker has argued that a consequence of motonormative bias is that any attempt to reduce car use is not seen plainly for what it is, but interpreted as an attempt to curtail personal freedom. This effect has been documented not just in famously car dependent North America, but around the world. Examples Walker has cited certain road safety campaigns targeting children as an example of motonormativity: by encouraging children to wear brightly coloured clothing to avoid being run over, such campaigns normalize the idea of motor traffic as an accepted danger others must adjust to, in a way which in other contexts would be considered victim blaming. Motonormativity may affect planning decisions so that, for example, a new hospital is built outside a city even though that makes it less accessible to city dwellers who do not have use of a car. See also Normativity Mode of transport Transport poverty References Further reading Cognitive biases Neologisms 2020s neologisms Transport policy Transport terminology Public health
Motonormativity
[ "Physics" ]
292
[ "Physical systems", "Transport", "Transport policy", "Transport terminology" ]
78,761,776
https://en.wikipedia.org/wiki/Transition%20metal%20sulfito%20complex
Transition metal sulfito complexes are coordination compounds containing sulfite (SO3^2−) as a ligand. The inventory is large. Few sulfito complexes have commercial applications, but sulfite is a substrate for the molybdoenzyme sulfite oxidase. Bonding modes In principle, sulfite can bond to metal ions via S or O. To some extent, the sulfito ligand resembles nitrito (NO2^−), which can bind through N or O. Monodentate, S-bonded sulfites are more common than O-bonded sulfito ligands. S-Bonded sulfite is a soft ligand with a strongly trans labilizing effect, as indicated by the rapid aquation of the ligand trans to sulfite in such complexes. In some cases, sulfite serves as a bridging ligand forming M-SO2-O-M' linkages. Examples Reported examples include sulfito complexes of cobalt and rhodium bearing amine co-ligands such as tetraethylenepentamine (tetren) and ethylenediamine (en). Complexes with modified sulfito ligands Being dibasic, sulfito ligands are susceptible to O-alkylation and O-protonation. References Ligands Sulfites
Transition metal sulfito complex
[ "Chemistry" ]
246
[ "Ligands", "Coordination chemistry" ]
69,832,190
https://en.wikipedia.org/wiki/Institute%20of%20Particle%20Physics
The Institute of Particle Physics (IPP) is a Canadian organization that fosters expertise in particle physics research and advanced education. IPP is a nonprofit organization operated by the institutional and individual members for the benefit of particle physics research in Canada. IPP supported projects can be accessed on the group's website. Currently, the IPP Scientific Council administers the IPP Research Scientist Program. The IPP director and council focus on future planning, advocacy with funding sources, and on its activities in international public relations. History The IPP was established in 1971 to administer anticipated funds from the National Research Council Canada to steer the Canadian program working at Fermilab, Argonne National Lab, and SLAC National Accelerator Laboratory. IPP formed a Scientific Council, elected by the membership, to be responsible for the Scientific program and the operation of the Institute. IPP council vetted projects and advocated within the funding regime and internationally. Eventually, the Natural Sciences and Engineering Research Council (NSERC) developed better communication with, and funding model for, experimental groups, alleviating the need for IPP to directly administer research grant funds. Community support Long range planning An important part of the Institute of Particle Physics’ mission is to coordinate community input for long range planning exercises. This involves solicitations of community input, hosting of town hall meetings where the projects underway and future projects are discussed, and the concerns of the community can be aired. This input results in the preparation of a brief, usually solicited by NSERC, that serves as input to the Subatomic Physics long range planning exercise. IPP Early Career Theory Fellowship The Institute of Particle Physics Early Career Theory Fellowship is designed to enable outstanding theory PhD students and postdoctoral researchers to be present for a period at an international university, laboratory, or institute. The purpose of the fellowship is to encourage scientific collaboration between theorists in Canada and those abroad, and also to enhance the career prospects of the junior researcher. IPP high school teacher awards The Institute of Particle Physics has supported Canadian high school teachers attending the CERN high school teacher program. IPP summer student program The Institute of Particle Physics supports Canadian undergraduate students participating in the CERN summer student program. References External links Official website 1971 establishments in Canada Particle physics Science education in Canada
Institute of Particle Physics
[ "Physics" ]
462
[ "Particle physics" ]
69,832,813
https://en.wikipedia.org/wiki/Silicon%20quantum%20dot
Silicon quantum dots are metal-free biologically compatible quantum dots with photoluminescence emission maxima that are tunable through the visible to near-infrared spectral regions. These quantum dots have unique properties arising from their indirect band gap, including long-lived luminescent excited-states and large Stokes shifts. A variety of disproportionation, pyrolysis, and solution protocols have been used to prepare silicon quantum dots, however it is important to note that some solution-based protocols for preparing luminescent silicon quantum dots actually yield carbon quantum dots instead of the reported silicon. The unique properties of silicon quantum dots lend themselves to an array of potential applications: biological imaging, luminescent solar concentrators, light emitting diodes, sensors, and lithium-ion battery anodes. History Silicon has found extensive use in electronic devices; however, bulk Si has limited optical applications. This is largely due to the vertical optical transition between the conduction band and valence band being forbidden because of its indirect band gap. In 1990, Leigh Canham showed that silicon wafers can emit light after being subjected to electrochemical and chemical dissolution. The light emission was attributed to the quantum confinement effect in the resulting porous silicon. This early work provided a foundation for several different types of silicon nanostructures including silicon nanoparticles (quantum dots), silicon nanowires, silicon nanoshells, silicon nanotubes, silicon aerogels, and mesoporous silicon. The first reports of silicon quantum dots emerged in the early 1990s demonstrating luminescence from freestanding oxidized silicon quantum dots. Recognizing the vast potential of their unique optical properties, many researchers explored, and developed methods to synthesize silicon quantum dots. Once these materials could be prepared reliably, methods to passivate the surfaces were critical to rendering these materials solution processable and minimize the effects of oxidation. Many of these surface passivation methods draw inspiration from methods that were first developed for silicon wafers and porous silicon. Currently, silicon quantum dots are being commercialized by Applied Quantum Materials Inc. (Canada). Properties Silicon quantum dots (SiQDs) possess size-tunable photoluminescence that is similar to that observed for conventional quantum dots. The luminescence is routinely tuned throughout the visible and into the near-infrared region by defining particle size. In general, there are two distinct luminescence bands that dominate silicon quantum dot properties. Long-lived luminescence excited states (S-band, slow decay rate) are typically associated with size-dependent photoluminescence ranging from yellow/orange to the near-infrared. Short-lived luminescent excited states (F-band, fast decay rate) are typically associated with size-independent blue photoluminescence and in some cases nitrogen impurities have been implicated in these processes. The S-band is typically attributed to the size-dependent band gap of the silicon quantum dots. This emission can be tuned from yellow (600 nm) into the infrared (1000 to 1100 nm) by changing the diameter of the silicon quantum dots from about 2 to 8 nm. Some reports also describe the preparation of green-emitting silicon quantum dots prepared by decreasing the size, however, these materials are challenging to isolate and require further development. 
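For orientation, the emission wavelengths quoted above translate into photon energies via E = hc/lambda; a minimal sketch (the wavelengths come from the paragraph, everything else is standard constants):

    PLANCK_TIMES_C_EV_NM = 1239.84  # h*c expressed in eV*nm

    def photon_energy_ev(wavelength_nm):
        """Photon energy E = h*c / lambda, in electronvolts."""
        return PLANCK_TIMES_C_EV_NM / wavelength_nm

    for wavelength_nm in (600.0, 800.0, 1100.0):
        print(f"{wavelength_nm:.0f} nm -> {photon_energy_ev(wavelength_nm):.2f} eV")
    # roughly 2.07 eV (yellow/orange) down to 1.13 eV (near-infrared)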
Silicon quantum dot luminescence may also be tuned by defining their surface chemistry. Attaching different surface species allows tuning of silicon quantum dot luminescence throughout the visible spectrum while the silicon quantum dot dimensions remain unchanged. This surface tuning is typically accompanied by the appearance of nanosecond lifetimes like those seen for F-band luminescence. Silicon quantum dot photoluminescence quantum yields are typically in the range of 10 to 40%, with a handful of synthetic protocols providing values in excess of 70%. The long-lived excited state of silicon quantum dot S-band luminescence that starkly contrasts photoemission from conventional quantum dots is often attributed to the inherent indirect band gap of silicon and lends itself to unique material applications. Combining long-lived excited states with the biological compatibility of silicon quantum dots enables time-gated biological imaging. The large Stokes shift allows them to convert photons from the ultraviolet range into the visible or infrared range and is particularly beneficial in the design and implementation of luminescent solar concentrators because it limits self-absorption while down converting the light. Importantly, SiQDs are biologically compatible and do not contain heavy metals (e.g., cadmium, indium, lead). The biological compatibility of these materials has been carefully studied both in vitro and in vivo. During in vitro studies, SiQDs have been found to exhibit limited toxicity in concentrations up to 72 μg/mL in HeLa cells and 30 μg/mL in epithelial-like cells (MDA-MB-231). In vivo studies assessing biological compatibility of SiQDs undertaken in mice and monkeys (rhesus macaques) found "no signs of toxicity clearly attributable to SiQDs." In bacteria, SiQDs have been shown to be less toxic than both CdSe and CdSe/ZnS quantum dots. Synthesis Synthesis methods Silicon quantum dots can be synthesized using a variety of methods, including thermal disproportionation of silicon suboxides (e.g., hydrogen silsesquioxane, a silsesquioxane derivative), and laser and plasma-induced decomposition of silane(s).  These methods reliably provide high quality SiQDs exhibiting size/band gap dependent (S-band) photoluminescence. Top-down methods, such as laser ablation and ball-milling have also been reported. Several solution-based methods have also been presented that often result in materials exhibiting F-band luminescence. Recently, it has been determined that some of these methods do not provide silicon quantum dots, but rather luminescent carbon quantum dots. Size control Defining the size of silicon quantum dots is essential because it influences their optical properties (especially S-band luminescence). Typically, the size of the silicon quantum dots is defined by controlling material synthesis. For example, silicon quantum dot size can be controlled by the reaction temperature during thermal disproportionation of silsesquioxanes. Similarly, the plasma residence time in non-thermal plasma methods is a key factor. Alternatively, post-synthetic protocols, such as density gradient ultracentrifugation, can be used to narrow the size distribution through separation. Surface passivation and modification The synthesis methods used to prepare SiQDs often result in reactive surfaces. Hydride-terminated SiQDs require post synthesis modification because they tend to oxidize under ambient conditions and exhibit limited solution processability. 
These surfaces are often passivated with organic molecules (e.g., alkyl chains) to render SiQDs resistant to oxidation and compatible with common solvents. This passivation can be achieved through methods such as hydrosilylation. Much of the developed surface chemistry draws on well-established procedures used to modify the surface of porous silicon and silicon wafers. Hydrosilylation, which involves the formal addition of a Si-H bond across a C-C double or triple bond, is commonly used to introduce alkenes and alkynes to silicon quantum dot surfaces and also provides access to useful terminal functional groups (e.g., carboxylic acid, ester, silanes) that can define solvent compatibility and provide locations for further derivatization. The covalent bonding between the surface groups and the silicon quantum dot is robust and is not readily exchangeable – this is very different from the ionic bonding commonly used to tether surface groups to other types of quantum dots. Applications Silicon quantum dots have been used in prototype applications owing to their biocompatibility and the ubiquitous nature of silicon, compared to other types of quantum dots. In addition to these fundamental properties, the unique optical properties of silicon quantum dots (i.e., long-lived excited states, large Stokes shift and tunable luminescence) can be advantageous for certain applications. Owing to these (and other) properties, the potential applications of SiQDs are diverse, spanning medical, sensing, defense, and energy-related fields. Biological imaging The biocompatibility of silicon quantum dots, along with their long luminescent lifetimes and near-infrared emission, makes them well-suited for fluorescence imaging in biological systems. Due to this promise, silicon quantum dots have been applied for both in vitro and in vivo imaging. While steady-state imaging is traditionally used, the key advantage of silicon comes into play for time-gated imaging. Time-gated imaging employs a delay between the excitation and the luminescence detection; this allows fluorophores with short lifetimes to relax, thus highlighting those with long lifetimes. This type of fluorescence imaging is useful for biological imaging because many tissues exhibit autofluorescence that can interfere with imaging. By using this technique, the signal-to-background ratio for imaging SiQDs can be increased up to 3x over conventional steady-state imaging techniques. Other modes of imaging have also been explored for silicon nanomaterials. For example, the silicon core of large silicon nanoparticles has been used for 29Si MRI in mouse models. By modifying the surface with a ligand that can coordinate 64Cu, PET imaging is also accessible. Further, doping with paramagnetic centers shows promise for T1- and T2-weighted 1H MRI. Luminescent solar concentrators Luminescent solar concentrators take advantage of the large Stokes shift of the silicon quantum dots to convert light into electricity. The large Stokes shift allows the SiQDs to convert UV light into red/near-infrared light that is effectively absorbed by silicon solar cells, while having limited self-absorption. The LSCs are designed to collect light and use the glass to waveguide the re-emitted light towards the edges of the glass, where the solar cells collect the light and convert it to electricity. 
By designing the LSC carefully, the silicon quantum dots can be prepared as a transparent film over the glass limiting losses due to scattering, while making them suitable as replacements for windows in buildings. To do this effectively, the surface of the silicon quantum dots can be modified with various ligands to improve polymer compatibility. It is also desirable to push the absorbance of the SiQDs into the visible to correspond better with the solar spectrum, which can be accomplished by adding a dye. Light-emitting diodes Quantum dot displays utilize quantum dots to produce pure monochromatic light. Most of the work designing LEDs based on silicon quantum dots have focused on electroluminescence of the silicon quantum dots. By changing the size of the SiQDs, the LED emission can be tuned from deep red (680 nm) to orange/yellow (625 nm). Despite promising initial results and advances towards improving the external quantum efficiency of the resulting LEDs, future work is required to overcome the broad luminescence emission. Sensing Photochemical sensors take advantage of the silicon quantum dot photoluminescence by quenching photon emission in the presence of the analyte. Photochemical sensors based on silicon quantum dots have been used to sense a wide variety of analytes, including pesticides, antibiotics, nerve agents, heavy metals, ethanol, and pH, often employing either electron transfer or fluorescence resonance energy transfer (FRET) as the method of quenching. Hazardous high energy materials, such as nitroaromatic compounds (i.e., TNT and DNT), can be detected at nanogram levels via electron transfer. In the electron transfer method, the energy level of LUMO of the molecule is between the valence and conduction bands of the silicon quantum dots, enabling the transfer of an excited state electron to the LUMO, and, therefore, preventing radiative recombination of the electron hole pair. This also works when the HOMO of the analyte is just above the conduction band of the SiQD, enabling the electron to transfer from the analyte to the SiQD. Alternative methods of detection via quenching of the SiQD core have also been explored. By functionalizing the quantum dots with enzymes, various biologically relevant materials can be sensed due to the formation of metabolites. Using this method, glucose can be detected via the formation hydrogen peroxide that quenches luminescence. Another method uses ratiometric sensing, where a fluorescent molecule is used as a control and the relative intensities of the two fluorescent labels are compared. This method was used to detect organophosphate nerve agents visually at a lower concentration than can be observed for SiQD quenching alone. See also Cadmium-free quantum dot References Semiconductor structures Quantum electronics Quantum dots Optoelectronics Nanoelectronics Nanoparticles by composition Silicon photonics
Silicon quantum dot
[ "Physics", "Materials_science" ]
2,675
[ "Silicon photonics", "Quantum electronics", "Quantum mechanics", "Condensed matter physics", "Nanoelectronics", "Nanotechnology" ]
69,844,095
https://en.wikipedia.org/wiki/Redundant%20elevators
Redundant elevators are additional elevators installed to guarantee greater accessibility of buildings and public transportation systems in the event that an elevator malfunctions or is undergoing repairs. The United States Disability Rights Education and Defense Fund describes redundant elevators as a "best practice" and recommends all transit agencies "consider installing redundant elevators at all existing key stations with elevators in rapid, light, and commuter rail, and at all Amtrak stations with elevators." Legislation United States The Americans with Disabilities Act of 1990 requires elevators for new construction and alterations in public accommodations and commercial facilities, with some exceptions. However, there are no requirements for redundant elevators. Redundant elevators in public transportation Canada Ottawa Ottawa's OC Transpo has committed to installing redundant elevators at all transfer stations and stations where alternative accessible routes cannot be provided. United States Bay Area Rapid Transit All Bay Area Rapid Transit stations have accessible elevators, however most stations lack redundant elevators. BART has committed to increasing elevator redundancy within its system. Connecticut Department of Transportation Connecticut Department of Transportation policy states that at stations without redundant elevators, signage must be posted near all elevators displaying a 24-hour monitored telephone number that connects the passenger to a mobility taxi service. MBTA As part of a 2006 agreement between the Massachusetts Bay Transportation Authority (MBTA) and the Boston Center for Independent Living, MBTA has agreed to install redundant elevators at stations in their system. Metropolitan Transportation Authority Washington Metro Since 2003, the Washington Metro has required that all newly constructed stations must have redundant elevators. As of 2021, all Washington Metro stations are wheelchair accessible but the majority of stations lack redundant elevators. 15 out of 91 stations have at least one redundant elevator, with redundant elevators planned for installation at four other stations. References Accessibility Disability rights Elevators
Redundant elevators
[ "Engineering" ]
347
[ "Building engineering", "Accessibility", "Design", "Elevators" ]
69,845,004
https://en.wikipedia.org/wiki/Conical%20refraction
Conical refraction is an optical phenomenon in which a ray of light, passing through a biaxial crystal along certain directions, is refracted into a hollow cone of light. There are two possible conical refractions, one internal and one external. For internal refraction, there are 4 directions, and for external refraction, there are 4 other directions. For internal conical refraction, a planar wave of light enters through an aperture into a slab of biaxial crystal whose face is parallel to the plane of the light. Inside the slab, the light splits into a hollow cone of light rays. Upon exiting the slab, the hollow cone turns into a hollow cylinder. For external conical refraction, light is focused at a single point aperture on the slab of biaxial crystal, and exits the slab at the other side at an exit point aperture. Upon exiting, the light splits into a hollow cone. This effect was predicted in 1832 by William Rowan Hamilton and subsequently observed by Humphrey Lloyd in the following year. It was possibly the first example of a phenomenon predicted by mathematical reasoning and later confirmed by experiment. History The phenomenon of double refraction was discovered in Iceland spar (calcite) by Erasmus Bartholin in 1669. It was initially explained by Christiaan Huygens using a wave theory of light. The explanation was a centerpiece of his Treatise on Light (1690). However, his theory was limited to uniaxial crystals and could not account for the behavior of biaxial crystals. In 1813, David Brewster discovered that topaz has two axes of no double refraction, and subsequently others, such as aragonite, borax and mica, were identified as biaxial. Explaining this was beyond Huygens' theory. In the same period, Augustin-Jean Fresnel developed a more comprehensive theory that could describe double refraction in both uniaxial and biaxial crystals. Fresnel had already derived the equation for the wavevector surface in 1823, and André-Marie Ampère rederived it in 1828. Many others investigated the wavevector surface of the biaxial crystal, but they all missed its physical implications. In particular, Fresnel mistakenly thought the two sheets of the wavevector surface are tangent at the singular points (by a mistaken analogy with the case of uniaxial crystals), rather than conoidal. William Rowan Hamilton, in his work on Hamiltonian optics, discovered that the wavevector surface has four conoidal points and four tangent conics. These conoidal points and tangent conics imply that, under certain conditions, a ray of light could be refracted into a cone of light within the crystal. He termed this phenomenon "conical refraction" and predicted two distinct types: internal and external conical refraction, corresponding respectively to the conoidal points and tangent conics. Hamilton announced his discovery at the Royal Irish Academy on October 22, 1832. He then asked Humphrey Lloyd to prove this experimentally. Lloyd observed external conical refraction on 14 December with a specimen of aragonite from the Dollonds, which he published in February. He then observed internal conical refraction and published in March. Lloyd then combined both reports, with added details, into one paper. Lloyd discovered experimentally that the refracted rays are polarized, with a polarization angle half that of the turning angle (see below), and told Hamilton about it, who then explained the effect theoretically. At the same time, Hamilton also exchanged letters with George Biddell Airy. 
Airy had independently discovered that the two sheets touch at conoidal points (rather than being tangent), but he was skeptical that this would have experimental consequences. He was only convinced after Lloyd's report. This discovery was a significant victory for the wave theory of light and solidified Fresnel's theory of double refraction. Lloyd's experimental data are described on pages 350–355: "The rays of the internal cone emerged, as they ought, in a cylinder from the second face of the crystal; and the size of this nearly circular cylinder, though small, was decidedly perceptible, so that with solar light it threw on silver paper a little luminous ring, which seemed to remain the same at different distances of the paper from the arragonite." In 1833, James MacCullagh claimed that it is a special case of a theorem he had published in 1830 but did not explicate, since it was not relevant to that particular paper. Cauchy discovered the same surface in the context of classical mechanics. Somebody having remarked, "I know of no person who has not seen conical refraction that really believed in it. I have myself converted a score of mathematicians by showing them the cone of light", Hamilton replied, "How different from me! If I had seen it only, I should not have believed it. My eyes have too often deceived me. I believe it, because I have proved it." Geometric theory A note on terminology: The surface of wavevectors is also called the wave surface, the surface of normal slowness, the surface of wave slowness, etc. The index ellipsoid was called the surface of elasticity since, according to Fresnel, light waves are transverse waves in an elastic medium, in exact analogy with transverse elastic waves in a material. Surface of wavevectors For notational cleanness, define . This surface is also known as the Fresnel wave surface. Given a biaxial crystal with the three principal refractive indices . For each possible direction of planar waves propagating in the crystal, it has a certain group velocity . The refractive index along that direction is defined as . Define, now, the surface of wavevectors as the following set of points In general, there are two group velocities along each wavevector direction. To find them, draw the plane perpendicular to . The indices are the major and minor axes of the ellipse of intersection between the plane and the index ellipsoid. At precisely 4 directions, the intersection is a circle (those are the axes where double refraction disappears, as discovered by Brewster, thus earning them the name of "biaxial"), and the two sheets of the surface of wavevectors collide at a conoidal point. To be more precise, the surface of wavevectors satisfies the following degree-4 equation (page 346): or equivalently, The major and minor axes are the solutions to the constrained optimization problem: where is the matrix with diagonal entries . Since there are 3 variables and 2 constraints, we can use the Karush–Kuhn–Tucker conditions. That is, the three gradients are linearly dependent. Let , then we have Plugging back to , we obtain Let be the vector with the direction of , and the length of . We thus find that the equation of is Multiplying out the denominators, then multiplying by , we obtain the result. In general, along a fixed direction , there are two possible wavevectors: the slow wave and the fast wave , where is the major semiaxis, and is the minor. Plugging into the equation of , we obtain a quadratic equation in : which has two solutions . 
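For reference, the standard textbook form of this relation (Fresnel's equation of wave normals, as given for example in Born and Wolf) can be written in terms of the unit wave normal and the principal phase velocities; the notation here is an assumption and may differ from the conventions used elsewhere in this derivation:

\[
\frac{s_x^{2}}{v_p^{2}-v_1^{2}}+\frac{s_y^{2}}{v_p^{2}-v_2^{2}}+\frac{s_z^{2}}{v_p^{2}-v_3^{2}}=0,
\qquad v_i=\frac{c}{n_i},\quad \lVert\hat{\mathbf{s}}\rVert=1 .
\]

For a fixed wave normal $\hat{\mathbf{s}}$ this is a quadratic equation in $v_p^{2}$, whose two roots give the slow and fast waves just mentioned.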
At exactly four directions, the two wavevectors coincide, because the plane perpendicular to intersects the index ellipsoid at a circle. These directions are where , at which point . Expanding the equation of the surface in a neighborhood of , we obtain the local geometry of the surface, which is a cone subtended by a circle. Further, there exists 4 planes, each of which is tangent to the surface at an entire circle (a trope conic, as defined later). These planes have equation (, pages 349–350)or equivalently, . and the 4 circles are the intersection of those planes with the ellipsoidAll 4 circles have radius . By differentiating its equation, we find that the points on the surface of wavevectors, where the tangent plane is parallel to the -axis, satisfies That is, it is the union of the -plane, and an ellipsoid. Thus, such points on the surface of wavevectors has two parts: Every point with , and every point that intersects with the auxiliary ellipsoid Using the equation of the auxiliary ellipsoid to eliminate from the equation of the wavevector surface, we obtain another degree-4 equation, which splits into the product of 4 planes: Thus, we obtain 4 ellipses: the 4 planar intersections with the auxiliary ellipsoid. These ellipses all exist on the wavevector surface, and the wavevector surface has tangent plane parallel to the axis at those points. By direct computation, these ellipses are circles. It remains to verify that the tangent plane is also parallel to the plane of the circle. Let be one of those 4 planes, and let be one point on the circle in . If , then since the circle is on the surface, the tangent plane to the surface at must contain the tangent line to the circle at . Also, the plane must also contain , the line pass that is parallel to the -axis. Therefore, the plane is spanned by and , which is precisely the plane . This then extends by continuity to the case of . One can imagine the surface as a prune, with 4 little pits or dimples. Putting the prune on a flat desk, the prune would touch the desk at a circle that covers up a dimple. In summary, the surface of wavevectors has singular points at where . The special tangent plane to the surface touches it at two points that make an angle of and , respectively. The angle of the wave cone, that is, the angle of the cone of internal conical refraction, is . Note that the cone is an oblique cone. Its apex is perpendicular to its base at a point on the circle (instead of the center of the circle). Surface of ray vectors The surface of ray vectors is the polar dual surface of the surface of wavevectors. Its equation is obtained by replacing with in the equation for the surface of wavevectors. That is,All the results above apply with the same modification. The two surfaces are related by their duality: The four special planes tangent to the surface of wavevectors on a circle correspond to the 4 conoidal points on the surface of ray vectors. The four conoidal points on the surface of wavevectors correspond to the 4 planes tangent to the surface of ray vectors on a circle. Approximately circular In typical crystals, the difference between is small. In this case, the conoidal point is approximately at the center of the tangent circle surrounding it, and thus, the cone of light (in both the internal and the external refraction cases) is approximately a circular cone. 
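Assuming the common convention $n_1<n_2<n_3$ with the $z$-axis along the largest principal index (this text's own axis convention may differ), the wave-normal optic axes, i.e. the directions of the conoidal points, lie in the $x$–$z$ plane at an angle $\beta$ from the $z$-axis given by

\[
\tan\beta=\frac{n_3}{n_1}\sqrt{\frac{n_2^{2}-n_1^{2}}{n_3^{2}-n_2^{2}}},
\]

which follows from requiring the central section of the index ellipsoid perpendicular to the wave normal to be a circle of radius $n_2$, as described above.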
Polarization In the case of external conical refraction, we have one ray splitting into a cone of planar waves, each corresponding to a point on the tangent circle of the wavevector surface. There is one tangent circle for each of the four quadrants. Take the one with , then take a point on it. Let the point be . To find the polarization direction of the planar wave in direction , take the intersection of the index ellipsoid and the plane perpendicular to . The polarization direction is the direction of the major axis of the ellipse intersection between the plane perpendicular to and the index ellipsoid. Thus, the with the highest corresponds to a light polarized parallel to the direction, and the with the lowest corresponds to a light polarized in a direction perpendicular to it. In general, rotating along the circle of light by an angle of would rotate the polarization direction by approximately . This means that turning around the cone an entire round would turn the polarization angle by only half a round. This is an early example of the geometric phase. This geometric phase of is observable in the difference of the angular momentum of the beam, before and after conical refraction. Algebraic geometry The surface of wavevectors is defined by a degree-4 algebraic equation, and thus was studied for its own sake in classical algebraic geometry. Arthur Cayley studied the surface in 1849. He described it as a degenerate case of tetrahedroid quartic surfaces. These surfaces are defined as those that are intersected by four planes, forming a tetrahedron. Each plane intersects the surface at two conics. For the wavevector surface, the tetrahedron degenerates into a flat square. The three vertices of the tetrahedron are conjugate to the two conics within the face they define. The two conics intersect at 4 points, giving 16 singular points. In general, the surface of wavevectors is a Kummer surface, and all properties of it apply. For example: It is projectively isomorphic to its dual surface. There are at most 16 singular points. Each trope of the surface corresponds to a singular point on its dual. Here, a trope is defined as a double-conic on the surface. In other words, it is where the intersection of the surface with a plane factors into a perfect square. For each Kummer surface, there exists a two-dimensional family of lines, such that each point of the surface is tangent to two lines in the family. More properties of the surface of wavevectors are in Chapter 10 of the classical reference on Kummer surfaces. Every linear material has a quartic dispersion equation, so its wavevector surface is a Kummer surface, which can have at most 16 singular points. That such a material might exist was proposed in 1910, and in 2016, scientists made such a (meta)material, and confirmed it has 16 directions for conical refraction. Diffraction theory The classical theory of conical refraction was essentially in the style of geometric optics, and ignores the wave nature of light. Wave theory is needed to explain certain observable phenomena, such as Poggendorff rings, secondary rings, the central spot and its associated rings. In this context, conical refraction is usually named "conical diffraction" to emphasize the wave nature of light. Observations The angle of the cone depends on the properties of the crystal, specifically the differences between its principal refractive indices. The effect is typically small, requiring careful experimental setup to observe. 
Early experiments used sunlight and pinholes to create narrow beams of light, while modern experiments often employ lasers and high-resolution detectors. Poggendorff observed two rings separated by a thin dark band. This was explained by Voigt. See Born and Wolf, section 15.3, for a derivation. Potter observed in 1841 certain diffraction phenomena that were inexplicable with Hamilton's theory. Specifically, if we follow the two rings created by the internal conic refraction, then the inner ring would contract until it becomes a single point, while the outer ring expands indefinitely. A satisfactory explanation required later developments in diffraction theory. Modern developments The study of conical refraction has continued since its discovery, with researchers exploring its various aspects and implications. Some recent work includes: Paraxial theory: This theory provides a simplified description of conical diffraction for small angles of incidence and has been used to analyze the detailed structure of the light patterns observed. Chiral crystals: The inclusion of optical activity (chirality) in the crystal leads to new phenomena, such as the transformation of the conical cylinder into a "spun cusp" caustic. Absorption and dichroism: The presence of absorption in the crystal significantly alters the behavior of light, leading to the splitting of diabolical points into pairs of branch points and affecting the emergent light patterns. Nonlinear optics: Nonlinear optical effects in biaxial crystals can interact with conical refraction, leading to complex and intriguing phenomena. Applications: Conical refraction had found applications in optical trapping, free-space optical communications, polarization metrology, super-resolution imaging, two-photon polymerization, and lasers. Conical refraction was also observed in transverse sound waves in quartz. See also Birefringence Wave surface Crystal optics Polarization Caustics External links Images of conical refractions with rings of radius larger than one meter, through monocrystal of rhombic sulfur. By Yu. P. Mikhailichenko Images of the Fresnel wave surface, (pages 470–485 ). References Polarization (waves) Optical mineralogy Refraction Optical quantities
Conical refraction
[ "Physics", "Mathematics" ]
3,418
[ "Physical phenomena", "Physical quantities", "Refraction", "Quantity", "Astrophysics", "Optical phenomena", "Optical quantities", "Polarization (waves)" ]
74,408,247
https://en.wikipedia.org/wiki/Reversible%20Hill%20equation
The classic Monod–Wyman–Changeux model (MWC) for cooperativity is generally published in an irreversible form. That is, there are no product terms in the rate equation which can be problematic for those wishing to build metabolic models since there are no product inhibition terms. However, a series of publications by Popova and Sel'kov derived the MWC rate equation for the reversible, multi-substrate, multi-product reaction. The same problem applies to the classic Hill equation which is almost always shown in an irreversible form. Hofmeyr and Cornish-Bowden first published the reversible form of the Hill equation. The equation has since been discussed elsewhere and the model has also been used in a number of kinetic models such as a model of Phosphofructokinase and Glycolytic Oscillations in the Pancreatic β-cells or a model of a glucose-xylose co-utilizing S. cerevisiae strain. The model has also been discussed in modern enzyme kinetics textbooks. Derivation Consider the simpler case where there are two binding sites. See the scheme shown below. Each site is assumed to bind either molecule of substrate S or product P. The catalytic reaction is shown by the two reactions at the base of the scheme triangle, that is S to P and P to S. The model assumes the binding steps are always at equilibrium. The reaction rate is given by: Invoking the rapid-equilibrium assumption we can write the various complexes in terms of equilibrium constants to give: where . The and terms are the ratio of substrate and product to their respective half-saturation constants, namely and and Using the author's own notation, if an enzyme has sites that can bind ligand, the form, in the general case, can be shown to be: The non-cooperative reversible Michaelis-Menten equation can be seen to emerge when we set the Hill coefficient to one. If the enzyme is irreversible the equation turns into the simple Michaelis-Menten equation that is irreversible. When setting the equilibrium constant to infinity, the equation can be seen to revert to the simpler case where the product inhibits the reverse step. A comparison has been made between the MWC and reversible Hill equation. A modification of the reversible Hill equation was published by Westermark et al where modifiers affected the catalytic properties instead. This variant was shown to provide a much better fit for describing the kinetics of muscle phosphofructokinase. References Enzyme kinetics Pharmacology Chemical kinetics Catalysis
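A minimal sketch of the rate law in the one-substrate, one-product form usually attributed to Hofmeyr and Cornish-Bowden, written as Python for concreteness; the symbol choices (sigma, pi, gamma) and the absence of modifier terms are assumptions of this sketch rather than a restatement of the authors' exact notation.

# Sketch of the reversible Hill rate law (uni-uni, no modifiers), assuming the
# commonly cited Hofmeyr-Cornish-Bowden form. Vf is the forward limiting rate,
# S_half and P_half are half-saturation concentrations, Keq the equilibrium
# constant and h the Hill coefficient.
def reversible_hill(s, p, Vf, S_half, P_half, Keq, h):
    """Rate of a cooperative, reversible one-substrate/one-product reaction."""
    sigma = s / S_half          # scaled substrate concentration
    pi = p / P_half             # scaled product concentration
    gamma = p / s               # mass-action ratio
    num = Vf * sigma * (1.0 - gamma / Keq) * (sigma + pi) ** (h - 1)
    den = 1.0 + (sigma + pi) ** h
    return num / den

# With p = 0 this collapses to the irreversible Hill equation, and with h = 1
# to a reversible Michaelis-Menten form, consistent with the limits described above.
print(reversible_hill(s=2.0, p=0.5, Vf=1.0, S_half=1.0, P_half=1.0, Keq=10.0, h=4))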
Reversible Hill equation
[ "Chemistry" ]
550
[ "Catalysis", "Pharmacology", "Chemical reaction engineering", "Enzyme kinetics", "Medicinal chemistry", "Chemical kinetics" ]
72,945,504
https://en.wikipedia.org/wiki/Bou%C3%A9%E2%80%93Dupuis%20formula
In stochastic calculus, the Boué–Dupuis formula is a variational representation for Wiener functionals. The representation has applications in finding large deviation asymptotics. The theorem was proven in 1998 by Michelle Boué and Paul Dupuis. In 2000 the result was generalized to infinite-dimensional Brownian motions and in 2009 extended to abstract Wiener spaces. Boué–Dupuis formula Let be the classical Wiener space and be a -dimensional standard Brownian motion. Then for all bounded and measurable functions we have the following variational representation where: The expectation is with respect to the probability space of . The infimum runs over all processes which are progressively measurable with respect to the augmented filtration generated by . The norm is the -dimensional Euclidean norm. References Stochastic calculus Wiener process Probability theorems Calculus of variations
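In the notation most often used in the literature (the unit time horizon, sign and normalization conventions here are assumptions and vary between papers), the representation for a bounded measurable functional $f$ reads

\[
-\log \mathbb{E}\!\left[e^{-f(W)}\right]
=\inf_{v}\,\mathbb{E}\!\left[\frac{1}{2}\int_{0}^{1}\lVert v_{s}\rVert^{2}\,\mathrm{d}s
+f\!\left(W+\int_{0}^{\cdot}v_{s}\,\mathrm{d}s\right)\right],
\]

with the infimum taken over the progressively measurable drift processes described above.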
Boué–Dupuis formula
[ "Mathematics" ]
171
[ "Theorems in probability theory", "Mathematical theorems", "Mathematical problems" ]
72,949,749
https://en.wikipedia.org/wiki/Alexandrov%27s%20soap%20bubble%20theorem
Alexandrov's soap bubble theorem is a mathematical theorem from geometric analysis that characterizes the sphere through its mean curvature. The theorem was proven in 1958 by Alexander Danilovich Alexandrov. In his proof he introduced the method of moving planes, which was later used successfully by many mathematicians in geometric analysis. Soap bubble theorem Let be a bounded connected domain with a boundary that is of class and has constant mean curvature; then is a sphere. Literature References Differential geometry
Alexandrov's soap bubble theorem
[ "Mathematics" ]
96
[ "Theorems in differential geometry", "Theorems in geometry" ]
72,959,651
https://en.wikipedia.org/wiki/Potassium%20laurate
Potassium laurate is a metal-organic compound with the chemical formula . The compound is classified as a metallic soap, i.e. a metal derivative of a fatty acid (lauric acid). Synthesis Potassium laurate can be prepared via a reaction of lauric acid and potassium hydroxide. Physical properties Soluble in water. Soluble in ethyl benzene. Forms powder or light-tan paste. Uses The compound is used in the cosmetics industry as an emulsifier and surfactant. Also used as a fungicide, insecticide, and bactericide. References Laurates Potassium compounds
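Written out as a balanced equation (a straightforward acid–base neutralization, shown here only for illustration), the preparation described above is

\[
\mathrm{CH_{3}(CH_{2})_{10}COOH + KOH \longrightarrow CH_{3}(CH_{2})_{10}COOK + H_{2}O .}
\]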
Potassium laurate
[ "Chemistry" ]
121
[ "Inorganic compounds", "Inorganic compound stubs" ]
78,769,289
https://en.wikipedia.org/wiki/Hillingar%20effect
The hillingar effect or Arctic mirage is a mirage that occurs when cold air near the surface causes light rays to bend. Light passing from an object through air to an observer always refracts, or bends, in the direction of increasing air density. Especially over cold ocean areas but also over snowfields or glaciers, air density can change with altitude so rapidly that the horizon appears to lift up like the edges of a saucer. Coastlines normally well below the horizon are raised up into view. Early Norsemen called these mirages hillingars. References Optics
Hillingar effect
[ "Physics", "Chemistry" ]
115
[ "Applied and interdisciplinary physics", "Optics", " molecular", "Atomic", " and optical physics" ]
78,772,358
https://en.wikipedia.org/wiki/Computational%20toxicology
Computational toxicology is a multidisciplinary field and area of study, which is employed in the early stages of drug discovery and development to predict the safety and potential toxicity of drug candidates. It integrates in silico methods, or computer-based models, with in vivo, or animal, and in vitro, or cell-based, approaches to achieve a more efficient, reliable, and ethically responsible toxicity evaluation process. Key aspects of computational toxicology include the following: early safety prediction, mechanism-oriented modeling, integration with experimental approaches, and structure-based algorithms. Sean Ekins is a forerunner in the field of computational toxicology among other fields. Historical development The origins of computational toxicology trace back to the 1960s and 1970s when early quantitative structure–activity relationship, or QSAR, models were developed. These models aimed to predict the biological activity of chemicals based on their molecular structures. Advances in computational power during this period allowed for increasingly sophisticated simulations and analyses, laying the groundwork for modern computational approaches. The 1980s and 1990s saw the expansion of the field with the advent of molecular docking, cheminformatics, and bioinformatics tools. The rise of high-throughput screening technologies provided vast datasets, which fueled the need for computational methods to manage and interpret complex toxicological data. In the early 21st century, the establishment of initiatives such as the U.S. Environmental Protection Agency's, or EPA's, ToxCast program marked a significant milestone. ToxCast aimed to integrate computational and experimental data to improve toxicity prediction and reduce reliance on animal testing. During this time, advances in machine learning and artificial intelligence further transformed the field, enabling the analysis of large-scale datasets and the development of predictive models with greater accuracy. Today, computational toxicology continues to evolve, driven by innovations in omics technologies, big data analytics, and regulatory science. It plays a crucial role in risk assessment, drug development, and environmental protection, offering faster and more ethical alternatives to traditional toxicological testing. References Drug discovery Toxicology
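As a toy illustration of the QSAR idea mentioned above (every descriptor, value and endpoint below is invented for the example; this is not any published model), a linear structure–activity fit can be expressed in a few lines of Python:

# Toy QSAR-style sketch: fit a linear model that predicts a toxicity endpoint
# from simple molecular descriptors, the basic pattern behind early QSAR models.
import numpy as np

# columns: logP, molecular weight (scaled), number of H-bond donors (all invented)
X = np.array([
    [1.2, 0.18, 1],
    [2.5, 0.24, 0],
    [3.1, 0.30, 2],
    [0.8, 0.15, 3],
    [2.0, 0.21, 1],
], dtype=float)
y = np.array([0.9, 1.6, 2.1, 0.5, 1.3])            # hypothetical log(1/LC50) values

X1 = np.hstack([X, np.ones((len(X), 1))])          # add an intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)      # ordinary least squares fit
new_compound = np.array([1.8, 0.2, 1, 1.0])        # descriptors of an unseen compound
print("fitted values (training set):", X1 @ coef)
print("prediction for new compound:", new_compound @ coef)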
Computational toxicology
[ "Chemistry", "Biology", "Environmental_science" ]
420
[ "Toxicology", "Life sciences industry", "Drug discovery", "Toxicology stubs", "Medicinal chemistry" ]
75,704,539
https://en.wikipedia.org/wiki/J.%20D.%20Hanawalt
J. Donald "Don" Hanawalt ( – June 26, 1987) was an American physicist who joined The Dow Chemical Company in 1931 and became a Corporate Vice President by 1953. He co-authored (with Harold W. "Sid" Rinn) an article titled, "The Identification of Crystalline Materials" which, along with a 1938 publication titled, Chemical Analysis by X-Ray Diffraction: Classification and Use of X-Ray Diffraction Patterns, are considered the foundations of powder X-ray diffraction as an analytical technique. The work is still in use today as part of the powder diffraction file (PDF) published by the International Centre for Diffraction Data (ICDD), a non-profit scientific organization dedicated to collecting, editing, publishing, and distributing powder diffraction data for the identification of materials. The membership of the ICDD consists of worldwide representation from academe, government, and industry. The ICDD presents an award bearing Hanawalt's name every three years to recognize distinguished, recent work in the field of powder diffraction in honor of his contributions. References Crystallographers Diffraction 1900s births 1987 deaths Year of birth uncertain
J. D. Hanawalt
[ "Physics", "Chemistry", "Materials_science" ]
246
[ "Spectrum (physical sciences)", "Crystallography", "Diffraction", "Crystallographers", "Spectroscopy" ]
75,711,077
https://en.wikipedia.org/wiki/Institut%20a%C3%A9rotechnique
The Institut aérotechnique (IAT) is a French public research laboratory, part of the Conservatoire national des arts et métiers, specializing in aerodynamic studies and located in Saint-Cyr-l'École (Yvelines). The institute was created on the initiative of Henri Deutsch de la Meurthe, who was also the founder of the Aéro-Club de France. It was inaugurated on July 8, 1911. It currently has several wind tunnels, some of which specialize in the automotive, railway and aerospace sectors. In aeronautics, the laboratory has a partnership with the Institut polytechnique des sciences avancées. References External links Official website Research institutes in France Research institutes established in 1911 1911 establishments in France Aerodynamics Aerospace engineering organizations
Institut aérotechnique
[ "Chemistry", "Engineering" ]
156
[ "Aerospace engineering organizations", "Aeronautics organizations", "Aerodynamics", "Aerospace engineering", "Fluid dynamics" ]
71,416,154
https://en.wikipedia.org/wiki/Sergey%20Piletsky
Sergey Piletsky is a professor of Bioanalytical Chemistry and the Research Director for School of Chemistry, University of Leicester, United Kingdom. Education Sergey graduated from Kyiv University, Ukraine, obtaining an MSc in chemistry in 1985 and researched on synthesis of the polymers selective for nucleic acids, for which he was awarded with a PhD in 1991. Cranfield University awarded Sergey with a DSc for his work on molecularly imprinted polymers for diagnostics applications. Awards Sergey is a recipient of Royal Society Wolfson Research Merit Award, Leverhulme Trust Fellowship, DFG Fellowship from the Institute of Analytical Chemistry, Award of President of Ukraine, and Japan Society for Promotion of Science and Technology Fellowship. Research Sergey's work in molecular imprinting focuses on: (i) the fundamental study of the recognition properties of molecularly imprinted polymers; (ii) the development of sensors and assays for environmental and clinical analysis; and (iii) the development of molecularly imprinted polymer nanoparticles for theranostic applications. Sergey introduced computational design into the field of molecular imprinting, by scientifically demonstrating that non-covalent interaction between the template molecule and polymer is through the technique known as 'bite and switch' wherein functional groups first non-covalently bond with the binding site, but during the rebinding step, the polymer matrix forms irreversible covalent bonds with the target molecule. A number of research groups around the world follow his ideas in developing functional imprinted polymers for a variety of applications. Notable publications Surface-grafted molecularly imprinted polymers for protein recognition, A Bossi, SA Piletsky, EV Piletska, PG Righetti, APF Turner, Analytical chemistry 73 (21), 5281-5286 Electrochemical sensor for catechol and dopamine based on a catalytic molecularly imprinted polymer-conducting polymer hybrid recognition element, Dhana Lakshmi, Alessandra Bossi, Michael J Whitcombe, Iva Chianella, Steven A Fowler, Sreenath Subrahmanyam, Elena V Piletska, Sergey A Piletsky, Analytical Chemistry 81 (9), 3576-3584 Piletsky S.A., Turner A.P.F. (2006). New generation of chemical sensors based on molecularly imprinted polymers, in: Molecular imprinting of polymers, S. Piletsky and A.P.F. Turner (eds.), Landes Bioscience, Georgetown, TX, USA Notable patents Rationally Designed Selective Binding Polymers (2010), Publication number: 20100009859, Inventors: Sergey A. Piletsky, Olena Piletska, Khalku Karim, Coulton H. 
Legge, Sreenath Subrahmanyam Electrochemical Sensor (2019) Publication number: 20210239643, Inventors: Sergey Piletsky, Omar Sheej Ahamad, Alvaro Garcia Cruz Polymerisation method, polymers and uses thereof (2006) Publication number: 20060122288, Inventors: Sergey Piletsky, Olena Piletska, Anthony Turner, Khalku Karim, Beining Chen Methods and Kits for determining binding sites (2020) Publication number: 20200033356, Inventors: Sergey Piletsky, Elena Piletska, Francesco Canfarotta, Don Jones Photoreactor and Process for Preparing MIP Nanoparticles (2014) Publication number: 20140228472, Inventors: Sergey Piletsky, Olena Piletska, Antonio Guerreiro, Michael Whitcombe, Alessandro Poma References External links Year of birth missing (living people) Living people British chemists Alumni of Cranfield University British inventors Molecular modelling Computational biology Computational chemistry Biosensors Receptors Biomimetics Sensors Bioinorganic chemistry Ukrainian expatriates in England Ukrainian chemists 21st-century Ukrainian scientists
Sergey Piletsky
[ "Chemistry", "Technology", "Engineering", "Biology" ]
796
[ "Biological engineering", "Molecular physics", "Biomimetics", "Bionics", "Measuring instruments", "Signal transduction", "Bioinformatics", "Receptors", "Computational chemistry", "Theoretical chemistry", "Molecular modelling", "Biosensors", "Computational biology", "Biochemistry", "Senso...
71,420,582
https://en.wikipedia.org/wiki/Alfv%C3%A9n%20surface
The Alfvén surface is the boundary separating a star's corona from the stellar wind defined as where the coronal plasma's Alfvén speed and the large-scale stellar wind speed are equal. It is named after Hannes Alfvén, and is also called Alfvén critical surface, Alfvén point, or Alfvén radius. In 2018, the Parker Solar Probe became the first spacecraft that crossed Alfvén surface of the Sun. Definition Stars do not have a solid surface. However, they have a superheated atmosphere, made of solar material bound to the star by gravity and magnetic forces. The stellar corona extends far beyond the solar surface, or photosphere, and is considered the outer boundary of the star. It marks the transition to the solar wind which moves through the planetary system. This limit is defined by the distance at which disturbances in the solar wind cannot propagate back to the solar surface. Those disturbances cannot propagate back towards a star if the outbound solar wind speed exceeds Mach one, the speed of 'sound' as defined for the solar wind. This distance forms an irregular 'surface' around a star is called the Alfvén surface. It can also be described as a point where gravity and magnetic fields are too weak to contain heat and pressure that push the material away from a star. This is the point where solar atmosphere ends and where solar wind begins. Adhikari, Zank, & Zhao (2019) define the Alfvén surface as: the location at which the large-scale bulk solar wind speed and the Alfvén speed are equal, and thus it separates sub-Alfvénic coronal flow ||≪|| from super-Alfvénic solar wind flow ||≫|| DeForest, Howard, & McComas (2014) define the Alfvén surface as:a natural boundary that marks the causal disconnection of individual packets of plasma and magnetic flux from the Sun itself. The Alfvén surface is the locus where the radial motion of the accelerating solar wind passes the radial Alfvén speed, and therefore any displacement of material cannot carry information back down into the corona. It is thus the natural outer boundary of the solar corona, and the inner boundary of interplanetary space. Alfvén surface separates the sub- and super-Alfvénic regimes of the stellar wind, which influence the structure of any magnetosphere/ionosphere around an orbiting planet in the system. Characterization of the Alfvén surface can serve as an inner-boundary of the habitable zone of the star. Alfvén surface can be found "nominally" at 10–30 star radii. Research Researchers were unsure exactly where the Alfvén critical surface of the Sun lay. Based on remote images of the corona, estimates had put it somewhere between 10 and 20 solar radii from the surface of the Sun. On April 28, 2021, during its eighth flyby of the Sun, NASA's Parker Solar Probe (PSP) encountered the specific magnetic and particle conditions at 18.8 solar radii that indicated that it penetrated the Alfvén surface; the probe measured the solar wind plasma environment with its FIELDS and SWEAP instruments. This event was described by NASA as "touching the Sun". During the flyby, Parker Solar Probe passed into and out of the corona several times. This proved the predictions that the Alfvén critical surface is not shaped like a smooth ball, but has spikes and valleys that wrinkle its surface. 
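Quantitatively, with the standard magnetohydrodynamic definition of the Alfvén speed (written here in SI units; the expression is standard but is not spelled out above), the surface is the locus where the Alfvén Mach number equals one:

\[
v_{\mathrm{A}}=\frac{B}{\sqrt{\mu_{0}\rho}},\qquad
M_{\mathrm{A}}=\frac{u_{\mathrm{sw}}}{v_{\mathrm{A}}},\qquad
M_{\mathrm{A}}=1 \ \text{on the Alfvén surface},
\]

so that the flow is sub-Alfvénic ($M_{\mathrm{A}}<1$) inside the surface and super-Alfvénic ($M_{\mathrm{A}}>1$) outside it, consistent with the value of 0.79 measured below the surface.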
At 09:33 UT on 28 April 2021 Parker Solar Probe entered the magnetized atmosphere of the Sun above the photosphere, crossing below the Alfvén critical surface for five hours into plasma in causal contact with the Sun with an Alfvén Mach number of 0.79 and magnetic pressure dominating both ion and electron pressure. Magnetic mapping suggests the region was a steady flow emerging on rapidly expanding coronal magnetic field lines lying above a pseudostreamer. The sub-Alfvénic nature of the flow may be due to suppressed magnetic reconnection at the base of the pseudostreamer, as evidenced by unusually low densities in this region and the magnetic mapping. Further reading References External links Sun Solar phenomena
Alfvén surface
[ "Physics" ]
845
[ "Physical phenomena", "Stellar phenomena", "Solar phenomena" ]
77,342,962
https://en.wikipedia.org/wiki/Deltares
Deltares is a research institute in the Netherlands specialising in hydraulic engineering research and consulting, along with water management, geotechnics, and infrastructure. The organisation's research mainly focuses on rivers and river deltas, coastal regions, and offshore engineering. As of 2020, Deltares employed over 750 full-time equivalent (FTE) staff members from 42 nationalities, located in Delft and Utrecht. The turnover in 2020 was €112 million. Areas of expertise Deltares operations focus on, among other things: Water management; Water safety; Hydraulic engineering; Groundwater; Soil management; Geology; Ecology; Water quality; River and coastal morphology. Facilities In addition to desk study research, Deltares undertakes physical model research and the development of computer applications. For physical model research, Deltares has several wave flumes (including the Delta Flume), wave basins, and lock facilities. Facilities are also available for research on pumps and pipelines. For geotechnical research, Deltares provides facilities such as the geocentrifuge, a water and soil flume (for dredging research), and a geotechnical laboratory. History Deltares was established on January 1, 2008, following the findings of the Wijffels Committee, from the merger of: GeoDelft; Delft Hydraulics, previously known as the Waterloopkundig Laboratorium (Hydraulic Research Laboratory); parts of TNO–Building and Underground; and parts of the specialized services RIZA, RIKZ, and DWW from Rijkswaterstaat. Initially, the name Delta Institute was considered. However, this name had been used until 1992 by another organization: the Delta Institute for Hydrobiological Research. This organization is now part of the Netherlands Institute of Ecology (NIOO-KNAW). By 2008, the Delft laboratory had become known by the English name WL | Delft Hydraulics, and in an effort to consolidate knowledge with similar institutes, it was merged with other research institutes and sections of Rijkswaterstaat, becoming known by its present name. See also Delta Works Flood control in the Netherlands Rijkswaterstaat Waterloopkundig Laboratorium Zuiderzee Works References External links Deltares Aankondiging Delta-instituut (archived) Coastal engineering Civil engineering Hydraulic engineering Delta Works Research institutes in the Netherlands
Deltares
[ "Physics", "Engineering", "Environmental_science" ]
470
[ "Hydrology", "Coastal engineering", "Physical systems", "Construction", "Hydraulics", "Delta Works", "Civil engineering", "Hydraulic engineering" ]
69,854,203
https://en.wikipedia.org/wiki/Europium%28II%29%20oxide
Europium(II) oxide (EuO) is a chemical compound which is one of the oxides of europium. In addition to europium(II) oxide, there is also europium(III) oxide and the mixed valence europium(II,III) oxide. Preparation Europium(II) oxide can be prepared by the reduction of europium(III) oxide with elemental europium at 800 °C and subsequent vacuum distillation at 1150 °C. Eu2O3 + Eu → 3 EuO It is also possible to synthesize from the reaction of europium oxychloride and lithium hydride. 2 EuOCl + 2 LiH → 2 EuO + 2 LiCl + H2 In modern research, thin films can be manufactured by molecular beam epitaxy directly from europium atoms and oxygen molecules. These films have contamination of Eu3+ of less than 1%. Properties Europium(II) oxide is a violet compound as a bulk crystal and transparent blue in thin film form. It is unstable in humid atmosphere, slowly turning into the yellow europium(II) hydroxide hydrate and then to white europium(III) hydroxide. EuO crystallizes in a cubic sodium chloride structure with a lattice parameter a = 0.5144nm. The compound is often non-stoichiometric, containing up to 4% Eu3+ and small amounts of elemental europium. However, since 2008 high purity crystalline EuO films can be created in ultra high vacuum conditions. These films have a crystallite size of about 4 nm. Europium(II) oxide is ferromagnetic with a Curie Temperature of 69.3 K. With the addition of about 5-7% elemental europium, this increases to 79 K. It also displays colossal magnetoresistance, with a dramatic increase in conductivity below the Curie temperature. One more way to increase the Curie temperature is doping with gadolinium, holmium, or lanthanum. Europium(II) oxide is a semiconductor with a band gap of 1.12 eV. Applications Because of the properties of europium(II) oxide, thin layers of the oxide deposited on silicon are being studied for use as spin filters. Spin filter materials only allow electrons of a certain spin to pass, blocking electrons of the opposite spin. References Europium(II) compounds Oxides Rock salt crystal structure Ferromagnetic materials Semiconductor materials
Europium(II) oxide
[ "Physics", "Chemistry" ]
525
[ "Semiconductor materials", "Ferromagnetic materials", "Oxides", "Salts", "Materials", "Matter" ]
69,856,131
https://en.wikipedia.org/wiki/Prophet%20inequality
In the theory of online algorithms and optimal stopping, a prophet inequality is a bound on the expected value of a decision-making process that handles a sequence of random inputs from known probability distributions, relative to the expected value that could be achieved by a "prophet" who knows all the inputs (and not just their distributions) ahead of time. These inequalities have applications in the theory of algorithmic mechanism design and mathematical finance. Single item The classical single-item prophet inequality was published by , crediting its tight form to D. J. H. (Ben) Garling. It concerns a process in which a sequence of random variables arrive from known distributions . When each arrives, the decision-making process must decide whether to accept it and stop the process, or whether to reject it and go on to the next variable in the sequence. The value of the process is the single accepted variable, if there is one, or zero otherwise. It may be assumed that all variables are non-negative; otherwise, replacing negative values by zero does not change the outcome. This can model, for instance, financial situations in which the variables are offers to buy some indivisible good at a certain price, and the seller must decide which (if any) offer to accept. A prophet, knowing the whole sequence of variables, can obviously select the largest of them, achieving value for any specific instance of this process, and expected value The prophet inequality states the existence of an online algorithm for this process whose expected value is at least half that of the prophet: No algorithm can achieve a greater expected value for all distributions of One method for proving the single-item prophet inequality is to use a "threshold algorithm" that sets a parameter and then accepts the first random variable that is at least as large If the probability that this process accepts an item is , then its expected value is plus the expected excess over that the selected variable (if there is one) has. Each variable will be considered by the threshold algorithm with probability at least and if it is considered will contribute to the excess, so by linearity of expectation the expected excess is at least Setting to the median of the distribution of so that and adding to this bound on expected excess, causes the and terms to cancel each other, showing that for this setting of the threshold algorithm achieves an expected value of at least A different threshold, also achieves at least this same expected value. Generalizations Various generalizations of the single-item prophet inequality to other online scenarios are known, and are also called prophet inequalities. Comparison to competitive analysis Prophet inequalities are related to the competitive analysis of online algorithms, but differ in two ways. First, much of competitive analysis assumes worst case inputs, chosen to maximize the ratio between the computed value and the optimal value that could have been achieved with knowledge of the future, whereas for prophet inequalities some knowledge of the input, its distribution, is assumed to be known. And second, in order to achieve a certain competitive ratio, an online algorithm must perform within that ratio of the optimal performance on all inputs. Instead, a prophet inequality only bounds the performance in expectation, allowing some input sequences to produce worse performance as long as the average is good. 
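A small simulation sketch of the threshold rule (the uniform offer distributions below are arbitrary choices for illustration; a threshold equal to half the prophet's expected value is one of the standard choices that achieves the factor-1/2 guarantee, alongside the median threshold described above):

# Compare the prophet's expected reward E[max X_i] with the single-threshold
# rule that accepts the first offer at least T, for T = E[max X_i] / 2.
import random

random.seed(0)
DISTS = [(0.0, 1.0), (0.5, 2.0), (0.0, 4.0)]   # uniform(a, b) offers, in arrival order

def draw():
    return [random.uniform(a, b) for a, b in DISTS]

TRIALS = 200_000
prophet = sum(max(draw()) for _ in range(TRIALS)) / TRIALS   # Monte Carlo estimate of E[max]

T = prophet / 2.0                               # threshold from the (estimated) prophet value
def threshold_rule(xs, t):
    for x in xs:                                # stop at the first offer >= t
        if x >= t:
            return x
    return 0.0                                  # nothing accepted

algo = sum(threshold_rule(draw(), T) for _ in range(TRIALS)) / TRIALS
print(f"prophet ~ {prophet:.3f}, threshold rule ~ {algo:.3f}, ratio ~ {algo / prophet:.2f}")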
References External links Matroid Prophet Inequalities and Mechanism Design, The Matroid Union An Economic View of Prophet Inequalities Online algorithms Sequential experiments Mechanism design
Prophet inequality
[ "Mathematics" ]
690
[ "Game theory", "Mechanism design" ]
72,965,015
https://en.wikipedia.org/wiki/QSO%20J0439%2B1634
QSO J0439+1634, often referred to by just its coordinates, J0439+1634 or J043947.08+163415.7, is a superluminous quasar and was, until 20 February 2024 (when it was superseded by QSO J0529-4351), considered the brightest quasar in the early universe, with a redshift of z = 6.51. It is approximately 12.873 billion light-years away. With gravitational lensing, the brightness of the quasar is equivalent to about 600 trillion solar luminosities; without this effect, it is about 11 trillion. The quasar's supermassive black hole has a mass of 700 million solar masses. Discovery On April 3, 2018, the ACS/WFC observed and photographed gravitational lensing at the location of the quasar, and further research revealed an extremely bright and large quasar there. References Further reading Astronomical objects discovered in 2019 Quasars Supermassive black holes Taurus (constellation)
QSO J0439+1634
[ "Physics", "Astronomy" ]
223
[ "Black holes", "Unsolved problems in physics", "Supermassive black holes", "Constellations", "Taurus (constellation)" ]
72,966,505
https://en.wikipedia.org/wiki/Algorithmic%20Contract%20Types%20Unified%20Standards
Algorithmic Contract Types Unified Standards, abbreviated to ACTUS, is an attempt to create a globally accepted set of definitions and a way of representing almost all financial contracts. Such standards are regarded as important for transaction processing, risk management, financial regulation, the tokenization of financial instruments, and the development of smart contracts for decentralized finance (DeFi) using blockchain technology. ACTUS is used as a reference standard by the Office of Financial Research (OFR), an arm of the US Treasury. History The difficulties of defining and analyzing financial data were described by Willi Brammertz and his co-authors in a 2009 book, Unified Financial Analysis: The missing links of finance. The nature of the problem is described in an ECB paper, “Modelling metadata in central banks”. This cites the issue of how financial institutions have tried to overcome data silos by building enterprise-wide data warehouses. However, while these data warehouses physically integrate different sources of data, they do not conceptually unify them. For example, a single concept like notional value still might be captured in various ways in fields that might be labeled ‘nominal value,’ ‘current principal,’ ‘par value’ or ‘balance’. Standardization of data would improve internal bank operations, and offer the possibility of large-scale financial risk analytics by leveraging Big Data technology. Key to this is the idea of "contract types". The concepts were expanded upon by Brammertz and Allan I. Mendelowitz in a 2018 paper in the Journal of Risk Finance. They describe the need for software that turns natural language contracts into algorithms – smart contracts – that can automate financial processes using blockchain technology. Financial contracts define exchanges of payments or cashflows that follow certain patterns; in fact 31 patterns cover most contracts. Underlying these contracts there must be a data dictionary that standardizes contract terms. In addition, the smart contracts need access to information representing the state of the world and which affects contractual obligations. This information would include variables such as market risk and counterparty risk factors held in online databases that are outside the blockchain (sometimes called "oracles"). The idea of the standardized algorithmic representation of financial contracts, however, is independent of and predates blockchain technology and digital currencies. In fact, Nick Szabo's definition of smart contracts itself dates back to 1994. The idea is nonetheless highly relevant for blockchains or distributed ledgers and the concept of smart contracts. Brammertz and Mendelowitz argue in a 2019 paper that without standards, the chaos around data in banks today would proliferate on blockchains, because every contract could be written individually. They further argue that of the four conditions set by Szabo, blockchains will usually fulfill only one, namely observability. The authors argue that the adoption of a standard for smart contracts and financial data would reduce the cost of operations for financial firms, provide a computational infrastructure for regulators, reduce regulatory reporting costs, and improve market transparency. Also, it would enable the assessment of systemic risk by directly quantifying the interconnectedness of firms. These ideas led to the ACTUS proposal for a data standard alongside an algorithmic standard.
Together, these can describe most financial instruments through 31 contract types or modular templates. The ACTUS Financial Research Foundation and the ACTUS Users Association develop the structure to implement the ideas. They also control the intellectual property and development approaches. Specifications are developed, maintained, and released on GitHub. In October 2021, ACTUS was added as the second reference after ISO 20022 to a database run by the Office of Financial Research, an arm of the US Treasury. ACTUS is being used to help define five asset classes (equities, debt, options, warrants, and futures) in the OFR's financial instrument reference database (FIRD). A third reference, the Financial Information eXchange (FIX) messaging standard, was added a year later. In 2023 ACTUS became a liaison member of ISO TC68 / SC9. ACTUS implementation ACTUS has been implemented as a set of royalty-free, open standards for representing financial contracts. The standards combine three elements. First, a concise data dictionary that defines the terms present in a particular type of financial contract. Second, a simple but complete taxonomy of the fundamental algorithmic contract type patterns. These incorporate the parts of the data dictionary that apply to a given contract type. Finally, reference code in Java that calculates the cash flow obligations established by the contract, so that they can be accurately projected, analyzed and acknowledged by all parties over the life of the contract. An open standard for the data elements and algorithms of contracts provides consistency, first within financial institutions and second when sharing data among organizations in the finance industry. This data may be used to consolidate the views of product lines within a firm, to manage obligations between institutions, or to meet reporting obligations set by regulators. In addition, ACTUS can assist in the tokenization of financial instruments, and the development of smart contracts for decentralized finance (DeFi) using blockchain. For example, ACTUS contracts have been coded in the Marlowe smart contracts language. References Data modeling Financial software Cryptocurrency projects
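As a rough illustration of what an "algorithmic contract type" means in practice, the sketch below derives a deterministic cash-flow schedule from a small set of standardized terms for a principal-at-maturity style loan. This is not the ACTUS reference implementation (which is in Java); the field names are illustrative rather than official data-dictionary terms, and the event labels only loosely follow ACTUS-style abbreviations.

```python
from dataclasses import dataclass

@dataclass
class BulletLoanTerms:
    """A toy 'contract type': standardized terms for a principal-at-maturity
    style loan (illustrative names, not the official ACTUS dictionary)."""
    notional: float          # principal exchanged at the start
    nominal_rate: float      # fixed annual interest rate
    maturity_years: int      # term of the contract in whole years

def cash_flow_schedule(terms: BulletLoanTerms):
    """Deterministically derive the contract's cash-flow obligations from its
    terms, which is the core idea behind an algorithmic contract standard."""
    events = [(0, "IED", -terms.notional)]  # initial exchange, lender's view
    for year in range(1, terms.maturity_years + 1):
        events.append((year, "IP", terms.notional * terms.nominal_rate))  # interest payment
    events.append((terms.maturity_years, "MD", terms.notional))           # principal at maturity
    return events

for event in cash_flow_schedule(BulletLoanTerms(1_000_000, 0.05, 3)):
    print(event)
```

The point of such a standard is that, once the terms are fixed, every party can recompute exactly the same schedule of obligations.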
Algorithmic Contract Types Unified Standards
[ "Engineering" ]
1,080
[ "Data modeling", "Data engineering" ]
74,418,611
https://en.wikipedia.org/wiki/48%2C000%20Hz
In digital audio, 48,000 Hz (also represented as 48 kHz or DVD Quality) is a common sampling rate. It has become the standard for professional audio and video. 48 kHz is evenly divisible by 24, a common frame rate for media such as film, unlike 44.1 kHz. Origin In the late 1970s, digital audio did not yet have a standard sampling rate, with proprietary sampling rates ranging from 32 kHz up to 50 kHz. As the use of digital audio increased, it became apparent that standardization on a single sampling rate was needed, and work on this began in 1981. A variety of requirements had to be considered before deciding on a sampling rate. Principally, the sampling rate had to be at least double the maximal frequency carried (as per the Nyquist–Shannon sampling theorem) — at least 40 kHz to roughly cover all human-audible frequencies without aliasing distortion. The sampling rates under consideration ranged between 45 kHz and 60 kHz. 60 kHz would have been the ideal sampling rate for film and video use because it would have a complete absence of leap frames, but from the professional audio-only recording perspective, it was considered wastefully high. To synchronize digital audio with television and film, five sampling rates were available that had leap frames but were not too high: 45, 48, 50, 52.5, and 54 kHz. European television chose 48 kHz because it was already broadcasting at 32 kHz, which corresponded to a 3:2 ratio and made conversion easy, with no leap frames. NTSC television had two choices: 48 or 50 kHz. Ultimately, 48 kHz was chosen because there would be a leap frame every 5 frames, unlike 50 kHz, which would have a leap frame every 3 frames of color and black-and-white NTSC video; because European television was already using 48 kHz; and because it was easy to synchronize with 24 frames per second, a common frame rate used in film and video. Differences between 48 and 44.1 kHz Humans cannot easily hear the difference between 48 kHz, 44.1 kHz, and other similar sampling rates. One benefit that 44.1 kHz provides is that it is easier to work with, requiring fewer computer resources simply because it has fewer samples per second, which also results in smaller file sizes. It is generally recommended to use 48 kHz for digital publishing and 44.1 kHz for CD publishing. 48 kHz does have a slightly higher Nyquist frequency than 44.1 kHz, which allows a more gradual low-pass filter to be used without introducing aliasing to the encoded signal. Other common rates Other sampling rates include: 44.1 kHz (also known as CD Quality): Originated in the late 1970s with PCM adaptors, and is still a common sampling rate to this day, mostly due to the CD's adoption of this sampling rate, defined in the Red Book standard in 1980. 44,056 Hz: An obsolete sampling rate used in color NTSC. 88.2, 96 kHz and above: High sampling rates are used for recording and production as they can improve audio signal processing and help reduce aliasing during recording. These higher rates are also used for audiophile listening but have not become the standard for listening, as their principal advantage is being able to encode frequencies above those humans can hear, using more storage and computer resources. See also High-resolution audio Notes References Digital audio Sound measurements Audio engineering
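The frame-rate alignment described above can be illustrated with a few lines of arithmetic (a sketch; the numbers simply restate relationships mentioned in the article):

```python
from fractions import Fraction

# 48 kHz divides evenly into 24 fps film/video frames, while 44.1 kHz does not.
for rate in (48_000, 44_100):
    print(rate, "Hz ->", rate / 24, "samples per frame at 24 fps")

# The 3:2 relationship between 48 kHz and the 32 kHz broadcast rate that made
# European sample-rate conversion straightforward.
print(Fraction(48_000, 32_000))  # 3/2
```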
48,000 Hz
[ "Physics", "Mathematics", "Engineering" ]
715
[ "Sound measurements", "Physical quantities", "Quantity", "Electrical engineering", "Audio engineering" ]
74,420,348
https://en.wikipedia.org/wiki/Generalized%20uncertainty%20principle
The Generalized Uncertainty Principle (GUP) represents a pivotal extension of the Heisenberg Uncertainty Principle, incorporating the effects of gravitational forces to refine the limits of measurement precision within quantum mechanics. Rooted in advanced theories of quantum gravity, including string theory and loop quantum gravity, the GUP introduces the concept of a minimal measurable length. This fundamental limit challenges the classical notion that positions can be measured with arbitrary precision, hinting at a discrete structure of spacetime at the Planck scale. The mathematical expression of the GUP is often formulated as: $\Delta x\,\Delta p \ge \frac{\hbar}{2}\left(1 + \beta\,(\Delta p)^2\right)$. In this equation, $\Delta x$ and $\Delta p$ denote the uncertainties in position and momentum, respectively. The term $\hbar$ represents the reduced Planck constant, while $\beta$ is a parameter that embodies the minimal length scale predicted by the GUP. The GUP is more than a theoretical curiosity; it signifies a cornerstone concept in the pursuit of unifying quantum mechanics with general relativity. It posits an absolute minimum uncertainty in the position of particles, approximated by the Planck length, underscoring its significance in the realms of quantum gravity and string theory where such minimal length scales are anticipated. Various quantum gravity theories, such as string theory, loop quantum gravity, and quantum geometry, propose a generalized version of the uncertainty principle (GUP), which suggests the presence of a minimum measurable length. Multiple forms of the GUP have been introduced in earlier research. Observable consequences The GUP's phenomenological and experimental implications have been examined across low and high-energy contexts, encompassing atomic systems, quantum optical systems, gravitational bar detectors, gravitational decoherence, and macroscopic harmonic oscillators, further extending to composite particles and astrophysical systems. See also Uncertainty principle References External links Research papers on Generalized Uncertainty Principle Quantum gravity String theory Unsolved problems in physics
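Assuming the quadratic GUP form given above, the minimal measurable length follows from a short minimization over $\Delta p$ (a standard manipulation shown here as a sketch, not a result specific to any one of the cited theories):

```latex
\Delta x \;\ge\; \frac{\hbar}{2}\left(\frac{1}{\Delta p} + \beta\,\Delta p\right),
\qquad
\frac{\partial}{\partial(\Delta p)}\left[\frac{1}{\Delta p} + \beta\,\Delta p\right] = 0
\;\Rightarrow\; \Delta p = \frac{1}{\sqrt{\beta}},
\qquad
\Delta x_{\min} = \hbar\sqrt{\beta}.
```

With $\beta$ of order the inverse Planck momentum squared, $\Delta x_{\min}$ comes out of order the Planck length, matching the qualitative statement above.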
Generalized uncertainty principle
[ "Physics", "Astronomy" ]
363
[ "Astronomical hypotheses", "Unsolved problems in physics", "Quantum gravity", "String theory", "Physics beyond the Standard Model" ]
74,424,084
https://en.wikipedia.org/wiki/Najeong%20Well%2C%20Gyeongju
Najeong Well, designated South Korean historic site No. 245 on 20 November 1975, is located in the sacred ("religious") forest of Najeong in Gyeongju. The well is said to be the birthplace of Park Hyeokgeose, the founder of Silla. Gallery In popular culture In the South Korean TV series Hwarang: The Poet Warrior Youth, the young True Bones meet here to fight and, Najeong being a sacred place, are therefore subject to death, a fate they avoid by swearing to become Hwarang with allegiance not to their families but to Silla and its king. References Gyeongju Historic Sites of South Korea
Najeong Well, Gyeongju
[ "Chemistry", "Engineering", "Environmental_science" ]
144
[ "Hydrology", "Water wells", "Environmental engineering" ]
74,430,591
https://en.wikipedia.org/wiki/Pool%20skimmer
A skimmer or surface separator (it separates substances from the surface of a liquid) is an essential accessory for the maintenance and cleaning of the water in a swimming pool. It is used to remove all the surface dirt floating on the water surface, such as leaves, tanning oil and human secretions. These impurities remain suspended on the surface, affect the appearance of the water and are not always removed by the conventional vacuuming process. The skimmer is installed directly in the surface water suction system and also has the function of controlling the water level to prevent accidental overflows. In the United States and Portugal, the use of skimmers in the construction of swimming pools is mandatory, regulated and standardized by competent bodies. Types There are different types of skimmers that can be used for different purposes. The most common types of skimmers include: Manual skimmers: These are basic skimmers that consist of a strong net strung on the end of a long pole. They are used to remove waste and pollutants from the surface of the water. Automatic skimmers: These are the most common variety of skimmers and the one you will most likely see running in any given pool. They are installed at surface level and are designed to remove debris and contaminants from the surface of the water. Standalone skimmers: These skimmers are designed to be used in conjunction with a pump and filter system. They are installed at surface level and are designed to remove debris and contaminants from the surface of the water Drainage opening Typically a skimmer draws water from the pool through a rectangular opening in the wall, at the top of the pool, connected through a device installed in one (or more) walls of the pool. The internal parts of the skimmer are accessed from the pool deck through a circular or rectangular cover, approximately one foot in diameter. If the pool's water pump is operational, it draws water from the pool through a hinged floating chute (which operates from a vertical position at a 90-degree angle to the pool, to prevent leaves and debris from being washed back into the pool by wave action), and down into a removable "skimmer basket", whose purpose is to catch leaves, dead insects and other larger floating debris. The opening visible from the side of the pool is usually 1'0" (300 mm) wide by 6" (150 mm) high, which cuts the water halfway down the center of the opening. Skimmers with wider openings are called "wide angle" skimmers and can be up to 2'0" wide (600 mm). Floating skimmers have the advantage of not being affected by water level, as they adjust to work with the suction rate of the pump and will maintain optimal skimming regardless of water level, leading to a significantly reduced amount of biomaterial in the water. Skimmers should always have a leaf basket or filter between it and the pump to avoid clogging the pipes leading to the pump and filter. Consecutive dilution A consecutive dilution system is usually provided to remove the organic waste in stages after passing through the skimmer. The waste material is trapped within one or more sequential skimmer basket sieves, each with a finer mesh to further dilute the size of the contaminant. Dilution here is defined as the act of making something weaker in strength, content or value. The first basket is placed very close to the mouth of the skimmer. The second is connected to the circulation pump. Here 25% of the water drawn from the main drain at the bottom of the pool meets 75% drawn from the surface. 
The circulation pump sieve basket is easily accessible for maintenance and should be emptied daily. The third sieve is the sand unit. Here the smallest organic waste that has slipped through the previous sieves is trapped by the sand. If not removed regularly, organic waste will continue to decompose and affect water quality. The dilution process makes it easy to remove organic waste. Ultimately, the sand screen can be backwashed to remove smaller trapped organic debris that otherwise leaches ammonia and other compounds into the recirculated water. These additional solutes eventually lead to the formation of disinfection byproducts (DBPs). The sieve baskets are easily removed each day for cleaning, as is the sand unit, which should be backwashed at least once a week. A perfectly maintained consecutive dilution system dramatically reduces the build-up of chloramines and other DBPs. The water returned to the pool must have been cleaned of all organic debris larger than 10 microns in size. Recirculation jets Return water from the consecutive dilution system passes through subsurface return jets. These are designed to impart a turbulent flow as the water enters the pool. The force of this flow is much smaller than the mass of water in the pool, so it takes the path of least pressure upwards, where the surface tension eventually reforms it into a laminar flow at the surface. As the returned water disturbs the surface, it creates a capillary wave. If the return jets are positioned correctly, this wave creates a circular motion within the surface tension of the water, allowing the surface to slowly circulate around the pool walls. Organic debris that floats to the surface through this capillary wave circulation slowly passes through the skimmer mouth, where it is drawn in by the laminar flow and surface tension over the skimmer weir. In a well-designed pool, the circulation caused by disturbed return water helps remove organic debris from the pool surface and directs it to be trapped within the consecutive dilution system for easy disposal. Some return jets are equipped with a rotating filter. Used correctly, these induce deeper circulation, cleaning the water even more. Rotating the jet filters at an angle imparts rotation within the entire depth of the pool water. Orientation to the left or to the right results in a clockwise or counterclockwise rotation, respectively. This has the advantage of cleaning the bottom of the pool and slowly moving sunken inorganic debris into the main drain, where it is removed by the circulator basket screen. In a properly constructed pool, the circulation of water caused by the way it returns from the consecutive dilution system will reduce or even eliminate the need to vacuum the bottom. To obtain maximum rotational force on the main body of water, the consecutive dilution system must be as clean and unblocked as possible to allow maximum flow pressure from the pump. As the water swirls, it also disturbs the organic debris in the lower layers of the water, forcing it up. The rotational force created by the pool's return jets is the most important part of cleaning the pool water and pushing organic debris through the skimmer mouth. If the pool is designed and operated correctly, this circulation is visible and, after a period, reaches even the deepest end, inducing a low-speed vortex over the main drain due to suction.
Correct use of return jets is the most effective way to remove disinfection byproducts caused by deeper decaying organic debris and bring them into the consecutive dilution system for immediate disposal. Additional sanitation methods Salt chlorination units, electronic oxidation systems, ionization systems, microbial disinfection systems with ultraviolet lamps and Tri-Chlor Feeders are other systems for pool sanitation that work independently of or alongside skimmers. Apart from this, the temperature of the water is very important, since if it remains high, it favors the proliferation of algae. Mineral disinfectants Mineral disinfectants for pools and spas use minerals, metals, or elements derived from the natural environment to produce water quality benefits that would otherwise be achieved with harsh or synthetic chemicals. Companies cannot sell a mineral disinfectant in the United States unless it has been registered with the US Environmental Protection Agency (EPA). Two mineral disinfectants are currently registered with the EPA: one is a silver salt with a controlled release mechanism that is applied to calcium carbonate granules that help neutralize pH; the other uses a colloidal form of silver released into water from ceramic beads. Mineral technology takes advantage of the cleaning and filtering qualities of common substances. Silver and copper are well-known oligodynamic substances that are effective in destroying pathogens. Silver has been shown to be effective against harmful bacteria, viruses, protozoa and fungi. Copper is widely used as an algaecide. Alumina derived from aluminates filters harmful materials at the molecular level and can be used to control the rate of delivery of desirable metals such as copper. Working through the pool or spa's filtration system, mineral disinfectants use combinations of these minerals to inhibit algae growth and remove contaminants. Unlike chlorine or bromine, metals and minerals do not evaporate and do not degrade. Minerals can make water noticeably softer and, by replacing harsh chemicals in water, reduce the possibility of red eyes, dry skin and bad odors. Oil and grease on the surface The density of fresh water is 1,000 kilograms per cubic meter, while the density of seawater varies between 1,020 and 1,030 kilograms per cubic meter. Oil is less dense than fresh water and seawater, so it floats in both types of water, but due to the difference in density the oil and fat particles dispersed in the water will reach the surface more quickly in a saltwater or seawater pool; in both cases, recirculating the surface water through skimmers is "the only possible means" of removing them. References External links Automatic Skimmer Robotic Pool Cleaner Swimming pool equipment Cleaning
Pool skimmer
[ "Chemistry" ]
1,999
[ "Cleaning", "Surface science" ]
74,433,644
https://en.wikipedia.org/wiki/Rhenium%20trioxynitrate
Rhenium trioxynitrate, also known as rhenium(VII) trioxide nitrate, is a chemical compound with the formula ReO3NO3. It is a white solid that readily hydrolyzes in moist air. Preparation and properties Rhenium trioxynitrate is prepared by the reaction of ReO3Cl (produced by reacting rhenium trioxide and chlorine) and dinitrogen pentoxide: ReO3Cl + N2O5 → ReO3NO3 + NO2Cl The ReO3Cl can be replaced with rhenium heptoxide, however, this produces an impure product. This compound reacts with water to produce perrhenic acid and nitric acid. When heated above 75 °C, it decomposes to rhenium heptoxide, nitrogen dioxide, and oxygen: 4 ReO3NO3 → 2 Re2O7 + 2 NO2 + O2 A graphite intercalation compound can be produced by reacting a mixture of rhenium trioxynitrate and dinitrogen pentoxide with graphite. Structure X-ray diffraction and IR spectroscopic evidence rejects the formulations NO2+ReO4– or Re2O7·N2O5, but instead suggests a polymeric structure with a monodentate nitrate ligand. References Rhenium compounds Nitrates
Rhenium trioxynitrate
[ "Chemistry" ]
289
[ "Oxidizing agents", "Nitrates", "Salts" ]
74,435,162
https://en.wikipedia.org/wiki/Major%20Baltic%20inflow
The Baltic Sea saltwater inflow, known as the major Baltic inflow (MBI), refers to a significant influx of saline water from the North Sea into the Baltic Sea through the Danish straits. In the Baltic Sea, dense seawater from the North Sea sinks to the bottom and moves along the seabed, displacing the often oxygen-depleted water in the deep basins. Simultaneously, it transports new oxygen-rich water to the deep basins. These inflows are crucial for the Baltic Sea's ecosystem because they alleviate the oxygen depletion that commonly occurs in the deep basins of the poorly mixed sea, and at the same time, they prevent eutrophication caused by internal nutrient loading. The process for MBI formation The water in the Baltic Sea is brackish, meaning it has a low salinity. Each year, approximately 550 km3 of freshwater from rainfall and rivers within its drainage basin flow into the Baltic Sea. However, only about 100 km3 of water evaporates from the Baltic Sea into the atmosphere annually. This results in an excess of approximately 450 km3 of fresh water each year, which accounts for about two percent of the total volume of the Baltic Sea. To maintain a stable water level over the long term, the excess water flows out of the Baltic Sea through the Danish straits into the North Sea. Periodic counterflow from the North Sea to the Baltic Sea prevents the Baltic from gradually turning into a freshwater basin. The saltwater inflow, known as the major Baltic inflow (MBI), is said to occur when there is a strong overflow of saline water from the North Sea to the Baltic Sea over the cross-sections of the Darss Sill (Belt Sea) and the Drogden Sill (The Sound). Such a flow almost entirely destroys the salinity stratification in the areas of the sills for several days. Typically, the inflow event must last for at least five days to be classified as an MBI. During extremely powerful events, the Baltic Sea receives over 100 km3 of very saline water from the ocean, whereas during weaker MBIs, the volume is less than 100 km3, averaging around 70 km3. Major Baltic inflows primarily occur during winter and early spring because weather conditions are most favorable for the inflow of saline water. First, the Baltic Sea must experience easterly and southeasterly winds for about 20 to 30 days, reducing precipitation within the Baltic Sea drainage basin, enhancing outflow from the basin, and causing a drop in the sea level. Before the arrival of the saline inflow, the Baltic Sea's surface level is typically around 26 centimeters lower than usual. Following this, a roughly one-month period of westerly winds begins, during which the sea level in the Kattegat rises, and pressure differences force the saline water from the North Sea through the narrow Danish straits into the Baltic Sea. Throughout the entire inflow process, the Baltic Sea's water level rises on average by about 59 cm, with 38 cm occurring during the preparatory period and 21 cm during the actual saline inflow. The MBI itself typically lasts for 7–8 days. Occurrence of MBIs The formation of an MBI requires specific, relatively rare weather conditions. Between 1897 and 1976, approximately 90 MBIs were observed, averaging about one per year. Occasionally, there are even multi-year periods without any MBIs occurring. Large inflows that effectively renew the deep basin waters occur on average only once every ten years. Very large MBIs have occurred in 1897 (330 km3), 1906 (300 km3), 1922 (510 km3), 1951 (510 km3), 1993/94 (300 km3), and 2014/2015 (300 km3). 
Large MBIs have on the other hand been observed in 1898 (twice), 1900, 1902 (twice), 1914, 1921, 1925, 1926, 1960, 1965, 1969, 1973, 1976, and 2003. The MBI that started in 2014 was the third largest MBI ever recorded in the Baltic Sea; only the inflows of 1951 and 1921/1922 were larger. Previously, it was believed that there had been a genuine decline in the number of MBIs after 1980, but recent studies have changed the scientific understanding of the occurrence of saline inflows. Especially after the lightship Gedser Rev discontinued regular salinity measurements in the Belt Sea in 1976, the picture of the inflows based on salinity measurements remained incomplete. At the Leibniz Institute for Baltic Sea Research (Warnemünde, Germany), an updated time series has been compiled, filling in the gaps in observations and covering Major Baltic Inflows and various smaller inflow events of saline water from around 1890 to the present day. The updated time series is based on direct discharge data from the Darss Sill and no longer shows a clear change in the frequency or intensity of saline inflows. Instead, there is cyclical variation in the intensity of MBIs at approximately 30-year intervals. Effects on the state of the Baltic Sea and its ecosystem Major Baltic inflows (MBIs) are the only natural phenomenon capable of oxygenating the deep saline waters of the Baltic Sea, making their occurrence crucial for the ecological state of the sea. The salinity and oxygen from MBIs significantly impact the Baltic Sea's ecosystems, including the reproductive conditions of marine fish species such as cod, the distribution of freshwater and marine species, and the overall biodiversity of the Baltic Sea. The heavy saline water brought in by MBIs slowly advances along the seabed of the Baltic Proper at a pace of a few kilometers per day, displacing the deep water from one basin to another. Although some oxygen is transported from the North Sea to the Baltic Sea, only a small portion of the oxygen responsible for renewing the deep basins originates from the Baltic Sea entrance area. In the southwestern basins of the Baltic Sea (Arkona Sea, Bornholm Basin), there is already oxygen present in the water column, and the role of the saltwater pulse is to entrain and direct it towards the deep basins of the sea. It has been observed that the oxygen supplied by saline inflows is consumed more rapidly in the Baltic Proper than before. In 1993, the oxygen replenishment was depleted in about 17 months; in 2003, it took approximately 13 months; and in 2015, it was exhausted in just six months. The intense oxygen consumption in the Baltic Sea's seafloor is due to prolonged nutrient loading and the impacts of climate change. Although a single MBI can oxidize both the hydrogen sulfide and the ammonia in the deep water, multiple consecutive inflows would be required to raise the oxygen concentration to a satisfactory level. The so-called "oxygen debt" was estimated to be about 20 million tons in 2020. This is the amount of oxygen that should be transported to the Baltic Proper via saline inflows to raise its oxygen concentration to the 3 mL/L level observed after the MBI of 1993. With a single 200 km3 MBI, approximately 2 million tons of oxygen are transported. References Baltic Sea Oceanography
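A back-of-the-envelope calculation using only the figures quoted in this article (a sketch that ignores ongoing oxygen consumption between inflows) illustrates the scale of the freshwater balance and of the estimated oxygen debt:

```python
# Rough estimates based on the figures quoted in the article.
freshwater_in = 550      # km³/year from rain and rivers
evaporation = 100        # km³/year
surplus = freshwater_in - evaporation
print(f"Freshwater surplus: {surplus} km³/year")   # ~450 km³/year

oxygen_debt = 20e6       # tonnes (estimated 2020 'oxygen debt')
oxygen_per_mbi = 2e6     # tonnes delivered by a single 200 km³ inflow
print(f"MBIs of ~200 km³ needed to repay the debt: {oxygen_debt / oxygen_per_mbi:.0f}")
```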
Major Baltic inflow
[ "Physics", "Environmental_science" ]
1,469
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
78,785,499
https://en.wikipedia.org/wiki/Draupner%20wave
The Draupner wave, also known as the New Year's wave or Draupner freak wave, was a rare freak wave that was the first to be detected by a measuring instrument. The wave, determined to be about 25.6 metres in height, was recorded on 1 January 1995 at Unit E of the Draupner platform, a gas pipeline support complex located in the North Sea about 160 kilometres southwest of the southern tip of Norway. Background The Draupner platform rig, located in block 16/11 of the Norwegian North Sea, offshore from Norway, was built to withstand a calculated 1-in-10,000-years wave and was fitted with state-of-the-art sensors, including a laser rangefinder wave recorder on the platform's underside. Accompanying storm On 31 December, a low-pressure system was located over Sweden, moving to the north-west. This system produced large waves over the North Sea, although none were of significance. Early the next day, a polar low formed over the Norwegian portion of the North Sea, producing heavy winds that set up the formation of the Draupner wave. Discovery The wave itself was first detected at 15:24 UTC on 1 January 1995 by a downward-pointing laser beam located on the Draupner S platform. The laser beam recorded a rogue wave with a maximum wave height of 25.6 metres. Peak elevation above still water level was 18.5 metres. The reading was confirmed by the other sensors. The platform sustained minor damage in the event. In the area, the significant wave height (SWH) at the time was about 12 metres, so the Draupner wave was more than twice as tall and steep as its neighbors, with characteristics that fell outside any known wave model. The wave caused enormous interest in the scientific community. Legacy The wave, one of the largest ever documented in the Atlantic Ocean, helped solidify the initial speculation that rogue waves occur naturally, and as a result it has been heavily studied in the years following the event. See also List of rogue waves Notes and footnotes Notes Footnotes Sources 1995 in science Waves Water waves Rogue wave incidents
Draupner wave
[ "Physics", "Chemistry" ]
489
[ "Physical phenomena", "Applied and interdisciplinary physics", "Water waves", "Waves", "Motion (physics)", "Physical oceanography", "Fluid dynamics" ]
78,789,026
https://en.wikipedia.org/wiki/Herman%20Arend%20Ferguson
Herman Arend Ferguson (25 February 1911 – 10 April 1997) was a Dutch civil engineer and hydraulic engineer who contributed to water management in The Netherlands. He played a central role in the recovery efforts following the inundation of Walcheren in 1944, and the works to repair the significant damage caused by the North Sea flood of 1953. He held senior positions at Rijkswaterstaat, authored several key publications on hydraulic engineering, and was awarded the Order of the Netherlands Lion. Life and career Herman Arend Ferguson was born on 25 February 1911 in Voorburg, the son of George Ferguson and Francesca Hermina van den Brandhof. He was of Scottish ancestry. He graduated in 1938 with a degree in Civil Engineering from Delft University of Technology. After graduation, he joined Rijkswaterstaat (the Dutch Directorate-General for Public Works and Water Management), where he was employed in the (Study Department) of the (Directorate for the Lower Rivers). During 1945–46, Ferguson was part of the (Service for the Reclamation of Walcheren), which was established to oversee the reclamation of the island of Walcheren after the intentional inundation of the island during World War II. The character Rafelding in the non-fiction novel Het verjaagde water by A. den Doolaard is based on Ferguson. Afterwards, Ferguson moved to Vlissingen and worked for Rijkswaterstaat's (Service for Dike Repair Zeeland) following the North Sea flood of 1953. He became head of the (Hydraulic Engineering Division) of the Delta Service from 1956 to 1960, and subsequently headed the (Rotterdam Waterway District). Ferguson was appointed Chief Engineering Director of the (Directorate for the Lower Rivers, 1962–1969) and later of the Delta Service (1969–1976), where he contributed to significant water engineering projects, including the Delta Works and the improvement of the Nieuwe Waterweg. Ferguson was deeply involved in the design and construction of the Haringvliet sluices, which were critical for water flow regulation and flood protection in the Rhine-Meuse-Scheldt Delta. He also contributed to the closure of tidal inlets including the Haringvliet and Brouwershavense Gat, major Delta Works schemes which were required to ensure that flood risks were mitigated. Publications Ferguson authored and contributed to numerous technical reports and publications over his career, including: (Lower Rivers in the 1960s: Personal Memories of a Chief Engineer-Director, 1995) (Dialogue with the North Sea, 1991), an exploration of the influence of human intervention on the Dutch Delta. 
(Delta Vision: A Retrospective on Forty Years of Hydraulic Engineering in Southwest Netherlands, 1988) (The Dutch Delta: A Compromise Between Environment and Technique in the Battle Against Water, 1983) (Six-Barge Push Towage: A Study of General and Nautical Aspects, 1983) (Note on the Concentration of Hydraulic Engineering Works in a Separate Service, 1972) (Hydrological Changes in the Delta Region and Their Impact on Water Management, 1965) (Wave Research in the Delta Region, 1959) (Hydraulic Research for the Design and Construction of Closure Gaps in the Delta Dams, 1958) (Salt Movement on the Rotterdam Waterway Especially During Low Surface Water Discharge, 1957) (Restoration and Improvement Works After the Disaster of 1 February 1953, 1954) (Report on the Condition of Banks and Beaches in Zeeland 1951 VI: Banks Along Keeten, Mastgat, and Zijpe, 1953) (Report on Observations with the "Ocean" in the Mouth Area of the Western Scheldt, 1943) (Introduction to Tide Calculation, 1943). Most of these publications are maintained in electronic format by the Rijkswaterstaat archive. Awards Ferguson was awarded the Order of the Netherlands Lion, and his contributions to the scientific underpinning of the Delta Works Plan earned him an honorary doctorate from Delft University of Technology in 1987. See also Delta Works Flood control in the Netherlands Rijkswaterstaat References External links Trésor der Hollandsche Waterbouw – Digital repository of publications by H.A. Ferguson 1911 births 1997 deaths Dutch engineers Hydraulic engineering Delft University of Technology alumni Academic staff of the Delft University of Technology 20th-century Dutch engineers
Herman Arend Ferguson
[ "Physics", "Engineering", "Environmental_science" ]
875
[ "Hydrology", "Physical systems", "Hydraulics", "Civil engineering", "Hydraulic engineering" ]
78,789,767
https://en.wikipedia.org/wiki/Atilotrelvir
Atilotrelvir (development code GST-HG171) is a drug for the treatment of COVID-19. It has broad-spectrum anti-SARS-CoV-2 activity against different variants (including WT, β, δ, and omicron). In combination with ritonavir, it was approved for use in China in 2023. References COVID-19 drug development Nitriles Pyrrolidones SARS-CoV-2 main protease inhibitors Trifluoromethyl compounds Carboxamides Cyclopropyl compounds Spiro compounds Tert-butyl compounds
Atilotrelvir
[ "Chemistry" ]
135
[ "Drug discovery", "Functional groups", "Organic compounds", "COVID-19 drug development", "Nitriles", "Spiro compounds" ]
78,798,008
https://en.wikipedia.org/wiki/Mackey%20functor
In mathematics, particularly in representation theory and algebraic topology, a Mackey functor is a type of functor that generalizes various constructions in group theory and equivariant homotopy theory. Named after American mathematician George Mackey, these functors were first introduced by German mathematician Andreas Dress in 1971. Definition Classical definition Let $G$ be a finite group. A Mackey functor $M$ for $G$ consists of: For each subgroup $H \le G$, an abelian group $M(H)$, For each pair of subgroups $H \le K$: A restriction homomorphism $R^K_H \colon M(K) \to M(H)$, A transfer homomorphism $T^K_H \colon M(H) \to M(K)$. These maps must satisfy the following axioms: Functoriality: For nested subgroups $L \le H \le K$, $R^H_L \circ R^K_H = R^K_L$ and $T^K_H \circ T^H_L = T^K_L$. Conjugation: For any $g \in G$ and $H \le G$, there are isomorphisms $c_g \colon M(H) \to M(gHg^{-1})$ compatible with restriction and transfer. Double coset formula: For subgroups $H, L \le K$, the following identity holds: $R^K_L \circ T^K_H = \sum_{x \in [L \backslash K / H]} T^L_{L \cap xHx^{-1}} \circ c_x \circ R^H_{x^{-1}Lx \cap H}$. Modern definition In modern category theory, a Mackey functor can be defined more elegantly using the language of spans. Let $C$ be a disjunctive $\infty$-category and $D$ be an additive $\infty$-category ($\infty$-categories are also known as quasi-categories). A Mackey functor is a product-preserving functor $A^{\mathrm{eff}}(C) \to D$, where $A^{\mathrm{eff}}(C)$ is the $\infty$-category of correspondences (spans) in $C$. Applications In equivariant homotopy theory Mackey functors play an important role in equivariant stable homotopy theory. For a genuine $G$-spectrum $X$, its equivariant homotopy groups form a Mackey functor given by: $\underline{\pi}_n(X)(G/H) = [G/H_+ \wedge S^n, X]^G$, where $[-,-]^G$ denotes morphisms in the equivariant stable homotopy category. Cohomology with Mackey functor coefficients For a pointed G-CW complex $X$ and a Mackey functor $M$, one can define equivariant cohomology with coefficients in $M$ as: $H^n_G(X; M) = H^n\big(\mathrm{Hom}(\underline{C}_*(X), M)\big)$, where $\underline{C}_*(X)$ is the chain complex of Mackey functors given by stable equivariant homotopy groups of quotient spaces. References Further reading Dieck, T. (1987). Transformation Groups. de Gruyter. Webb, P. "A Guide to Mackey Functors" Bouc, S. (1997). "Green Functors and G-sets". Lecture Notes in Mathematics 1671. Springer-Verlag. Representation theory Algebraic topology Functors Homological algebra
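As a concrete illustration (not taken from the article), the Burnside Mackey functor of the cyclic group $C_2$ can be written down explicitly and the double coset formula checked numerically. The matrices below encode restriction and transfer in the bases $\{[C_2/C_2], [C_2/e]\}$ for $M(C_2)$ and $\{[e/e]\}$ for $M(e)$; the variable names are ad hoc.

```python
import numpy as np

# Burnside Mackey functor of G = C2.
# M(C2) = Z^2 with basis {[C2/C2], [C2/e]};  M(e) = Z with basis {[e/e]}.
res = np.array([[1, 2]])        # restriction M(C2) -> M(e): forget the group action
tr  = np.array([[0], [1]])      # transfer    M(e) -> M(C2): induce up, [e/e] -> [C2/e]

# Double coset formula for L = H = e inside K = C2:
# res ∘ tr = sum over e\C2/e of conjugation maps; conjugation acts trivially
# on M(e), so the right-hand side is 2·id.
lhs = res @ tr
rhs = 2 * np.eye(1, dtype=int)
assert np.array_equal(lhs, rhs), (lhs, rhs)
print("Double coset formula holds for the C2 Burnside Mackey functor:", lhs)
```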
Mackey functor
[ "Mathematics" ]
452
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Algebraic topology", "Fields of abstract algebra", "Topology", "Mathematical relations", "Functors", "Category theory", "Representation theory", "Homological algebra" ]
75,719,981
https://en.wikipedia.org/wiki/Caladinho%20Stream
The Caladinho Stream (Portuguese: Ribeirão Caladinho) is a watercourse that rises and flows in the Brazilian municipality of Coronel Fabriciano, in the state of Minas Gerais. The source is located near the Caladinho neighborhood and it runs for about 12 kilometers to its mouth in the Piracicaba River through the Industrial Novo Reno, Universitário, Aparecida do Norte, Morada do Vale, Aldeia do Lago and Santa Terezinha II neighborhoods. Its sub-basin covers 9 km2. Pollution and the disorderly occupation of adjacent areas, especially during the 20th century, have caused a propensity to flooding during storms. Solutions are being developed through environmental education projects in the city's schools, hillside containment, drainage and reforestation works. History and occupation Urbanization in the area of the Caladinho Stream Sub-basin began in the 1960s, when the current Caladinho and Santa Terezinha II neighbourhoods were settled. Its name pays homage to Calado, the first name given to the current central area of Coronel Fabriciano. In the 1950s, the site underwent an earthmoving process for the implementation of the BR-381 highway (formerly MG-4), which cut through the city via Presidente Tancredo de Almeida Neves Avenue, but the stretch under federal concession was municipalized after being transferred outside the urban perimeter. Many of the first lots in Coronel Fabriciano were located on the banks of watercourses, which were occupied without planning and resulted in a propensity to flooding during storms. Historically, Tancredo Neves Avenue has been one of the areas most affected by flooding caused by deficiencies in the flow of rainwater into the stream. The impacts of heavy rainfall were reduced after drainage works, the construction of water collection branches, the opening of galleries and gabions between 2007 and 2008. The source of the Caladinho Stream is located in the Caladinho neighborhood, near an allotment. It runs from north to south through the neighborhoods of Industrial Novo Reno, Universitário, Aparecida do Norte, Morada do Vale, Aldeia do Lago and Santa Terezinha II, and flows into the Piracicaba River, covering a distance of around 12 km. In several stretches, including the area that intersects the interior of the campus of the Catholic University Center of Eastern Minas Gerais (Unileste), its course is channeled. Covering 9 km2, its sub-basin is bordered by the Caladão Stream Sub-basin and is part of the Piracicaba River Sub-basin, which is part of the Doce River Basin. Ecology Coronel Fabriciano's Ordinance Plan includes reforestation of the section inside the urban area, an increase in flow capacity and flood control. In 2004, the construction of a sewage treatment plant located between the Mangueiras and Santa Terezinha II neighborhoods to meet the demand from the city's waterways began to be studied. However, the project was suspended because residents in the area feared odors. In the following years, collection networks and interceptors were installed throughout the sub-basin. Sewage from Coronel Fabriciano remained discharged directly into the watercourses bordering the urban perimeter without any management until 2019, when the construction of a treatment plant in Limoeiro, district of Timóteo, to supply 165,000 inhabitants in both cities was authorized. Despite the start of wastewater treatment, irregular dumping of garbage and debris on the banks of the spring can also be spotted in some stretches. 
At a distance of 300 meters from the source, the water is already considered to be outside the parameters of the National Environment Council (CONAMA); it should not be consumed, and contact with it should be avoided. A considerable number of people, including children, use the water for irrigating vegetables, collecting recyclable material or even for leisure. During the rainy season, which usually runs from October to April, the lower areas are affected by floods. Besides pollution, siltation, erosion and damage to biodiversity have been observed in the course of the stream. There are places demarcated as permanent protection areas (APPs), but these have been occupied irregularly. The City Hall regularly weeds, cleans and removes debris from public spaces and provides disposal sites around the city for construction debris, furniture and branches. Environmental education projects in the city's schools, lectures, photographic exhibitions, video presentations and community meetings are held to alleviate the situation. Gallery See also History of Coronel Fabriciano References Bibliography Water streams Bodies of water Hydrology Rivers of Minas Gerais
Caladinho Stream
[ "Chemistry", "Engineering", "Environmental_science" ]
966
[ "Hydrology", "Environmental engineering" ]
75,722,285
https://en.wikipedia.org/wiki/Doce%20River%20Basin
The Doce River Basin (Portuguese: Bacia do rio Doce) is located in the southeastern region of Brazil. According to the Doce River Basin Committee (CBH-Doce), it belongs to the Southeast Atlantic hydrographic region, has a drainage area of 86,175 square kilometers and covers all or part of 229 municipalities. 86% of the basin's area belongs to the state of Minas Gerais, in the Doce River Valley, and 14% to Espírito Santo. Description The main sources of the Doce River emerge in the Mantiqueira and Espinhaço mountain ranges. It is formed from the confluence of the Piranga and Carmo rivers between the municipalities of Ponte Nova, Rio Doce and Santa Cruz do Escalvado, in the state of Minas Gerais. It runs 853 kilometers to its mouth in the Atlantic Ocean at Linhares, on the coast of Espírito Santo. The main tributaries of the Doce River on the left bank include the Piracicaba, Santo Antônio, Corrente Grande, Suaçuí Pequeno, Suaçuí Grande (in Minas Gerais), Pancas and São José (in Espírito Santo). On the right bank, the main affluents are the Casca, Matipó, Caratinga and Manhuaçu rivers (in Minas Gerais), and the Guandu, Santa Joana and Santa Maria do Rio Doce rivers (in Espírito Santo). The Doce River Basin is composed of the main sub-basins of the Piranga, Piracicaba, Santo Antônio, Suaçuí (Pequeno and Grande), Caratinga and Manhuaçu rivers in Minas Gerais; and the Guandu, Santa Joana and Santa Maria do Doce rivers in Espírito Santo. Climate The topography and the sea directly affect the climatic characteristics of the Doce River Basin. In winter, the presence of the South Atlantic anticyclone favors the dominance of high pressure and prevents humidity from rising, forming the dry season. There is also the intrusion of polar air masses, which makes it difficult for temperatures to rise and instabilities to form. The terrain and the coastline are detrimental to the action of polar air and prevent the maintenance of low average temperatures (less than 18 °C) in the coldest month of the year at most altitudes below 300 meters. In summer, the temperature rises easily and the influence of tropical instabilities sets up the wet season. The lower altitude areas, which include a large part of Espírito Santo and the valley bottoms formed by the course of the Doce River, have the highest temperatures and lowest average rainfall, ranging from 1,000 to 1,200 millimeters a year. These conditions characterize the semi-humid tropical climate (Aw, according to the Köppen-Geiger climate classification), present in the Steel Valley, Governador Valadares, Aimorés and Colatina. Throughout the basin, the highest altitudes are affected by annual rainfall of more than 1,200 millimeters and the lowest average temperatures, characterizing the subhumid temperate climate. The northern part, classified as Cwa, is characterized by hot summers, as in Itambacuri, São João Evangelista and Itabira. To the south, where forest plateaus dominate, summers are cool and the climate is designated as Cwb, as occurs in Viçosa, Ponte Nova and part of Caratinga. Geomorphology and geology The relief of the Doce River Basin is significantly rugged and characterized by the mares de morros. The course traces a lowland area called the Doce River interplateau depression, with average altitudes in its interior ranging from 250 to 500 meters on hills of medium slope. 
Until it reaches Governador Valadares, the Doce River follows a southwest–northeast course that intersects the geological unit known as the "Cinturão Atlântico", which is part of the Mantiqueira Province. The dissected plateaus of south-central and eastern Minas Gerais occupy around 70% of the basin's area and have an undulating relief, including landforms such as ridges, valleys and hills. Other relevant geological units in the basin are the Espinhaço mountain range, to the east, composed of ruiniform structures shaped by fluvial erosion and dividing the basins of the Doce, São Francisco and Jequitinhonha rivers; and the Iron Quadrangle, at the western end, with altitudes ranging from 1,000 to 1,700 meters, exceeding 2,000 meters in the Caraça mountain range, under rocks dissected by the geological structure. However, the highest altitude in the basin area is found in Iúna, in Espírito Santo, at 2 627 meters. The subsoil has several aquifers, but the majority (92.6% of the total) are fractured aquifers with low water productivity. Around 3.5% is concentrated in the coastal region of Espírito Santo, where productivity is potentially high but variable. The alluvial aquifer located under the Steel Valley, which accounts for 3% of the mass, is the only one with high productivity and flow, serving as the main source of public supply. The rest of the aquifers are karst or double porosity types, with variable water productivity. Pedology In the Doce River depression, rocks from the gneissic-magmatic-metamorphic complex predominate, including biotite-gneiss, granitic and granite-gneissic rocks; to a lesser extent, rocks from the charnockitic complex. The granite-gneiss rocks of the Precambrian basement under crystalline structures are predominant in the dissected plateaus. In the Espinhaço mountain range, the composition is predominantly quartzite rocks, while in the Iron Quadrangle, itacolumite, itabirite and quartzite ridges stand out. Soils of the red-yellow latosol and red-yellow acrisol classes predominate in the basin. Latosols, registered from flat to mountainous terrain, are drained, dystrophic and alkalic (with a high concentration of aluminum), and formed mainly from gneissic and magmatic rocks, schists and sandy-clay deposits. Acrisol is formed from gneissic and magmatic rocks, charnoquites and schists and is also found on flat to mountainous terrain but is more common in hilly areas. It is the most susceptible to erosion, but also the most suitable for some of the region's agricultural crops, such as corn, rice, coffee and pastures. Humic latosols, litholic soils, cambisols and rock outcrops are found to a lesser extent. Environmental crime On November 5, 2015, a mining tailings dam collapsed in the subdistrict of Bento Rodrigues, in the municipality of Mariana, in the state of Minas Gerais, and caused a torrent of 62 million cubic meters of mining tailings discharged into the Doce River. The event was considered the biggest environmental disaster in Brazil's history and the worst tailings dam accident ever recorded in the world. See also Mariana dam disaster Brumadinho dam disaster References Bibliography Landforms of South America Landforms of Brazil Drainage basins of Brazil Hydrology Water streams Water and the environment
Doce River Basin
[ "Chemistry", "Engineering", "Environmental_science" ]
1,525
[ "Hydrology", "Environmental engineering" ]
75,733,128
https://en.wikipedia.org/wiki/Autogenerative%20high-pressure%20digestion
Autogenerative high-pressure digestion (AHPD), also known as autogenerative high-pressure fermentation, is a biogas production technique that operates under elevated gas pressure. This pressure is generated naturally by the bacteria and archaea through the gases they release. The technique was first described by R. Lindeboom of Wageningen University (WUR) in 2011, when a batch reactor was pressurized to 58 bar, yielding a methane concentration of 96% in the resulting biogas. This method is also commonly referred to as High Pressure Anaerobic Digestion (HPAD) in scientific literature. AHPD leverages the higher solubility of carbon dioxide (CO2), at 0.031 mol/L/bar, compared to methane (CH4), at 0.0016 mol/L/bar. This difference allows more CO2 to dissolve in the digestate, while hydrogen sulfide (H2S) also dissolves more efficiently under pressure. The result is biogas with a higher methane content, which requires less upgrading to meet natural gas standards, ultimately reducing processing costs. Microbial composition Individual species of microorganism have different optimal conditions in which they grow and replicate most rapidly. There is a specific range around that optimum in which a species is able to survive. Factors such as pH, temperature and osmotic pressure (often caused by salinity) all contribute to the optimal conditions of microorganisms. For example, some species are able to survive in extremophile conditions such as extreme radiation, temperature, salinity or pressure. Piezophile microorganisms have their optimal growth condition at a pressure equal to or above 10 megapascals (99 atm; 1,500 psi). Some bacteria and archaea have adapted to life in the deep oceans, where the pressure (hydrostatic pressure) is much higher than at sea level. For example, the methane-producing archaea species Methanocaldococcus, Methanothermococcus, Methanopyrus and Methanotorris have been found in hydrothermal vents in the ocean floor. Research at the University of Groningen (RUG) has shown that the composition of the bacterial community is affected by pressure. This makes it possible to influence the anaerobic digestion process. A further development of this technique is the addition of hydrogen gas to the reactor. According to Henry's law, this gas also dissolves more readily at increased pressure. The result is that it can be better absorbed by bacteria and archaea, which in turn convert the hydrogen gas with the already dissolved carbon dioxide into additional methane. This combination of techniques, a process known as biological methanation, was described in detail by Kim et al. in 2021. On Michael Liebreich's hydrogen ladder 5.0, this form of biogas upgrading is at step C. This is considerably higher than applications as fuel in vehicles, which are spread over steps D to G. Although the technique is usually used as a fermentation process for thick liquid flows and solid biomass, it can also be applied as anaerobic wastewater treatment. In South Korea, researchers have succeeded in operating a UASB reactor (a form of anaerobic wastewater treatment) at 8 bar. This produced a biogas with a methane content of 96.7%. A remarkable finding was that the granules in the sludge that are so characteristic of the UASB technique were well preserved. This was because more extracellular polymeric substance (EPS) was formed in the biofilm. Microorganisms produce EPS to protect themselves against difficult conditions, in this case the extreme pressure.
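The solubility figures quoted above already suggest why pressurised operation enriches the headspace gas in methane. The sketch below is a simplified Henry's-law style estimate; the 10 bar operating point and the 60/40 raw-gas split are illustrative assumptions, and carbonate chemistry and pH effects are ignored.

```python
# Henry's-law style estimate using the solubilities quoted above.
CO2_SOLUBILITY = 0.031    # mol per litre of digestate per bar of CO2 partial pressure
CH4_SOLUBILITY = 0.0016   # mol per litre of digestate per bar of CH4 partial pressure

def dissolved(mol_per_l_per_bar, partial_pressure_bar, liquid_volume_l):
    """Moles of gas held in solution at the given partial pressure."""
    return mol_per_l_per_bar * partial_pressure_bar * liquid_volume_l

# Example: 1 m³ (1000 L) of digestate at 10 bar total pressure with a raw
# biogas of roughly 60% CH4 / 40% CO2 (illustrative assumption).
total_p = 10.0
co2_in_solution = dissolved(CO2_SOLUBILITY, 0.4 * total_p, 1000)
ch4_in_solution = dissolved(CH4_SOLUBILITY, 0.6 * total_p, 1000)
print(f"CO2 dissolved: {co2_in_solution:.0f} mol, CH4 dissolved: {ch4_in_solution:.0f} mol")
# Per bar of partial pressure, CO2 dissolves roughly 20x more readily than CH4,
# so under pressure the headspace gas becomes methane-rich.
```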
See also Biogas Anaerobic digestion References Sustainable energy Biogas technology Anaerobic digestion
Autogenerative high-pressure digestion
[ "Chemistry", "Engineering", "Biology" ]
769
[ "Biofuels technology", "Anaerobic digestion", "Environmental engineering", "Water technology", "Biogas technology" ]
77,360,869
https://en.wikipedia.org/wiki/Tasmanite%20%28mineral%29
Tasmanite, or Tasmanian amber (in the original sense of the word: "discovered in Tasmania"), is a rare regional mineraloid, a brownish-reddish fossilized organic resin from the island of Tasmania, formed in some deposits of the parent rock (tasmanite shale) and known by the same name: tasmanite. Found in bituminized shales on the banks of the Mersey River (northern Tasmania), this mineral was examined and described in 1865 by Professor A. J. Church. Translucent tasmanite is not formed wherever there are deposits of the sedimentary rock of the same name, but only in some layers. Over the next century and a half, almost no new evidence appeared about Tasmanian amber. Origin and genesis The parent rock, also called Tasmanite, is itself a special type of sedimentary rock of organic origin, common not only in Tasmania or Australia, but also throughout the globe. Tasmanite as a rock is a typical oil shale with a very high carbon content, formed from Late Permian and Carboniferous deposits of unicellular algae. In appearance, Tasmanite is a fossilized amorphous mass containing large quantities of remains of spores (cysts) and pollen. In its pure form, tasmanite consists almost entirely of flattened and compressed microspore shells. The initial forming substance is necroma of brackish-water seaweeds from the genus Tasmanites (Newton, 1875). The color of these varieties is always dark and mixed; the tonality varies with location, ranging from gray-brown to black. Due to the high spore content, most samples appear to be covered in yellow pollen. The same picture is visible on the tasmanite fracture. Tasmanite is distinguished by a very high homogeneity of composition; it consists almost exclusively of organic matter, compressed shells of microspores of algae from the genus Tasmanites, and thus can be classified as a standard liptobiolite. The carbon content in pure samples fluctuates around 81% with a small error. After burning Tasmanite, a small amount of white ash remains, retaining the shape of the original sample. Tasmanites throughout the world are among the richest oil source rocks; the conversion factor of organic matter into oil is about 78%. Tasmanite rarely forms independent accumulations; for the most part it accompanies deposits of various coals of autochthonous origin, that is, coals that occur at the site of deposition of the organic matter that gave them their origin. The thickness of Tasmanite layers, as a rule, does not exceed 1.5 meters; moreover, they are not continuous but are separated by layers of sedimentary clay. In one of these layers in the vicinity of the Mersey River, Professor A. J. Church discovered translucent mineral formations in 1864, reminiscent in appearance of dark reddish-brown amber. The discovered layer of Tasmanite shale, according to the researcher, was significantly different from other deposits that he had examined. Fossil resin of a brownish-reddish color and gelified appearance was contained here not just in the form of inclusions, but in very large quantities, literally penetrating the entire layer, which was especially noticeable in cross-section. Translucent reddish-brown or brown varieties grown into the Tasmanite shale ultimately accounted for up to 40% of the main rock and had the appearance of narrow scaly lenses, difficult to separate from the main rock. It is also known that this deposit itself was small and was located near the floodplain of the Mersey River.
Professor Church did not indicate a more precise location. Properties and composition The organic mineraloid described under the name tasmanite was translucent even when unpolished, standing out sharply against the background of the host rock. It had a reddish-brown colour and a waxy lustre. Its hardness on the Mohs scale was approximately 2, and its density, at about 1.8, was significantly higher than that of amber. The fracture of the mineral was conchoidal. Birefringence, dispersion and distinct pleochroism were absent. Professor A. J. Church also investigated the chemical composition, within the limits of his capabilities. According to his results, the mineral melted easily when heated, emitting a strong, probably oil-like odour. Tasmanite also dissolved slowly in hydrochloric acid, ethyl alcohol and turpentine. According to analyses of several samples, the mineral contained 79.34% carbon, 10.41% hydrogen, 4.93% oxygen and 5.32% sulfur; the given figures add up to exactly 100%. From this the approximate formula of tasmanite was derived: C40H62O2S. In addition to the high content of sulfur (organic sulfides), the relatively high specific gravity of this substance is notable. Probably, the inclusions of resinous translucent tasmanite in the bulk of the tasmanite shale were not associated with the presence of any other resin-containing plant remains apart from the rock-forming algae of the genus Tasmanites. Most likely, the compaction and clarification of the mineral were connected with deeper metamorphism of the rock in this particular place. As a German petrographer noted in a comparative study at the end of the 1960s, the appearance of a reddish tint in liptobiolites formed by this alga is characteristic of the fatty-coal stage. Microscopic studies of thin sections of two tasmanites from different deposits showed that Alaskan tasmanites, which have a reddish sheen, are metamorphosed to a much greater extent than the Australian ones, which show the golden-yellow colour of only weak "carbonization". Taking into account all known data, it would be most accurate to define "Tasmanian amber" as a compacted and partially purified infiltrate formed as a result of the metamorphism of the Late Carboniferous shale of the same name. There is also no doubt that the external similarity is not accidental: amber and tasmanite belong to the same group of liptobiolites – fossil coals enriched with the most decomposition-resistant components of plant matter: waxes, fossil resins, and other similar natural compounds. In addition to amber and tasmanite, representatives of this group include, for example, the organic mineral fichtelite. In the old mineralogical literature, a large number of specific names were proposed for resins of various origins; for example, tasmanite is the name given to the compacted Late Carboniferous infiltrates of Tasmania... — Vladimir Zherikhin, "Introduction to paleoentomology", 2008 On the other hand, the trivial name "Tasmanian amber" found in the literature should not be perceived as anything other than a mineralogical metaphor that simplifies the outside perception of a little-known object. Fossil resins in general are often called amber, since this mineraloid is undoubtedly the most popular and best known among stones of organic origin. Fossil resins other than true amber are sometimes also classified as retinites or resinites, but these terms are not clearly defined.
Tasmanite can thus be regarded as a special regional liptobiolite among the specific names for natural resins of various origins adopted in the old mineralogical literature. These undoubtedly include simetite (Sicilian amber), romanite (Romanian amber), chemawinite and cedarite (Canadian Cretaceous resins) and many others that have the status of local "ambers". References See also Tasmanite Cannel coal Kerosene shale Amber Copal List of types of amber Petrified wood Rocks Amber Geology of Tasmania Mining in Tasmania Fossil resins Amorphous solids
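As an illustrative check (not part of Church's original report; the rounding of atomic weights is introduced here), the empirical formula follows from the reported mass percentages by converting them to mole ratios and normalizing by the sulfur value:

    % mole ratios per 100 g of sample
    \mathrm{C}:\ \tfrac{79.34}{12.01}\approx 6.61,\quad
    \mathrm{H}:\ \tfrac{10.41}{1.008}\approx 10.33,\quad
    \mathrm{O}:\ \tfrac{4.93}{16.00}\approx 0.31,\quad
    \mathrm{S}:\ \tfrac{5.32}{32.06}\approx 0.17
    % dividing each ratio by the sulfur value gives approximately
    \mathrm{C}_{40}\mathrm{H}_{62}\mathrm{O}_{2}\mathrm{S}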
Tasmanite (mineral)
[ "Physics" ]
1,633
[ "Amber", "Unsolved problems in physics", "Physical objects", "Rocks", "Amorphous solids", "Matter" ]
77,361,219
https://en.wikipedia.org/wiki/Normalized%20solution%20%28mathematics%29
In mathematics, a normalized solution to an ordinary or partial differential equation is a solution with prescribed norm, that is, a solution which satisfies a condition fixing its norm in advance. In this article, the normalized solution is introduced using the nonlinear Schrödinger equation. The nonlinear Schrödinger equation (NLSE) is a fundamental equation in quantum mechanics and various other fields of physics, describing the evolution of complex wave functions. In quantum physics, normalization means that the total probability of finding a quantum particle anywhere in the universe is unity. Definition and variational framework In order to illustrate this concept, consider a nonlinear Schrödinger equation with prescribed norm, involving a Laplacian operator, a Lagrange multiplier and a nonlinearity. To find a normalized solution to the equation, one considers the associated energy functional, defined on the appropriate Hilbert space and involving the primitive of the nonlinearity, subject to the prescribed-norm constraint (a schematic version of this framework is written out at the end of this entry). A common method of finding normalized solutions is through variational methods, i.e., finding the maxima and minima of the corresponding functional under the prescribed norm. In this way one obtains a weak solution of the equation; moreover, if it satisfies the constraint, it is a normalized solution. A simple example on Euclidean space On a Euclidean space, one can define a function subject to a constraint; by direct calculation the constrained maximum and minimum, and the points at which they are attained, can be determined explicitly. History The exploration of normalized solutions for the nonlinear Schrödinger equation can be traced back to the study of standing wave solutions with prescribed L2-norm. Jürgen Moser first introduced the concept of normalized solutions in the study of regularity properties of solutions to elliptic partial differential equations (elliptic PDEs). Specifically, he used normalized sequences of functions to prove regularity results for solutions of elliptic equations, which was a significant contribution to the field. Inequalities developed by Emilio Gagliardo and Louis Nirenberg played a crucial role in the study of PDE solutions in Lebesgue and Sobolev spaces. These inequalities provided important tools and background for defining and understanding normalized solutions. For the variational problem, early foundational work in this area includes the concentration-compactness principle introduced by Pierre-Louis Lions in 1984, which provided essential techniques for solving these problems. For variational problems with prescribed mass, several methods commonly used to deal with unconstrained variational problems are no longer available. At the same time, a new critical exponent appeared, the L2-critical exponent. From the Gagliardo-Nirenberg inequality, one finds that a nonlinearity which is L2-subcritical, L2-critical or L2-supercritical leads to a different geometry for the functional. In the case where the functional is bounded from below, i.e., the subcritical case, the earliest result on this problem was obtained by Charles-Alexander Stuart using bifurcation methods to demonstrate the existence of solutions. Later, Thierry Cazenave and Pierre-Louis Lions obtained existence results using minimization methods. Then, Masataka Shibata considered Schrödinger equations with a general nonlinear term. In the case where the functional is not bounded from below, i.e., the supercritical case, some new difficulties arise. Firstly, since the Lagrange multiplier is unknown, it is impossible to construct the corresponding Nehari manifold.
Secondly, it is not easy to obtain the boundedness of the Palais-Smale sequence. Furthermore, verifying the compactness of the Palais-Smale sequence is challenging because the relevant embedding is not compact. In 1997, Louis Jeanjean, using a suitable scaling transform, introduced an auxiliary functional whose additional critical-point condition corresponds exactly to the Pokhozhaev identity of the equation. Jeanjean used this additional condition to ensure the boundedness of the Palais-Smale sequence, thereby overcoming the difficulties mentioned earlier. As the first method to address the issue of normalized solutions for an unbounded functional, Jeanjean's approach has become a common method for handling such problems and has been imitated and developed by subsequent researchers. In the following decades, researchers expanded on these foundational results. Thomas Bartsch and Sébastien de Valeriola investigated the existence of multiple normalized solutions to nonlinear Schrödinger equations. The authors focus on finding solutions that satisfy a prescribed norm constraint. Recent advancements include the study of normalized ground states for NLS equations with combined nonlinearities by Nicola Soave in 2020, who examined both subcritical and critical cases. This research highlighted the intricate balance between different types of nonlinearities and their impact on the existence and multiplicity of solutions. On a bounded domain, the situation is very different. In the Pokhozhaev identity for a bounded domain, a boundary term appears, and this boundary term makes it impossible to apply Jeanjean's method. This has led many scholars to explore the problem of normalized solutions on bounded domains in recent years. In addition, there have been a number of interesting results in recent years about normalized solutions for Schrödinger systems, the Choquard equation, and the Dirac equation. Some extended concepts Mass critical, mass subcritical, mass supercritical Consider a homogeneous (power-type) nonlinear term. By the Gagliardo-Nirenberg inequality, there exists a constant such that a corresponding interpolation inequality holds for all admissible functions. From this, a mass-critical exponent arises, together with the notions of mass-subcritical and mass-supercritical nonlinearities; this also determines whether the functional is bounded from below. Palais-Smale sequence Let a functional be defined on a Banach space. A sequence in the space is called a Palais-Smale sequence for the functional at a given level if it satisfies the following conditions: 1. Energy bound: the values of the functional along the sequence converge to the given level. 2. Gradient condition: the Fréchet derivative of the functional along the sequence tends to zero in the dual sense. The Palais-Smale sequence is named after Richard Palais and Stephen Smale. See also Standing wave Sobolev inequality Palais–Smale compactness condition Variational principle Schrödinger picture Mathematical formulation of quantum mechanics Relation between Schrödinger's equation and the path integral formulation of quantum mechanics References Further reading Quantum mechanics Partial differential equations Calculus of variations
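The displayed formulas in this entry were lost in extraction. As a hedged reconstruction of the standard prescribed-mass setting (the symbols u, λ, f, F, a, N and the power-type model nonlinearity are assumptions introduced here, not taken from the original text), the framework usually reads:

    % prescribed-mass nonlinear Schrödinger problem
    -\Delta u = \lambda u + f(u) \ \ \text{in } \mathbb{R}^N, \qquad \int_{\mathbb{R}^N} |u|^2 \, dx = a^2,
    % associated energy functional and constraint set
    E(u) = \frac{1}{2}\int_{\mathbb{R}^N} |\nabla u|^2 \, dx - \int_{\mathbb{R}^N} F(u) \, dx,
    \qquad F(s) = \int_0^s f(t) \, dt,
    \qquad S_a = \{\, u \in H^1(\mathbb{R}^N) : \|u\|_{L^2}^2 = a^2 \,\},
    % for the model nonlinearity f(u) = |u|^{p-2}u the mass-critical exponent is
    p_c = 2 + \tfrac{4}{N}.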
Normalized solution (mathematics)
[ "Physics" ]
1,307
[ "Theoretical physics", "Quantum mechanics" ]
69,867,676
https://en.wikipedia.org/wiki/Lewandowski-Kurowicka-Joe%20distribution
In probability theory and Bayesian statistics, the Lewandowski-Kurowicka-Joe distribution, often referred to as the LKJ distribution, is a probability distribution over positive definite symmetric matrices with unit diagonals. Introduction The LKJ distribution was first introduced in 2009 in a more general context by Daniel Lewandowski, Dorota Kurowicka, and Harry Joe. It is an example of the vine copula approach to constrained high-dimensional probability distributions. The distribution has a single shape parameter η > 0, and the probability density function for a correlation matrix R is proportional to det(R)^(η − 1), with a normalizing constant that is a complicated expression including a product over Beta functions. For η = 1, the distribution is uniform over the space of all correlation matrices; i.e. the space of positive definite matrices with unit diagonal. Usage The LKJ distribution is commonly used as a prior for correlation matrices in Bayesian hierarchical modeling. Bayesian hierarchical modeling often tries to make inferences about the covariance structure of the data, which can be decomposed into a scale vector and a correlation matrix. Instead of a prior on the covariance matrix itself, such as the inverse-Wishart distribution, the LKJ distribution can serve as a prior on the correlation matrix, together with some suitable prior distribution on the scale vector. It has been implemented in several probabilistic programming languages, including Stan and PyMC. References External links Described as part of the Stan manual distribution-explorer Random matrices Bayesian statistics Continuous distributions Multivariate continuous distributions
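As a minimal illustrative sketch (not taken from the article; the function name lkj_log_density_unnormalized and the example matrix are invented here, and the normalizing constant is deliberately omitted), the shape of the density described above can be evaluated as follows:

    import numpy as np

    def lkj_log_density_unnormalized(R, eta):
        # Log of the unnormalized LKJ density, proportional to det(R)**(eta - 1).
        # R   : correlation matrix (symmetric positive definite, unit diagonal)
        # eta : shape parameter; eta = 1 gives a uniform density over correlation matrices
        sign, logdet = np.linalg.slogdet(R)
        if sign <= 0:
            raise ValueError("R must be positive definite")
        return (eta - 1.0) * logdet

    # Example: a 3x3 correlation matrix
    R = np.array([[1.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 1.0]])
    print(lkj_log_density_unnormalized(R, eta=2.0))

In practice one would use the built-in implementations (for example in Stan or PyMC) rather than such a sketch, since they also handle the normalizing constant and sampling.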
Lewandowski-Kurowicka-Joe distribution
[ "Physics", "Mathematics" ]
303
[ "Random matrices", "Matrices (mathematics)", "Statistical mechanics", "Mathematical objects" ]
69,867,945
https://en.wikipedia.org/wiki/Bigoni%E2%80%93Piccolroaz%20yield%20criterion
The Bigoni–Piccolroaz yield criterion is a yielding model, based on a phenomenological approach, capable of describing the mechanical behavior of a broad class of pressure-sensitive granular materials such as soil, concrete, porous metals and ceramics. General concepts The idea behind the Bigoni-Piccolroaz criterion is that of deriving a function capable of transitioning between the yield surfaces typical of different classes of materials simply by changing the function parameters. The reason for this kind of formulation lies in the fact that the materials towards which the model is targeted undergo substantial changes during manufacturing and working conditions. The typical example is that of the hardening of a powder specimen by compaction and sintering, during which the material changes from granular to dense. The Bigoni-Piccolroaz yield criterion can be represented in the Haigh–Westergaard stress space as a convex smooth surface; in fact, the criterion itself is based on the mathematical definition of the surface in that space as a proper interpolation of experimental points. Mathematical formulation The Bigoni-Piccolroaz yield surface is conceived as a direct interpolation of experimental data. The criterion describes a smooth and convex surface, which is closed both in hydrostatic tension and compression and has a drop-like shape, particularly suited to describing frictional and granular materials. The criterion has also been generalized to the case of surfaces with corners. Design principles Since the whole idea of the model is to tailor a function to experimental data, the authors have defined a certain group of features as desirable, even if not essential, among them: smoothness of the surface; possibility of changing the shape and thus the interpolation for a broad class of experimental data from different materials; possibility of representing known criteria with limit sets of parameters; convexity of the surface. Parametric function The Bigoni–Piccolroaz yield criterion is a seven-parameter surface defined in terms of the stress invariants p, q and the Lode angle θ, through a "meridian" function describing the pressure-sensitivity and a "deviatoric" function describing the Lode-dependence of yielding (a commonly cited explicit form is reproduced at the end of this entry). The invariants are defined from the stress tensor, its deviatoric part and the identity tensor, with the dot indicating the scalar product between tensors. A better understanding of these important quantities can be gained from their geometrical representation in the Haigh–Westergaard stress space. Considering the triple of principal stresses and the deviatoric plane, orthogonal to the trisector of the first octant and passing through the origin of the coordinate system, the triple of invariants unequivocally represents a point in the space, acting as a cylindrical coordinate system with the trisector as axis: the first invariant measures the distance of the point from the deviatoric plane; the second measures the distance from the trisector; the Lode angle θ represents the angle between the projections, on the deviatoric plane, of the stress point and of a reference axis. The usage of p and q instead of the corresponding cylindrical coordinates, to which they are proportional, is justified by their easier physical interpretation: p is the hydrostatic pressure at the material point, and q is the von Mises equivalent stress. The described yield function corresponds to a yield surface whose shape makes explicit the relation between the meridian function and the meridian sections, and between the deviatoric function and the deviatoric sections, respectively.
The seven non-negative material parameters define the shape of the meridian and deviatoric sections. In particular, some of the parameters are directly related to mechanical properties: one controls the pressure sensitivity, and two are the yield strengths under isotropic tension and isotropic compression. The other parameters define the shape of the surface when intersected by the meridian and deviatoric planes: two define the meridian section, and two define the deviatoric section. Related yielding criteria Having been designed to allow consistent changes of the surface shape in the Haigh–Westergaard stress space, the Bigoni-Piccolroaz yield surface can be used as a generalized formulation for several criteria, such as the well-known von Mises, Tresca and Mohr–Coulomb criteria. See also Yield surface Yield (engineering) Plasticity (physics) Material failure theory Compaction of ceramic powders External links The Bigoni-Piccolroaz yield surface is a powerful instrument for the characterization of granular materials, and it has attracted great interest in the definition of constitutive models for ceramics, rock and soil, a task of fundamental importance for the better design of products made from these materials. https://bigoni.dicam.unitn.it/ https://apiccolroaz.dicam.unitn.it/ https://www.refracture2-h2020.eu/ References Materials science Plasticity (physics) Yield criteria Structural analysis Ceramic materials Ceramic engineering Powders
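The explicit formulas were lost in extraction. As a hedged sketch, the form commonly cited for this criterion in the literature (the notation M, p_c, c, m, α, β, γ is assumed here and should be checked against Bigoni and Piccolroaz's original paper) is:

    % yield function: meridian part plus Lode-dependent deviatoric part
    F(p,q,\theta) = f(p) + \frac{q}{g(\theta)},
    % "meridian" function (pressure-sensitivity), defined for \Phi \in [0,1], f = +\infty otherwise
    f(p) = -M p_c \sqrt{(\Phi - \Phi^m)\left[2(1-\alpha)\Phi + \alpha\right]},
    \qquad \Phi = \frac{p + c}{p_c + c},
    % "deviatoric" function (Lode-dependence of yielding)
    g(\theta) = \frac{1}{\cos\!\left[\beta\frac{\pi}{6} - \frac{1}{3}\arccos(\gamma\cos 3\theta)\right]},
    % the seven non-negative material parameters
    M,\; p_c,\; c,\; m,\; \alpha,\; \beta,\; \gamma .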
Bigoni–Piccolroaz yield criterion
[ "Physics", "Materials_science", "Engineering" ]
995
[ "Structural engineering", "Applied and interdisciplinary physics", "Deformation (mechanics)", "Structural analysis", "Materials science", "Plasticity (physics)", "Materials", "Powders", "Ceramic materials", "nan", "Aerospace engineering", "Mechanical engineering", "Ceramic engineering", "M...
69,880,929
https://en.wikipedia.org/wiki/The%20Simple%20Function%20Point%20method
The Simple Function Point (SFP) method is a lightweight functional measurement method. The Simple Function Point method was designed by Roberto Meli in 2010 to be compliant with the ISO14143-1 standard and compatible with the International Function Points User Group (IFPUG) Function Point Analysis (FPA) method. The original method (SiFP) was presented for the first time at a public conference in Rome (SMEF2011). The method was subsequently described in a manual produced by the Simple Function Point Association: the Simple Function Point Functional Size Measurement Method Reference Manual, available under the Creative Commons Attribution-NoDerivatives 4.0 International Public License. Adoption by IFPUG In 2019, the Simple Function Points Method was acquired by the IFPUG, to provide its user community with a simplified Function Point counting method and to make functional size measurement easier yet reliable in the early stages of software projects. The short name became SFP. The SPM (Simple Function Point Practices Manual) was published by IFPUG in late 2021. Basic concept When the SFP method was proposed, the most widely used software functional size measurement method was IFPUG FPA. However, IFPUG FPA had (and still has) a few shortcomings: It is not easy to apply. It requires certified personnel, and the productivity of measurement is relatively low (between 400 and 600 Function Points per day according to Capers Jones, between 200 and 300 Function Points per day according to experts from Total Metrics). The measurement is partly subjective, since some of its measurement rules have to be suitably interpreted by the person who performs the measurement. The diffusion of the method in the software development community is quite limited. To overcome at least some of these problems, the SFP method was defined to provide the following characteristics: Easy to apply; Less subject to interpretation, being based on quite straightforward definitions; Easy to learn: specifically, people familiar with IFPUG FPA can learn SFP very quickly with very little effort; Compatible with IFPUG FPA: specifically, a measure of size expressed in UFP should be equal to the measure expressed in SiFP (in this article "UFP", for unadjusted Function Point, designates the unit of measure defined by IFPUG FPA, and SiFP the unit of measure defined by SFP). The sought characteristics were achieved as follows. IFPUG FPA requires that: 1) logical data files are identified; 2) transactions are identified; 3) logical data files are classified into Internal Logical Files (ILF) and External Interface Files (EIF), and every transaction is classified as External Input (EI), External Output (EO) or External Query (EQ); 4) every ILF and EIF is weighted, based on its Record Element Types (RET) and Data Element Types (DET); 5) every EI, EO and EQ is weighted, based on its File Types Referenced (FTR) and the DET exchanged through the borders of the application being measured. Of these activities, SFP requires only the first two, i.e., the identification of logical data files and transactions. Activities 4) and 5) are the most time consuming, since they require that every data file and transaction is examined in detail: skipping these phases makes the SFP method both quicker and easier to apply than IFPUG FPA. In addition, most of the subjective interpretation is due to activities 4) and 5), and partly also to activity 3): skipping these activities makes the SFP method also less prone to subjective interpretation.
The concepts used in the definition of SFP are a small subset of those used in the definition of IFPUG FPA, therefore learning SFP is easier than learning IFPUG FPA, and it is immediate for those who already know IFPUG FPA. In practice, only the concepts of logical data file and transaction have to be known. Finally, the weights assigned to data files and transactions make the size in SFP very close, on average, to the size expressed in Function Points. Definition The logical data files are named Logical Files (LF) in the SFP method. Similarly, transactions are named Elementary Processes (EP). Unlike in IFPUG FPA, there is no classification or weighting of the Base Functional Components (BFC, as defined in the ISO14143-1 standard). The size of an EP is 4.6 SFP, while the size of a LF is 7.0 SFP. Therefore the size expressed in SFP is obtained from the number of logical files (#LF) and the number of elementary processes (#EP) belonging to the software application being measured: size [SFP] = 4.6 × #EP + 7.0 × #LF. Empirical evaluation of the SFP method Empirical studies have been carried out, aiming at evaluating the convertibility of SFP and UFP measures and at comparing the SFP and UFP measures in supporting the estimation of software development effort. Convertibility between SFP and FPA measures In the original proposal of the SiFP method, a dataset from the ISBSG, including data from 768 projects, was used to evaluate the convertibility between UFP and SiFP measures. This study showed that, on average, sizes expressed in SiFP and UFP are approximately equal. Another study also used an ISBSG dataset to evaluate the convertibility between UFP and SiFP measures. The dataset included data from 766 software applications. Via ordinary least squares regression, an approximately one-to-one relation between the two measures was again found. Based on these empirical studies, it seems that the two units are approximately equivalent (note that this approximate equivalence holds on average: in both studies an average relative error around 12% was observed). However, a third study found a noticeably different conversion rate. This study used data from only 25 Web applications, so it is possible that the conversion rate is affected by the specific application type or by the relatively small size of the dataset. In 2017, a study evaluated the convertibility between UFP and SiFP measures using seven different datasets. Every dataset was characterized by a specific conversion rate. Noticeably, for one dataset no linear model could be found; instead, a statistically significant non-linear model was found. In conclusion, available evidence shows that one SiFP is approximately equivalent to one UFP, but this equivalence depends on the data being considered, besides being true only on average. Considering that the IFPUG SFP basic elements (EP, LF) are totally equivalent to the original SiFP elements (UGEP, UGDG), the previous results hold for the IFPUG SFP method as well. Using SFP for software development effort estimation IFPUG FPA is mainly used for estimating software development effort. Therefore, any alternative method that aims at measuring the functional size of software should support effort estimation with the same level of accuracy as IFPUG FPA. In other words, it is necessary to verify that effort estimates based on SFP are at least as good as the estimates based on UFP. To perform this verification, an ISBSG dataset was analyzed, and models of effort vs. size were derived, using ordinary least squares regression, after log-log transformations. The effort estimation errors were then compared. It turned out that the two models yielded extremely similar estimation accuracy.
A following study analyzed a dataset containing data from 25 Web applications. Ordinary least squares regression was used to derive UFP-based and SiFP-based effort models. Also in this case, no statistically significant estimation differences could be observed. References External links The introduction to Simple Function Points (SFP) from IFPUG. Software metrics Software engineering costs
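As a minimal sketch of the sizing rule defined above (the function name and the example counts are illustrative, not taken from the manuals):

    def simple_function_points(num_elementary_processes, num_logical_files):
        # SFP size: 4.6 SFP per elementary process (EP) plus 7.0 SFP per logical file (LF)
        return 4.6 * num_elementary_processes + 7.0 * num_logical_files

    # Example: an application with 30 transactions and 12 logical data files
    print(simple_function_points(30, 12))  # 222.0 SFP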
The Simple Function Point method
[ "Mathematics", "Engineering" ]
1,553
[ "Software engineering", "Quantity", "Metrics", "Software metrics" ]
69,882,474
https://en.wikipedia.org/wiki/Ivar%20Werner%20Oftedal
Ivar Werner Oftedal (25 February 1894 – 30 May 1976) was a Norwegian mineralogist. He was born in Larvik. He took his cand.real. degree in 1929 and the dr.philos. degree in 1941, both at the University of Oslo. After 29 years as a conservator at the University Museum, he was a professor of geology at the University of Oslo from 1949 to 1964, specializing in mineralogy, geochemistry and crystallography. References 1894 births 1976 deaths People from Larvik Norwegian mineralogists Norwegian geochemists crystallographers University of Oslo alumni Academic staff of the University of Oslo
Ivar Werner Oftedal
[ "Chemistry", "Materials_science" ]
136
[ "Crystallographers", "Crystallography", "Geochemists", "Norwegian geochemists" ]
69,883,674
https://en.wikipedia.org/wiki/Josette%20Bellan
Josette Bellan (née Rosentweig) is a Romanian-French-American aerospace engineer and fluid dynamicist known for her research on turbulence in high-pressure reactions, and on the interactions between fluid dynamics and thermodynamics in these reactions. She is a senior research scientist at the Jet Propulsion Laboratory (JPL) and visiting associate in the Department of Mechanical and Civil Engineering of the California Institute of Technology (Caltech). Education and career Bellan is originally from Romania, and grew up under the communist government there. She was educated in France, earning a baccalauréat in 1964 after studying at the Lycée Jules-Ferry (Paris), and a master's degree in 1969 at the Paris University of Sciences. She and her family visited Princeton University on a vacation in 1969, and she and her twin sister Larisse (who died in 1980) were encouraged to apply to Princeton for graduate study. They did, becoming the second and third women graduate students in Princeton's engineering school, after Genevieve Segol, a civil and geological engineer who also came to Princeton from France, and the first women in Princeton's graduate program in aerospace and mechanical engineering. At Princeton, the twins were supported by Zonta International through Amelia Earhart Fellowships, and their arrival at Princeton was reported in The New York Times and French newspapers. She earned another master's degree at Princeton in 1972, and, with her sister, completed her Ph.D. in 1974. Her dissertation was A Theory of Turbulent Combustion and Nitric Oxide Formation for Dual-carbureted Stratified-charge Engines, supervised by William A. Sirignano. After continuing at Princeton as a postdoctoral researcher, Bellan joined the Jet Propulsion Laboratory in 1978. She was a lecturer in jet propulsion at Caltech in 1992, became a visiting associate there in 1995, and was Chancellor's Distinguished Lecturer at the University of California, Irvine from 1995 to 1996. She became a senior research scientist at JPL in 1997. She is a citizen of both France and the United States. Research Bellan's research involves the simulation of reacting mixtures, and has shown the importance in these simulations of combining the effects of both fluid motion and heat transfer, down to the smallest levels of scale. Applications of this work include the development of bio-fuels, improving combustion efficiency in ground and aerospace vehicles and their effects on climate change, understanding the atmosphere of Venus, and modeling the interaction of rocket plumes with the surface of the Moon. Privacy activism A 2004 policy of the Bush Administration, Homeland Security Presidential Directive 12, led NASA to require background checks on scientists at JPL. Bellan became part of a group of JPL employees who filed a lawsuit in 2007 against the policy, claiming that the checks into her personal life were too intrusive and harmed the collegial atmosphere of JPL, that as contract employees rather than government employees they should not be subject to the same checks required of people who performed classified research, and that this sort of government intrusiveness, typical of the communist regime that Bellan grew up under, was inappropriate for a democracy. The case, NASA v. Nelson, went to the Supreme Court of the United States in 2010, but the court upheld the government policy. After the loss, some employees left JPL, but Bellan remained and submitted her background information. 
It was rejected because she included a note stating that she was submitting the information under duress, and she was required to submit it a second time without the note. In a related incident in 2012, a NASA laptop containing the background check information and other personal information of approximately 10,000 employees was stolen from an employee's car, and a representative of the employees from the previous suit announced plans to sue NASA over its incautious handling of their information. The issue continued to simmer through 2014, when Bellan and another dual-national researcher from Ireland were required to sign a loyalty oath to the US, which they argued went well beyond the requirements of the 2004 policy. After an intervention by congresswoman Judy Chu, the requirement was modified. Recognition In 2014, JPL gave Bellan their highest research award, the Magellan Award for Excellence, for her methods for simulating mixtures of particles and supercritical fluids. The American Institute of Aeronautics and Astronautics (AIAA) gave her their 2018 Pendray Aerospace Literature Award, "for widely reaching, seminal and outstanding publications on bio-fuels, sprays and high pressure flows to meet future challenges of Aeronautics and Astronautics combustion systems". She was named a Fellow of the American Society of Mechanical Engineers in 1988, and a Fellow of the American Institute of Aeronautics and Astronautics in 2008. She became a Fellow of The Combustion Institute in 2021, "for establishing fundamental models of turbulent multi-phase phenomena with the models relying on fluid mechanics coupled to non-equilibrium thermodynamics and chemistry". References External links Year of birth missing (living people) Living people American aerospace engineers American women engineers French aerospace engineers Romanian aerospace engineers Romanian women engineers Fluid dynamicists Jet Propulsion Laboratory faculty Fellows of the American Society of Mechanical Engineers Fellows of the American Institute of Aeronautics and Astronautics 20th-century French women engineers 21st-century French women engineers 20th-century French engineers 21st-century French engineers
Josette Bellan
[ "Chemistry" ]
1,082
[ "Fluid dynamicists", "Fluid dynamics" ]
78,800,412
https://en.wikipedia.org/wiki/Envudeucitinib
Envudeucitinib is an investigational new drug that is being evaluated for the treatment of psoriasis. It is a selective tyrosine kinase 2 (TYK2) inhibitor developed by Fronthera U.S. Pharmaceuticals LLC for the treatment of autoimmune diseases. Envudeucitinib targets the TYK2 signaling pathway, which plays a crucial role in regulating multiple pro-inflammatory cytokines such as IL-12, IL-23, and type I interferons. References Anti-inflammatory agents Amides Cyclopropanes Methoxy compounds Phenyl compounds Pyridines Triazoles
Envudeucitinib
[ "Chemistry" ]
137
[ "Pharmacology", "Functional groups", "Medicinal chemistry stubs", "Pharmacology stubs", "Amides" ]
78,801,544
https://en.wikipedia.org/wiki/Protocetraric%20acid
Protocetraric acid is a chemical compound with the molecular formula C18H14O9. It is a secondary metabolite produced by a variety of lichens and is classified as a depsidone. History In 1845 Knop and Schnedermann isolated crystalline cetraric acid from the lichen Cetraria islandica. O. Hesse proposed that cetraric acid does not exist in the lichen, but is rather the decomposition product of another acid that he called protocetraric acid, which is split up into fumaric and cetraric acids. In reviewing Hesse's work, O. Simon confirmed the statements of Knop and Schnedermann, finding cetraric acid in the plant in a free state. O. Simon did not find the protocetraric acid proposed by Hesse, but instead used that name for another acid he isolated. Protocetraric acid was first described in the 1930s. Rao and colleagues published the ultraviolet and infrared spectra of some lichen depsidones, including protocetraric acid, in 1967. Properties The molecular formula of protocetraric acid is C18H14O9; it has a molecular mass of 374.29 grams per mole. In its purified crystalline form, it exists as short needles with a well-defined melting point range. Its ultraviolet spectrum has three peaks of maximum absorption (λmax) at 210, 238, and 312 nm. Its infrared spectrum has several peaks: 680, 745, 785, 814, 840, 990, 1020, 1080, 1115, 1150, 1190, 1270, 1380, 1440, 1562, 1642, 1738, 3000, and 3500 cm−1. A number of ester derivatives of protocetraric acid, such as succinprotocetraric acid and fumarprotocetraric acid, have also been identified in lichens. Preliminary research has been conducted into the potential pharmacology of protocetraric acid and related compounds. Protocetraric acid has broad spectrum antimicrobial properties against some pathogenic microbes such as Salmonella typhi. It also has weak activity against SARS-CoV-2 3C-like protease (Ki of 3.95 μM), as does the related depsidone salazinic acid, and therefore it is being studied as a scaffold for the potential discovery of more potent drugs for the treatment of COVID-19. Biological activities Laboratory experiments indicate that protocetraric acid has broad spectrum antimicrobial activity against some pathogenic microbes, including antibacterial activity against Salmonella typhi, and antifungal activity against Trichophyton rubrum. It also has moderate antimycobacterial activity on the growth of Mycobacterium tuberculosis. Eponyms Some authors have explicitly named protocetraric acid in the specific epithets of their published lichen species, thereby acknowledging the presence of this compound as an important taxonomic characteristic. These eponyms are listed here, followed by their author citation and year of publication. Usnea hossei var. protocetrarica Hypotrachyna protocetrarica Karoowia protocetrarica Myriotrema protocetraricum Ocellularia protocetrarica Opegrapha protocetrarica Oropogon protocetraricus Xanthoparmelia protocetrarica Several derivatives of protocetraric acid have been designed and synthesised using the Diels-Alder reaction, esterification, and Friedel-Crafts alkylation of protocetraric acid with different reagents under Lewis acid catalysis. The products were tested for their α-glucosidase inhibitory activity using molecular docking analysis. Related compounds The related chemical 9'-(O-methyl)protocetraric acid was isolated from the lichen Cladonia convoluta. Conhypoprotocetraric acid, identified from lichens Relicina cf. incongrua and Lecanora myriocarpoides, was synthesized and characterized in 1995.
Confumarprotocetraric acid Conhypoprotocetraric acid Conprotocetraric acid Consuccinprotocetraric acid Fumarprotocetraric acid Hypoprotocetraric acid Malonprotocetraric acid 4-O-Methylhypoprotocetraric acid Succinprotocetraric acid References Cited literature Lichen products Lactones Heterocyclic compounds with 3 rings Benzodioxepines Benzoic acids
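As an illustrative arithmetic check of the quoted molecular mass (the rounding of standard atomic weights is introduced here, not taken from the article):

    M(\mathrm{C}_{18}\mathrm{H}_{14}\mathrm{O}_{9})
      = 18(12.011) + 14(1.008) + 9(15.999)
      \approx 216.20 + 14.11 + 143.99
      \approx 374.3\ \text{g/mol}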
Protocetraric acid
[ "Chemistry" ]
965
[ "Natural products", "Lichen products" ]
78,802,451
https://en.wikipedia.org/wiki/Endosulfan%20tragedy%20in%20Kerala
Endosulfan tragedy in Kerala is a series of health problems that occurred in Kerala, India, following the use of the pesticide endosulfan. Endosulfan was sprayed aerially in cashew orchards in Kasaragod district of Kerala to control pests such as tea mosquito bugs. It was found that people living in these areas were affected by physical and genetic problems after the application of this pesticide. The health effects of the spraying of endosulfan were evident in the people of 11 panchayats in the district, with the victims suffering from birth defects, physical disabilities, mental retardation, and gynecological problems. It also affected biodiversity of the region. In April 2011, the Persistent Organic Pollutants Review Committee, a subsidiary body to the Stockholm Convention declared endosulfan molecule as a persistent organic pollutant. One reason for this declaration was the campaign launched by various stakeholders in the context of the health problems seen in Kasaragod. Overview During 1963–64, the Agriculture Department started planting cashews in the hills around Padre, which is now in Kasaragod district. In 1978, the plantations were taken over by Plantation Corporation of Kerala (PCK), a subsidiary of the Kerala government. Endosulfan is an organochlorine insecticide and acaricide, which acts by blocking the GABA-gated chloride channel of the insect (IRAC group 2A). Endosulfan was aerially sprayed over the cashew orchards under the Plantation Corporation, in 1978, after a trial in 1977–78. It was used to control the tea mosquito bugs that affects cashews. Endosulfan was sprayed from helicopters three times a year. Later, it was found that people living in these areas were affected by physical and genetic problems after the application of this pesticide. These health problems have been reported mostly in panchayats like Enmakaje, Bellur, Kumbadaje, Pullur and Periye in Kasaragod district of Kerala. It is said that after almost 20 years of continuous aerial spraying of endosulfan pesticide in these panchayats, the local residents started succumbing to various diseases and deaths. Consequences In 1981, Sripadre, an independent environmental journalist, first exposed the consequences of the large-scale use of the pesticide endosulfan by reporting on various disorders among domestic animals in areas sprayed with the pesticide. On December 25, 1981, Evidence Weekly published a report on cows giving birth to calves with deformed limbs after aerial spraying of endosulfan in Enmakaje. It was later discovered that the use of endosulfan also had a significant impact on humans. The health effects of the spraying of endosulfan were evident in the people of 11 panchayats in the district, with the victims suffering from birth defects, physical disabilities, mental retardation, cancer, and gynecological problems. Similar to Kerala, the same health problems are now being seen in the South Canara district of Karnataka, where the Karnataka Cashew Development Corporation aerially sprayed endosulfan on cashew orchards for over 20 years. Since 1995, 500 deaths have been officially recognized as being related to the spraying of endosulfan. Following widespread public opposition, in 1998, the Kerala government temporarily suspended aerial spraying of endosulfan. In February 2001, a government-appointed team from the Kerala Agricultural University recommended an immediate halt to aerial spraying. Government imposed a permanent ban of this pesticide following a lower court ruling in 2001. 
In January 2002, the National Institute of Occupational Health released a report stating that they had found traces of endosulfan in water samples and blood samples collected from the village of Padre. A study published in 2018 found that endosulfan residues persist in Kasaragod soil even 20 years after its use was stopped. In 2001, following media reports that spraying of the pesticide endosulfan in Kasaragod district had caused serious health problems, the National Human Rights Commission (NHRC) asked the Indian Council of Medical Research to study the matter and submit a report in July. In 2002, one of its constituent institutions, the National Institute of Occupational Health (NIOH), conducted a study on the subject and reported that the prevalence of the disease was higher in these areas. Following this, the commission urged the central government to impose a ban on endosulfan across India. Although India government opposed a global ban of endosulfan, in April 2011, the Persistent Organic Pollutants Review Committee declared endosulfan molecule as a persistent organic pollutant. One reason for this declaration was the campaign launched by various stakeholders in the context of the health problems seen in Kasaragod. The Kerala State Health and Family Welfare Department classified 6278 individuals suffering from various types of diseases as "endosulfan victims" and the deaths that occurred in the said areas at that time as "endosulfan poisoning victims". In this way, all those suffering from various diseases caused by endosulfan were included in the list of victims and they were provided with financial assistance, monthly pension, free ration, free treatment, housing and many other facilities. In 2006, the government distributed Rs.50,000 each to the dependents of 135 persons who died of endosulfan. The spraying of endosulfan in cashew orchards has also caused significant damage to the biodiversity of the area, according to a study conducted by Dr. V.S. Vijayan of the Salim Ali Foundation. A rapid survey conducted by Vijayan and his team indicated that the use of the pesticide resulted in a 40 to 70 percent reduction in plant diversity in the area, and it affected native species, especially fish, being the most. Studies have also found declines in numbers and distribution of butterflies, which are considered biological indicators of healthy and diverse ecosystems. Objections There is controversy over whether endosulfan is the main cause of health problems in these areas. One argument is that it has not been proven that these diseases are more prevalent in Kasaragod than in other areas. It is also said that diseases that have nothing to do with endosulfan are also being attributed to it. Scientist and head of the Department of Agricultural Entomology at Kerala Agricultural University, Dr. K. M. Sreekumar has spoken out strongly about the unscientific nature of the endosulfan disaster claims. Sreekumar says that the child with the enlarged head, which is said to be caused by endosulfan, was due to a disease called hydrocephalus, and that it can be caused by various reasons, such as difficulties during normal childbirth, problems that occur when two children are born together, etc. He adds that there is not even a research paper yet that this is caused by endosulfan. 
He says that these patients are present both in areas where endosulfan was sprayed and in areas where endosulfan has never been used, and that all the types of diseases attributed to endosulfan in Kasaragod district are found in both. He points out that while health workers are identifying patients who say they are ill because of endosulfan, there is a lack of clinical or biochemical evidence to confirm that these illnesses are a result of endosulfan. Although a study by Calicut Medical College found a high incidence of disease in Kasaragod, according to a critique by Sreekumar and fellow entomologist Prathapan Divakaran, published in the journal Current Science, the levels of endosulfan in the blood of patients in the Calicut Medical College study appeared to have no correlation with the health of these patients. Legal actions Leela Kumari Amma, an agricultural officer from the village of Pullur in Kasaragod, was the first to approach the court seeking a ban on endosulfan. On October 18, 1998, she filed a case in the Hosdurg Munsif Court demanding a halt to the spraying of endosulfan, and the court issued an interim order not to spray endosulfan in that area. When the Munsif Court order came, the Plantation Corporation moved the case to the Kanhangad Sub Court, but she won there too. After that, the Corporation moved straight to the High Court; rejecting its appeal, in 2000, the Kerala High Court upheld the lower court's order and permanently stopped the spraying of endosulfan. In 2011, the Supreme Court of India banned the production, sale and use of endosulfan in the country till further orders. In January 2017, the Supreme Court of India ordered the state government to give compensation of Rs 5 lakh each to the victims of endosulfan. In May 2022, after the government had failed to distribute compensation to everyone even five years after the order, the Supreme Court strongly criticized the state government for delaying the distribution of compensation to endosulfan victims. The Supreme Court had ordered in 2017 that adequate medical facilities should be provided to endosulfan victims. However, following a contempt petition filed in the Supreme Court in 2021 alleging that the state had failed to implement this, the Kerala High Court directed the Kasaragod District Legal Services Authority (DLSA) to submit a detailed report on the medical and palliative care facilities for endosulfan victims in the district. In popular culture Arajeevithangalkkoru swargam is a documentary film directed by M. A. Rahman that highlights the seriousness of the endosulfan disaster. Produced by 'Greenfox' under the leadership of K.M.K. Kunjabdulla, the documentary began shooting in 1999 and was completed in 2002. A Pestering Journey, directed by K. R. Manoj, is another documentary based on this tragedy. This documentary was also submitted before the Supreme Court of India as evidence in the endosulfan case. Enmakaje (translated as Swarga in English), a novel written by Ambikasuthan Mangad, is based on the endosulfan tragedy in Kasaragod. Valiya Chirakulla Pakshikal, a 2015 Malayalam drama film written and directed by Dr. Biju, is also based on this incident. Pakarnnattam, directed by Jayaraj, is also based on this incident. References Health disasters in India Chemical disasters Disasters in Kerala Toxic effects of pesticides
Endosulfan tragedy in Kerala
[ "Chemistry" ]
2,159
[ "Chemical accident", "Chemical disasters" ]
78,803,891
https://en.wikipedia.org/wiki/Estrella%20Alabastro
Estrella Fagela Alabastro (born February 19, 1941) is a Filipina chemical engineer who served as Secretary of Science and Technology. Early life and education She obtained a BS in chemical engineering from University of the Philippines Diliman. She obtained an MS and PhD in chemical engineering from Rice University. She is married to Edgardo Garcia Alabastro and they have three children. Career She researched thermal processing of Philippine food products and irradiation to extend the shelf life of mangoes. She created the Small Enterprises Technology Upgrading Program, which prolonged tinapa preservation from a week to 6 months. She was responsible for the Food Science Doctoral Program at the University of the Philippines Diliman. Alabastro was appointed Secretary of the Department of Science and Technology (DOST) March 12, 2001, serving until June 30, 2010. She was the first woman DOST Secretary. In 2015, she was named Academician of National Academy of Science and Technology in Chemical Engineering. References 1941 births Living people Secretaries of science and technology of the Philippines Arroyo administration cabinet members Chemical engineers Women chemical engineers 20th-century Filipino engineers Filipino women engineers Members of the Filipino National Academy of Science and Technology University of the Philippines Diliman alumni Rice University alumni Academic staff of the University of the Philippines Diliman
Estrella Alabastro
[ "Chemistry", "Engineering" ]
259
[ "Chemical engineering", "Women chemical engineers", "Chemical engineers" ]
78,809,060
https://en.wikipedia.org/wiki/Liang%20%28mass%29
Liang, or leung in Cantonese, also called "Chinese ounce" or "tael", is a traditional Chinese unit of weight. It originated in China before being introduced to neighboring countries in East Asia. Nowadays, the mass of 1 liang equals 50 grams in mainland China, 37.5 grams in Taiwan, Korea and Thailand, 37.799 grams in Hong Kong, Singapore and Malaysia, and 100 grams in Vietnam. Liang is mostly used in traditional markets and is commonly used for weighing gold, silver and Chinese medicines. China Mainland Chinese mass units promulgated in 1915 On 7 January 1915, the Beiyang government promulgated a measurement law to use not only the metric system as the standard but also a set of Chinese-style measures based directly on the Qing dynasty definitions. Here liang is the base unit, equal to 37.301 grams. Mass units in the Republic of China since 1930 On 16 February 1929, the Nationalist government adopted and promulgated The Weights and Measures Act to adopt the metric system as the official standard and to limit the newer Chinese units of measurement to private sales and trade, effective on 1 January 1930. These newer "market" units are based on rounded metric numbers, and the jin became the base unit. Here liang is equal to 1/16 of a jin, or 31.25 grams. Mass units in the People's Republic of China since 1959 On June 25, 1959, the State Council of the People's Republic of China issued the "Order on the Unified Measurement System", retaining the market measure system, with the statement: "The market system originally stated that sixteen liangs are equal to one jin. Due to the trouble of conversion, it should be changed to ten liangs per jin." Legally, 1 jin equals 500 grams, and 10 liangs equal 1 jin (that is, 1 liang equals 50 grams). The traditional Chinese medicine measurement system remains unchanged. Taiwan In 1895, Taiwan was ceded to Japan by China. The Japanese implemented the metric system, but the Taiwanese still followed their own habits and continued to use the old weights and measures of the Qing Dynasty. One Taiwan liang is equal to 37.5 grams, or 1/16 of a Taiwan jin; here liang is the base unit. Hong Kong and Macau Hong Kong and Macau mass units Currently, Hong Kong law stipulates that one liang is equal to 1/16 jin, which is 37.79936375 grams. Similarly, Singapore law stipulates that one jin is also equal to sixteen liangs or 0.6048 kilograms, and one liang equals 37.799 g. Malaysia has the same regulations, as it is a former British colony. Hong Kong troy units These are used for trading precious metals such as gold and silver. Korea The base unit of Korean weight is the gwan. One liang (兩, Korean ounce) is 1/100 of a gwan, or 37.5 g (1.32 oz). Vietnam In Vietnam, the unit of liang is called "lang": 1 lang is equal to 37.8 grams by traditional value, and 100 grams by modern value. For more information on the Chinese mass measurement system, please see the article Jin (mass). Compounds wikt:幾斤幾兩 (jǐjīnjǐliǎng) wikt:半斤八兩 (bànjīnbāliǎng) wikt:缺斤少兩 (quējīnshǎoliǎng) wikt:銀兩 (yínliǎng) See also Chinese units of measurement Hong Kong units of measurement Taiwanese units of measurement Korean units of measurement Vietnamese units of measurement Notes References External links 中國度量衡#衡 市制 兩 Units of mass Chinese units of measurement Customary units of measurement
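As a small illustrative sketch of the regional conversions listed above (the dictionary keys and the function name are arbitrary labels introduced here, not official designations):

    # grams per liang, as stated in the article
    GRAMS_PER_LIANG = {
        "mainland_china": 50.0,        # 10 liang = 1 jin = 500 g (since 1959)
        "taiwan": 37.5,                # 1/16 of a 600 g Taiwan jin
        "korea": 37.5,                 # 1/100 of a gwan
        "hong_kong": 37.79936375,      # statutory value, 1/16 jin
        "singapore": 37.799,           # statutory value (rounded)
        "vietnam_traditional": 37.8,
        "vietnam_modern": 100.0,
    }

    def liang_to_grams(amount, region):
        # convert a quantity expressed in liang to grams for the given region
        return amount * GRAMS_PER_LIANG[region]

    print(liang_to_grams(3, "hong_kong"))  # 113.39809125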
Liang (mass)
[ "Physics", "Mathematics" ]
757
[ "Matter", "Quantity", "Units of mass", "Mass", "Customary units of measurement", "Units of measurement" ]
78,811,704
https://en.wikipedia.org/wiki/Wafer%20fabrication%20equipment
Wafer fabrication equipment is equipment that is used in the process of semiconductor fabrication to process raw semiconductor wafers into finished chips, such as integrated circuits. Wafer fabrication equipment is meant to be installed in cleanrooms. Types Stepper Burn-in oven Market Referred to respectively as the wafer fab equipment or wafer front end (equipment) market, both using the acronym WFE, the market is that of the manufacturers of the machines which in turn manufacture semiconductors. The apexresearch link in 2020 identified Applied Materials, ASML, KLA-Tencor, Lam Research, TEL and Dainippon Screen Manufacturing as market participants while the 2019 electronicsweekly.com report, citing The Information Network's president Robert Castellano, focused on the respective market shares commanded by the two leaders, Applied Materials and ASML. See also LCD manufacturing FOUP References Semiconductor fabrication equipment Semiconductor device fabrication
Wafer fabrication equipment
[ "Materials_science", "Engineering" ]
186
[ "Semiconductor device fabrication", "Semiconductor fabrication equipment", "Microtechnology" ]
74,439,886
https://en.wikipedia.org/wiki/Theranostics
Theranostics, also known as theragnostics, is a technique commonly used in personalised medicine. For example in nuclear medicine, one radioactive drug is used to identify (diagnose) and a second radioactive drug is used to treat (therapy) cancerous tumors. In other words, theranostics combines radionuclide imaging and radiation therapy which targets specific biological pathways. Technologies used for theranostic imaging include radiotracers, contrast agents, positron emission tomography, and magnetic resonance imaging. It has been used to treat thyroid cancer and neuroblastomas. The term "theranostic" is a portmanteau of two words, therapeutic and diagnostic, thus referring to a combination of diagnosis and treatment that also allows for continuing medical assessment of a patient. The first known use of the term is attributed to John Funkhouser, a consultant for the company Cardiovascular Diagnostic, who used it in a press release in August 1998. Applications Nuclear medicine Theranostics originated in the field of nuclear medicine; iodine isotope 131 for the diagnostic study and treatment of thyroid cancer was one of its earliest applications. Nuclear medicine encompasses various substances, either alone or in combination, that can be used for diagnostic imaging and targeted therapy. These substances may include ligands of receptors present on the target tissue or compounds, like iodine, that are internalized by the target through metabolic processes. By using these mechanisms, theranostics enables the localization of pathological tissues with imaging and the targeted destruction of these tissues using high doses of radiation. Radiological scope Contrast agents with therapeutic properties have been under development for several years. One example is the design of contrast agents capable of releasing a chemotherapeutic agent locally at the target site, triggered by a stimulus provided by the operator. This localized approach aims to increase treatment efficacy and minimize side effects. For instance, ultrasound-based contrast media, such as microbubbles, can accumulate in hypervascularized tissues and release the active ingredient in response to ultrasound waves, thus targeting a specific area chosen by the sonographer. Another approach involves linking monoclonal antibodies (capable of targeting different molecular targets) to nanoparticles. This strategy enhances the drug's affinity and specificity towards the target and enables visualization of the treatment area, such as using superparamagnetic iron oxide particles detectable by magnetic resonance imaging. Additionally, these particles can be designed to release chemotherapy agents specifically at the site of binding, producing a local synergistic effect with antibody action. Integrating these methods with medical-nuclear techniques, which offer greater imaging sensitivity, may aid in target identification and treatment monitoring. Imaging techniques Positron emission tomography Positron emission tomography (PET) imaging in theranostics provides insight into metabolic and molecular processes within the body. The PET scanner detects photons and creates three-dimensional images that enable visualization and quantification of physiological and biochemical processes. PET imaging uses radiotracers that target specific molecules or processes. For example, [18F] fluorodeoxyglucose (FDG) is commonly used to assess glucose metabolism, as cancer cells exhibit increased glucose uptake. 
Other radiotracers target specific receptors, enzymes, or transporters, allowing the evaluation of various physiological and pathological processes. PET imaging plays a role in both diagnosis and treatment planning. It aids in the identification and staging of diseases, such as cancer, by visualizing the extent and metabolic activity of tumors. PET scans can also guide treatment decisions by assessing treatment response and monitoring disease progression. Additionally, PET imaging is used to determine the suitability of patients for targeted therapies based on specific molecular characteristics, enabling personalized treatment approaches. Single-photon emission computed tomography Single-photon emission computed tomography (SPECT) is employed in theranostics, using gamma rays emitted by a radiotracer to generate three-dimensional images of the body. SPECT imaging involves the injection of a radiotracer that emits single photons, which are detected by a gamma camera rotating around the person undergoing imaging. SPECT provides functional and anatomical information, allowing the assessment of organ structure, blood flow, and specific molecular targets. It is useful in evaluating diseases that involve altered blood flow or specific receptor expression. For example, SPECT imaging with technetium-99m (Tc-99m) radiopharmaceuticals may be able to assess myocardial perfusion and identify areas of ischemia or infarction in patients with cardiovascular diseases. SPECT imaging helps in identifying disease localization, staging, and assessing the response to therapy. Moreover, SPECT imaging is employed in targeted radionuclide therapy, where the same radiotracer used for diagnostic imaging can be used to deliver therapeutic doses of radiation to the diseased tissue. Magnetic resonance imaging Magnetic resonance imaging (MRI) is a non-invasive imaging technique that uses strong magnetic fields and radiofrequency pulses to generate detailed anatomical and functional images of the body. MRI provides excellent soft tissue contrast and is widely used in theranostics for its ability to visualize anatomical structures and assess physiological processes. In theranostics, MRI allows for the detection and characterization of tumors, assessment of tumor extent, and evaluation of treatment response. MRI can provide information on tissue perfusion, diffusion, and metabolism, aiding in the selection of appropriate therapies and monitoring their effectiveness. Advancements in MRI technology have expanded its capabilities in theranostics. Techniques such as functional MRI (fMRI) enable the assessment of brain activation and connectivity, while diffusion-weighted imaging (DWI) provides insights into tissue microstructure. The development of molecular imaging agents, such as superparamagnetic iron oxide nanoparticles, allows for targeted imaging and tracking of specific molecular entities. Therapeutic approaches Theranostics encompasses a range of therapeutic approaches that are designed to target and treat diseases with enhanced precision. Targeted drug delivery systems Targeted drug delivery systems facilitate the selective delivery of therapeutic agents to specific disease sites while minimizing off-target effects. These systems employ strategies, such as nanoparticles, liposomes, and micelles, to encapsulate drugs and enhance their stability, solubility, and bioavailability. 
By incorporating diagnostic components, such as imaging agents or targeting ligands, into these delivery systems, clinicians can monitor drug distribution and accumulation in real-time, ensuring effective treatment and reducing systemic toxicity. Targeted drug delivery systems hold promise in the treatment of cancer, cardiovascular diseases, and other conditions, as they allow for personalized and site-specific therapy. Gene therapy Gene therapy is a therapeutic approach that involves modifying or replacing faulty genes to treat or prevent diseases. In theranostics, gene therapy can be combined with diagnostic imaging to monitor the delivery, expression, and activity of therapeutic genes. Imaging techniques such as MRI, PET, and optical imaging enable non-invasive assessment of gene transfer and expression, providing valuable insights into the efficacy and safety of gene-based treatments. Gene therapy has shown potential in treating genetic disorders, cancer, and cardiovascular diseases, and its integration with diagnostic imaging offers a comprehensive approach for monitoring and optimizing treatment outcomes. Radiotherapy Radiotherapy can be integrated with imaging techniques to guide treatment planning, monitor radiation dose distribution, and assess treatment response. Molecular imaging methods, such as PET and SPECT, can be employed to visualize and quantify tumor characteristics, such as hypoxia or receptor expression, aiding in personalized radiation dose optimization. Additionally, theranostic approaches involving radiolabeled therapeutic agents, known as radiotheranostics, combine the therapeutic effects of radiation with diagnostic capabilities. Radiotheranostics, including peptide receptor radionuclide therapy (PRRT), hold promise for targeted radiotherapy, enabling precise tumor targeting and dose escalation, while sparing healthy tissues. For example, PRRT based on lutetium-177 combinations (known as radioligands) has emerged as a treatment option for inoperable metastatic neuroendocrine tumours (NET). Immunotherapy Immunotherapy harnesses the body's immune system to recognize and attack cancer cells or other disease targets. In theranostics, immunotherapeutic approaches can be coupled with diagnostic imaging to assess immune cell infiltration, tumor immunogenicity, and treatment response. Imaging techniques, such as PET and MRI, can provide valuable information about the tumor microenvironment, immune cell dynamics, and response to immunotherapies. Furthermore, theranostic strategies involving the use of radiolabeled immunotherapeutic agents allow for simultaneous imaging and therapy, aiding in patient selection, treatment monitoring, and optimization of immunotherapeutic regimens. Nanomedicine Nanomedicine refers to the use of nanoscale materials for medical applications. In theranostics, nanomedicine offers opportunities for targeted drug delivery, imaging, and therapy. Nanoparticles can be engineered to carry therapeutic payloads, imaging agents, and targeting ligands, allowing for multimodal theranostic approaches. These nanocarriers can enhance drug stability, improve drug solubility, and enable controlled release at the disease site. Additionally, nanomaterials with inherent imaging properties, such as quantum dots or gold nanoparticles, can serve as contrast agents for imaging. Applications and challenges Oncology Theranostics has been applied in oncology, contributing to new approaches in the diagnosis, treatment, and monitoring of cancers.
By integrating diagnostic imaging and targeted therapies, theranostics offers personalized approaches that improve treatment outcomes and patient care. In oncology, theranostics encompasses a wide range of applications, including the management of various types of cancers such as breast, lung, prostate, and colorectal cancer. Molecular imaging techniques, such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT), enable the visualization and characterization of cancerous lesions, aiding in early detection, staging, and assessment of treatment response. This allows for more accurate and tailored treatment planning, including the selection of appropriate targeted therapies or the optimization of radiation therapy. Despite the significant progress, the translation of theranostics into routine clinical practice faces challenges, including the need for standardized imaging protocols, biomarker validation, and regulatory considerations. Additionally, there is a continuous need for research and development to further enhance the effectiveness and accessibility of theranostic approaches in oncology. Neurology and cardiology Theranostics extends beyond oncology and holds potential in the fields of neurology and cardiology. In neurology, theranostic approaches offer new avenues for the diagnosis and treatment of various neurodegenerative diseases, such as Alzheimer's disease, Parkinson's disease, and multiple sclerosis. Advanced imaging techniques, including magnetic resonance imaging (MRI) and positron emission tomography (PET), allow for the visualization of neuroanatomy, functional connectivity, and molecular changes in the brain. This enables early detection, precise diagnosis, and monitoring of disease progression, facilitating the development of targeted therapeutic interventions. Similarly, in cardiology, theranostics play a significant role in the diagnosis and treatment of cardiovascular conditions. Non-invasive imaging modalities like MRI and computed tomography (CT) provide detailed information about cardiac structure, function, and blood flow, aiding in the assessment of heart disease and the guidance of interventions. Theranostic approaches in cardiology involve targeted drug delivery systems for the treatment of conditions such as atherosclerosis and restenosis, as well as image-guided interventions for precise stenting or catheter-based therapies. Research directions Several challenges remain to be addressed for widespread adoption and integration of theranostics into routine clinical practice. Regulatory considerations will play a role in ensuring the safety, efficacy, and quality of theranostic agents and technologies. Harmonization of regulations across different countries and regions is necessary to facilitate global implementation. Cost-effectiveness is a significant challenge, as theranostic approaches can be expensive. Strategies to optimize resource utilization and reimbursement models have been discussed. Technical limitations, such as the development of more specific and sensitive imaging agents, improvement of imaging resolution and quality, and the integration of different imaging modalities, require ongoing research and technological advancements. Ethical considerations surrounding patient privacy, data security, and the responsible use of patient information need to be addressed. References Medicinal radiochemistry Diagnostic radiology Radiation therapy procedures Technology neologisms 1998 neologisms
Theranostics
[ "Chemistry" ]
2,615
[ "Medicinal radiochemistry", "Medicinal chemistry" ]
74,440,011
https://en.wikipedia.org/wiki/Climate%20based%20daylight%20modelling
Climate based daylight modelling (CBDM), also known as dynamic daylight metrics, is a calculation methodology first developed in the late 1990s to assess daylight quality and quantity. It is used by building design engineers and architects to predict luminance and/or illuminance within buildings using standardised sun and sky condition climate data for a given geographical location. It is a different design metric from the daylight factor, which only considers the ratio of the light level inside a structure to the light level outside the structure under an overcast sky. With CBDM, if used appropriately, the facade design of a building can be optimised to maximise useful daylight while excluding excessive daylight, which otherwise might cause glare, visual discomfort, and/or solar gains that lead to thermal comfort issues, while at the same time reducing reliance on artificial lighting. CBDM calculations are performed within building simulation modelling software tools for every hour of the year, or sometimes for smaller increments, which allows daily and seasonal profiles to be tested and optimised. The key metrics reported by CBDM software are as follows: 
'Daylight Autonomy' (DA): the proportion of occupied time during which a point in a room can be expected to achieve a target level of illuminance from daylight. It is normally expressed as a percentage of time for which a useful level of illuminance, for example 300 lux, is met or exceeded. 
'Spatial Daylight Autonomy' (sDA): the proportion of the working plane in a room that achieves a target level of illuminance from daylight for a given fraction of the occupied hours. The working plane is established to represent a useful working height within the space, such as at desk level. The sDA is normally expressed as a percentage target for a useful level of illuminance, for example 50% of the occupied hours at a 300 lux DA target on the working plane (300/50%). 
'Useful Daylight Illuminance' (UDI-a): the summed annual occurrence of illuminance on the working plane within a useful range during the occupied hours of the space, typically between 100 and 3000 lux. Hours with illuminance below 100 lux are defined as UDI-s and would typically require artificial lighting to be switched on. Hours with illuminance above 3000 lux are defined as UDI-e and indicate excessive daylight, which can cause visual discomfort and glare/contrast issues and would typically require blinds or curtains to be closed. 
See also Daylighting Right to light Daylight factor Notes External links International Commission on Illumination Light Visibility Energy-saving lighting Lighting
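The annual metrics defined above lend themselves to a short worked calculation. The sketch below is a minimal illustration, not code from any CBDM tool: it derives DA, sDA and the UDI bins from hourly illuminance values, using the 300 lux, 50% and 100–3000 lux thresholds quoted in the examples above; the data and the function names are hypothetical.

```python
# Illustrative only: thresholds follow the examples in the text above
# (300 lux target, 50% time criterion, 100-3000 lux useful range);
# the data and function names are hypothetical, not from any CBDM tool.

def daylight_autonomy(hourly_lux, target=300.0):
    """Fraction of occupied hours a sensor point meets the target illuminance."""
    return sum(1 for lx in hourly_lux if lx >= target) / len(hourly_lux)

def spatial_daylight_autonomy(grid, target=300.0, time_fraction=0.5):
    """Fraction of working-plane points whose DA meets the time criterion (e.g. 300/50%)."""
    das = [daylight_autonomy(point, target) for point in grid]
    return sum(1 for da in das if da >= time_fraction) / len(das)

def udi_bins(hourly_lux, low=100.0, high=3000.0):
    """Split occupied hours into UDI-s (<low), UDI-a (low..high) and UDI-e (>high)."""
    n = len(hourly_lux)
    supplementary = sum(1 for lx in hourly_lux if lx < low) / n
    exceeded = sum(1 for lx in hourly_lux if lx > high) / n
    achieved = 1.0 - supplementary - exceeded
    return {"UDI-s": supplementary, "UDI-a": achieved, "UDI-e": exceeded}

if __name__ == "__main__":
    # Three sensor points with made-up illuminance values for four occupied hours.
    grid = [
        [50, 250, 800, 3500],
        [120, 400, 900, 1200],
        [20, 80, 150, 300],
    ]
    print(daylight_autonomy(grid[0]))        # 0.5 (2 of 4 hours reach 300 lux)
    print(spatial_daylight_autonomy(grid))   # ~0.67 (2 of 3 points have DA >= 50%)
    print(udi_bins(grid[0]))                 # 25% UDI-s, 50% UDI-a, 25% UDI-e
```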
Climate based daylight modelling
[ "Physics", "Mathematics" ]
536
[ "Physical phenomena", "Visibility", "Spectrum (physical sciences)", "Physical quantities", "Quantity", "Electromagnetic spectrum", "Waves", "Light", "Wikipedia categories named after physical quantities" ]
74,440,694
https://en.wikipedia.org/wiki/ODE/IM%20correspondence
In mathematical physics, the ODE/IM correspondence is a link between ordinary differential equations (ODEs) and integrable models. It was first found in 1998 by Patrick Dorey and Roberto Tateo. In this original setting, it relates the spectrum of a certain integrable model of magnetism known as the XXZ-model to solutions of the one-dimensional Schrödinger equation with a specific choice of potential, where the position coordinate is treated as a complex variable. Since then, such a correspondence has been found for many more ODE/IM pairs. See also Bethe ansatz WKB approximation References Integrable systems Spin models Ordinary differential equations
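As a rough illustration of the kind of ODE involved, the display below gives the schematic anharmonic-oscillator form usually quoted for the original Dorey–Tateo setting; conventions, normalisations and additional potential terms vary between papers, so this should be read as an assumed, simplified sketch rather than the precise equation of the 1998 work.

```latex
% Schematic Schrödinger problem of the original ODE/IM correspondence
% (illustrative form; M > 0 is a parameter, E a spectral parameter,
% and the coordinate x is continued into the complex plane):
-\frac{\mathrm{d}^2 \psi}{\mathrm{d}x^2}(x) + \left( x^{2M} - E \right)\psi(x) = 0 .
```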
ODE/IM correspondence
[ "Physics" ]
136
[ "Spin models", "Integrable systems", "Theoretical physics", "Quantum mechanics", "Statistical mechanics" ]
74,448,567
https://en.wikipedia.org/wiki/LK-99
LK-99 (from the Lee-Kim 1999 research), also called PCPOSOS, is a gray–black, polycrystalline compound, identified as a copper-doped lead oxyapatite. A team from Korea University led by Lee Sukbae () and Kim Ji-Hoon () began studying this material as a potential superconductor starting in 1999. In July 2023, they published preprints claiming that it acts as a room-temperature superconductor at temperatures of up to at ambient pressure. Many different researchers have attempted to replicate the work, and were able to reach initial results within weeks, as the process of producing the material is relatively straightforward. By mid-August 2023, the consensus was that LK-99 is not a superconductor at room temperature, and is an insulator in pure form. As of 12 February 2024, no replications had gone through the peer review process of a journal, but some had been reviewed by a materials science lab. A number of replication attempts identified non-superconducting ferromagnetic and diamagnetic causes for observations that suggested superconductivity. A prominent cause was a copper sulfide impurity occurring during the proposed synthesis, which can produce resistance drops, a lambda transition in heat capacity, and a magnetic response in small samples. After the initial preprints were published, Lee claimed they were incomplete, and coauthor Kim Hyun-Tak () said one of the papers contained flaws. Chemical properties and structure The chemical composition of LK-99 is approximately Pb9Cu(PO4)6O, in which, compared to pure lead-apatite (Pb10(PO4)6O), approximately one quarter of the Pb(II) ions in position 2 of the apatite structure are replaced by Cu(II) ions. The structure is similar to that of apatite, space group P63/m (No. 176). Synthesis Lee et al. provide a method for chemical synthesis of LK-99 in three steps. First, they produce lanarkite from a 1:1 molar mixture of lead(II) oxide (PbO) and lead(II) sulfate (Pb(SO4)) powders, heated at for 24 hours: PbO + Pb(SO4) → Pb2(SO4)O. Then, copper(I) phosphide (Cu3P) is produced by mixing copper (Cu) and phosphorus (P) powders in a 3:1 molar ratio in a sealed tube under a vacuum and heating to for 48 hours: 3 Cu + P → Cu3P. Then, lanarkite and copper phosphide crystals are ground into a powder, placed in a sealed tube under a vacuum, and heated to for between 5 and 20 hours: Pb2(SO4)O + Cu3P → Pb10-xCux(PO4)6O + S (g), where 0.9 < x < 1.1. There were a number of problems with the above synthesis from the initial paper. The reaction is not balanced, and others reported the presence of copper(I) sulfide (Cu2S) as well. For a balanced reaction might be: 5 . Many syntheses produced fragmentary results in different phases, where some of the resulting fragments were responsive to magnetic fields and other fragments were not. The first synthesis to produce pure crystals found them to be diamagnetic insulators. Physical properties Some small LK-99 samples were reported to show strong diamagnetic properties, including a response confusingly referred to as "partial levitation" over a magnet. This was misinterpreted by some as a sign of superconductivity, although it is a sign of regular diamagnetism or ferromagnetism.
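As a small worked illustration of the molar ratios quoted in the synthesis description above, the sketch below converts the 1:1 PbO:Pb(SO4) mix and the 3:1 Cu:P mix into approximate precursor masses. The molar masses are standard rounded values, but the 10 mmol batch size and the function name are hypothetical and purely illustrative; this is not a description of the original team's actual procedure.

```python
# Rough precursor-mass arithmetic for the mixing ratios described above.
# Molar masses (g/mol) are standard rounded values; the 10 mmol batch size
# is an arbitrary illustration, not the scale used by Lee et al.

MOLAR_MASS = {
    "PbO": 223.2,     # lead(II) oxide
    "PbSO4": 303.3,   # lead(II) sulfate
    "Cu": 63.55,
    "P": 30.97,
}

def masses_for_ratio(ratio, mmol_of_first):
    """Given molar ratios {species: n} and mmol of the first-listed species,
    return the mass in grams of each species in the mix."""
    species = list(ratio)
    scale = mmol_of_first / ratio[species[0]]   # mmol per 'ratio unit'
    return {s: round(ratio[s] * scale * MOLAR_MASS[s] / 1000.0, 3) for s in species}

# Step 1 precursor mix: PbO and PbSO4 in a 1:1 molar ratio (10 mmol each).
print(masses_for_ratio({"PbO": 1, "PbSO4": 1}, 10))  # ~2.23 g PbO, ~3.03 g PbSO4

# Step 2 precursor mix: Cu and P in a 3:1 molar ratio (30 mmol Cu : 10 mmol P).
print(masses_for_ratio({"Cu": 3, "P": 1}, 30))       # ~1.91 g Cu, ~0.31 g P
```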
While initial preprints claimed the material was a room-temperature superconductor, they did not report observing any definitive features of superconductivity, such as zero resistance, the Meissner effect, flux pinning, AC magnetic susceptibility, the Josephson effect, a temperature-dependent critical field and current, or a sudden jump in specific heat around the critical temperature. As it is common for a new material to spuriously seem like a potential candidate for high-temperature superconductivity, thorough experimental reports normally demonstrate a number of these expected properties. None of these properties had been observed by the original experiment or any replications. Proposed mechanism for superconductivity Partial replacement of Pb2+ ions with smaller Cu2+ ions is said to cause a 0.48% reduction in volume, creating internal stress in the material, causing a heterojunction quantum well between the Pb(1) and oxygen within the phosphate ([PO4]3−). This quantum well was proposed to be superconducting, based on a 2021 paper by Kim Hyun-Tak describing a novel and complicated theory combining ideas from a classical theory of metal-insulator transitions, the standard Bardeen–Cooper–Schrieffer theory, and the theory of hole superconductivity by J. E. Hirsch. Response On 31 July 2023, Sinéad Griffin of Lawrence Berkeley National Laboratory analyzed LK-99 with density functional theory (DFT), showing that its structure would have correlated isolated flat bands, and suggesting this might contribute to superconductivity. However, while other researchers agreed with the DFT analysis, a number suggested that this was not compatible with superconductivity, and that a structure different from what was described by Lee et al. would be necessary. Analyses by industrial and experimental physicists noted experimental and theoretical shortcomings of the published works. Shortcomings included the lack of phase diagrams spanning temperature, stoichiometry, and stress; the lack of pathways for the very high Tc of LK-99 compared to prior heavy fermion superconductors; the absence of flux pinning in any observations; the possibility of stochastic conductive artifacts in conductivity measurements; the high resistance and low current capacity of the alleged superconducting state; and the lack of direct transmission electron microscopy (TEM) of the materials. Compound name The name LK-99 comes from the initials of discoverers Lee and Kim, and the year of discovery (1999). The pair had worked with Tong-Seek Chair () at Korea University in the 1990s. In 2008, they founded the Quantum Energy Research Centre (퀀텀 에너지연구소; also known as ) with other researchers from Korea University. Lee would later become CEO of , and Kim would become director of research and development. Publication history Lee has stated that in 2020, an initial paper was submitted to Nature, but was rejected. Similarly presented research on room-temperature superconductors (but a completely different chemical system) by Ranga P. Dias had been published in Nature earlier that year, and was received with skepticism—Dias's paper would subsequently be retracted in 2022 after its data was questioned as having been falsified. In 2020, Lee and Kim Ji-Hoon filed a patent application. A second patent application (additionally listing Young-Wan Kwon) was filed in 2021, which was published on 3 March 2023. A World Intellectual Property Organization (WIPO) patent was also published on 2 March 2023. 
On 4 April 2023, a Korean trademark application for "LK-99" was filed by the . Scholarly articles and preprints A series of academic publications summarizing initial findings came out in 2023, with a total of seven authors across four publications. On 31 March 2023, a Korean-language paper, "Consideration for the development of room-temperature ambient-pressure superconductor (LK-99)", was submitted to the Korean Journal of Crystal Growth and Crystal Technology. It was accepted on 18 April, but was not widely read until three months later. On 22 July 2023, two preprints appeared on arXiv. The first was submitted by Young-Wan Kwon, and listed Kwon, former CTO, as third author. The second preprint was submitted only two hours later by Kim Hyun-Tak, former principal researcher at the Electronics & Telecommunications Research Institute and professor at the College of William & Mary, listing himself as third author, as well as three new authors. On 23 July, the findings were also submitted by Lee to APL Materials for peer review. On 3 August 2023, a newly formed Korean LK-99 Verification Committee requested a high-quality sample from the original research team. The team responded that they would only provide the sample once the review process of their APL paper was completed, expected to take several weeks or months. On 31 July 2023, a group led by Kapil Kumar published a preprint on arXiv documenting their replication attempts, which confirmed the structure using X-ray crystallography (XRD) but failed to find strong diamagnetism. On 11 August 2023, P. Puphal et al. released a preprint reporting the synthesis of the first single crystals of Pb9Cu(PO4)6O, later published in APL Materials, which disproved superconductivity in this chemical stoichiometry. On 16 August 2023, Nature published an article declaring that LK-99 had been demonstrated to not be a superconductor, but rather an insulator. It cited statements by a condensed matter experimentalist at the University of California, Davis, and several studies previewed in August 2023. Other discussion by authors On 26 July 2023, Kim Hyun-Tak stated in an interview with the New Scientist that the first paper submitted by Kwon contained "many defects" and was submitted without his permission. On 28 July 2023, Kwon presented the findings at a symposium held at Korea University. That same day, Yonhap News Agency published an article quoting an official from Korea University as saying that Kwon was no longer in contact with the university. The article also quoted Lee saying that Kwon had left the Research Institute four months previously. On the same day, Kim Hyun-Tak provided The New York Times with a new video presumably showing a sample displaying strong signs of diamagnetism. The video appears to show a sample different from the one in the original preprint. On 4 August 2023, he informed SBS News that high-quality LK-99 samples may exhibit diamagnetism over 5,000 times greater than graphite, which he claimed would be inexplicable unless the substance is a superconductor. Response Materials scientists and superconductor researchers responded with skepticism. The highest-temperature superconductors known at the time of publication had a critical temperature of at pressures of over . The highest-temperature superconductors at atmospheric pressure (1 atm) had a critical temperature of at most . 
On 2 August 2023, The Korean Society of Superconductivity and Cryogenics established a verification committee as a response to the controversy and unverified claims of LK-99, in order to arrive at conclusions over these claims. The verification committee is headed by Kim Chang-Young of Seoul National University and consists of members of the university, Sungkyunkwan University and Pohang University of Science and Technology. Upon formation, the verification committee did not agree that the two 22 July arXiv papers by Lee et al. or the publicly available videos at the time supported the claim of LK-99 being a superconductor. The measured properties do not prove that LK-99 is a superconductor. The published material does not explain how LK-99's magnetisation can change, demonstrate its specific heat capacity, or demonstrate it crossing its transition temperature. A more likely explanation for LK-99's magnetic response is a mix of ferromagnetism and non-superconductive diamagnetism. A number of studies found that copper(I) sulfide contamination common to the synthesis process could closely replicate the observations that inspired the initial preprints. Public response The claims in the 22 July papers by Lee et al. went viral on social media platforms the following week. The viral nature of the claim resulted in posts from users using pseudonyms from Russia and China claiming to have replicated LK-99 on both Twitter and Zhihu. Other viral videos described themselves as having replicated samples of LK-99 "partially levitating", most of which were found to be fake. Scientists interviewed by the press remained skeptical because of the quality of the original preprints, the lack of purity of the reported samples, and doubts about the legitimacy of the claim following the failure of previous claims of room-temperature superconductivity (such as the Ranga Dias affair). The Korean Society of Superconductivity and Cryogenics expressed concern about the social and economic impacts of the preliminary and unverified LK-99 research. A video from Huazhong University of Science and Technology uploaded on 1 August 2023 by a postdoctoral researcher on the team of Chang Haixin apparently showed a micrometre-sized sample of LK-99 partially levitating. This went viral on Chinese social media, becoming the most viewed video on Bilibili by the next day, and a prediction market briefly put the chance of successful replication at 60%. A researcher from the Chinese Academy of Sciences refused to comment on the video for the press, dismissing the claim as "ridiculous". In early August, people began to create memes about "floating rocks", and there was a brief surge in Korean and Chinese technology stocks, which eventually fell on August 8, despite warnings from the Korean stock exchange against speculative bets in light of the excitement around LK-99. Following the publication of the Nature article on August 16 that proclaimed LK-99 is not a superconductor, South Korean superconductor stocks fell further, as the investor interest in LK-99 of the previous weeks disappeared. Replication attempts After the July 2023 publication's release, independent groups reported that they had begun attempting to reproduce the synthesis, with initial results expected within weeks. No replication attempts had yet been peer-reviewed by a journal. 
Of the non-peer-reviewed attempts, over 15 notable labs have published results that failed to observe any superconductivity, and a few have observed a magnetic response in small fragments that could be explained by normal diamagnetism or ferromagnetism. Some demonstrated and replicated alternate causes of the observations in the original papers: Copper-deficient copper(I) sulfide has a known phase transition at from a low-temperature phase to a high-temperature superionic phase, with a sharp rise in resistivity and a λ-like feature in the heat capacity. Furthermore, Cu2S is diamagnetic. Only one attempt observed any sign of superconductivity: Southeast University claimed to measure very low resistance in a flake of LK-99, in one of four synthesis attempts, below a temperature of . Doubts were expressed by experts in the field, as the measurements showed no dropoff to zero resistance, used crude instruments that could not measure resistance below 10 μΩ (too high to distinguish superconductivity from less exotic low-temperature conductivity), and had large measurement artifacts. Some replication efforts gained global visibility, with the aid of online replication trackers that catalogued new announcements and status updates. Experimental studies Theoretical studies In the initial papers, the theoretical explanations for potential mechanisms of superconductivity in LK-99 were incomplete. Later analyses by other labs added simulations and theoretical evaluations of the material's electronic properties from first principles. See also Bismuth strontium calcium copper oxide: Superconductivity at Tc ≈ to Carbonaceous sulfur hydride: Purported superconductivity at Tc ≈ at 267 GPa Lanthanum decahydride: Superconductivity at Tc = at 150 GPa Unconventional superconductor Salvatore Pais – Inventor with a patent referenced by a patent related to LK-99 References Further reading External links List of replication claims, regularly updated: Compilation of Known Replication Attempt Claims. Guderian2nd, Spacebattles Forums. Retrieved 2 August 2023. LK-99#Online Claims. Eiri Sanada. Retrieved 2 August 2023. 2023 in science Lead(II) compounds Phosphates Science and technology in South Korea Superconductivity Crystals in space group 176
LK-99
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,494
[ "Physical quantities", "Superconductivity", "Materials science", "Salts", "Phosphates", "Condensed matter physics", "Electrical resistance and conductance" ]
78,813,714
https://en.wikipedia.org/wiki/Titanium%20tetraazide
Titanium tetraazide is an inorganic chemical compound with the formula Ti(N3)4. It is a highly sensitive explosive, and has been prepared from titanium tetrafluoride and trimethylsilyl azide via the corresponding fluoride-azide exchange. Properties Titanium tetraazide has been characterized by vibrational spectroscopy and single-crystal X-ray diffraction. The compound was predicted in 2003 to be vibrationally stable, and was expected to have a tetrahedral structure containing linear bond angles, in contrast to other metal azides, which generally feature bent bond angles. After synthesis in 2004, the resulting titanium tetraazide did not exhibit linear bond angles, as the coordination numbers exceeded 4. References azide titanium
Titanium tetraazide
[ "Chemistry" ]
146
[ "Explosive chemicals", "Azides", "Inorganic compounds", "Inorganic compound stubs" ]
78,814,861
https://en.wikipedia.org/wiki/Enzomenib
Enzomenib is an investigational new drug that is being evaluated for the treatment of acute leukemia. It is a small molecule inhibitor that targets the interaction between menin and mixed-lineage leukemia (MLL) proteins. Enzomenib is being studied particularly in patients with KMT2A (MLL) rearrangements or NPM1 mutations. The U.S. Food and Drug Administration (FDA) has granted both Fast Track and Orphan Drug designations to enzomenib. References Alkenes Antineoplastic drugs Azetidines Benzamides Carboxamides Ethers Fluorobenzenes Isopropylamino compounds Piperidines Pyrimidines Spiro compounds
Enzomenib
[ "Chemistry" ]
141
[ "Pharmacology", "Functional groups", "Medicinal chemistry stubs", "Organic compounds", "Alkenes", "Ethers", "Pharmacology stubs", "Spiro compounds" ]
78,817,781
https://en.wikipedia.org/wiki/Micropollutant
Micropollutants are substances that even at very low concentrations have adverse effects on different environmental matrices. They are an inhomogeneous group of anthropogenic chemical compounds that are discharged by humans into the environment. Commonly known micropollutants that might pose possible threats to ecological environments are, to name just a few: environmental persistent pharmaceutical pollutants and personal care products, pesticides, stimulants, persistent organic pollutants, and artificial sweeteners. To date, most scientists have identified wastewater treatment plants as the main source of micropollutants entering aquatic ecosystems and/or adversely affecting the extraction of potable water from raw water. Because drinking water is in many places extracted from surface waters, and because the substances can also reach the groundwater, micropollutants are also found in raw water and must be laboriously removed during drinking water treatment. In addition, some of the substances are bioaccumulative, which means that they accumulate in animals or plants and thus also in the human food chain. Background It is estimated that there are currently around 235,000 individual chemical substances registered worldwide. A large number of these are released into wastewater by humans. If these are persistent, they remain in the wastewater during treatment and enter the environment. Some of them have ecotoxicologically relevant properties. In some cases, the chemical itself is not a concern, but its degradation products are. This has been known for a long time. As early as 1976, a study was published in which salicylic acid and clofibric acid were detected in the effluent of a sewage treatment plant in Kansas City. It is now known that well over 1,000 substances in wastewater pose a risk. Many others have not yet been sufficiently researched in this regard. In current studies of water quality in European rivers by the Helmholtz Centre for Environmental Research, 610 chemicals whose occurrence or problematic effects are known were examined in more detail and analyzed to determine whether and, if so, in what concentrations they occur in Europe's flowing waters. The evaluation of 445 samples from a total of 22 rivers showed that the researchers were able to detect a total of 504 of the 610 chemicals. In total, they found 229 pesticides and biocides, 175 pharmaceutical chemicals as well as surfactants, plastic and rubber additives, per- and polyfluoroalkyl substances (PFAS) and corrosion inhibitors. In 40 percent of the samples, they detected up to 50 chemical substances, and in another 41 percent between 51 and 100 chemicals. In four samples, they detected more than 200 organic micropollutants. The largest number of substances, 241 chemicals, was detected in a water sample from the Danube. Effect The influences of micropollutants are varied. The best known are those of hormones that enter the water through the contraceptive pill. Several studies have shown that feminization occurs in an unusually high number of fish below discharges from sewage treatment plants, which has a negative impact on the population. One in five male smallmouth bass in U.S. rivers has developed female sexual characteristics. Estrogen-like artificial compounds such as the plasticizer bisphenol A also have this effect. There is evidence that this also applies to humans. Such substances are called endocrine disruptors.
Other substances, such as benzotriazole, which is added to dishwasher detergent as corrosion protection for silver cutlery, are suspected of being carcinogenic in addition to acting as an endocrine disruptor in the concentrations found. Another relevant factor is the danger posed by the spread of multi-resistant bacteria. There are two possible ways in which this can happen through wastewater. Firstly, by transporting already resistant strains into the receiving water due to inadequate treatment technology. The other possibility is the development of resistant cultures in the environment by introducing antibiotics into the water body. The entry of bacteria has long been prevented by hygienic treatment using UV light or ozone, especially if the water is to be reused. Membrane systems such as membrane bioreactors or downstream ultrafiltration also serve this purpose. Depending on the intensity and technology, some micropollutants are also removed in addition to the bacteria. The extent to which membrane technologies with low energy consumption are able to deplete trace substances is being investigated. Legislation Techniques for the elimination of micropollutants via a so-called fourth treatment stage during sewage treatment are implemented in Germany, Switzerland, Sweden and the Netherlands, and tests are ongoing in several other countries. In Switzerland it has been enshrined in law since 2016. Since 1 January 2025, there has been a recast of the Urban Waste Water Treatment Directive in the European Union, which requires the removal of a large proportion of micropollutants from wastewater. Due to the large number of amendments that have now been made, the directive was rewritten on November 27, 2024 as Directive (EU) 2024/3019, published in the EU Official Journal on December 12, and entered into force on January 1, 2025. The member states now have 31 months, i.e. until July 31, 2027, to adapt their national legislation to the new directive ("implementation of the directive"). The implementation of the framework guidelines is staggered until 2045, depending on the size of the sewage treatment plant and its population equivalents (PE). Sewage treatment plants with over 150,000 PE have priority and should be adapted immediately, as a significant proportion of the pollution comes from them, followed by wastewater treatment plants with 10,000 to 150,000 PE that discharge into coastal waters or sensitive waters. The latter concerns waters with a low dilution ratio, waters from which drinking water is obtained, coastal waters, and waters used for bathing or mussel farming. Member States will be given the option not to apply fourth treatment in these areas if a risk assessment shows that there is no potential risk from micropollutants to human health and/or the environment. Removal of micropollutants Due to the large number of substances with very different chemical and physical properties, the removal of these substances is difficult. Three techniques, and combinations of them, have been established so far. Two remove the contaminants with the help of activated carbon (PAC, powdered activated carbon, and GAC, granular activated carbon) and one with ozone. In addition, a large number of techniques are still at the experimental stage. These include, for example, processes that work with plasma or ultrasound (so-called AOP processes), applications with zeolites and cyclodextrins, membrane processes, and photocatalysis. 
See also Drug pollution Plastic particle water pollution Environmental impact of silver nanoparticles Environmental persistent pharmaceutical pollutant Water pollution References Pollution Environmental science Water pollution Environmental impact of products Medical waste Environmental microbiology Toxicants Environmental ethics Environment and health Environmental issues
Micropollutant
[ "Physics", "Chemistry", "Biology", "Environmental_science" ]
1,430
[ "Toxicology", "Environmental ethics", "Medical waste", "Harmful chemical substances", "Water pollution", "Materials", "Toxicants", "nan", "Environmental microbiology", "Matter" ]
78,818,646
https://en.wikipedia.org/wiki/SN%201999em
SN 1999em was a well-observed type II-P supernova in the spiral galaxy NGC 1637, which lies within the mostly southern constellation of Eridanus. It was discovered on October 29, 1999 at a visual magnitude of 13.3. Using a corrected version of the expanding photosphere method (EPM), the distance to the supernova is estimated as . This is in good agreement with the Cepheid method, which yields a distance of . Observations This supernova event was first detected by the Lick Observatory Supernova Search from a CCD frame taken October 29, 1999 with the Katzman Automatic Imaging Telescope (KAIT). The discovery was confirmed by the Beijing Astronomical Observatory the same day. It showed an apparent visual magnitude of 13.5. A KAIT image of the same area taken on October 20 showed nothing at the position of this supernova. SN 1999em was positioned west and south of the NGC 1637 nucleus. A spectrum taken October 30 showed this to be a type II supernova event. The early expansion velocity of the photosphere was measured at . Interstellar lines in the spectrum indicated the event may be partially obscured by dust. X-ray emission was detected from this source on November 1–2 and 11–12 using the Chandra X-ray Observatory. The number of photons detected suggested a luminosity of for the source. A compact radio source at this position was detected on December 1 from the NRAO Very Large Array. This was the first type II-P supernova to be detected at both X-ray and radio wavelengths. By this point, the target had been identified as a type II-P supernova, based on the shape of the light curves and spectral properties. Spectropolarimetry measured between November 1999 and January 2000 showed an increasing level of polarization at later dates. This implied asphericity toward the core of the explosion – meaning a deviation from spherical symmetry. Photometric observations showed that SN 1999em remained in its plateau phase for approximately 90 days, indicating that the progenitor possessed a massive hydrogen envelope when the explosion occurred. The explosion date was estimated to be before discovery. By day 161, the spectrum was dominated by emission lines, indicating that the remnant was transitioning to the nebular phase. Evidence showed that dust formation began at around day 500. The exponential decline of the light curve tail was mainly powered by the radioactive decay of 56Co to 56Fe. Ejecta mass is estimated at approximately and the surviving neutron star has . The host galaxy is close enough that individual bright supergiants can be resolved. However, no such object was detected at the position of the event. Supernova models indicate a progenitor mass in the range of , with near solar metallicity and an explosive energy of . This star had a radius of about . Radio and X-ray emission indicate the progenitor was surrounded by clumpy or filamentary circumstellar material that was fed by a low stellar mass loss rate of about ·yr−1 with a wind velocity of . The light curve for this event is nearly identical to that of SN 1999gi, suggesting they may have similar progenitor stars. References Further reading Supernovae Eridanus (constellation)
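The expanding photosphere method mentioned above can be summarised schematically. The relation below is the textbook form of EPM rather than the specific dilution-factor-corrected version used for SN 1999em, and the symbols (θ for the angular size of the photosphere, v for the photospheric velocity, t0 for the explosion time, D for the distance) are the conventional ones.

```latex
% Textbook expanding photosphere method (schematic, ignoring dilution factors):
% the photosphere expands nearly freely from the explosion epoch t_0, so
R_{\mathrm{phot}}(t) \simeq v_{\mathrm{phot}}(t)\,\bigl(t - t_0\bigr),
\qquad
\theta(t) = \frac{R_{\mathrm{phot}}(t)}{D}
\quad\Longrightarrow\quad
t = t_0 + D\,\frac{\theta(t)}{v_{\mathrm{phot}}(t)} ,
% so a linear fit of the observation times t against \theta / v_phot gives the
% distance D as the slope and the explosion time t_0 as the intercept.
```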
SN 1999em
[ "Chemistry", "Astronomy" ]
666
[ "Supernovae", "Astronomical events", "Constellations", "Explosions", "Eridanus (constellation)" ]
75,737,969
https://en.wikipedia.org/wiki/Parabolic%20subgroup%20of%20a%20reflection%20group
In the mathematical theory of reflection groups, the parabolic subgroups are a special kind of subgroup. The precise definition of which subgroups are parabolic depends on context—for example, whether one is discussing general Coxeter groups or complex reflection groups—but in all cases the collection of parabolic subgroups exhibits important good behaviors. For example, the parabolic subgroups of a reflection group have a natural indexing set and form a lattice when ordered by inclusion. The different definitions of parabolic subgroups essentially coincide in the case of finite real reflection groups. Parabolic subgroups arise in the theory of algebraic groups, through their connection with Weyl groups. Background: reflection groups In a Euclidean space (such as the Euclidean plane, ordinary three-dimensional space, or their higher-dimensional analogues), a reflection is a symmetry of the space across a mirror (technically, across a subspace of dimension one smaller than the whole space) that fixes the vectors that lie on the mirror and send the vectors orthogonal to the mirror to their negatives. A finite real reflection group is a finite group generated by reflections (that is, every linear transformation in is a composition of some of the reflections in ). For example, the symmetries of a regular polygon in the plane form a reflection group (called the dihedral group), because each rotation symmetry of the polygon is a composition of two reflections. Finite real reflection groups can be generalized in various ways, and the definition of parabolic subgroup depends on the choice of definition. Each finite real reflection group has the structure of a Coxeter group: this means that contains a subset of reflections (called simple reflections) such that generates , subject to relations of the form where denotes the identity in and the are numbers that satisfy for and for . Thus, the Coxeter groups form one generalization of finite real reflection groups. A separate generalization is to consider the geometric action on vector spaces whose underlying field is not the real numbers. Especially, if one replaces the real numbers with the complex numbers, with a corresponding generalization of the notion of a reflection, one arrives at the definition of a complex reflection group. Every real reflection group can be complexified to give a complex reflection group, so the complex reflection groups form another generalization of finite real reflection groups. In Coxeter groups Suppose that is a Coxeter group with a finite set of simple reflections. For each subset of , let denote the subgroup of generated by . Such subgroups are called standard parabolic subgroups of . In the extreme cases, is the trivial subgroup (containing just the identity element of ) and The pair is again a Coxeter group. Moreover, the Coxeter group structure on is compatible with that on , in the following sense: if denotes the length function on with respect to (so that if the element of can be written as a product of elements of and not fewer), then for every element of , one has that . That is, the length of is the same whether it is viewed as an element of or of . The same is true of the Bruhat order: if and are elements of , then in the Bruhat order on if and only if in the Bruhat order on . If and are two subsets of , then if and only if , , and the smallest group that contains both and is . Consequently, the lattice of standard parabolic subgroups of is a Boolean lattice. 
Given a standard parabolic subgroup of a Coxeter group , the cosets of in have a particularly nice system of representatives: let denote the set of elements in that do not have any element of as a right descent. Then for each , there are unique elements and such that . Moreover, this is a length-additive product, that is, . Furthermore, is the element of minimum length in the coset . An analogous construction is valid for right cosets. The collection of all left cosets of standard parabolic subgroups is one possible construction of the Coxeter complex. In terms of the Coxeter–Dynkin diagram, the standard parabolic subgroups arise by taking a subset of the nodes of the diagram and the edges induced between those nodes, erasing all others. The only normal parabolic subgroups arise by taking a union of connected components of the diagram, and the whole group is the direct product of the irreducible Coxeter groups that correspond to the components. In complex reflection groups Suppose that is a complex reflection group acting on a complex vector space . For any subset , let be the subset of consisting of those elements in that fix each element of . Such a subgroup is called a parabolic subgroup of . In the extreme cases, and is the trivial subgroup of that contains only the identity element. It follows from a theorem of that each parabolic subgroup of a complex reflection group is a reflection group, generated by the reflections in that fix every point in . Since acts linearly on , where is the span of (that is, the smallest linear subspace of that contains ). In fact, there is a simple choice of subspaces that index the parabolic subgroups: each reflection in fixes a hyperplane (that is, a subspace of whose dimension is less than that of ) pointwise, and the collection of all these hyperplanes is the reflection arrangement of . The collection of all intersections of subsets of these hyperplanes, partially ordered by inclusion, is a lattice . The elements of the lattice are precisely the fixed spaces of the elements of (that is, for each intersection of reflecting hyperplanes, there is an element such that ). The map that sends for is an order-reversing bijection between subspaces in and parabolic subgroups of . Concordance of definitions in finite real reflection groups Let be a finite real reflection group; that is, is a finite group of linear transformations on a finite-dimensional real Euclidean space that is generated by orthogonal reflections. As mentioned above (see ), may be viewed as both a Coxeter group and as a complex reflection group. For a real reflection group , the parabolic subgroups of (viewed as a complex reflection group) are not all standard parabolic subgroups of (when viewed as a Coxeter group, after specifying a fixed Coxeter generating set ), as there are many more subspaces in the intersection lattice of its reflection arrangement than subsets of . However, in a finite real reflection group , every parabolic subgroup is conjugate to a standard parabolic subgroup with respect to . Examples The symmetric group , which consists of all permutations of , is a Coxeter group with respect to the set of adjacent transpositions , ..., . The standard parabolic subgroups of (which are also known as Young subgroups) are the subgroups of the form , where are positive integers with sum , in which the first factor in the direct product permutes the elements among themselves, the second factor permutes the elements among themselves, and so on. 
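To make the Young-subgroup description above concrete, the following short sketch (illustrative code, not taken from any library) builds the standard parabolic subgroup of the symmetric group attached to a composition (n1, ..., nk): it generates exactly the permutations, written in one-line notation, that permute the first n1 letters among themselves, the next n2 among themselves, and so on.

```python
from itertools import permutations, product

def young_subgroup(composition):
    """All permutations of {0, ..., n-1} preserving the blocks of the composition
    (0-indexed for convenience). For composition (n1, ..., nk) this is the standard
    parabolic (Young) subgroup S_{n1} x ... x S_{nk} inside S_n, returned as tuples
    in one-line notation."""
    blocks, start = [], 0
    for size in composition:
        blocks.append(list(range(start, start + size)))
        start += size
    elements = []
    # Choose a permutation of each block independently and concatenate them.
    for choice in product(*(permutations(b) for b in blocks)):
        perm = tuple(x for block_perm in choice for x in block_perm)
        elements.append(perm)
    return elements

# The Young subgroup S_2 x S_2 inside S_4 has 2! * 2! = 4 elements.
subgroup = young_subgroup((2, 2))
print(len(subgroup))   # 4
print(subgroup)        # [(0, 1, 2, 3), (0, 1, 3, 2), (1, 0, 2, 3), (1, 0, 3, 2)]
```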
The hyperoctahedral group , which consists of all signed permutations of (that is, the bijections on that set such that for all ), has as its maximal standard parabolic subgroups the stabilizers of for . More general definitions in Coxeter theory In a Coxeter group generated by a finite set of simple reflections, one may define a parabolic subgroup to be any conjugate of a standard parabolic subgroup. Under this definition, it is still true that the intersection of any two parabolic subgroups is a parabolic subgroup. The same does not hold in general for Coxeter groups of infinite rank. If is a group and is a subset of , the pair is called a dual Coxeter system if there exists a subset of such that is a Coxeter system and so that is the set of all reflections (conjugates of the simple reflections) in . For a dual Coxeter system , a subgroup of is said to be a parabolic subgroup if it is a standard parabolic (as in ) of for some choice of simple reflections for In some dual Coxeter systems, all sets of simple reflections are conjugate to each other; in this case, the parabolic subgroups with respect to one simple system (that is, the conjugates of the standard parabolic subgroups) coincide with the parabolic subgroups with respect to any other simple system. However, even in finite examples, this may not hold: for example, if is the dihedral group with elements, viewed as symmetries of a regular pentagon, and is the set of reflection symmetries of the polygon, then any pair of reflections in forms a simple system for , but not all pairs of reflections are conjugate to each other. Nevertheless, if is finite, then the parabolic subgroups (in the sense above) coincide with the parabolic subgroups in the classical sense (that is, the conjugates of the standard parabolic subgroups with respect to a single, fixed, choice of simple reflections ). The same result does not hold in general for infinite Coxeter groups. Affine and crystallographic Coxeter groups When is an affine Coxeter group, the associated finite Weyl group is always a maximal parabolic subgroup, whose Coxeter–Dynkin diagram is the result of removing one node from the diagram of . In particular, the length functions on the finite and affine groups coincide. In fact, every standard parabolic subgroup of an affine Coxeter group is finite. As in the case of finite real reflection groups, when we consider the action of an affine Coxeter group on a Euclidean space , the conjugates of the standard parabolic subgroups of are precisely the subgroups of the form for some subset of . If is a crystallographic Coxeter group, then every parabolic subgroup of is also crystallographic. Connection with the theory of algebraic groups If is an algebraic group and is a Borel subgroup for , then a parabolic subgroup of is any subgroup that contains . If furthermore has a pair, then the associated quotient group is a Coxeter group, called the Weyl group of . Then the group has a Bruhat decomposition into double cosets (where is the disjoint union), and the parabolic subgroups of containing are precisely the subgroups of the form where is a standard parabolic subgroup of . Parabolic closures Suppose is a Coxeter group of finite rank (that is, the set of simple generators is finite). Given any subset of , one may define the parabolic closure of to be the intersection of all parabolic subgroups containing . 
As mentioned above, in this case the intersection of any two parabolic subgroups of is again a parabolic subgroup of , and consequently the parabolic closure of is a parabolic subgroup of ; in particular, it is the (unique) minimal parabolic subgroup of containing . The same analysis applies to complex reflection groups, where the parabolic closure of is also the pointwise stabiliser of the space of fixed points of . The same does not hold for Coxeter groups of infinite rank. Braid groups Each Coxeter group is associated to another group called its Artin–Tits group or generalized braid group, which is defined by omitting the relations for each generator from its Coxeter presentation. Although generalized braid groups are not reflection groups, they inherit a notion of parabolic subgroups: a standard parabolic subgroup of a generalized braid group is a subgroup generated by a subset of the standard generating set , and a parabolic subgroup is any subgroup conjugate to a standard parabolic. A generalized braid group is said to be of spherical type if the associated Coxeter group is finite. If is a generalized braid group of spherical type, then the intersection of any two parabolic subgroups of is also a parabolic subgroup. Consequently, the parabolic subgroups of form a lattice under inclusion. For a finite real reflection group , the associated generalized braid group may be defined in purely topological language, without referring to a particular group presentation. This definition naturally extends to finite complex reflection groups. Parabolic subgroups can also be defined in this setting. Footnotes References Coxeter groups Reflection groups
Parabolic subgroup of a reflection group
[ "Physics" ]
2,498
[ "Euclidean symmetries", "Reflection groups", "Symmetry" ]
75,742,095
https://en.wikipedia.org/wiki/Indium%28I%29%20chloride
Indium(I) chloride (also indium monochloride) is the chemical compound with the formula InCl. Indium monochloride occurs as a yellow cubic form below 120 °C and above this temperature as a red orthorhombic form. InCl is one of three known indium chlorides. Synthesis and structure InCl can be prepared by heating indium metal with indium trichloride in a sealed tube. According to X-ray crystallography, the structure of the yellow polymorph resembles that of sodium chloride except that the Cl-In-Cl angles are not 90°, but range between 71 and 130°. The red (high-temperature) polymorph crystallizes in the thallium(I) iodide motif. Reactivity The relatively high energy level of the 5s electrons of the indium center makes InCl susceptible to oxidation as well as disproportionation into In(0) and InCl3. Tetrahydrofuran (THF) appears to facilitate the disproportionation of InCl as well as that of other indium(I) halides. History Indium(I) chloride was first isolated in 1926 as part of an investigation on the compounds formed between indium and chlorine. References Chlorides Indium(I) compounds Metal halides
Indium(I) chloride
[ "Chemistry" ]
280
[ "Chlorides", "Inorganic compounds", "Inorganic compound stubs", "Salts", "Metal halides" ]
75,744,297
https://en.wikipedia.org/wiki/Relationship%20between%20telomeres%20and%20longevity
The relationship between telomeres and longevity, and the possibility of changing telomere length, is one of the newer fields of research on extending human lifespan and even on human immortality. Telomeres are sequences at the ends of chromosomes that shorten with each cell division and determine the lifespan of cells. The telomere was first discovered by biologist Hermann Joseph Muller in the early 20th century. However, experiments by Elizabeth Blackburn, Carol Greider, and Jack Szostak in the 1980s led to the discovery of telomerase (the enzyme responsible for maintaining telomere length) and a better understanding of telomeres. Telomeres play essential roles in the stability and control of cell division. Telomeres protect chromosomes from deterioration and fusion with neighboring chromosomes and act as a buffer zone, preventing the loss of essential genetic information during cell division. It is predicted that knowledge of methods to increase telomere length (for example in stem cells and quasi-stem cells, which control the regeneration and rebuilding of different tissues of the body) will pave the way for increasing human lifespan. Examining telomeres is one of the most important fields of research related to aging. It is also very important to investigate the mechanisms by which long-lived organisms maintain telomerase, clear senescent cells (old cells that accumulate in tissues and sometimes cause cancer and inflammation) and produce new cells. However, this idea faces major challenges such as increased cancer incidence, immune system problems, and unwanted long-term consequences. Telomere and Telomerase In the early 1970s, Alexey Olovnikov first recognized that chromosomes cannot completely duplicate their ends during cell division. This is known as the "end replication problem". Olovnikov proposed that every time a cell divides, a part of the DNA sequence is lost, and if this loss reaches a certain level, cell division will eventually stop. According to his "marginotomy" theory, there are sequences at the end of the DNA (telomeres) that are placed in tandem repeats and create a buffer zone that determines the number of divisions a particular cell can undergo. Many organisms have a ribonucleoprotein enzyme called telomerase, which is responsible for adding repetitive nucleotide sequences to the ends of DNA. Telomerase replicates the telomere ends and does not require ATP. In most multicellular eukaryotic organisms, telomerase is active only in germ cells, some types of stem cells such as embryonic stem cells, and certain white blood cells. Telomerase can be reactivated and telomeres restored to the embryonic state by somatic cell nuclear transfer. The continuous shortening of telomeres with each replication in somatic (body) cells may play a role in aging and in cancer prevention. This is because telomeres act as a kind of "delayed fuse" and eventually run out after a certain number of cell divisions. This action results in the loss of vital genetic information from the cell's chromosome after multiple divisions. Research on telomerase is extremely important in understanding its role in maintaining telomere length and its potential implications for aging and cancer. Challenges While telomeres play an important role in cellular senescence, the intricate biological details of telomeres still require further investigation. 
The complex interactions between telomeres, various proteins and the cellular environment must be fully understood in order to develop precise and safe interventions for altering telomere length. Understanding the long-term effects of telomere extension on the body is complex and risky. Predicting long-term consequences, including potential unanticipated side effects or interactions with other cellular processes, requires thorough and long-term investigation. Increased risk of cancer Extending telomeres can allow cells to divide more and increase the risk of uncontrolled cell growth and cancer development. A study conducted by Johns Hopkins University challenged the idea that long telomeres prevent aging. Rather than protecting cells from aging, long telomeres help cells with age-related mutations last longer. This creates conditions favorable to the development of various cancers, and people with longer telomeres showed more signs of cancers such as melanoma and lymphoma. Telomere length balance It is important to strike the right balance in telomere length to avoid unintended consequences. Old cells and telomere dysfunction Telomere dysfunction during cellular senescence (a state in which cells do not divide but are metabolically active) affects the health of the body. Preventing telomere shortening without clearing old cells may lead to the accumulation of these cells in the body and contribute to age-related diseases and tissue dysfunction. Intertissue differences Different tissues of the human body may react differently to changes in telomeres. Telomere length differs between tissues and cell types of the body. Developing a general telomere lengthening strategy that is effective in all tissues is a complex task; also, understanding how different types of cells, organs and systems react to telomere manipulation is very important for developing safe and effective interventions. Effects on the immune system The immune system plays an important role in monitoring and destroying abnormal or cancerous cells. Telomere extension may affect the immune system's ability to recognize and eliminate cells with long telomeres, potentially compromising immune surveillance. It is very important to ensure the ability of the immune system to effectively identify and fight against pathogens and abnormal cells. See also Apoptosis Neurodegenerative disease Programmed cell death HeLa References External link Senescence Cellular senescence Telomeres Longevity Molecular biology
Relationship between telomeres and longevity
[ "Chemistry", "Biology" ]
1,138
[ "Senescence", "Cellular senescence", "Cellular processes", "Molecular biology", "Biochemistry", "Telomeres", "Metabolism" ]
69,890,623
https://en.wikipedia.org/wiki/La%20Teja%20Refinery
The La Teja Refinery is the only oil refinery in Uruguay, and is located in the La Teja neighborhood of Montevideo. Owned by the state-owned company ANCAP, the refinery primarily produces light-grade oil products used by domestic industries. The refinery is connected to an oil terminal in the Port of Montevideo. The refinery was first operated in 1937, and currently has a total capacity of approximately 50,000 barrels a day. Two-thirds of Uruguay's petroleum imports come from the United States, with a further 18% from neighboring Brazil. Emissions A 2011 study found SO2 emissions of ~4×10¹⁷ molec cm⁻² (slant column density) directly over the oil refinery, decreasing as the plume disperses, with NO2 peaking at ~1×10¹⁶ molec cm⁻². Planned future The refinery undergoes overhauls approximately every 4 years. The capacity of the refinery is scheduled to be upgraded in 2023 to better produce lightweight petroleum products using residual oil solvent extraction and solvent deasphalting. Because of the energy transition in the country, in which Uruguay reached over 94% clean energy and the government has plans to transition other sectors such as transport, Minister of Environment Adrián Peña projected closing the refinery by 2035 to meet the zero-emission goal set out in Uruguay's Long Term Climate Strategy. References Oil refineries in Uruguay
La Teja Refinery
[ "Chemistry" ]
271
[ "Petroleum", "Petroleum stubs" ]
69,891,673
https://en.wikipedia.org/wiki/See%20Monster
See Monster (stylised in all capitals) was a temporary outdoor art installation in Weston-super-Mare, England. It was part of the nationwide arts festival Unboxed: Creativity in the UK and consisted of a converted decommissioned offshore platform featuring a garden and artworks that promoted sustainability. Along with the wider Unboxed festival, the installation attracted some controversy. However, more than a million people engaged with it through visitation, related activities and various forms of media. Development Background See Monster was originally a North Sea offshore platform and was one of ten works commissioned as part of Unboxed: Creativity in the UK, a nationwide arts festival based around science, technology, engineering and mathematics. The installation was located at the Tropicana, an events space and former lido that had previously hosted the Banksy art installation Dismaland in 2015. It was a work of the Leeds creative studio NewSubstance and was supported by North Somerset Council. It was expected to cost £10.5 million. The intention of See Monster was to inspire people to discuss the sustainable reuse of industrial structures. Martin Green, chief creative officer of Unboxed, said that the installation would "take something that took from the earth and ask it to give back." The installation was purported to be the first example in the world of an offshore platform being repurposed after decommissioning, rather than scrapped. Construction and opening The 450-tonne platform was stripped, repaired and cleaned in a shipyard in the Netherlands over 12 months. It arrived in Weston-super-Mare by barge on 13 July 2022 and was moved onto the beach by a Mammoet self-propelled modular transporter. A 1,500-tonne crane lifted it onto a set of pre-constructed legs within the Tropicana on 16 July. The opening had originally been planned for July to coincide with the summer holidays, but was ultimately delayed until 23 September. The delay was attributed to the unprecedented nature of the project and to the weather, with construction work being unable to take place during high winds, rain or lightning. A viewing platform opened on 15 August, allowing visitors to watch the construction. The installation was intended to be open until 5 November, but this was later extended to 20 November. Overview Installation See Monster was tall and had four distinct levels. A waterfall representing the monster's roar cascaded into the pool in which the platform stood. Above this were the Cellar Deck, Garden Lab and Helideck. More than 6,000 pieces made up a shimmering kinetic artwork representing the monster's scales and a crane represented the monster's head and neck. Other features included a garden of trees and plants grown to survive in a coastal microclimate, a cloud-making machine, an amphitheatre, telescopes to show the view and a curly slide. The installation's irrigation system was powered using renewable energy generated by the WindNest, an artwork by Trevor Lee comprising two rotating pods generating wind power, and the Solar Tree, comprising a solar panel mounted atop a metal tree generating solar power. There were also two kinetic sculptures by Ivan Black representing the Sun and Moon, as well as a studio from which radio programmes and podcasts were broadcast. 
Programmes In addition to the physical installation, there was a learning programme offering educational visits and resources to schoolchildren, young people, Scouts, youth groups and students and a think tank programme involving local residents. Drone light shows Prior to See Monster's opening, a series of three drone light shows called The Awakening took place on Weston-super-Mare seafront on 28 August, 30 August and 1 September 2022. They were performed by SkyMagic and involved 400 drones. Reception Reaction and controversy Patrick O'Mahony, See Monster's creative director, expected that the installation would "split opinion" but remarked that he would "rather people love it or hate it rather than being indifferent" and that "there's nothing worse than doing something people have no reaction to." Charlotte Lytton, writing for The Telegraph, compared See Monster favourably to Dismaland and remarked, "even if its eco-message does not entirely cut through, this is the better end of public art: a supersized spectacle in equal parts immersive and unusual." Julian Knight, chair of the Digital, Culture, Media and Sport Committee, said that the installation looked "fantastic" but criticised the fact that the delay caused it to miss the summer holidays, questioned the relevance of it and the other Unboxed installations to the public and described the festival as an "irresponsible use of public money." The installation's delay was also criticised locally, although some suggested that the later opening had helped to prolong Weston-super-Mare's tourist season. Knight called for an investigation into the festival and the National Audit Office (NAO) subsequently announced that it would conduct one. Audience engagement Unboxed announced overall audience engagement of 1,087,646 for See Monster, including 512,261 through visitation, 87,211 through the learning programme and 5,852 through participation, as well as 394,822 through digital media, 67,500 through broadcast media and 20,000 through print media. North Somerset Council reported that 6,000 engaged with the think tank programme and an estimated 70,000 attended the drone light shows. The installation was reported to have attracted visitors from across the country and from abroad, with some queuing for two to three hours to enter. See Monster's success provided an economic boost to Weston-super-Mare, with numerous local businesses reporting increased custom during its opening. Unboxed cited the installation's popularity as the reason for extending its opening. Some local residents called for it to remain permanently. Accolades On 30 May 2023, See Monster was the popular choice winner in the Pop-Ups & Temporary category in the 2023 Architizer A+Awards. Decommissioning Work to dismantle See Monster began on 21 November 2022 and was completed in early 2023. The structure was recycled, with some of the features being donated to local projects and the trees and plants being replanted around Weston-super-Mare. See Monster Garden, a public garden on Weston-super-Mare seafront featuring many of the trees and plants from the installation, and intended as a lasting legacy, opened on 24 July 2023. References External links - video of arrival onto beach, and interview with creative director Weston-super-Mare Arts in Somerset Public art in England Oil platforms
See Monster
[ "Chemistry", "Engineering" ]
1,338
[ "Oil platforms", "Petroleum technology", "Natural gas technology", "Structural engineering" ]
69,892,662
https://en.wikipedia.org/wiki/Burning%20plasma
In plasma physics, a burning plasma is a plasma that is heated primarily by fusion reactions involving thermal plasma ions. The Sun and similar stars are a burning plasma, and in 2020 the National Ignition Facility achieved a burning plasma in the laboratory. A closely related concept is that of an ignited plasma, in which all of the heating comes from fusion reactions. The Sun The Sun and other main sequence stars are internally heated by fusion reactions involving hydrogen ions. The high temperatures needed to sustain fusion reactions are maintained by a self-heating process in which energy from the fusion reaction heats the thermal plasma ions via particle collisions. A plasma enters what scientists call the burning plasma regime when the self-heating power exceeds any external heating. The Sun is a burning plasma that has reached fusion ignition, meaning the Sun's plasma temperature is maintained solely by energy released from fusion. The Sun has been burning hydrogen for 4.5 billion years and is about halfway through its life cycle. Thermonuclear weapons Thermonuclear weapons, also known as hydrogen bombs, are nuclear weapons that use energy released by a burning plasma's fusion reactions to produce part of their explosive yield. This is in contrast to pure-fission weapons, which produce all of their yield from a neutronic nuclear fission reaction. The first thermonuclear explosion, and thus the first man-made burning plasma, was the Ivy Mike test carried out by the United States in 1952. All high-yield nuclear weapons today are thermonuclear weapons. The National Ignition Facility In 2020, a burning plasma was created in the laboratory for the first time at the National Ignition Facility, a large laser-based inertial confinement fusion research device located at the Lawrence Livermore National Laboratory in Livermore, California. NIF achieved a fully ignited plasma on August 8, 2021, and a scientific energy gain above one on December 5, 2022. Tokamaks Multiple tokamaks are currently under construction with the goal of becoming the first magnetically confined burning plasma experiment. ITER, being built near Cadarache in France, has the stated goal of allowing fusion scientists and engineers to investigate the physics, engineering, and technologies associated with a self-heating plasma. Issues to be explored include understanding and controlling a strongly coupled, self-organized plasma; management of heat and particles that reach plasma-facing surfaces; demonstration of fuel breeding technology; and the physics of energetic particles. These issues are relevant to ITER's broader goal of using self-heating plasma reactions to become the first fusion energy device that produces more power than it consumes, a major step toward commercial fusion power production. To reach fusion-relevant temperatures, the ITER tokamak will heat plasmas using three methods: ohmic heating (running electric current through the plasma), neutral particle beam injection, and high-frequency electromagnetic radiation. SPARC, being built in Devens in the United States, plans to verify the technology and physics required to build a power plant based on the ARC fusion power plant concept. SPARC is designed to achieve this with margin in excess of breakeven and may be capable of achieving up to 140 MW of fusion power for 10-second bursts despite its relatively compact size. 
SPARC's high-temperature superconducting magnets are intended to create much stronger magnetic fields, allowing the device to be much smaller than similar tokamaks. Symbolic implications The NIF burning plasma, despite not occurring in an energy context, has been characterized as a major milestone in the race towards nuclear fusion power, amid perceptions that it could contribute to a better planet. The first controlled burning plasma has been characterized as a critical juncture on the same level as the Trinity Test, with enormous implications for fusion as an energy source (fusion power), including the possible weaponization of fusion power, mainly as a source of electricity for directed-energy weapons, as well as for fusion as a tool of peacebuilding, one of the main tasks of ITER. References Plasma types Fusion power Nuclear fusion Stellar evolution
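As a rough illustration of the burning-plasma criterion described above, the following sketch assumes deuterium-tritium fuel, in which the charged alpha particles carry about one fifth of the total fusion power and are the part that can self-heat the plasma; the power values are purely illustrative, not measured figures from NIF, ITER or SPARC.

```python
# Rough power-balance sketch for the burning-plasma criterion (illustrative numbers).
ALPHA_FRACTION = 1.0 / 5.0   # in D-T fusion, alpha particles carry ~1/5 of the fusion power

def classify(fusion_power_mw, external_heating_mw):
    """Classify a plasma from its fusion power and externally supplied heating power."""
    alpha_heating = ALPHA_FRACTION * fusion_power_mw
    gain = fusion_power_mw / external_heating_mw if external_heating_mw > 0 else float("inf")
    if external_heating_mw == 0:
        regime = "ignited (all heating from fusion)"
    elif alpha_heating > external_heating_mw:
        regime = "burning (self-heating exceeds external heating)"
    else:
        regime = "externally heated"
    return gain, regime

for p_fus, p_ext in [(50.0, 40.0), (300.0, 50.0), (400.0, 0.0)]:
    q, regime = classify(p_fus, p_ext)
    print(f"P_fusion={p_fus:6.1f} MW, P_external={p_ext:5.1f} MW -> Q={q:5.1f}, {regime}")
```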
Burning plasma
[ "Physics", "Chemistry" ]
810
[ "Plasma physics", "Fusion power", "Astrophysics", "Stellar evolution", "Nuclear physics", "Nuclear fusion", "Plasma types" ]
78,832,425
https://en.wikipedia.org/wiki/Fen%20%28mass%29
Fen (分), called fan in Cantonese, hun in Taiwanese, phân in Vietnamese, or "candareen" in English, is a traditional Chinese unit for weight measurement. It originated in China before being introduced to neighboring countries in East Asia. Nowadays, the mass of 1 fen equals 0.5 grams in mainland China, 0.375 grams in Taiwan, 0.37799 grams in Hong Kong, Singapore and Malaysia, and 0.378 grams in Vietnam. Fen is mostly used in traditional markets, and is commonly used for weighing gold, silver and Chinese medicines. Mainland China On June 25, 1959, the State Council of the People's Republic of China issued the "Order on the Unified Measurement System", retaining the market measure system with minor amendments, under which 1 fen equals 0.5 grams (i.e., 500 mg) and 10 fen equal 1 qian. The traditional Chinese medicine measurement system remains unchanged. Taiwan Taiwan continued to use the old weights and measures of the Qing dynasty. 1 Taiwan fen is equal to 0.375 grams (375 mg), or 1/10 Taiwan qian. Hong Kong and Macau Hong Kong and Macau mass units In Hong Kong, one fen is equal to 1/10 qian, which is 0.3779936375 grams, or 377.9936375 mg. Similarly, Singapore law stipulates that one fen equals 0.37799 g. Malaysia, also a former British colony, has the same regulations. Hong Kong troy units These are used for trading precious metals such as gold and silver. Vietnam In Vietnam, the unit of fen is called "phân": 1 phân is traditionally equal to 0.38 grams, or 10 ly. For more information on the Chinese mass measurement system, see the article Jin (mass). See also Chinese units of measurement Hong Kong units of measurement Taiwanese units of measurement Vietnamese units of measurement Notes References External links 中國度量衡#衡 市制 分 (質量單位) Units of mass Chinese units of measurement Customary units of measurement
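The jurisdictional definitions quoted above amount to a simple look-up-and-multiply conversion. The short sketch below hard-codes the gram values given in the text; the function name and structure are illustrative choices only.

```python
# Conversion of fen to grams under the definitions quoted in the text.
FEN_IN_GRAMS = {
    "mainland China": 0.5,
    "Taiwan": 0.375,
    "Hong Kong / Singapore / Malaysia": 0.37799,
    "Vietnam (phan)": 0.378,
}

def fen_to_grams(amount_fen: float, jurisdiction: str) -> float:
    """Convert an amount in fen to grams using the jurisdiction's definition."""
    return amount_fen * FEN_IN_GRAMS[jurisdiction]

for place, grams in FEN_IN_GRAMS.items():
    print(f"10 fen in {place}: {fen_to_grams(10, place):.4f} g  (1 fen = {grams} g)")
```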
Fen (mass)
[ "Physics", "Mathematics" ]
441
[ "Matter", "Quantity", "Units of mass", "Mass", "Customary units of measurement", "Units of measurement" ]
78,838,726
https://en.wikipedia.org/wiki/Elisabeth%20Gasteiger
Elisabeth Gasteiger is a Swiss bioinformatician known for her work in developing and managing tools for protein analysis. Her efforts have been instrumental in advancing proteomics research, particularly through her contributions to the ExPASy (Expert Protein Analysis System), a bioinformatics resource platform. She currently holds the position of Senior User Experience and Support Manager at the Swiss Institute of Bioinformatics (SIB). Career Gasteiger's work at the Swiss Institute of Bioinformatics (SIB) has been instrumental in the development and enhancement of key bioinformatics resources, including the UniProt database, an integrated platform that combines protein sequence data from Swiss-Prot, TrEMBL, and PIR, as well as the ExPASy platform, which provides a suite of tools to support protein sequence analysis and related research. She has played a pivotal role in the development and enhancement of the ExPASy (Expert Protein Analysis System) platform, coordinating software development within the Swiss-Prot group at SIB and overseeing the ExPASy server. One of her notable contributions is the development of ProtParam, a tool hosted on the ExPASy server that allows researchers to compute various physical and chemical parameters of a protein sequence, such as its molecular weight, theoretical pI, amino acid composition, and extinction coefficient. In her role as User Experience & Support Manager with the Swiss-Prot Group at SIB, she contributed to the deployment of the Cellosaurus database on the ExPASy platform in May 2015, significantly enhancing its development as a standalone resource for life sciences research. She has also contributed to other bioinformatics resources, including the creation of the ABCD database, a repository for chemically defined antibodies, which serves as a valuable resource for researchers in the field of immunology. Additionally, she has been associated with the development of the Glycomics@ExPASy platform, which aims to bridge the gap in glycomics research by providing access to a variety of databases and tools dedicated to the study of glycans and glycoproteins. Her ongoing work has contributed to the success of UniProt as a leading resource for protein sequence classification, annotation, and functional analysis, supporting researchers worldwide in molecular biology, genomics, and proteomics. She has been instrumental in providing critical support to researchers by developing and managing bioinformatics tools that improve the accessibility and accuracy of protein sequence data. In 2024, she contributed to a research initiative in collaboration with Ottokar Stundner and other clinician scientists, which was awarded the Weiss Research Prize. This project, supported by the Weiss-Wissenschaftsstiftung and the Austrian Science Fund (FWF), aims to improve the safety of anesthetic procedures by investigating the risks associated with crystal formation in anesthetic mixtures. Selected publications Gasteiger, E., Hoogland, C., Gattiker, A., Duvaud, S., Wilkins, M. R., Appel, R. D., Bairoch, A. (2003). "ExPASy: SIB bioinformatics resource portal." Nucleic Acids Research, 31(13), 3787–3793. doi:10.1093/nar/gkg557. Gasteiger, E., Hoogland, C., Gattiker, A., Duvaud, S., Wilkins, M. R., Appel, R. D., Bairoch, A. (2005). "Protein identification and analysis tools on the ExPASy server." In: Walker, J. M. (ed.), The Proteomics Protocols Handbook, pp. 571–607. Humana Press. doi:10.1385/1-59259-890-0:571. Gasteiger, E., Jung, E., Bairoch, A. (2001). 
"The SWISS-PROT protein sequence database and its applications in proteomics." Bioinformatics, 17(2), 305–306. doi:10.1093/bioinformatics/17.2.305. Boeckmann, B., Bairoch, A., Apweiler, R., Blatter, M. C., Estreicher, A., Gasteiger, E., et al. (2003). "The SWISS-PROT protein knowledgebase and its applications to proteomics." Bioinformatics, 19(2), 1–10. doi:10.1093/bioinformatics/19.2.1. Gasteiger, E., UniProt Consortium. (2001). "UniProt: The universal protein knowledgebase." Nucleic Acids Research, 46(D1), D136–D139. doi:10.1093/nar/gkx1098. Gasteiger, E., UniProt Consortium. (2009). "The protein knowledgebase: UniProt and its applications." Bioinformatics, 25(11), 1626–1637. doi:10.1093/bioinformatics/btp183. References Bioinformatics Proteomics Swiss women scientists
Elisabeth Gasteiger
[ "Engineering", "Biology" ]
1,126
[ "Bioinformatics", "Biological engineering" ]
71,453,228
https://en.wikipedia.org/wiki/Deformation%20index
The deformation index is a parameter that specifies the mode of control under which time-varying deformation or loading processes occur in a solid. It is useful for evaluating the interaction of elastic stiffness with viscoelastic or fatigue behavior. If deformation is maintained constant while load is varied, the process is said to be deformation controlled. Similarly, if load is held constant while deformation is varied, the process is said to be load controlled. Between the extremes of deformation and load control, there is a spectrum of intermediate modes of control including energy control. For example, between two rubber compounds with the same viscoelastic behavior but different stiffnesses, which compound is preferred for a given application? In a strain controlled application, the lower stiffness rubber would operate at smaller stress and therefore produce less viscous heating. But in a stress controlled application, the higher stiffness rubber would operate at smaller strains, thereby producing less viscous heating. In an energy controlled application, the two compounds might give the same amount of viscous heating. The right selection for minimizing viscous heating therefore depends on the mode of control. Definition Futamura's deformation index can be defined as follows. Let $\phi$ be the parameter whose value is controlled (i.e. held constant), $E$ the Young's modulus of linear elasticity, $\varepsilon$ the strain, and $\sigma$ the stress. The controlled parameter is written as $\phi = E^{\,n/2}\,\varepsilon$, where $n$ is the deformation index. Particular choices of $n$ yield particular modes of control and determine the units of $\phi$. For $n = 0$, we get strain control: $\phi = \varepsilon$. For $n = 1$, we get energy control: $\phi = E^{1/2}\varepsilon$, which is constant exactly when the strain energy density $E\varepsilon^{2}/2$ is constant. For $n = 2$, we get stress control: $\phi = E\varepsilon = \sigma$. History The parameter was originally proposed by Shingo Futamura, who won the Melvin Mooney Distinguished Technology Award in recognition of this development. Futamura was concerned with predicting how changes in viscoelastic dissipation were affected by changes to compound stiffness. Later, he extended applicability of the approach to simplify finite element calculations of the coupling of thermal and mechanical behavior in a tire. William Mars adapted Futamura's concept for application in fatigue analysis. Analogy to polytropic process Given that the relation defining the deformation index may be written in an algebraic form similar to that of a polytropic process, it may be said that the deformation index is in a certain sense analogous to the polytropic index for a polytropic process. References Solid mechanics
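The compound-selection argument above can be made numerical. The sketch below assumes that dissipation per load cycle is proportional to the loss factor tan(delta) times the peak stored elastic energy density, and that the two compounds share the same loss factor; the moduli and amplitudes are illustrative values, not taken from the article.

```python
# Numerical sketch of the compound-selection argument under different control modes.
E1, E2 = 3.0e6, 6.0e6        # Young's moduli of the two compounds (Pa)
tan_delta = 0.1              # common loss factor (same viscoelastic behaviour assumed)

def stored_energy(E, mode, amplitude):
    """Peak elastic energy density (J/m^3) for the given control mode."""
    if mode == "strain":      # amplitude = strain
        return 0.5 * E * amplitude**2
    if mode == "stress":      # amplitude = stress (Pa)
        return amplitude**2 / (2.0 * E)
    if mode == "energy":      # amplitude = energy density itself (J/m^3)
        return amplitude
    raise ValueError(mode)

cases = [("strain", 0.2), ("stress", 1.0e6), ("energy", 5.0e4)]
for mode, amplitude in cases:
    q1 = tan_delta * stored_energy(E1, mode, amplitude)
    q2 = tan_delta * stored_energy(E2, mode, amplitude)
    better = "softer" if q1 < q2 else ("stiffer" if q2 < q1 else "either")
    print(f"{mode:>6} control: heating {q1:9.1f} vs {q2:9.1f} J/m^3 -> {better} compound runs cooler")
```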
Deformation index
[ "Physics" ]
454
[ "Solid mechanics", "Mechanics" ]
71,457,598
https://en.wikipedia.org/wiki/Karl%20Heinz%20Bennemann
Karl Heinz Bennemann (born July 31, 1932) is a German condensed matter physicist. He has contributed to advances in the understanding of traditional BCS and high Tc superconductors, the magnetic properties of alloys, the magnetic properties of low dimensional systems, the physicochemical properties of surfaces and of nanostructured materials, ultrafast phenomena, and non-linear optics, among others. His work was recognized through an Alfred P. Sloan fellowship in 1969. Career Bennemann completed his Diploma and Doctor Rerum Naturalium studies in physics at the University of Münster. The diploma degree was obtained in 1960 under the supervision of Ludwig Tewordt, who was part of the superconductivity group hosted at the University of Illinois at Urbana-Champaign (UIUC). His thesis work was on the theoretical study of the physical effects caused by point defects in copper. In 1962 he obtained the doctoral degree under the combined guidance of Adolf Kratzer, W. Franz and L. Tewordt. His doctoral work was on the effect of lattice defects on the polarization of the electron gas in solids (thesis titled "Allgemeine Methode zur Bestimmung der durch punktförmige Gitterfehler in Metallen hervorgerufenen Verzerrung des Gitters und Polarisation des Elektronengases"). Through a joint program between the University of Münster and UIUC, he carried out fundamental studies on conductivity in metals, which also earned him the PhD at the latter university, endorsed by James S. Koehler and Frederick Seitz. In the US, Bennemann worked with scientists in a solid-state physics group founded by Seitz in 1959. Academic career After obtaining his PhD degree, in 1962–1964, Bennemann worked as a postdoctoral scholar in John Bardeen's group, where he studied macroscopic quantum systems with a focus on quantum liquids and superconductivity. He developed a general method to study the electron redistribution around point defects in noble metals, and by using the t-matrix method he formulated a theory to calculate the electron distribution in metals. He also contributed to the understanding of the physical properties of point defects in covalent crystals, and extended the pseudopotential theory introduced by Phillips to successfully calculate the properties of diamond. In 1964–1965, he was appointed as an assistant researcher at the Institute for Mathematical Physics at the University of Karlsruhe and spent the summer of 1965 working in the Cavendish Laboratory, at the University of Cambridge, England. In Neville Mott's group, he studied superconductivity in ferromagnetic alloys. Later, he went back to the United States to work in the Institute for the Study of Metals at the University of Chicago, where he continued with studies of superconductivity in magnetic alloys. Bennemann was especially interested in the effect of paramagnetic impurities on the superconducting transition temperature. In 1967, he was appointed as associate professor at the Department of Physics and Astronomy at the University of Rochester, where he received tenure a year later. At that time he contributed to the understanding of the coexistence of superconductivity and magnetic ordering. Furthermore, he proposed an expression for the electron-phonon coupling constant in terms of measurable normal-state quantities and atomic properties, to help explain the superconducting transition temperature in d-band metals. In 1969, while at the University of Rochester, he was awarded an Alfred P. 
Sloan Foundation fellowship to pursue studies on the magnetic properties of alloys. At the end of 1969, he received offers of full professorships from several universities: the University of California, Los Angeles, Brown University, Georgetown University, McGill University in Montreal, Canada, and the Freie Universität Berlin. He decided to return to Germany and accepted a full professorship at the Institute for Theoretical Physics at the Freie Universität Berlin, in Dahlem, West Berlin. He arrived in a city where some of the most influential physics had been developed before the Second World War, and which had suffered great devastation during the conflict. The Freie Universität Berlin was founded in 1948, under especially difficult circumstances, in the American sector of the divided city, then subject to the Soviet blockade. At that time the Friedrich-Wilhelms-Universität, where Albert Einstein, Erwin Schrödinger, Max Planck, Werner Heisenberg and others had developed their seminal work, was split into the Humboldt Universität located in the east sector and the Freie Universität in the west. Today's Institute for Theoretical Physics was founded at the beginning of the 1970s when the university was reorganized and underwent extensive expansion. It took more than 20 years to fully organize the university and provide the infrastructure. Bennemann contributed to the creation of an international physics institute of excellence. In the decades that followed, he ran projects on many important problems in condensed matter physics. With the collaboration of international scientists and graduate students he contributed to the scientific environment at the Freie Universität, and to the development of science in other countries, including Argentina, Brazil and Mexico. Bennemann's impact and legacy through trainees Bennemann has always been interested in current physics problems. By offering problems at the frontiers of knowledge to graduate students, postdoctoral fellows and established collaborators, his group made important contributions to condensed matter physics and nanoscience. The intense intellectual activity led several of his trainees to follow careers in academia. Contributions made through the doctoral theses of graduate students who are now faculty members include: Karol Penson, now at the Sorbonne University in Paris, France, studied the spin-Peierls transition for one-dimensional classical and quantum chains. José Luis Morán-López, now at the Institute for Scientific and Technological Research in San Luis Potosí, Mexico, contributed to the understanding of the electronic structure and properties of binary alloys and developed a theory for magnetism of transition metals. David Tománek, now retired at Michigan State University, United States, was involved in the development of a theory for the structural and electronic properties of surfaces, including reconstruction and photoemission spectra. Sugata Mukherjee, who was at the Bose National Centre for Basic Science in Kolkata, India, until he passed away in 2020, performed theoretical studies of the atomic structure at the surface of transition metals and alloys. Peter Jensen, now retired from the Kläre-Bloch-Schule Berlin, Germany, worked out spin-1 Ising models with competing interactions. Gustavo Pastor, Universität Kassel, Germany, studied the electronic properties of metal clusters. Martín García, also at Universität Kassel, studied the bond character in divalent metal clusters. 
Joerg Schmalian, at the Karlsruhe Institute of Technology, Germany, studied strongly correlated high-temperature superconductors. Gunnar Baumgärtel, who is a patent attorney and senior partner at Maikowski-Ninnemann, Berlin, Germany, worked on the role of magnetic excitations in high-Tc superconductors. Finally, Harald Jeschke, now at Okayama University, Japan, made contributions to the understanding of optically created non-equilibrium in covalent solids. Other graduate students of Bennemann were or are now in the industrial or financial sectors. Günther Kerker (worked at Bayer, Switzerland, until he died in 2016) studied the strong-coupling superconductivity in transition-metal alloys. Roland Linke (worked at the Deutsche Telephonwerke, Berlin, Germany, until he passed away in 1986) contributed to the understanding of itinerant magnetism in transition metals. Ute Pustogowa (Hypo Vereinsbank-Unicredit, Germany) contributed to the understanding of the magneto-optic non-linear effects in transition metals. Sören Grabowski (Partner at EY Parthenon, Berlin, Germany; formerly also Munich, San Diego and Moscow) contributed to the understanding of the interdependence of spin fluctuations and high-Tc superconductivity. Matthias Langer (LK Test Solutions, München, Germany) worked on a theory to explain the elementary excitations in the normal state of high Tc superconductors. Thomas Luce (MicroVision, Nürnberg, Germany) studied problems in non-linear optics at surfaces and thin films. Roland Knorren (EMEA at Oracle, Hamburg, Germany) studied the ultrafast dynamics of non-equilibrium electrons in noble and transition metals. Ilya Grigorenko (CLS Group, New York, USA) studied ultrafast dynamics and optimal control of electrons in nanostructures. Roman Brinzanik (Kraftwerk Renewable Power Solutions GmbH, Berlin) performed a Monte Carlo study of magnetic nanostructures during growth. Achievements in collaboration with Habilitanden In some European countries, obtaining a university professorship, with the right to teach and to advise students, required, after the doctoral degree, tackling a research problem in physics and defending the results in an oral presentation. This is known as habilitation. Karl Bennemann was also very active in advising young scientists through the habilitation process: Arno Holz presented a study of a new phase diagram for the metal-insulator transition in n-type semiconductors. Pedro U. Schlottmann tackled the coexistence of spin-glass and ferromagnetic phases in alloys with two magnetic components and exchange interactions of opposite signs. Karol A. Penson carried out a study of the static and dynamic aspects of spin-lattice Peierls instabilities in quasi-one-dimensional systems. Peter Stampfli was involved with the analysis of the polarizability of small spherical metallic clusters. Wolfgang Hübner presented advances in the understanding of non-linear optics. Peter J. Jensen led a study of the magnetic properties of thin ferromagnetic films, and Martín E. García developed a theory for ultrafast phenomena in clusters and solids. Personal life He is the youngest of three children. His father was a businessman in Münster. Bennemann spent his childhood in a small village close to Münster. He married in 1960 and has three sons. Books The Physics of Liquid and Solid He, Ed. K.H. Bennemann and J.B. Ketterson (Wiley and Sons, 1976) The Physics of Liquid and Solid He, Ed. K.H. Bennemann and J.B. Ketterson (Wiley and Sons, 1978) Non-linear optics in Metals, K.H. 
Bennemann (Oxford University Press, 1999) The Physics of Superconductors, Volume 1, Conventional and High-Tc Superconductors, Ed. K.H. Bennemann and J.B. Ketterson (Springer Verlag, 2003) The Physics of Superconductors, Volume 2, Superconductivity in Nanostructures, High-Tc and Novel Superconductors, Organic Superconductors, Ed. K.H. Bennemann and J.B. Ketterson (Springer Verlag, 2004) Novel Superfluids, Volume 1, Ed. K.H. Bennemann and J.B. Ketterson (Oxford University Press, 2013) Novel Superfluids, Volume 2, Ed. K.H. Bennemann and J.B. Ketterson (Oxford University Press, 2015) References Condensed matter physicists German physicists University of Münster alumni University of Illinois Urbana-Champaign alumni 1932 births Living people
Karl Heinz Bennemann
[ "Physics", "Materials_science" ]
2,413
[ "Condensed matter physicists", "Condensed matter physics" ]
71,462,156
https://en.wikipedia.org/wiki/Protactinium%28IV%29%20bromide
Protactinium(IV) bromide is an inorganic compound. It is an actinide halide, composed of protactinium and bromine. It is radioactive and has the chemical formula PaBr4. Its appearance as brown crystals may be due to the brown color of bromine. Its crystal structure is tetragonal. Protactinium(IV) bromide sublimes in a vacuum at 400 °C. The protactinium(IV) halide closest in structure to protactinium(IV) bromide is protactinium(IV) chloride. Preparation Protactinium(IV) bromide can be prepared by reacting protactinium(V) bromide with hydrogen gas or aluminium. Properties Protactinium(IV) bromide reacts with antimony trioxide to form protactinium bromate. See also Protactinium(V) bromide References Bromides Actinide halides Protactinium compounds
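Assuming the usual reduction stoichiometry (the equations are not given explicitly in the text), the preparations from protactinium(V) bromide can be written as:

2 PaBr5 + H2 → 2 PaBr4 + 2 HBr
3 PaBr5 + Al → 3 PaBr4 + AlBr3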
Protactinium(IV) bromide
[ "Chemistry" ]
219
[ "Bromides", "Salts" ]
71,462,826
https://en.wikipedia.org/wiki/Indentation%20plastometry
Indentation plastometry is the idea of using an indentation-based procedure to obtain (bulk) mechanical properties (of metals) in the form of stress-strain relationships in the plastic regime (as opposed to hardness testing, which gives numbers that are only semi-quantitative indicators of the resistance to plastic deformation). Since indentation is a much easier and more convenient procedure than conventional tensile testing, with far greater potential for mapping of spatial variations, this is an attractive concept (provided that the outcome is at least approximately as reliable as those of standard uniaxial tests). Basic requirements Capturing of macroscopic (size-independent) properties brings in a requirement to deform a volume of material that is large enough to be representative of the bulk. This depends on the microstructure, but usually means that it must contain “many” grains and is typically of the order of hundreds of microns in linear dimensions. The indentation size effect, in which the measured hardness tends to increase as the deformed volume becomes small, is at least partly due to a failure to interrogate a representative volume. The indenter, which is normally spherical, therefore needs to have a radius in the approximate range of several hundred microns up to a mm or two. A further requirement concerns the plastic strains generated in the sample. The indentation response must be sensitive to the plasticity characteristics of the material over the strain range of interest, which normally extends up to at least several % and commonly up to several tens of %. The strains created in the sample must therefore also range up to values of this order. This typically requires that the “penetration ratio” (penetration depth over indenter radius) should be at least about 10%. Finally, depending on the hardness of the metal, this in turn requires that the facility should have a relatively high load capability – usually of the order of several kN. Experimental outcomes The simplest indentation procedures, which have been in use for many decades, involve the application of a pre-determined load (often from a dead weight), followed by measurement of the lateral size of the residual indent (or possibly its depth). However, many indentation procedures are now based on “instrumented” set-ups, in which the load is progressively ramped up and both load and penetration (displacement) are continuously monitored during indentation. A key experimental outcome is thus the load-displacement curve. Various types of equipment can be used to generate such curves. These include those designed to carry out so-called “nanoindentation” - for which both the load (down to the mN range) and the displacement (commonly sub-micron) are very small. However, as noted above, if the deformed volume is small, then it’s not possible to obtain “bulk” properties. Moreover, even with relatively large loads and displacements, some kind of “compliance correction” may be required, to separate the response of the sample from displacements associated with the loading system. The other main form of experimental outcome is the shape of the residual indent. As mentioned above, early types of hardness tester focused on this, in the form of (relatively crude) measurement of the “width” of the indent – commonly via simple optical microscopy. However, much richer information can be extracted by using a profilometer (optical or stylus) to obtain the full shape of the residual indent. 
With a spherical indenter (and a sample that is isotropic in the plane of the indented surface), the indent will exhibit radial symmetry and its shape can be captured in the form of a single profile (of depth against radial position). The details of this shape (for a given applied load) exhibit a high sensitivity to the stress-strain relationship of the sample. Also, it is easier to obtain than a load-displacement curve, partly because no measurements need to be made during loading. Finally, such profilometry has potential for the detection and characterization of sample anisotropy (whereas load-displacement curves carry no such information). Solution procedures Two main approaches have evolved for obtaining stress-strain relationships from experimental indentation outcomes (load-displacement curves or residual indent profiles). The simpler of the two involves direct “conversion” of the load-displacement curve. This is usually done by obtaining a series of “equivalent”, “effective” or “representative” values of the stress in the loaded part of the sample (from the applied load) and a corresponding set of values of the strain in the deformed region (from the displacement). The assumptions involved in carrying out such conversions are inevitably very crude, since (even for a spherical indenter) the fields of both stress and strain within the sample are highly complex and evolve throughout the process – the figure shows some typical plastic strain fields. Various empirical correction factors are commonly employed, with neural network “training” procedures sometimes being applied to sets of load-displacement data and corresponding stress-strain curves, to help evaluate them. It’s also common for loading to be periodically interrupted, and data from partial unloading procedures to be used in the conversion. However, unsurprisingly, universal conversions of this type (applied to samples with unknown stress-strain curves) tend to be unreliable and it is now widely accepted that the procedure cannot be used with any confidence. The other main approach is a more cumbersome one, although with much greater potential for obtaining reliable results. It involves iterative numerical (Finite element method – FEM) modelling of the indentation procedure. This is first done with a trial stress-strain relationship (in the form of an analytical expression – often termed a constitutive equation), followed by convergence on the best fit version (set of parameter values in the equation), giving optimal agreement between experimental and modelled outcomes (load-displacement plots or residual indent profiles). This procedure fully captures the complexity of the evolving stress and strain fields during indentation. While it is based on relatively intensive modelling computations, protocols have been developed in which the convergence is automated and rapid. Profilometry-based indentation plastometry (PIP) It has become clear that important advantages are offered by using the residual indent profile as the target outcome, rather than the load-displacement curve. These include easier measurement, greater sensitivity of the experimental outcome to the stress-strain relationship and potential for detection and characterisation of sample anisotropy – see above. The figure gives an indication of the sensitivity of the profile to the stress-strain curve of the material. 
The term PIP thus encompasses the following features: 1) Obtaining stress-strain curves characteristic of the bulk of a material (by using relatively large spherical indenters and relatively deep penetration), 2) Experimental measurement of the residual indent profile and 3) Iterative FEM simulation of the indentation test, to obtain the stress-strain curve (captured in a constitutive equation) that gives the best fit between modelled and measured profiles. For tractable and user-friendly application, an integrated facility is needed, in which the procedures of indentation, profilometry and convergence on the optimal stress-strain curve are all under automated control. References Materials testing
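The structure of the iterative inverse (FEM-based) fitting described above can be sketched in code. In the sketch below the finite-element simulation is represented by a placeholder function, a Voce hardening law is used as an example constitutive equation, and all parameter values are illustrative assumptions rather than values from any real test or software package.

```python
# Schematic of the inverse (iterative FEM) fitting loop used in indentation plastometry.
# `simulate_profile` is a stand-in for a real finite-element indentation model.
import numpy as np
from scipy.optimize import minimize

def voce_stress(plastic_strain, sigma_y, sigma_sat, eps0):
    """Voce-type constitutive law: flow stress (Pa) as a function of plastic strain."""
    return sigma_y + (sigma_sat - sigma_y) * (1.0 - np.exp(-plastic_strain / eps0))

def simulate_profile(params, r):
    """Placeholder for an FEM run: predicted residual indent depth profile (mm)
    at radial positions r for the given constitutive parameters."""
    sigma_y, sigma_sat, eps0 = params
    mean_flow = 0.5 * (sigma_y + sigma_sat)          # crude strength measure
    depth = 0.08 * (500.0e6 / mean_flow)             # softer material -> deeper indent
    width = 1.0 + eps0                               # slower hardening -> wider profile
    return -depth * np.exp(-(r / width) ** 2)

def misfit(params, r, measured):
    """Sum-of-squares difference between modelled and measured profiles."""
    return float(np.sum((simulate_profile(params, r) - measured) ** 2))

r = np.linspace(0.0, 2.0, 50)                        # radial position (mm)
measured = -0.07 * np.exp(-(r / 1.05) ** 2)          # profilometer data (synthetic here)

guess = np.array([300.0e6, 700.0e6, 0.05])           # sigma_y, sigma_sat, eps0
result = minimize(misfit, guess, args=(r, measured), method="Nelder-Mead")
sigma_y, sigma_sat, eps0 = result.x

print("Best-fit parameters:", result.x)
print("Flow stress at 10% plastic strain (MPa):",
      round(voce_stress(0.10, sigma_y, sigma_sat, eps0) / 1e6, 1))
```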
Indentation plastometry
[ "Materials_science", "Engineering" ]
1,478
[ "Materials testing", "Materials science" ]