id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
70,257,270 | https://en.wikipedia.org/wiki/Yury%20Mikhailovich%20Bunkov | Yury (or Yuriy or Yuri) Mikhailovich Bunkov (Юрий Михайлович Буньков, 29 August 1950 in Stavropol) is a Russian experimental physicist, specializing in condensed matter physics. He is known as one of the co-discoverers of the quantum spin liquid state.
Education and career
Bunkov, born into a family of geologists, graduated in 1968 from a special school for physics and mathematics in Moscow (School No. 2). In 1968 he also achieved first place in the Moscow Physics Olympiad and matriculated at the Moscow Institute of Physics and Technology (MIPT), which was headed by Piotr Kapitza.
According to Bunkov, the "parametric echo" should be called the "Bunkov echo".
In 1974 he matriculated at the Kapitza Institute for Physical Problems, where he received in 1979 his Candidate of Sciences degree (Ph.D.). A. S. Borovik-Romanov was Bunkov's thesis advisor. At the Kapitza Institute for Physical Problems, he was employed as a non-principal scientist from 1979 to 1985, a principal scientist from 1985 to 1986, and from 1986 to 1995 as a leading scientist. At the Kapitza Institute he constructed the Soviet Union's first nuclear demagnetization refrigerator.
In 1983 at the Kapitza Institute, quantum spin superfluidity was discovered by Bunkov, who was the team leader, with Vladimir Dmitriev and Yuri Mukharsky (who worked at the Kapitza Institute as students). Spin superfluidity manifested itself in NMR studies of helium-3-B (3He-B) as regions of coherent Larmor precession (regions known as HPDs, Homogeneously Precessing Domains), with inhomogeneities in the precession caused by spin supercurrents (based on magnetization), similar to the charge supercurrents of superconductivity and the mass supercurrents of superfluids or Bose-Einstein condensates. Spin superfluidity is also a Bose-Einstein condensate (BEC) of magnons. A theoretical explanation was given in the 1980s by the theorist Igor Akindinovich Fomin.
At the Kapitza Institute, he received his Russian Doctor of Sciences degree (habilitation) in 1985 with a thesis on NMR studies of superfluid helium-3. At Grenoble's Institut Néel (formerly called CRTBT), belonging to the Centre National de la Recherche Scientifique (CNRS), he was employed from 1995 to 2004 as a Directeur de Recherche and has been employed since 2004 as a Directeur de Recherche de 1re classe. Since 2008 he has also been a part-time professor at Kazan Federal University.
For many years Bunkov participated in the Soviet-Finnish project ROTA, in which the researchers discovered many different types of 3He vortices. He also made many visits (from 1989 to 1995) to Lancaster University, where he participated in NMR experiments at record-setting low temperatures for 3He.
At CRTBT his research group cooled 3He to about 100 μK and, at such low temperatures, found in 1996 an energy deficit after a 3He neutron capture reaction. The energy deficit "appeared to arise from vortex creation via the Kibble-Zurek cosmological mechanism, in analogy with cosmic-string creation in the early Universe." He became the leader of the project ULTIMA (Ultra Low Temperature Instrumentation for Measurements in Astrophysics), which has as its purpose the development of a dark-matter detector based on overcooled superfluid 3He.
In the 3He-B phase (which has a complex phase structure) he experimentally discovered analogues to cosmological and quantum field theoretical phenomena, such as cosmological strings (as vortices in the spin supercurrent) and Majorana quasiparticles. In the 1980s he and his colleagues detected Goldstone modes (as phonons in the spin-superfluidity HPDs analogous to the second sound phenomena in superfluids). In the 2000s he discovered Q-balls in superfluid 3He. (The concept of a Q-ball, a type of non-topological soliton, was originally introduced in quantum field theory.)
He collaborated extensively with the theorist Grigori Efimovich Volovik. In 2008, Bunkov and his Japanese colleagues discovered coherent precession in the helium-3-A phase embedded in uniaxially deformed anisotropic aerogels.
Bunkov is internationally recognized for his research on quantum fluids and solids, superfluid 3He nuclear magnetic resonance (NMR), and ultra-low temperature techniques and their application to cosmology and the search for dark matter. As of the end of 2018 his Hirsch index was 26.
Bunkov received in 1993 the State Prize of the Russian Federation "for the discovery of magnetic superfluidity and the Homogeneously Precessing Domain". In 2001 Bunkov was made Doctor "Honoris Causa" by Pavol Jozef Šafárik University in Košice, Slovakia. In 2008 he was awarded, jointly with Vladimir Dmitriev and Igor A. Fomin, the Fritz London Memorial Prize for their discovery and elucidation of "unique phenomena in superfluid 3He-B: macroscopic phase-coherent spin precession and the flow of spin supercurrent." Bunkov has been a full member of the Academia Europaea since 2010.
Selected publications
with A.S. Borovik-Romanov, V.V. Dmitriev, Yu.M. Mukharskiĭ: Long-lived induction signal in superfluid 3He-B, JETP Lett., vol. 40, no. 6, 1984, pp. 1033–1037 (translated by Dave Parsons) pdf
with V.V. Dmitriev, Yu.M. Mukharskiy: Twist oscillations of homogeneous precession domain in 3He-B, JETP Lett., vol. 43, 1986, pp. 168–171. (Goldstone mode)
with A. S. Borovik-Romanov: Spin supercurrent and magnetic relaxation in Helium-3, Harwood Academic Publ. 1990.
with V.V.Dmitriev, Yu.M. Mukharskiy, Low frequency oscillations of the homogeneously precessing domain in 3He-B, Physica B, vol. 178, 1992, pp. 196–201. (Goldstone mode)
Persistent signal; coherent NMR state trapped by orbital texture, J. Low Temp. Phys., vol. 138, 2005, pp. 753–758, (Q-Ball)
with G.E. Volovik: Magnon condensation into a Q-ball in 3He-B, Phys. Rev. Lett., vol. 98, 2007, p. 265302.
Spin Supercurrent, J. of Magnetism and Magnetic Materials (2007 preprint, published 2018). Arxiv 2007
with T. Sato, T. Kunimatsu, K. Izumina, A. Matsubara, M. Kubota, T. Mizusaki: Coherent precession of magnetization in the superfluid 3He A-phase, Phys. Rev. Lett., vol. 101, 2008, p. 055301.
with G. E. Volovik: Bose-Einstein Condensation of Magnons in Superfluid 3He, J. Low Temperature Physics, vol. 150, 2008, pp. 135–144.
with G.E. Volovik: Magnon BEC in superfluid 3He-A, JETP Lett., vol. 89, 2009, pp. 306–310.
with G. E. Volovik: Magnon BEC and spin superfluidity: a 3He primer, Arxiv 2009
Spin superfluidity and magnon Bose-Einstein condensation, Physics Uspekhi, August 2010, Online
with G. E. Volovik: Spin superfluidity and magnon BEC, in: Int. Ser. Monogr. Phys. 156, 2013, pp. 253–311, Arxiv
with Rasul Gazizuzin: Observation of Majorana Quasiparticles Surface States in Superfluid 3He-B by Heat Capacity Measurements, Arxiv 2016
with Vladimir Safonov: Magnon Condensation and Spin Superfluidity, Arxiv 2017
with A. Farhutdinov, A. Kuzmichev, T. R. Safin, P. M. Vetoshko, V. I. Belotelov, and V. I. Tagirov: The magnonic superfluid droplet at room temperature, arXiv:1911.03708. Arxiv 2019
Magnonic Superfluidity Versus Bose Condensation, Appl. Magn. Reson. vol. 51, 2020, pp. 1711–1721.
with A. N. Kuzmichev, T. R. Safin, P. M. Vetoshko, V. I. Belotelov, and M. S. Tagirov: Quantum paradigm of the foldover magnetic resonance, Scientific Reports, vol. 11, no. 1, 2021, pp. 1–8.
References
External links
Soviet physicists
20th-century Russian physicists
21st-century Russian physicists
Condensed matter physicists
Moscow Institute of Physics and Technology alumni
French National Centre for Scientific Research scientists
Academic staff of Kazan Federal University
Members of Academia Europaea
State Prize of the Russian Federation laureates
1950 births
Living people
People from Stavropol | Yury Mikhailovich Bunkov | [
"Physics",
"Materials_science"
] | 2,102 | [
"Condensed matter physicists",
"Condensed matter physics"
] |
59,478,233 | https://en.wikipedia.org/wiki/Perpendicular%20paramagnetic%20bond | A perpendicular paramagnetic bond is a type of chemical bond that does not exist under normal, atmospheric conditions. Such a phenomenon was first hypothesized through simulation to exist in the atmospheres of white dwarf stars whose magnetic fields, on the order of 105 teslas, could allow such interactions to exist. In a very strong magnetic field, excited electrons in molecules may be stabilized, causing these molecules to abandon their original orientations parallel to the magnetic field and instead lie perpendicular to it. Normally, at such intense temperatures as those near a white dwarf, more common molecular bonds cannot form and existing ones decompose.
References
Astrophysics
Chemical bonding
White dwarfs
Hypothetical processes
Exotic matter
Magnetism in astronomy | Perpendicular paramagnetic bond | [
"Physics",
"Chemistry",
"Materials_science",
"Astronomy"
] | 143 | [
"Astronomical sub-disciplines",
"Astronomy stubs",
"Astrophysics",
"Hypotheses in chemistry",
"Astrophysics stubs",
"Condensed matter physics",
"nan",
"Exotic matter",
"Magnetism in astronomy",
"Chemical bonding",
"Matter"
] |
67,362,875 | https://en.wikipedia.org/wiki/Darling%2058 | The Darling 58 is a genetically engineered American chestnut tree. The tree was created by American Chestnut Research & Restoration Program at the State University of New York College of Environmental Science and Forestry (SUNY ESF) in collaboration with The American Chestnut Foundation (TACF) to restore the American chestnut to the forests of North America. These Darling-58 trees are attacked by chestnut blight, but survive. Darling-58 trees survive to reach maturity, produce chestnuts, and multiply to restore the American chestnut tree to the forests of North America. An error resulted in use of an alternate cultivar, Darling 54 in some field tests of the Darling 58 cultivar of American Chestnut.
While The American Chestnut Foundation discontinued support of development of the Darling 58 cultivar in December 2023, in part due to the mistaken use of Darling 54 in field trials, the American Chestnut Research & Restoration Program, which originated the tree, continues its development.
Background
The chestnut blight was introduced in the late 19th century with the Japanese chestnut and decimated the once-widespread American chestnut tree. Native un-modified trees are killed from the ground up by the blight, and only the root system survives. The roots then continue to send up shoots that are once again attacked by the blight and die back before they reach maturity, repeating the cycle.
Mechanism
Chestnut blight damages trees by producing oxalic acid, which lowers the pH in the cambium and kills plant tissues. Darling 58 adds an oxalate oxidase (OxO) gene from wheat, driven by a CaMV 35S promoter. The promoter allows the OxO protein to be made throughout the plant, and the OxO protein allows the plant to break down the acid before too much damage is done. The same defense strategy is found not only in wheat, but also in strawberries, bananas, oats, barley, and other cereals. The resistant trait is passed down to progeny. The resistance does not stop the blight from completing its lifecycle.
Extensive testing done with the transgenic Darling 58 variant to assess its effects on other species showed that the survival, pollen use, and reproduction of bumble bees were not affected by oxalate oxidase at the typical concentrations found in the pollen of the American chestnut. Presence of the transgenic oxalate oxidase gene in the genome of the American chestnut has little effect on photosynthetic or respiratory physiology.
History
In 2013, researchers reported initial experiments to introduce wheat OxO into American chestnuts. Potted transgenic plants with two different promoters (35S, VspB) were created, and OxO levels were measured in the plant leaves. Infection experiments on cut leaves showed that lesion sizes could be reduced to around or below the level of the blight-resistant Chinese chestnut, suggesting that the potted plants may be resistant too.
In 2014, SUNY ESF reported that the "Darling4" transgenic event produced an intermediate level of resistance between American and Chinese chestnuts. The trait was also passed into progeny.
The Darling 58 (SX58) line was produced before 2016. A 2020 SUNY-ESF master's thesis shows that Darling 58 is the transgenic event that produces the highest amount of OxO.
In January 2020, the researchers submitted a deregulation petition for the Darling 58 variant, with a public comment period ending October 19, 2020.
In November 2022, the USDA began another public comment period for Darling 58's approval.
In 2022, SUNY-ESF scientists reported that a different promoter, win3.12 from the eastern cottonwood, allows the expression levels of OxO to remain low in basal conditions, but increase under wounding or infection. This modification is expected to be more metabolically efficient compared to the "always-on" CaMV promoter and thereby have greater transgene stability over successive generations compared with the Darling 58 variant. In laboratory bioassays, win3.12-OxO lines showed elevated disease tolerance similar to that exhibited by blight-resistant Chinese chestnut.
In December 2023, TACF announced that they were discontinuing development of the Darling 58 due to poor performance results. The SUNY ESF is continuing to seek federal approval to distribute seeds to the public without the support of TACF.
Darling 54
In December 2023, it was announced that there had been a mishap and any material known as "Darling 58" was actually "Darling 54". Darling 54 is a transgenic American chestnut tree also modified with the 35S:OxO construct. The difference between D58 and D54 is that D54 has the 35S:OxO construct inserted into a coding sequence within its genome. D58 was thought to have the 35S:OxO construct inserted into a non-coding region of the genome. An insertion in a coding sequence, or gene, could disrupt or alter gene expression and therefore protein function. The 35S:OxO construct is located within the Sal1 gene of the D54 genome. Sal1 is linked to drought stress and oxidative stress responses in other species.
References
Further reading
The USDA Should Let People Plant Blight-Resistant American Chestnut Trees
Castanea
Ecological restoration
Genetically modified organisms | Darling 58 | [
"Chemistry",
"Engineering",
"Biology"
] | 1,068 | [
"Genetic engineering",
"Ecological restoration",
"Genetically modified organisms",
"Environmental engineering"
] |
67,368,474 | https://en.wikipedia.org/wiki/Freshwater%20shoreline%20management | Freshwater Shoreline Management involves assessing and protecting lakes, rivers, and other freshwater shorelines from excessive development or other anthropogenic disturbances.
Shoreline management involves the long-term monitoring of watershed and shoreline revitalization projects. Freshwater shoreline management is frequently run by local conservation authorities through state, provincial, and federal lake partner programs. These programs have been used as a method of tracking shoreline change over time, determining areas of concern, and educating shoreline property owners.
History
The concept of freshwater shoreline management evolved from ideas developed for Integrated Coastal Zone Management (ICZM), which emerged from the 1992 United Nations Conference on Environment and Development. In Canada, a coastal zone management plan was completed by 1996 using the ICZM framework. Freshwater management programs then used the coastal zone management plan to create freshwater management plans addressing the environmental concerns that had been growing in Canadian society since the 1960s.
Anthropogenic effects on watersheds increased globally over the 20th century, with nutrient loading of phosphorus, nitrogen, and sulfur causing eutrophication and acidification of water bodies. These effects are primarily caused by the human development of shorelines, agricultural runoff of chemicals and fertilizers, human litter, and sewage/wastewater. To manage these impacts, local and regional organizations began conducting watershed monitoring programs to detect long-term environmental changes and establish their causes.
Usage
Anthropogenic effects on lakes, such as freshwater usage, shoreline development, recreational use, agriculture, and retaining walls, can negatively impact aquatic and terrestrial organisms that rely on the shoreline of a lake for habitat. These anthropogenic effects can also cause eutrophication and acidification of lakes, which impacts organisms within the water itself and can also harm human health. They can have the added effect of decreasing property values and tourism in lake communities, as some beaches become unsafe to swim in because of pollutants.
Since it may be modified to match the needs of the watershed and be applied to the current land use nearby, freshwater shoreline management is useful for community-based monitoring. The Lake Ontario Shoreline Management Plan is an example of how communities can use freshwater shoreline management. Programs such as this were developed by conservation authorities and citizens alongside regional and provincial governments to perform shoreline mapping and assessment, public consultation/education, and implement long-term monitoring of the watershed and shoreline.
The Muskoka Watershed Council has also performed shoreline assessments using the Love Your Lakes Program to survey the shoreline of Lake Bella in the Muskoka District. It showed that the natural shoreline decreased from 96% in 2002 to 80% in 2007, impacting overall water quality by allowing increased nutrient runoff and negatively impacting biodiversity by decreasing habitat for fish, insects, and birds. This program has increased local education on lake health and stewardship of revitalizing shorelines.
Climate Change Impacts
Climate change has been found to affect freshwater shoreline communities. Effects such as increased warming of the water bodies, increased storm runoff, the quickening of yearly ice melt and limited amounts of winter ice, and increased wave height during storms, which increases the potential of erosion, were all found to potentially affect lake shorelines.
Shoreline management has been identified as a method to mitigate climate change impacts such as potential flooding and nutrient loading from more frequent and higher-intensity storms. This mitigation occurs as shorelines naturalize, which can increase filtration and decrease sediment and nutrient runoff.
Example: Love Your Lakes Program
The Love Your Lakes Program is an example of a Shoreline Assessment and Revitalization program used in Canada. It was developed under the Canadian Ministry of Environment and Climate Change (MECC) Lake Partner Program as a joint effort between Watersheds Canada, MECC, and the Canadian Wildlife Federation.
The program allows lake owners and organizations to apply to have their shorelines assessed and discusses methods that individuals and the community can use to revitalize their shorelines. Naturalization, the use of native plant species along the shoreline to create a buffer, is often recommended because it limits erosion from wake action and can decrease nutrient runoff from lawn maintenance or farming activities. To date, almost 200 lakes have been assessed by the program. This has led to increased community awareness and shoreline naturalization, which has transformed up to 300 shoreline properties.
References
Coastal geography
Coastal engineering
Environmental impact in the United States
Environmental impact in Canada | Freshwater shoreline management | [
"Engineering"
] | 876 | [
"Coastal engineering",
"Civil engineering"
] |
67,373,736 | https://en.wikipedia.org/wiki/Green%20transport%20hierarchy | The green transport hierarchy (Canada), street user hierarchy (US), sustainable transport hierarchy (Wales), urban transport hierarchy or road user hierarchy (Australia, UK) is a hierarchy of modes of passenger transport prioritising green transport. It is a concept used in transport reform groups worldwide and in policy design. In 2020, the UK government consulted about adding to the Highway Code a road user hierarchy prioritising pedestrians. It is a key characteristic of Australian transport planning.
History
The Green Transportation Hierarchy: A Guide for Personal & Public Decision-Making by Chris Bradshaw was first published in September 1994 and revised in June 2004. As part of a pedestrian advocacy group in Canada, he proposed the hierarchy ranking passenger transport based on environmental emissions. The revised ranking listed, in order: walking, cycling, public transport, car sharing, and finally the private car.
It was first prepared for Ottawalk and the Transportation Working Committee of the Ottawa-Carleton Round-table on the Environment in January 1992, only stating 'Walk, Cycle, Bus, Truck, Car'.
Factors
Mode
Energy source
Trip length
Trip speed
Vehicle size
Passenger load factor
Trip segment
Trip purpose
Traveller
Adoption
The author directed the hierarchy at both individual lifestyle choices and public authorities who should officially direct their resources – funds, moral suasion, and formal sanctions – based on the factors.
Bradshaw described the hierarchy as logical, but the effect of applying it as seemingly radical.
The model rejects the concept of the balanced transportation system, where users are assumed to be free to choose from amongst many different yet ‘equally valid’ modes. This is because modes ranked low in the hierarchy (such as the private car) are seen as generally having a high impact on the choices ranked higher (walking, cycling, public transport).
See also
Alternatives to car use
Bicycle-friendly
Bill Boaks campaigned for pedestrian priority everywhere
Car-free movement
Complete streets
Cycling advocacy
Cyclability
Health and environmental impact of transport
Health impact of light rail systems
Induced demand
Jaywalking
Peak car
Planetizen
Priority (right of way)
Reclaim the Streets
Road hierarchy
Road traffic safety
Settlement hierarchy
Street hierarchy
Street reclamation
Sustainable transport
Traffic bottleneck
Traffic code
Traffic conflict
Traffic flow
Transportation demand management
Walkability
Walking audit
References
External links
Original 1992 paper
Climate change policy
Rules of the road
Sustainable transport
1992 documents
1994 books
1992 in transport
Hierarchy | Green transport hierarchy | [
"Physics"
] | 458 | [
"Physical systems",
"Transport",
"Sustainable transport"
] |
47,556,310 | https://en.wikipedia.org/wiki/Sacituzumab%20govitecan | Sacituzumab govitecan, sold under the brand name Trodelvy by Gilead Sciences, is a Trop-2-directed antibody and topoisomerase inhibitor drug conjugate used for the treatment of metastatic triple-negative breast cancer and metastatic urothelial cancer.
The most common side effects include nausea, neutropenia, diarrhea, fatigue, anemia, vomiting, alopecia (hair loss), constipation, decreased appetite, rash and abdominal pain. Sacituzumab govitecan has a boxed warning about the risk of severe neutropenia (abnormally low levels of white blood cells) and severe diarrhea. Sacituzumab govitecan may cause harm to a developing fetus or newborn baby.
Sacituzumab govitecan was approved for medical use in the United States in April 2020, and in the European Union in November 2021. The U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA) consider it to be a first-in-class medication.
Medical uses
Sacituzumab govitecan is indicated for the treatment of adults with metastatic triple-negative breast cancer who received at least two prior therapies for metastatic disease; people with unresectable locally advanced or metastatic triple-negative breast cancer (mTNBC) who have received two or more prior systemic therapies, at least one of them for metastatic disease; and for people with locally advanced or metastatic urothelial cancer (mUC) who previously received a platinum-containing chemotherapy and either a programmed death receptor-1 (PD-1) or a programmed death-ligand 1 (PD-L1) inhibitor.
It is also indicated for the treatment of people with unresectable locally advanced or metastatic hormone receptor (HR)-positive, human epidermal growth factor receptor 2 (HER2)-negative (IHC 0, IHC 1+ or IHC 2+/ISH-) breast cancer who have received endocrine-based therapy and at least two additional systemic therapies in the metastatic setting.
Mechanism
Sacituzumab govitecan is a conjugate of a humanized anti-Trop-2 monoclonal antibody linked to SN-38, the active metabolite of irinotecan, with each antibody carrying on average 7.6 molecules of SN-38. Linkage to an antibody allows the drug to specifically target cells expressing Trop-2.
Sacituzumab govitecan is a Trop-2-directed antibody and topoisomerase inhibitor drug conjugate, meaning that the drug targets the Trop-2 receptor that helps the cancer grow, divide and spread, and is linked to topoisomerase inhibitor, which is a chemical compound that is toxic to cancer cells. Approximately two of every ten breast cancer diagnoses worldwide are triple-negative. Triple-negative breast cancer is a type of breast cancer that tests negative for estrogen receptors, progesterone receptors and human epidermal growth factor receptor 2 (HER2) protein. Therefore, triple-negative breast cancer does not respond to hormonal therapy medicines or medicines that target HER2.
Development
Immunomedics announced in 2013, that it had received fast track designation from the US Food and Drug Administration (FDA) for the compound as a potential treatment for non-small cell lung cancer, small cell lung cancer, and metastatic triple-negative breast cancer. Orphan drug status was granted for small cell lung cancer and pancreatic cancer. In February 2016, Immunomedics announced that sacituzumab govitecan had received an FDA breakthrough therapy designation (a classification designed to expedite the development and review of drugs that are intended, alone or in combination with one or more other drugs, to treat a serious or life-threatening disease or condition) for the treatment of people with triple-negative breast cancer who have failed at least two other prior therapies for metastatic disease.
History
Sacituzumab govitecan was added to the proposed International nonproprietary name (INN) list in 2015, and to the recommended list in 2016.
Sacituzumab govitecan-hziy was approved for medical use in the United States in April 2020.
Sacituzumab govitecan-hziy was approved based on the results of IMMU-132-01, a multicenter, single-arm clinical trial (NCT01631552) of 108 participants with metastatic triple-negative breast cancer who had received at least two prior treatments for metastatic disease. Of the 108 participants in the study, 107 were female and 1 was male. Participants received sacituzumab govitecan-hziy at a dose of 10 milligrams per kilogram of body weight intravenously on days one and eight of every 21-day cycle. Treatment with sacituzumab govitecan-hziy was continued until disease progression or unacceptable toxicity. Tumor imaging was obtained every eight weeks. The efficacy of sacituzumab govitecan-hziy was based on the overall response rate (ORR), which reflects the percentage of participants whose tumors shrank by a defined amount. The ORR was 33.3% (95% confidence interval [CI], 24.6 to 43.1). Within the 33.3% of study participants who achieved a response, 2.8% of participants experienced complete responses. The median time to response was 2.0 months (range, 1.6 to 13.5), the median duration of response was 7.7 months (95% CI, 4.9 to 10.8), the median progression-free survival was 5.5 months, and the median overall survival was 13.0 months. Of the participants who achieved an objective response to sacituzumab govitecan-hziy, 55.6% maintained their response for six or more months and 16.7% for twelve or more months.
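For readers unfamiliar with how such trial statistics are reported, the sketch below shows one conventional way to compute an overall response rate and an exact (Clopper-Pearson) 95% confidence interval from raw counts. The counts (36 responders out of 108 participants) are inferred from the reported 33.3% figure and are an illustrative assumption, not data taken from the trial publication.

```python
# Hedged sketch: overall response rate (ORR) with an exact Clopper-Pearson
# 95% confidence interval. The counts below are inferred from the reported
# 33.3% ORR (36/108) and serve only as an illustration.
from scipy.stats import beta

def clopper_pearson(successes: int, n: int, alpha: float = 0.05):
    """Exact two-sided binomial confidence interval for a proportion."""
    lower = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lower, upper

responders, participants = 36, 108
orr = responders / participants
lo, hi = clopper_pearson(responders, participants)
print(f"ORR = {orr:.1%}, exact 95% CI = ({lo:.1%}, {hi:.1%})")
# The result lands close to the reported 33.3% (24.6 to 43.1).
```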
Sacituzumab govitecan-hziy was granted accelerated approval along with priority review, breakthrough therapy, and fast track designations. The U.S. Food and Drug Administration (FDA) granted approval of Trodelvy to Immunomedics, Inc.
In April 2021, the FDA granted regular approval to sacituzumab govitecan for people with unresectable locally advanced or metastatic triple-negative breast cancer (mTNBC) who have received two or more prior systemic therapies, at least one of them for metastatic disease. Efficacy and safety were evaluated in a multicenter, open-label, randomized trial (ASCENT; NCT02574455) conducted in 529 participants with unresectable locally advanced or mTNBC who had relapsed after at least two prior chemotherapies, one of which could be in the neoadjuvant or adjuvant setting, if progression occurred within twelve months. Participants were randomized (1:1) to receive sacituzumab govitecan, 10 mg/kg as an intravenous infusion, on days 1 and 8 of a 21-day cycle (n=267), or physician's choice of single-agent chemotherapy (n=262).
In April 2021, the FDA granted accelerated approval to sacituzumab govitecan for people with locally advanced or metastatic urothelial cancer (mUC) who previously received a platinum-containing chemotherapy and either a programmed death receptor-1 (PD-1) or a programmed death-ligand 1 (PD-L1) inhibitor. Efficacy and safety were evaluated in TROPHY (IMMU-132-06; NCT03547973), a single-arm, multicenter trial that enrolled 112 participants with locally advanced or mUC who received prior treatment with a platinum-containing chemotherapy and either a PD-1 or PD-L1 inhibitor.
In February 2023, the FDA approved sacituzumab govitecan for people with unresectable locally advanced or metastatic hormone receptor (HR)-positive, human epidermal growth factor receptor 2 (HER2)-negative (IHC 0, IHC 1+ or IHC 2+/ISH-) breast cancer who have received endocrine-based therapy and at least two additional systemic therapies in the metastatic setting.
Society and culture
Legal status
On 14 October 2021, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Trodelvy, intended for the treatment of unresectable or metastatic triple-negative breast cancer. The applicant for this medicinal product is Gilead Sciences Ireland UC. Sacituzumab govitecan was approved for medical use in the European Union in November 2021.
References
Further reading
External links
Antibody-drug conjugates
Cancer treatments
Monoclonal antibodies for tumors
Orphan drugs | Sacituzumab govitecan | [
"Biology"
] | 1,896 | [
"Antibody-drug conjugates"
] |
47,559,813 | https://en.wikipedia.org/wiki/True%20muonium | In particle physics, true muonium is a theoretically predicted exotic atom representing a bound state of an muon and an antimuon (μ+μ−). The existence of true muonium is well established theoretically within the Standard Model. Its properties within the Standard Model are determined by quantum electrodynamics, and may be modified by physics beyond the Standard Model.
True muonium is yet to be observed experimentally, though it may have been produced in experiments involving collisions of electron and positron beams. The ortho-state of true muonium (i.e. the state with parallel alignment of the muon and antimuon spins) is expected to be relatively long-lived (with a lifetime of ), and to decay predominantly to an e+e− pair, which makes it possible for the LHCb experiment at CERN to observe it with the dataset collected by 2025.
Experimental research
There are several experimental projects searching for true muonium. One of them is the μμ-tron experiment (Mumutron) planned at the Budker Institute of Nuclear Physics of the Siberian Branch of the Russian Academy of Sciences (INP SB RAS), which has been under development since 2017. The experiment involves the creation of a special low-energy electron–positron collider, which will make it possible to observe the production of true muonium in collisions of electron and positron beams with an intersection angle of 75° and energies of 408 MeV. Thus, the invariant mass of the colliding particles will be equal to twice the mass of the muon (105.658 MeV). To register the exotic atom (in the decay channel into an electron-positron pair), it is planned to create a specialized detector. Apart from the actual detection of true muonium, it is planned to isolate its various states and measure their lifetimes.
In addition to experiments in the field of elementary particle physics, the collider created within the framework of the experiment is also of interest from the point of view of developing accelerator technologies for the Super Charm-Tau factory planned at the INP SB RAS. The experiment was proposed in 2017 by A. I. Milshtein and colleagues, researchers at the INP SB RAS.
See also
Muonium
Positronium
Onium
References
External links
Low-energy electron-positron collider to search and study (μ+μ−) bound state. A.V. Bogomyagkov, V.P. Druzhinin, E.B. Levichev, A.I. Milstein, S.V. Sinyatkin. BINP, Novosibirsk.
Hypothetical composite particles
Onia | True muonium | [
"Physics"
] | 565 | [
"Particle physics stubs",
"Particle physics"
] |
47,562,544 | https://en.wikipedia.org/wiki/Norman%20Hackerman%20Young%20Author%20Award | The Norman Hackerman Young Author Award was established in 1982 by The Electrochemical Society (ECS). The award is presented annually for the best paper published in the Journal of the Electrochemical Society for a topic in the field of electrochemical science and technology by a young author or authors. (This award incorporates the Turner Book Prize.)
Recipients of the award are presented with a scroll, cash prize (divided equally among eligible authors), and travel assistance to enable winner(s) to attend the ECS meeting where the award is presented.
This award is named after the chemist Norman Hackerman.
Notable recipients
As listed by ECS:
1994 Hubert A. Gasteiger
1988 Jennifer A. Bardwell
1987 Joachim Maier
1975 Larry R. Faulkner
1971 M. Stanley Whittingham
1966 John Newman
1960 A. C. Makrides
1953 Jack Halpern
1948 Michael Streicher
1941 Edward Adler
1938 Nathaniel B. Nichols
1929 William C. Gardiner
See also
List of chemistry awards
References
Chemistry awards
Electrochemistry | Norman Hackerman Young Author Award | [
"Chemistry",
"Technology"
] | 207 | [
"Chemistry awards",
"Electrochemistry",
"Science award stubs",
"Electrochemistry stubs",
"Science and technology awards",
"Physical chemistry stubs"
] |
47,562,547 | https://en.wikipedia.org/wiki/MicroLED | MicroLED, also known as micro-LED, mLED or μLED is an emerging flat-panel display technology consisting of arrays of microscopic LEDs forming the individual pixel elements. Inorganic semiconductor microLED (μLED) technology was first invented in 2000 by the research group of Hongxing Jiang and Jingyu Lin of Texas Tech University (TTU) while they were at Kansas State University (KSU). The first high-resolution and video-capable InGaN microLED microdisplay in VGA format was realized in 2009 by Jiang, Lin and their colleagues at Texas Tech University and III-N Technology, Inc. via active driving of a microLED array by a complementary metal-oxide semiconductor (CMOS) IC. Compared to widespread LCD technology, microLED displays offer better contrast, response times, and energy efficiency.
MicroLED offers greatly reduced energy requirements when compared to conventional LCD displays while also offering pixel-level light control and a high contrast ratio. The inorganic nature of microLEDs gives them a longer lifetime advantage over OLEDs and allows them to display brighter images with minimal risk of screen burn-in. The sub-nanosecond response time of μLED has a huge advantage over other display technologies for 3D/AR/VR displays since these devices need more frames per second and fast response times to minimise ghosting. MicroLEDs are capable of high speed modulation, and have been proposed for chip-to-chip interconnect applications.
Sony, Samsung, and Konka have started to sell microLED video walls. LG, Tianma, PlayNitride, TCL/CSoT, Jasper Display, Jade Bird Display, Plessey Semiconductors Ltd, and Ostendo Technologies, Inc. have demonstrated prototypes. Sony already sells microLED displays as a replacement for conventional cinema screens. BOE, Epistar, and Leyard have plans for microLED mass production. MicroLED can be made flexible and transparent, just like OLEDs.
According to a report by Market Research Future, the MicroLED display market will reach around USD 24.3 billion by 2027. Custom Market Insights reported that the MicroLED display market is expected to reach around USD 182.7 Billion by 2032.
Research
Following the first report of electrical injection microLEDs based on indium gallium nitride (InGaN) semiconductors in 2000 by the research group of Hongxing Jiang and Jingyu Lin, several groups have quickly engaged in pursuing this concept. Many related potential applications have been identified. Various on-chip connection schemes of microLED pixel arrays have been employed by AC LED Lighting, LLC (a company funded by Jiang and Lin) allowing for the development of single-chip high voltage DC/AC-LEDs to address the compatibility issue between the high voltage electrical infrastructure and low voltage operation nature of LEDs and high brightness self-emissive microdisplays.
The microLED array has also been explored as a light source for optogenetic applications and for visible light communications.
Early InGaN based microLED arrays and microdisplays were primarily passively driven. The first actively driven video-capable self-emissive InGaN microLED microdisplay in VGA format (640 × 480 pixels, each 12 μm in size with 15 μm between them) possessing low voltage requirements was patented and realized in 2009 by Jiang, Lin and their colleagues at Texas Tech and III-N Technology, Inc. (a company funded by Jiang and Lin) via integration between a microLED array and a CMOS integrated circuit (IC), and the work was also published in the following years.
The first microLED products were demonstrated by Sony in 2012. These displays, however, were very expensive.
There are several methods to manufacture microLED displays. The flip-chip method manufactures the LED on a conventional sapphire substrate, while the transistor array and solder bumps are deposited on silicon wafers using conventional manufacturing and metallization processes. Mass transfer is used to pick and place several thousand LEDs from one wafer to another at the same time, and the LEDs are bonded to the silicon substrate using reflow ovens. The flip-chip method is used for micro displays used on virtual reality headsets. Another microLED manufacturing method involves bonding the LEDs to an IC layer on a silicon substrate and then removing the LED bonding material using conventional semiconductor manufacturing techniques. The current bottleneck in the manufacturing process is the need to individually test every LED and replace faulty ones using an excimer laser lift-off apparatus, which uses a laser to weaken the bond between the LED and its substrate. Faulty LED replacement must be performed using high accuracy pick-and-place machines and the test and repair process takes several hours. The mass transfer process alone can take 18 days, for a smartphone screen with a glass substrate. Special LED manufacturing techniques can be used to increase yield and reduce the amount of faulty LEDs that need to be replaced. Each LED can be as small as 5μm across. LED epitaxy techniques need to be improved to increase LED yields.
Excimer lasers are used for several steps: laser lift-off to separate LEDs from their sapphire substrate and to remove faulty LEDs, for manufacturing the LTPS-TFT backplane, and for laser cutting of the finished LEDs. Special mass transfer techniques using elastomer stamps are also being researched. Other companies are exploring the possibility of packaging 3 LEDs: one red, one green and one blue LED into a single package to reduce mass transfer costs.
Quantum dots are being researched as a way to shrink the size of microLED pixels, while other companies are exploring the use of phosphors and quantum dots to eliminate the need for different-colored LEDs. Sensors can be embedded in microLED displays.
Over 130 companies are involved in microLED research and development. MicroLED light panels are also being made, and are an alternative to conventional OLED and LED light panels.
Digital pulse-width modulation is well-suited to driving microLED displays. MicroLEDs experience a color shift as the current magnitude changes. Analog schemes change current to change brightness. With a digital pulse, only one current value is used for the on state. Thus, there is no color shift that occurs as brightness changes.
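As an illustration of the point above, the following minimal sketch (not vendor code; all numbers are invented) encodes brightness as binary-weighted on-time slots while the on-state drive current stays constant, which is what avoids the current-dependent color shift.

```python
# Minimal sketch of digital PWM (binary-weighted bit planes) for one microLED
# pixel: brightness is encoded in on-time, not current, so the drive current
# in the on state is constant and the emission wavelength does not shift.
# All numbers below (bit depth, frame period, on-current) are illustrative.

BIT_DEPTH = 8            # 8-bit grayscale -> 256 brightness levels
FRAME_PERIOD_US = 8192   # one refresh period, microseconds (assumption)
ON_CURRENT_MA = 2.0      # single fixed on-state current (assumption)

def bit_plane_schedule(gray_level: int):
    """Return (bit, on_time_us, current_mA) slots for one frame.

    Each bit of the grayscale value maps to a time slot whose length is
    proportional to the bit's weight; the current never changes."""
    slot_unit = FRAME_PERIOD_US / (2**BIT_DEPTH - 1)
    schedule = []
    for bit in range(BIT_DEPTH):
        weight = 2**bit
        on = (gray_level >> bit) & 1
        schedule.append((bit, weight * slot_unit if on else 0.0, ON_CURRENT_MA))
    return schedule

if __name__ == "__main__":
    for bit, on_time, current in bit_plane_schedule(gray_level=180):
        print(f"bit {bit}: on for {on_time:7.1f} us at {current} mA")
```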
Current microLED display offerings by Samsung and Sony consist of "cabinets" that can be tiled to create a large display of any size, with the display's resolution increasing with size. They also contain mechanisms to protect the display against water and dust. Each cabinet is diagonally with a resolution of .
Commercialization
MicroLEDs have already demonstrated performance advantages over LCD and OLED displays, including higher brightness, lower latency, higher contrast ratio, greater color saturation, intrinsic self-illumination, better efficiency and longer lifetime. Compared with OLED displays and LCDs, microLED displays stand out for their combination of high performance, durability, and energy efficiency. Ultrahigh brightness is particularly relevant for applications in augmented-reality displays that compete with the Sun’s brightness in outdoor environments.
Glo and Jasper Display Corporation demonstrated the world's first RGB microLED microdisplay, measuring diagonally, at SID Display Week 2017. Glo transferred their microLEDs to the Jasper Display backplane.
Sony launched a "Crystal LED Display" in 2012 with resolution, as a demonstration product. Sony announced its CLEDIS (Crystal LED Integrated Structure) brand which used surface mounted LEDs for large display production. , Sony offers CLEDIS in , and displays. On 12 September 2019, Sony announced Crystal LED availability to consumers ranging from 1080p to 16K displays.
Samsung demonstrated a microLED display called The Wall at CES 2018. In July 2018, Samsung announced plans on bringing a 4K microLED TV to consumer market in 2019. At CES 2019, Samsung demonstrated a 4K microLED display and 6K microLED display. On June 12 at InfoComm 2019, Samsung announced the global launch of The Wall Luxury microLED display configurable from in 2K to in 8K. On October 4, 2019, Samsung announced that The Wall Luxury microLED display shipments had begun.
In March 2018, Bloomberg reported Apple to have about 300 engineers devoted to in-house development of microLED screens. At IFA 2018 in August, LG Display demonstrated a microLED display.
At SID's Display Week 2019 in May, Tianma and PlayNitride demonstrated their co-developed microLED display with over 60% transparency. China Star Optoelectronics Technology (CSoT) demonstrated a transparent microLED display with around 45% transparency, also co-developed with PlayNitride. Plessey Semiconductors Ltd demonstrated a monolithic monochrome blue GaN-on-silicon wafer bonded to a Jasper Display CMOS backplane active-matrix microLED display with an 8μm pixel pitch.
At SID's Display Week 2019 in May, Jade Bird Display demonstrated their 720p and 1080p microLED microdisplays with 5 μm and 2.5 μm pitch respectively, achieving luminance in the millions of candelas per square metre. In 2021, Jade Bird Display and Vuzix entered a joint manufacturing agreement to make microLED-based projectors for smart glasses and augmented reality glasses.
At Touch Taiwan 2019 on September 4, 2019, AU Optronics demonstrated a microLED display and indicated that microLED was 12 years from mass commercialization. At IFA 2019 on September 13, 2019, TCL Corporation demonstrated their Cinema Wall featuring a 4K microLED display with maximum brightness of 1,500 cd/m² and contrast ratio produced by their subsidiary China Star Optoelectronics Technology (CSoT).
As of 2024, Samsung has already launched microLED display products including The Wall. Samsung’s microLED display technology transfers micrometer-scale LEDs into LED modules, resulting in what resembles wall tiles composed of mass-transferred clusters of almost microscopic lights.
At CES 2024, Samsung also debuted its Transparent MicroLED display.
At CES 2024, LG also debuted its microLED display, LG MAGNIT.
In terms of microLED microdisplays, Jade Bird Display launched its 0.13" series of MicroLED displays, which have an active area of 0.13" (3.3 mm) in diagonal and a resolution of 640×480, for AR and VR display products.
Apple reportedly invested billions of dollars in development of microLED displays in the years leading up to 2024, intending to transition its products to the technology beginning with the Apple Watch Ultra, before ultimately abandoning the effort after deciding it was unviable. However, the company is reportedly still "eyeing microLED for other projects down the road".
See also
OLED
AMOLED
Mini LED
List of flat panel display manufacturers
References
External links
First actively driven video-capable high-resolution microLED microdisplay in VGA format
MicroLED review (2013)
Putting microLED technology on display (2024)
Crystal LED - Sony
The Wall
LG MAGNIT MicroLED
"The Long View With John Doerr", John Doerr of KPC&B describes the microLED concept, starts around the 5 minute mark.
LED screens are significantly different from microLED
Display technology
Light-emitting diodes | MicroLED | [
"Engineering"
] | 2,304 | [
"Electronic engineering",
"Display technology"
] |
74,525,897 | https://en.wikipedia.org/wiki/Cyaarside | Cyaarside, also called cyarside, is the As≡C− anion. Featuring a triple bond between arsenic and carbon, it is the arsenic analogue of cyanide and cyaphide.
Preparation
An actinide cyaarside complex can be prepared by C−O bond cleavage of the arsaethynolate anion, the arsenic analogue of cyanate and phosphaethynolate. Reaction of a uranium complex with one molar equivalent of an arsaethynolate salt in the presence of 2.2.2-cryptand results in the formation of a dinuclear, oxo-bridged uranium complex featuring a C≡As ligand.
See also
arsaalkyne (As≡CR)
References
Anions
Arsenic compounds | Cyaarside | [
"Physics",
"Chemistry"
] | 154 | [
"Ions",
"Matter",
"Anions"
] |
74,531,061 | https://en.wikipedia.org/wiki/Potential%20renal%20acid%20load | Potential renal acid load (PRAL) is a measure of the acid that the body produces after ingesting a food. This is different from pH, which is the acidity of a food before being consumed. PRAL is a different acidity measure than the food ash measurement.
Some acidic foods actually have a negative PRAL measurement, meaning they reduce acidity in the stomach.
A low PRAL diet (not to be confused with an alkaline diet) can lower acidity in the stomach, which can be helpful for people suffering from GERD or acid reflux. However, it does not lower the pH of blood and therefore cannot treat osteoporosis or other conditions.
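For orientation, PRAL values are commonly estimated from a food's nutrient composition. The sketch below uses the coefficients usually attributed to Remer and Manz (per 100 g of food); both the coefficients and the example nutrient values should be treated as illustrative assumptions rather than authoritative data.

```python
# Hedged sketch: estimating PRAL from nutrient content per 100 g of food,
# using the commonly cited Remer & Manz coefficients. The example nutrient
# values are invented for illustration only.

def pral_meq(protein_g: float, phosphorus_mg: float, potassium_mg: float,
             magnesium_mg: float, calcium_mg: float) -> float:
    """PRAL in mEq; negative values indicate a base (alkaline) load."""
    return (0.49 * protein_g
            + 0.037 * phosphorus_mg
            - 0.021 * potassium_mg
            - 0.026 * magnesium_mg
            - 0.013 * calcium_mg)

# Invented example: a potassium-rich food ends up with a negative PRAL.
print(round(pral_meq(protein_g=1.0, phosphorus_mg=25,
                     potassium_mg=350, magnesium_mg=25, calcium_mg=10), 1))
```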
References
Medical scales
Metabolism
Digestive system | Potential renal acid load | [
"Chemistry",
"Biology"
] | 148 | [
"Digestive system",
"Organ systems",
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
60,899,832 | https://en.wikipedia.org/wiki/Secondary%20electrospray%20ionization | Secondary electro-spray ionization (SESI) is an ambient ionization technique for the analysis of trace concentrations of vapors, where a nano-electrospray produces charging agents that collide with the analyte molecules directly in gas-phase. In the subsequent reaction, the charge is transferred and vapors get ionized, most molecules get protonated (in positive mode) and deprotonated (in negative mode). SESI works in combination with mass spectrometry or ion-mobility spectrometry.
History
The fact that trace concentrations of gases in contact with an electrospray plume were efficiently ionized was first observed by Fenn and colleagues when they noted that tiny concentrations of plasticizers produced intense peaks in their mass spectra. However, it was not until 2000 that this problem was reframed as a solution, when Hill and coworkers used an electrospray to ionize molecules in the gas phase, and named the technique Secondary Electrospray Ionization. In 2007, the almost simultaneous works of Zenobi and Pablo Sinues applied SESI to breath analysis for the first time, marking the beginning of a fruitful field of research. With sensitivities in the low pptv range (10⁻¹²), SESI has been used in other applications, where the detection of low volatility vapors is important.
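To put a "low pptv" detection limit in perspective, the short sketch below converts a 10⁻¹² mole fraction into an absolute number density, assuming an ideal gas at roughly room temperature and atmospheric pressure; these conditions are illustrative assumptions.

```python
# Back-of-the-envelope sketch: what a 1 pptv (10^-12 mole fraction) detection
# limit corresponds to in absolute number density, assuming an ideal gas at
# roughly room temperature and atmospheric pressure (illustrative assumption).
K_B = 1.380649e-23   # Boltzmann constant, J/K
P = 101_325.0        # pressure, Pa
T = 298.15           # temperature, K

n_total_per_cm3 = P / (K_B * T) / 1e6          # molecules per cm^3
n_analyte_per_cm3 = 1e-12 * n_total_per_cm3    # at 1 pptv

print(f"total gas density  ~ {n_total_per_cm3:.2e} cm^-3")
print(f"1 pptv of analyte  ~ {n_analyte_per_cm3:.2e} cm^-3")
# ~2.5e19 cm^-3 total, so 1 pptv still means ~2.5e7 analyte molecules per cm^3.
```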
Detecting low volatility species in the gas phase is important because larger molecules tend to have higher biological significance. Low volatility species have been overlooked because it is technically difficult to detect them, as they are in very low concentration, and they tend to condensate in the inner piping of instruments. However, as this problem is solved, and new instruments are able to handle larger and more specific molecules, the ability to perform on-line, real time analysis of molecules naturally released in the air, even at minute concentrations, is attracting attention to this ionization technique.
Principle of operation
In the early days of SESI, two ionization mechanisms were under debate: the droplet-vapor interaction model postulates that vapors are adsorbed in the electrospray ionization (ESI) droplets, and then re-emitted as the droplet shrinks, just as regular liquid-phase analytes are produced in electrospray ionization; the ion-vapor interaction model, on the other hand, postulates that molecules and ions or small clusters collide, and the charge is transferred in this collision. Currently available commercial SESI sources operate at high temperature so as to better handle low volatility species. In this regime, nanodroplets from the electrospray evaporate very quickly to form ion clusters in equilibrium. This results in ion-vapor reactions dominating the majority of the ionization region. As charging ions originate from nano-droplets, and no high-energy ions are involved at any point of the ionization process nor in the creation of ionizing agents, fragmentation in SESI is remarkably low, and the resulting spectra are very clean. This allows for a very high dynamic range, where low intensity peaks are not affected by more abundant species.
Some related techniques are laser ablation electrospray ionization, proton-transfer-reaction mass spectrometry and selected-ion flow-tube mass spectrometry.
Applications
The main feature of SESI is that it can detect minuscule concentrations of low volatility species in real time, with molecular masses as high as 700 Da, falling in the realm of metabolomics. These molecules are naturally released by living organisms, and are commonly detected as odors, which means that they can be analyzed non-invasively. SESI, combined with high resolution mass spectrometry, provides time-resolved, biologically relevant information on living systems, where the system does not need to be interfered with. This makes it possible to seamlessly capture the time evolution of their metabolism and their response to controlled stimuli.
SESI has been widely used for breath gas analysis for biomarker discovery, and in vivo pharmacokinetic studies:
Biomarker discovery
Bacterial infection
The identification of bacteria by their volatile organic compound fingerprint has been widely reported. SESI-MS has proven to be a robust technique for the identification of bacteria from cell cultures and of infections in vivo from breath samples, after the development of libraries of vapor profiles. Other studies include in vivo differentiation between the critical pathogens Staphylococcus aureus and Pseudomonas aeruginosa, and differential detection between antibiotic-resistant S. aureus and its non-resistant strains. Bacterial infection detection from other fluids such as saliva has also been reported.
Respiratory diseases
Many chronic respiratory diseases lack an appropriate method of monitoring and of differentiation among disease stages. SESI-MS has been used to diagnose and distinguish exacerbations from breath samples in chronic obstructive pulmonary disease. Metabolic profiling of breath samples has accurately differentiated healthy individuals from idiopathic pulmonary fibrosis or obstructive sleep apnea patients.
Cancer
SESI-MS is being studied as a non-invasive detection system for cancer biomarkers in breath. A preliminary study reported differentiation of patients suffering from breast neoplasia.
Skin
Volatiles released from the skin can be detected by sampling the ambient gas surrounding it, providing a fast method for detecting metabolic changes in fatty acids composition patterns.
Pharmacokinetics
To study pharmacokinetics, a robust technique is needed because of the complex nature of the samples' matrix, be it plasma, urine, or breath. Recent studies show that secondary electrospray ionization (SESI) is a powerful technique to monitor drug kinetics via breath analysis. Because breath is naturally produced, data points can be readily collected, greatly increasing the number of points on the pharmacokinetic curve. In animal studies, this SESI-based approach can reduce animal sacrifice while yielding pharmacokinetic curves with unmatched time resolution. In humans, SESI-MS non-invasive analysis of breath can help study the kinetics of drugs at a personalized level. Monitoring exogenously introduced species allows their specific metabolic pathways to be tracked, which reduces the risk of picking up confounding factors.
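As an illustration of what densely sampled breath data make possible, the sketch below fits a simple one-compartment absorption-elimination (Bateman-type) curve to a synthetic time series. The model form, rate constants, and noise level are illustrative assumptions, not SESI results.

```python
# Hedged sketch: fitting a simple one-compartment absorption/elimination
# (Bateman-type) curve to a densely sampled, synthetic "breath signal".
# Model, rate constants and noise level are illustrative assumptions only.
import numpy as np
from scipy.optimize import curve_fit

def bateman(t, a, ka, ke):
    """Signal proportional to (exp(-ke*t) - exp(-ka*t)), with ka > ke."""
    return a * (np.exp(-ke * t) - np.exp(-ka * t))

rng = np.random.default_rng(0)
t = np.linspace(0, 12, 200)                       # hours; dense sampling
true_curve = bateman(t, a=100.0, ka=1.5, ke=0.25)
signal = true_curve + rng.normal(0, 2.0, t.size)  # synthetic noisy data

(a, ka, ke), _ = curve_fit(bateman, t, signal, p0=[80, 1.0, 0.2])
print(f"fitted ka = {ka:.2f} 1/h, ke = {ke:.2f} 1/h, peak signal ~ {signal.max():.0f}")
```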
Time-resolved metabolic analysis
Introducing known stimuli, such as specific metabolites, isotopically labeled compounds, or other sources of stress, triggers metabolic changes which can be easily monitored with SESI-MS. Examples of this include volatile compound profiling of cell cultures, and metabolic studies that trace plant or human metabolic pathways.
Other applications
Other applications developed with SESI-MS include:
Detection of illicit drugs;
Detection of explosives;
Food quality control monitoring.
References
Mass spectrometry
Ion source
External links
Deep Breath Initiative
Fossiliontech
Sinueslab Breath Research
Breath tests
Mathematical and theoretical biology
Mathematical modeling | Secondary electrospray ionization | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,384 | [
"Mathematical modeling",
"Spectrum (physical sciences)",
"Mathematical and theoretical biology",
"Instrumental analysis",
"Applied mathematics",
"Mass",
"Ion source",
"Mass spectrometry",
"Matter"
] |
60,902,291 | https://en.wikipedia.org/wiki/Protein%20detection | Protein detection is used for clinical diagnosis, treatment and biological research. Protein detection evaluates the concentration and amount of different proteins in a particular specimen. There are different methods and techniques to detect protein in different organisms. Protein detection has demonstrated important implications for clinical diagnosis, treatment and biological research. Protein detection technique has been utilized to discover protein in different category food, such as soybean (bean), walnut (nut), and beef (meat). Protein detection method for different type food vary on the basis of property of food for bean, nut and meat. Protein detection has different application in different field.
Protein detection in soybeans, walnuts, and beef
Purpose for protein detection in food
Food allergies have become a common disease. Clinically, they present with a range of signs, from mild symptoms such as itching in the mouth and swelling of the lips to critical anaphylactic reactions with fatal consequences. According to statistics, about 2% of adults and 8% of children in industrialized countries experience food hypersensitivity. To reduce potentially life-threatening reactions, strictly avoiding consumption of allergenic foods is the only valid therapy. Therefore, adequate declaration of potentially allergenic ingredients in food products is crucial and indispensable, and this can be monitored through protein detection.
Rationale for protein detection in soybeans
Soybeans are consumed in processed foods all over the world, such as soybean milk, tofu, meat alternatives, and brewed soybean products, because of their high nutrient content and ease of processing. Microorganisms are used in the brewing (fermentation) process for brewed soybean products such as miso, soy sauce, natto and tempeh, and allergenicity persists in these products, which are popular and traditional in Asian countries. Both the number of patients with soybean allergy and the range of uses for soybeans have grown in recent years.
Previous method for protein detection in soybeans
Over the last 30 years, a broad range of methods and techniques has been tested for detecting soybean protein, most of them easily transferable to a laboratory environment and originally designed within molecular biology. The enzyme-linked immunosorbent assay (ELISA), which offers high sensitivity and specificity, is a reliable method for detecting soybean proteins: it uses an antibody that recognizes a target protein, in this case a vacuolar protein with a molecular mass of 34 kDa. ELISA has shown adequate repeatability and reproducibility in laboratory assessments, and several studies have used it to quantify soybean protein. However, problems with reproducibility, cross-reactivity and repeatability make the measurements unreliable in processed foods, and these methods cannot detect soybean protein remaining in brewed soybean products.
Current method for protein detection in soybeans
Compared with the previous method, the current extraction technique incorporates a heating step to detect soybean protein in brewed products. Since heating deactivates microbial proteolytic enzymes, the technique can be used to detect soybean protein in brewed soybean products. The heating extraction technique is as follows. To obtain good dispersibility of the specimen in the extraction buffer before heating, 19 mL of extraction buffer is mixed with five glass beads of 5 mm diameter and 1 g of food homogenate. The mixture is extracted by heating in a water bath at temperatures of 25, 40, 60, 80 or 100 °C for 5, 15 or 60 min, with vortexing every 5 minutes. Food extracts produced by both the previous and the current technique are centrifuged for 20 minutes at 3,000 × g, and the supernatant is passed through filter paper. The filtrate is collected and used immediately for analysis as the food specimen extract. Calibration standard solutions are also prepared for detecting soybean proteins by ELISA. A 300 mg soybean powder specimen is mixed with 20 mL of a solution containing 0.5 M NaCl, 0.5% SDS, 20 mM Tris-HCl (pH 7.5), and 2% 2-ME, and the mixture is shaken at room temperature for 16 hours for extraction. The extract is centrifuged for 30 minutes at 20,000 × g, and the supernatant is passed through a 0.8-μm microfilter. The protein content of this initial extract is determined with a 2-D Quant Kit. The initial extract is then diluted to 50 ng/mL in a buffer containing 0.1% SDS, 0.1% 2-ME, 0.1 M PBS (pH 7.4), 0.1% BSA, and 0.1% Tween 20, and stored at 4 °C to serve as the calibration standard solution for ELISA.
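A minimal sketch of the dilution arithmetic behind the calibration step just described; the measured stock concentration below is a made-up placeholder, and the serial-dilution scheme is an illustrative assumption rather than part of the published protocol.

```python
import math

# Dilute the measured soybean protein stock down to the 50 ng/mL
# working calibration standard described above.
stock_mg_per_ml = 9.2            # hypothetical 2-D Quant Kit result (mg/mL)
target_ng_per_ml = 50.0          # working calibration standard

stock_ng_per_ml = stock_mg_per_ml * 1e6
total_dilution = stock_ng_per_ml / target_ng_per_ml
print(f"overall dilution factor: 1:{total_dilution:,.0f}")

# One way to realize this in the lab is a serial dilution in the
# SDS/2-ME/PBS/BSA/Tween 20 buffer: repeated 1:10 steps plus one
# final adjustment step.
tenfold_steps = int(math.floor(math.log10(total_dilution)))
final_step = total_dilution / 10 ** tenfold_steps
print(f"{tenfold_steps} x (1:10) steps, then a final 1:{final_step:.2f} step")
```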
Conclusion for current protein detection method in soybeans.
The detection limit of the ELISA is 1 μg/g, but with conventional extraction it cannot measure soybean proteins in brewed soybean products, because the proteins are degraded by microbial proteolytic enzymes present in those products. These enzymes therefore hinder the detection of soybean protein in brewed soybean products. The current extraction technique controls this degradation: microbial proteolytic enzymes can generally be inhibited by heating, pH, or protease inhibitors. Different heating temperatures and extraction times were examined to determine the conditions that best suppress the enzymes, and 80 °C for 15 minutes proved optimal. The heating temperature and time for the current extraction technique are therefore set to 80 °C and 15 minutes.
The current extraction technique suppresses the degradation of soybean proteins by microbial proteolytic enzymes and can detect soybean protein in most brewed soybean products. Combined with heating, it is a useful and sensitive tool for detecting soybean protein in processed foods and brewed soybean products, and it is suitable for quantifying soybean protein in processed foods without interference from microbial proteolytic enzymes. The proposed extraction and ELISA technique can be used to monitor soybean-ingredient labeling in a reliable manner.
Rationale for protein detection in walnuts
English walnuts (Juglans regia) and black walnuts (Juglans nigra) are the two main types of walnuts on the market worldwide. Walnuts are a valued ingredient because of their favorable health attributes, sensory properties and consumer appeal. Shelled walnuts are widely used as ingredients in foods such as salads, ice creams, breads and meat alternatives. Walnut oil is a good source of mono- and polyunsaturated fatty acids and tocopherols and is used as a food ingredient, particularly in salad dressings. Walnut hull extract is used as a dietary supplement and a seasoning in the food industry. In addition, ground walnut shells are used industrially as extenders, carriers, fillers and abrasives, for example in jet cleaners. Tree nuts are among the most common allergenic foods worldwide, and allergic reactions to them can be severe and life-threatening. Individuals with walnut allergy can suffer fatal or near-fatal reactions from the unintended ingestion of walnuts or other tree nuts, or from food contaminated with walnut ingredients. The only effective way to prevent walnut allergic reactions is to avoid walnuts in the diet, so appropriate labeling of processed foods containing walnut ingredients is critical to protect walnut-allergic consumers. Undeclared walnut residues can arise in several circumstances, for example when equipment is shared between walnut-containing and other formulations, or when ingredients contain undeclared walnuts. The enzyme-linked immunosorbent assay (ELISA) can detect walnut residues with high sensitivity and specificity, which is important because walnut-allergic individuals can react to low (milligram) amounts of walnut. Other techniques can also be used to detect walnut residues, such as the polymerase chain reaction (PCR) and an ELISA based on polyclonal antisera raised against a particular 2S albumin walnut protein.
Current method for protein detection in walnuts
The sandwich-type walnut ELISA is the current method used to detect protein in walnuts. The sandwich-type walnut ELISA can be applied as a critical analytical technique by food manufacturers and regulatory agencies for hygiene validation and the assessment of allergen control strategies.
Immunogen preparation
A mixture of several brands of English walnuts is used to produce the immunogen. The mixed walnuts are washed six times with deionized distilled water and air-dried. A portion of the walnuts is dry-roasted for 10 minutes at 270 °F. The roasted or raw walnuts are chopped, frozen, and ground to a fine particle size in a blender. The ground roasted and ground raw walnuts are defatted and filtered, and the powdered raw or roasted walnuts are then air-dried thoroughly. Both the defatted powdered raw walnuts and the defatted powdered roasted walnuts can be used as immunogens. Protein concentrations of the defatted powdered immunogens, determined by the Kjeldahl method, are 46.4% for raw defatted walnut and 34.9% for roasted defatted walnut.
Polyclonal antibody production and titer determination
Polyclonal antibodies are generated in 1 sheep, 1 goat, and 3 New Zealand white rabbits for each immunogen. The initial subcutaneous injections are given to the 10 animals (3 rabbits, 1 sheep, and 1 goat per immunogen) at multiple sites with the defatted powdered immunogen and Freund's Complete Adjuvant. Titer values of the collected antisera are evaluated by a noncompetitive ELISA method with walnut protein from extracts of the corresponding raw or roasted immunogen.
Cross-reactivity study and ELISA method
A variety of tree nuts, seeds, legumes, fruits and food ingredients are assessed for cross-reactivity in the walnut ELISA assay. The modified sandwich ELISA is used to detect walnut residues, with sheep anti-roasted-walnut and rabbit anti-roasted-walnut antisera serving as the capture and detector antibodies, respectively.
Conclusion for current protein detection method in walnuts
Walnut residues can be detected with a quantitation limit of 1 ppm in a variety of foods such as ice cream, muffins, cookies and chocolate. The walnut ELISA can be used to detect walnut residues introduced into other foods via shared equipment and to evaluate sanitation procedures aimed at removing walnut residues from shared equipment in the food industry.
Rationale for protein detection in beef
Animal feedingstuffs containing processed animal protein (PAP) contaminated with prions have been reported to cause BSE infection in cattle. The use of PAP as feed material is currently prohibited for all farmed animals, with the exception of fish meal. In addition, enterohemorrhagic Escherichia coli O157:H7 is an important pathogen transmitted through the consumption of undercooked or raw beef.
Method for protein detection in beef
For processed animal protein, a specific polymerase chain reaction (PCR)-based procedure, run in parallel with a microscopic method, is used to detect PAP in feedingstuffs. The detection limit of the PCR has been evaluated as 0.05% for beef, 0.1% for pork and 0.2% for poultry meat and bone meal, while the microscopic method identified 66.13% of doubtful feedingstuff samples. Combining the results of the microscopic and PCR methods, it has been concluded that molecular biology methods can serve as a supplementary method for PAP detection.
For undercooked or raw beef, sensitive and rapid detection techniques for E. coli O157:H7 are important in the meat industry to ensure a safe beef supply. Three different techniques can be used on raw ground beef to detect E. coli O157:H7: the VIDAS ultraperformance E. coli test (ECPT UP), a noncommercial real-time (RT) PCR method, and the U.S. Department of Agriculture, Food Safety and Inspection Service (USDA-FSIS) reference method. Individual 25 g raw beef samples and 375 g raw beef composites can be examined to establish optimal enrichment times and testing efficacy. Six hours of enrichment is sufficient for both the VIDAS ECPT UP and RT-PCR methods with 25 g samples of each type of raw ground beef, but 24 hours of enrichment is required for 375 g samples. Both methods give results comparable to those of the USDA-FSIS reference method after 18 to 24 hours of enrichment. These methods can detect low levels of E. coli O157:H7 in 25 g samples of various types of raw ground beef, as well as in composite raw ground beef samples of up to 375 g.
Implication from protein detection
Protein detection in cells from the human rectal mucous membrane can indicate colorectal disease such as colon tumours and inflammatory bowel disease. In astrobiology, protein detection based on antibody microarrays can be used to search for signatures of life, for example organic and biochemical compounds elsewhere in the solar system. Protein detection can also be used to monitor soybean protein labeling of processed foods and thereby protect consumers in a reliable way; labeling supported by protein detection has proven to be the most important safeguard, and detailed labeling of soybean ingredients in processed foods is required to protect the consumer.
References
External links
Fermentation improves nutritional value of beans
Research Collaboratory for Structural Bioinformatics (see also Molecule of the Month, presenting short accounts on selected proteins from the PDB) | Protein detection | [
"Chemistry",
"Biology"
] | 3,012 | [
"Biochemistry methods",
"Protein methods",
"Protein biochemistry"
] |
54,513,154 | https://en.wikipedia.org/wiki/Gilles%20Holst | Gilles Holst (20 March 1886 – 11 October 1968) was a Dutch physicist, known worldwide for his invention of the low-pressure sodium lamp in 1932.
Early life
His father was a manager of a shipyard. In 1904 he went to ETH Zurich to study mechanical engineering, switching to mathematics and physics after a year.
Career
He worked with Balthasar van der Pol, known for the Van der Pol oscillator, and Frans Michel Penning, known for Penning ionization and the Penning mixture. In 1908 he became a geprüfter Fachlehrer, or qualified teacher.
In 1909 he became an assistant to Heike Kamerlingh Onnes at Leiden University, where it is believed that he was the first to witness the phenomenon of superconductivity. Most importantly, he later became the science director of the Philips Physics Laboratory in Eindhoven. In 1926 he became a member of the Royal Netherlands Academy of Arts and Sciences.
The Gilles Holst Award was first awarded in 1939.
Personal life
He died in the Netherlands at the age of 82.
References
External links
Holst Centre
1886 births
1968 deaths
20th-century Dutch physicists
ETH Zurich alumni
Academic staff of Leiden University
Members of the Royal Netherlands Academy of Arts and Sciences
Superconductivity | Gilles Holst | [
"Physics",
"Materials_science",
"Engineering"
] | 269 | [
"Physical quantities",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
54,514,043 | https://en.wikipedia.org/wiki/List%20of%20smallest%20exoplanets | Below is a list of the smallest exoplanets so far discovered, in terms of physical size, ordered by radius.
List
The sizes are listed in units of Earth radii (R⊕). All planets listed are smaller than Earth and Venus, with radii of up to 0.7 Earth radii. The NASA Exoplanet Archive is used as the main data source.
Excluded objects
Kepler-37e is listed with a radius of in the Exoplanet Archive based on KOI data, but the existence of this planet is doubtful, and assuming its existence, a 2023 study found a mass of , inconsistent with such a small radius.
KOI-6705.01, listed as a potential very small planet in the KOI dataset, was shown to be a false positive in 2016.
Candidate planets
Below is a list of candidate planets below this size threshold. These planets have yet to be confirmed.
See also
List of largest exoplanets
List of exoplanet extremes
Lists of exoplanets
References
Lists of exoplanets
E | List of smallest exoplanets | [
"Physics",
"Mathematics"
] | 212 | [
"Smallest things",
"Quantity",
"Physical quantities",
"Size"
] |
51,652,568 | https://en.wikipedia.org/wiki/Quadruple%20glazing | Quadruple glazing (quadruple-pane insulating glazing) is a type of insulated glazing comprising four glass panes, commonly equipped with low emissivity coating and insulating gases in the cavities between the glass panes. Quadruple glazing is a subset of multipane (multilayer) glazing systems. Multipane glazing with up to six panes is commercially available.
Multipane glazing improves thermal comfort (by reducing downdraft convection currents adjacent to the windowpane), and it can reduce greenhouse gas emissions by minimising heating and cooling demand. Quadruple glazing may be required to achieve desired energy efficiency levels in Arctic regions, or to allow for higher glazing ratios in curtain walling without increasing winter heat loss. Quadruple glazing allows building glazing elements to be designed without modulated external sun-shading, given that the low thermal transmittance of having four or more glazing layers enables solar gain to be adequately managed directly by the window glazing itself. In Nordic countries, some existing buildings with triple glazing are being upgraded to glazing with four or more layers.
Features
With quadruple glazing, a center-of-panel U-value (Ug) of 0.33 W/(m2K) [R-value 17] is readily achievable. With six-pane glazing, a Ug value as low as 0.24 W/(m2K) [R-value 24] was reported. This brings several advantages, such as:
Energy efficient buildings without modulated sun shading: The desired overall window thermal transmittance value of lower than about 0.4 W/(m2K) is possible without having to depend on modulated external shading. A study by Svendsen et al. showed that at such low window U-values, glazing with moderate solar gain performs comparably to glazing of comparable U-value with variable external shading and high solar gain. This is so because with improved overall U-values, a building's heating demand diminishes, to the point that wintertime solar heat gain alone may be enough to heat the building.
Pronounced seasonal dependence of the solar gain: Due to incidence-angle-dependent Fresnel reflections, the optical characteristics of multipane glazing also notably vary seasonally. As the sun's average elevation varies throughout the year, the effective solar gain tends to be meaningfully less in the summer. The effect is also visible to an extent to the naked eye.
Comfort for occupants: When compared to traditional double-pane or triple-pane windows with mechanical or structural shading arrangements, multipane glazing enables easier viewing between indoor and outdoor environments. A low U-value maintains inside glass temperatures at a more uniform level throughout the year. During the winter, downwards convection currents (downdrafts) are very small, thereby enabling people seated near such a multipane window to feel as comfortable adjacent to the window as they would feel if they were seated adjacent to a solid wall. However, occlusion or shading might still be wanted for purposes of privacy.
Nearly zero heating building: In 1995, it was predicted that with a glazing U-value of 0.3 W/(m2K) a zero-heating building could be attained. It has also been shown that the heating demand might be decreased to nearly zero for glazed buildings with system U-values as low as 0.3 W/(m2K). Theoretically, in the summer, the remaining cooling demand could be satisfied by photovoltaic generation alone, with the greatest need for cooling nearly coinciding with the strongest sunlight incident on solar panels. However, in practice, temporal lags between cooling demand and the output from solar panels could occur due to factors such as ambient humidity and the need for dehumidification, as well as the thermal inertia of the building and its contents.
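As a quick check on the center-of-panel figures quoted above (Ug of 0.33 W/(m2K) corresponding to roughly R-17 and 0.24 W/(m2K) to roughly R-24 in imperial units), the following minimal sketch converts between SI U-values and imperial R-values and estimates a center-of-panel U-value from a simple series-resistance model; the individual film, cavity and pane resistances are illustrative assumptions, not measured data for any particular product.

```python
# 1 m2K/W (RSI) corresponds to about 5.678 hr*ft2*F/BTU (imperial R).
RSI_TO_IMPERIAL = 5.678

def u_to_imperial_r(u_si: float) -> float:
    """Imperial R-value corresponding to a U-value in W/(m2K)."""
    return (1.0 / u_si) * RSI_TO_IMPERIAL

def center_of_panel_u(surface_rsi, cavity_rsis, pane_rsis) -> float:
    """Series-resistance estimate: sum film, cavity and pane resistances."""
    total_rsi = sum(surface_rsi) + sum(cavity_rsis) + sum(pane_rsis)
    return 1.0 / total_rsi

print(round(u_to_imperial_r(0.33), 1))   # ~17.2 -> "R-value 17"
print(round(u_to_imperial_r(0.24), 1))   # ~23.7 -> "R-value 24"

# Illustrative quadruple glazing: interior/exterior air films, three
# low-emissivity, gas-filled cavities, and four thin glass panes.
u_est = center_of_panel_u(
    surface_rsi=[0.13, 0.04],            # interior and exterior surface films
    cavity_rsis=[0.95, 0.95, 0.95],      # assumed RSI per coated, gas-filled cavity
    pane_rsis=[0.004] * 4,               # the glass itself adds very little
)
print(round(u_est, 2))                   # ~0.33 W/(m2K), in the range quoted above
```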
Engineering
Multipane glazing is often designed with thinner intermediate glass panes in order to save weight. To prevent intermediate panes from thermal stress cracking it is sometimes required to use heat-strengthened glass. With more than three glass panes, special care must be taken of the spacer and sealant temperatures as intermediate glass panes in contact with these glazing elements can readily exceed design temperature limits of respective materials due to solar radiation (irradiance) heating.
Solar irradiance heating of intermediate glass panes increases substantially with an increased number of glass panes. Multipane glazing must be carefully designed to account for the expansion of the insulating gases that are placed between the glass layers, because such gaseous expansion becomes an increasingly important consideration as the number of glass panes is increased. Special breather vents, as well as small vents communicating between the layer spaces, can be incorporated in order to manage this glass-bulging effect. Finite element analysis is often used to calculate appropriate glass sheets' strengths. Calculating static equilibrium with thin glass panes used in multipane glazing may involve nonlinear plate mechanics.
Performance
Double-pane windows have been the industry standard for decades. They represent a vast improvement over single-pane windows but the potential for even greater energy savings with more highly insulating windows has been elusive. Recent price reductions in the thin glass used in both smartphones and flat-screen TVs, as well as in the krypton gas used in halogen lights, however, have made it possible to build lighter, high-efficiency quad-pane windows at a lower cost. Researchers from the National Renewable Energy Laboratory evaluated two configurations of Alpen High Performance (an American manufacturer) quad-pane windows at an office building at the Denver Federal Center. Both configurations have the same thickness and a comparable weight as a standard commercial double-pane window—one model uses two layers of film suspended between two panes of standard glass, the other replaces the film with two panes of ultra-thin glass. Researchers found that on average, quad-pane windows saved 24% heating and cooling energy compared with a high-performing double-pane window. For new construction and window replacements, the quad-pane windows have payback between one and six years, depending on climate zone and utility rates.
See also
Passive house
Curtain wall (architecture)
Window
Passive solar building design
History of passive solar building design
References
External links
quadruple glazing seasonal energy transmittance video
6-pane glazing destructive test video
Reflex - Q-Air multipane glazing
EN 1279:2018 Glass in building — Insulating glass unit
EN 16612:2019 Glass in building — Determination of the lateral load resistance of glass panes by calculation
Glass
Building materials
Energy efficiency
Heat transfer
Thermal protection
Building insulation materials
Low-energy building | Quadruple glazing | [
"Physics",
"Chemistry",
"Engineering"
] | 1,421 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Glass",
"Building engineering",
"Unsolved problems in physics",
"Architecture",
"Homogeneous chemical mixtures",
"Construction",
"Materials",
"Thermodynamics",
"Amorphous solids",
"Matter",
"Building materials"
] |
51,667,206 | https://en.wikipedia.org/wiki/Ji%C5%99%C3%AD%20Linhart | Jiří Linhart (13 April 1924 – 6 January 2011) was a nuclear fusion physicist and Czech Olympic swimmer. He competed in the men's 200 metre breaststroke at the 1948 Summer Olympics in London, stayed on in London afterwards, and took his PhD under the supervision of Denis Gabor. He was a pioneer of nuclear fusion and the author of "Plasma Physics" (1960), the first textbook on plasma science, as well as many academic papers and early patents on nuclear reactors.
In 1956 he became group Head of Acceleration at CERN, and in 1960 he became the head of the EURATOM group in Frascati.
He was also a very keen chess player, playing in the Haifa Olympiad in 1976.
References
External links
1924 births
2011 deaths
Czech male breaststroke swimmers
Olympic swimmers for Czechoslovakia
Swimmers at the 1948 Summer Olympics
Swimmers from Prague
Nuclear fusion
Plasma physicists
Nuclear reactors
People associated with CERN
Czechoslovak male swimmers
Czech chess players
Czechoslovak chess players | Jiří Linhart | [
"Physics",
"Chemistry"
] | 188 | [
"Nuclear fusion",
"Plasma physicists",
"Plasma physics",
"Nuclear physics"
] |
63,034,277 | https://en.wikipedia.org/wiki/Bromperidol%20decanoate | Bromperidol decanoate, sold under the brand names Bromidol Depot, Bromodol Decanoato, and Impromen Decanoas, is an antipsychotic which has been marketed in Europe and Latin America. It is an antipsychotic ester and long-acting prodrug of bromperidol which is administered by depot intramuscular injection once every 4 weeks.
See also
List of antipsychotics § Antipsychotic esters
References
4-Phenylpiperidines
Aromatic ketones
4-Bromophenyl compounds
Butyrophenone antipsychotics
Decanoate esters
4-Fluorophenyl compounds
Prodrugs
Tertiary alcohols
Typical antipsychotics | Bromperidol decanoate | [
"Chemistry"
] | 159 | [
"Chemicals in medicine",
"Prodrugs"
] |
63,044,039 | https://en.wikipedia.org/wiki/Semiorthogonal%20decomposition | In mathematics, a semiorthogonal decomposition is a way to divide a triangulated category into simpler pieces. One way to produce a semiorthogonal decomposition is from an exceptional collection, a special sequence of objects in a triangulated category. For an algebraic variety X, it has been fruitful to study semiorthogonal decompositions of the bounded derived category of coherent sheaves, D^b(X).
Semiorthogonal decomposition
Alexei Bondal and Mikhail Kapranov (1989) defined a semiorthogonal decomposition of a triangulated category T to be a sequence A_1, ..., A_n of strictly full triangulated subcategories such that:
for all i < j and all objects X in A_i and Y in A_j, every morphism from Y to X is zero. That is, there are "no morphisms from right to left".
T is generated by A_1, ..., A_n. That is, the smallest strictly full triangulated subcategory of T containing A_1, ..., A_n is equal to T.
The notation T = ⟨A_1, ..., A_n⟩ is used for a semiorthogonal decomposition.
Having a semiorthogonal decomposition T = ⟨A_1, ..., A_n⟩ implies that every object of T has a canonical "filtration" whose graded pieces are (successively) in the subcategories A_1, ..., A_n. That is, for each object X of T, there is a sequence
0 = X_n → X_{n-1} → ... → X_1 → X_0 = X
of morphisms in T such that the cone of X_i → X_{i-1} lies in A_i, for each i. Moreover, this sequence is unique up to a unique isomorphism.
One can also consider "orthogonal" decompositions of a triangulated category, by requiring that there are no morphisms from A_i to A_j for any i ≠ j. However, that property is too strong for most purposes. For example, for an (irreducible) smooth projective variety X over a field, the bounded derived category of coherent sheaves never has a nontrivial orthogonal decomposition, whereas it may have a semiorthogonal decomposition, by the examples below.
A semiorthogonal decomposition of a triangulated category may be considered as analogous to a finite filtration of an abelian group. Alternatively, one may consider a semiorthogonal decomposition T = ⟨A, B⟩ as closer to a split exact sequence, because the exact sequence 0 → A → T → T/A → 0 of triangulated categories is split by the subcategory B, mapping isomorphically to T/A.
Using that observation, a semiorthogonal decomposition T = ⟨A_1, ..., A_n⟩ implies a direct sum splitting of Grothendieck groups:
K_0(T) ≅ K_0(A_1) ⊕ ... ⊕ K_0(A_n).
For example, when T is the bounded derived category D^b(X) of coherent sheaves on a smooth projective variety X, K_0(T) can be identified with the Grothendieck group of algebraic vector bundles on X. In this geometric situation, using that D^b(X) comes from a dg-category, a semiorthogonal decomposition actually gives a splitting of all the algebraic K-groups of X:
K_i(X) ≅ K_i(A_1) ⊕ ... ⊕ K_i(A_n) for all i.
Admissible subcategory
One way to produce a semiorthogonal decomposition is from an admissible subcategory. By definition, a full triangulated subcategory A ⊆ T is left admissible if the inclusion functor i: A → T has a left adjoint functor, written i*. Likewise, A ⊆ T is right admissible if the inclusion has a right adjoint, written i^!, and it is admissible if it is both left and right admissible.
A right admissible subcategory B ⊆ T determines a semiorthogonal decomposition
T = ⟨B^⊥, B⟩,
where
B^⊥ = {X in T : Hom(Y, X) = 0 for all Y in B}
is the right orthogonal of B in T. Conversely, every semiorthogonal decomposition T = ⟨A, B⟩ arises in this way, in the sense that B is right admissible and A = B^⊥. Likewise, for any semiorthogonal decomposition T = ⟨A, B⟩, the subcategory A is left admissible, and B = ⊥A, where
⊥A = {X in T : Hom(X, Y) = 0 for all Y in A}
is the left orthogonal of A.
If T is the bounded derived category of a smooth projective variety over a field k, then every left or right admissible subcategory of T is in fact admissible. By results of Bondal and Michel Van den Bergh, this holds more generally for any regular proper triangulated category that is idempotent-complete.
Moreover, for a regular proper idempotent-complete triangulated category T, a full triangulated subcategory is admissible if and only if it is regular and idempotent-complete. These properties are intrinsic to the subcategory. For example, for X a smooth projective variety and Y a subvariety not equal to X, the subcategory of D^b(X) of objects supported on Y is not admissible.
Exceptional collection
Let k be a field, and let T be a k-linear triangulated category. An object E of T is called exceptional if Hom(E,E) = k and Hom(E,E[t]) = 0 for all nonzero integers t, where [t] is the shift functor in T. (In the derived category of a smooth complex projective variety X, the first-order deformation space of an object E is Ext^1(E,E), and so an exceptional object is in particular rigid. It follows, for example, that there are at most countably many exceptional objects in D^b(X), up to isomorphism. That helps to explain the name.)
The triangulated subcategory generated by an exceptional object E is equivalent to the derived category of finite-dimensional k-vector spaces, the simplest triangulated category in this context. (For example, every object of that subcategory is isomorphic to a finite direct sum of shifts of E.)
Alexei Gorodentsev and Alexei Rudakov (1987) defined an exceptional collection to be a sequence of exceptional objects E_1, ..., E_m such that Hom(E_j, E_i[t]) = 0 for all i < j and all integers t. (That is, there are "no morphisms from right to left".) In a proper triangulated category T over k, such as the bounded derived category of coherent sheaves on a smooth projective variety, every exceptional collection generates an admissible subcategory, and so it determines a semiorthogonal decomposition:
T = ⟨A, E_1, ..., E_m⟩,
where A = ⟨E_1, ..., E_m⟩^⊥, and E_i here denotes the full triangulated subcategory generated by the object E_i. An exceptional collection is called full if the subcategory A is zero. (Thus a full exceptional collection breaks the whole triangulated category up into finitely many copies of the derived category of finite-dimensional k-vector spaces.)
In particular, if X is a smooth projective variety such that D^b(X) has a full exceptional collection E_1, ..., E_m, then the Grothendieck group of algebraic vector bundles on X is the free abelian group on the classes of these objects:
K_0(X) ≅ Z[E_1] ⊕ ... ⊕ Z[E_m].
A smooth complex projective variety X with a full exceptional collection must have trivial Hodge theory, in the sense that h^{p,q}(X) = 0 for all p ≠ q; moreover, the cycle class map must be an isomorphism.
Examples
The original example of a full exceptional collection was discovered by Alexander Beilinson (1978): the derived category of projective space P^n over a field has the full exceptional collection
O, O(1), ..., O(n),
where O(j) for integers j are the line bundles on projective space. Full exceptional collections have also been constructed on all smooth projective toric varieties, del Pezzo surfaces, many projective homogeneous varieties, and some other Fano varieties.
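As a concrete worked case of the statements above, a short sketch for the projective line, using Beilinson's collection together with the Grothendieck-group splitting quoted earlier (amsmath notation):

```latex
% Beilinson's full exceptional collection on P^1 and the induced
% splitting of the Grothendieck group.
\begin{align*}
  \mathrm{D^b}(\mathbf{P}^1) &= \langle \mathcal{O},\, \mathcal{O}(1) \rangle,
    &&\text{since } \operatorname{Hom}(\mathcal{O}(1), \mathcal{O}[t])
      = H^t(\mathbf{P}^1, \mathcal{O}(-1)) = 0 \text{ for all } t, \\
  K_0(\mathbf{P}^1) &\cong \mathbb{Z}[\mathcal{O}] \oplus \mathbb{Z}[\mathcal{O}(1)],
    &&\text{one free summand for each exceptional object.}
\end{align*}
```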
More generally, if X is a smooth projective variety of positive dimension such that the coherent sheaf cohomology groups H^i(X, O_X) are zero for i > 0, then the structure sheaf O_X, viewed as an object of D^b(X), is exceptional, and so it induces a nontrivial semiorthogonal decomposition D^b(X) = ⟨(O_X)^⊥, O_X⟩. This applies to every Fano variety over a field of characteristic zero, for example. It also applies to some other varieties, such as Enriques surfaces and some surfaces of general type.
A source of examples is Orlov's blowup formula concerning the blowup of a scheme at a codimension locally complete intersection subscheme with exceptional locus . There is a semiorthogonal decomposition where is the functor with is the natural map.
While these examples encompass a large number of well-studied derived categories, many naturally occurring triangulated categories are "indecomposable". In particular, for a smooth projective variety X whose canonical bundle is basepoint-free, every semiorthogonal decomposition D^b(X) = ⟨A, B⟩ is trivial in the sense that A or B must be zero. For example, this applies to every variety which is Calabi–Yau in the sense that its canonical bundle is trivial.
See also
Derived noncommutative algebraic geometry
Notes
References
Algebraic geometry | Semiorthogonal decomposition | [
"Mathematics"
] | 1,631 | [
"Mathematical structures",
"Fields of abstract algebra",
"Category theory",
"Algebraic geometry",
"Homological algebra"
] |
53,224,225 | https://en.wikipedia.org/wiki/ILC2 | ILC2 cells, or type 2 innate lymphoid cells, are a type of innate lymphoid cell. They are derived from the common lymphoid progenitor and belong to the lymphoid lineage. These cells lack antigen-specific B or T cell receptors because they lack recombination activating genes. ILC2s produce type 2 cytokines (e.g. IL-4, IL-5, IL-9, IL-13) and are involved in responses to helminths, allergens, some viruses such as influenza virus, and cancer.
The cell type was first described in 2001 as non-B/non-T cells, which produced IL-5 and IL-13 in response to IL-25 and expressed MHC class II and CD11c. In 2006, a similar cell population was identified in a case of helminthic infection. The name "ILC2" was not proposed until 2013. They were previously identified in literature as natural helper cells, nuocytes, or innate helper 2 cells. It is believed that ILC2s are rather old cell type with ancestor populations emerging in lamprey and bony fish.
Parasitic infection
ILC2s play the crucial role of secreting type 2 cytokines in response to large extracellular parasites. They express characteristic surface markers and receptors for chemokines, which are involved in distribution of lymphoid cells to specific organ sites. They require IL-7 for their development, which activates two transcription factors (both required by these cells)—RORα and GATA3. After stimulation with Th2 polarising cytokines, which are secreted mainly by epithelia (e.g. IL-25, IL-33, TSLP, prostaglandin D2 and leukotriene D4), ILC2s begin to produce IL-5, IL-13, IL-9, IL-4 rapidly. ILC2s are critical for primary responses to local Th2 antigens e.g. helminths and viruses and that is why ILC2s are abundant in the tissues of skin, lungs, liver and gut. It has been observed that ILC2s originate in the gut, enter lymphatic vessels and then circulate in the bloodstream so they can migrate to other organs to help fight the parasitic infection. The trafficking is partly sphingosine 1-phosphate-dependent. For example, during an Nippostrongylus brasiliensis infection, ILC2s contribute to worm clearance by producing the essential cytokine IL-13. IL-13 secreted by ILC2s also promotes migration of activated lung dendritic cells into the draining lymph node, which then results in naive T cell priming and differentiation into Th2 cells.
Respiratory virus infection
It has been observed that ILC2s are activated upon respiratory virus infections in mice and humans. For instance, during Influenza A virus infection, which induces IL-33 production, ILC2s are activated and drive airway hyper-responsiveness. Another example is Respiratory syncytial virus infection, where ILC2s contribute by being the main source of IL-13 early in the infection, leading to airway hyper-responsiveness and increased mucus production.
Allergy, atopic dermatitis, and asthma
ILC2s play a variety of roles in allergy. Primarily, they provide a source of the type 2 cytokines that orchestrate the allergic immune response. They produce a profile of signals in response to pro-allergenic cytokines IL-25 and IL-33 that is similar to those produced in response to helminthic infection. Their contribution to this signaling appears to be comparable to that of T cells. In response to allergen exposure in the lungs, ILC2s produce IL-13, a necessary cytokine in the pathogenesis of allergic reactions. This response appears to be independent of T and B cells. Further, allergic responses that resemble asthma-like symptoms have been induced in mice that lack T and B cells using IL-33. It has also been found that ILC2s are present in higher concentrations in tissues where allergic symptoms are present, such as in the nasal polyps of patients with chronic rhinosinusitis and the skin from patients with atopic dermatitis.
Barrier function
ILC2s are known to be enriched in the Fat-Associated Lymphoid Clusters (FALCs) within the mesenteries. IL-5 secreted by ILC2s is essential growth factor for B1 B cells and therefore important in the IgA antibody production. Besides the type 2 cytokines, ILC2s can also produce IL-6, which induces antibody production by B-cells, acts as a growth factor for plasmablasts and contributes in regulation of T follicular helper cells.
ILC2s are also known to be present in the FALCs within the pleural cavity. After being stimulated via IL-33 during an infection, they begin to secrete IL-5, leading to an activation of B1 B cells and the production of IgM antibodies. ILC2s are the dominant population of ILC in the lungs. By producing IL-13, they can initiate smooth muscle contraction and mucus secretion, but also goblet cell hyperplasia if the IL-13 is overexpressed. In addition, ILC2s help pulmonary wound healing after influenza infection by secreting amphiregulin. Besides lungs, ILC2 populations can also be found in human nasal and tonsil tissues.
Adipose tissue homeostasis
ILC2s are essential in the maintenance of homeostasis in lean and healthy adipose tissue. ILC2s resident in visceral adipose tissue produce IL-5, IL-13 and methionine-enkephalin peptides after prolonged exposure to IL-33. IL-5 secreted by ILC2s in adipose tissue is crucial for the recruitment and maintenance of eosinophils. Furthermore, production of IL-13 and IL-4 by ILC2 and eosinophils supports the maintenance of alternatively activated M2 macrophages and glucose homeostasis.
Research identified dysregulated responses of ILC2s in adipose tissue as a factor in the development of obesity in mice since ILC2s also play important role in energy homeostasis. Methionine-enkephalin peptides produced by ILC2s act directly on adipocytes to upregulate UCP1 and promote emergence of beige adipocytes in white adipose tissue. Beige and brown adipose tissue are specialized in thermogenesis. The process of beiging leads to increased energy expenditure and decreased adiposity.
References
Immune system
Lymphocytes | ILC2 | [
"Biology"
] | 1,438 | [
"Immune system",
"Organ systems"
] |
53,224,927 | https://en.wikipedia.org/wiki/Playing%20period | Playing period is a division of time in a sport or game, in which play occurs. Many games are divided into a fixed number of periods, which may be named for the number of divisions. Other games use terminology independent of the total number of divisions. A playing period may have a fixed length of game time or be bound by other rules of the game.
Description
The playing period is a division of time in a sport or game, in which play occurs. Many games are divided into a fixed number of periods, which may be named for the number of divisions (e.g., a half or a quarter). Other games use terminology independent of the total number of divisions (e.g., sets or innings). A playing period may have a fixed length of game time or be bound by other rules (e.g., three outs in baseball or a sudden-death goal in overtime).
Common periods
Halves and quarters
Basketball and gridiron football are among the sports that are divided into two halves, which may be subdivided into two quarters. A fifth overtime "quarter" may be played in the event of a tie at the end of the fourth quarter.
Periods
Floorball and ice hockey games are typically divided into three periods. A fourth period of overtime may be played in the event of a tie at the end of the third period.
Innings
Cricket and baseball games are divided into innings; within each of the innings, there are further subdivisions of play known as deliveries or pitches. In limited overs cricket, each of the innings lasts until either all but one of the players on the batting team are out, or a certain number of legal deliveries have occurred. Additional short innings, which are also limited in the number of legal deliveries, are played if necessary to break ties. In baseball, each inning consists of each team batting until three players on the team are out. Additional innings may be played if the game is tied after the ninth or subsequent innings.
In variations of tag such as kho-kho and atya-patya, there is a time limit for each inning, and if the game is tied, additional innings may be played on a basis akin to sudden death.
Ends
Curling contests consist of a number of ends, where each player on each team throws all of their stones.
Sets
Some sports, like volleyball or tennis, are divided into a predetermined number of "sets", and the match ends when a team or individual wins the required number of sets (e.g. winning 3 sets in a best of 5). A set is usually won when a certain number of points is achieved by one of the competitors (25 points in volleyball or 6 games in tennis, for example), though further rules, like requiring a 2-point advantage, might be imposed.
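A minimal sketch of the set-win logic just described, using volleyball-style numbers (first to 25 points, win by two); the function name and defaults are illustrative, not taken from any official rulebook.

```python
def set_won(points_a: int, points_b: int, target: int = 25, margin: int = 2) -> bool:
    """True if side A has won the set under a 'first to target, win by margin' rule."""
    leader, trailer = max(points_a, points_b), min(points_a, points_b)
    return points_a > points_b and leader >= target and leader - trailer >= margin

# A routine win, an undecided deuce situation, and an extended set.
print(set_won(25, 20))   # True
print(set_won(25, 24))   # False - only a one-point lead
print(set_won(27, 25))   # True
```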
See also
Sports
Game
References
Sports terminology
Units of time | Playing period | [
"Physics",
"Mathematics"
] | 575 | [
"Physical quantities",
"Time",
"Units of time",
"Quantity",
"Spacetime",
"Units of measurement"
] |
53,231,144 | https://en.wikipedia.org/wiki/Ponceau%203R | Ponceau 3R (C.I. 16155) is an azo dye that once was used as a red food colorant. It is one of a family of Ponceau (French for "poppy-colored") dyes.
References
Food colorings
Azo dyes
Organic sodium salts
Naphthalenesulfonates
2-Naphthols | Ponceau 3R | [
"Chemistry"
] | 75 | [
"Organic sodium salts",
"Salts"
] |
53,231,843 | https://en.wikipedia.org/wiki/Brownout%20%28software%20engineering%29 | Brownout in software engineering is a technique that involves disabling certain features of an application.
Description
Brownout is used to increase the robustness of an application to computing capacity shortage. If too many users are simultaneously accessing an application hosted online, the underlying computing infrastructure may become overloaded, rendering the application unresponsive. Users are likely to abandon the application and switch to competing alternatives, hence incurring long-term revenue loss. To better deal with such a situation, the application can be given brownout capabilities: The application will disable certain features – e.g., an online shop will no longer display recommendations of related products – to avoid overload. Although reducing features generally has a negative impact on the short-term revenue of the application owner, long-term revenue loss can be avoided.
The technique is inspired by brownouts in power grids, which consists in reducing the power grid's voltage in case electricity demand exceeds production. Some consumers, such as incandescent light bulbs, will dim – hence originating the term – and draw less power, thus helping match demand with production. Similarly, a brownout application helps match its computing capacity requirements to what is available on the target infrastructure.
Brownout complements elasticity. The former can help the application withstand short-term capacity shortage, but does so without changing the capacity available to the application. In contrast, elasticity consists of adding (or removing) capacity to the application, preferably in advance, so as to avoid capacity shortage altogether. The two techniques can be combined; e.g., brownout is triggered when the number of users increases unexpectedly until elasticity can be triggered, the latter usually requiring minutes to show an effect.
Brownout is relatively non-intrusive for the developer, for example, it can be implemented as an advice in aspect-oriented programming. However, surrounding components, such as load-balancers, need to be made brownout-aware to distinguish between cases where an application is running normally and cases where the application maintains a low response time by triggering brownout.
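A minimal sketch of the feedback idea described above, assuming a 'dimmer' value that gives the probability of serving optional content (such as recommendations) and that is adjusted from measured response times; the setpoint, gain, class and function names are illustrative assumptions rather than a reference implementation.

```python
import random

class BrownoutController:
    """Adjusts a dimmer in [0, 1]: the probability of serving optional content."""

    def __init__(self, setpoint_ms: float = 500.0, gain: float = 0.002):
        self.setpoint_ms = setpoint_ms   # target response time
        self.gain = gain                 # how aggressively to react to deviations
        self.dimmer = 1.0                # start with all features enabled

    def update(self, measured_ms: float) -> float:
        # Proportional feedback: slow responses shrink the dimmer,
        # fast responses let it recover toward 1.0.
        error = self.setpoint_ms - measured_ms
        self.dimmer = min(1.0, max(0.0, self.dimmer + self.gain * error))
        return self.dimmer

def handle_request(controller: BrownoutController) -> str:
    # Optional content (e.g. recommendations) is served probabilistically.
    if random.random() < controller.dimmer:
        return "page with recommendations"
    return "page without recommendations"

controller = BrownoutController()
for measured in [300, 450, 700, 900, 650, 400]:   # simulated response times (ms)
    dimmer = controller.update(measured)
    print(f"measured={measured} ms -> dimmer={dimmer:.2f}: {handle_request(controller)}")
```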
References
Software engineering
Cloud computing | Brownout (software engineering) | [
"Technology",
"Engineering"
] | 429 | [
"Software engineering",
"Systems engineering",
"Information technology",
"Computer engineering"
] |
62,119,898 | https://en.wikipedia.org/wiki/Value%20of%20structural%20health%20information | The value of structural health information is the expected utility gain of a built environment system by information provided by structural health monitoring (SHM). The quantification of the value of structural health information is based on decision analysis adapted to built environment engineering. The value of structural health information can be significant for the risk and integrity management of built environment systems.
Background
The value of structural health information takes basis in the framework of the decision analysis and the value of information analysis as introduced by Raiffa and Schlaifer and adapted to civil engineering by Benjamin and Cornell. Decision theory itself is based upon the expected utility hypothesis by Von Neumann and Morgenstern. The concepts for the value of structural health information in built environment engineering were first formulated by Pozzi and Der Kiureghian and Faber and Thöns.
Formulation
The value of structural health information is quantified with a normative decision analysis. The value of structural health monitoring is calculated as the difference between the optimized expected utilities of performing and not performing structural health monitoring (SHM), U_1 and U_0, respectively:
V = U_1 − U_0.
The expected utilities are calculated with a decision scenario involving (1) interrelated built environment system state, utility and consequence models, (2) structural health information type, precision and cost models and (3) structural health action type and implementation models. The value of structural health information quantification facilitates an optimization of structural health information system parameters and information-dependent actions.
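A minimal numerical sketch of the preposterior calculation implied by the formulation above, for a toy structure that is either damaged or intact, one imperfect monitoring outcome, and a repair/do-nothing decision; all probabilities, costs and the monitoring accuracy are made-up illustrative values.

```python
# Toy value-of-information calculation for structural health monitoring.
p_damage = 0.10                    # prior probability the structure is damaged
accuracy = 0.90                    # P(indication | damage) = P(no indication | intact)
cost_monitoring = 2.0              # cost of operating the SHM system
cost_repair = 20.0                 # cost of a repair action
cost_failure = 500.0               # consequence of leaving a damaged structure unrepaired

def expected_utility(action: str, p_dmg: float) -> float:
    """Negative expected cost of an action given the damage probability."""
    if action == "repair":
        return -cost_repair
    return -(p_dmg * cost_failure)  # do nothing and carry the failure risk

# Without SHM (U_0): choose the better action under the prior.
u_0 = max(expected_utility(a, p_damage) for a in ("repair", "do nothing"))

# With SHM (U_1): average the optimal posterior decisions over the outcomes.
p_ind = accuracy * p_damage + (1 - accuracy) * (1 - p_damage)
post_ind = accuracy * p_damage / p_ind
post_no_ind = (1 - accuracy) * p_damage / (1 - p_ind)
u_1 = (
    p_ind * max(expected_utility(a, post_ind) for a in ("repair", "do nothing"))
    + (1 - p_ind) * max(expected_utility(a, post_no_ind) for a in ("repair", "do nothing"))
) - cost_monitoring

print(f"U_0 (no SHM)   = {u_0:.2f}")
print(f"U_1 (with SHM) = {u_1:.2f}")
print(f"Value of SHM information V = {u_1 - u_0:.2f}")
```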
Application
The value of structural health information provides a quantitative decision basis for (1) implementing SHM or not, (2) the identification of the optimal SHM strategy and (3) for planning optimal structural health actions, such as e.g., repair and replacement. The value of structural health information presupposes relevance of SHM information for the built environment system performance. A significant value of structural health information has been found for the risk and integrity management of engineering structures.
References
Infrastructure
Architecture academics | Value of structural health information | [
"Engineering"
] | 390 | [
"Construction",
"Infrastructure"
] |
62,122,982 | https://en.wikipedia.org/wiki/Senescence-associated%20secretory%20phenotype | Senescence-associated secretory phenotype (SASP) is a phenotype associated with senescent cells wherein those cells secrete high levels of inflammatory cytokines, immune modulators, growth factors, and proteases. SASP may also consist of exosomes and ectosomes containing enzymes, microRNA, DNA fragments, chemokines, and other bioactive factors. Soluble urokinase plasminogen activator surface receptor is part of SASP, and has been used to identify senescent cells for senolytic therapy. Initially, SASP is immunosuppressive (characterized by TGF-β1 and TGF-β3) and profibrotic, but progresses to become proinflammatory (characterized by IL-1β, IL-6 and IL-8) and fibrolytic. SASP is the primary cause of the detrimental effects of senescent cells.
SASP is heterogenous, with the exact composition dependent upon the senescent-cell inducer and the cell type. Interleukin 12 (IL-12) and Interleukin 10 (IL-10) are increased more than 200-fold in replicative senescence in contrast to stress-induced senescence or proteosome-inhibited senescence where the increases are about 30-fold or less. Tumor necrosis factor (TNF) is increased 32-fold in stress-induced senescence, 8-fold in replicative senescence, and only slightly in proteosome-inhibited senescence. Interleukin 6 (IL-6) and interleukin 8 (IL-8) are the most conserved and robust features of SASP. But some SASP components are anti-inflammatory.
Senescence and SASP can also occur in post-mitotic cells, notably neurons. The SASP in senescent neurons can vary according to cell type, the initiator of senescence, and the stage of senescence.
An online SASP Atlas serves as a guide to the various types of SASP.
SASP is one of the three main features of senescent cells, the other two features being arrested cell growth, and resistance to apoptosis. SASP factors can include the anti-apoptotic protein Bcl-xL, but growth arrest and SASP production are independently regulated. Although SASP from senescent cells can kill neighboring normal cells, the apoptosis-resistance of senescent cells protects those cells from SASP.
History
The concept and abbreviation of SASP was first established by Judith Campisi and her group, who first published on the subject in 2008.
Causes
SASP expression is induced by a number of transcription factors, including MLL1 (KMT2A), C/EBPβ, and NF-κB. NF-κB and the enzyme CD38 are mutually activating. NF-κB is expressed as a result of inhibition of autophagy-mediated degradation of the transcription factor GATA4. GATA4 is activated by the DNA damage response factors, which induce cellular senescence. SASP is both a promoter of DNA damage response and a consequence of DNA damage response, in an autocrine and paracrine manner. Aberrant oncogenes, DNA damage, and oxidative stress induce mitogen-activated protein kinases, which are the upstream regulators of NF-κB.
Demethylation of DNA packaging protein Histone H3 (H3K27me3) can lead to up-regulation of genes controlling SASP.
mTOR (mammalian target of rapamycin) is also a key initiator of SASP. Interleukin 1 alpha (IL1A) is found on the surface of senescent cells, where it contributes to the production of SASP factors due to a positive feedback loop with NF-κB. Translation of mRNA for IL1A is highly dependent upon mTOR activity. mTOR activity increases levels of IL1A, mediated by MAPKAPK2. mTOR inhibition of ZFP36L1 prevents this protein from degrading transcripts of numerous components of SASP factors. Inhibition of mTOR supports autophagy, which can generate SASP components.
Ribosomal DNA (rDNA) is more vulnerable to DNA damage than DNA elsewhere in the genome such that rDNA instability can lead to cellular senescence, and thus to SASP
The high-mobility group proteins (HMGA) can induce senescence and SASP in a p53-dependent manner.
Activation of the retrotransposon LINE1 can result in cytosolic DNA that activates the cGAS–STING cytosolic DNA sensing pathway upregulating SASP by induction of interferon type I. cGAS is essential for induction of cellular senescence by DNA damage.
SASP secretion can also be initiated by the microRNAs miR-146 a/b.
Senescent cells release mitochondrial double-stranded RNA (mt-dsRNA) into the cytosol driving the SASP via RIGI/MDA5/MAVS/MFN1. Moreover, senescent cells are hypersensitive to mt-dsRNA-driven inflammation due to reduced levels of PNPT1 and ADAR1.
Pathology
Senescent cells are highly metabolically active, producing large amounts of SASP, which is why senescent cells consisting of only 2% or 3% of tissue cells can be a major cause of aging-associated diseases. SASP factors cause non-senescent cells to become senescent. SASP factors induce insulin resistance. SASP disrupts normal tissue function by producing chronic inflammation, induction of fibrosis and inhibition of stem cells. Transforming growth factor beta family members secreted by senescent cells impede differentiation of adipocytes, leading to insulin resistance.
SASP factors IL-6 and TNFα enhance T-cell apoptosis, thereby impairing the capacity of the adaptive immune system.
SASP factors from senescent cells reduce nicotinamide adenine dinucleotide (NAD+) in non-senescent cells, thereby reducing the capacity for DNA repair and sirtuin activity in non-senescent cells. SASP induction of the NAD+ degrading enzyme CD38 on non-senescent cells (macrophages) may be responsible for most of this effect. By contrast, NAD+ contributes to the secondary (pro-inflammatory) manifestation of SASP.
SASP induces an unfolded protein response in the endoplasmic reticulum because of an accumulation of unfolded proteins, resulting in proteotoxic impairment of cell function.
SASP cytokines can result in an inflamed stem cell niche, leading to stem cell exhaustion and impaired stem cell function.
SASP can either promote or inhibit cancer, depending on the SASP composition, notably including p53 status. Despite the fact that cellular senescence likely evolved as a means of protecting against cancer early in life, SASP promotes the development of late-life cancers. Cancer invasiveness is promoted primarily through the actions of the SASP factors metalloproteinase, chemokine, interleukin 6 (IL-6), and interleukin 8 (IL-8). In fact, SASP from senescent cells is associated with many aging-associated diseases, including not only cancer, but atherosclerosis and osteoarthritis. For this reason, senolytic therapy has been proposed as a generalized treatment for these and many other diseases. The flavonoid apigenin has been shown to strongly inhibit SASP production.
Benefits
SASP can aid in signaling to immune cells for senescent cell clearance, with specific SASP factors secreted by senescent cells attracting and activating different components of both the innate and adaptive immune system. The SASP cytokine CCL2 (MCP1) recruits macrophages to remove cancer cells. Although transient expression of SASP can recruit immune system cells to eliminate cancer cells as well as senescent cells, chronic SASP promotes cancer. Senescent hematopoietic stem cells produce a SASP that induces an M1 polarization of macrophages which kills the senescent cells in a p53-dependent process.
Autophagy is upregulated to promote survival.
SASP factors can maintain senescent cells in their senescent state of growth arrest, thereby preventing cancerous transformation. Additionally, SASP secreted by cells that have become senescent because of stresses can induce senescence in adjoining cells subject to the same stresses, thereby reducing cancer risk.
SASP can play a beneficial role by promoting wound healing. SASP may play a role in tissue regeneration by signaling for senescent cell clearance by immune cells, allowing progenitor cells to repopulate tissue. In development, SASP also may be used to signal for senescent cell clearance to aid tissue remodeling. The ability of SASP to clear senescent cells and regenerate damaged tissue declines with age. In contrast to the persistent character of SASP in the chronic inflammation of multiple age-related diseases, beneficial SASP in wound healing is transitory. Temporary SASP in the liver or kidney can reduce fibrosis, but chronic SASP could lead to organ dysfunction.
Modification
Senescent cells have permanently active mTORC1 irrespective of nutrients or growth factors, resulting in the continuous secretion of SASP. By inhibiting mTORC1, rapamycin reduces SASP production by senescent cells.
SASP has been reduced through inhibition of p38 mitogen-activated protein kinases and janus kinase.
The protein hnRNP A1 (heterogeneous nuclear ribonucleoprotein A1) antagonizes cellular senescence and induction of the SASP by stabilizing Oct-4 and sirtuin 1 mRNAs.
SASP Index
A SASP index composed of 22 SASP factors has been used to evaluate treatment outcomes of late life depression. Higher SASP index scores corresponded to increased incidence of treatment failure, whereas no individual SASP factors were associated with treatment failure.
Inflammaging
Chronic inflammation associated with aging has been termed inflammaging, although SASP may be only one of the possible causes of this condition. Chronic systemic inflammation is associated with aging-associated diseases.
Senolytic agents have been recommended to counteract some of these effects. Chronic inflammation due to SASP can suppress immune system function, which is one reason elderly persons are more vulnerable to COVID-19.
See also
Cellular senescence
References
For further reading
Cellular senescence | Senescence-associated secretory phenotype | [
"Biology"
] | 2,233 | [
"Senescence",
"Cellular senescence",
"Cellular processes"
] |
62,123,474 | https://en.wikipedia.org/wiki/Thermal%20ecology | Thermal ecology is the study of the interactions between temperature and organisms. Such interactions include the effects of temperature on an organism's physiology, behavioral patterns, and relationship with its environment. While being warmer is usually associated with greater fitness, maintaining this level of heat costs a significant amount of energy. Organisms will make various trade-offs so that they can continue to operate at their preferred temperatures and optimize metabolic functions. With the emergence of climate change, scientists are investigating how species will be affected and what changes they will undergo in response.
History
While it is not known exactly when thermal ecology began to be recognized as a distinct branch of science, in 1969 the Savannah River Ecology Laboratory (SREL) developed a research program on thermal stress caused by heated water, previously used to cool nuclear reactors, being released into nearby bodies of water. The SREL, alongside the DuPont Company's Savannah River Laboratory and the Atomic Energy Commission, sponsored the first scientific symposium on thermal ecology in 1974 to discuss this issue and similar cases; the second symposium was held the following year, in 1975.
Animals
Temperature has a notable effect on animals, contributing to body growth and size as well as behavioral and physical adaptations. Ways that animals can control their body temperature include generating heat through daily activity and cooling down through prolonged inactivity at night. Because marine animals cannot do this, they have adapted traits such as a small surface-area-to-volume ratio, which minimizes heat transfer with the environment, and the production of antifreeze compounds in the body for survival in extremely cold conditions.
Endotherms
Endotherms expend a large amount of energy keeping their body temperatures warm and therefore require a large energy intake to make up for it. There are several ways that they have evolved to solve this issue. For instance, following Bergmann's Rule, endotherms in colder climates tend to be larger than those in warmer climates as a way to conserve internal heat. Other methods include reducing internal temperatures and metabolic rates through daily torpor and hibernation.
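As a rough illustration of the geometry behind Bergmann's Rule, the sketch below (a minimal example assuming idealized spherical bodies, with invented radii) shows that the surface-area-to-volume ratio shrinks as body size grows, so a larger animal exposes proportionally less surface through which to lose metabolic heat.

```python
import math

def surface_to_volume(radius_m: float) -> float:
    """Surface-area-to-volume ratio (1/m) for a sphere; simplifies to 3/r."""
    surface = 4.0 * math.pi * radius_m ** 2
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    return surface / volume

# Hypothetical "small" and "large" endotherm body radii, for illustration only.
for label, radius in [("small body, r = 0.05 m", 0.05), ("large body, r = 0.50 m", 0.50)]:
    print(f"{label}: SA:V = {surface_to_volume(radius):.0f} per metre")
# The small body has SA:V = 60 per metre, the large one 6 per metre,
# i.e. ten times less relative surface area available for heat loss.
```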
Strix occidentalis
The Strix occidentalis, or the California spotted owl, has a preferred temperature range of around 18.20-35.20 °C and is less tolerant to heat than most other birds, exhibiting behaviors such as wing drooping and increased breathing at 30-34 °C. Because of this they tend to live in environments that are resistant to temperature change such as old-growth forests.
Ectotherms
Because the main source of heat for ectotherms is their environment, thermal requirements change from species to species depending on geographical location. Because some species maintain a static preferred body temperature across generations, they respond to drastic environmental change primarily through behavioral adjustments, with physiological adjustments as a last resort. In addition, ectotherms, like endotherms, are generally larger when living in colder climates, following the temperature-size rule.
Podarcis siculus
The Podarcis siculus, otherwise known as the Italian wall lizard, has a preferred temperature range of around 28.40-31.57 °C for both males and females. A strong direct relationship has been observed between their body temperatures and air temperature in the summer, and a weak correlation has been observed in the spring. To control their internal temperature, these lizards seek shade under rocks and leaves, which has proven to be effective.
Plants
Many processes during plant reproduction operate at specific temperature ranges making temperature important for reproductive success. Increasing the temperature of the reproductive organs in plants results in more frequent visitations from pollinators and an increase in the rate of metabolic processes. Factors that affect the capture and maintaining of heat in plants include flower orientation, size and shape, coloration, opening and closure, pubescence, and thermogenesis.
Climate change
Due to recent global climate change, thermal ecology has become a topic of interest for scientists studying ecological responses. Observations show that organisms typically respond to changes in weather and temperature either by moving to an environment in which these factors match what they are already accustomed to, or by staying in their current environment and consequently becoming acclimated to the new conditions. In a study of the fish species Galaxias platei, it was concluded that the direct impacts of climate change, such as increased temperatures, would most likely not pose a significant threat; however, indirect impacts such as habitat loss may be detrimental.
See also
Quantitative ecology
References
Subfields of ecology
Physiology
Branches of thermodynamics | Thermal ecology | [
"Physics",
"Chemistry",
"Biology"
] | 895 | [
"Branches of thermodynamics",
"Thermodynamics",
"Physiology"
] |
62,129,266 | https://en.wikipedia.org/wiki/Prime%20editing | Prime editing is a 'search-and-replace' genome editing technology in molecular biology by which the genome of living organisms may be modified. The technology directly writes new genetic information into a targeted DNA site. It uses a fusion protein, consisting of a catalytically impaired Cas9 endonuclease fused to an engineered reverse transcriptase enzyme, and a prime editing guide RNA (pegRNA), capable of identifying the target site and providing the new genetic information to replace the target DNA nucleotides. It mediates targeted insertions, deletions, and base-to-base conversions without the need for double strand breaks (DSBs) or donor DNA templates.
The technology has received mainstream press attention due to its potential uses in medical genetics. It utilizes methodologies similar to precursor genome editing technologies, including CRISPR/Cas9 and base editors. Prime editing has been used on some animal models of genetic disease and plants.
Genome editing
Components
Prime editing involves three major components:
A prime editing guide RNA (pegRNA), capable of (i) identifying the target nucleotide sequence to be edited, and (ii) encoding new genetic information that replaces the targeted sequence. The pegRNA consists of an extended single guide RNA (sgRNA) containing a primer binding site (PBS) and a reverse transcriptase (RT) template sequence. During genome editing, the primer binding site allows the 3’ end of the nicked DNA strand to hybridize to the pegRNA, while the RT template serves as a template for the synthesis of edited genetic information.
A fusion protein consisting of a Cas9 H840A nickase fused to a Moloney Murine Leukemia Virus (M-MLV) reverse transcriptase.
Cas9 H840A nickase: the Cas9 enzyme contains two nuclease domains that can cleave DNA sequences, a RuvC domain that cleaves the non-target strand and a HNH domain that cleaves the target strand. The introduction of a H840A substitution in Cas9, through which the 840th amino acid histidine is replaced by an alanine, inactivates the HNH domain. With only the RuvC functioning domain, the catalytically impaired Cas9 introduces a single strand nick, hence the name nickase.
M-MLV reverse transcriptase: an enzyme that synthesizes DNA from a single-stranded RNA template.
A single guide RNA (sgRNA) that directs the Cas9 H840A nickase portion of the fusion protein to nick the non-edited DNA strand.
Mechanism
Genomic editing takes place by transfecting cells with the pegRNA and the fusion protein. Transfection is often accomplished by introducing vectors into a cell. Once internalized, the fusion protein nicks the target DNA sequence, exposing a 3’-hydroxyl group that can be used to initiate (prime) the reverse transcription of the RT template portion of the pegRNA. This results in a branched intermediate that contains two DNA flaps: a 3’ flap that contains the newly synthesized (edited) sequence, and a 5’ flap that contains the dispensable, unedited DNA sequence. The 5’ flap is then cleaved by structure-specific endonucleases or 5’ exonucleases. This process allows 3’ flap ligation, and creates a heteroduplex DNA composed of one edited strand and one unedited strand. The reannealed double stranded DNA contains nucleotide mismatches at the location where editing took place. In order to correct the mismatches, the cells exploit the intrinsic mismatch repair (MMR) mechanism, with two possible outcomes: (i) the information in the edited strand is copied into the complementary strand, permanently installing the edit; (ii) the original nucleotides are re-incorporated into the edited strand, excluding the edit.
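As a schematic illustration of how a pegRNA encodes both the target and the edit, the sketch below assembles a pegRNA sequence from a spacer, scaffold, RT template and primer binding site. All sequences, lengths and the helper functions are invented placeholders for illustration only; real pegRNA design depends on the specific locus and is normally done with dedicated design tools.

```python
def revcomp(dna: str) -> str:
    """Reverse complement of a DNA sequence written 5'->3'."""
    pairs = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(pairs[b] for b in reversed(dna.upper()))

def to_rna(dna: str) -> str:
    """Rewrite a DNA string as the equivalent RNA string."""
    return dna.upper().replace("T", "U")

def build_pegrna(spacer: str, scaffold_rna: str,
                 edited_flap: str, primer_region: str) -> str:
    """Assemble a pegRNA 5'->3' as spacer + scaffold + RT template + PBS.

    edited_flap   : desired sequence of the newly synthesized 3' flap on the
                    nicked strand (5'->3'), i.e. the sequence carrying the edit.
    primer_region : sequence of the nicked strand immediately 5' of the nick
                    (5'->3'), whose 3' end must pair with the PBS.
    """
    rt_template = revcomp(edited_flap)     # template copied by the reverse transcriptase
    pbs = revcomp(primer_region)           # primer binding site
    return to_rna(spacer) + scaffold_rna + to_rna(rt_template) + to_rna(pbs)

# Hypothetical example: a 20-nt spacer, a truncated scaffold placeholder,
# a 14-nt edited flap and a 13-nt primer region (all invented sequences).
peg = build_pegrna(
    spacer="GATTACAGATTACAGATTAC",
    scaffold_rna="GUUUUAGAGCUAGAAAUAGCAAG",
    edited_flap="CATGAAGGCTTACG",
    primer_region="TTCAGGATTACAG",
)
print(peg)
```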
Development process
During the development of this technology, several modifications were done to the components, in order to increase its effectiveness.
Prime editor 1
In the first system, a wild-type Moloney Murine Leukemia Virus (M-MLV) reverse transcriptase was fused to the Cas9 H840A nickase C-terminus. Detectable editing efficiencies were observed.
Prime editor 2
In order to enhance DNA-RNA affinity, enzyme processivity, and thermostability, five amino acid substitutions were incorporated into the M-MLV reverse transcriptase. The mutant M-MLV RT was then incorporated into PE1 to give rise to (Cas9 (H840A)-M-MLV RT(D200N/L603W/T330P/T306K/W313F)). Efficiency improvement was observed over PE1.
Prime editor 3
Despite its increased efficacy, the edit inserted by PE2 might still be removed due to DNA mismatch repair of the edited strand. To avoid this problem during DNA heteroduplex resolution, an additional single guide RNA (sgRNA) is introduced. This sgRNA is designed to match the edited sequence introduced by the pegRNA, but not the original allele. It directs the Cas9 nickase portion of the fusion protein to nick the unedited strand at a nearby site, opposite to the original nick. Nicking the non-edited strand causes the cell's natural repair system to copy the information in the edited strand to the complementary strand, permanently installing the edit. However, there are drawbacks to this system as nicking the unaltered strand can lead to additional undesired indels.
Prime editor 4
Prime editor 4 utilizes the same machinery as PE2, but also includes a plasmid that encodes for dominant negative MMR protein MLH1. Dominant negative MLH1 is able to essentially knock out endogenous MLH1 by inhibition, thereby reducing cellular MMR response and increasing prime editing efficiency.
Prime editor 5
Prime editor 5 utilizes the same machinery as PE3, but also includes a plasmid that encodes for dominant negative MLH1. Like PE4, this allows for a knockdown of endogenous MMR response, increasing the efficiency of prime editing.
Nuclease Prime Editor
Nuclease Prime Editor uses the Cas9 nuclease instead of the Cas9(H840A) nickase. Unlike prime editor 3 (PE3), which requires nicks on both DNA strands to induce efficient prime editing, the Nuclease Prime Editor requires only a single pegRNA, since that single guide already creates a double-strand break instead of a single-strand nick.
Twin prime editing
The "twin prime editing" (twinPE) mechanism reported in 2021 allows editing large sequences of DNA – sequences as large as genes – which addresses the method's key drawback. It uses a prime editor protein and two prime editing guide RNAs.
History
Prime editing was developed in the lab of David R. Liu at the Broad Institute and disclosed in Anzalone et al. (2019). Since then prime editing and the research that produced it have received widespread scientific acclaim, being called "revolutionary" and an important part of the future of editing.
Development of epegRNAs
Prime editing efficiency can be increased with the use of engineered pegRNAs (epegRNAs). One common issue with traditional pegRNAs is degradation of the 3' end, leading to decreased PE efficiency. epegRNAs have a structured RNA motif added to their 3' end to prevent degradation.
Implications
Although additional research is required to improve the efficiency of prime editing, the technology offers promising scientific improvements over other gene editing tools. The prime editing technology has the potential to correct the vast majority of pathogenic alleles that cause genetic diseases, as it can repair insertions, deletions, and nucleotide substitutions.
Advantages
The prime editing tool offers advantages over traditional gene editing technologies. CRISPR/Cas9 edits rely on non-homologous end joining (NHEJ) or homology-directed repair (HDR) to fix DNA breaks, while the prime editing system employs DNA mismatch repair. This is an important feature of this technology, given that DNA repair mechanisms such as NHEJ and HDR generate unwanted, random insertions or deletions (INDELs). These are byproducts that complicate the retrieval of cells carrying the correct edit.
The prime system introduces single-stranded DNA breaks instead of the double-stranded DNA breaks observed in other editing tools, such as base editors. Collectively, base editing and prime editing offer complementary strengths and weaknesses for making targeted transition mutations. Base editors offer higher editing efficiency and fewer INDEL byproducts if the desired edit is a transition point mutation and a PAM sequence exists roughly 15 bases from the target site. However, because the prime editing technology does not require a precisely positioned PAM sequence to target a nucleotide sequence, it offers more flexibility and editing precision. Remarkably, prime editors allow all types of substitutions, both transitions and transversions, to be inserted into the target sequence. Cytosine base editors and adenine base editors can already perform precise base transitions, but for base transversions there have been no good options. Prime editing performs transversions with good usability. PE can insert up to 44 bp, delete up to 80 bp, or combinations thereof.
Because the prime system involves three separate DNA binding events (between (i) the guide sequence and the target DNA, (ii) the primer binding site and the target DNA, and (iii) the 3’ end of the nicked DNA strand and the pegRNA), it has been suggested to have fewer undesirable off-target effects than CRISPR/Cas9.
Limitations
There is considerable interest in applying gene-editing methods to the treatment of diseases with a genetic component. However, there are multiple challenges associated with this approach. An effective treatment would require editing of a large number of target cells, which in turn would require an effective method of delivery and a great level of tissue specificity.
As of 2019, prime editing looks promising for relatively small genetic alterations, but more research needs to be conducted to evaluate whether the technology is efficient in making larger alterations, such as targeted insertions and deletions. Larger genetic alterations would require a longer RT template, which could hinder the efficient delivery of pegRNA to target cells. Furthermore, a pegRNA containing a long RT template could become vulnerable to damage caused by cellular enzymes. Prime editing in plants suffers from low efficiency ranging from zero to a few percent and needs significant improvement.
Some of these limitations have been mitigated by recent improvements to the prime editors, including motifs that protect pegRNAs from degradation. Further research is needed before prime editing could be used to correct pathogenic alleles in humans. Research has also shown that inhibition of certain MMR proteins, including MLH1 can improve prime editing efficiency.
Delivery method
Prime editing, like base editing, requires delivery of both a protein and an RNA molecule into living cells. Introducing exogenous gene editing technologies into living organisms is a significant challenge. One potential way to introduce the editor into animals and plants is to package it into a viral capsid; the target organism can then be transduced by the virus to synthesize the editor in vivo. Common laboratory transduction vectors such as lentivirus cause immune responses in humans, so proposed human therapies often center around adeno-associated virus (AAV), because AAV infections are largely asymptomatic. Unfortunately, the effective packaging capacity of AAV vectors is small, approximately 4.4 kb not including inverted terminal repeats. As a comparison, the gene encoding an SpCas9-reverse transcriptase fusion protein is 6.3 kb, which does not even account for the lengthened guide RNA necessary for targeting and priming the site of interest. However, successful delivery in mice has been achieved by splitting the editor into two AAV vectors or by using an adenovirus, which has a larger packaging capacity.
Applications
Prime editors may be used in gene drives. A prime editor may be incorporated into the Cleaver half of a Cleave and Rescue/ClvR system. In this case it is not meant to perform a precise alteration but instead to merely disrupt.
PE is among recently introduced technologies which allow the transfer of single-nucleotide polymorphisms (SNPs) from one individual crop plant to another. PE is precise enough to be used to recreate an arbitrary SNP in an arbitrary target, including deletions, insertions, and all 12 point mutations without also needing to perform a double-stranded break or carry a donating template.
See also
Genetics
Glossary of genetics
Human Nature (2019 CRISPR film documentary)
Synthetic biology
Unnatural Selection (2019 TV documentary)
References
Biological engineering
Biotechnology
Genetic engineering
Genome editing
Molecular biology | Prime editing | [
"Chemistry",
"Engineering",
"Biology"
] | 2,632 | [
"Genetics techniques",
"Biological engineering",
"Genome editing",
"Genetic engineering",
"Biotechnology",
"nan",
"Molecular biology",
"Biochemistry"
] |
76,021,562 | https://en.wikipedia.org/wiki/Curium%20nitride | Curium nitride is a binary inorganic compound of curium and nitrogen with the chemical formula CmN.
Synthesis
Curium nitride can be prepared by carbothermic nitridation of the oxide.
Physical properties
Curium nitride is solid and has a NaCl structure. It is ferromagnetic.
References
Nitrides
Curium compounds
Nitrogen compounds | Curium nitride | [
"Chemistry"
] | 76 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
76,023,812 | https://en.wikipedia.org/wiki/Artificial%20intelligence%20in%20architecture | Artificial intelligence in architecture describes the use of artificial intelligence in automation, design and planning in the architectural process, or in assisting human skills in the field of architecture. Artificial intelligence is thought to have the potential to bring about major changes in architecture.
AI's potential to optimize design, planning and productivity has been noted as an accelerator in the field of architectural work. The ability of AI to amplify an architect's design process has also been noted. Fears that artificial intelligence could replace aspects or core processes of the architectural profession have also been raised, along with questions about its philosophical implications for the profession and for creativity.
Implications
Benefits
Artificial intelligence, according to ArchDaily, is said to potentially significantly augment the architectural profession through its ability to improve the design and planning process as well as to increase productivity. Through its ability to handle large amounts of data, AI is said to potentially allow architects a range of design choices, with criteria such as budget, requirements adjusted to the space, and sustainability goals calculated as part of the design process. ArchDaily said this may allow the design of optimized alternatives that can then undergo human review. AI tools are also said to potentially allow architects to assimilate urban and environmental data to inform their designs, streamlining the initial stages of project planning and increasing efficiency and productivity.
The advances in generative design through the input of specific prompts allow architects to produce visual designs, including photorealistic images, and thus render and explore various material choices and spatial configurations. ArchDaily noted this could speed the creative process as well as allow for experimentation and sophistication in the design. Additionally, AI's capacity for pattern recognition and coding could aid architects in organizing design resources and developing custom applications, thus enhancing the efficiency and the collaboration between both architects and AI.
AI is also thought to be able to contribute to the sustainability of buildings by analyzing various factors and then recommending energy-efficient modifications, thus pushing the industry towards greener practices. The use of AI in building maintenance, project management, and the creation of immersive virtual reality experiences is also thought to potentially augment the architectural design process and workflow.
Examples include the use of text-to-image systems such as Midjourney to create detailed architectural images, and the use of AI optimization systems from companies such as Finch3D and Autodesk to automatically generate floor plans from simple programmatic inputs.
Architect Kudless in an interview to Dezeen recounted that he uses AI to innovate in architectural design by incorporating materials and scenes not usually present in initial plans, which he believes can significantly alter client presentations. He told Dezeen he believes one should show clients renderings from the onset, with AI assisting in this work, arguing that changes in design should be a positive aspect of the client-designer relationship by actively involving clients in the process. Additionally, Kudless highlighted the AI's potential to facilitate labor in architectural firms, particularly in automating rendering tasks, thus reducing the workload on junior staff while maintaining control over the creative output.
Emergent aesthetics
In an interview with Dezeen for the AItopia series, designer Tim Fu discussed the transformative potential of artificial intelligence (AI) in architecture, proposing a future where AI could herald a "neoclassical futurist" style, blending the grandeur of classical aesthetics with futuristic design. Through his collaborative project, The AI Stone Carver, Fu showcased how AI can innovate traditional practices by generating design concepts that are then realized through human craftsmanship, such as stone carving by mason Till Apfel. He believed this approach celebrated the fusion of diverse architectural styles and also emphasized the unique capabilities of AI in enhancing creative design processes.
Fu told Dezeen he envisions the integration of AI in design as a means to revive the ornamentation and detailed aesthetics characteristic of classical architecture, moving away from the minimalism which he said dominates contemporary architecture. He argued that AI's involvement in the ideation phase of design allows for a reversal in the roles of machine and human, enabling architects and designers to focus on creating more intricate and ornamental structures. Fu's optimistic outlook extended to the broader impact of AI on the architectural field; he sees it as an indispensable tool that will shift rather than replace human roles, enriching the field with innovative designs that, while embracing new technologies, pay homage to the beauty and qualities of classical architecture absent from contemporary work.
Concerns
As artificial intelligence continues to expand its presence across various industries, its impact on the architectural profession has become a topic of growing discussion. These discussions focus on how AI processes may influence traditional architectural practices, potentially altering job roles, and shaping the nature of creativity. While AI-driven processes may increase efficiency in some aspects of the profession, it also raises questions about the potential loss of unique design perspectives. These thoughts have been countered by many prominent creative figures in the realm of AI Architecture such as Stephen Coorlas, Tim Fu and Hassan Ragab, who have showcased the amplification of creativity in design and potential benefits in terms of restoring creative power to the designer.
One concern is that AI-powered tools may reduce the demand for human input in certain tasks. There is speculation that this may result in a shift toward managerial or supervisory roles for architects.
In some design scenarios, algorithmically generated solutions can be adjusted to prioritize efficiency and cost-effectiveness, which some argue may overshadow the creative and contextual nuances that define individual architectural styles. As with any discipline though, it has been determined that AI can be configured to provide beneficial results based on inputs and end goals the architect or designer assigns it.
There are also concerns about the potential for AI to exacerbate inequalities within the architectural profession. For instance, larger firms with greater resources to invest in advanced AI technologies may gain a competitive edge over smaller firms and independent architects. This dynamic could contribute to industry consolidation, potentially limiting the diversity of architectural practice and stifling innovation. Ethical considerations in regard to cultural sensitivity have also been raised due to the datasets used to train AI. Without proper vetting of data or implementing failsafe overrides, AI generated outcomes can trend toward overly documented and prioritized content.
References
Applications of artificial intelligence
Architecture
Artificial intelligence art | Artificial intelligence in architecture | [
"Engineering"
] | 1,279 | [
"Construction",
"Architecture"
] |
76,026,176 | https://en.wikipedia.org/wiki/Lomagundi-Jatuli%20Carbon%20Isotope%20Excursion | The Lomagundi-Jatuli Carbon Isotope Excursion or Lomagundi-Jatuli Event (LJE) was a carbon isotope excursion that occurred in the Paleoproterozoic between 2.3 and 2.1 Ga, possessing the largest magnitude and longest duration of positive δ13C values found in marine carbonate rocks. The δ13C values range from +5 to +30‰. Carbon isotope compositions in marine carbonates typically fluctuate around zero per mil (‰) through time. To account for the globally elevated δ13Ccarb levels of the LJE, the amount of buried organic carbon would have needed to double or triple, and to stay elevated over millions of years.
Measuring δ13Ccarb values within marine carbonate rocks provides scientists with a window into the history of fluxes in the global carbon cycle over the course of Earth history.
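For reference, δ13C values are reported in per mil (‰) relative to a standard (VPDB for carbonates) using the conventional delta notation:

\[
\delta^{13}\mathrm{C} \;=\; \left( \frac{\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\mathrm{sample}}}{\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\mathrm{standard}}} - 1 \right) \times 1000\ \text{‰}
\]

Positive δ13Ccarb values therefore indicate carbonate enriched in 13C relative to the standard.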
In the context of the global carbon cycle, "flux" refers to the movement or flow of carbon between different reservoirs or components of the Earth system. This includes the atmosphere, oceans, terrestrial biosphere (plants and soil), and geosphere (rocks and sediments). These carbon fluxes are driven by various processes, including photosynthesis (which removes CO2 from the atmosphere and incorporates it into plant biomass), respiration and decay (which return CO2 to the atmosphere), weathering of rocks (which can transfer carbon from the geosphere to the hydrosphere and atmosphere), and the dissolution and precipitation of carbonate minerals in the ocean.
Understanding these fluxes, especially within the LJE, is crucial for studying the global carbon cycle. They determine the concentration of carbon dioxide in the Earth's atmosphere, which in turn influences the planet's climate. While the LJE's high δ13Ccarb values were first thought to show a substantial local increase in organic carbon (forg) in the localities in which the elevated values were found, marine carbonate outcrops with similarly elevated values have since been found around the world, shifting consideration that this event reflects a global increase.
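The link between elevated δ13Ccarb and organic carbon burial mentioned above follows from a simple steady-state isotope mass balance. The sketch below uses illustrative textbook-style values (a mantle-like input of about −6‰ and a ~25‰ fractionation between carbonate and organic carbon); these numbers are assumptions for the example, not values taken from the studies discussed here.

```python
def organic_burial_fraction(delta_carb: float,
                            delta_in: float = -6.0,
                            frac_offset: float = 25.0) -> float:
    """Steady-state fraction of carbon buried as organic matter (f_org).

    Mass balance: delta_in = f_org * delta_org + (1 - f_org) * delta_carb,
    with delta_org = delta_carb - frac_offset, which rearranges to
    f_org = (delta_carb - delta_in) / frac_offset.
    """
    return (delta_carb - delta_in) / frac_offset

print(f"baseline (d13Ccarb =  0 permil): f_org ~ {organic_burial_fraction(0.0):.2f}")
print(f"LJE-like (d13Ccarb = +8 permil): f_org ~ {organic_burial_fraction(8.0):.2f}")
# With these illustrative parameters, sustaining LJE-like carbonate values
# implies roughly a doubling of the organic burial fraction.
```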
Changes in carbon fluxes can lead to significant variations in atmospheric CO2 levels and thus have been a major focus of research, and debate, especially in the context of anthropogenic climate change.
Locations and duration
Assuming this excursion is globally synchronous in its commencement and termination, the duration has been dated to range from a maximum of 249 ± 9 Myr (2306 ± 9 Ma to 2057 ± 1 Ma) to a minimum of 128 ± 9.4 Myr (2221 ± 5 Ma to 2106 ± 8 Ma).
Table 1: Lomagundi-Jatuli Event localities presenting similarly elevated δ13C values, formations of occurrence, dated age of formation, and procedural method of δ13C value analysis.
The extremely positive carbon isotope values that occurred during the LJE can be seen on all continents, with the notable exception of Antarctica, in units with stratigraphic thicknesses ranging from several to tens of meters. The highly elevated δ13C values were first found in the Lomagundi Group in Zimbabwe and the Jatuli group in Fennoscandia, at a time when the LJE was hypothesized to have been a local event.
Table 2: Carbonate lithology within global formations, including associated δ13Ccarb variation (‰) values, and stratigraphic thicknesses of each.
Methods
Scientists choose which geochronology method is best suited for the types of rocks and sediments they work with. Attempting to find the age of marine carbonate rocks is not without challenge, especially in the use of uranium (U) and lead (Pb). These rocks do not have an initially high composition of uranium, and contain too much lead, both of which can further undergo modification over geologic time (diagenetic overprinting).
Isotope Dilution-Thermal Ionization Mass Spectrometry (ID-TIMS) has been utilized in dating the marine carbonate rocks that record δ13C values, due to its precision of <1‰ (single analysis or weighted mean dates) in evaluating 206Pb/238U dates. This method involves a two-step process: isotope dilution, where a known amount of isotopically enriched tracer is mixed with the sample to quantify the concentration of elements, and thermal ionization mass spectrometry, where the sample is ionized at high temperatures to measure the isotopic ratios of elements.
Secondary ion mass spectrometry (SIMS) is a form of desorption mass spectrometry, which is used to analyze the composition of solid surfaces and thin films by sputtering the surface of the specimen with a focused primary ion beam and collecting and analyzing ejected secondary ions. The principle behind SIMS is straightforward but involves sophisticated instrumentation and techniques to achieve detailed surface compositional analysis at the micro to nano scale.
Thermal Ionization Mass Spectrometry (TIMS) is a highly precise and sensitive analytical technique used primarily for isotopic analysis (Rhenium-Osmium) and the determination of elemental concentrations in samples. TIMS operates on the principle of thermal ionization, where a sample is vaporized and ionized in a high-temperature environment, allowing for the separation and measurement of isotopes based on their mass-to-charge ratios.
The Re-Os (Rhenium-Osmium) geochronology method is based on the decay of 187Re to 187Os, which occurs over a long half-life, making it suitable for dating geological samples that range in age from millions to billions of years. This method is particularly useful for dating organic-rich rocks, such as black shales, and relies on the principle that the ratio of these isotopes in a sample will change over time due to the radioactive decay of 187Re to 187Os. By measuring the present-day isotope ratios and knowing the decay rate of 187Re, scientists can calculate the age of the sample.
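As a minimal sketch of the underlying age equation, the example below computes a single-stage Re-Os model age, assuming all measured radiogenic 187Os accumulated in place from 187Re decay; the decay constant and the input ratios are approximate, illustrative values. In practice, ages are usually derived from isochrons of 188Os-normalized ratios, which is where the error correlations discussed below arise.

```python
import math

LAMBDA_RE187 = 1.666e-11  # approximate decay constant of 187Re, per year

def re_os_model_age(os187_radiogenic: float, re187: float) -> float:
    """Single-stage model age in years, from 187Os* = 187Re * (exp(lambda*t) - 1)."""
    return math.log(1.0 + os187_radiogenic / re187) / LAMBDA_RE187

# Hypothetical measured abundances (same arbitrary units for both isotopes).
age = re_os_model_age(os187_radiogenic=0.035, re187=1.0)
print(f"model age ~ {age / 1e9:.2f} billion years")  # roughly 2 billion years with these inputs
```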
One of the challenges in Re-Os geochronology is dealing with the error correlations that arise from measuring these isotopes, particularly because 188Os, which is used in the denominator of the isotope ratio calculations, is associated with larger mass spectrometer uncertainties. This issue can lead to strong error correlations and potentially obscure geologically significant trends.
Genesis of the LJE
Synchronous, global-scale disturbance
The global view concludes that during the Lomagundi carbon isotope excursion, carbonates were deposited worldwide with large amounts of 13C enrichment. One hypothesis holds that the LJE is related to the Great Oxidation Event (GOE), with the LJE causing a large deviation in the global carbon reservoir, which in turn led to disequilibrium of the carbon cycle and the release of oxygen.
To explain this global 13C enrichment, the oxidation of siderite (FeCO3, along with other Fe(II) carbonate minerals) was proposed as a hypothesis, because it produces four times as much CO2 as the O2 it consumes. In this scenario, the oxidation of siderite drove the supply of carbon available for burial and further oxidation, as well as the accumulation of O2, making the length of the LJE dependent on the size of the Archean siderite reservoir.
Another hypothesis to explain the global nature of the LJE is large-scale tectonic change leading to increased degassing of volcanic CO2, which could have increased the deposition of carbonates and organic matter owing to higher weathering rates and greater nutrient delivery to the ocean. Similarly, the formation of subaerial continents or global glaciations could also have enhanced volcanic CO2, leading to the same outcome for CO2 and O2 in carbonates and the atmosphere. Supporting this, there is evidence that the first large continental plates experienced rifting and a global orogeny around 2.2-2.1 Gyr. During this time frame there was an increase in seawater 87Sr/86Sr, which indicates higher levels of continental erosion. Reinforcing the effect of high 87Sr/86Sr, the first known glaciations also occurred during 2.2-2.1 Gyr, which favours higher weathering rates by lowering sea level.
Localized, Facies driven process
This hypothesis acknowledges that there was a global change to the carbon cycle, and agrees that it was a globally synchronous event, but holds that different facies environments drive the high C-isotope values. This means that the δ13C values, and changes in those values, result from processes in individual basins, depending on where the locality lies along a carbonate platform/slope. Using δ13Ccarb data from locations worldwide and the corresponding stratigraphic descriptions, the values can be organized into open marine, nearshore marine-inner shelf, and intertidal-coastal-sabkha settings, with a noted correlation between facies and δ13Ccarb values. The mean δ13Ccarb value was +1.5 ± 2.4‰ for open marine settings, +6.2 ± 2.0‰ for the inner shelf, and +8.1 ± 3.8‰ for intertidal settings. Under this hypothesis, the extremely positive δ13C values can be explained by changes in local dissolved inorganic carbon (DIC) pools within individual basins, and are not representative of a worldwide change in ocean DIC.
Localized, Diagenesis or Methanogenesis
For a global carbon isotope excursion, sedimentary organic carbon (in shales) would be expected to show a corresponding trend, as an excursion would affect the δ13C value of the biosphere and therefore of sedimentary organic matter. Between 2.60 and 1.60 Ga, however, there is no such trend in organic carbon. Fluctuations in δ13C can instead be linked to isotopic alteration from the breakdown of organic matter during diagenesis and metamorphism.
Methanogenesis, a process in sediment columns that can produce carbonates with high levels of 13C, could have generated 13C-enriched carbonates, offering an explanation for δ13C values reaching +28‰. Under this explanation of the LJE, a deeper methanic zone in the oceans during the start of ocean oxygenation (the GOE) would push pore-water DIC to higher δ13C values, and the carbonates forming at this time would record the δ13C enrichment.
References
Wikipedia Student Program
Isotope excursions | Lomagundi-Jatuli Carbon Isotope Excursion | [
"Chemistry"
] | 2,105 | [
"Isotope excursions",
"Isotopes"
] |
76,027,809 | https://en.wikipedia.org/wiki/Clathrin-independent%20endocytosis | Clathrin-independent endocytosis refers to the cellular process by which cells internalize extracellular molecules and particles through mechanisms that do not rely on the protein clathrin, playing a crucial role in diverse physiological processes such as nutrient uptake, membrane turnover, and cellular signaling.
While clathrin-mediated endocytosis is the most efficient and dominant means of cellular cargo entry, endocytic pathways can operate without the clathrin triskelion. In the absence of clathrin at the plasma membrane, several other mechanisms allow the internalization of molecules essential for cellular function.
The induction of clathrin-independent endocytosis involves physical and chemical signaling cascades that trigger mechanical responses in the plasma membrane of a cell. Ligands can induce the cross-linking of cell surface receptors, phosphorylation of downstream relay molecules, and membrane curvature that helps engulf and process external cargo inside the cell.
Toxins also play an important role in clathrin-independent endocytosis, causing curvature and budding from the membrane upon crosslinking of the receptors.
Mechanisms
The mechanisms of clathrin-independent endocytosis can be separated into dynamin-dependent and dynamin-independent types, while others follow their own separate pathways entirely. All involve highly specialized machinery that recruits proteins, enzymes, and other factors to generate the mechanical force a cell needs to invaginate its membrane and endocytose the cargo necessary for proper cellular function.
Dynamin dependent
Caveolar
Caveolar dynamin-dependent endocytosis relies on caveolae, which are invaginations of the cell plasma membrane made up of GPI-anchored proteins, sphingolipids, and cholesterol. Together with proteins of the cavin family, caveolae provide structure for the cell membrane and associate with lipids, such as cholesterol and PIP2, to form lipid membrane rafts. In caveolar dynamin-dependent endocytosis, actin stress fibers and actin binding in response to a loss of cell adhesion help internalize caveolae and their contents. Microtubules are stabilized by beta-1 integrins and integrin signaling, which promote the recycling of caveolae.
In electron microscopy imaging, the cavin 1 marker is a particularly good indicator of the pathways that caveolar vesicles take. The dynamics of such vesicles show lifetimes ranging from 2 seconds up to 7 minutes, and about 85% of the cavin 1 protein was digested, indicating that the cavin-positive buds were internalized and degraded. These imaging techniques aided research into caveolar fate and budding as they relate to the cavin 1 protein.
RhoA-regulated
RhoA is a small GTPase that aids in the correct sorting of the beta chain of the interleukin-2 receptor (IL-2R-β). This pathway is known to be a key player in actin cytoskeleton dynamics and in recruiting actin machinery in other endocytic pathways as well. Activated RhoA is commonly coupled with Rac, and the two combined cause an increase in clathrin-independent endocytosis and pinocytosis. It has recently been found that lipid rafts containing active forms of Rac and RhoA also localize caveolae.
Inhibition of dynamin dependent endocytosis
Membrane lipids play a key role in the efficiency and effectiveness of clathrin-independent endocytic mechanisms. If sphingolipids or cholesterol are depleted or sequestered, neither caveolae-mediated nor RhoA-regulated endocytosis can proceed. Because lipid rafts can cluster and contain active signaling molecules, they play an integral role in the invagination process. In the absence of such lipids, the membrane environment is not appropriate for cell signaling and endocytosis.
In addition, the denaturation of temperature-dependent dynamin can block dynamin-dependent pathways.
Dynamin independent
CDC42-regulated
CDC42 is a small Rho-family GTPase involved in the pinching off and remodeling of the cellular plasma membrane. CDC42-regulated dynamin-independent pathways are the main route of non-clathrin, non-caveolar fluid-phase internalization. Most CDC42-regulated endocytic pathways have large, wide surface invaginations and sometimes involve the recruitment of actin-polymerization machinery that further aids in pinching off the vesicle.
In epithelial cells, the maintenance of the apical plasma membrane is regulated by CDC42, which binds to and activates PAR6 to dictate the location and positioning of tight junctions and adherens junctions. Alternatively, CDC42 can trigger a signaling cascade via FBP17 and CIP4 that can activate RhoA and Rac1 downstream.
ARF6-regulated
Although an explicit role of ARF6 has not been found, the Arf family GTPase has been proven to be a crucial factor in actin remodeling and regulating endosome dynamics by influencing and altering the recycling rates of various membrane components. This can lead to the infolding of the plasma membrane and uptake of external cargo. ARF6 is involved in the uptake of major histocompatibility class 1 proteins.
While it is assumed that ARF6 is not needed for endocytosis itself, it is required for recycling. When ARF6 is inactivated, vesicle coatings can be returned to the membrane surface, but when it is overexpressed or overactive, it can cause a cyclical activity that traps cargo in internal vesicles.
Fast endophilin-mediated endocytosis
Fast endophilin-mediated endocytosis (FEME) is a form of clathrin-independent endocytosis that uses cargo capture by cytosolic proteins to allow for endophilin- and receptor-mediated endocytosis.
Endophilin, a BAR protein, is typically bound in distinct patches to the plasma membrane by lamellipodin. Without receptor activation, these patches disassemble, typically within 5–10 seconds, and move to a new, random, nearby location to reassemble. The complex continues to probe the membrane until the correct ligand binds and the cargo is sorted to a FEME carrier. While the exact mechanisms by which the receptors sort cargo to specific carriers remain unclear, it is suggested that during cargo capture, endophilin levels rise above the critical concentration (Cc) and initiate bending of the membrane.
As endophilin levels rise, the FEME pathway requires the lipid phosphatidylinositol 3,4-bisphosphate (PI(3,4)P2) and lamellipodin, which binds PI(3,4)P2 at the leading edge and the SH3 domain of endophilin.
It may also be possible that the binding of a ligand causes receptor clustering near the binding and causes a collapse or bending of the local membrane.
FEME has the ability to internalize G-protein coupled receptors (GPCRs), receptor tyrosine kinases (RTKs), and cytokine receptors on a cell's surface.
CLIC/GEEC endocytic pathway
Clathrin-independent carriers (CLICs) are prevalent tubulovesicular membranes responsible for non-clathrin mediated endocytic events. They appear to endocytose material into GPI-anchored protein-enriched early endosomal compartment (GEECs). Collectively, CLICs and GEECs compose the Cdc42-mediated CLIC/GEEC endocytic pathway, which is regulated by GRAF1, as well as many others.
The clathrin-independent carrier pathway is the main pathway to endocytose cargo and relies heavily on GPI-anchored proteins, integrins, and other proteins to create membrane tension and fluidity. Crescent-shaped tubular clathrin-independent carriers (CLICs) mature into glycosylphosphatidylinositol (GPI)-anchored protein-enriched early endocytic compartments (GEECs).
Role of glycolipid-lectin
Glycolipid-lectins of the galectin family facilitate the tubular endocytic pits that drive CLIC/GEEC endocytosis. A glycolipid-lectin binds onto cargo via a carbohydrate and oligomerizes. This oligomerization allows the glycolipid-lectin-protein-cargo complex to interact with glycosphingolipid (GSL)-binding subunits and causes bending of the membrane. Galectin-3, galectin-8, and the GSL-dependent cellular endocytosis of CD166 are all known to involve glycolipid-lectins.
Endophilin-A2
Another pathway mediated through FEME and the use of endophilins is a toxin pathway that includes Shiga toxin B-subunit (STxB), a glycosphingolipid (GSL)-binding toxin, and cholera toxin B. These toxins have the ability to induce membrane invagination upon binding to their surface receptor. It was previously thought that cortical actin dynamics aided in the curvature upon binding. However, it is now believed that endophilin-A2 is also a key factor in stabilization for the scission of STxB endosomes. With the help of actin and dynamin, endophilin-A2 uses microtubule-associated motor proteins to control the rate and length of endosomal scission.
In model organisms
Fungi
Fungi use a homolog of RhoA, called Rho1, to carry out a similar pathway to RhoA-regulated endocytosis. In yeast cells, Rho1 requires an effector known as Bri 1 to induce cell wall stress, leading to cargo internalization.
Myo2, a type of myosin, has also been found to be important for the function of microtubule motor proteins, which help the cell membrane contract and invaginate. These include proteins such as dynein and dynactin.
In fungal cells lacking Arp2/3, clathrin-independent endocytosis also seems to be the dominant form of endocytosis.
Plants
Plant cells use both clathrin-dependent and clathrin-independent endocytosis to internalize membrane proteins and other cargo. Actin polymerization plays a key role in this endocytosis, as demonstrated by the role of Flotillin 1 (Flot1), which marks sterol- and sphingolipid-enriched membrane regions that collapse during invagination.
References
Cellular processes
Membrane biology
Cell anatomy | Clathrin-independent endocytosis | [
"Chemistry",
"Biology"
] | 2,320 | [
"Membrane biology",
"Cellular processes",
"Molecular biology"
] |
76,036,587 | https://en.wikipedia.org/wiki/Bitter%20Springs%20anomaly | The Bitter Springs anomaly (BSA) represents a sharp drop in δ13Ccarb values, on average by 8‰, lasting about 8 million years during the Tonian period. This marks a noticeable deviation from the relatively high values of the time. The anomaly is named after the Bitter Springs formation in Australia, where it was first documented. It has since been found in several other locations worldwide, including Norway, Greenland, and Canada, all in carbonate platform environments.
Geology of the anomaly
The first documentation of the BSA, in Australia, occurs between 30 and 60 meters in the stratigraphic section. Lithology here varies among halite, gypsum, dolomite, stromatolitic dolomites, and limestones, and δ13Ccarb fluctuates from upwards of 6‰ down to −4‰. In north-western Canada, the BSA is contained within the Ram Head formation, which has been confined to 1005–775 Myrs in age via zircon dating. The anomaly spans a section of approximately 175 meters, marked by high-energy stromatolite and ooid deposits, with δ13Ccarb values dropping on average from 7‰ to −2‰. In Norway and Greenland it is present within the Grusdievbreen to Svanbergfjellet formations of the Akademikerbreen group, with values ranging from 6‰ to −3‰. Here more precise dating shows the anomaly to have occurred from 810 to 802 Myrs. In Norway the anomaly is represented by a 250-meter section, as opposed to a mere 22 meters in Greenland; both belong to the same carbonate ramp to rimmed carbonate platform deposits of limestone and dolostone. Notably, the anomaly is associated with flooding plains in Canada, and the Scandinavian occurrences are capped by subaerial exposure surfaces on both ends.
Mechanisms and controversies
Several mechanisms for the anomaly have been proposed. Iodine ratio analysis has been used to suggest a shift towards euxinic ocean conditions during the time of the anomaly, representing a shift of some kind in oceanic chemistry. It is possible the BSA relates to the proliferation of eukaryotic life, as evidence for their ecological importance seems to appear only afterwards. Evidence supports the possibility of large paleomagnetic shifts of 55 degrees around the time of the anomaly. This has been tentatively linked to tectonic activity associated with the breakup of the supercontinent Rodinia. Such events would have had a large impact on the oceans, possibly reflected in the observed sea level changes and deep-sea upwelling.
As only several formations are known to document the anomaly at this time, it is still unknown if it was global or constrained to shallower depths of carbonate platforms. More instances of the BSA would have to be documented before such a conclusion could be reached.
The BSA broadly resembles other Tonian δ13Ccarb excursions, whose causes are often tied to both glaciation and the proliferation of eukaryotes.
References
Wikipedia Student Program
Tonian
Isotope excursions
Carbon | Bitter Springs anomaly | [
"Chemistry"
] | 608 | [
"Isotope excursions",
"Isotopes"
] |
76,036,800 | https://en.wikipedia.org/wiki/Petronel%20Nieuwoudt | Petronel Nieuwoudt is a South African wildlife conservationist. In 2011, she established the Care For Wild rhinoceros sanctuary in Mpumalanga, South Africa. It is the largest such sanctuary in the world. She is also the chief executive officer of the foundation.
Early life
Raised in the 1970s on a rural farm in Roedtan, Limpopo, Nieuwoudt attended Rand Afrikaans University between 1988 and 1991. Her father ran the farm; her mother was a teacher.
Career
Nieuwoudt joined the public-relations department of the South African Police Service's Endangered Species Protection Unit in 1991, and later became a captain. She founded The Game Capture School, which educated people on the capture, treatment and management of wildlife.
She next founded Sondela Wildlife Centre, in Bela-Bela, which was in operation from 2005 to 2007, followed by Tamboti Wildlife Centre, in Mookoopong, between 2007 and 2010.
In 2011, Petronel relocated from Limpopo to Mpumalanga, where she established Care For Wild Africa, a rehabilitation centre for indigenous African wildlife which has become the largest rhinoceros sanctuary in the world. The sanctuary gained its first orphan rhinoceros the following year. Four others followed shortly thereafter.
In 2014, South African National Parks invited Petronel to form a partnership to assist in the rescue, rehabilitation and protection of orphaned animals. Nieuwoudt set in motion a registration of her sanctuary as a non-profit organisation.
References
External links
"KP: Why 'rhino mother' Petronel is a hero" – BBC Radio 5, 20 May 2019
Living people
People from Limpopo
South African women
Wildlife conservation
South African ecologists
South African chief executives
Women chief executives
Year of birth missing (living people) | Petronel Nieuwoudt | [
"Biology"
] | 369 | [
"Wildlife conservation",
"Biodiversity"
] |
56,083,217 | https://en.wikipedia.org/wiki/Ferric%20EDTA | Ferric EDTA is the coordination complex formed from ferric ions and EDTA. EDTA has a high affinity for ferric ions. It gives yellowish aqueous solutions.
Synthesis and structure
Solutions of Fe(III)-EDTA, known as Jacobson's solution, are produced by combining ferrous salts with aqueous solutions of EDTA (cf. chemical equation (1) under Table (1)).
Near neutral pH, the principal complex is [Fe(EDTA)(H2O)]−, although most sources ignore the aquo ligand. The [Fe(EDTA)(H2O)]− anion has been crystallized with many cations, e.g., the trihydrate Na[Fe(EDTA)(H2O)].2H2O. The salts as well as the solutions are yellow-brown. Provided the nutrient solution in which the [Fe(EDTA)(H2O)]− complex will be used has a pH of at least 5.5, all the uncomplexed iron, as a result of incomplete synthesis reaction, will still change into the chelated ferric form.
Uses
EDTA is used to solubilize iron(III) in water. In the absence of EDTA or similar chelating agents, ferric ions form insoluble solids and are thus not bioavailable.
Together with pentetic acid (DTPA), EDTA is widely used for sequestering metal ions. Otherwise these metal ions catalyze the decomposition of hydrogen peroxide, which is used to bleach pulp in papermaking. Several million kilograms EDTA are produced for this purpose annually.
Iron chelate is commonly used for agricultural purposes to treat chlorosis, a condition in which leaves produce insufficient chlorophyll. Iron and ligand are absorbed separately by the plant roots whereby the highly stable ferric chelate is first reduced to the less stable ferrous chelate. In horticulture, iron chelate is often referred to as 'sequestered iron' and is used as a plant tonic, often mixed with other nutrients and plant foods (e.g. seaweed). It is recommended in ornamental horticulture for feeding ericaceous plants like Rhododendrons if they are growing in calcareous soils. The sequestered iron is available to the ericaceous plants, without adjusting the soil's pH, and thus, lime-induced chlorosis is prevented.
Ferric EDTA can be used as a component for the Hoagland solution or the Long Ashton Nutrient Solution. According to Jacobson (1951), the stability of ferric EDTA was tested by adding 5 ppm iron, as the complex, to Hoagland's solution at various pH values. No loss of iron occurred below pH 6. In addition to Jacobson's original recipe and a modified protocol by Steiner and van Winden (1970), an updated version for producing the ferric EDTA complex by Nagel et al. (2020) is presented in Table (1).
Jacobson's solution
Table (1) to prepare the ferric EDTA stock solution
The formation of Fe(III)-EDTA (FeY)− can be described as follows:
FeSO4∙7H2O + K2H2Y + 1/4 O2 → K[FeY(H2O)].H2O + KHSO4 + 5.5 H2O (1)
Iron chelate has also been used as a bait in the chemical control of slugs, snails and slaters in agriculture in Australia and New Zealand. They have advantages over other more generally poisonous substances used as their toxicity is more specific to molluscs.
Ferric EDTA is used as a photographic bleach to convert silver metal into silver salts, that can later be removed.
Related derivatives
Aside from EDTA, the chelating agent EDDHA is used to solubilize iron in water. It also can be used for the purposes of agriculture, accessible to plants.
In iron chelation therapy, deferoxamine has been used to treat excess iron stores, i.e. haemochromatosis.
See also
DTPA
EDDHA
Tartrate
Citrate
References
Iron(III) compounds
Coordination complexes | Ferric EDTA | [
"Chemistry"
] | 904 | [
"Coordination chemistry",
"Coordination complexes"
] |
56,083,647 | https://en.wikipedia.org/wiki/Backscattering%20cross%20section | Backscattering cross section is a property of an object that determines what proportion of incident wave energy is scattered from the object, back in the direction of the incident wave. It is defined as the area which intercepts an amount of power in the incident beam which, if radiated isotropically, would yield a reflected signal strength at the transmitter of the same magnitude as the actual object produces.
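Although the text above gives only the verbal definition, it is commonly expressed as a far-field formula analogous to the monostatic radar cross-section (the notation below is one common choice, not fixed by this article):

\[
\sigma_{b} \;=\; \lim_{R \to \infty} 4\pi R^{2}\, \frac{S_{\mathrm{scat}}(R)}{S_{\mathrm{inc}}}
\]

where S_inc is the incident power density at the object and S_scat(R) is the power density scattered back toward the transmitter, measured at distance R.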
See also
Radar cross-section
Target strength
References
Radar theory
Radiation | Backscattering cross section | [
"Physics",
"Chemistry"
] | 92 | [
"Transport phenomena",
"Waves",
"Physical phenomena",
"Radiation"
] |
65,898,168 | https://en.wikipedia.org/wiki/Nuclear%20microreactor | A nuclear microreactor is a type of nuclear reactor which can be easily assembled and transported by road, rail or air. Microreactors are 100 to 1,000 times smaller than conventional nuclear reactors, and range in capacity from 1 to 20 MWe (megawatts of electricity), compared to 20 to 300 MWe for small modular reactors (SMRs). Due to their size, they can be deployed to locations such as isolated military bases or communities affected by natural disasters. They can operate as part of the grid, independent of the grid, or as part of a small grid for electricity generation and heat supply. They are designed to provide resilient, non-carbon-emitting, and independent power in challenging environments. The nuclear fuel source for the majority of the designs is "high-assay low-enriched uranium", or HALEU.
History
Nuclear microreactors originated in the United States Navy's nuclear submarine project, which was first proposed by Ross Gunn of the United States Naval Research Laboratory in 1939. The concept was taken up by Admiral Hyman Rickover to start the American nuclear submarine program in the 1950s. The first US nuclear submarine to be constructed was the USS Nautilus, which was launched in 1955. It was fitted with Westinghouse's S2W reactor, a pressurized water reactor with an output of 10 megawatts.
Design
These reactors are made to fit in small areas where it would be inefficient to introduce a larger power plant, but still has energy needs unsuitable for generators. Nuclear microreactors, a subcategory of Small Modular Reactors (SMRs), are a developing type of nuclear power plant that is designed to generate electricity on a smaller scale than traditional nuclear reactors. These microreactors typically have a capacity of 20 megawatts or less and are designed to be modular and transportable, making them suitable for powering small communities, remote areas, and industries such as desalinization and hydrogen fuel production.
One of the primary advantages of nuclear microreactors is that they have a lower environmental impact than fossil fuels. They emit no greenhouse gases such as carbon dioxide and methane. The waste they produce is radioactive, however, creating an issue of safe handling and disposal. One of the current methods of disposal is burying waste in deep underground storage facilities such as Onkalo in Finland. In addition, they can operate continuously for up to 10 years without the need for refueling.
Microreactors use nuclear fission to generate heat, which is then used to produce electricity through a steam turbine. The reactor core is surrounded by a thick shield to protect workers and the environment from radiation. The core also contains fuel rods made of uranium or other fissile materials. As the fuel undergoes fission, it releases energy in the form of heat, which is then transferred to a coolant that circulates through the reactor. The coolant is typically water or a liquid metal, such as sodium or lead, which absorbs the heat and transfers it to a heat exchanger. The heat exchanger then transfers the heat to a secondary coolant, which is used to generate steam and produce electricity.
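A trivial sketch of the relation between thermal and electrical ratings implied by the description above; the steam-cycle efficiency and the ratings are illustrative assumptions, not figures from the article.

```python
def electrical_output_mwe(thermal_mwt, cycle_efficiency=0.33):
    """Electrical power (MWe) from thermal power (MWth) at a given conversion efficiency."""
    return thermal_mwt * cycle_efficiency

for p_th in (5, 20, 60):  # hypothetical microreactor thermal ratings in MWth
    print(f"{p_th} MWth -> {electrical_output_mwe(p_th):.1f} MWe")
```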
Microreactors and SMRs reflect a wide range of technologies, including light-water reactors (LWRs), high-temperature gas reactors (HTGRs), and advanced reactor designs, such as liquid metal fast reactors (FRs), molten salt reactors (MSRs) and heat pipe (HP) reactors. Designs can vary based on fuel, materials, coolants, power conversion systems, manufacturing techniques (such as additive manufacturing), and heat exchangers.
The heat pipe reactor design is the simplest microreactor; it improves power transfer and avoids the use of pumps to circulate the coolant. Microreactors based on HTGR technology use tristructural isotropic (TRISO) fuel, the same as that used in larger HTGR designs. For FR technologies, which offer compactness and energy efficiency, proven oxide fuels as well as more experimental metal or nitride fuels are available. The experimental fuels are expected to be more suitable for microreactors, as the residence time of the fuel in the reactor core is much longer than in conventional reactors, leading to higher radiation exposure.
One of the key features of nuclear microreactors is their small size and modularity. SMRs can be built in factories and shipped to their final destination, reducing construction costs and time. They can be installed underground, underwater, or in other remote locations, making them ideal for powering small communities, industrial sites, military installations, and other specialized locations. In addition, the modular design allows for easy scalability, allowing additional microreactors to be added to increase power output as needed.
The environmental benefit of reducing greenhouse-gas emissions and the capability of delivering low power outputs of less than 100 MWth have generated global interest in nuclear microreactors, which could potentially benefit operators with smaller control requirements. Additional benefits could include greater flexibility in siting, improved safety performance, shorter construction times, and reduced upfront investment requirements.
Challenges
Despite these advantages, nuclear microreactors still face challenges. One of the primary challenges is regulatory approval. SMRs must undergo extensive testing and certification before they can be deployed, and many countries have strict regulations in place to govern the use of SMRs such as those given by the U.S. Nuclear Regulatory Commission (NRC). The most profound issue for microreactors is the cost per kWh, as microreactors lose the power-of-scale advantages for economic efficiency. Design, operation and maintenance costs can make these low-power nuclear reactors prohibitively expensive. Economic analysis shows that despite lower capital costs, microreactors cannot compete in cost with large nuclear power plants due to economies of scale. Still, they can compete with technologies of similar size and application, such as diesel generators in small networks and renewable energies.
In addition, public perception of nuclear energy is often negative, with concerns about safety and nuclear waste disposal. The availability of High Assay Low-Enriched Uranium (HALEU) fuel on the commercial market is low, posing an issue to the viability of operating microreactors even if regulatory approval is attained. Other issues include the higher safety and proliferation risks compared to large nuclear power plants and the licensing requirements for small reactors that have yet to be established. The smaller size of a nuclear microreactor, and its use of HALEU fuels also puts it at increased risk for theft. The uranium in a nuclear microreactor is easier to convert to weapons-grade, which makes it an ideal asset for nuclear terrorism and proliferation.
Current development
Microreactors for civilian use are currently in the earliest stages of development, with individual designs at various stages of maturity. The United States has been supporting the development of small and medium reactors (SMRs) since 2012. Present work focuses on the feasibility, from a basic design perspective, of coolants commonly considered for fast reactor applications, such as sodium, molten salt, and lead-based coolants, with special attention to molten salt. Future work focuses on optimizing the basic design and performing coupled 3D calculations, covering thermohydraulics, fuel performance, and neutronics, to determine detailed behavior and operation.
As of 2010, there has also been a growing interest in mobile floating nuclear power plants, considered to be nuclear microreactors. Two recent notable examples are: The Russian plant Akademik Lomonosov, which utilizes two 35 MWe reactors, and the Chinese plant ACPR50S, which utilizes a 60 MWe reactor, classified as a marine pressurized water reactor. In addition to the Akademik Lomonosov plant, several new designs of autonomous power sources are being studied in Russia.
In 2018, NASA successfully demonstrated a kilowatt-scale microreactor based on its Kilopower technology. It is being developed to support human exploration of the Moon and Mars. It uses a unique technological approach to cool the reactor core (which is about the size of a paper towel roll): airtight heat pipes transfer reactor heat to engines that convert the heat to electricity. The coolant and fuel for the reactor core were selected through a series of scoping calculations that use the reactor vessel and internal dimensions, followed by calculations of vibrations and hypothetical core-disruptive accidents.
In April 2022, the US Department of Defense announced its approval of Project Pele, an initiative to lower carbon emissions by the DOD by investing in nuclear technologies. The project has a budget of $300 million to develop a miniaturized reactor capable of generating 1.5 megawatts for a minimum of three years. The US Department of Strategic Capabilities partnered with BWXT Technologies in June 2022 to accomplish this. BWXT Tech developed a high-temperature gas-cooled reactor (HTGR) which will generate between 1 and 5 MWe and will be transportable in shipping containers. It will be powered by TRISO fuel, a specific design of high-assay low-enriched uranium (HALEU) fuel that can withstand high temperatures and has relatively low environmental risks.
The US Department of Energy (DOE) is also currently planning to develop a 100 kWt reactor in Idaho called the Microreactor Applications Research Validation and Evaluation (MARVEL) reactor.
The US Department of Defense anticipates deadlines and challenges for the deployment of the first small reactor by the end of 2027. The nominal time from license application to commercialization is estimated at 7 years.
References
External links
https://www.energy.gov/ne/articles/what-nuclear-microreactor US Office of Nuclear Energy: What is a Nuclear Microreactor?
Energy conversion
Nuclear technology
Power station technology
Nuclear research reactors
Nuclear power reactor types
Nuclear power | Nuclear microreactor | [
"Physics"
] | 2,012 | [
"Nuclear power",
"Physical quantities",
"Nuclear technology",
"Power (physics)",
"Nuclear physics"
] |
57,822,802 | https://en.wikipedia.org/wiki/Parthenin | Parthenin is a chemical compound classified as a sesquiterpene lactone. It has been isolated from Parthenium hysterophorus.
It is genotoxic, allergenic, and an irritant. Parthenin is believed to be responsible for the dermatitis caused by Parthenium hysterophorus.
References
sesquiterpene lactones
Vinylidene compounds
Plant toxins | Parthenin | [
"Chemistry"
] | 92 | [
"Chemical ecology",
"Plant toxins",
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
57,824,255 | https://en.wikipedia.org/wiki/%28Benzene%29ruthenium%20dichloride%20dimer | (Benzene)ruthenium dichloride dimer is the organoruthenium compound with the formula [(C6H6)RuCl2]2. This red-coloured, diamagnetic solid is a reagent in organometallic chemistry and homogeneous catalysis.
Preparation, structure, and reactions
The dimer is prepared by the reaction of cyclohexadienes with hydrated ruthenium trichloride. As verified by X-ray crystallography, each Ru center is coordinated to three chloride ligands and an η6-benzene. The complex can be viewed as an edge-shared bioctahedral structure.
(Benzene)ruthenium dichloride dimer reacts with Lewis bases to give monometallic adducts:
[(C6H6)RuCl2]2 + 2 PPh3 → 2 (C6H6)RuCl2(PPh3)
Related compounds
(cymene)ruthenium dichloride dimer, a more soluble analogue of (benzene)ruthenium dichloride dimer.
(mesitylene)ruthenium dichloride dimer, another more soluble derivative.
References
Organoruthenium compounds
Chloro complexes
Dimers (chemistry)
Half sandwich compounds
Ruthenium(II) compounds | (Benzene)ruthenium dichloride dimer | [
"Chemistry",
"Materials_science"
] | 266 | [
"Half sandwich compounds",
"Organometallic chemistry",
"Dimers (chemistry)",
"Polymer chemistry"
] |
57,825,535 | https://en.wikipedia.org/wiki/ISO%209847 | ISO 9847, Solar energy — Calibration of field pyranometers by comparison to a reference pyranometer, is an ISO standard for the calibration of pyranometers.
References
09847
Meteorological instrumentation and equipment | ISO 9847 | [
"Technology",
"Engineering"
] | 53 | [
"Meteorological instrumentation and equipment",
"Measuring instruments"
] |
57,828,301 | https://en.wikipedia.org/wiki/Transition%20metal%20arene%20complex | Metal arene complexes are organometallic compounds of the formula (C6R6)xMLy. Common classes are of the type (C6R6)ML3 and (C6R6)2M. These compounds are reagents in inorganic and organic synthesis. The principles that describe arene complexes extend to related organic ligands such as many heterocycles (e.g. thiophene) and polycyclic aromatic compounds (e.g. naphthalene).
Synthesis
Fischer–Hafner synthesis
Also known as reductive Friedel–Crafts reaction, the Fischer–Hafner synthesis entails treatment of metal chlorides with arenes in the presence of aluminium trichloride and aluminium metal. The method was demonstrated in the 1950s with the synthesis of bis(benzene)chromium by Walter Hafner and his advisor E. O. Fischer. The method has been extended to other metals, e.g. [Ru(C6Me6)2]2+. In this reaction, the AlCl3 serves to remove chloride from the metal precursor, and the Al metal functions as the reductant. The Fischer-Hafner synthesis is limited to arenes lacking sensitive functional groups.
Direct synthesis
By metal vapor synthesis, metal atoms co-condensed with arenes react to give complexes of the type M(arene)2. Cr(C6H6)2 can be produced by this method.
Cr(CO)6 reacts directly with benzene and other arenes to give the piano stool complexes Cr(C6R6)(CO)3. The carbonyls of Mo and W behave comparably. The method works particularly well with electron-rich arenes (e.g., anisole, mesitylene). The reaction has been extended to the synthesis of [Mn(C6R6)(CO)3]+:
BrMn(CO)5 + Ag+ + C6R6 → [Mn(C6R6)(CO)3]+ + AgBr + 2 CO
From hexadienes
Few Ru(II) and Os(II) complexes react directly with arenes. Instead, arene complexes of these metals are typically prepared by treatment of M(III) precursors with cyclohexadienes. For example, heating alcohol solutions of 1,3- or 1,4-cyclohexadiene and ruthenium trichloride gives (benzene)ruthenium dichloride dimer. The conversion entails dehydrogenation of an intermediate diene complex.
Alkyne trimerization
Metal complexes are known to catalyze alkyne trimerization to give arenes. These reactions have been used to prepare arene complexes. Illustrative is the reaction of [Co(mesitylene)2]+ with 2-butyne to give [Co(C6Me6)2]+.
Structure
In most of its complexes, arenes bind in an η6 mode, with six nearly equidistant M-C bonds. The C-C-C angles are unperturbed vs the parent arene, but the C-C bonds are elongated by 0.2 Å. In the fullerene complex Ru3(CO)9(C60), the fullerene binds to the triangular face of the cluster.
η4- and η2-Arene complexes
In some complexes, the arene binds through only two or four carbons, η2 and η4 bonding, respectively. In these cases, the arene is no longer planar. Because the arene is dearomatized, the uncoordinated carbon centers display enhanced reactivity. A well studied example is [Ru(η6-C6Me6)(η4-C6Me6)]0, formed by the reduction of [Ru(η6-C6Me6)2]2+. An example of an η2-arene complex is [Os(η2-C6H6)(NH3)5]2+.
Reactivity
When bound in the η6 manner, arenes often function as unreactive spectator ligands, as illustrated by several homogeneous catalysts used for transfer hydrogenation, such as (η6-C6R6)Ru(TsDPEN). In cationic arene complexes or those supported by several CO ligands, the arene is susceptible to attack by nucleophiles to give cyclohexadienyl derivatives.
Particularly from the perspective of organic synthesis, the decomplexation of arenes is of interest. Decomplexation can often be induced by treatment with excess of ligand (MeCN, CO, etc).
References
Ligands
Organometallic chemistry
Coordination chemistry
Transition metals | Transition metal arene complex | [
"Chemistry"
] | 1,001 | [
"Coordination chemistry",
"Ligands",
"Organometallic chemistry",
"Half sandwich compounds"
] |
44,478,442 | https://en.wikipedia.org/wiki/Nanophase%20ceramic | Nanophase ceramics are ceramics that are nanophase materials (that is, materials that have grain sizes under 100 nanometers).
They have the potential for superplastic deformation. Because of the small grain size and the added grain boundaries, properties such as ductility, hardness, and reactivity change drastically compared with ceramics having larger grains.
Structure
The structure of nanophase ceramics is not very different from that of conventional ceramics. The main difference is the amount of surface area per unit mass. Particles of bulk ceramics have small surface areas, but when those particles are shrunk to within a few nanometers, the surface area of the same mass of ceramic greatly increases. In general, nanophase materials therefore have greater surface areas than materials of similar mass at a larger scale. This is important because a very large surface area allows the particles to be in contact with more of their surroundings, which in turn increases the reactivity of the material. The reactivity of a material affects its mechanical properties and chemical properties, among many other things. This is especially true in nanophase ceramics.
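A rough illustration of this surface-area argument, assuming idealised monodisperse spherical particles; the density value is an approximate figure for titanium dioxide and the function name is illustrative.

```python
def specific_surface_area(diameter_m, density_kg_m3):
    """Surface area per unit mass (m^2/kg) for spheres: SSA = 6 / (rho * d)."""
    return 6.0 / (density_kg_m3 * diameter_m)

rho_tio2 = 4230.0  # approximate density of titanium dioxide, kg/m^3
for d in (1.3e-6, 100e-9, 12e-9):  # 1.3 um, 100 nm and 12 nm particles
    print(f"d = {d * 1e9:7.1f} nm -> SSA = {specific_surface_area(d, rho_tio2):9.1f} m^2/kg")
```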
Properties
Nanophase ceramics have properties that differ from those of regular ceramics because of their increased reactivity. Nanophase ceramics exhibit different mechanical properties from their conventional counterparts, such as higher hardness, higher fracture toughness, and higher ductility. This contrasts with conventional ceramics, which behave as brittle materials with low ductility.
Titanium dioxide
Titanium dioxide (TiO2) has been shown to have increased hardness and ductility at the nanoscale. In an experiment, grains of titanium dioxide that had an average size of 12 nanometers were compressed at 1.4 GPa and sintered at 200 °C. The result was a grain hardness of about 2.2 times greater than that of grains of titanium dioxide with an average size of 1.3 micrometers at the same temperature and pressure. In the same experiment, the ductility of titanium dioxide was measured. The strain rate sensitivity of a 250 nanometer grain of titanium dioxide was about 0.0175, while a grain with size of about 20 nanometers had a strain rate sensitivity of approximately 0.037, a significant increase.
Processing
Nanophase ceramics can be processed from atomic, molecular, or bulk precursors. Gas condensation, chemical precipitation, aerosol reactions, biological templating, chemical vapor deposition, and physical vapor deposition are techniques used to synthesize nanophase ceramics from molecular or atomic precursors. To process nanophase ceramics from bulk precursors, mechanical attrition, crystallization from the amorphous state, and phase separation are used. Synthesizing nanophase ceramics from atomic or molecular precursors is generally preferred because it allows greater control over the microscopic features of the resulting ceramic.
Gas condensation
Gas condensation is one way nanophase ceramics are produced. First, precursor ceramics are evaporated from sources within a gas-condensation chamber. Then the ceramics are condensed in a gas (dependent on the material being synthesized) and transported via convection to a liquid-nitrogen-filled cold finger. Next, the ceramic powders are scraped off the cold finger and collected in a funnel below it. The ceramic powders are then consolidated, first in a low-pressure compaction device and then in a high-pressure compaction device. This all occurs in a vacuum, so no impurities can enter the chamber and affect the resulting nanophase ceramics.
Applications
Nanophase ceramics have unique properties that make them optimal for a variety of applications.
Drug delivery
Materials used in drug delivery in the past ten years have primarily been polymers. However, nanotechnology has opened the door for the use of ceramics with benefits not previously seen in polymers. The large surface area to volume ratio of nanophase materials makes it possible for large amounts of drugs to be released over long periods of time. Nanoparticles to be filled with drugs can be easily manipulated in size and composition to allow for increased endocytosis of drugs into targeted cells and increased dispersion through fenestrations in capillaries. While these benefits all relate to nanoparticles in general (including polymers), ceramics have other, unique abilities. Unlike polymers, slow degradation of ceramics allows for longer release of the drug. Polymers also tend to swell in liquid which can cause an unwanted burst of drugs. The lack of swelling shown by most ceramics allows for increased control. Ceramics can also be created to match the chemistry of biological cells in the body increasing bioactivity and biocompatibility. Nanophase ceramic drug carriers are also able to target specific cells. This can be done by manufacturing a material to bond to the specific cell or by applying an external magnetic field, attracting the carrier to a specific location.
Bone substitution
Nanophase ceramics have great potential for use in orthopedic medicine. Bone and collagen have structures on the nanoscale. Nanomaterials can be manufactured to simulate these structures which is necessary for grafts and implants to successfully adapt to and handle varying stresses. The surface properties of nanophase ceramics is also very important for bone substitution and regeneration. Nanophase ceramics have much rougher surfaces than larger materials and also have increased surface area. This promotes reactivity and absorption of proteins that assist tissue development. Nano-hydroxyapatite is one nanophase ceramic that is used as a bone substitute. Nano grain size increases the bonding, growth, and differentiation of osteoblasts onto the ceramic. The surfaces of nanophase ceramics can also be modified to be porous allowing osteoblasts to create bone within the structure. The degradation of the ceramic is also important because the rate can be changed by changing the crystallinity. This way as bone grows the substitute can diminish at a similar rate.
References
Ceramic materials
Ceramic engineering
Materials | Nanophase ceramic | [
"Physics",
"Engineering"
] | 1,200 | [
"Ceramic engineering",
"Materials",
"Ceramic materials",
"Matter"
] |
44,481,226 | https://en.wikipedia.org/wiki/Ostrogradsky%20instability | In applied mathematics, the Ostrogradsky instability is a feature of some solutions of theories having equations of motion with more than two time derivatives (higher-derivative theories). It is suggested by a theorem of Mikhail Ostrogradsky in classical mechanics according to which a non-degenerate Lagrangian dependent on time derivatives higher than the first corresponds to a Hamiltonian unbounded from below. As usual, the Hamiltonian is associated with the Lagrangian via a Legendre transform. The Ostrogradsky instability has been proposed as an explanation as to why no differential equations of higher order than two appear to describe physical phenomena.
However, Ostrogradsky's theorem does not imply that all solutions of higher-derivative theories are unstable as many counterexamples are known.
Outline of proof
The main points of the proof can be made clearer by considering a one-dimensional system with a Lagrangian $L(q, \dot{q}, \ddot{q})$ depending on the coordinate $q$ and its first two time derivatives. The Euler–Lagrange equation is
$$\frac{\partial L}{\partial q} - \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} + \frac{d^2}{dt^2}\frac{\partial L}{\partial \ddot{q}} = 0.$$
Non-degeneracy of $L$ means that the canonical coordinates can be expressed in terms of the derivatives of $q$ and vice versa. Thus, $\partial L/\partial \ddot{q}$ is a function of $\ddot{q}$ (if it were not, the Jacobian $\partial^2 L/\partial \ddot{q}^2$ would vanish, which would mean that $L$ is degenerate), meaning that we can write $\partial L/\partial \ddot{q} = F(q, \dot{q}, \ddot{q})$ or, inverting, $\ddot{q} = G(q, \dot{q}, \partial L/\partial \ddot{q})$. Since the evolution of $q$ depends upon four initial parameters, this means that there are four canonical coordinates. We can write those as
$$Q_1 := q, \qquad Q_2 := \dot{q},$$
and by using the definition of the conjugate momentum,
$$P_1 := \frac{\partial L}{\partial \dot{q}} - \frac{d}{dt}\frac{\partial L}{\partial \ddot{q}}, \qquad P_2 := \frac{\partial L}{\partial \ddot{q}}.$$
The above results can be obtained as follows. First, we rewrite the Lagrangian into "ordinary" (first-order) form by introducing a Lagrange multiplier $\lambda$ and an auxiliary variable $x$ as new dynamical variables,
$$L'(q, \dot{q}, x, \dot{x}, \lambda) = L(q, x, \dot{x}) + \lambda\,(\dot{q} - x),$$
from which the Euler–Lagrange equations for $q$, $x$ and $\lambda$ read
$$\frac{d\lambda}{dt} = \frac{\partial L}{\partial q},$$
$$\lambda = \frac{\partial L}{\partial x} - \frac{d}{dt}\frac{\partial L}{\partial \dot{x}},$$
$$\dot{q} = x.$$
Now, the canonical momenta with respect to $q$ and $x$ are readily shown to be
$$p_q = \frac{\partial L'}{\partial \dot{q}} = \lambda,$$
while
$$p_x = \frac{\partial L'}{\partial \dot{x}} = \frac{\partial L}{\partial \dot{x}}.$$
These are precisely the definitions given above by Ostrogradski, with $Q_1 = q$, $Q_2 = x$, $P_1 = p_q$ and $P_2 = p_x$.
One may proceed further to evaluate the Hamiltonian
$$H = p_q \dot{q} + p_x \dot{x} - L' = \lambda\, x + p_x \dot{x} - L(q, x, \dot{x}),$$
where one makes use of the above Euler–Lagrange equations for the second equality.
We note that due to non-degeneracy, we can write $\dot{x}$ as $\dot{x} = a(q, x, p_x)$.
Here, only three arguments are needed since the Lagrangian itself only has three free parameters.
Therefore, the last expression only depends on $(Q_1, Q_2, P_1, P_2)$; it effectively serves as the Hamiltonian of the original theory, namely
$$H(Q_1, Q_2, P_1, P_2) = P_1 Q_2 + P_2\, a(Q_1, Q_2, P_2) - L\big(Q_1, Q_2, a(Q_1, Q_2, P_2)\big).$$
We now notice that the Hamiltonian is linear in $P_1$. This is a source of the Ostrogradsky instability, and it stems from the fact that the Lagrangian depends on fewer coordinates than there are canonical coordinates (which correspond to the initial parameters needed to specify the problem). The extension to higher-dimensional systems is analogous, and the extension to higher derivatives simply means that the phase space is of even higher dimension than the configuration space.
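As a concrete illustration of this construction (not part of the original article), the following sketch builds the Ostrogradsky Hamiltonian for the Pais–Uhlenbeck oscillator, a standard higher-derivative toy model; the symbol and variable names are illustrative. The resulting Hamiltonian comes out linear in $P_1$ and is therefore unbounded below.

```python
# A minimal sketch assuming sympy is available; the Pais-Uhlenbeck Lagrangian
# is a standard textbook example, not taken from the article itself.
import sympy as sp

t = sp.symbols('t')
w1, w2 = sp.symbols('omega_1 omega_2', positive=True)
q = sp.Function('q')(t)
qd = q.diff(t)          # q'
qdd = q.diff(t, 2)      # q''

# Higher-derivative Lagrangian L(q, q', q'')
L = sp.Rational(1, 2) * (qdd**2 - (w1**2 + w2**2) * qd**2 + w1**2 * w2**2 * q**2)

# Ostrogradsky momenta: P1 = dL/dq' - d/dt(dL/dq''),  P2 = dL/dq''
P1_expr = L.diff(qd) - L.diff(qdd).diff(t)
P2_expr = L.diff(qdd)

Q1, Q2, P1, P2, a = sp.symbols('Q_1 Q_2 P_1 P_2 a')

# Non-degeneracy: invert P2 = dL/dq'' to get q'' = a(Q1, Q2, P2)
accel = sp.solve(sp.Eq(P2, P2_expr.subs(qdd, a)), a)[0]

# Ostrogradsky Hamiltonian  H = P1*Q2 + P2*a - L(Q1, Q2, a)
L_on_shell = L.subs([(qdd, accel), (qd, Q2), (q, Q1)])
H = sp.expand(P1 * Q2 + P2 * accel - L_on_shell)

print(P1_expr)  # momentum conjugate to q in Ostrogradsky's construction
print(H)        # linear in P_1 -> unbounded below (the Ostrogradsky instability)
```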
Notes
Lagrangian mechanics
Hamiltonian mechanics
Calculus of variations
Mathematical physics | Ostrogradsky instability | [
"Physics",
"Mathematics"
] | 579 | [
"Applied mathematics",
"Theoretical physics",
"Lagrangian mechanics",
"Classical mechanics",
"Hamiltonian mechanics",
"Mathematical physics",
"Dynamical systems"
] |
64,411,788 | https://en.wikipedia.org/wiki/Super-AGB%20star | A super-AGB star is a star with a mass intermediate between those that end their lives as a white dwarf and those that end with a core collapse supernova, and properties intermediate between asymptotic giant branch (AGB) stars and red supergiants. They have initial masses of in stellar-evolutionary models, but have exhausted their core hydrogen and helium, left the main sequence, and expanded to become large, cool, and luminous.
HR diagram
Super-AGB stars occupy the top-right of the Hertzsprung–Russell diagram (HR diagram), and have cool temperatures between 3,000 and , which is similar to normal AGB stars and red supergiant stars (RSG stars). These cool temperatures allow molecules to form in their photospheres and atmospheres. Super-AGB stars emit most of their light in the infra-red spectrum because of their extremely cool temperatures.
The Chandrasekhar limit and their life
A super-AGB star's core may grow to (or past) the Chandrasekhar mass because of continued hydrogen (H) and helium (He) shell burning, ending as core-collapse supernovae. The most massive super-AGB stars (at around ) are theorized to end in electron capture supernovae. The error in this determination due to uncertainties in the third dredge-up efficiency and AGB mass-loss rate could lead to about a doubling of the number of electron-capture supernovae, which also supports the theory that these stars make up 66% of the supernovae detected by satellites.
These stars are at a similar stage in life to red giant stars, such as Aldebaran, Mira, and Chi Cygni, and are at a stage where they start to brighten, and their brightness tends to vary, along with their size and temperature.
These stars represent a transition to the more massive supergiant stars that undergo full fusion of elements heavier than helium. During the triple-alpha process, some elements heavier than carbon are also produced: mostly oxygen, but also some magnesium, neon, and even heavier elements, gaining an oxygen-neon (ONe) core. Super-AGB stars develop partially degenerate carbon–oxygen cores that are large enough to ignite carbon in a flash analogous to the earlier helium flash. The second dredge-up is very strong in this mass range and that keeps the core size below the level required for burning of neon as occurs in higher-mass supergiants.
References
attribution contains text copied from Asymptotic giant branch available under CC-BY-SA-3.0
Asymptotic-giant-branch stars
Stellar evolution | Super-AGB star | [
"Physics"
] | 553 | [
"Astrophysics",
"Stellar evolution"
] |
64,413,209 | https://en.wikipedia.org/wiki/Splash%20lubrication | Splash lubrication is a rudimentary form of lubrication found in early engines. Such engines could be external combustion engines (such as stationary steam engines), or internal combustion engines (such as petrol, diesel or paraffin engines).
Description
An engine that uses splash lubrication requires neither oil pump nor oil filter. Splash lubrication is an antique system whereby scoops on the big-ends of the connecting rods dip into the oil sump and splash the lubricant upwards towards the cylinders, creating an oil mist which settles into droplets. The oil droplets then pass through drillings to the bearings and thereby lubricate the moving parts. Provided that the bearing is a ball bearing or a roller bearing, splash lubrication would usually be sufficient; however, plain bearings typically need a pressure feed to maintain the oil film, loss of which leads to overheating and seizure.
The splash lubrication system counts simplicity, reliability, and cheapness among its virtues. However, splash lubrication can work only on very low-revving engines, as otherwise the sump oil would become a frothy mousse. The Norwegian firm Sabb Motor produced a number of small marine diesel engines, mostly single-cylinder or twin-cylinder units, that used splash lubrication.
Modern use of splash lubrication
Splash lubrication is still used in modern engines and mechanisms, such as:
the Robinson R22 helicopter uses splash lubrication on some bevel gears.
all BMW motorcycles with shaft drive use splash lubrication in the final drive hub.
British pre-unit and unit construction motorcycles, (such as Triumph, BSA and Norton) used splash lubrication in their gearboxes.
Cars and lorries invariably still use splash lubrication in their differentials.
See also
Oil pressure
References
External links
International Council for Machinery Lubrication
Machinery Lubrication magazine (archived)
Lubrication
Tribology
Lubricants | Splash lubrication | [
"Chemistry",
"Materials_science",
"Engineering"
] | 402 | [
"Tribology",
"Mechanical engineering",
"Materials science",
"Surface science"
] |
64,415,092 | https://en.wikipedia.org/wiki/Coronavirus%20breathalyzer | A coronavirus breathalyzer is a diagnostic medical device enabling the user to test with 90% or greater accuracy the presence of severe acute respiratory syndrome coronavirus 2 in an exhaled breath.
As of the first half of 2020, the idea of a practical coronavirus breathalyzer was concomitantly developed by unrelated research groups in Australia, Canada, Finland, Germany, Indonesia, Israel, Netherlands, Poland, Singapore, United Kingdom and USA.
Australia
In Australia, GreyScan CEO Samantha Ollerton and Prof. Michael Breadmore of the University of Tasmania are basing a coronavirus breathalyzer on existing technology that is used around the world to detect explosives. In February 2022, ABC News reported on another device, the "Queensland Breath test", produced by Colin Hickey and Examin Holdings, which is claimed to be 98% effective and is equipped with a replaceable plastic nozzle for reuse. According to a statement by Bruce Thompson, a professor at Swinburne University of Technology, although the product is reliable, insufficient funding has left it inaccessible.
Canada
Canary Health Technologies, headquartered in Toronto with offices in Cleveland, Ohio, is developing a breathalyzer with disposable nanosensors using AI-powered cloud-based analysis. According to a press release, clinical trials began in India during November 2020. The stated goal is to develop an accurate, reasonably priced screening tool that can be used anywhere and deliver a result in less than a minute. The company postulates that analyzing volatile organic compounds in human breath could potentially detect diseases before the on-set of symptoms, earlier than currently available methods. Moreover, the cloud-based technology is designed to be used as a disease surveillance apparatus.
Finland
By the end of June 2020, Forum Virium Helsinki, in collaboration with Finnish software firm Deep Sensing Algorithms, funded by the Helsinki-Uusimaa Regional Council, announced that testing of their device had begun with a control group in Kazakhstan, with plans to expand to the Netherlands, the United States, South Africa, Brazil and Finland throughout the summer. The efficacy of the Forum Virium Helsinki / Deep Sensing Algorithms device hinges on its AI component. "We are engaged in innovative cooperation with corporations to solve the coronavirus crisis, and we will help firms to use the city as a development platform. We are utilizing artificial intelligence and digitalization," said Forum Virium Helsinki CEO Mika Malin.
Germany
In March 2020, the Singaporean company RAM Global conducted research in Germany in hopes of developing a one-minute breathalyzer test for SARS-CoV-2 based on terahertz time-domain spectroscopy. The company attempted to develop a disposable test kit for direct detection of COVID-19 virion particles in breath, saliva and swab samples. On 31 March, RAM Global completed an initial clinical study on live patients at University Hospital Saarland. In April, the company pursued a small unknown sample study in which hospital doctors provided unknown samples in order to test accuracy in differentiating positive and negative samples.
Indonesia
Since April 2020, a team of researchers from Gadjah Mada University (UGM) has been developing an electronic nose called GeNose C19. The GeNose C19 can be used as a rapid, non-invasive screening tool in less than two minutes. A profiling test was carried out at the Bhayangkara Hospital and the Covid Bambanglipuro Special Field Hospital in Yogyakarta. GeNose C19 consists of gas sensors and an artificial intelligence-based pattern recognition system. The diagnostic test was carried out with the cooperation of nine multi-center hospitals.
In the end of December 2020, GeNose C19 received a distribution permit from Indonesia's Health Ministry. Initially, 100 units will be released and each device will be able to perform 120 tests per day. The test is estimated to cost 15,000–25,000 Indonesian rupiah ($1–$1.8) and would take three minutes for the test and another two minutes to yield a result. Researchers hope to manufacture up to 1,000 GeNose C19 units, increasing the country's testing capabilities by 120 thousand subjects per day. Moreover, they aim to manufacture 10,000 units by February 2021.
Israel
In Israel, it is at the photonics lab of Gabby Sarusi, professor at Ben-Gurion University of the Negev, that research is underway as of midsummer 2020. Separately from Sarusi's project, in July 2020, it was reported that Israeli start-up Nanoscent in cooperation with Sheba Medical Center had devised a breathalyzer that Magen David Adom (MDA) is seeking to incorporate into existing drive-thru testing stations located throughout the country.
A dispute over Gabby Sarusi's intellectual property relating to this project is currently before a court in Israel.
The Netherlands
A breath test with the SpiroNose device, made by the Dutch company Breathomix, has been developed and tested in collaboration with the Leiden University Medical Center (LUMC), Franciscus Gasthuis & Vlietland and the GGD Amsterdam. The breath test has been validated as a pre-screening test for people who have no or mild symptoms of COVID-19. From April 2021, the device was operational in COVID-19 test drive-ins, conferences and events, i.e. Eurovision Song Contest 2021. Subjects must abstain from alcohol for eight hours prior to taking the breath test.
The SpiroNose contains four sets of seven different sensors that can measure the mixture of volatile organic compounds (biomarkers) in the exhaled air. These VOCs provide a picture of a person's metabolism. This 'breath profile' is forwarded to an online analysis platform. Here the breath profile is compared with other breath profiles of people with and without a COVID-19 diagnosis and analysed by algorithms. Data-analysis involves advanced signal processing and statistics based on independent t-tests followed by linear discriminant and ROC analysis. The test result is known within minutes.
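The kind of analysis described above can be sketched as follows; this is an illustration with synthetic data, not Breathomix's actual pipeline, and all variable names and dimensions are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_subjects, n_features = 400, 28          # e.g. responses from 4 x 7 sensors
y = rng.integers(0, 2, size=n_subjects)   # 0 = SARS-CoV-2 negative, 1 = positive
X = rng.normal(size=(n_subjects, n_features)) + 0.8 * y[:, None]  # shifted positives

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
lda = LinearDiscriminantAnalysis().fit(X_train, y_train)  # linear discriminant analysis
scores = lda.decision_function(X_test)                    # discriminant scores for ROC analysis
print("ROC AUC:", round(roc_auc_score(y_test, scores), 3))
```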
The breath test has a sensitivity/specificity for SARS-CoV-2 infection of 100/78, >99/84, 98/82% in validation, replication and asymptomatic cohorts of patients. The breath test reliably detects who is not infected. Such a subject will receive a test result immediately. Other subjects must promptly conduct a subsequent test, for example a PCR test or LAMP test. The test results can be viewed by the client and are not automatically interfaced to other databases, i.e. for public health surveillance, source and contact tracing, vaccination programs. In July 2021, the ministry stopped the tests with the SpiroNose because, according to the GGD, the device gives unusable results in some cases. Breathomix indicates that this is the result of the way in which the SpiroNose is deployed. The SpiroNose is and remains a reliable instrument for lung diseases.
The analysis platform was developed in conformance with the requirements of the standards ISO 27001 (Information Security) and NEN 7510 (Information Security in Health Care).
A CE marking has been requested. In the meantime, the Dutch minister has granted a CE marking exemption on 25 January 2021.
The device may also be used to detect other diseases, e.g., asthma, COPD, lung cancer, interstitial lung diseases (ILD).
Poland
In February 2021, the President of Poland, Andrzej Duda, announced that ML System S. A., headquartered in Zaczernie, Poland, had successfully developed a means of analyzing a patient's breath to test for the presence of coronavirus. According to an anonymous press release, test subjects exhale into a device in order to determine the presence of the coronavirus. The procedure, similar to that of a police breathalyzer, is said to take less than ten seconds. Independent clinical trials were begun in April 2021. In the first half of May 2021, a brief text concerning partial results was published by ML System, stating that independent clinical trials were successful with specificity (97,15%) and accuracy/sensitivity (86,86%), for CT (Cycle Threshold) assumed at 25, which is in line with the guidelines set out by the World Health Organization. Moreover, ML System in partnership with Rzeszów–Jasionka Airport published a statement indicating their intention to test the device at the airport. Similar plans exist between the manufacturer and the Warsaw Chopin Airport. Two large networks of laboratories in Poland, "Diagnostyka" and "ALAB Laboratoria", have signed a letter of intent with ML System. In agreement with ALAB, the parties declared cooperation in the implementation of the product named "COVID DETECTOR" on the Polish, German and Ukrainian markets. In addition, the companies declared joint activities aimed at extending the diagnosis with the use of "COVID Detector" to include mutations of the SARS-CoV-2 virus, differentiate the stage of the disease and other pathogens, including tuberculosis. Cooperation with laboratories Diagnostyka, including detection of mutations of SARS-CoV-2 virus or other pathogens, also involves the diagnosis of cancer with the use of the device.
United Kingdom
In January 2021, Exhalation Technology Ltd (ETL) in Cambridge announced a clinical trial study for a cohort of up to 150 patients for its CoronaCheck breath test for COVID-19.
United States
In 2020, research teams at the University of California, Los Angeles, and Ohio State University received funding to investigate the potential of breath analysis for SARS-CoV-2 detection. These investigations included exploring technologies that might enable rapid diagnosis, potentially within 15 seconds, by analyzing specific volatile organic compounds present in exhaled breath. "The goal in this research is to develop cheap, massively deployable, rapid diagnostic and sentinel systems for detecting respiratory illness and airborne viral threats," says Prof. Pirouz Kavehpour of UCLA Henry Samueli School of Engineering and Applied Science, whose research team received a one-year, $150,000 research grant from the National Science Foundation.
In April 2022, the FDA authorized for emergency use the first COVID-19 diagnostic test using breath samples. "The InspectIR COVID-19 Breathalyzer uses a technique called gas chromatography gas mass-spectrometry (GC-MS) to separate and identify chemical mixtures and rapidly detect five Volatile Organic Compounds (VOCs) associated with SARS-CoV-2 infection in exhaled breath," said the FDA.
Researchers at the Washington University in St. Louis in 2023 reported developing a point-of-care COVID-19 test device using a sensor to directly detect the SARS-CoV-2 virus in exhaled breath. In early testing, the experimental breathalyzer design provided results with high accuracy in about a minute. It functions by collecting a breath sample and directing it towards an electrochemical biosensor coated with antibodies specific to the SARS-CoV-2 virus. If the virus is present, the sensor produces a signal, indicating a positive test. This approach directly detects the virus itself, unlike some breath tests that identify indirect markers of infection. In late 2023, the researchers announced they had been awarded a $3.6 million grant to investigate the possibility of adapting the device to include testing for other respiratory viruses, such as influenza, and to further develop and commercialize their breathalyzer technology.
References
External links
COVID-19 Diagnostics & testing of FIND
SpiroNose
Applications of artificial intelligence
Biomarkers
Biosensors
Breath tests
Breathalyzer
COVID-19 testing | Coronavirus breathalyzer | [
"Biology"
] | 2,417 | [
"Biomarkers",
"Biosensors"
] |
70,266,907 | https://en.wikipedia.org/wiki/Fusion%20Nuclear%20Science%20Facility | Fusion Nuclear Science Facility (FNSF) is a low cost, low aspect ratio compact tokamak reactor design, aiming for a 9 Tesla field at the plasma centre.
It is considered a step after ITER on the path to a fusion power plant.
Because of the high neutron irradiation damage expected, non-insulating superconducting coils are being considered for it.
History
References
Nuclear fusion
Tokamaks
Nuclear energy | Fusion Nuclear Science Facility | [
"Physics",
"Chemistry"
] | 90 | [
"Nuclear fusion",
"Nuclear energy",
"Radioactivity",
"Nuclear physics"
] |
70,267,150 | https://en.wikipedia.org/wiki/Neutron%20irradiation%20damage | Neutron irradiation damage refers to material changes caused by high neutron flux, typically in a nuclear reactor after many years.
Graphite may shrink and then swell.
See also
Neutron embrittlement
Neutron radiation#Effects on materials
References
Neutron
Materials degradation | Neutron irradiation damage | [
"Physics",
"Materials_science",
"Engineering"
] | 51 | [
"Materials degradation",
"Nuclear and atomic physics stubs",
"Materials science",
"Nuclear physics"
] |
70,267,538 | https://en.wikipedia.org/wiki/Keogram | A keogram ("keo" from "Keoeeit", the Inuit word for the Aurora Borealis) is a way of displaying the intensity of an auroral display, taken from a narrow slice of the circular image recorded by a camera, ideally a "whole sky camera". These images from the narrow band, which usually runs in the north-south orientation in the Northern Hemisphere and the south-north orientation in the Southern Hemisphere, are collected and form a time-dependent graph of the aurora from that part of the sky. This allows one to easily assess the general activity of the display that night, whether or not it was interrupted by weather conditions, and allows the determination of the regions in which the aurora was seen in terms of the latitude and longitude of the area.
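A minimal sketch of how such a keogram can be assembled, assuming the all-sky frames are time-ordered 2-D numpy arrays with the north-south meridian along the first axis; the function name and parameters are illustrative, not part of any standard package.

```python
import numpy as np

def keogram(frames, column_width=1):
    """Stack the central north-south slice of each all-sky frame into a keogram.

    Returns a 2-D array whose rows are positions along the meridian and whose
    columns are successive frames (i.e. time).
    """
    cols = []
    for frame in frames:
        centre = frame.shape[1] // 2
        half = column_width // 2
        # average a narrow band of pixels around the central meridian
        strip = frame[:, centre - half:centre + half + 1].mean(axis=1)
        cols.append(strip)
    return np.column_stack(cols)

# Example with synthetic data: 120 frames from a 256 x 256 all-sky camera
rng = np.random.default_rng(0)
frames = [rng.random((256, 256)) for _ in range(120)]
print(keogram(frames, column_width=5).shape)  # (256, 120): N-S position vs. time
```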
The use of keograms was introduced in the 1970s by Eather et al. to provide a more practical and efficient way of determining the activity of the aurora throughout the recorded night and a view of its detailed movements; the auroral light is also recorded at wavelengths outside the human visible spectrum.
Thus, keograms are also used to analyse the conditions of the equatorial plasma bubbles (EPB) in the ionosphere of the Earth, to estimate its zonal drift at lower latitudes.
See also
Aurora
Night sky
References
Meteorological instrumentation and equipment
Atmospheric optical phenomena | Keogram | [
"Physics",
"Technology",
"Engineering"
] | 285 | [
"Physical phenomena",
"Earth phenomena",
"Meteorological instrumentation and equipment",
"Measuring instruments",
"Optical phenomena",
"Atmospheric optical phenomena"
] |
70,270,570 | https://en.wikipedia.org/wiki/SPT20 | Transcription factor SPT20 is a regulator of transcription. It can recruit the TATA-binding protein (TBP), and possibly other basal factors, to the TATA box. Its mode of action has been studied in the model organism Saccharomyces cerevisiae. It functions as a component of the histone acetyltransferase (HAT) transcriptional regulatory complexes SAGA, SALSA and SLIK. SAGA is involved in RNA polymerase II-dependent transcriptional regulation of about 10% of yeast genes. At promoters, SAGA is required to engage the basal transcription machinery. It affects RNA polymerase II transcription activity through various activities, such as interaction with TBP (SPT3, SPT8 and SPT20) and promoter selectivity, interaction with transcription activators (GCN5, ADA2, ADA3 and TRA1), and modification of chromatin by histone acetylation (GCN5) and histone deubiquitination (UBP8). SAGA acetylates nucleosomal histone H3 to some extent (to form H3K9ac, H3K14ac, H3K18ac, and H3K23ac).
SAGA interacts with DNA via upstream activation sequences (UAS). SALSA, an altered form of SAGA, may be involved in the positive regulation of transcription. It is suggested that SLIK has partially overlapping functions with SAGA, preferentially acetylating methylated histone H3, at least after activation at the GAL1-10 locus. "ADA5/SPT20 links the ADA and SPT genes, which are involved in the transcription of yeast".
References
External links
Genes on human chromosome 13
Transcription factors
Human proteins | SPT20 | [
"Chemistry",
"Biology"
] | 351 | [
"Protein stubs",
"Gene expression",
"Signal transduction",
"Biochemistry stubs",
"Induced stem cells",
"Transcription factors"
] |
54,521,251 | https://en.wikipedia.org/wiki/Ketodarolutamide | Ketodarolutamide (developmental code names ORM-15341, BAY-1896953) is a nonsteroidal antiandrogen (NSAA) and the major active metabolite of darolutamide (ODM-201, BAY-1841788), an NSAA which is used in the treatment of prostate cancer in men. Similarly to its parent compound, ketodarolutamide acts as a highly selective, high-affinity, competitive silent antagonist of the androgen receptor (AR). Both agents show much higher affinity and more potent inhibition of the AR relative to the other NSAAs enzalutamide and apalutamide, although they also possess much shorter and comparatively less favorable elimination half-lives. They have also been found not to activate certain mutant AR variants that enzalutamide and apalutamide do activate. Both darolutamide and ketodarolutamide show limited central nervous system distribution, indicating peripheral selectivity, and little or no inhibition or induction of cytochrome P450 enzymes such as CYP3A4, unlike enzalutamide and apalutamide.
References
External links
ODM-201 – New generation androgen receptor inhibitor targeting resistance mechanisms to androgen signalling-directed prostate cancer therapies - Orion Pharma Poster Presentation
Carboxamides
Chlorobenzene derivatives
Hormonal antineoplastic drugs
Human drug metabolites
Nitriles
Nonsteroidal antiandrogens
Peripherally selective drugs
Pyrazoles | Ketodarolutamide | [
"Chemistry"
] | 316 | [
"Chemicals in medicine",
"Nitriles",
"Functional groups",
"Human drug metabolites"
] |
54,523,294 | https://en.wikipedia.org/wiki/Convex%20measure | In measure and probability theory in mathematics, a convex measure is a probability measure that — loosely put — does not assign more mass to any intermediate set "between" two measurable sets A and B than it does to A or B individually. There are multiple ways in which the comparison between the probabilities of A and B and the intermediate set can be made, leading to multiple definitions of convexity, such as log-concavity, harmonic convexity, and so on. The mathematician Christer Borell was a pioneer of the detailed study of convex measures on locally convex spaces in the 1970s.
General definition and special cases
Let X be a locally convex Hausdorff vector space, and consider a probability measure μ on the Borel σ-algebra of X. Fix −∞ ≤ s ≤ 0, and define, for u, v ≥ 0 and 0 ≤ λ ≤ 1,
$$M_s^{\lambda}(u, v) = \begin{cases} \big(\lambda u^{s} + (1 - \lambda) v^{s}\big)^{1/s}, & -\infty < s < 0, \\ u^{\lambda} v^{1-\lambda}, & s = 0, \\ \min(u, v), & s = -\infty. \end{cases}$$
For subsets A and B of X, we write
$$\lambda A + (1 - \lambda) B = \{ \lambda a + (1 - \lambda) b \mid a \in A,\ b \in B \}$$
for their Minkowski sum. With this notation, the measure μ is said to be s-convex if, for all Borel-measurable subsets A and B of X and all 0 ≤ λ ≤ 1,
$$\mu\big(\lambda A + (1 - \lambda) B\big) \geq M_s^{\lambda}\big(\mu(A), \mu(B)\big).$$
The special case s = 0 is the inequality
$$\mu\big(\lambda A + (1 - \lambda) B\big) \geq \mu(A)^{\lambda}\, \mu(B)^{1 - \lambda},$$
i.e.
$$\log \mu\big(\lambda A + (1 - \lambda) B\big) \geq \lambda \log \mu(A) + (1 - \lambda) \log \mu(B).$$
Thus, a measure being 0-convex is the same thing as it being a logarithmically concave measure.
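As a numerical illustration (not part of the article), the 0-convexity inequality can be checked for the standard Gaussian measure on the real line with two intervals, for which the Minkowski combination is again an interval; the intervals and λ below are arbitrary choices.

```python
from scipy.stats import norm

def gaussian_measure(interval):
    a, b = interval
    return norm.cdf(b) - norm.cdf(a)

A, B, lam = (-1.0, 0.5), (0.2, 2.0), 0.3
combo = (lam * A[0] + (1 - lam) * B[0], lam * A[1] + (1 - lam) * B[1])

lhs = gaussian_measure(combo)                                      # mu(lam*A + (1-lam)*B)
rhs = gaussian_measure(A) ** lam * gaussian_measure(B) ** (1 - lam)
print(lhs >= rhs, lhs, rhs)  # True: the Gaussian measure is log-concave (0-convex)
```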
Properties
The classes of s-convex measures form a nested increasing family as s decreases to −∞: if −∞ ≤ s ≤ t ≤ 0, then every t-convex measure is also s-convex,
or, equivalently,
$$\{ t\text{-convex measures on } X \} \subseteq \{ s\text{-convex measures on } X \}.$$
Thus, the collection of −∞-convex measures is the largest such class, whereas the 0-convex measures (the logarithmically concave measures) are the smallest class.
The convexity of a measure μ on n-dimensional Euclidean space Rn in the sense above is closely related to the convexity of its probability density function. Indeed, μ is s-convex if and only if there is an absolutely continuous measure ν with probability density function ρ on some Rk such that μ is the push-forward of ν under a linear or affine map and a suitable power of ρ, with exponent determined by s and the dimension k, is a convex function.
Convex measures also satisfy a zero-one law: if G is a measurable additive subgroup of the vector space X (i.e. a measurable linear subspace), then the inner measure of G under μ must be 0 or 1. (In the case that μ is a Radon measure, and hence inner regular, the measure μ and its inner measure coincide, so the μ-measure of G is then 0 or 1.)
References
Measures (measure theory) | Convex measure | [
"Physics",
"Mathematics"
] | 515 | [
"Measures (measure theory)",
"Quantity",
"Physical quantities",
"Size"
] |
44,494,756 | https://en.wikipedia.org/wiki/Stratified%20flows | The flow of many fluids varies with density and depends upon gravity: in a stable stratification, fluid of lower density always lies above fluid of higher density. Stratified flows are very common; the Earth's ocean and its atmosphere are examples.
Stratified fluid
A stratified fluid may be defined as a fluid with density variations in the vertical direction. For example, air and water are both fluids, and if we consider them together they can be seen as a stratified fluid system. Density variations in the atmosphere profoundly affect the motion of water and air. Wave phenomena in air flow over mountains and the occurrence of smog are examples of stratification effects in the atmosphere.
When a fluid system in which density decreases with height is disturbed, gravity and friction act to restore the undisturbed conditions; a fluid thus tends to be stable if its density decreases with height.
Upstream motions in stratified flow
It is known that the subcritical flow of a stratified fluid past a barrier produces motions upstream of the barrier. Subcritical flow may be defined as a flow for which the Froude number based on channel height is less than 1/π, so that one or more stationary lee waves would be present. Some of the upstream motions do not decay with distance upstream. These 'columnar' modes have zero frequency and a sinusoidal structure in the direction of the density gradient; they effectively lead to a continuous change in upstream conditions. If the barrier is two-dimensional (i.e. of infinite extent in the direction perpendicular to the upstream flow and the direction of density gradient), inviscid theories show that the length of the upstream region affected by the columnar modes increases without bound as t → ∞. Non-zero viscosity (and/or diffusivity) will, however, limit the region affected, since the wave amplitudes will then slowly decay.
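A minimal sketch of the subcriticality check described above; the expression Fr = U/(N H) for the Froude number based on channel height, with N the buoyancy frequency, is an assumption used for illustration (the article does not spell the formula out), and the numbers are arbitrary.

```python
import math

def buoyancy_frequency(drho_dz, rho0, g=9.81):
    """Brunt-Vaisala frequency N = sqrt(-(g/rho0) * drho/dz), in 1/s."""
    return math.sqrt(-g * drho_dz / rho0)

U, H = 0.05, 100.0                      # mean speed (m/s) and channel height (m)
N = buoyancy_frequency(-1e-3, 1000.0)   # density decreasing upward: stable stratification
Fr = U / (N * H)
print(Fr, Fr < 1 / math.pi)             # subcritical (stationary lee waves) if Fr < 1/pi
```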
Efficient mixing in stratified flows
Turbulent mixing in stratified flows is described by a mixing efficiency. This mixing efficiency compares the energy used in irreversible mixing, which raises the minimum gravitational potential energy that can be stored in the density field, to the entire change in mechanical energy during the mixing process. It can be defined either as an integral quantity, calculated between inert initial and final conditions, or as the ratio of the energy flux into mixing to the power input to the system. These two definitions can give different values if the system is not in a steady state. Mixing efficiency is especially important in oceanography, as mixing is required to maintain the overall stratification in a steady-state ocean. The total amount of mixing in the oceans is equal to the product of the power input to the ocean and the mean mixing efficiency.
Stability criteria for stratified flow
Wallis and Dobson (1973) compare their criterion with transition observations that they call "slugging" and note that the stability limit is well described by their empirical correlation, in which H is the channel height and U, h and ρ denote the mean velocity, holdup and density respectively; the subscripts G and L stand for gas and liquid, and g denotes gravitational acceleration.
Taitel and Dukler (1976) [TD] extended the Kelvin–Helmholtz (KH) analysis, first to the case of a finite wave on a flat liquid sheet in horizontal channel flow and then to finite waves on a stratified liquid in an inclined pipe. In order to apply this criterion they need to provide the equilibrium liquid level hL (or liquid holdup). They calculate it through momentum balances in the gas and liquid phases (a two-fluid model) in which shear stresses are estimated using conventional friction factor definitions. In two-fluid models, the pipe geometry is taken into consideration through the wetted perimeters of the gas and liquid phases, including the gas-liquid interface. This assumes that the wall resistance of the liquid is similar to that for open-channel flow and that of the gas to closed-duct flow. This geometric analysis is general and could be applied not only to round pipes, but to any other possible shape. In this method, each pair of superficial gas and liquid velocities corresponds to a distinct value of hL.
According to [TD], a finite wave will grow in a horizontal rectangular channel of height H, or in an inclined pipe, when their stability criterion is exceeded; here D is the pipe diameter and A is the cross-sectional area. In the appropriate limit this is compatible with the result of Wallis and Dobson (1973). The overall [TD] procedure results in only a weak dependence on viscosity, which enters through the calculation of the equilibrium liquid level.
[TD] also identify two kinds of stratified flow: stratified smooth (SS) and stratified wavy (SW). These waves, as they say, “are produced by the gas flow under conditions where the velocity of gas is enough to cause waves to form, but slower than that needed for the quick wave growth which leads transition to intermittent or annular flow.” [TD] suggest a criterion to predict the transition from stratified smooth to stratified wavy flow, based on Jeffreys’ (1925, 1926) ideas.
Effects of stratification on diffusion
Density stratification has a significant effect on diffusion in fluids. For example, smoke coming from a chimney diffuses turbulently if the Earth's atmosphere is not stably stratified. When the lower air is stable, as in the morning or early evening, the smoke emerges and flattens into a long, thin layer. Strong stratification, sometimes called an inversion, confines contaminants to the lower regions of the Earth's atmosphere and causes many of our current air-pollution problems.
References
External links
Stratified Flow
Atmospheric dynamics
Mass density
Fluid dynamics
Fluid mechanics | Stratified flows | [
"Physics",
"Chemistry",
"Engineering"
] | 1,177 | [
"Mechanical quantities",
"Physical quantities",
"Atmospheric dynamics",
"Chemical engineering",
"Mass",
"Intensive quantities",
"Fluid mechanics",
"Volume-specific quantities",
"Civil engineering",
"Density",
"Piping",
"Mass density",
"Matter",
"Fluid dynamics"
] |
47,570,772 | https://en.wikipedia.org/wiki/NGC%20110 | NGC 110 is an open star cluster located in the constellation Cassiopeia. It was discovered by the English astronomer John Herschel on October 29, 1831.
It is unknown if the members are physically related, or if the cluster exists at all. It is barely visible against the background sky, and the two dozen member stars seem to be at various distances. If the cluster does exist, it is at least 2,000 light years away.
References
External links
0110
Cassiopeia (constellation)
Astronomical objects discovered in 1831
Open clusters
Discoveries by John Herschel | NGC 110 | [
"Astronomy"
] | 115 | [
"Cassiopeia (constellation)",
"Constellations"
] |
47,571,281 | https://en.wikipedia.org/wiki/List%20of%20countries%20by%20average%20yearly%20temperature | This is a list of countries and sovereign states by temperature.
Average yearly temperature is calculated by averaging the minimum and maximum daily temperatures in the country, averaged for the years 1991 – 2020, from World Bank Group, derived from raw gridded climatologies from the Climatic Research Unit.
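A minimal sketch of that averaging, using synthetic daily minimum and maximum temperatures (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
days = 365
tmin = 10 + 8 * np.sin(np.linspace(0, 2 * np.pi, days)) + rng.normal(0, 1, days)
tmax = tmin + 8 + rng.normal(0, 1, days)   # daily maxima a few degrees above the minima

daily_mean = (tmin + tmax) / 2.0           # mean of daily minimum and maximum
print(round(daily_mean.mean(), 2))         # average yearly temperature, deg C
```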
See also
List of countries by average annual precipitation
Notes
References
List of countries by average yearly temperature
Temperature, average
Meteorology lists
Meteorological quantities
Weather-related lists | List of countries by average yearly temperature | [
"Physics",
"Mathematics"
] | 89 | [
"Physical phenomena",
"Physical quantities",
"Weather",
"Quantity",
"Meteorological quantities",
"Climate and weather statistics",
"Weather-related lists"
] |
47,572,883 | https://en.wikipedia.org/wiki/T-cell%20growth%20factor | T-cell growth factors (TCGFs) are signaling molecules, collectively called growth factors, which stimulate the production and development of T-cells. A number of them have been discovered, among them many members of the interleukin family. The thymus is one organ which releases TCGFs. TCGFs have been used to induce T-cell production outside the body for injection.
List of TCGFs
IL-2
IL-7
IL-9
IL-15
References
Immunology
Growth factors | T-cell growth factor | [
"Chemistry",
"Biology"
] | 105 | [
"Immunology",
"Growth factors",
"Signal transduction"
] |
51,684,664 | https://en.wikipedia.org/wiki/Deep%20Earth%20Carbon%20Degassing%20Project | The Deep Earth Carbon Degassing (DECADE) project is an initiative to unite scientists around the world to make tangible advances towards quantifying the amount of carbon outgassed from the Earth's deep interior (core, mantle, crust) into the surface environment (e.g. biosphere, hydrosphere, cryosphere, atmosphere) through naturally occurring processes. DECADE is an initiative within the Deep Carbon Observatory (DCO).
Volcanoes are the main pathway by which deeply sourced volatiles, including carbon, are transferred from the Earth's interior to the surface environment. An additional, though less well understood, pathway is degassing along faults and fractures within the Earth's crust, often referred to as tectonic degassing. When the DCO was first formed in 2009, estimates of global carbon flux from volcanic regions ranged from 65 to 540 Mt/yr, and constraints on global tectonic degassing were virtually unknown. The order of magnitude uncertainty in current volcanic/tectonic carbon outgassing makes answering fundamental questions about the global carbon budget virtually impossible. In particular, one fundamental unknown is if carbon transferred to the Earth's interior via subduction is efficiently recycled back to the Earth's mantle lithosphere, crust and surface environment through volcanic and tectonic degassing, or if significant quantities of carbon are being subducted into the deep mantle. Because significant quantities of mantle carbon are also released through mid-ocean ridge volcanism, if carbon inputs and outputs at subduction zone settings are in balance, then the net effect will be an imbalance in the global carbon budget, with carbon being preferentially removed from the Earth's deep interior and redistributed to more shallow reservoirs including the mantle lithosphere, crust, hydrosphere and atmosphere. The implications of this may mean that carbon concentrations in the surface environment have increased over Earth's history, which has a significant impact on climate change.
Findings from the DECADE project will increase our understanding of how carbon cycles through deep Earth, and patterns in volcanic emissions data could potentially alert scientists to an impending eruption.
Project goals
The main goal of the DECADE project is to refine estimates of global carbon outgassing using a multipronged approach. Specifically, the DECADE initiative unites scientists with expertise in geochemistry, petrology and volcanology to provide constraints on the global volcanic carbon flux by 1) establishing a database of volcanic and hydrothermal gas compositions and fluxes linked to EarthChem/PetDB and the Smithsonian Global Volcanism Program, 2) building a global monitoring network to measure the volcanic carbon flux of 20 active volcanoes continuously, 3) measuring the carbon flux of remote volcanoes, for which no or only sparse data are currently available, 4) developing new field and analytical instrumentation for carbon measurements and flux monitoring, and 5) establishing formal collaborations with volcano observatories around the world to support volcanic gas measurement and monitoring activities.
History
The DECADE initiative was conceived in September 2011 by the International Association of Volcanology and Chemistry of the Earth's Interior Commission on the Chemistry of Volcanic Gases during its 11th field workshop. Here the charge of the initiative was broadly defined and the governance structure established. The DECADE receives financial support from Deep Carbon Observatory to meet the project goals, with support distributed to DECADE members based on project proposal submission and external review and/or consensus by the board of directors. All projects are significantly matched by funding sources from the individual investigators or other funding agencies. The initiative is led by a board of directors that has nine members including one chair and two co-vice chairs. Currently, the DECADE initiative has around 80 members from 13 countries.
Achievements
Major achievements supported or partially supported by the DECADE initiative include:
Modification of the IEDA EarthChem database to include volcanic gas composition and gas flux data.
Instrumenting 9 volcanoes (Masaya Volcano, Turrialba Volcano, Poás Volcano, Nevado del Ruiz, Galeras, Villarrica (instruments destroyed by eruption), Popocatépetl, Mount Merapi, Whakaari / White Island) with permanent multi-component gas analyzer system (Multi-GAS) stations for near continuous CO2 and SO2 measurements and near continuous SO2 flux measurements using miniDOAS.
Quantification of volcanic gas emissions and compositions from remote regions such as the Aleutian, Vanuatu and Papua New Guinea volcanic arcs.
First measurements of gas emissions from Mount Bromo and Anak Krakatau Volcanoes, Krakatoa Indonesia.
Establishing volcanic gas chemical changes as eruption precursors at Poás and Turrialba Volcanoes, Costa Rica.
Airborne sampling of volcanic plumes for carbon isotopes and analyses using Delta Ray Infrared Isotope Spectrometer.
Determination of diffuse CO2 degassing in the Azores.
Quantification of global CO2 emissions from volcanoes during eruptions, passive degassing and diffuse degassing
Volcanoes
The following volcanoes are currently monitored by the DECADE initiative:
Map of the DCO DECADE project volcano installations
See also
References
External links
Deep Earth Carbon Degassing
Earthchem/petdb The Petrological Database
Global Volcanism Program
Volcanism
Geophysics
Carbon | Deep Earth Carbon Degassing Project | [
"Physics"
] | 1,049 | [
"Applied and interdisciplinary physics",
"Geophysics"
] |
73,118,583 | https://en.wikipedia.org/wiki/Frost%20resistance | Frost resistance is the ability of plants to survive cold temperatures. Generally, land plants of the northern hemisphere have higher frost resistance than those of the southern hemisphere. An example of a frost-resistant plant is Drimys winteri, which is more frost-tolerant than naturally occurring conifers and vessel-bearing angiosperms, such as the Nothofagus, found in its range in southern South America.
The physiological process of cold acclimatization is induced in fall and early winter by low above-zero temperatures (cold) and includes complex reprogramming of the cellular environment to induce enhanced frost tolerance. Temperate-climate fruit trees reach their highest resistance in the middle of winter. Since freezing is a form of dehydration stress, the cold acclimation process is associated with an enhanced accumulation of osmolytes (sugars, proline, polyamines, and hydrophilic proteins). The loss of frost resistance occurs after warming. Rapid temperature fluctuations during winter deharden trees and increase the risk of spring damage. Species that bloom early, even before the leaves develop, such as apricots or peaches, are particularly vulnerable to damage. The reproductive organs, due to their abundant hydration, are easily damaged, leading to large losses in the cultivation of fruit trees and shrubs.
References
Agricultural economics
Plant physiology | Frost resistance | [
"Biology"
] | 267 | [
"Plant physiology",
"Plants"
] |
73,119,378 | https://en.wikipedia.org/wiki/MT%20Pacific%20Cobalt | MT Pacific Cobalt is a Singaporean oil tanker built in 2020 and owned by Eastern Pacific Shipping. It is one of the first and largest ships to be installed with an onboard filtration and carbon capture system.
Description
Pacific Cobalt is an oil and chemical tanker with an overall length of . It is wide and has an average draft of . It has an identical sister ship named Pacific Gold.
History
In May 2022, Eastern Pacific Shipping announced that it would be working with the Netherlands-based maritime carbon capture company Value Maritime to install prefabricated "Filtree" systems. The installation was finished in February 2023 after a seventeen-day construction period, and Pacific Cobalt steamed from Rotterdam to Venice shortly after the installation was completed.
References
2020 ships
Merchant ships of Singapore
Carbon capture and storage
Oil tankers | MT Pacific Cobalt | [
"Engineering"
] | 163 | [
"Geoengineering",
"Carbon capture and storage"
] |
73,119,604 | https://en.wikipedia.org/wiki/Aluminium%E2%80%93manganese%20alloys | Aluminium–manganese alloys (AlMn alloys) are aluminium alloys that contain manganese (Mn) as the main alloying element. They consist mainly of aluminium (Al); manganese, at about 1%, accounts for the largest proportion of the alloying elements, but they may also contain small amounts of iron (Fe), silicon (Si), magnesium (Mg), or copper (Cu). AlMn is used almost exclusively as a wrought alloy and is processed into sheets or profiles by rolling or extrusion. These alloys are corrosion-resistant, have low strengths for aluminium alloys, and are not hardenable by heat treatment. They are standardised in the 3000 series.
Applications
Aluminium–manganese alloys are used in applications with low strength requirements and also in chemical and food-related environments due to their corrosion resistance. AlMn is therefore used more commonly as a functional material than a construction material.
AlMn is processed into beverage cans and is commonly used as a packaging material. It is used for apparatus and pipes in the chemical industry, for roof cladding, wall coverings, pressure vessels, roller shutters, roller doors, and heat exchangers.
Influences of the alloy elements
Manganese combines with aluminium to form intermetallic phases, which have a crystallographic structure different from that of either manganese or aluminium alone. Compared to conventional alloys, the increased metallic bonding within the intermetallic phase of AlMn increases its strength and chemical resistance. Each percent of manganese increases the strength by about 42 MPa. Iron and silicon are usually unwanted accompanying elements that cannot be completely removed. Magnesium and copper are more effective in enhancing strength, providing an increase of 70–85 MPa per % of Mg when added to the alloy.
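A minimal sketch of how the per-percent strength contributions quoted above could be combined; treating the contributions as simply additive, and using the midpoint of the 70–85 MPa range for Mg, are assumptions made only for illustration.

```python
# Rough additive estimate using the contributions quoted in the text
# (42 MPa per % Mn, 70-85 MPa per % Mg; midpoint assumed for Mg).
MPA_PER_PERCENT = {"Mn": 42.0, "Mg": 77.5}

def strength_increase(composition):
    """composition: dict mapping element symbol to weight-%, e.g. {'Mn': 1.0}."""
    return sum(MPA_PER_PERCENT.get(element, 0.0) * pct
               for element, pct in composition.items())

print(strength_increase({"Mn": 1.0}))             # about 42 MPa
print(strength_increase({"Mn": 1.0, "Mg": 0.5}))  # about 81 MPa
```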
Phases
Binary aluminium manganese phases
Aluminium and manganese are partially miscible in the solid state, meaning they can mix to some extent in the solid phase. However, their solubility in each other is limited, and they tend to form various intermetallic phases. The eutectic between aluminium and Al6Mn lies at 1.3% manganese and 660 °C, while pure aluminium melts at 660.2 °C. Values of 1.8% and 657 °C or 658 °C can also be found in older literature.
Above 710 °C, Al4Mn is formed with a manganese content of at least 4%. However, such high levels are not typically used. Below 510 or 511 °C, Al12Mn forms.
The solubility of manganese in the Mn-Al solid solution falls rapidly with decreasing temperature and is close to zero at room temperature.
Phases in AlMn materials with other elements
Some of the AlMn materials also contain iron (Fe) or silicon (Si) additives. These form the phases Al3Fe, Al8Fe2Si, Al5FeSi, Al15Si2(Mn,Fe)3. Mixed crystals also occur in the form of Al12(Mn, Fe)3Si.
Aluminium and Al15Si2(Mn,Fe)3 are formed from the melt, Al3Fe and Al6(Mn,Fe) at 648 °C.
At temperatures below 630 °C, aluminium, Al15Si2(Mn,Fe)3 and Al8Fe2Si are formed from the melt, Al3Fe.
Aluminium, Al5FeSi and Al15Si2(Mn,Fe)3 are formed from the melt and Al8Fe2Si at around 600 °C.
Aluminium, silicon and Al15Si2(Mn,Fe)3 are formed from the melt and Al5FeSi at around 565 °C.
Structures
The structure resulting from casting into bars or slabs consists of the main mass, which is an oversaturated mixed crystal, along with areas containing manganese-containing phases that have an average size of about 100 μm. A significant portion of the manganese (approximately 0.7 to 0.9%) remains dissolved in aluminium because the cooling rates after casting are too rapid for all the manganese to undergo diffusion and precipitate. This is further exacerbated by the very low diffusion speed of manganese in aluminium.
Through processes such as homogenization and forming (rolling, forging), the structure of the material undergoes changes. During this transformation, various phases, each less than one micron in size, are precipitated within the basic aluminium crystal structure. These particles contribute to an approximately 25% increase in strength compared to pure aluminium. They exhibit thermal stability and are challenging to dissolve.
In the formed and homogenized state, the material exhibits a very fine structure, and the larger manganese-containing areas observed in the initial casting state are no longer present. These finely dispersed particles also impede grain growth, further enhancing the material's strength. However, it's important to note that this improvement is relatively modest, as the influence of grain size on the strength of aluminium materials is generally limited.
The presence of silicon accelerates the precipitation of Al12(Mn, Fe)3Si. If there is enough silicon, the Al6(Mn,Fe) is converted to Al12(Mn,Fe)3Si during homogenization.
Properties and standardised alloys
3000 series
3000 series are alloyed with manganese and can be work hardened.
References
Further reading
Friedrich Ostermann: Application technology aluminium. 3. Edition. Springer, 2014, p. 100–102.
Aluminium paperback. Volume 1: Fundamentals and materials. 16. Edition. Beuth-Verlag, Berlin/Vienna/Zurich 2002, p. 104 f, 122.
George E. Totten, D. Scott MacKenzie: Handbook of Aluminum. Volume 1: Physical Metallurgy and Processes. Marcel Dekker, New York/Basel 2003, p. 159 f.
Aluminium–manganese alloys | Aluminium–manganese alloys | [
"Chemistry"
] | 1,224 | [
"Alloys",
"Aluminium alloys"
] |
73,119,653 | https://en.wikipedia.org/wiki/Aluminium%E2%80%93zinc%20alloys | Aluminium brass is a technically rather uncommon term for high-strength and partly seawater-resistant copper-zinc cast and wrought alloys with 55–66% copper, up to 7% aluminium, up to 4.5% iron, and 5% manganese. Aluminium bronze, in contrast, is the technically correct term for a bronze: a zinc-free copper-tin casting alloy with an aluminium content.
The term "special brass" is much more common for this, which then also includes alloys that add further characteristic elements to the copper-zinc base. In addition to the already mentioned elements of iron and manganese, lead, nickel and silicon can also be found as alloy components.
Due to their aluminium content, which is susceptible to oxidation at the usual melting temperatures in the range of 900 °C, the alloys require careful melting and melt treatment. Even when pouring, attention must be paid to any oxides that form.
7000 series
7000 series are alloyed with zinc, and can be precipitation hardened to the highest strengths of any aluminium alloy. Most 7000 series alloys include magnesium and copper as well.
References
Further reading
Publication series of the DKI, Berlin, number L5 "Copper-Zinc alloys".
Foundry lexicon. 17. Edition, Schiele and Schön, Berlin,
Aluminium–zinc alloys | Aluminium–zinc alloys | [
"Chemistry"
] | 259 | [
"Alloys",
"Aluminium alloys"
] |
73,119,707 | https://en.wikipedia.org/wiki/Aluminium%E2%80%93magnesium%20alloys | Aluminium–magnesium alloys (AlMg) – standardised in the 5000 series – are aluminium alloys that are mainly made of aluminium and contain magnesium as the main alloying element. Most standardised alloys also contain small additions of manganese (AlMg(Mn)). Pure AlMg alloys and the AlMg(Mn) alloys belong to the medium-strength, naturally hard alloys (not hardenable by heat treatment). Other AlMg alloys are aluminium–magnesium–copper alloys (AlMgCu) and aluminium–magnesium–silicon alloys (AlMgSi, 6000 series).
Applications and processing
The discovery of aluminium–magnesium alloys dates back to the late 19th century. AlMg alloys are among the most important aluminium alloys for construction materials. They are readily cold formed, e.g., by rolling and forging, and are easily weldable at Mg levels of at least 3%. AlMg is rarely processed through extrusion presses, as subsequent strength changes in extrusion profiles must be avoided. The majority of AlMg alloys are processed into rolled products as well as pipes, rods, wires and free-form or drop-forged parts. Parts are also processed into extrusion profiles with simple cross-sections.
Due to the good corrosion resistance and high strength at low temperatures, AlMg is used in shipbuilding, in the construction of chemical apparatus and pipelines, and for refrigeration technology and automobiles. The good weldability is crucial for its use in aircraft construction, where additions of scandium and zirconium are also made for better weldability.
Solubility of magnesium and phases
The solubility of magnesium in aluminium is very high and reaches a maximum of 14% to 17% at 450 °C, depending on the literature reference. At 34.5% Mg there is a eutectic with Al8Mg5 (sometimes referred to as Al3Mg2), an intermetallic phase (the β-phase). The solubility of Mg decreases sharply with falling temperature: at 100 °C it is still 2%, at room temperature 0.2%.
In pure AlMg alloys, precipitation of the β-phase proceeds by a four-stage process. In technically used alloys with other alloying elements and impurities, the process is much more complicated:
First of all, clusters form, in the case of aluminium as GP zones. These are local accumulations of magnesium atoms in the aluminium lattice, which do not yet form their own phase and do not have a regular arrangement.
Formation of the coherent β″-phase. Its crystals have the same spatial orientation as those of the aluminium solid solution.
Formation of the semi-coherent β′-phase. It is only partially oriented with respect to the lattice of the Al solid solution.
Formation of the incoherent β-phase. It has no orientation relationship with the Al solid solution.
In the case of technical alloys, precipitation differs from this for the following reasons:
Low diffusion of magnesium in aluminium
For the formation of GP zones and the β″-phase, a high supersaturation of 7% Mg or more is required, which is not achieved in most alloys. In AlMg4.5Mn0.7, no GP zones or β″-phase were found even after prolonged annealing at temperatures up to 250 °C, although the β′-phase is present after just a few days.
Dislocations are not sufficient nuclei for the formation of the β″-, β′- or β-phase. The reason is the small volume difference between these phases and the matrix.
Structures
The diffusion of magnesium in aluminium is very low. The reason is the large difference between the atomic radius of aluminium and that of magnesium. Therefore, after quenching, only part of the magnesium is precipitated from the solid solution, while most of it remains as a supersaturated solution in aluminium. Even with prolonged annealing treatment, this condition cannot be eliminated.
Excess magnesium is precipitated mainly at the grain boundaries as well as on dispersion particles within the grains. The speed of the process depends on the Mg content and the temperature and increases with both. At the grain boundaries, so-called plaques are initially precipitated: thin plates that are not connected, i.e. do not yet form a continuous layer around the grain. At 70 °C they form after 3 months, at 100 °C after 3 days and at 150 °C after one to nine hours. If further time passes at elevated temperature, the plaques grow together to form a contiguous film. This has a negative effect on corrosion resistance, but the film can be dissolved by heat treatment. Annealing at 420 °C for one hour followed by slow cooling at 20 °C/h, or annealing at 200 °C to 240 °C, is suitable. The plaques of the β-phase transform into numerous small particles, referred to in the specialist literature as "bead line-like". They no longer form a coherent film.
Composition of standardised varieties
The compositions of some standardised varieties are contained in the following table, with proportions of alloying elements in mass percent. Among the available varieties there are fine gradations of Mg and Mn levels; Mn-free varieties are very rare. Standard alloys are AlMg3Mn, AlMg4.5Mn0.7 and, for bodywork, AlMg4.5Mn0.4. Magnesium levels of up to 5% and manganese contents up to 1% are used for wrought alloys.
Mg contents up to 10% are also possible for cast alloys; however, contents of 7% and more are considered difficult to cast.
5000 series
5000 series are alloyed with magnesium. 5083 alloy has the highest strength of non-heat-treated alloys. Most 5000 series alloys include manganese as well.
Corrosion
Aluminium–magnesium alloys are considered to be very corrosion-resistant, making them suitable for marine applications, but this is only true if the β-phase exists as a non-contiguous phase. Alloys with Mg contents below 3% are therefore always corrosion-resistant; with higher contents, appropriate heat treatment must ensure that this phase is not present as a continuous film at the grain boundaries.
The β-phase and the β′-phase are much less noble than aluminium and behave anodically. AlMg therefore tends toward intergranular corrosion if
the β-phase is precipitated as a continuous film at the grain boundaries and, at the same time,
the material is in an aggressive environment.
Alloys in states susceptible to intergranular corrosion are annealed at temperatures of 200 °C to 250 °C with slow cooling (heterogenisation annealing). This converts the β-phase film into globular β-phase particles, and the material becomes resistant to intergranular corrosion.
Mechanical properties
Table
Strengths and elongation at break in tensile test
The strength is increased by alloying magnesium. At low Mg levels the increase in strength is relatively strong; at higher levels it becomes weaker and weaker. Per % of alloying element, however, magnesium increases strength more efficiently than alternative elements. Even at medium Mg contents, the increase in strength from alloying manganese is higher than from additional magnesium, which is one reason why most AlMg alloys also contain manganese. One explanation for the strong strengthening effect of magnesium is the high binding energy of vacancies to Mg atoms: these vacancies are then no longer available as free vacancies, which would otherwise favour plastic deformation.
The yield strength increases linearly with increasing Mg content, from about 45 N/mm2 at 1% Mg to about 120 N/mm2 at 4% Mg. The tensile strength also increases linearly, but with a steeper gradient: at 1% Mg it is about 60 N/mm2, at 4% Mg about 240 N/mm2. Statements on the elongation at break differ. Research on alloys based on high-purity aluminium shows an elongation at break increasing from about 20% at 1% Mg to 30% at 5% Mg; other measurements show it first dropping sharply from 38% at 1% Mg to 34% at about 1.8% Mg, reaching a minimum of only 32% at 3% Mg, and then rising again to about 35% at 5% Mg.
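A small sketch interpolating linearly between the two yield- and tensile-strength data points quoted above; it is only a rough guide for the 1–4% Mg range and ignores the influence of manganese, temper and grain size.

```python
# Linear interpolation through the data points quoted above (1% and 4% Mg).
def _linear(x, x1, y1, x2, y2):
    return y1 + (y2 - y1) * (x - x1) / (x2 - x1)

def yield_strength(mg_pct):    # N/mm^2, rough estimate for 1-4% Mg
    return _linear(mg_pct, 1.0, 45.0, 4.0, 120.0)

def tensile_strength(mg_pct):  # N/mm^2, rough estimate for 1-4% Mg
    return _linear(mg_pct, 1.0, 60.0, 4.0, 240.0)

print(yield_strength(3.0), tensile_strength(3.0))  # 95.0 180.0
```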
The flow curves for AlMg show the behaviour typical of metallic materials: the flow stress increases with the true strain (degree of forming). For all alloys, the increase is relatively strong at low strains and weaker at higher strains; however, the curves for higher-alloyed varieties always lie above those of the lower-alloyed ones. For example, at a true strain of 0.2, AlMg0.5 has a flow stress of about 100 N/mm2, AlMg1 one of 150 N/mm2, AlMg3 one of 230 N/mm2, and AlMg4.5Mn0.4 one of about 300 N/mm2. The higher the alloy content and the greater the strain, the more pronounced the resulting PLC effect and the Lüders effect.
Influence of grain size
In the case of pure aluminium, the grain size has an influence on the strength that is unusually small for metals. In the case of alloys, the influence increases with the alloy content. At 5% Mg, materials with grain sizes of 50 μm achieve uniform elongations of around 0.25; at 250 μm they are around 0.28. AlMg8 already achieves uniform elongations of 0.3 with a grain diameter of 200 μm. With increasing grain size, both the Lüders strain and the Lüders effect decrease.
Cold forming and heat treatment
In the case of very high degrees of deformation with heavily work-hardened alloys, softening can also occur at room temperature. In a long-term study over 50 years, a decrease in strength could be measured by the end. The decrease is greater the higher the degree of deformation and the higher the alloy content. The softening itself is very pronounced at the beginning and quickly subsides. The effect can be avoided by stabilization annealing at around 120 °C to 170 °C for several hours.
References
Further reading
Aluminium–magnesium alloys | Aluminium–magnesium alloys | [
"Chemistry"
] | 2,084 | [
"Alloys",
"Aluminium alloys"
] |
73,120,046 | https://en.wikipedia.org/wiki/Aluminium%E2%80%93magnesium%E2%80%93silicon%20alloys | Aluminium–magnesium–silicon alloys (AlMgSi) are aluminium alloys—alloys that are mainly made of aluminium—that contain both magnesium and silicon as the most important alloying elements in terms of quantity. Both together account for less than 2 percent by mass. The content of magnesium is greater than that of silicon, otherwise they belong to the aluminum–silicon–magnesium alloys (AlSiMg).
AlMgSi is one of the hardenable aluminium alloys, i.e. those that can become firmer and harder through heat treatment. This hardening is largely based on the precipitation of magnesium silicide (Mg2Si). The AlMgSi alloys are therefore treated in the standards as a separate group (6000 series) and not as a subgroup of the aluminium–magnesium alloys, which are not hardenable.
AlMgSi is one of the aluminum alloys with medium to high strength, high fracture resistance, good welding suitability, corrosion resistance and formability. They can be processed excellently by extrusion and are therefore particularly often processed into construction profiles by this process. They are usually heated to facilitate processing; as a side effect, they can be quenched immediately afterwards, which eliminates a separate subsequent heat treatment.
Alloy constitution
Phases and balances
The Al–Mg2Si system forms a eutectic at 13.9% Mg2Si and 594 °C. The maximum solubility is 1.9% Mg2Si at 583.5 °C, which is why the sum of both elements in the common alloys is below this value. The stoichiometric atomic composition of magnesium to silicon of 2:1 corresponds to a mass ratio of 1.73:1. The solubility decreases very quickly with falling temperature and is only 0.08 percent by mass at 200 °C. Alloys without further alloying elements or impurities are then present in two phases: the α-mixed crystal (aluminium solid solution) and the β-phase (Mg2Si). The latter has a melting point of 1085 °C and is therefore thermally stable. Even clusters of magnesium and silicon atoms, which are only metastable, dissolve only slowly due to the high binding energy of the two elements.
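The 2:1 atomic ratio quoted above can be checked against the stated 1.73:1 mass ratio using standard atomic masses, as in this small sketch.

```python
# Check of the Mg:Si ratio: 2 Mg atoms per Si atom, standard atomic masses in g/mol.
M_MG = 24.305
M_SI = 28.085
mass_ratio = 2 * M_MG / M_SI
print(round(mass_ratio, 2))  # 1.73, matching the stated mass ratio of 1.73:1
```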
Many standardised alloys have a silicon surplus. It has little influence on the solubility of magnesium silicide, increases the strength of the material more than an Mg excess or an increase in the Mg2Si content, increases the volume and the number of precipitates, and accelerates precipitation during cold and hot curing. It also binds unwanted impurities, especially iron. A magnesium surplus, on the other hand, reduces the solubility of magnesium silicide.
Alloying elements
In addition to magnesium and silicon, other elements are contained in the standardized varieties.
Copper is used in quantities of 0.2–1% to improve strength and hot curing. It forms the Q phase (Al4Mg8Si7Cu2). Copper leads to a denser dispersion of needle-shaped, semi-coherent precipitates (clusters of magnesium and silicon). In addition, the phases typical of aluminium–copper alloys occur. Alloys with higher copper content (alloys 6061, 6056, 6013) are mainly used in aviation.
Iron occurs in all aluminium alloys as an impurity in quantities of 0.05–0.5%. It forms the phases Al8Fe2Si, Al5FeSi and Al8FeMg3Si6, which are all thermally stable but undesirable because they embrittle the material. Silicon surpluses are also used to bind iron.
Manganese (0.2–1%) and chromium (0.05–0.35%) are deliberately added. If both are added at the same time, the sum of the two elements is kept below 0.5%. During annealing at 400 °C or above they form a dispersion of precipitates and thus improve strength. Chromium is mainly effective in combination with iron.
Zirconium and vanadium are also used as dispersion formers.
Dispersions
Dispersion particles have little influence on strength. If magnesium or silicon precipitates on them during cooling after solution annealing, and thus does not form magnesium silicide as desired, they even lower the strength; they increase the quench sensitivity. However, if the cooling rate is insufficient, they also bind excess silicon, which would otherwise form coarser precipitates and thus reduce strength. During hardening, the dispersion particles also activate additional slip planes, so that the ductility increases and, above all, intergranular fracture can be prevented. The higher-strength alloys therefore contain manganese and chromium and are more quench-sensitive.
The following applies to the effect of the alloying elements with regard to dispersion formation:
The strength at room temperature hardly changes. However, the yield point at higher temperatures rises sharply, which limits hot formability and is particularly unfavourable in extrusion because it increases the minimum wall thickness.
The recrystallisation is made more difficult, which prevents coarse grain formation and has a positive effect on formability.
Dislocation movements are blocked at low temperatures, which improves fracture toughness.
Dispersions of AlMn bind supersaturated silicon during cooling after solution annealing. This improves crystallization and avoids precipitate-free zones that otherwise arise at the grain boundaries. This changes the fracture behaviour from brittle to ductile and intragranular.
The quench sensitivity increases because silicon that precipitates during slow cooling is no longer available for hardening. Alloys containing Mn or Cr must therefore be cooled faster than those without these elements.
6000 series
6000 series are alloyed with magnesium and silicon. They are easy to machine, are weldable, and can be precipitation hardened, but not to the high strengths that 2000 and 7000 can reach. 6061 alloy is one of the most commonly used general-purpose aluminium alloys.
Grain boundaries
Silicon is preferentially precipitated at the grain boundaries, as it has difficulty nucleating elsewhere. In addition, magnesium silicide is precipitated there. The processes are probably similar to those in the AlMg alloys, but as of 2008 were still relatively unexplored for AlMgSi. The phases precipitated at the grain boundaries give AlMgSi a tendency to brittle intergranular fracture.
Compositions of standardised varieties
All information in mass percent. EN stands for European standard, AW for aluminium wrought alloy; the number has no other meaning.
Mechanical properties
Conditions:
O: soft (soft annealed, or hot formed to the same strength limits)
T1: quenched from the hot forming temperature and cold aged
T4: solution annealed and cold aged
T5: quenched from the hot forming temperature and warm aged
T6: solution annealed, quenched and warm aged
T7: solution annealed, quenched, warm aged and overaged
T8: solution annealed, cold worked and warm aged
Heat treatment and curing
AlMgSi can be hardened by heat treatment in two different ways, whereby hardness and strength rise while ductility and elongation at break fall. Both routes begin with solution annealing and can be combined with mechanical processes (forging), with different effects:
Solution annealing: the material is annealed at temperatures of about 510–540 °C, bringing the alloying elements into solution.
Quenching almost always follows immediately. As a result, the alloying elements initially remain in solution even at room temperature, whereas they would form precipitates if they cooled down slowly.
Cold curing: at room temperature, precipitates gradually form that increase strength and hardness. In the first hours after quenching the increase is very rapid, lower over the next few days, then only creeping, but not yet complete even after several years.
Hot curing: the materials are reheated in an oven at temperatures of 80–250 °C (usually 150–160 °C). The hardening times are usually 5–8 hours. The alloying elements thus precipitate faster and increase hardness and strength. The higher the temperature, the faster the maximum strength attainable at that temperature is reached, but the lower that maximum strength is.
Interim storage and stabilisation
If time passes between quenching and hot curing (so-called interim storage), the strength achievable by hot curing decreases and is reached only later. The reason is the cold curing that takes place in the material during interim storage. However, the effect only affects alloys with more than 0.8% Mg2Si (excluding Mg or Si surpluses), and alloys with more than 0.6% Mg2Si if Mg or Si surpluses are present.
To prevent these negative effects, AlMgSi can be annealed at 80 °C for 5–30 minutes after quenching, which stabilizes the material condition so that it temporarily does not change; the response to hot curing is then retained. Alternatively, step quenching is possible, in which the material is initially quenched only to the temperatures later applied during hot curing. These temperatures are maintained for a few minutes to several hours (depending on temperature and alloy) before complete cooling to room temperature. Both variants allow the workpieces to be processed in the quenched state for some time; cold curing begins only after a longer waiting period. Longer treatment times increase the possible storage period but reduce the formability. Some of these procedures are protected by patents.
Stabilization has other advantages: the material is then in a defined state, which allows repeatable results in subsequent processing. Otherwise, for example, the duration of interim ageing would affect the springback during bending, so that a constant bending angle could not be achieved over several workpieces.
Influence of cold forming
Forming (forging, rolling, bending) leads to strain hardening in metals and alloys, an important means of increasing strength. With AlMgSi, however, it also influences the subsequent age hardening. Cold forming in the hot-cured state, on the other hand, is not possible due to the low ductility in that state.
Although cold forming directly after quenching increases the strength through strain hardening, it reduces the strength increase obtainable by subsequent hot curing and largely prevents it at degrees of deformation of 10% or more.
On the other hand, cold forming in a partially or fully cold-hardened state also increases the strength, so that both effects add up.
If cold forming (in the quenched or cold-cured state) is followed by hot curing, the latter proceeds more quickly, but the achievable strength is reduced. The higher the strain hardening, the higher the yield point, but the tensile strength does not increase. If, on the other hand, the cold forming takes place in the stabilized state, the achievable strength values improve.
Applications
AlMgSi is one of the aluminum alloys with medium to high strength, high fracture resistance, good welding suitability, corrosion resistance and formability.
They are used, among other things, for bumpers, car bodies and large profiles in rail vehicle construction. In the latter case, they were largely responsible for the changed design of rail vehicles in the 1970s: previously, riveted tube structures were used, whereas the good extrudability of AlMgSi now allows large profiles to be produced, which can then be welded. They are also used in aircraft construction, although AlCu and AlZnMg are preferred there, even though these are difficult or impossible to weld. The weldable higher-strength AlMgSiCu alloys (AA6013 and AA6056) are used in the Airbus models A318 and A380 for ribbed sheets in the fuselage, where laser welding makes weight and cost savings possible. Welding is cheaper than the riveting usual in aircraft construction; the overlaps required for riveting can be eliminated during welding, which saves component mass.
References
Further reading
Aluminium–magnesium–silicon alloys | Aluminium–magnesium–silicon alloys | [
"Chemistry"
] | 2,524 | [
"Alloys",
"Aluminium alloys"
] |
73,123,724 | https://en.wikipedia.org/wiki/Uzbekistan%20cough%20syrup%20scandal | The Uzbekistan cough syrup scandal was a series of poisonings that resulted in the deaths of 18 children in Samarkand and two more children elsewhere in Uzbekistan in December 2022 and January 2023. It was caused by the toxic levels of diethylene glycol and ethylene glycol in cold medicines produced by the Indian company Marion Biotech, such as the Dok-1 Max brand. Subsequently, the Indian government investigated Marion Biotech's manufacturing processes, while Uzbek authorities opened a criminal case against members of the health system that had contributed to the children's deaths, such as regulatory officials and pharmacy administrators.
On December 22, the sale and distribution of Dok-1 Max was temporarily suspended in Uzbekistan, followed by the complete ban of Marion-produced cough syrups one week later. Uzbek customs officials prevented the distribution of 60,000 boxes of Dok-1 Max in response to the new regulations. In late December, the Indian government also ordered the company's manufacturing plant in Noida, Uttar Pradesh, to shut down production, and on January 10, 2023, it suspended Marion's license to conduct business.
Shavkat Mirziyoyev, the current President of Uzbekistan, removed Sardor Kariyev from directing the Ministry of Health's Agency for the Development of the Pharmaceutical Network for failed regulatory oversight. The World Health Organization is continuing to investigate the poisonings.
Background
Dok-1 Max is a combination drug produced in syrup and tablet forms, used to resolve symptoms of acute respiratory diseases. It is mainly used in cases of whooping cough, runny nose, wet cough, sore throat, angina, headache, body aches, and fever. Despite the packaging indicating that the syrup can be safely used for children aged 2 and over, other companies only recommend use among those over 12 years of age. Each 10 ml dose of the drug contains paracetamol (500 mg), guaifenesin (200 mg), phenylephrine hydrochloride (10 mg), and minor amounts of sorbitol, propylene glycol, sodium benzoate, citric acid monohydrate, sodium saccharin, sucralose, glycerol, caramel coloring, ice lemon flavoring, menthol, and purified water.
The drug was available in Uzbekistan from 2012 to 2022 and imported under the limited liability company Quramax Medical.
Toxicity
The Uzbek Ministry of Health's toxicology studies identified substitution of the cough syrup's propylene glycol with diethylene glycol and ethylene glycol, toxic substances that can cause vomiting, seizures, syncope, acute renal failure, and cardiovascular disease. Marion Biotech allowed adulteration with diethylene glycol and ethylene glycol at levels approximately 300 times higher than regulatory limits.
Incidents
An official letter from the head of the Children's Multidisciplinary Medical Center of the Samarkand region, Mamatkul Azizov, to the head of the Health Department of the regional government, Davronbek Jumaniozov, on December 15, describes the tragic circumstances related to Dok-1 Max as having happened "in the last 2 months".
Oltinoy Esanova, the deputy head of the Syrdarya Region health department, reported that the drug had a negative effect on seven children: three of the patients were in the intensive care unit, and four had recovered. The report advised parents to refer children under 6 years of age who had received Dok-1 Max to family hospitals, even if there were no side effects.
Abduqayum Tokhtaqulov, the head of the health department of Fergana region, reported to the National News Agency that a 3-year-old boy was poisoned. In his message, he noted that the condition of the patient from the city of Kuvasoy was serious and that he was currently being treated on the basis of specially approved standards. Tokhtaqulov also told parents to keep their children away from such drugs.
There were also reports that 9 children were poisoned in Tashkent region. According to the report of the regional administration, until now there had been no cases of death in the region due to the drug. It was reported in the media that five of the children in the region recovered, and the condition of four was stable.
Akmal Askarov, deputy minister of Karakalpakstan, told the state television channel that two children were poisoned by Dok-1 Max and were sent to Tashkent for specialized treatment. Ministry spokesman S. Ziyayev noted that the children's condition was moderate, with renal toxicity observed.
Deaths
Cases of death in young children were recorded in Samarkand (18), Kashkadarya (1) and Namangan (1) regions.
Eighteen children died in the children's multidisciplinary medical center located in Samarkand after taking cough syrup produced in India. The human rights representative of the Oliy Majlis, ombudsman Aliya Yunusova, representatives of the child rights protection sector, and the prosecutor's office conducted a special study of the children who died in this hospital, showing that all of the dead children were under the age of 6, and 15 of them were under the age of 3. The patients who died in Samarkand were from Jizzakh, Kashkadarya, Navoi and Samarkand regions.
On December 29, 2022, the number of children who died as a result of taking this drug reached 20. A child born on August 14, 2021, in Dehqonabad district of Kashkadarya region was brought to the Kashkadarya branch of the Emergency Medical Center of the Republic of Uzbekistan on December 27 under the effects of the syrup. The child died on December 28, despite the medical assistance provided.
One of the family members of the deceased child told Daryo.uz about the development of events:
Out of 43 children with acute respiratory diseases, 20 who died shortly after admission were found to have taken excessive amounts of the Dok-1 Max syrup, sometimes consisting of 2.5 to 5 ml 3-4 times a day for 2–7 days.
Responses
The sale and distribution of imported cough syrup was temporarily suspended on December 22. During a briefing held on December 29, 2022, with the participation of Sevara Ubaidullayeva, the head of the Department of Maternity and Child Protection of the Ministry of Health of Uzbekistan, special recommendations were given to people who took Dok-1 Max, saying that patients who did not develop symptoms in the days after taking the drug should not be alarmed. After it was disclosed that several batches of Dok-1 Max and other similar cold syrups were found to contain toxic substances, their sale was banned, but it was reported that some pharmacies were still engaged in selling such drugs. That month, three imported batches of Dok-1 Max (about 60,000 boxes) were placed under strict control by officials of the State Customs Committee in response to the health crisis.
Ministry of Health
In connection with the incident, officials of the Ministry of Health of Uzbekistan reported that they formed a special working group consisting of qualified specialists of the State Center for Expertise and Standardization of Medicines, Medical Products and Medical Equipment in order to study the negative symptoms caused by the adulterated drug. The message stated that there was a criminal motivation behind the incident, that 7 persons responsible were fired, and that the identified information was sent to the law enforcement authorities. According to the report, the medicine was bought without a doctor's prescription, independently on the recommendation of pharmacists, and was taken in an overdose. Behzod Musayev, the minister of the health department of Republic of Uzbekistan, sent a video message of condolence to the parents and relatives of 18 children who died after taking the Dok-1 Max drug. The head of the Department of Motherhood and Child Protection stated that the result of a study conducted by the special working group appointed by the Ministry showed that the condition observed in children under the influence of syrup had been obtained more than 2 months since its onset; the fact that hospital officials did not provide prompt information on the incident to the ministry on the same day further complicated the situation.
Marion Biotech Pvt. Ltd
World Health Organization
Since December 27, the World Health Organization (WHO) has been in contact with government officials in Uzbekistan regarding the deaths of children who died after consuming Dok-1 Max cough syrup:
The total number of deaths from the toxic cough syrup across three countries of the world was 300, as the World Health Organization announced in a press release:
President of Uzbekistan
Sardor Kariyev, who had been working as the director of the Agency for the Development of the Pharmaceutical Network under the Ministry of Health of Uzbekistan since February 2019, was relieved of his duties at a meeting held by the President of the Republic on December 30. The president also learned about the death of children in Samarkand. He criticized officials of the field who were connected to the situation:
Criminal investigation
Following the scandal, the Indian government launched an investigation into Marion Biotech. The company, which manufactured the Dok-1 Max drug, was registered as a small producer in India in 2010 and as an international exporter in 2016. Quramax Medical LLC, which imported the company's products to Uzbekistan, was registered in Uzbekistan in 2006, headed by Singh Ragvendra Pratar. The Narcotics Licensing and Control Authority of Uttar Pradesh was entrusted with the investigation. On December 29, 2022, Marion Biotech officially stopped the production of its Dok-1 Max cough syrup, which had caused the death of more than 20 children in Uzbekistan. According to the Economic Times, the investigative commission asked the government of Uzbekistan for a report on the children who died as a result of the drug.
In connection with this incident, the Investigation Department of the State Security Service of Uzbekistan initiated a criminal case against the officials of Quramax Medical and the Scientific Center for Standardization of Medicines under Article 186–3, Part 4, Clause "a" of the Criminal Code. Suspects were arrested. Among the seven officials released from their positions was Mamatqul Azizov, who discovered that the syrup was dangerous and that it was this drug that caused the deaths of the children. News began to spread that these officials might be reinstated. The Kun.uz website reported that the newly appointed Minister of Health Amrillo Inoyatov held a meeting with the seven dismissed officials.
The Dok-1 Max drug exported from India to Uzbekistan was subjected to laboratory tests by Scientific Center for Standardization of Medicines LLC. However, the review was not systematic, meaning that the drugs were not thoroughly tested. State security authorities arrested two heads of Quramax Medical LLC and Scientific Center for Standardization of Medicines as suspects in connection with this case.
On December 30, 2022, the Indian government's Drug Standards Control Center and Uttar Pradesh state government's Drug Control Department inspected Marion Biotech's manufacturing plant in Noida, Uttar Pradesh. Their findings led to the immediate suspension of all pharmaceutical production at the factory, affecting Marion products beyond the cough syrups attributed to the deaths.
On January 2, 2023, the sale of all drugs imported by Quramax Medical that were found to contain ethylene glycol and diethylene glycol was stopped. On January 10, the Indian government suspended the license of Marion Biotech, the manufacturer of Dok-1 Max. On January 13, the license of Quramax Medical LLC was revoked by a decision of the Tashkent Interdistrict Economic Court.
According to an article from June 2023, sources at Maya Chemtech India told Reuters that Marion purchased industrial-grade propylene glycol as an ingredient from them. Maya is not licensed to sell pharmaceutical-grade materials. It is not facing charges but the investigation is ongoing. Marion did not test the ingredient it purchased.
In October 2023, the Uttar Pradesh state government accepted Marion Biotech's appeal to resume production of medicines that do not contain propylene glycol, as such products would be unlikely to contain the toxic impurities of diethylene glycol and ethylene glycol.
See also
Toxic cough syrup
Notes
References
External links
Central Asia's Dangerous Pharmaceutical Industry
Health disasters in Uzbekistan
2022 health disasters
2022 in Uzbekistan
Adulteration
Mass poisoning
Medical scandals | Uzbekistan cough syrup scandal | [
"Chemistry"
] | 2,585 | [
"Adulteration",
"Drug safety"
] |
73,124,554 | https://en.wikipedia.org/wiki/Linked-read%20sequencing | Linked-read sequencing, a type of DNA sequencing technology, uses a specialized technique that tags DNA molecules with unique barcodes before fragmenting them. Unlike traditional sequencing technology, where DNA is broken into small fragments and then sequenced individually, resulting in short read lengths that make it difficult to accurately reconstruct the original DNA sequence, the unique barcodes of linked-read sequencing allow scientists to link together DNA fragments that come from the same DNA molecule. A pivotal benefit of this technology lies in the small quantities of DNA required for large genome information output, effectively combining the advantages of long-read and short-read technologies.
History
This sequencing method was originally developed by 10x Genomics in 2015, and was launched under the name 'GemCode' or 'Chromium'. GemCode employed a method of gel bead-based barcoding to amalgamate short DNA fragments. The longer fragments produced by this could then be sequenced using validated technology such as Illumina next-generation sequencing. An updated version of linked-read sequencing was introduced by the same company in 2018, termed 'Linked-Reads V2'. While GemCode uses a single barcode for tagging of both the gel bead and the DNA fragment, Linked-Reads V2 uses separate barcodes for improved detection of genetic variants.
The group that developed the linked-read sequencing technology published their first paper regarding this technology in 2016. The authors of this paper developed the linked-read sequencing technology initially to sequence the genomes of both healthy individuals and cancer patients to determine somatic mutations, copy number variations, and structural variations in cancer genomes. Later that year, another research group combined linked-read sequencing technology with long-read sequencing technology to assemble a human genome. Both studies demonstrated the utility of linked-read sequencing in comprehensive genome analysis and in understanding genetic diseases. However, in 2019, a lawsuit relating to patent infringement resulted in 10x Genomics discontinuing their line of linked-read products.
Method
Overview
Linked-read sequencing is microfluidic-based and needs only nanograms of input DNA. One nanogram of DNA can be distributed across more than 100,000 droplet partitions, where DNA fragments are barcoded and subjected to polymerase chain reactions (PCR). As a result, DNA fragments (or reads) that share the same barcode can be grouped as coming from one single long input DNA molecule, and long-range information can be assembled from short reads.
Steps of Linked-read sequencing:
Sample Preparation: DNA is extracted from a sample (e.g., blood) and cut into fragments 50 to 200 kilobase pairs in length.
Barcode Sequencing: each DNA fragment is labelled with a unique barcode through a process known as "Gel Bead-In Emulsion" (GEM).
Library Preparation: barcoded DNA fragments are amplified with PCR to generate sequencing libraries.
Sequencing: with Illumina next-generation sequencing technology, generate millions to billions of short sequence reads that represent fragments of the original DNA molecules.
Barcode Processing: group short reads into longer fragments based on their barcodes (a minimal grouping sketch is shown after this list).
Downstream Analysis: processed reads are aligned to a reference genome, or used for de novo assembly of complex genomes, haplotype phasing, or identification of structural variations.
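The barcode-processing step referred to in the list above amounts to grouping reads by the barcode they carry. The toy sketch below illustrates only this grouping; the read sequences and barcodes are invented, and real pipelines additionally handle barcode errors, alignment, and quality filtering.

```python
from collections import defaultdict

def group_reads_by_barcode(reads):
    """Group short reads that share a barcode, i.e. reads presumed to come
    from the same long input DNA molecule.
    reads: iterable of (barcode, sequence) pairs."""
    groups = defaultdict(list)
    for barcode, sequence in reads:
        groups[barcode].append(sequence)
    return dict(groups)

# Invented example: two of the three reads carry the same barcode.
reads = [("AACGT", "ACGTTAGC"), ("GGTCA", "TTGACCAT"), ("AACGT", "CGTAGGTA")]
for barcode, sequences in group_reads_by_barcode(reads).items():
    print(barcode, sequences)
```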
Barcode Sequencing
During barcode sequencing, high molecular weight DNA samples that contain the targeted DNA sequence, ranging from fifty to several hundred kilobases in size, are combined with gel beads containing unique barcodes, enzymes, and sequencing reagents. A microfluidic device partitions input DNA molecules into individual nanoliter-sized droplets of water-in-oil emulsion, called GEMs. Each GEM contains gel beads coated with the same barcode and primers, and a small amount of DNA. The primers are complementary to specific regions of the DNA molecule, allowing for amplification of the DNA in the droplets through PCR. The barcodes enable the identification and grouping of sequencing reads that originate from the same long fragment, which is crucial for downstream analysis.
Library Preparation and Sequencing
The barcoded DNA fragments are amplified using PCR to create a library of DNA fragments with identical barcodes. All the fragments derived from a given DNA molecule are tagged with the same barcode. This step increases the quantity of DNA for sequencing and reduces the chances of losing unique DNA fragments during sequencing. Droplets (or GEMs) are later collected in a tube, and the emulsion is broken, releasing the amplified, barcoded DNA sequences.
Standard Illumina next-generation sequencing technology can be used to sequence libraries. During sequencing, the barcodes are read along with the DNA sequences, allowing researchers and scientists to group together DNA fragments that originate from the same DNA molecule. Even though each DNA fragment is typically not fully sequenced, the information from many overlapping fragments in the same genomic region can be combined to reconstruct the long stretches of the genome. Therefore, a genome can be easily assembled from scratch without any prior reference.
Processing
The raw sequencing data is then processed through bioinformatics (e.g., the GemCode analysis software developed by 10x Genomics) to remove low-quality reads and to assign reads to their respective barcodes. Reads can be aligned to a reference genome or assembled de novo to generate long-range contigs. The read alignment step is important for determining the order and orientation of the long DNA fragments, and for identifying genomic variations, such as insertions or deletions.
Applications
De Novo Genome Assembly
Linked-read sequencing can facilitate de novo genome assembly, which involves reconstructing a genome from scratch without any prior reference. Linked-read sequencing enables assembly of large genomic regions and helps improve the completeness and contiguity of the resulting genome. This can be particularly useful for studying organisms that lack a high-quality reference genome, such as non-model organisms or organisms with complex genomes. Many scientists have recently used linked-read sequencing technology for de novo genome assembly in a variety of organisms, including humans, plants, and animals. For example, Dr. Evan Eichler and his research group used linked-read sequencing to assemble the genome of the orangutan, which had previously been difficult to study due to its complex genome. The resulting genome assembly gave scientists new insights into the evolutionary history of primates and the genetic basis of human diseases. Also, the aligned or assembled reads can be used for other genetic investigations or downstream analysis, such as haplotype phasing.
Haplotype Phasing
Haplotype refers to a group of genetic variants inherited together on a chromosome from one parent due to their genetic linkage. Haplotype phasing (also called haplotype estimation) refers to the process of reconstructing individual haplotypes, which is important for determining the genetic basis of diseases. Linked-read sequencing allows consistent coverage of genes related to different diseases, helping scientists to obtain all the regions carrying mutations in targeted genes. For example, in 2018, a group of researchers used linked-read sequencing technology to sequence genetic information from a pregnant woman who was a carrier of a Duchenne muscular dystrophy (DMD) mutation. Linked-read sequencing allowed them to identify the maternal haplotypes and determine the presence of the mutant alleles in the foetal DNA. This non-invasive prenatal diagnosis of DMD demonstrates the clinical applicability of linked-read sequencing.
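As a toy illustration of how barcodes support phasing, the sketch below tallies, for two heterozygous sites, which allele combination is carried by molecules (barcodes) that span both sites; the data are invented, and real phasing tools use far more sophisticated statistical models.

```python
from collections import Counter

def phase_two_sites(observations):
    """observations: iterable of (barcode, allele_at_site1, allele_at_site2)
    from reads sharing a barcode that cover both heterozygous sites.
    Returns the best-supported allele pairing and its barcode count."""
    votes = Counter((a1, a2) for _barcode, a1, a2 in observations)
    return votes.most_common(1)[0]

# Invented example: three molecules link allele A with T, one links G with C,
# suggesting A and T lie on the same haplotype.
obs = [("b1", "A", "T"), ("b2", "A", "T"), ("b3", "G", "C"), ("b4", "A", "T")]
print(phase_two_sites(obs))  # (('A', 'T'), 3)
```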
Structural Variation Analysis
Structural variations, such as deletions, duplications, inversions, translocations, and other rearrangements, are common in human genomes. These variations can have significant impacts on genome functions and have been implicated in many diseases. Linked-read sequencing technology labels all reads that originate from the same long DNA fragment with the same barcode, so it enables the detection of a large number of structural variants. The complexity of structural variants can be resolved with linked-read sequencing, providing a more complete picture of the genomic landscape. Many scientists have already used linked-read sequencing to identify and characterise structural variants in diverse populations, including people with genetic disorders or cancers.
Transcriptome Analysis
Transcriptome analysis is the study of all the RNA transcripts produced by the genome of an organism. Linked-read sequencing has been used by researchers to assemble transcript isoforms and detect alternative splicing events. Information on alternative splicing events can provide insights into the regulation of gene expression in the human transcriptome.
Epigenetic Analysis
Epigenetics refers to the study of heritable changes in genetic activity that are distinct from changes in DNA sequence. Epigenetic analysis involves studying DNA-protein interactions, histone modifications, and DNA methylation. Linked-read sequencing has been used in many studies of DNA methylation patterns. For example, in 2021, a study investigated the DNA methylation differences in peripheral blood cells between twins, in which one twin had Alzheimer’s Disease and the other was cognitively normal. Linked-read sequencing technology allowed researchers to identify more than 3000 differentially methylated regions between these twins discordant for Alzheimer’s Disease, and investigation of these differentially methylated regions eventually led to the identification of genes enriched in neurodevelopmental processes, neuronal signalling, and immune system functions.
Use
Advantages
Wide range of genomic applications and scientific questions, including de novo genome assembly, haplotype phasing, structural variant analysis, and transcriptome and epigenetic analysis.
Accuracy and scalability.
Method requires small quantities of input DNA, which can be beneficial for small samples or single cell studies.
More cost effective per sample in comparison with long-read technologies such as Oxford Nanopore sequencing.
Libraries produced by linked-read can be processed using Illumina short read sequencing, increasing accessibility.
Limitations
Complexity of library construction - this technology requires high-molecular-weight DNA preparation in order to produce long enough DNA molecules for sequencing.
Limitations in read length may result in limited haplotype resolution, which could reduce the efficacy of this technology in highly complex genomic regions.
Controversy
In 2018, Bio-Rad Laboratories filed a lawsuit against 10x Genomics stating that their linked-read technology infringed on three patents that Bio-Rad had licensed from the University of Chicago. Bio-Rad was awarded a sum of $23,930,716 by a jury. 10x Genomics filed a motion for judgment as a matter of law (JMOL) but was denied in 2019, and the court proceedings concluded in 2020. Following this lawsuit, 10x Genomics discontinued their linked-read assay. An exception was made for linked-read products which had already been sold by the company prior to the lawsuit, allowing 10x Genomics to continue to provide those researchers with services such as support and warranty maintenance for this technology.
References
Molecular biology
Biotechnology | Linked-read sequencing | [
"Chemistry",
"Biology"
] | 2,224 | [
"Biochemistry",
"nan",
"Biotechnology",
"Molecular biology"
] |
73,127,816 | https://en.wikipedia.org/wiki/Exploration-exploitation%20dilemma | The exploration-exploitation dilemma, also known as the explore-exploit tradeoff, is a fundamental concept in decision-making that arises in many domains. It is depicted as the balancing act between two opposing strategies. Exploitation involves choosing the best option based on current knowledge of the system (which may be incomplete or misleading), while exploration involves trying out new options that may lead to better outcomes in the future at the expense of an exploitation opportunity. Finding the optimal balance between these two strategies is a crucial challenge in many decision-making problems whose goal is to maximize long-term benefits.
Application in machine learning
In the context of machine learning, the exploration-exploitation tradeoff is fundamental in reinforcement learning (RL), a type of machine learning that involves training agents to make decisions based on feedback from the environment. Crucially, this feedback may be incomplete or delayed. The agent must decide whether to exploit the current best-known policy or explore new policies to improve its performance.
Multi-armed bandit methods
The multi-armed bandit (MAB) problem is a classic example of the tradeoff, and many methods have been developed for it, such as epsilon-greedy, Thompson sampling, and the upper confidence bound (UCB). See the page on MAB for details.
In more complex RL situations than the MAB problem, the agent can treat each choice as a MAB, where the payoff is the expected future reward. For example, if the agent uses the epsilon-greedy method, it would usually "pull the best lever" by picking the action with the best predicted expected reward (exploit), but with probability epsilon it would instead pick a random action (explore). The Monte Carlo Tree Search, for example, uses a variant of the UCB method.
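As an illustration, a minimal epsilon-greedy bandit loop in Python (illustrative only; the value estimates are simple incremental sample means, and all names and the epsilon value are arbitrary choices):

import random

def epsilon_greedy_bandit(arms, n_steps, epsilon=0.1):
    # arms: list of functions, each returning a stochastic reward when called.
    counts = [0] * len(arms)
    values = [0.0] * len(arms)            # running mean reward per arm
    for _ in range(n_steps):
        if random.random() < epsilon:     # explore: random arm
            i = random.randrange(len(arms))
        else:                             # exploit: best arm so far
            i = max(range(len(arms)), key=lambda k: values[k])
        reward = arms[i]()
        counts[i] += 1
        values[i] += (reward - values[i]) / counts[i]   # incremental mean
    return values

# Example: two Bernoulli arms with success probabilities 0.3 and 0.6.
# estimates = epsilon_greedy_bandit(
#     [lambda: float(random.random() < 0.3),
#      lambda: float(random.random() < 0.6)], n_steps=10_000)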
Exploration problems
There are some problems that make exploration difficult.
Sparse reward. If rewards occur only once in a long while, then the agent might not persist in exploring. Furthermore, if the space of actions is large, a sparse reward cannot guide the agent toward a good direction for deeper exploration. A standard example is Montezuma's Revenge.
Deceptive reward. If some early actions give an immediate small reward, but other actions give a larger reward later, then the agent might be lured away from exploring the other actions.
Noisy TV problem. If certain observations are irreducibly noisy (such as a television showing random images), then the agent might be trapped exploring those observations (watching the television).
Exploration reward
This section is based on.
The exploration reward (also called exploration bonus) methods convert the exploration-exploitation dilemma into a balance of exploitations. That is, instead of trying to get the agent to balance exploration and exploitation, exploration is simply treated as another form of exploitation, and the agent simply attempts to maximize the sum of rewards from exploration and exploitation. The exploration reward can be treated as a form of intrinsic reward.
We write these as $r^i_t$ and $r^e_t$, meaning the intrinsic and extrinsic rewards at time step $t$.
However, exploration reward is different from exploitation in two regards:
The reward of exploitation is not freely chosen, but given by the environment, while the reward of exploration may be picked freely. Indeed, there are many different ways to design the intrinsic reward $r^i_t$, described below.
The reward of exploitation is usually stationary (i.e. the same action in the same state gives the same reward), but the reward of exploration is non-stationary (i.e. the same action in the same state should give less and less reward).
Count-based exploration uses $N_t(s)$, the number of visits to a state $s$ during the time-steps $1, \dots, t$, to calculate the exploration reward. This is only possible in a small and discrete state space. Density-based exploration extends count-based exploration by using a density model $\rho(s)$. The idea is that, if a state has been visited, then nearby states are also counted as partly visited.
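A count-based bonus can be sketched in a few lines of Python (the 1/sqrt(N) form and the beta coefficient are common but arbitrary illustrative choices):

import math
from collections import Counter

class CountBonus:
    # Exploration bonus beta / sqrt(N(s)) for a small, discrete state space.
    # The bonus shrinks as a state is visited more often (non-stationary reward).
    def __init__(self, beta=0.1):
        self.beta = beta
        self.visits = Counter()

    def bonus(self, state):
        self.visits[state] += 1
        return self.beta / math.sqrt(self.visits[state])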
In maximum entropy exploration, the entropy of the agent's policy is included as a term in the intrinsic reward. That is, $r^i_t = \beta \, H(\pi(\cdot \mid s_t))$ for some coefficient $\beta > 0$.
Prediction-based
This section is based on.
The forward dynamics model is a function $f$ for predicting the next state based on the current state and the current action: $f(s_t, a_t) \approx s_{t+1}$. The forward dynamics model is trained as the agent plays. The model becomes better at predicting state transitions for state-action pairs that have been performed many times.
A forward dynamics model can define an exploration reward by $r^i_t = \| f(s_t, a_t) - s_{t+1} \|^2$. That is, the reward is the squared error of the prediction compared to reality. This rewards the agent for performing state-action pairs that have not been tried many times. This is, however, susceptible to the noisy TV problem.
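A sketch of this bonus in Python with NumPy (the dynamics model is an arbitrary callable here; in practice it would be a trained neural network):

import numpy as np

def prediction_error_bonus(model, state, action, next_state):
    # Intrinsic reward = squared error of the forward dynamics prediction.
    predicted = np.asarray(model(state, action))
    return float(np.sum((np.asarray(next_state) - predicted) ** 2))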
The dynamics model can be run in latent space. That is, $f(\phi(s_t), a_t) \approx \phi(s_{t+1})$ for some featurizer $\phi$. The featurizer can be the identity function (i.e. $\phi(s) = s$), randomly generated, the encoder half of a variational autoencoder, etc. A good featurizer improves forward dynamics exploration.
The Intrinsic Curiosity Module (ICM) method trains simultaneously a forward dynamics model and a featurizer. The featurizer is trained by an inverse dynamics model, which is a function for predicting the current action based on the features of the current and the next state: $g(\phi(s_t), \phi(s_{t+1})) \approx a_t$. By optimizing the inverse dynamics, both the inverse dynamics model and the featurizer are improved. Then, the improved featurizer improves the forward dynamics model, which improves the exploration of the agent.
The Random Network Distillation (RND) method attempts to solve this problem by teacher-student distillation. Instead of a forward dynamics model, it has two models: a fixed teacher network $f_{\text{teacher}}$ and a student network $f_{\text{student}}$. The teacher model is fixed, and the student model is trained to minimize the prediction error $\| f_{\text{student}}(s) - f_{\text{teacher}}(s) \|^2$ on visited states $s$. As a state is visited more and more, the student network becomes better at predicting the teacher. Meanwhile, the prediction error is also an exploration reward for the agent, and so the agent learns to perform actions that result in higher prediction error. Thus, we have a student network attempting to minimize the prediction error, while the agent attempts to maximize it, resulting in exploration.
The states are normalized by subtracting a running average and dividing by a running variance, which is necessary since the teacher model is frozen. The rewards are normalized by dividing by a running variance.
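A minimal RND-style sketch (Python with NumPy; real implementations use deep networks, batched training and the normalisations described above, which are omitted here, and all sizes and the learning rate are illustrative):

import numpy as np

class RND:
    # Fixed random "teacher" map and a trained "student" map; the intrinsic
    # reward is the student's squared prediction error, which shrinks as a
    # state is visited (and trained on) more often.
    def __init__(self, state_dim, feat_dim=16, lr=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        self.teacher = rng.normal(size=(state_dim, feat_dim))   # frozen
        self.student = np.zeros((state_dim, feat_dim))          # trainable
        self.lr = lr

    def intrinsic_reward(self, state):
        state = np.asarray(state, dtype=float)
        target = state @ self.teacher
        err = state @ self.student - target
        # One gradient step on the student to reduce the prediction error.
        self.student -= self.lr * np.outer(state, 2.0 * err)
        return float(np.sum(err ** 2))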
Exploration by disagreement trains an ensemble of forward dynamics models, each on a random subset of all observed $(s_t, a_t, s_{t+1})$ tuples. The exploration reward is the variance of the models' predictions.
Noise
For neural-network based agents, the NoisyNet method replaces some of the network's modules with noisy versions. That is, some network parameters are random variables drawn from a probability distribution, and the parameters of the distribution are themselves learnable. For example, in a linear layer $y = Wx + b$, both $W$ and $b$ are sampled from Gaussian distributions at every step, and the parameters of those distributions are learned via the reparameterization trick.
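A noisy linear layer can be sketched as follows (Python with NumPy; forward pass only, with the mu and sigma parameters assumed to be learned elsewhere by backpropagation, and the initial sigma value chosen arbitrarily):

import numpy as np

class NoisyLinear:
    # y = (W_mu + W_sigma * eps_W) x + (b_mu + b_sigma * eps_b),
    # with fresh Gaussian noise eps sampled at every forward pass.
    def __init__(self, in_dim, out_dim, sigma0=0.02, seed=0):
        self.rng = np.random.default_rng(seed)
        self.w_mu = self.rng.normal(scale=1.0 / np.sqrt(in_dim), size=(out_dim, in_dim))
        self.w_sigma = np.full((out_dim, in_dim), sigma0)
        self.b_mu = np.zeros(out_dim)
        self.b_sigma = np.full(out_dim, sigma0)

    def __call__(self, x):
        eps_w = self.rng.normal(size=self.w_mu.shape)
        eps_b = self.rng.normal(size=self.b_mu.shape)
        w = self.w_mu + self.w_sigma * eps_w
        b = self.b_mu + self.b_sigma * eps_b
        return w @ np.asarray(x, dtype=float) + b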
References
Machine learning
Strategy
Cognition | Exploration-exploitation dilemma | [
"Engineering"
] | 1,376 | [
"Artificial intelligence engineering",
"Machine learning"
] |
60,916,154 | https://en.wikipedia.org/wiki/Skirret%20%28tool%29 | A skirret is an archaic form of chalk line. It is a wooden tool shaped like the letter "T", historically used to ensure the foundation of a building was straight by laying down string as a marker. Today it is obsolete and little known, save for its use in some Freemasonry ceremonies.
It is shaped like the letter "T", with two horizontal pieces of wood: one at the top and one about halfway down the vertical stake. The two horizontal cross-pieces are connected by a dowel at each end, around which a long length of string is wound.
To use it, the craftsman unwound the string from its spindle and used it to lay out the dimensions of the structure being built, working from a centre pin from which a line was drawn out to mark the ground. In certain instances, with the spindle as the centre, the skirret could also have been used for drawing a circle.
References
Surveying instruments
Orientation (geometry)
Carpentry tools
Freemasonry | Skirret (tool) | [
"Physics",
"Mathematics"
] | 202 | [
"Topology",
"Space",
"Geometry",
"Spacetime",
"Orientation (geometry)"
] |
60,921,299 | https://en.wikipedia.org/wiki/Hasse%E2%80%93Schmidt%20derivation | In mathematics, a Hasse–Schmidt derivation is an extension of the notion of a derivation. The concept was introduced by .
Definition
For a (not necessarily commutative nor associative) ring B and a B-algebra A, a Hasse–Schmidt derivation is a map of B-algebras
$D \colon A \to A[[t]]$
taking values in the ring of formal power series with coefficients in A. This definition is found in several places, such as , which also contains the following example: for A being the ring of infinitely differentiable functions (defined on, say, Rn) and B=R, the map
is a Hasse–Schmidt derivation, as follows from applying the Leibniz rule iteratively.
Equivalent characterizations
shows that a Hasse–Schmidt derivation is equivalent to an action of the bialgebra
of noncommutative symmetric functions in countably many variables Z1, Z2, ...: the part of D which picks out the coefficient of $t^i$ is the action of the indeterminate Zi.
Applications
Hasse–Schmidt derivations on the exterior algebra of some B-module M have been studied by . Basic properties of derivations in this context lead to a conceptual proof of the Cayley–Hamilton theorem. See also .
References
Abstract algebra | Hasse–Schmidt derivation | [
"Mathematics"
] | 258 | [
"Abstract algebra",
"Algebra"
] |
60,923,680 | https://en.wikipedia.org/wiki/Sukbok%20Chang | Sukbok Chang (; born August 1, 1962) is a South Korean organic chemist. He is a distinguished professor in the Department of Chemistry at Korea Advanced Institute of Science and Technology (KAIST). He is also the director of the Institute for Basic Science (IBS) Center for Catalytic Hydrocarbon Functionalizations (CCHF). He was an associate editor on ACS Catalysis and has served on the editorial advisory boards of The Journal of Organic Chemistry, Journal of the American Chemical Society, and Accounts of Chemical Research. His major research interest is transition metal catalyzed C-H bond functionalization for the carbon-carbon bond and carbon-heteroatom bond formation.
Career
Sukbok Chang received his B.S. degree from Korea University in 1985, and his M.S. degree from KAIST in 1987. He then joined Eric N. Jacobsen's group and received his PhD in 1996 at Harvard University. He subsequently worked with Robert H. Grubbs at Caltech as a postdoctoral fellow from 1996 to 1998. In early 1998, he joined the faculty of Ewha Womans University as an assistant professor, and moved to KAIST as a full professor in 2002. In 2012, he was selected as director of the Center for Catalytic Hydrocarbon Functionalizations at the Institute for Basic Science, the largest Korean government-funded research institute. He has also been working as an associate editor of the journal ACS Catalysis since 2015. In 2023, he was selected to co-run the KAIST Cross Generation Creation Lab, a laboratory designed to pass on the know-how of professors about to retire through collaboration with younger professors.
Major contributions
Chang's group studies new organic reactions and mechanisms with transition metal catalysis. In particular, his group contributed to the development of "copper catalyzed multicomponent coupling" in the 2000s. Since 2008, his group has focused on C-H functionalization and made a number of contributions.
Copper-catalyzed multicomponent coupling
Cu-catalyzed multicomponent coupling is a notable process developed by Chang's group. In 2005, they published a highly efficient and mild catalytic three-component coupling between an alkyne, a sulfonyl azide, and an amine. Unlike click chemistry, which generates 1,4-triazoles as products, in this case a Cu(I) catalyst, sulfonyl azide and alkyne generate a ketenimine intermediate after releasing N2 gas. This electrophilic ketenimine intermediate reacts with amines to generate asymmetric imines as products. Chang's group also showed that water, alcohols, the C3 position of pyrrole and other nucleophiles can be used in this reaction.
Rhodium, Iridium-catalyzed C-N bond formation
Rhodium- or iridium-catalyzed C-H amidation and amination are other achievements of his group. In 2012, his group published a rhodium-catalyzed intermolecular amidation of arenes using a sulfonyl azide as the nitrene precursor. This reaction generates N2 as the single byproduct, does not require an external oxidant, and has broad substrate scope and high functional group tolerance. Chang's group advanced this work by using different directing groups, different azides and various substrates. They also showed that iridium works well for C-H amidation/amination.
In 2016, Chang's group discovered new nitrogen sources. Their new nitrene precursor, 1,4,2-dioxazol-5-one, is more convenient to prepare, store and use compared to azides. Moreover, it has a strong affinity for the rhodium or iridium metal center, and thus gives excellent amidation efficiency. They later published the selective formation of gamma-lactams via C-H amidation with this type of nitrene precursor.
Honors and awards
2023: The Asian Scientist 100, Asian Scientist
2022: Ho-Am Prize in Science
2019: Top Scientist and Technologist Award of Korea, Korean Federation of Science and Technology Societies ()
2018: Korea Toray Science Award, Korea Toray Science Foundation
2018: JSPS Invitational Fellowship, Japan Society for the Promotion of Science
2018: Grand Academic Research Award, KAIST
2017: 1st ACS-KCS Excellence Award, American Chemical Society and Korean Chemical Society
2017: Humboldt Research Award, Alexander von Humboldt Foundation
2016: Yoshida Prize, International Organic Chemistry Foundation
2015-2020: Highly Cited Researcher in chemistry, Clarivate Analytics
2015: Knowledge Creation Award, Ministry of Science, ICT and Future Planning
2014: Member of the Korean Academy of Science and Technology
2013: Korea Science Award
2013: Kyung-Ahm Prize, Kyung-Ahm Education & Cultural Foundation
2010: KCS Academic Award, Korean Chemical Society
2008: Star Faculty, Korea Research Foundation
2006: One of 50 Representative Research Performances, Korea Science & Engineering Foundation
2005: Shim Sang Cheol Award, Korean Chemical Society
2003: Thieme Journals Award, Synlett/Synthesis Professorship Award in Asian Area
2002: Young Chemist Award, Korean Chemical Society sponsored by WILEY
References
1962 births
Living people
Academic staff of Ewha Womans University
Harvard University alumni
KAIST alumni
Academic staff of KAIST
Korea University alumni
Institute for Basic Science
South Korean organic chemists
Recipients of the Ho-Am Prize in Science | Sukbok Chang | [
"Chemistry"
] | 1,118 | [
"South Korean organic chemists",
"Organic chemists"
] |
59,499,174 | https://en.wikipedia.org/wiki/Herrmann%27s%20catalyst | Herrmann's catalyst is an organopalladium compound that is a popular catalyst for the Heck reaction. It is a yellow air-stable solid that is soluble in organic solvents. Under conditions for catalysis, the acetate group is lost and the Pd-C bond undergoes protonolysis, giving rise to a source of "".
The complex is made by the reaction of tris(o-tolyl)phosphine with palladium(II) acetate.
Many analogues of Herrmann's catalyst have been developed, e.g. palladacycles obtained from 2-aminobiphenyl.
References
Organopalladium compounds
Dimers (chemistry)
Phosphine complexes | Herrmann's catalyst | [
"Chemistry",
"Materials_science"
] | 147 | [
"Dimers (chemistry)",
"Polymer chemistry"
] |
62,136,161 | https://en.wikipedia.org/wiki/Composable%20disaggregated%20infrastructure | Composable disaggregated infrastructure (CDI), sometimes stylized as composable/disaggregated infrastructure, is a technology that allows enterprise data center operators to achieve the cost and availability benefits of cloud computing using on-premises networking equipment. It is considered a class of converged infrastructure, and uses management software to combine compute, storage and network elements. It is similar to public cloud, except the equipment sits on premises in an enterprise data center.
Overview
American market intelligence firm International Data Corporation (IDC) describes CDI as "an emerging category of infrastructure systems that make use of high-bandwidth, low-latency interconnects to aggregate compute, storage, and networking fabric resources into shared resource pools that can be available for on-demand allocation."
These systems use what is sometimes called rack-scale architecture, which allows network operators to replace components on a rack while the entire data center behaves as a virtualized server. This allows operators to allocate compute, memory, and storage resources inside each server node on-demand, over a high speed, low latency computing fabric. The individual components can be managed as a resource pool, allowing dynamic provisioning and deprovisioning with a common application programming interface (API). No hardware configuration is required.
Technology
Composability refers to the composer, which is another term for the software that allows the server resources, which include compute, storage, and RAM, to be placed into a pool to become available for applications and workloads. Disaggregation is the process of aggregating server resources with the resources of other servers in the data center. These aggregated or pooled resources can be shared by applications or workloads. The composer software controls how much of each disaggregated resource is needed from each server.
The use of software APIs to provision resources without having to directly program any individual hardware device is known as programmatic control. Operators can use open APIs in composable infrastructure to integrate third-party software and hardware with proprietary solutions.
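As a purely hypothetical illustration of programmatic control in Python: the endpoint path, payload fields and response format below are invented for this sketch and do not correspond to any vendor's actual composition API.

import requests

def compose_node(api_base, token, cpus, memory_gib, storage_tib):
    # Ask a (hypothetical) composer service to assemble a logical server
    # from pooled compute, memory and storage resources.
    payload = {
        "compute": {"cpus": cpus},
        "memory": {"capacity_gib": memory_gib},
        "storage": {"capacity_tib": storage_tib},
    }
    response = requests.post(
        f"{api_base}/v1/composed-nodes",                  # hypothetical endpoint
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["id"]                          # hypothetical field

A matching DELETE call against the same resource would deprovision the node and return its components to the shared pools.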
References
Computing terminology
Network architecture
Data centers
IT infrastructure | Composable disaggregated infrastructure | [
"Technology",
"Engineering"
] | 437 | [
"Computing terminology",
"IT infrastructure",
"Data centers",
"Network architecture",
"Computer networks engineering",
"Information technology",
"Computers"
] |
62,138,896 | https://en.wikipedia.org/wiki/Halperin%20conjecture | In rational homotopy theory, the Halperin conjecture concerns the Serre spectral sequence of certain fibrations. It is named after the Canadian mathematician Stephen Halperin.
Statement
Suppose that $F \to E \to B$ is a fibration of simply connected spaces such that the fiber $F$ is rationally elliptic and $\chi(F) \neq 0$ (i.e., $F$ has non-zero Euler characteristic). Then the Serre spectral sequence associated to the fibration collapses at the $E_2$ page.
Status
As of 2019, Halperin's conjecture is still open. Gregory Lupton has reformulated the conjecture in terms of formality relations.
Notes
Further reading
Homotopy theory
Spectral sequences
Conjectures | Halperin conjecture | [
"Mathematics"
] | 130 | [
"Unsolved problems in mathematics",
"Topology stubs",
"Conjectures",
"Topology",
"Mathematical problems"
] |
62,139,505 | https://en.wikipedia.org/wiki/Vapor%20etching | Vapor etching refers to a process used in the fabrication of Microelectromechanical systems (MEMS) and Nanoelectromechanical systems (NEMS). Sacrificial layers are isotropically etched using gaseous acids such as Hydrogen fluoride and Xenon difluoride to release the free standing components of the device.
Economic advantages and novel technological possibilities result from micron- to nano-scale size reductions (MEMS to NEMS). The small dimensions mean that the isotropic wet etch processes traditionally used in microfabrication suffer from stiction, the permanent adherence of the free-standing structure to the underlying substrate due to the scaling of surface effects occurring during the drying of the acid. Vapor etching overcomes stiction because no liquids are used during the etch process. Commonly, hydrogen fluoride and xenon difluoride are used to etch silicon dioxide and silicon sacrificial layers, respectively.
HF vapor etching
The wet etching of SiO2 in buffered hydrogen fluoride solutions is a common and well understood process in microfabrication. In 1966, Holmes and Snell found that SiO2 can also be etched in hydrogen fluoride vapor. Initially the interest in this finding was low, because wet etch processes have higher etch rates and do not require sophisticated equipment. With the advent of MEMS technology and the subsequent reduction in size, however, stiction started to have a significant impact on production yields, and HF vapor etching became an interesting commercial fabrication technology. A water or alcohol catalyst is required, because anhydrous HF does not etch SiO2.
Etch chemistry
The etch chemistry depends on the catalyst used.
Etch reaction with water catalyst
If water is used, the H2O is adsorbed at the SiO2 surface and forms silanol groups:
SiO2 + 2 H2O → Si(OH)4
The HF reacts with the silanol groups and forms SiF4 and H2O according to the following reaction:
Si(OH)4 + 4 HF → SiF4 + 4 H2O
The etch process commonly takes place at reduced pressures to promote the desorption of the reaction products. Water is formed during the etch reaction, and its efficient removal is critical to prevent the formation of a liquid layer.
Etch reaction with alcohol catalyst
Alternatively, different alcohols such as methanol, ethanol, 1-propanol or IPA can be used to initiate the reaction. An example of this reaction, using methanol (CH3OH), is given below. First, the HF and the methanol are adsorbed on the surface:
CH3OH (g) ↔ CH3OH (ads)
HF (g) ↔ HF (ads)
HF2- is formed by an ionization reaction of the adsorbed HF and adsorbed CH3OH:
2 HF (ads) + CH3OH (ads) → HF2- (ads) + CH3OH2+ (ads)
The ionized HF then reacts with the SiO2 according to the following reaction:
SiO2 (s) + 2 HF2- (ads) + 2 CH3OH2+ (ads) → SiF4 (ads) + 2 H2O (ads) + 2 CH3OH
Finally, the reaction products are removed from the surface by desorption.
XeF2 vapor etching
Xenon difluoride, bromine trifluoride, chlorine trifluoride and fluorine can be used for gaseous silicon etching. Xenon difluoride is most commonly used to etch silicon in academia and industry, because it etches silicon with high selectivity over other semiconductor materials, allows high process control, and is easy to use at room temperature.
Etch systems
The synthesis of XeF2 is an endothermic process which results in a white powder that sublimes at low pressures (P < 4 Torr). The low vapor pressure allowed early researchers and engineers to use it in comparatively simple setups. Modern vapor etch tools are more sophisticated and are characterized by the way the gas is fed into the etch chamber. In pulsed systems the etchant is expanded, fed into the reaction chamber, and remains there until it has been consumed by the reaction. Then the chamber is evacuated and this process is repeated for multiple cycles. In contrast, in continuous flow systems a carrier gas flows through a bubbler to continuously supply xenon difluoride into the etch chamber.
Etch chemistry
The general etch reaction is summarized by the following equation:
2 XeF2 + Si → SiF4 + 2 Xe
The detailed etch kinetics are a more complex reaction consisting of four steps. After the etchant has been mass-transported to the silicon surface, the xenon difluoride is adsorbed on the silicon surface:
2 XeF2 (g) + Si (s) → 2 XeF2 (ads) + Si (s)
The XeF2 dissociates into adsorbed fluorine and gaseous Xe:
2 XeF2 (ads) + Si (s) → 2 Xe (g) + 4 F (ads) + Si (s)
The fluorine bonds with the surface silicon to form silicon tetrafluoride:
2 Xe (g) + 4 F (ads) + Si (s) → 2 Xe (g) + SiF4 (ads)
The reaction product is desorbed from the silicon surface:
2 Xe (g) + SiF4 (ads) → 2 Xe (g) + SiF4 (g)
The reaction products are mass-transferred from the surface to the etch chamber, and ejected from there by a vacuum pump.
References
Electromechanical engineering | Vapor etching | [
"Engineering"
] | 1,228 | [
"Electrical engineering",
"Electromechanical engineering",
"Mechanical engineering by discipline"
] |
62,140,469 | https://en.wikipedia.org/wiki/H3K4me1 | H3K4me1 is an epigenetic modification to the DNA packaging protein Histone H3. It is a mark that indicates the mono-methylation at the 4th lysine residue of the histone H3 protein and often associated with gene enhancers.
Nomenclature
H3K4me1 indicates monomethylation of lysine 4 on histone H3 protein subunit:
Lysine methylation
This diagram shows the progressive methylation of a lysine residue. The mono-methylation (second from left) denotes the methylation present in H3K4me1.
Understanding histone modifications
The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA are known as chromatin. The basic structural unit of chromatin is the nucleosome: this consists of the core octamer of histones (H2A, H2B, H3 and H4) as well as a linker histone and about 180 base pairs of DNA. These core histones are rich in lysine and arginine residues. The carboxyl (C) terminal end of these histones contribute to histone-histone interactions, as well as histone-DNA interactions. The amino (N) terminal charged tails are the site of the post-translational modifications, such as the one seen in H3K4me1.
Mechanism and function of modification
H3K4me1 is enriched at active and primed enhancers. Transcriptional enhancers control cell-identity gene expression and are important for cell identity. Enhancers are primed by the histone H3K4 mono-/di-methyltransferase MLL4 and then are activated by the histone H3K27 acetyltransferase p300. H3K4me1 fine-tunes enhancer activity and function rather than controlling it. H3K4me1 is deposited by KMT2C (MLL3) and KMT2D (MLL4).
LSD1, and the related LSD2/KDM1B demethylate H3K4me1 and H3K4me2.
Marks associated with active gene transcription like H3K4me1 and H3K9me1 have very short half-lives.
H3K4me1 with MLL3/4 can also act at promoters and repress genes.
Relationship with other modifications
H3K4me1 is a chromatin signature of enhancers, H3K4me2 is highest toward the 5′ end of transcribing genes and H3K4me3 is highly enriched at promoters and in poised genes. H3K27me3, H4K20me1 and H3K4me1 silence transcription in embryonic fibroblasts, macrophages, and human embryonic stem cells (ESCs).
Enhancers that have two opposing marks like the active mark H3K4me1 and repressive mark H3K27me3 at the same time are called bivalent or poised. These bivalent enhancers convert and become enriched with H3K4me1 and acetylated H3K27 (H3K27ac) after differentiation.
Epigenetic implications
The post-translational modification of histone tails by either histone modifying complexes or chromatin remodelling complexes are interpreted by the cell and lead to complex, combinatorial transcriptional output. It is thought that a Histone code dictates the expression of genes by a complex interaction between the histones in a particular region. The current understanding and interpretation of histones comes from two large scale projects: ENCODE and the Epigenomic roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states which define genomic regions by grouping the interactions of different proteins and/or histone modifications together.
Chromatin states were investigated in Drosophila cells by looking at the binding locations of proteins in the genome. Use of ChIP-sequencing revealed regions in the genome characterised by different banding. Different developmental stages were profiled in Drosophila as well, with an emphasis placed on the relevance of histone modifications. A look into the data obtained led to the definition of chromatin states based on histone modifications. Certain modifications were mapped and enrichment was seen to localize in certain genomic regions. Five core histone modifications were found, with each one being linked to various cell functions.
H3K4me1: primed enhancers
H3K4me3: promoters
H3K36me3: gene bodies
H3K27me3: polycomb repression
H3K9me3: heterochromatin
The human genome was annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence enforces the epigenetic nature of histone modifications. Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers. This additional level of annotation allows for a deeper understanding of cell specific gene regulation.
Clinical significance
Suppression of the H3K4 mono- and di-demethylase LSD-1 might extend lifespan in various species.
H3K4me allows binding of MDB and increased activity of DNMT1 which could give rise to CpG island methylator phenotype (CIMP). CIMP is a type of colorectal cancers caused by the inactivation of many tumor suppressor genes from epigenetic effects.
Methods
The histone mark H3K4me1 can be detected in a variety of ways:
1. Chromatin Immunoprecipitation Sequencing (ChIP-sequencing) measures the amount of DNA enrichment once bound to a targeted protein and immunoprecipitated. It results in good optimization and is used in vivo to reveal DNA-protein binding occurring in cells. ChIP-Seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region.
2. Micrococcal Nuclease sequencing (MNase-seq) is used to investigate regions that are bound by well-positioned nucleosomes. The micrococcal nuclease enzyme is used to identify nucleosome positioning. Well-positioned nucleosomes are seen to have enrichment of sequences.
3. Assay for transposase accessible chromatin sequencing (ATAC-seq) is used to look into regions that are nucleosome-free (open chromatin). It uses a hyperactive Tn5 transposase to highlight nucleosome localisation.
See also
Histone methylation
Histone methyltransferase
Methyllysine
References
Epigenetics
Post-translational modification | H3K4me1 | [
"Chemistry"
] | 1,457 | [
"Post-translational modification",
"Gene expression",
"Biochemical reactions"
] |
67,380,762 | https://en.wikipedia.org/wiki/Marine%20geophysics | Marine geophysics is the scientific discipline that employs methods of geophysics to study the world's ocean basins and continental margins, particularly the solid earth beneath the ocean. It shares objectives with marine geology, which uses sedimentological, paleontological, and geochemical methods. Marine geophysical data analyses led to the theories of seafloor spreading and plate tectonics.
Methods
Marine geophysics uses techniques largely employed on the continents, from fields including exploration geophysics and seismology, and methods unique to the ocean such as sonar. Most geophysical instruments are used from surface ships but some are towed near the seafloor or function autonomously, as with Autonomous Underwater Vehicles or AUVs.
Objectives of marine geophysics include determination of the depth and features of the seafloor, the seismic structure and earthquakes in the ocean basins, the mapping of gravity and magnetic anomalies over the basins and margins, the determination of heat flow through the seafloor, and electrical properties of the ocean crust and Earth's mantle.
Navigation
Modern marine geophysics, as with most oceanographic surveying with research ships, use Global Positioning System satellites, either the U.S. GPS array or the Russian GLONASS for ship navigation. Geophysical instruments towed near the seafloor typically use acoustic transponder navigation sonar networks.
Ocean depth
The depth of the seafloor is measured using echo sounding, a sonar method developed during the 20th century and advanced during World War II. Common variations are based on the sonar beam width and the number of sonar beams, as in multibeam sonar or swath mapping, which became more advanced toward the latter half of the 20th century.
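The depth calculation itself is straightforward: half the two-way travel time of the echo multiplied by the speed of sound in seawater. A minimal sketch in Python (the nominal 1500 m/s sound speed is a typical rough value; real surveys apply measured sound-velocity profiles):

def depth_from_echo(two_way_time_s, sound_speed_m_s=1500.0):
    # Water depth from the two-way travel time of an echo-sounder ping.
    return sound_speed_m_s * two_way_time_s / 2.0

# Example: a 6.0 s echo corresponds to roughly 4500 m of water.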
Sedimentary cover of the seafloor
The thickness and type of sediments covering the ocean crust are estimated using the seismic reflection technique. This method was highly advanced by offshore oil exploration companies. The method employs a sound source at the ship with much lower frequencies than echo sounding, and an array of hydrophones towed by the ship, that record echoes from the internal structure of the sediment cover and the crust below the sediment. In some cases, reflections from the internal structure of the ocean crust can be detected. Echo sounders that use lower frequencies near 3.5 kHz are used to detect both the seafloor and shallow structure below the seafloor. Side-looking sonar, where the sonar beams are aimed just below horizontal, is used to map the seafloor bottom texture to ranges from tens of meters to a kilometer or more depending on the device.
Structure of the ocean crust and upper mantle
When the sound or energy source is separated from the recording devices by distances of several kilometers or more, then refracted seismic waves are measured. Their travel time can be used to determine the internal structure of the ocean crust, and from the seismic velocities determined by the method, an estimate can be made of the crustal rock type. Recording devices include hydrophones at the ocean surface and also ocean bottom seismographs. Refraction experiments have detected anisotropy of seismic wave speed in the oceanic upper mantle.
Measuring Earth’s magnetic and gravity fields within the ocean basins
The usual method of measuring the Earth's magnetic field at the sea surface is by towing a total-field proton precession magnetometer several hundred meters behind a survey ship. In more limited surveys magnetometers have been towed at a depth close to the seafloor or attached to deep submersibles. Gravimeters using zero-length spring technology are mounted in the most stable location on a ship, usually low and towards the center. They are specially designed to separate the acceleration of the ship from changes in the acceleration of Earth's gravity, or gravity anomalies, which are several thousand times smaller. In limited cases, gravity measurements have been made at the seafloor from deep submersibles.
Determine the rate of heat flow from the Earth through the seafloor
The geothermal gradient is measured using a 2-meter long temperature probe or with thermistors attached to sediment core barrels. Measured temperatures combined with the thermal conductivity of the sediment give a measure of the conductive heat flow through the seafloor.
Measure the electrical properties of the ocean crust and upper mantle
Electrical conductivity, or the converse resistivity, can be related to rock type, the presence of fluids within cracks and pores in rocks, the presence of magma, and mineral deposits like sulfides at the seafloor. Surveys can be done at either the sea surface or seafloor or in combination, using active current sources or natural Earth electrical currents, known as telluric currents.
In special cases, measurements of natural gamma radiation from seafloor mineral deposits have been made using scintillometers towed near the seafloor.
Examples of the impact of marine geophysics
Evidence for seafloor spreading and plate tectonics
Echo sounding was used to refine the limits of the known mid-ocean ridges, and to discover new ones. Further sounding mapped linear seafloor fracture zones that are nearly orthogonal to the trends of the ridges. Later, the determination of earthquake locations in the deep ocean showed that quakes are restricted to the crests of the mid-ocean ridges and to the stretches of fracture zones that link one segment of a ridge to another. These are now known as transform faults, one of the three classes of plate boundaries. Echo sounding was also used to map the deep trenches of the oceans, and earthquake locations were noted to lie in and below the trenches.
Data from marine seismic refraction experiments defined a thin ocean crust, approximately 6 to 8 kilometers in thickness, divided into three layers. Seismic reflection measurements made over the ocean ridges found they are devoid of sediments at the crest, but covered by increasingly thicker sediment layers with increasing distance from the ridge crest. This observation implied that the ridge crests are younger than the ridge flanks.
Magnetic surveys discovered linear magnetic anomalies that in many areas ran parallel to an ocean ridge crest and showed a mirror-image symmetrical pattern centered on ridge crests. Correlation of the anomalies to the history of Earth's magnetic field reversals allowed the age of the seafloor to be estimated. This connection was interpreted as the spreading of the seafloor from the ridge crests. Linking spreading centers and transform faults to a common cause helped to develop the concept of plate tectonics.
When the age of the ocean crust as determined by magnetic anomalies or drill hole samples was compared to the ocean depth, it was observed that depth and age are directly related in a seafloor depth-age relationship. This relationship was explained by the cooling and contraction of an oceanic plate as it spreads away from a ridge crest.
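In its simplest half-space-cooling form, the depth-age relationship can be written as a square-root law (a sketch; $d_0$ is the ridge-crest depth and $C$ an empirical constant, both of which must be fit to observations):

$d(t) \approx d_0 + C \sqrt{t}$

where $t$ is the age of the oceanic crust. The square-root dependence reflects conductive cooling of the plate, whose thermal boundary layer thickens in proportion to $\sqrt{t}$.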
Evidence for paleoclimate
Seismic reflection data combined with deep-sea drilling at some locations have identified widespread unconformities and distinctive seismic reflectors in the deep-sea sedimentary record. These have been interpreted as evidence of past global climate change events. Seismic reflection surveys made on polar continental shelves have identified buried sedimentary features due to the advance and retreat of continental ice sheets. Swath sonar mapping has revealed the gouge tracks that ice sheets cut as they traversed polar continental shelves in the past.
Evidence for hydrothermal vents
Heat flow measured in the ocean basins revealed that conductive heat flow decreased with the increased depth and crustal age of flanks of ocean ridges. On the ridge crest, however, conductive heat flow was found to be unexpectedly low for a location where active volcanism accompanies seafloor spreading. This anomaly was explained by the possible heat transfer by hydrothermal venting of seawater circulating in deep fissures in the crust at the ridge crest spreading centers. This hypothesis was borne out in the late 20th century when investigations by deep submersibles discovered hydrothermal vents at spreading centers.
Evidence for Mid-Ocean Ridge structure and properties
Marine gravity profiles made across mid-ocean ridges showed a lack of a gravity anomaly: the free-air anomaly is small or near zero when averaged over a broad area. This suggested that, although the ridges rise two kilometers or more above the deep ocean basins at their crests, the extra mass does not produce the increase in gravity that would otherwise be expected. The ridges are isostatically compensated, meaning the total mass below some reference depth in the mantle below the ridge is about the same everywhere. This requires a lower-density mantle below the ridge crest and upper ridge flanks. Data from seismic studies revealed lower velocities under the ridges, suggesting that parts of the mantle below the crests are lower-density, partially molten rock. This is consistent with the theories of seafloor spreading and plate tectonics.
Centers of research conducting marine geophysics
Ocean University of China
Alfred Wegener Institute for Polar and Marine Research
Bedford Institute of Oceanography
Cambridge University
IFREMER
Lamont–Doherty Earth Observatory
National Dong Hwa University Graduate Institute of Marine Biology
National Institute of Water and Atmospheric Research
National Oceanography Centre, Southampton
Rosenstiel School of Marine, Atmospheric, and Earth Science
Scripps Institution of Oceanography
Texas A&M University
University of Hawaii (Manoa)
University of Rhode Island
University of Washington (Seattle)
Woods Hole Oceanographic Institution
See also
Project FAMOUS
RISE Project
References
Further reading
Hydrothermal vents
Marine geophysicists
Geophysics | Marine geophysics | [
"Physics",
"Environmental_science"
] | 1,880 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics",
"Geophysics"
] |
76,047,279 | https://en.wikipedia.org/wiki/Bothe%E2%80%93Geiger%20coincidence%20experiment | In the history of quantum mechanics, the Bothe–Geiger coincidence experiment was conducted by Walther Bothe and Hans Geiger from 1924 to 1925. The experiment explored x-ray scattering from electrons to determine the nature of the conservation of energy at microscopic scales, which was contested at that time. The experiment confirmed existence of photons, the conservation of energy and the Compton scattering theory.
At that time, quantum mechanics was still under development in what was known as the old quantum theory. Under this framework, the BKS theory by Niels Bohr, Hendrik Kramers, and John C. Slater proposed the possibility that energy conservation is only true for large statistical ensembles and could be violated for small quantum systems. BKS theory also argued against the quantum nature of light. The Bothe-Geiger experiments helped disprove BKS theory, marking an end to old quantum theory, and inspiring the re-interpretation of the theory in terms of matrix mechanics by Werner Heisenberg.
The experiment used for the first time a coincidence method, thanks to the coincidence circuit developed by Bothe. Bothe received the Nobel Prize in Physics in 1954 for this development and successive experiments using this method.
Motivation
In 1923, Arthur Compton had shown experimentally that x-rays were scattered elastically by free electrons, in accordance with the conservation of energy. The scattered photon had a lower frequency than the incoming photon, according to the Planck–Einstein relation for the energy $E = \hbar\omega$ ($\hbar$ is the reduced Planck constant and $\omega$ is the angular frequency), while the remaining energy was transmitted to the recoil electron.
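For reference, the kinematics that Compton verified follow from conserving energy and momentum in a single photon-electron collision. In the standard form (a sketch, with $m_e$ the electron rest mass, $c$ the speed of light and $\theta$ the photon scattering angle):

$\lambda' - \lambda = \frac{h}{m_e c} (1 - \cos\theta)$

equivalently, for the scattered photon energy,

$\hbar\omega' = \frac{\hbar\omega}{1 + \frac{\hbar\omega}{m_e c^2}(1 - \cos\theta)}$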
This discovery started a debate between those who believed that energy is always conserved, like Compton, Albert Einstein and Wolfgang Pauli, and those who believed it is only statistically valid. Bohr, Kramers and Slater published their BKS theory in February 1924 in Zeitschift fur Physik, arguing against energy conservation in individual atomic scattering events. They also considered that light could be treated classically without the need for the light quanta hypothesis of Einstein.
After finishing his doctoral degree under the supervision of Max Planck in 1913, Walther Bothe joined the radioactivity group in the Physikalisch-Technische Reichsanstalt in Charlottenburg, Berlin, to work with Hans Geiger, at that time head of the lab. Bothe studied Compton scattering with x-rays using a cloud chamber filled with hydrogen.
Shortly after the publication of the BKS theory, Bothe and Geiger announced in the same journal a proposal for an experiment to test BKS theory.
Werner Heisenberg remained agnostic with respect to BKS theory. In a letter to Arnold Sommerfeld, he wrote:
Experiment
According to Compton scattering, if an incident photon with energy given by $\hbar\omega$ hits an electron, the recoil electron and the scattered photon would fly off on opposite sides of the incident beam, with opposite momentum components in the direction perpendicular to the trajectory of the incident photon.
For the experiment, a collimated x-ray beam is directed at a scattering material in a gap between two counters. The counters are placed along the line perpendicular to the beam. The two counters consist of an electron counter and a photon counter placed on opposite sides of the beam. Due to the minimal energy of the recoil electron, the electron detection essentially occurs at the scattering site. Thus the scattering volume must be situated within the electron counter. The whole setup was enclosed in a glass sphere filled with hydrogen at atmospheric pressure.
In the Bothe–Geiger experiment, Geiger needle counters covered with thin platinum foil were used to detect scattered photons. A fraction of the photons produced a measurable electric current due to the photoelectric effect. The count detections were recorded photographically on silver bromide film, by means of string electrometers. The efficiency of the coincidence counting was of the order of 1 in 10 events. Bothe and Geiger observed 66 coincidences in 5 hours, of which 46 were attributed to false counts, with a statistical fluctuation of 1 in 400,000.
The measurements and data treatment took over a year. The overall experiment produced more than three kilometers of film just 1.5 centimeters wide, which had to be analyzed manually. According to Bothe, the "film consumption however was so enormous that our laboratory with the film strips strung up for drying sometimes resembled an industrial laundry".
Any delay between the detection of the photon and the electron would be a hint of a violation of the conservation of energy. However a simultaneous detection indicated a confirmation of Compton's theory.
Results, reception and legacy
In April 1925, Bothe and Geiger reported that the photon and electron counters responded simultaneously, with a time resolution of 1 millisecond. Their result confirmed the quantum nature of light and was the first evidence against BKS theory. They argued "Our results are not in accord with Bohr's interpretation of the Compton effect ... it is recommended therefore to retain until further notice the picture of Compton and [Peter] Debye.... One must therefore probably assume that the light quantum concept possesses a high degree of validity as assumed in that theory."
Published in September of the same year, an experiment carried in parallel by Compton and Alfred W. Simon using a different technique, reached similar conclusions. The Compton–Simon experiment used cloud chamber techniques to track two different types of tracks: tracks of the recoil electron and tracks of the photoelectrons. Compton and Simon confirmed the relative angles between the tracks predicted by Compton scattering. Compton and Simon write: "the results do not appear to be reconcilable with the view of the statistical production of recoil and photo-electrons by Bohr, Kramers and Slater. They are, on the other hand, in direct support of the view that energy and momentum are conserved during the interaction between radiation and individual electrons."
The Bothe–Geiger experiment and the Compton–Simon experiment marked an end to the BKS theory. Kramers was skeptical at the beginning. In a letter to Bohr, Kramers said "I can unfortunately not survey how convincing the experiments of Bothe and Geiger actually are for the case of the Compton effect". Bohr, however, ended up accepting the results; in a letter to Ralph H. Fowler he wrote: "there is nothing else to do than to give our revolutionary efforts as honourable a funeral as possible".
Compton congratulated Bothe and Geiger for their results. Max von Laue said that "Physics was saved from being led astray". Science philosopher Karl Popper catalogued the result as an experimentum crucis.
In 1925 after the experiment, Bothe succeeded Geiger as the director of the lab.
The same year, Heisenberg would start to develop a new reinterpretation of quantum mechanics, based on matrix mechanics. In his 1927 paper on the uncertainty principle, he opposes the statistical interpretation of quantum mechanics, citing the Bothe–Geiger paper. Heisenberg writes to Pauli: "I argue with Bohr over the extent to which the relation p1q1~h has its origin in the wave-or the discontinuity aspect of quantum mechanics. Bohr emphasizes that in the gamma-ray microscope the diffraction of the waves is essential; I emphasize that the theory of light quanta and even the Geiger-Bothe experiments are essential."
Almost a decade later, Robert S. Shankland performed an experiment that allegedly showed some inconsistencies with photon scattering, resurfacing the ideas of BKS theory. However, this was later disproved by Robert Hofstadter and John A. McIntyre with an experiment similar to the Bothe–Geiger experiment, reducing the time resolution to 15 nanoseconds.
Further experiments were carried out by Bothe using his coincidence method. Geiger and Walther Müller further developed the Geiger–Müller tube, which was used by Bothe and Werner Kolhörster in an experiment in 1929 to show that fast electrons detected in cloud chambers came from cosmic rays. In 1954, the Nobel Prize in Physics was split in two, half to Max Born "for his fundamental research in quantum mechanics, especially for his statistical interpretation of the wavefunction" and the other half to Bothe "for the coincidence method and his discoveries made therewith". Geiger had already died in 1945, so he was not eligible for a share of the prize.
Physics experiments
References
Experimental particle physics | Bothe–Geiger coincidence experiment | [
"Physics"
] | 1,716 | [
"Particle physics",
"Experimental physics",
"Physics experiments",
"Experimental particle physics"
] |
76,047,368 | https://en.wikipedia.org/wiki/Point-surjective%20morphism | In category theory, a point-surjective morphism is a morphism that "behaves" like surjections on the category of sets.
The notion of point-surjectivity is an important one in Lawvere's fixed-point theorem, and it was first introduced by William Lawvere in his original article.
Definition
Point-surjectivity
In a category with a terminal object $1$, a morphism $f \colon X \to Y$ is said to be point-surjective if for every morphism $y \colon 1 \to Y$, there exists a morphism $x \colon 1 \to X$ such that $f \circ x = y$.
Weak point-surjectivity
If $Y$ is an exponential object of the form $B^A$ for some objects $A$ and $B$ in the category, a weaker (but technically more cumbersome) notion of point-surjectivity can be defined.
A morphism $f \colon X \to B^A$ is said to be weakly point-surjective if for every morphism $g \colon A \to B$ there exists a morphism $x \colon 1 \to X$ such that, for every morphism $a \colon 1 \to A$, we have
$\mathrm{ev} \circ \langle f \circ x, a \rangle = g \circ a$
where $\langle f \circ x, a \rangle$ denotes the product of the two morphisms ($f \circ x$ and $a$) and $\mathrm{ev} \colon B^A \times A \to B$ is the evaluation map associated with the exponential object.
Equivalently, one could think of the morphism $f \colon X \to B^A$ as the transpose of some other morphism $\tilde{f} \colon X \times A \to B$. Then the isomorphism between the hom-sets $\mathrm{Hom}(X, B^A) \cong \mathrm{Hom}(X \times A, B)$ allows us to say that $f$ is weakly point-surjective if and only if $\tilde{f}$ is weakly point-surjective.
Relation to surjective functions in Set
Set elements as morphisms from terminal objects
In the category of sets, morphisms are functions and the terminal objects are singletons. Therefore, a morphism $x \colon 1 \to X$ is a function from a singleton $1$ to the set $X$: since a function must specify a unique element in the codomain for every element in the domain, we have that $x(\ast)$ is one specific element of $X$. Therefore, each morphism $x \colon 1 \to X$ can be thought of as a specific element of $X$ itself.
For this reason, morphisms can serve as a "generalization" of elements of a set, and are sometimes called global elements.
Surjective functions and point-surjectivity
With that correspondence, the definition of point-surjective morphisms closely resembles that of surjective functions. A function (morphism) $f \colon X \to Y$ is said to be surjective (point-surjective) if, for every element $y \in Y$ (for every morphism $y \colon 1 \to Y$), there exists an element $x \in X$ (there exists a morphism $x \colon 1 \to X$) such that $f(x) = y$ ($f \circ x = y$).
The notion of weak point-surjectivity also resembles this correspondence, if only one notices that the exponential object $B^A$ in the category of sets is nothing but the set of all functions $A \to B$.
References
Category theory
Morphisms | Point-surjective morphism | [
"Mathematics"
] | 526 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Fields of abstract algebra",
"Category theory",
"Mathematical relations",
"Morphisms"
] |
74,536,549 | https://en.wikipedia.org/wiki/Grade%20%28ring%20theory%29 | In commutative and homological algebra, the grade of a finitely generated module over a Noetherian ring is a cohomological invariant defined by vanishing of Ext-modules
For an ideal $I \subseteq R$, the grade is defined via the quotient ring $R/I$ viewed as a module over $R$: $\operatorname{grade} I = \operatorname{grade}_R R/I$.
The grade is used to define perfect ideals. In general we have the inequality
$\operatorname{grade} M \leq \operatorname{pd} M$
where the projective dimension $\operatorname{pd} M$ is another cohomological invariant.
The grade is tightly related to the depth, since the grade of an ideal $I$ equals the $I$-depth of $R$, the length of a maximal $R$-regular sequence contained in $I$.
Under the same conditions on $R$ and $M$ as above, one also defines the $I$-grade of $M$ as
$\operatorname{grade}(I, M) = \inf \{ i \mid \operatorname{Ext}^i_R(R/I, M) \neq 0 \}.$
This notion is tied to the existence of maximal $M$-sequences contained in $I$ of length $\operatorname{grade}(I, M)$.
References
Ring theory
Homological algebra
Commutative algebra | Grade (ring theory) | [
"Mathematics"
] | 136 | [
"Mathematical structures",
"Ring theory",
"Fields of abstract algebra",
"Category theory",
"Commutative algebra",
"Homological algebra"
] |
74,536,896 | https://en.wikipedia.org/wiki/Project%20Thunderbird | Project Thunderbird was a 1967 United States Atomic Energy Commission (AEC) proposal to use nuclear explosives to prepare coalbeds to gasify coal in place underground in Wyoming. The project was proposed as a component of Project Plowshare, which sought ways to use nuclear devices in public works and industrial development projects. The project aimed to exploit deep coal deposits to gasify them in situ with controlled combustion in the rubble chimney resulting from a deep nuclear detonation. The project was to be located on the border of Johnson County and Campbell County, about west of Gillette, Wyoming, in the Powder River Basin.
While initial reports on the project were optimistic, subsequent analysis cast doubt on the project's viability, and the project was not pursued.
Proposal
In 1966-67, the U.S. Atomic Energy Commission was approached by Wyoming coal engineers Wold and Jenkins, of Casper, Wyoming, with the idea of extending Project Plowshare programs for the development of natural gas production using nuclear devices into coalbed areas. The project was received with interest, and was named Project Thunderbird. The Lawrence Radiation Laboratory (LRL), which administered many Plowshare programs, initiated a study to define a potential demonstration project in 1968.
The project was intended to investigate the possibility of enhancing the economic value of deeply buried coalbeds in the Powder River Basin that could not easily be exploited by the strip mining methods used farther east, where the beds approached the surface.
Project description
The project was intended to demonstrate techniques for creating a so-called "rubble chimney", a subterranean cavity containing broken rubble and voids. Following the nuclear explosion that created the chimney, wells would be drilled to introduce oxygen and extract gas products. The pulverized coal would be ignited and fed oxygen under controlled conditions, converting the coal into combustible gas. In effect, the cavity would become a coking oven, driving off the volatile components of coal and leaving the residual carbon coke in the ground.
The project was proposed for deep coalbeds in the Fort Union-Wasatch Formation. The Roland coalbed, which was being surface mined at the Wyodak Mine to the east, lies at depths of or more at the Thunderbird project site, at thicknesses of up to .
Two possible project scopes were described. A 50-kiloton explosion was expected to create a chimney in radius and high, containing about of broken rock. About 25 percent of the contents of the cavity would be coal, which could produce the equivalent of about 1.5 million barrels of oil. A second proposal suggested a one-megaton explosion that was expected to create a chimney with a radius and a height of . This would contain seven times as much coal, and would fracture the coal beds for a greater distance beyond the chimney, with a further 10–50% increase in gas yield. Gas from the wellhead would be processed by the Fischer–Tropsch process into gas and petroleum products. Existing gas and oil pipeline infrastructure would move the products to market.
Fourteen test borings were made on Wold and Jenkins leases. No other exploration has been documented, and the area has in subsequent years been extensively investigated and drilled for gas projects.
Outcome
Wold and Jenkins engineers viewed the project as potentially economically viable. A 1969 analysis by Gibbs & Hill, Inc. was less optimistic, advising the LRL that assumptions concerning development costs versus production did not yield a viable project. This opinion appears to have halted the project. No specific location for the test was identified.
References
Further reading
1968 in Wyoming
Thunderbird
Thunderbird
Thunderbird
Johnson County, Wyoming
Campbell County, Wyoming
Coal gasification technologies | Project Thunderbird | [
"Chemistry"
] | 746 | [
"Synthetic fuel technologies",
"Coal gasification technologies",
"Explosions",
"Peaceful nuclear explosions"
] |
74,540,613 | https://en.wikipedia.org/wiki/Germ%C3%A1n%20Sierra | Germán Sierra is a Spanish theoretical physicist, author, and academic. He is Professor of Research at the Institute of Theoretical Physics Autonomous University of Madrid-Spanish National Research Council.
Sierra's research interests span physics and mathematical physics, focusing particularly on condensed matter physics, conformal field theory, exactly solved models, quantum information and computation, and number theory. He has authored two books, Quantum Groups in Two-dimensional Physics and Quantum Electron Liquids and High-Tc Superconductivity, and has published over 200 articles.
Sierra serves as an Editor of the Journal of Statistical Mechanics: Theory and Experiment, Journal of High Energy Physics and Nuclear Physics B.
Education
Sierra earned his baccalaureate degree in physics from the Complutense University of Madrid in 1978, followed by a Ph.D. in physics from the same university in 1981. He then completed a postdoc at the École Normale Supérieure in Paris in 1983.
Career
Following his postdoc, Sierra began his academic career as a titular professor at the Complutense University of Madrid in 1984, a position he held for three years. In 1987 he was appointed as a research fellow at the European Organization for Nuclear Research (CERN) in Geneva and as a scientific researcher at the Spanish National Research Council in 1989. Since 2005, he has been serving as a full professor of physics at the Spanish National Research Council in Spain. He has held visiting appointments at the Erwin Schrödinger Institute, Kavli Institute for Theoretical Physics, Max Planck Institute for Quantum Optics, University of São Paulo, Princeton University, Isaac Newton Institute for Mathematical Sciences, Stony Brook University, and University of Innsbruck.
From 2014 to 2017, he was a Member of the International Union of Pure and Applied Physics (IUPAP), Panel C18 on Mathematical Physics.
Research
Sierra's research focuses on quantum physics with a particular emphasis on supergravity, quantum groups, quantum many-body systems, and integrable models. His research has contributed to the understanding of supergravity theories, conformal field theory, superconductivity, spin chains and ladders, the Richardson-Gaudin model, physical models of the Riemann zeros, quantum Hall states, inhomogeneous spin chains, infinite matrix product states, the Prime state, quantum computation, and quantum games.
Supergravity
During his early research career, Sierra worked in the area of supergravity to construct and classify the N = 2 Maxwell-Einstein Supergravity theories (MESGT). His work involved an investigation of the algebraic and geometric structures underlying these theories, as well as their compact and non-compact gaugings. In collaboration with M. Gunaydin and P.K. Townsend, he derived the magic square of Freudenthal, Rozenfeld, and Tits by utilizing the geometric principles found in a specific group of N=2 Maxwell-Einstein supergravity theories.
Quantum groups
In 1990, Sierra's research shifted toward the construction, interpretation, and application of quantum groups in the context of conformal field theories, two-dimensional physics, and renormalization groups. He demonstrated that the representation theory of the q-deformation of SU(2) offers solutions to the polynomial equations formulated by Moore and Seiberg for rational conformal field theories, as long as q is a root of unity. Together with Cesar Gomez, he defined the representation spaces of the quantum group in terms of screened vertex operators and interpreted the number of screening operators as the genuine quantum group number. He introduced a spin chain Hamiltonian that possesses integrability and invariance under quantum-deformed sl(2) transformations within nilpotent irreducible representations when the deformation parameter is a cube root of unity. Additionally, he proved that the elliptic R-matrix of the eight-vertex free fermion model is the intertwiner R-matrix of a quantum deformed Clifford-Hopf algebra. In his work, he also presented a new mathematical structure, the graph quantum group, which merges the tower of algebras associated with a graph G with the structure of a Hopf algebra. Furthermore, he explored spin-anisotropy commensurable chains, a class of 2D integrable models, and described their mathematics using quantum groups with the deformation parameter an Nth root of unity. Moreover, alongside Miguel A. Martín-Delgado, he employed real space renormalization group (RG) methods to examine the interplay between two different variants of quantum groups, exploring their relationship.
Spin chains and ladders
In 1996, Sierra started working in condensed matter physics, more concretely on spin chains, spin ladders and high-Tc superconductors. He generalized Haldane's conjecture from spin chains to spin ladders using the O(3) non-linear sigma model. He also investigated phase transitions in staggered spin ladders and three-legged antiferromagnetic ladders. In addition, he applied the variational matrix product ansatz to determine the ground state of several ladder systems. In a joint study with J. Dukelsky, M.A. Martín-Delgado and T. Nishino, he showed that the latter method is equivalent to the DMRG method introduced by S. R. White in 1992. Working together with Martín-Delgado in 1998, he proposed an extension of the variational matrix product ansatz to two dimensions. In 2004, F. Verstraete and J.I. Cirac rediscovered the latter ansatz using quantum information techniques, designating it as PEPS.
Richardson-Gaudin models
In 1999, Sierra in collaboration with J. Dukelsky applied the DMRG method to the pairing model that describes ultrasmall superconducting grains, confirming the exact solution of the pairing model obtained by Richardson and Sherman in 1963–64. Shortly after its application, a close relationship was revealed between Richardson's solution and another set of exactly solvable models called the Gaudin magnets, collectively known as Richardson-Gaudin models. Several subsequent applications involved studying the effect of level statistics in nanograins, the connection with conformal field theory and Chern-Simons theory, as well as exploring the implications with mean-field solutions and p-wave symmetry.
Russian doll renormalization group
In 2003–04, Sierra, in conjunction with A. LeClair and J.M. Román, introduced multiple models exhibiting a Russian doll renormalization group flow, featuring a cyclic nature instead of converging to a fixed point. Among them was a BCS model with pairing scattering phases that break time-reversal symmetry, which was later demonstrated to be solvable using the algebraic Bethe ansatz. Moreover, they put forward two scattering S-matrices exhibiting a cyclic renormalization group (RG) structure, which is related to both the cycle regime of the Kosterlitz-Thouless flow and an analytic extension of the massive sine-Gordon S matrix.
Physical models of the Riemann zeros
In 2005, Sierra presented a Russian doll model of superconductivity whose spectrum contains the average Riemann zeros as missing spectral lines, and this model is connected to the xp Hamiltonian of Berry, Keating, and Connes. In addition to proposing several variations of the xp model, he collaborated with C. E. Creffield to propose a different physical realization of the Riemann zeros using periodically driven cold atoms; this idea was eventually experimentally achieved in 2021 using trapped ions.
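For orientation (a hedged sketch; the precise regularized variants differ from paper to paper), the classical Berry–Keating–Connes Hamiltonian underlying these spectral approaches is
\[
H = x\,p ,
\]
whose semiclassically quantized spectrum, once the phase space is suitably truncated, reproduces the smooth (average) counting function of the Riemann zeros; the models mentioned above modify this Hamiltonian so that it can be realized in a physical system.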
Entanglement in quantum hall states
In 2009, Sierra and I. D. Rodríguez evaluated the entanglement entropy for integer quantum Hall states, involving the computation of the entanglement spectrum, proposed by Li and Haldane to identify topological order in non-abelian quantum Hall states.
Entanglement in conformal field theory
Sierra computed with F. C. Alcaraz and M. Ibáñez the entanglement properties of the low-lying excitations in conformal field theory in 2011 and found several applications to condensed matter systems, holography, and systems with boundaries. Furthermore, with J. C. Xavier and F. C. Alcaraz he independently discovered the property of "entanglement equipartition" in conformal systems with U(1) symmetry, where the entanglement entropy is equally distributed among the different charge sectors; this finding holds, up to subleading corrections, for more general systems.
Entanglement in inhomogeneous spin chains
In 2014, Sierra, along with J. Rodríguez-Laguna and G. Ramírez, introduced an inhomogeneous spin chain model called rainbow chain that exhibits a maximal violation of the area law of entanglement entropy, in stark contrast to the behavior observed in homogeneous chains. The rainbow chain model, earlier proposed by J. I. Latorre in a separate joint work, was examined using conformal field theory techniques and was found to support symmetry-protected phases.
Infinite matrix product states and conformal field theory
In 2010, Sierra proposed a variational ansatz for the ground state of the XXZ spin chain using the chiral vertex operators of a CFT to describe the critical region of this model, resulting in a matrix product state with an infinite bond dimension to capture logarithmic entanglement entropy. The ansatz also replicated the Haldane-Shastry wave function for the XXX spin chain, notably matching a conformal block of the WZW model SU(2) at level k=1, and was later extended to any level k jointly with A. E. B. Nielsen and J. Ignacio Cirac. In two spatial dimensions, the CFT wave function demonstrated a bosonic Laughlin spin liquid state on a lattice, that was experimentally realized using optical lattices. This method was extended to other bosonic and fermionic Laughlin states, WZW model SU(N)_1, etc. The CFT wave functions described earlier were derived as tensor network states where the individual tensors are functionals of fields which allowed the analysis of the symmetries of the field tensor network states.
The prime state
Along with J. I. Latorre, Sierra proposed a quantum circuit that creates a pure state corresponding to the quantum superposition of all prime numbers less than 2^n, where n is the number of qubits of the register. They showed the construction of the Prime state using the Grover algorithm, which, combined with the quantum counting algorithm, allows for a verification of the Riemann hypothesis for numbers far beyond the reach of any classical computer. Moreover, the Prime state turned out to be highly entangled, with an entanglement spectrum intimately related to the Hardy-Littlewood constants for the pairwise distribution of primes.
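In a hedged notation (assuming the standard normalization used by Latorre and Sierra), the Prime state on n qubits reads
\[
|\mathbb{P}_n\rangle = \frac{1}{\sqrt{\pi(2^n)}} \sum_{p\ \mathrm{prime},\; p<2^n} |p\rangle ,
\]
where π(2^n) is the number of primes below 2^n and |p⟩ is the computational-basis state encoding p in binary.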
Quantum computation
Through collaborative research efforts, Sierra implemented multiple quantum algorithms on the newly launched IBM quantum computers and introduced a quantum circuit capable of generating the Bethe eigenstates for the XXZ Hamiltonian. Additionally, he proposed a simple mitigation strategy for a systematic gate error in IBMQ quantum computers and demonstrated the implementation of data-driven error mitigation techniques to simulate quench dynamics on a digital quantum computer.
Quantum games
In 2022, Sierra, together with A. Bera and S. Singha Roy, demonstrated a connection between the ground state of a topological Hamiltonian and the optimal strategy in a causal order game, where the maximum violation of the classical bound is associated with a second-order quantum phase transition. Furthermore, work with D. Centeno led to the development of several quantum versions of the Morra game, known as Chinos in Spain.
Bibliography
Books
Quantum Electron Liquids and High-Tc Superconductivity (1995)
Quantum Groups in Two-dimensional Physics (2011)
Selected articles
Günaydin, M, Sierra, G & Townsend, PK (1984). The geometry of N=2 Maxwell-Einstein supergravity and Jordan algebras. Nuclear Physics B 242 (1), 244–268.
Alvarez-Gaume, L, Gomez, C. & Sierra, G (1989). Quantum group interpretation of some conformal field theories. Physics Letters B 220 (1–2), 142–152.
Dukelsky, J., Martín-Delgado, M. A., Nishino, T., & Sierra, G. (1998). Equivalence of the variational matrix product method and the density matrix renormalization group applied to spin chains. Europhysics letters, 43(4), 457.
Dukelsky, J., Pittel, S., & Sierra, G. (2004). Colloquium: Exactly solvable Richardson-Gaudin models for many-body quantum systems. Reviews of modern physics, 76(3), 643.
Cirac, JI & Sierra, G (2010). Infinite matrix product states, conformal field theory and the Haldane-Shastry model. Physical Review B 81 (10), 104431.
Alcaraz, F. C., Berganza, M. I., & Sierra, G. (2011). Entanglement of low-energy excitations in Conformal Field Theory. Physical Review Letters, 106(20), 201601.
Latorre, J. I. & Sierra, G (2014), Quantum Computation of Prime Number Functions, Quantum Information and Computation, Vol. 14, 0577.
Ramírez, G, Rodríguez-Laguna, J & Sierra, G. (2015). Entanglement over the rainbow. Journal of Statistical Mechanics: Theory and Experiment 2015 (6), P06002.
Xavier, J. C., Alcaraz, F. C., & Sierra, G. (2018). Equipartition of the entanglement entropy. Physical Review B, 98(4), 041106.
Sierra, G (2019). The Riemann zeros as spectrum and the Riemann hypothesis. Symmetry 11 (4), 494.
References
Theoretical physicists
Complutense University of Madrid alumni
Quantum physicists
Condensed matter physicists
1955 births
Living people
Academic staff of the Complutense University of Madrid | Germán Sierra | [
"Physics",
"Materials_science"
] | 2,952 | [
"Condensed matter physicists",
"Theoretical physics",
"Quantum physicists",
"Quantum mechanics",
"Condensed matter physics",
"Theoretical physicists"
] |
74,543,750 | https://en.wikipedia.org/wiki/Cosmological%20phase%20transition | A cosmological phase transition is a physical process, whereby the overall state of matter changes together across the whole universe. The success of the Big Bang model led researchers to conjecture possible cosmological phase transitions taking place in the very early universe, at a time when it was much hotter and denser than today.
Any cosmological phase transition may have left signals which are observable today, even if it took place in the first moments after the Big Bang, when the universe was opaque to light.
Cosmological first-order phase transitions
Phase transitions can be categorised by their order. Transitions which are first order proceed via bubble nucleation and release latent heat as the bubbles expand.
As the universe cooled after the hot Big Bang, such a phase transition would have released huge amounts of energy, both as heat and as the kinetic energy of growing bubbles. In a strongly first-order phase transition, the bubble walls may even grow at near the speed of light. This, in turn, would lead to the production of a stochastic background of gravitational waves. Experiments such as NANOGrav and LISA may be sensitive to this signal.
In simulations of the evolution of a first-order cosmological phase transition, bubbles first nucleate, then expand and collide, eventually converting the universe from one phase to another.
Examples
The Standard Model of particle physics contains three fundamental forces, the electromagnetic force, the weak force and the strong force. Shortly after the Big Bang, the extremely high temperatures may have modified the character of these forces. While these three forces act differently today, it has been conjectured that they may have been unified in the high temperatures of the early universe.
Strong force phase transition
Today the strong force binds together quarks into protons and neutrons, in a phenomenon known as color confinement. However, at sufficiently high temperatures, protons and neutrons disassociate into free quarks. The strong force phase transition marks the end of the quark epoch. Studies of this transition based on lattice QCD have demonstrated that it would have taken place at a temperature of approximately 155 MeV, and would have been a smooth crossover transition.
This conclusion assumes the simplest scenario at the time of the transition, and first- or second-order transitions are possible in the presence of a quark, baryon or neutrino chemical potential, or strong magnetic fields.
The different possible phase transition types are summarised by the strong force phase diagram.
Electroweak phase transition
The electroweak phase transition marks the moment when the Higgs mechanism first activated, ending the electroweak epoch.
Just as for the strong force, lattice studies of the electroweak model have found the transition to be a smooth crossover, taking place at a temperature of the order of the electroweak scale.
The conclusion that the transition is a crossover assumes the minimal scenario, and is modified by the presence of additional fields or particles. Particle physics models which account for dark matter or which lead to successful baryogenesis may predict a strongly first-order electroweak phase transition.
Phase transitions beyond the Standard Model
If the three forces of the Standard Model are unified in a Grand Unified Theory, then there would have been a cosmological phase transition at even higher temperatures, corresponding to the moment when the forces first separated out. Cosmological phase transitions may also have taken place in a dark or hidden sector, amongst particles and fields that are only very weakly coupled to visible matter.
See also
Timeline of the early universe
Chronology of the universe
Phase transition
Physics beyond the Standard Model
References
Physical cosmology
Big Bang
Concepts in astronomy
Astronomical events
Scientific models
Particle physics | Cosmological phase transition | [
"Physics",
"Astronomy"
] | 739 | [
"Cosmogony",
"Astronomical sub-disciplines",
"Concepts in astronomy",
"Big Bang",
"Astronomical events",
"Theoretical physics",
"Astrophysics",
"Particle physics",
"Physical cosmology"
] |
74,544,862 | https://en.wikipedia.org/wiki/Matthias%20rules | In physics, the Matthias rules refers to a historical set of empirical guidelines on how to find superconductors. These rules were authored Bernd T. Matthias who discovered hundreds of superconductors using these principles in the 1950s and 1960s. Deviations from these rules have been found since the end of the 1970s with the discovery of unconventional superconductors.
History
Superconductivity was first discovered in solid mercury in 1911 by Heike Kamerlingh Onnes and Gilles Holst, who had developed new techniques to reach near-absolute zero temperatures.
In subsequent decades, superconductivity was found in several other materials: in 1913, lead at 7 K; in the 1930s, niobium at 10 K; and in 1941, niobium nitride at 16 K.
In 1933, Walther Meissner and Robert Ochsenfeld discovered that superconductors expelled applied magnetic fields, a phenomenon that has come to be known as the Meissner effect.
Bernd T. Matthias and John Kenneth Hulm were encouraged by Enrico Fermi to start a systematic experimental investigation in the 1950s, looking for superconductors in different elements and compounds. For this reason, they developed a technique based on the Meissner effect.
In collaboration with Theodore H. Geballe, Matthias broke the record in 1954 with the discovery of superconductivity in niobium–tin (Nb3Sn), which had the highest known transition temperature of about 18 K. Matthias would later try to come up with general empirical properties for finding superconducting alloys. In the same year he published a first version of his famous guidelines, which came to be known as the "Matthias rules". Matthias was able to show in 1962 that some deviations from his rules were due to impurities or defects in the materials. Using his rules, Matthias and collaborators found in 1965 that niobium–germanium (Nb3Ge) had a record critical temperature above 20 K.
Matthias published a first outline of his rules in 1957. A successful microscopic theory of superconductivity would not appear until the same year, with the development of the BCS theory by John Bardeen, Leon Cooper, and John Robert Schrieffer.
Geballe and Matthias won the Oliver E. Buckley Condensed Matter Prize in 1970 "for their joint experimental investigations of superconductivity which have challenged theoretical understanding and opened up the technology of high field superconductors."
One of the first deviations from Matthias's rules was found with the discovery of superconductivity in molybdenum sulfide and selenides. Matthias postulated an additional criterion in 1976 at the Rochester Conference on superconductivity to include these materials.
Another violation of the Matthias rules appeared in 1979, with the discovery of heavy fermion superconductors by Frank Steglich, where magnetism was expected to play a role, contrary to the Matthias rules.
Matthias held the record for the superconductor with the highest critical temperature until high-temperature superconductors were discovered in 1986 by Georg Bednorz and K. Alex Müller.
Description
The Matthias rules are a set of guidelines for finding low-temperature superconductors, but they were never provided in list form by Matthias himself.
A popular summarized version of these rules reads:
High symmetry is good, cubic symmetry is the best.
High density of electronic states is good.
Stay away from oxygen.
Stay away from magnetism
Stay away from insulators.
Stay away from theorists!
Rule 2 rules out materials near a metal–insulator transition, such as oxides. Rule 4 rules out materials that are in close vicinity to ferromagnetism or antiferromagnetism. Rule 6 is not an official rule and is often added to indicate skepticism of the theories of the time.
Other equivalent principles, as stated by Matthias, indicate that one should work mainly with d-electron metals, with an average number of valence electrons that is preferably odd (3, 5, or 7), and with a high electron density or a high electronic density of states at the Fermi level.
In 1976, Matthias added a criterion to include "elements which will not react at all with molybdenum alone form superconducting compounds with Mo3S4 and Mo3Se4, S or Se" due to deviations in molybdenum compounds.
Failure and extensions
It has been argued that none of Matthias's rules has turned out to be completely valid. In particular, the rules do not hold for high-temperature superconductors, and alternative rules for these materials have been suggested.
References
Superconductivity
History of physics
Obsolete theories in physics | Matthias rules | [
"Physics",
"Materials_science",
"Engineering"
] | 957 | [
"Physical quantities",
"Theoretical physics",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Electrical resistance and conductance",
"Obsolete theories in physics"
] |
74,547,833 | https://en.wikipedia.org/wiki/Lead%20apatite | Lead apatite is a generic name for apatite-structure materials that contain lead as the divalent cation. A copper-doped lead apatite has been proposed as a room-temperature superconductor. A number of minerals are known. All have a hexagonal crystal structure.
Minerals
References
Lead compounds
Crystals in space group 176
Calcium minerals
Gemstones
Halide minerals
Hexagonal minerals
Minerals in space group 176
Phosphate minerals
Piezoelectric materials | Lead apatite | [
"Physics"
] | 98 | [
"Physical phenomena",
"Materials",
"Electrical phenomena",
"Gemstones",
"Piezoelectric materials",
"Matter"
] |
74,548,285 | https://en.wikipedia.org/wiki/Maytal%20Caspary%20Toroker | Maytal Caspary Toroker is an associate professor in the Department of Materials Science and Engineering at Technion-Israel Institute of Technology, Haifa, Israel. She is recognized for her significant contributions in the field of computational materials science, particularly in its applications to catalysis, charge transport, and energy conversion devices.
Early life and education
Maytal was born in Israel into a Jewish family. She completed her BA degree in molecular biochemistry (2004) in the Department of Chemistry at Technion. Later, she pursued her direct Ph.D. (2009) in the same department.
Research and career
After receiving her Ph.D., she started working with Prof. Emily A. Carter at Princeton University during the period of 2010–13, funded by the Marie Curie International Outgoing Fellowship from the European Union. In 2013, she joined the Department of Materials Science and Engineering at Technion-Israel Institute of Technology, Haifa, as an assistant professor, and she is currently working as an associate professor.
Her research at Technion mainly involves the development of density functional theory (DFT) and its application. She has worked extensively on developing a method for charge transport calculation through heterostructures using a wave propagation method. Her research on doped NiOOH (nickel oxyhydroxide) has explained the reason for the material's remarkable success in the oxygen evolution reaction. Her results show that it is the iron's ability to change its oxidation state that facilitates oxygen evolution. This work was published in the journal Physical Chemistry Chemical Physics and was featured on the front cover of Issue 11, 2017. She has also conducted research on transition metal oxides for their applications as photocatalysts and photoelectrodes. She developed a method to calculate band edge positions using first-principles quantum mechanical calculations. These band edge positions play a crucial role in determining the suitability of these materials for various applications. Apart from this, she also works on metal organic frameworks (MOFs) and covalent organic frameworks (COFs) for their application in photocatalysis and electrocatalysis.
Currently she is the chair of a European Cooperation in Science and Technology (COST) action on computational materials science for efficient water splitting with nanocrystals from abundant elements. She is on the editorial advisory board of the journal Advanced Theory and Simulations (Wiley).
Awards and honours
Alexander Goldberg Research Prize (2021)
L’Oréal-Unesco-Israel Award (2010)
New England Fund (2009)
References
Year of birth missing (living people)
Living people
Technion – Israel Institute of Technology
Materials scientists and engineers | Maytal Caspary Toroker | [
"Materials_science",
"Engineering"
] | 529 | [
"Materials scientists and engineers",
"Materials science"
] |
74,548,923 | https://en.wikipedia.org/wiki/Minflux | MINFLUX, or minimal fluorescence photon fluxes microscopy, is a super-resolution light microscopy method that images and tracks objects in two and three dimensions with single-digit nanometer resolution.
MINFLUX uses a structured excitation beam with at least one intensity minimum – typically a doughnut-shaped beam with a central intensity zero – to elicit photon emission from a fluorophore. The position of the excitation beam is controlled with sub-nanometer precision, and when the intensity zero is positioned exactly on the fluorophore, the system records no emission. Thus, the system requires few emitted photons to determine the fluorophore's location with high precision. In practice, overlapping the intensity zero and the fluorophore would require a priori location knowledge to position the beam. As this is not the case, the excitation beam is moved around in a defined pattern to probe the emission from the fluorophore near the intensity minimum.
Each localization takes less than 5 microseconds, so MINFLUX can construct images of nanometric structures or track single molecules in fixed and live specimens by pooling the locations of fluorescent labels. Because the goal is to locate the point where a fluorophore stops emitting, MINFLUX significantly reduces the number of fluorescence photons needed for localization compared to other methods.
A commercial MINFLUX system is available from abberior instruments GmbH.
Principle
MINFLUX overcomes the Abbe diffraction limit in light microscopy and distinguishes individual fluorescing molecules by leveraging the photophysical properties of fluorophores. The system temporarily silences (sets in an OFF-state) all but one molecule within a diffraction-limited area (DLA) and then locates that single active (in an ON-state) molecule. Super-resolution microscopy techniques like stochastic optical reconstruction microscopy (STORM) and photoactivated localization microscopy (PALM) do the same. However, MINFLUX differs in how it determines the molecule’s location.
The excitation beam used in MINFLUX has a local intensity minimum or intensity zero. The position of this intensity zero in a sample is adjusted via control electronics and actuators with sub-nanometer spatial and sub-microsecond temporal precision. When the active molecule sits in a non-zero-intensity region of the excitation beam, it fluoresces. The number of photons emitted by the active molecule is proportional to the excitation beam intensity at its position.
In the vicinity of the excitation beam intensity zero, the emission from the active molecule can be approximated as a quadratic function of the distance between the molecule and the position of the intensity zero. The recorded number of emission photons is therefore proportional to the square of this distance, with a prefactor that combines the collection efficiency of detection, the absorption cross-section of the emitter, and the quantum yield of fluorescence.
In other words, photon fluxes emitted by the active molecule when it is located close to the zero-intensity point of the excitation beam carry information about its distance to the center of the beam. That information can be used to find the position of the active molecule. The position is probed with a set of excitation intensities: for example, the active molecule is excited with the same doughnut-shaped beam moved to different positions. The probing results in a corresponding set of photon counts. These photon counts are probabilistic; each time such a set is measured, the result is a different realization of photon numbers fluctuating around a mean value. Since their distribution follows Poissonian statistics, the expected position of the active molecule can be estimated from the photon numbers, using, for example, a maximum likelihood estimator.
The estimated position maximizes the likelihood that the measured set of photon counts occurred exactly as recorded and is thus an estimate of the active molecule's location.
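In symbols (a hedged reconstruction of the stripped formulas, writing r_m for the molecule position and r_i for the probed positions of the intensity zero), the expected photon number behaves quadratically near the zero,
\[
\bar n_i \;\propto\; c\,\sigma\,\Phi\,\lVert r_m - r_i \rVert^{2},
\]
with c, σ and Φ the collection efficiency, absorption cross-section and quantum yield named above, and a Poisson maximum-likelihood estimate of the position takes the form
\[
\hat r_m \;=\; \arg\max_{r}\; \sum_i \bigl( n_i \ln \bar n_i(r) - \bar n_i(r) \bigr),
\]
where n_i are the measured counts.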
Localization process
Recordings of the emitting active molecule at two different excitation beam positions are needed to use the quadratic approximation in the one-dimensional basic principle described above. Each recording provides a one-dimensional distance value to the center of the excitation beam. In two dimensions, at least three recording points are needed to ascertain a location that can be used to move the MINFLUX excitation beam toward the target molecule. These recording points demarcate a probing area of diameter L. Balzarotti et al. use the Cramér-Rao limit to show that constricting this probing area improves localization precision more effectively than increasing the number of emitted photons: the bound on the localization error grows with the diameter of the probing area but falls only as the square root of the number of emitted photons.
MINFLUX takes advantage of this feature when localizing an active fluorophore. It records photon fluxes using a probing scheme of at least three recording points around the probing area and one point at the center. These fluxes differ at each recording point as the active molecule is excited by different light intensities. Those flux patterns inform the repositioning of the probing area to center on the active molecule. Then the probing process is repeated. With each probing iteration, MINFLUX constricts the probing area, narrowing the space where the active molecule can be located. Thus, the distance remaining between the intensity zero and the active molecule is determined more precisely at each iteration. The steadily improving positional information minimizes the number of fluorescence photons and the time that MINFLUX needs to achieve precise localizations.
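The following is a minimal numerical sketch of this kind of localization (a hedged illustration with made-up brightness and geometry values, not the instrument's actual algorithm): a quadratic doughnut model generates Poisson counts at four probe positions, and the molecule position is recovered by a brute-force Poisson maximum-likelihood search.

```python
import numpy as np

# Hypothetical illustration of MINFLUX-style localization (not the instrument's algorithm):
# a fluorophore is probed with a doughnut beam placed at a few positions, and its location
# is estimated by maximizing the Poisson log-likelihood of the recorded photon counts.

rng = np.random.default_rng(0)

def expected_counts(beam_pos, mol_pos, brightness=0.5):
    """Quadratic approximation near the doughnut's central zero: the expected photon
    count grows with the squared distance between the beam zero and the molecule."""
    d2 = np.sum((np.asarray(mol_pos) - np.asarray(beam_pos)) ** 2)
    return brightness * d2

# Probing pattern: three points on a circle of diameter L plus the centre (units: nm).
L = 50.0
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
probes = np.vstack([np.zeros(2),
                    np.column_stack([L / 2 * np.cos(angles), L / 2 * np.sin(angles)])])

true_pos = np.array([6.0, -4.0])                    # unknown molecule position to recover
counts = rng.poisson([expected_counts(p, true_pos) for p in probes])

def neg_log_likelihood(r):
    """Poisson negative log-likelihood of the observed counts for a candidate position r."""
    mu = np.maximum([expected_counts(p, r) for p in probes], 1e-9)  # avoid log(0)
    return np.sum(mu - counts * np.log(mu))

# Brute-force maximum-likelihood estimate on a grid covering the probing area.
grid = np.linspace(-L / 2, L / 2, 201)
best = min(((neg_log_likelihood(np.array([x, y])), (x, y)) for x in grid for y in grid),
           key=lambda t: t[0])
print("true:", true_pos, "estimate:", np.round(best[1], 1))
```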
Applications
By pooling the determined locations of multiple fluorescent molecules in a specimen, MINFLUX generates images of nanoscopic structures with a resolution of 1–3 nm. MINFLUX has been used to image DNA origami and the nuclear pore complex and to elucidate the architecture of subcellular structures in mitochondria and photoreceptors. Because MINFLUX does not collect large numbers of photons emitted from target molecules, localization is faster than with conventional camera-based systems. Thus, MINFLUX can iteratively localize the same molecule at microsecond intervals over a defined period. MINFLUX has been used to track the movement of the motor protein kinesin-1, both in vitro and in vivo, and to monitor configurational changes of the mechanosensitive ion channel PIEZO1.
See also
Confocal microscopy
Fluorescence
Fluorescence microscope
Fluorescence resonance energy transfer microscopy
Laser scanning confocal microscopy
Optical microscopy
Photoactivated localization microscopy
Stochastic optical reconstruction microscopy
Super-resolution microscopy
Ground state depletion microscopy
RESOLFT
References
Microscopy
Optical microscopy | Minflux | [
"Chemistry"
] | 1,372 | [
"Optical microscopy",
"Microscopy"
] |
74,550,274 | https://en.wikipedia.org/wiki/RO5256390 | RO5256390 or RO-5256390 is a drug developed by Hoffmann-La Roche which acts as an agonist for the trace amine associated receptor 1 (TAAR1). It is a full agonist of the rat, cynomolgus monkey, and human TAAR1, but a partial agonist of the mouse TAAR1.
Pharmacology
Pharmacodynamics
Actions
RO5256390 is a full agonist of the rat, cynomolgus monkey, and human TAAR1, but a high-efficacy partial agonist of the mouse TAAR1.
Effects
RO5256390 has been found to suppress the firing rates of ventral tegmental area (VTA) dopaminergic neurons and dorsal raphe nucleus (DRN) serotonergic neurons in mouse brain slices ex vivo. This effect was absent in slices from TAAR1 knockout mice. Similarly, acute RO5256390 suppressed VTA dopaminergic and DRN serotonergic neuronal excitability in rats in vivo, whereas the excitability of locus coeruleus (LC) noradrenergic neurons was unaffected. In contrast with acute exposure however, chronic administration of RO5256390 for 14 days increased the excitability of VTA dopaminergic and DRN serotonergic neurons. The drug has been found to dose-dependently block cocaine-induced inhibition of dopamine clearance (reuptake inhibition) in rat nucleus accumbens (NAc) slices ex vivo whilst having no effect on dopamine clearance by itself.
RO5256390 has been found to fully suppress the hyperlocomotion (a psychostimulant-like effect) induced by cocaine in rodents. In addition, it dose-dependently inhibited the hyperlocomotion induced by the NMDA receptor antagonists phencyclidine (PCP) and L-687,414. RO5256390 is said to produce a brain activity pattern similar to that of the antipsychotic olanzapine in rodents and hence is presumed to have antipsychotic-like properties. In contrast to classical antipsychotics however, RO5256390 did not produce extrapyramidal-like symptoms in rodents and instead could reduce the catalepsy induced by haloperidol. RO5256390 has been found to dose-dependently inhibit cocaine self-administration and context-triggered cocaine-seeking behavior in rodents.
RO5256390 shows robust aversive and locomotor-suppressing effects in rodents that are dependent on TAAR1 activation. Similar aversive effects have also been observed with other TAAR1 agonists like RO5263397 and RO5166017. RO5256390 has been shown to decrease motor hyperactivity, novelty-induced locomotor activity, and induce anxiolytic-like effects in the spontaneously hypertensive rat (SHR), a rodent model of attention deficit hyperactivity disorder (ADHD). In contrast to the TAAR1 partial agonist RO5263397, RO5256390 did not produce antidepressant-like effects in rodents. Conversely however, both agents produced antidepressant-like effects in monkeys.
RO5256390 has been found to produce pro-cognitive effects in rodents and monkeys. It has been shown to strongly suppress rapid eye movement (REM) sleep in rodents. On the other hand, it did not promote wakefulness in rodents. RO5256390 has been shown to block compulsive and binge-like eating behavior in rats. For this reason, it is being investigated as a potential drug to treat binge eating disorder.
History
RO5256390 was first described in the scientific literature by 2013.
See also
RO5073012 – TAAR1 weak partial agonist
RO5166017 – TAAR1 partial or full agonist
RO5203648 – TAAR1 partial agonist
RO5263397 – TAAR1 partial agonist
EPPTB – TAAR1 antagonist/inverse agonist
References
Amines
Oxazolines
TAAR1 agonists
TAAR1 antagonists | RO5256390 | [
"Chemistry"
] | 896 | [
"Amines",
"Bases (chemistry)",
"Functional groups"
] |
63,050,733 | https://en.wikipedia.org/wiki/Difference%20Equations%3A%20From%20Rabbits%20to%20Chaos | Difference Equations: From Rabbits to Chaos is an undergraduate-level textbook on difference equations, a type of recurrence relation in which the values of a sequence are determined by equations involving differences of successive terms of the sequence. It was written by Paul Cull, Mary Flahive, and Robby Robson, and published by Springer-Verlag in their Undergraduate Texts in Mathematics series (Vol. 111, 2005, doi:10.1007/0-387-27645-9).
Topics
After an introductory chapter on the Fibonacci numbers and the rabbit population dynamics example based on these numbers that Fibonacci introduced in his book Liber Abaci, the book includes chapters on
homogeneous linear equations, finite difference equations and generating functions, nonnegative difference equations and roots of characteristic polynomials, the Leslie matrix in population dynamics, matrix difference equations and Markov chains, recurrences in modular arithmetic, algorithmic applications of fast Fourier transforms, and nonlinear difference equations and dynamical systems. Four appendices include a set of worked problems, background on complex numbers and linear algebra, and a method of Morris Marden for testing whether the sequence defined by a difference equation converges to zero.
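As a concrete illustration of the kind of equation treated in the opening chapter (a standard example, not quoted from the book), the Fibonacci numbers satisfy the linear difference equation
\[
F_n = F_{n-1} + F_{n-2}, \qquad F_0 = 0,\ F_1 = 1,
\]
whose closed-form solution is Binet's formula
\[
F_n = \frac{\varphi^n - \psi^n}{\sqrt 5}, \qquad \varphi = \frac{1+\sqrt 5}{2},\ \psi = \frac{1-\sqrt 5}{2}.
\]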
Reception and related reading
Other books on similar topics include A Treatise on the Calculus of Finite Differences by George Boole, Introduction to Difference Equations by S. Goldberg, Difference Equations: An Introduction with Applications by W. G. Kelley and A. C. Peterson, An Introduction to Difference Equations by S. Elaydi, Theory of Difference Equations: An Introduction by V. Lakshmikantham and D. Trigiante, and Difference Equations: Theory and Applications by R. E. Mickens. However, From Rabbits to Chaos places a greater emphasis on computation than theory compared to some of these other books. Reviewer Henry Ricardo writes that the book is "more suitable to an undergraduate course" than its alternatives, despite being less in-depth, because of its greater accessibility and connection to application areas.
Similarly, reviewer Shandelle Henson calls From Rabbits to Chaos "well written and easy to read" but adds that it is not "comprehensive or up-to-date".
References
Recurrence relations
Mathematics textbooks
2005 non-fiction books | Difference Equations: From Rabbits to Chaos | [
"Mathematics"
] | 454 | [
"Mathematical relations",
"Recurrence relations"
] |
63,054,227 | https://en.wikipedia.org/wiki/Apilimod | Apilimod (STA-5326) is a drug that was initially identified as an inhibitor of production of the interleukins IL-12 and IL-23, and developed for the oral treatment of autoimmune conditions such as Crohn's disease and rheumatoid arthritis, though clinical trial results were disappointing and development for these applications was not continued.
Subsequently, it was discovered that apilimod has an additional mode of action, as an inhibitor of the lipid kinase enzyme PIKfyve. PIKfyve makes two lipids, PtdIns5P and PtdIns(3,5)P2, whose syntheses are efficiently and similarly inhibited by apilimod (ID50 = 0.4 nM) in in vitro assays. Administration of apilimod (100 nM; 60 min) in human embryonic kidney cells powerfully reduces levels of both PtdIns5P and PtdIns(3,5)P2.
Recently apilimod has been repurposed as a potential antiviral and anti-cancer drug, with possible applications in the treatment of non-Hodgkin lymphoma as well as viral diseases such as Ebola virus disease, Lassa fever and COVID-19.
References
Antiviral drugs
Anti-interleukin drugs | Apilimod | [
"Chemistry",
"Biology"
] | 277 | [
"Antiviral drugs",
"Biocides",
"Hydrazones",
"Functional groups"
] |
63,057,517 | https://en.wikipedia.org/wiki/Generalized%20pencil-of-function%20method | Generalized pencil-of-function method (GPOF), also known as matrix pencil method, is a signal processing technique for estimating a signal or extracting information with complex exponentials. Being similar to Prony and original pencil-of-function methods, it is generally preferred to those for its robustness and computational efficiency.
The method was originally developed by Yingbo Hua and Tapan Sarkar for estimating the behaviour of electromagnetic systems from their transient response, building on Sarkar's past work on the original pencil-of-function method. The method has a plethora of applications in electrical engineering, particularly related to problems in computational electromagnetics, microwave engineering and antenna theory.
Method
Mathematical basis
A transient electromagnetic signal can be represented as:
\[
y(t) = x(t) + n(t) \approx \sum_{i=1}^{M} R_i e^{s_i t} + n(t),
\]
where
y(t) is the observed time-domain signal,
n(t) is the signal noise,
x(t) is the actual signal,
R_i are the residues (i = 1, 2, ..., M),
s_i are the poles of the system, defined as s_i = −α_i + jω_i,
z_i = e^{s_i T_s} by the identities of the Z-transform,
α_i are the damping factors and
ω_i are the angular frequencies.
The same sequence, sampled by a period of T_s, can be written as the following:
\[
y(kT_s) = x(kT_s) + n(kT_s) \approx \sum_{i=1}^{M} R_i z_i^{k} + n(kT_s), \qquad k = 0, 1, \ldots, N-1.
\]
Generalized pencil-of-function estimates the optimal M and the z_i's.
Noise-free analysis
For the noiseless case, two data matrices are produced from the sampled sequence:
where L is defined as the pencil parameter. Both matrices can be decomposed into the following products:
where
two of the factors are diagonal matrices containing the sequentially placed residues and pole values, respectively.
For a suitable choice of the pencil parameter, the generalized eigenvalues of the matrix pencil
yield the poles of the system, which are the z_i. Then, the generalized eigenvectors can be obtained by the following identities:
where the superscript plus sign denotes the Moore–Penrose inverse, also known as the pseudo-inverse. Singular value decomposition can be employed to compute the pseudo-inverse.
Noise filtering
If noise is present in the system, the two matrices are combined into a general data matrix:
where the entries are the noisy data. For efficient filtering, L is chosen between N/3 and N/2. A singular value decomposition on the data matrix yields:
In this decomposition, the two unitary factors contain the left and right singular vectors of the data matrix, the diagonal factor contains its singular values, and the superscript denotes the conjugate transpose.
Then the parameter M is chosen for filtering. Singular values beyond the M-th, which are below the filtering threshold, are set to zero; for an arbitrary singular value, the threshold is set by its ratio to the maximum singular value:
singular values whose ratio to the maximum falls below 10^(−p), where p is the number of significant decimal digits of the data, are considered noise.
The two filtered pencil matrices are obtained by removing the last and the first row and column of the filtered matrix, respectively; the retained columns represent the dominant, signal-subspace singular vectors. The filtered matrices are obtained as:
Prefiltering can be used to combat noise and enhance signal-to-noise ratio (SNR). Band-pass matrix pencil (BPMP) method is a modification of the GPOF method via FIR or IIR band-pass filters.
GPOF can handle up to 25 dB SNR. For GPOF, as well as for BPMP, variance of the estimates approximately reaches Cramér–Rao bound.
Calculation of residues
Residues of the complex poles are obtained through a least squares problem in which the sampled data are fitted by powers of the estimated poles (a Vandermonde system).
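The following is a compact numerical sketch of the procedure described above (a hedged illustration with hypothetical parameter values, not the authors' reference implementation): two shifted Hankel matrices are formed from the samples, the poles are taken from the eigenvalues of the pseudo-inverse pencil, and the residues follow from a linear least-squares fit.

```python
import numpy as np

# Illustrative matrix pencil / GPOF sketch (hedged; parameter values are made up and this is
# not the reference implementation): poles come from the eigenvalues of pinv(Y1) @ Y2, built
# from two shifted Hankel matrices, and residues follow from a linear least-squares fit.

def matrix_pencil(y, M, L=None):
    """Estimate M poles z_i and residues R_i of y[k] ~ sum_i R_i * z_i**k."""
    y = np.asarray(y, dtype=complex)
    N = len(y)
    if L is None:
        L = N // 2                                   # pencil parameter (roughly N/3..N/2)
    # Hankel data matrix with L+1 columns; Y1 drops the last column, Y2 drops the first.
    Y = np.array([y[k:k + L + 1] for k in range(N - L)])
    Y1, Y2 = Y[:, :-1], Y[:, 1:]
    # Generalized eigenvalues of the pencil (Y2 - z*Y1): nonzero eigenvalues of pinv(Y1) @ Y2.
    eig = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
    poles = eig[np.argsort(-np.abs(eig))][:M]        # keep the M dominant (signal) poles
    # Residues from the least-squares Vandermonde problem y[k] ~ sum_i R_i * z_i**k.
    V = np.vander(poles, N, increasing=True).T       # V[k, i] = poles[i]**k
    residues, *_ = np.linalg.lstsq(V, y, rcond=None)
    return poles, residues

# Synthetic transient with two damped complex-exponential modes, sampled every Ts seconds.
Ts, k = 0.01, np.arange(100)
z_true = [np.exp((-1.0 + 2j * np.pi * 12) * Ts), np.exp((-3.0 + 2j * np.pi * 30) * Ts)]
y = 1.0 * z_true[0] ** k + 0.5 * z_true[1] ** k
poles, residues = matrix_pencil(y, M=2)
print(np.round(poles, 4), np.round(residues, 3))
```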
Applications
The method is generally used for the closed-form evaluation of Sommerfeld integrals in discrete complex image method for method of moments applications, where the spectral Green's function is approximated as a sum of complex exponentials. Additionally, the method is used in antenna analysis, S-parameter-estimation in microwave integrated circuits, wave propagation analysis, moving target indication, radar signal processing, and series acceleration in electromagnetic problems.
See also
Estimation of signal parameters via rotational invariance techniques
Generalized eigenvalue problem
Matrix pencil
MUSIC (algorithm)
Prony's method
References
Signal processing
Computational electromagnetics
Radar signal processing
Estimation theory
Articles containing proofs
Signal estimation | Generalized pencil-of-function method | [
"Physics",
"Mathematics",
"Technology",
"Engineering"
] | 769 | [
"Computational electromagnetics",
"Telecommunications engineering",
"Computer engineering",
"Signal processing",
"Computational physics",
"Articles containing proofs"
] |
53,236,762 | https://en.wikipedia.org/wiki/Epitranscriptomic%20sequencing | In epitranscriptomic sequencing, most methods focus on either (1) enrichment and purification of the modified RNA molecules before running on the RNA sequencer, or (2) improving or modifying bioinformatics analysis pipelines to call the modification peaks. Most methods have been adapted and optimized for mRNA molecules, except for modified bisulfite sequencing for profiling 5-methylcytidine which was optimized for tRNAs and rRNAs.
There are seven major classes of chemical modifications found in RNA molecules: N6-methyladenosine, 2'-O-methylation, N6,2'-O-dimethyladenosine, 5-methylcytidine, 5-hydroxylmethylcytidine, inosine, and pseudouridine. Various sequencing methods have been developed to profile each type of modification. The scale, resolution, sensitivity, and limitations associated with each method and the corresponding bioinformatics tools used will be discussed.
Methods for profiling N6-methyladenosine
Methylation of adenosine does not affect its ability to base-pair with thymidine or uracil, so N6-methyladenosine (m6A) cannot be detected using standard sequencing or hybridization methods. This modification is marked by the methylation of the adenosine base at the nitrogen-6 position. It is abundantly found in polyA+ mRNA; also found in tRNA, rRNA, snRNA, and long ncRNA.
m6A-seq and MeRIP-seq
In 2012, the first two methods for m6A sequencing came out that enabled transcriptome-wide profile of m6A in mammalian cells. These two techniques, called m6A-seq and MeRIP-seq (m6A-specific methylated RNA immunoprecipitation), are also the first methods to allow for any type of RNA modification sequencing. These methods were able to detect 10,000 m6A peaks in the mammalian transcriptome; the peaks were found to be enriched in 3’UTR regions, near STOP codons, and within long exons.
The two methods were optimized to detect methylation peaks in poly(A)+ mRNA, but the protocol could be adapted to profile any type of RNA. The collected RNA sample is fragmented into ~100-nucleotide-long oligonucleotides using a fragmentation buffer, followed by immunoprecipitation with a purified anti-m6A antibody and elution and collection of the antibody-tagged RNA molecules. The immunoprecipitation procedure in MeRIP-Seq is able to produce >130-fold enrichment of m6A sequences. Random-primed cDNA library generation is then performed, followed by adaptor ligation and Illumina sequencing. Since the RNA strands are randomly chopped up, the m6A site should, in principle, lie somewhere in the center of the regions to which sequence reads align. At extremes, the region would be roughly 200nt wide (100nt up- and downstream of the m6A site).
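As a minimal illustration of the downstream peak-calling idea (a hedged sketch, not the published MeRIP-seq or m6A-seq pipeline; the window size, fold-change cutoff and significance threshold are made-up parameters), enriched windows can be called by comparing immunoprecipitated (IP) coverage with input coverage:

```python
import numpy as np
from scipy.stats import poisson

def call_m6a_peaks(ip_counts, input_counts, window=100, min_fold=2.0, alpha=1e-5):
    """Report windows whose IP coverage is enriched over the input coverage.

    ip_counts, input_counts: per-window read counts from the IP and input libraries.
    A window is a candidate m6A peak when the library-size-normalized IP/input fold
    change exceeds min_fold and the IP count is improbably high under a Poisson model
    whose mean is the scaled input count.
    """
    ip = np.asarray(ip_counts, dtype=float)
    inp = np.asarray(input_counts, dtype=float)
    scale = ip.sum() / max(inp.sum(), 1.0)           # library-size normalization
    expected = np.maximum(inp * scale, 0.5)          # pseudo-count avoids zero expectation
    fold = (ip + 0.5) / expected
    pvals = poisson.sf(ip - 1, expected)             # P(X >= ip) under Poisson(expected)
    hits = np.where((fold >= min_fold) & (pvals <= alpha))[0]
    return [(int(i * window), int((i + 1) * window), round(float(fold[i]), 2), float(pvals[i]))
            for i in hits]

# Toy example: the third window is strongly enriched in the IP library.
print(call_m6a_peaks([12, 9, 300, 11, 8], [10, 10, 12, 10, 9]))
```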
When the first nucleotide of a transcript is an adenosine, in addition to the ribose 2’-O-methylation, this base can be further methylated at the N6 position.
m6A-seq was confirmed to be able to detect m6Am peaks at transcription start sites. Adapter ligation at both ends of the RNA fragment results in reads tending to pile up at the 5' terminus of the transcript. Schwartz et al. (2015) leveraged this knowledge to detect mTSS sites by picking out sites with a high ratio of the size of pileups in the IP samples compared to the input sample. As confirmation, >80% of the highly enriched pileup sites contained adenosine.
The resolution of these methods is 100-200nt, which was the range of the fragment size.
These two methods had several drawbacks: (1) required substantial input material, (2) low resolution which made pinpointing the actual site with the m6A mark difficult, and (3) cannot directly assess false positives.
Especially in MeRIP-Seq, the bioinformatics tools that are currently available are only able to call 1 site per ~100-200nt wide peak, so a substantial portion of clustered m6As (~64nt between each individual site within a cluster) are missed. Each cluster can contain up to 15 m6A residues.
In 2013, a modified version of m6A-seq based on the previous two methods m6A-seq and MeRIP-seq came out which aimed to increase resolution, and demonstrated this in the yeast transcriptome. They achieved this by decreasing fragment size and employing a ligation-based strand-specific library preparation protocol capturing both ends of the fragmented RNA, ensuring that the methylated position is within the sequenced fragment. By additionally referencing the m6A consensus motif and eliminating false positive m6A peaks using negative control samples, the m6A profiling in yeast was able to be done at single-base resolution.
UV-based Methods
PA-m6A-seq
UV-induced RNA-antibody crosslinking was added on top of m6A-seq to produce PA-m6A-seq (photo-crosslinking-assisted m6A-seq), which increases resolution up to ~23nt. First, 4-thiouridine (4SU) is incorporated into the RNA by adding 4SU to the growth media, with some incorporation sites presumably near the m6A location. Immunoprecipitation is then performed on full-length RNA using an m6A-specific antibody. UV light at 365 nm is then shone onto the RNA to activate the crosslinking to the antibody via 4SU. Crosslinked RNA was isolated via competition elution and fragmented further to ~25-30nt; proteinase K was used to dissociate the covalent bond between the crosslinking site and the antibody. Peptide fragments that remain after antibody removal from the RNA cause the base to be read as a C as opposed to a T during reverse transcription, effectively inducing a point mutation at the 4SU crosslinking site. The short fragments are subjected to library construction and Illumina sequencing, followed by finding the consensus methylation sequence.
The presence of the T to C mutation helps increase the signal to noise ratio of methylation site detection as well as providing greater resolution to the methylation sequence.
One shortcoming of this method is that m6A sites that did not incorporate 4SU can't be detected.
Another caveat is that position of 4SU incorporation can vary relative to any single m6A residue, so it still remains challenging to precisely locate m6A site using the T to C mutation.
m6A-CLIP and miCLIP
m6A-CLIP (crosslinking immunoprecipitation) and miCLIP (m6A individual-nucleotide-resolution crosslinking and immunoprecipitation) are UV-based sequencing techniques. These two methods activate crosslinking at 254 nm, fragment RNA molecules before immunoprecipitation with the antibody, and do not depend on the incorporation of photoactivatable ribonucleosides - the antibody directly crosslinks with a base close to the m6A site (at a very predictable location). These UV-based strategies use antibodies that induce consistent and predictable mutational and truncation patterns in the cDNA strand during reverse transcription, which can be leveraged to more precisely locate the m6A site. Although both m6A-CLIP and miCLIP rely on UV-induced mutations, m6A-CLIP is distinct in that it takes advantage of the fact that m6A alone can induce cDNA truncation during reverse transcription, generating single-nucleotide mapping of more than tenfold more precise m6A sites (MITS, m6A-induced truncation sites) and permitting comprehensive and unbiased precise m6A mapping. In contrast, the UV-mapped m6A sites from miCLIP are only a small subset of the total precise m6A sites. The precise location of tens of thousands of m6A sites in human and mouse mRNAs by m6A-CLIP reveals that m6A is enriched in the last exon but not around the stop codon.
In m6A-CLIP and miCLIP, RNA is fragmented to ~20-80nt first, and then the 254 nm UV-induced covalent RNA/m6A-antibody complex is formed in the fragments containing m6A. The antibody is removed with proteinase K before reverse transcription, library construction and sequencing. Remnants of peptides at the crosslinking site on the RNA after antibody removal lead to insertions, truncations, and C to T mutations during reverse transcription to cDNA, especially at the +1 position to the m6A site (5' to the m6A site) in the sequence reads.
Positive sites seen using m6A-CLIP and miCLIP had a high percentage of matches with those detected using SCARLET (see below), which has higher local resolution around a specific site, indicating that m6A-CLIP and miCLIP have high spatial resolution and a low false discovery rate.
miCLIP has been used to detect m6Am by looking at crosslinking-induced truncation sites at the 5’UTR.
Methods for quantifying m6A modification status
Although m6A sites can be profiled at high resolution using UV-based methods, the stoichiometry of m6A sites - the methylation status, or the ratio of m6A+ to m6A- for each individual site within a type of RNA - is still unknown. SCARLET (2013) and m6A-LAIC-seq (2016) allow for the quantitation of stoichiometry at a specific locus and transcriptome-wide, respectively.
Bioinformatics methods used to analyze m6A peaks do not make any prior assumptions about the sequence motifs within which m6A sites are usually found, and take into consideration all possible motifs. Therefore, they are less likely to miss sites.
SCARLET
SCARLET (site-specific cleavage and radioactive-labeling followed by ligation-assisted extraction and thin-layer chromatography) is used for determining the fraction of RNA in a sample that carries a methylated adenine at a specific site. One can start with total RNA without having to enrich for the target RNA molecule. Therefore, it is an especially suitable method for quantifying methylation status in low-abundance RNAs such as tRNAs. However, it is not suitable or practical for large-scale location of m6A sites.
The procedure begins with a chimeric DNA oligonucleotide annealing to the target RNA around the candidate modification site. The chimeric ssDNA has 2'OMe/2'H modifications and is complementary to the target sequence. The chimeric oligonucleotide serves as a guide to allow RNase H to cleave the RNA strand precisely at the 5'-end of the candidate site. The cut site is then radiolabeled with phosphorus-32 and splint-ligated to a 116-nt ssDNA oligonucleotide using DNA ligase. RNase T1/A is introduced to the sample to digest all RNA, except for the RNA molecules with the 116-mer DNA attached. This radiolabeled product is then isolated and digested by nuclease to generate a mixture of modified and unmodified adenosines (5'P-m6A and 5'P-A), which is separated using thin-layer chromatography. The relative proportions of the two groups can be determined using UV absorption levels.
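The final read-out reduces to simple arithmetic on the two spot intensities; a toy computation (the intensity values are invented for illustration):

```python
def fraction_methylated(intensity_m6A, intensity_A):
    """Fraction of transcripts methylated at the interrogated site,
    computed from the TLC spot intensities of 5'P-m6A and 5'P-A."""
    return intensity_m6A / (intensity_m6A + intensity_A)

# hypothetical spot intensities from a phosphorimager scan
print(fraction_methylated(intensity_m6A=7300, intensity_A=2700))  # 0.73
```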
m6A-LAIC-seq
m6A-LAIC-seq (m6A-level and isoform-characterization sequencing) is a high-throughput approach to quantify methylation status on a whole-transcriptome scale. Full-length RNA samples are used in this method. RNAs are first subjected to immunoprecipitation with an anti-m6A antibody. Excess antibody is added to the mixture to ensure all m6A-containing RNAs are pulled down. The mixture is separated into eluate (m6A+ RNAs) and supernatant (m6A- RNAs) pools. External RNA Controls Consortium (ERCC) spike-ins are added to the eluate and supernatant, as well as to an independent control arm consisting of just ERCC spike-ins. After antibody cleavage in the eluate pool, each of the three mixtures is sequenced on a next-generation sequencing platform. The m6A level per site or gene can then be quantified from the ERCC-normalized RNA abundances in the different pools. Since full-length RNA is used, it is possible to directly compare alternatively spliced isoforms between the m6A+ and m6A- fractions as well as to compare isoform abundance within the m6A+ portion.
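A minimal sketch of the quantification step, assuming abundances have already been normalized to the ERCC spike-ins in each pool (gene names and numbers are illustrative):

```python
def m6a_level(eluate_norm, supernatant_norm):
    """Methylation level of a gene: ERCC-normalized abundance in the m6A+ eluate
    divided by the summed normalized abundance across both pools."""
    return eluate_norm / (eluate_norm + supernatant_norm)

# hypothetical ERCC-normalized abundances (arbitrary units) per gene
genes = {"GENE_A": (120.0, 40.0), "GENE_B": (15.0, 285.0)}
for name, (eluate, supernatant) in genes.items():
    print(name, round(m6a_level(eluate, supernatant), 2))
# GENE_A 0.75 (mostly methylated); GENE_B 0.05 (mostly unmethylated)
```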
Despite the advances in m6A-sequencing, several challenges still remain: (1) A method has yet to be developed that characterizes the stoichiometry between different sites in the same transcript; (2) Analysis results are heavily dependent on the bioinformatics algorithm used to call the peaks; (3) Current methods all use m6A-specific antibodies to tag m6A sites, but it has been reported that the antibodies contain intrinsic bias for RNA sequences.
Methods for 2'-O-methylation Profiling
The 2'-O-methylation of the ribose moiety is one of the most common RNA modifications and is present in diverse highly abundant non-coding RNAs (ncRNAs) and at the 5' cap of mRNAs. Moreover, many studies have revealed that Nm at the 3'-end is present in some ncRNAs, such as microRNAs (miRNAs) in plants as well as PIWI-interacting RNAs (piRNAs) in animals. This modification can perturb the function of ribosomes and disrupt tRNA decoding, regulate alternative splicing fidelity, protect ncRNAs from 3'-5' exonucleolytic degradation and provide a molecular signature for the discrimination of self from non-self mRNA.
Nm-REP-seq
A novel method, Nm-REP-seq, was developed for the transcriptome-wide identification of 2'-O-methylation sites at single-base resolution by using an RNA exoribonuclease (Mycoplasma genitalium RNase R, MgR) and periodate oxidation reactivity to eliminate 2'-hydroxylated (2'-OH) nucleosides. Nm-REP-seq identified the telomerase RNA component (TERC), scaRNAs and snoRNAs as new classes of Nm-containing ncRNAs, as well as many 2'-O-methylation sites in various ncRNAs and mRNAs. Furthermore, Nm-REP-seq revealed 2'-O-methylation located at the 3'-end of snoRNAs, snRNAs, tRNAs and fragments derived from them, as well as of piRNAs and miRNAs.
Methods for N6,2'-O-dimethyladenosine (m6Am) Profiling
N6,2'-O-dimethyladenosine, abundant in polyA+ mRNAs, occurs at the first nucleotide after the 5' cap, when an additional methyl group is added to a 2ʹ-O-methyladenosine residue at the ‘capped’ 5ʹ end of mRNA.
Since m6Am can be recognized by anti-m6A antibodies at transcription start sites, the methods used for m6A profiling can be and were adapted for m6Am profiling, namely m6A-seq, and miCLIP (see m6A-seq and miCLIP descriptions above).
Methods for 5-methylcytidine profiling
5-methylcytidine, m5C, is abundantly found in mRNA and ncRNAs, especially tRNA and rRNAs. In tRNAs, this modification stabilizes the secondary structure and influences anticodon stem-loop conformation. In rRNAs, m5C affects translational fidelity.
Two principles have been used to develop m5C sequencing methods. The first targets the modified base itself, either chemically (bisulfite sequencing) or with an antibody (m5C-RIP), in a manner similar to m6A sequencing. The second detects the targets of m5C RNA methyltransferases by covalently linking the enzyme to its target and then using an IP specific to the target enzyme to enrich for RNA molecules containing the mark (Aza-IP and miCLIP).
Modified bisulfite sequencing
Modified bisulfite sequencing was optimized for rRNA, tRNA, and miRNA molecules from Drosophila.
Bisulfite treatment has been most widely used to detect dm5C (DNA m5C). The treatment essentially converts a cytosine to a uridine, but methylated cytosines would be unchanged by the treatment.
Previous attempts to develop m5C sequencing protocols using bisulfite treatment were not able to effectively address the problem of the harsh treatment of RNA which causes significant degradation of the molecules. Specifically, bisulfite deamination treatment (high pH) of RNA is detrimental to the stability of phosphodiester bonds. As a result, it is difficult to pre-enrich RNA molecules or to obtain enough PCR product of the correct size for deep sequencing.
A modified version of bisulfite sequencing was developed by Schaefer et al. (2009), which decreased the temperature of the bisulfite treatment of RNA from 95 °C to 60 °C. The rationale behind the modification was that since RNA, unlike DNA, is not double-stranded but rather consists of single-stranded regions, double-stranded stem structures and loops, it might be possible to unwind RNA at a much lower temperature. Indeed, RNA could be treated for 180 minutes at 60 °C without significant loss of PCR amplicons of the expected size. Deamination rates were determined to be 99% after 180 minutes of treatment.
After bisulfite treatment of fragmented RNA, reverse transcription is performed, followed by PCR amplification of the cDNA products; finally, deep sequencing is done using the Roche 454 platform.
Since the developers of the method used the Roche platform, they also used GS Amplicon Variant Analyzer (Roche) for analyzing deep sequencing data to quantify sequence-specific cytosine content.
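The underlying read-out is straightforward: cytosines that remain unconverted in the aligned reads are candidate m5C positions, and the per-site non-conversion rate estimates the methylation level. A toy illustration of this calling step (not the Roche software; the sequences are invented):

```python
def m5c_non_conversion(reference, aligned_reads):
    """For every cytosine in the reference, report the fraction of reads that still
    show C after bisulfite treatment (C = candidate m5C, T = converted).
    `aligned_reads` are strings the same length as `reference`."""
    levels = {}
    for i, ref_base in enumerate(reference):
        if ref_base != "C":
            continue
        c = sum(1 for read in aligned_reads if read[i] == "C")
        t = sum(1 for read in aligned_reads if read[i] == "T")
        if c + t:
            levels[i] = c / (c + t)
    return levels

reference = "GACGCTAC"
reads = ["GACGCTAT",   # last C converted to T
         "GATGCTAT",   # both unmethylated Cs converted
         "GACGCTAT"]
print(m5c_non_conversion(reference, reads))
# {2: 0.67, 4: 1.0, 7: 0.0} -> position 4 behaves like a fully methylated site
```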
However, recent papers have suggested that the method has several flaws: (1) incomplete conversion of regular cytosines in double-stranded regions of RNA; (2) areas containing other modifications that confer resistance to bisulfite treatment; and (3) sites containing potential false positives due to (1) and (2). In addition, it is possible that the sequencing depth is still not high enough to correctly detect all methylated sites.
Aza-IP
Aza-IP (5-azacytidine-mediated RNA immunoprecipitation) has been optimized for and used to detect targets of methyltransferases, particularly NSUN2 and DNMT2 — the two main enzymes responsible for laying down the m5C mark.
First, the cell is made to overexpress an epitope-tagged m5C-RNA methyltransferase derivative so that the antibody used later for immunoprecipitation can recognize the enzyme. Second, 5-aza-C is introduced into the cells so that it is incorporated into nascent RNA in place of cytosine. Normally, the methyltransferases are released (i.e. the covalent bond between cytosine and methyltransferase is broken) following methylation of the residue. With 5-aza-C, due to a nitrogen substitution at the C5 position of cytosine, the RNA methyltransferase enzyme remains covalently bound to the target RNA molecule at the C6 position.
Third, the cell is lysed and the m5C-RNA methyltransferase of interest is immunoprecipitated along with the RNA molecules that are covalently linked to the protein. The IP step enabled >200-fold enrichment of RNA targets, which were mainly tRNAs. The enriched molecules were then fragmented and purified. cDNA library is then constructed and sequencing is performed.
An important additional feature is that the covalent linkage of the RNA methyltransferase to the C5 position of 5-aza-C induces rearrangement and ring opening. This ring opening results in preferential pairing with cytosine and is therefore read as guanosine during sequencing. This C-to-G transversion allows for base-resolution detection of m5C sites.
One caveat is that m5C sites not replaced by 5-azacytosine will be missed.
miCLIP
miCLIP (methylation-induced crosslinking immunoprecipitation) was used to detect NSUN2 targets, which were found to be mostly non-coding RNAs such as tRNA. An induced C271A mutation in NSUN2 inhibits release of the enzyme from its RNA target. This mutant was overexpressed in the cells of interest, and the mutated NSUN2 was also tagged with the Myc epitope. The covalently linked RNA-protein complexes are isolated via immunoprecipitation with a Myc-specific antibody. These complexes are confirmed and detected by radiolabeling with phosphorus-32. The RNA is then extracted from the complex, reverse-transcribed, amplified by PCR, and sequenced on next-generation platforms.
Both miCLIP and Aza-IP, though limited by specific targeting of enzymes, can allow for the detection of low-abundance methylated RNA without deep sequencing.
Methods for Inosine Profiling
Inosine is created enzymatically when an adenosine residue is deaminated.
Analysis of base-pairing properties
Since inosine is chemically a deaminated adenosine, this is one of the few modifications with an accompanying alteration in base pairing, which can be capitalised on. The original adenosine nucleotide pairs with thymine, whereas inosine pairs with cytosine. cDNA sequences obtained by RT-PCR can therefore be compared to the corresponding genomic sequences; at sites where A residues are repeatedly read as G, an editing event can be assumed. At high enough accuracy, the fraction of mRNA molecules in the population that carry the modification can be calculated as a percentage. This method potentially has single-nucleotide resolution. In fact, the abundance of RNA-seq data that is now publicly available can be leveraged to investigate G (in cDNA) versus A (in genome). One particular pipeline, called RNA and DNA differences (RDD), claims to exclude false positives, but only 56.8% of its A-to-I sites were found to be valid by ICE-seq (see below).
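A bare-bones illustration of that comparison, assuming per-position base counts have already been extracted from an RNA-seq pileup over genomic A positions (the counts, thresholds and SNP filter are invented for the example):

```python
def editing_fraction(base_counts):
    """Fraction of reads showing G at a genomic A position (candidate A-to-I editing).
    base_counts: dict such as {"A": 120, "G": 35} from an RNA-seq pileup."""
    a, g = base_counts.get("A", 0), base_counts.get("G", 0)
    return g / (a + g) if (a + g) else 0.0

def call_editing_sites(pileups, known_snps, min_fraction=0.05, min_depth=20):
    """Keep genomic-A positions whose G fraction passes the thresholds,
    excluding known SNP positions (a crude stand-in for dbSNP filtering)."""
    calls = {}
    for pos, counts in pileups.items():
        if pos in known_snps or sum(counts.values()) < min_depth:
            continue
        frac = editing_fraction(counts)
        if frac >= min_fraction:
            calls[pos] = frac
    return calls

pileups = {1001: {"A": 60, "G": 40}, 1002: {"A": 98, "G": 2}, 1003: {"A": 0, "G": 50}}
print(call_editing_sites(pileups, known_snps={1003}))   # {1001: 0.4}
```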
Limitations
The background noise caused by single nucleotide polymorphisms (SNPs), somatic mutations, pseudogenes and sequencing errors reduce the reliability of the signal, especially in a single-cell context.
Chemical methods
Inosine-specific cleavage
The first method to detect A-to-I RNA modifications, developed in 1997, was inosine-specific cleavage. RNA samples are treated with glyoxal and borate to specifically modify all G bases, and are subsequently digested by RNase T1, which cleaves after I sites. The amplification of these fragments then allows analysis of cleavage sites and inference of A-to-I modification.
It was used to prove the position of inosine at specific sites rather than to identify novel sites or transcriptome-wide profiles.
Limitations
The existence of two A-to-I modifications in relatively close proximity, which is common in Alu elements, means the downstream modification is less likely to be detected, since cDNA synthesis will be truncated at a prior nucleotide. The throughput is low, and the initial method required specific primers; the protocol is complicated and labour-intensive.
ICE and ICE-seq
Inosine chemical erasing (ICE) refers to a process in which acrylonitrile is reacted with inosine to form N1-cyanoethylinosine (ce1I). This serves to stall reverse transcriptase and leads to truncated cDNA molecules. This was combined with deep sequencing in a method called ICE-seq. Computational methods for automated analysis of the data are available; the main premise is the comparison of treated and untreated samples to identify truncated transcripts and thus infer an inosine modification from read counts, with a step to reduce false positives by comparison with the online database dbSNP.
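A simplified sketch of that comparison: a candidate inosine position is one where reads extending through the position drop sharply in the treated (cyanoethylated) sample relative to the untreated one (the read counts and cut-offs are illustrative):

```python
def ice_candidate_sites(readthrough_untreated, readthrough_treated,
                        min_reads=30, max_ratio=0.5):
    """Flag positions where read-through in the acrylonitrile-treated sample drops
    to <= max_ratio of the untreated sample, consistent with RT stalling at ce1I.
    Both inputs map position -> number of reads extending through that position."""
    sites = []
    for pos, untreated in readthrough_untreated.items():
        treated = readthrough_treated.get(pos, 0)
        if untreated >= min_reads and treated <= max_ratio * untreated:
            sites.append(pos)
    return sorted(sites)

untreated = {210: 80, 340: 95, 512: 60}
treated = {210: 78, 340: 12, 512: 55}
print(ice_candidate_sites(untreated, treated))   # [340]
```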
Limitations
The original ICE protocol involved an RT-PCR amplification step and therefore required primers and knowledge of the location or regions to be investigated, alongside a maximum cDNA length of 300–500bp.
The ICE-seq method is complicated, as well as labour-, reagent- and time-intensive. One protocol from 2015 took 22 days. This shares a limitation with inosine-specific cleavage: if there are two A-to-I modifications in relatively close proximity, the downstream modification is less likely to be detected, since cDNA synthesis will be truncated at a prior nucleotide.
Both ICE and ICE-seq suffer from a lack of sensitivity to infrequently edited locations: it becomes difficult to distinguish a modification with a frequency of <10% from a false positive. An increase in read depth and quality can increase sensitivity, but then also introduces further amplification bias.
Biological methods
ADAR knockdown
The modification of A to I is effected by adenosine deaminases that act on RNA (ADARs), of which there are three in mice. The knockdown of these enzymes in the cell, and the subsequent comparison of the RNA content of ADAR+ and ADAR- cells, would therefore be expected to provide a basis for A-to-I modification profiling. However, ADAR enzymes have further functions within the cell — for example, they have roles in RNA processing and in miRNA biogenesis — which would also be likely to change the landscape of cellular mRNA. Recently, a map of A-to-I editing in mice was generated using editing-deficient ADAR1 and ADAR2 double-knockout mice as a negative control, allowing A-to-I editing to be detected with high confidence.
Methods for Pseudouridine Methylation Profiling
Pseudouridine, or Ψ, the most abundant post-transcriptional RNA modification overall, is created when a uridine base is isomerised. In eukaryotes, this can occur by either of two distinct mechanisms; it is sometimes referred to as the 'fifth RNA nucleotide'. It is incorporated into stable non-coding RNAs such as tRNA, rRNA, and snRNA, with roles in ribosomal ligand binding and translational fidelity in tRNA, and in fine-tuning branching and splicing events in snRNAs. Pseudouridine has one more hydrogen-bond donor from an imino group and a more stable C–C bond, since a C-glycosidic linkage replaces the N-glycosidic linkage found in its counterpart (regular uridine). As neither of these changes affects its base-pairing properties, both give the same output when directly sequenced; therefore methods for its detection involve prior biochemical modification.
Biochemical methods
CMCT methods
There are multiple pseudouridine detection methods beginning with the addition of N-cyclohexyl-N′-b-(4-methylmorpholinium) ethylcarbodiimide metho-p-toluene-sulfonate (CMCT; also known as CMC), since its reaction with pseudouridine produces CMC-Ψ. CMC-Ψ causes reverse transcriptase to stall one nucleotide in the 3’ direction. These methods have single-nucleotide resolution.
In an optimisation step, azido-CMC can confer the ability to add biotinylation; subsequent biotin pulldown will enrich Ψ-containing transcripts, allowing identification of even low-abundance transcripts.
Limitations
As with other procedures predicated on biochemical alteration followed by sequencing, the development of high-throughput sequencing has removed the limitations requiring prior knowledge of sites of interest and primer design. The method causes a lot of RNA degradation, so it is necessary to start with a large amount of sample, or to use effective normalisation techniques to account for amplification biases. One final limitation is that, while CMC labelling of pseudouridine is specific, it is not complete, and therefore the method is not quantitative. A new reactant that could achieve higher sensitivity with specificity would be beneficial.
Methods for 5-hydroxymethylcytidine Profiling
Cytidine residues, modified once to m5C (discussed above), can be further modified: either oxidised once to 5-hydroxymethylcytidine (hm5C), or oxidised twice to 5-formylcytidine (f5C). Arising from the oxidative processing of m5C enacted in mammals by ten-eleven translocation (TET) family enzymes, hm5C is known to occur in all three kingdoms and to have roles in regulation. While 5-hydroxymethyl-2'-deoxycytidine (hm5dC) is known to be found in DNA in a widespread manner, hm5C is also found in organisms for which no hm5dC has been detected, indicating it is a separate process with distinct regulatory stipulations. To observe the in vivo addition of methyl groups to cytosine RNA residues followed by oxidative processing, mice can be fed a diet incorporating particular isotopes, and these can be traced by LC-MS/MS analysis. Since the metabolic pathway from nutritional intake to nucleotide incorporation is known to progress from dietary methionine → S-adenosylmethionine (SAM) → methyl group on an RNA base, the labelling of dietary methionine with 13C and D means these isotopes end up in hm5C residues that have been modified since their addition to the diet. In contrast to m5C, a large quantity of hm5C modifications have been recorded within coding sequences.
hMeRIP-seq
hMeRIP-seq is an immunoprecipitation method, in which RNA–protein complexes are crosslinked for stability, and antibodies specific to hm5C are added. Using this method, over 3,000 hm5C peaks have been called in Drosophila melanogaster S2 cells.
Limitations
Despite two distinct base-resolution methods being available for hm5dC, there are no base-resolution methods for detection of hm5C.
Biophysical validation of RNA modifications
Apart from mass spectrometry and chromatography, two other validation techniques have been developed, namely
Pre- and post-labelling techniques:
Pre-labelling → involves the use of 32P: cells are grown in 32P-containing medium, thus allowing the incorporation of [α-32P]NTPs during transcription by T7 RNA polymerase. The modified RNA is then extracted, and each RNA species is isolated and subsequently digested by T2 RNase. Next, the RNA is hydrolyzed into 5' nucleoside monophosphates, which are analyzed by 2D-TLC (two-dimensional thin-layer chromatography). This method is able to detect and quantify every modification but does not contribute to the characterization of the sequence.
Post-labelling → involves the selective labelling of a specific position within the sequence: these techniques rely on the principles of the Stanley–Vassilenko approach, which has been adjusted to achieve better validation quality. First, RNA is cleaved into free 5'-OH fragments by sequence-specific hydrolysis, either by RNase H or by DNAzymes. Polynucleotide kinase (PNK) then performs the 5' radioactive post-labelling phosphorylation using [γ-32P]ATP. At this point, the labelled fragments undergo size fragmentation, which can be performed either by Nuclease P1 or according to the SCARLET method. In both cases, the final product is a group of 5' nucleoside monophosphates (5' NMPs) that are analyzed by TLC.
SCARLET: this recent approach exploits not just one but two sequence-selection steps, the last of which occurs during the splinted ligation of the radioactively labelled fragments with a long DNA oligonucleotide at their 3'-end. After degradation, the labelled residue is purified together with the ligated DNA oligonucleotide and is finally hydrolyzed, and therefore released, by the activity of Nuclease P1.
This method has proven to be very useful in the validation of modified residues in mRNAs and lncRNAs, such as m6A and Ψ.
Oligonucleotide-based techniques: this method includes several variants
Splinted ligation of particular modified DNAs, that exploits the ligase sensitivity to 3’ and 5’ nucleotides (so far used for m6A, 2’-O-Me, Ψ)
Microarray modification identification through a DNA-chip, that exploits the decrease in duplex stability of cDNA oligonucleotides, due to the impediment in conventional base-pairing caused by modifications (ex. m1A, m1G, m22G)
RT primer extension at low dNTPs concentration, for mapping of RT arrest signals.
Single-Molecule Real-Time Sequencing for epitranscriptome sequencing
Single-molecule real-time (SMRT) sequencing is used in both the epigenomic and epitranscriptomic fields. In epigenomics, thousands of zero-mode waveguides (ZMWs) are used to capture a DNA polymerase: when a modified base is present, the biophysical dynamics of the polymerase's movement change, creating a unique kinetic signature before, during, and after the base incorporation.
SMRT sequencing can be used to detect modified bases in RNA, including m6A sites. In this case, a reverse transcriptase is used as the enzyme within the ZMWs to observe cDNA synthesis in real time. Synthetically designed m6A sites leave a kinetic signature and increase the interpulse duration (IPD).
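Modified positions are typically flagged by comparing per-position kinetics in the native sample against an unmodified control; a toy IPD-ratio computation (numbers and threshold are invented for illustration):

```python
from statistics import mean

def ipd_ratios(native_ipds, control_ipds):
    """Mean interpulse duration (IPD) in the native template divided by the mean IPD
    in an unmodified control, per position; ratios well above 1 suggest a modification
    slowing the enzyme at that position."""
    return {pos: mean(native_ipds[pos]) / mean(control_ipds[pos])
            for pos in native_ipds if pos in control_ipds}

native = {10: [0.9, 1.1, 1.0], 11: [3.8, 4.1, 4.3], 12: [1.0, 0.8, 1.2]}
control = {10: [1.0, 1.0, 1.1], 11: [1.1, 0.9, 1.0], 12: [1.0, 1.1, 0.9]}
flagged = {pos: r for pos, r in ipd_ratios(native, control).items() if r > 2.0}
print(flagged)   # {11: ~4.1}
```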
There are some issues concerning the reading of homonucleotide stretches and the base resolution of m6A therein, due to the stuttering of reverse transcriptase. Secondly, the throughput is too low for transcriptome-wide approaches.
One of the most commonly used platforms is the SMRT sequencing technology by Pacific Biosciences.
Nanopore sequencing in epitranscriptomics
A possible alternative to the detection of epitranscriptomic modifications by SMRT sequencing is direct detection using nanopore sequencing technologies. This technique exploits nanometre-sized protein channels embedded in a membrane or in solid materials and coupled to sensors able to detect the amplitude and duration of variations in the ionic current passing through the pore. As the RNA passes through the nanopore, the blockage leads to a disruption in the current, which differs between bases, including modified ones, and can therefore be used to identify possible modifications. By producing single-molecule reads without prior RNA amplification and conversion to cDNA, these techniques can lead to the production of quantitative transcriptome-wide maps.
In particular, the Nanopore technology proved to be effective in detecting the presence of two nucleotide analogs in RNA: N6-methyladenosine (m6A) and 5-methylcytosine (5-mC). Using Hidden Markov Models (HMM) or recurrent neural networks (RNN) trained with known sequences, it was possible to demonstrate that the modified nucleotides produce a characteristic disruption in the ionic current when passing through the pore, and that these data can be used to identify the nucleotide.
References
RNA
Nucleosides
Bioinformatics
Molecular biology | Epitranscriptomic sequencing | [
"Chemistry",
"Engineering",
"Biology"
] | 7,662 | [
"Bioinformatics",
"Biological engineering",
"Biochemistry",
"Molecular biology"
] |
56,089,649 | https://en.wikipedia.org/wiki/European%20Secure%20Software-defined%20Radio | European Secure Software-defined Radio (ESSOR) is a planned European Union (EU) Permanent Structured Cooperation project for the development of common technologies for European military software-defined radio systems, to guarantee the interoperability and security of voice and data communications between EU forces in joint operations, on a variety of platforms.
History
The project was based on United States' Software Communications Architecture and Joint Tactical Radio System, to which Thales was a major contributor. Germany initially did not participate in ESSOR, developing instead its own SDR system, Streitkräftegemeinsame, verbundfähige Funkgerät-Ausstattung.
Consortium
The work of development is being carried out by a consortium of private companies, one from each member country, including Thales (FR), Leonardo (IT), Indra Sistemas (SP), Radmor (PL), Bittium (FI) and Rohde & Schwarz (DE).
See also
Permanent Structured Cooperation
Organisation for Joint Armament Cooperation
References
External links
Description
Permanent Structured Cooperation projects
Software-defined radio
Military equipment of the European Union | European Secure Software-defined Radio | [
"Engineering"
] | 226 | [
"Radio electronics",
"Software-defined radio"
] |
56,091,940 | https://en.wikipedia.org/wiki/Positive%20displacement%20pipette | Positive displacement pipettes are a type of pipette that operates via piston-driven displacement. Unlike an air displacement pipette, which dispenses liquid using an air cushion in the pipette tip, the piston in a positive displacement pipette makes direct contact with the sample, allowing the aspiration force to remain constant.
Applications
Since the piston makes direct contact with the sample, the aspiration force in a positive displacement pipette is unaffected by the sample's physical properties. Several liquid handling companies suggest that positive displacement pipettes can be used to accurately pipette very viscous, volatile, hot or cold, or corrosive samples.
Viscous liquids
Viscous liquids, such as glycerol, flow very slowly. Glycerol has high dynamic viscosity, and if a researcher aspirates a sample of glycerol too quickly with an air displacement pipette, it will draw up an air bubble. When the researcher attempts to dispense the liquid, some of it will stick to the pipette tip wall, dispense very slowly and remain in the tip. Surfactants also produce this effect, but the remaining liquid film is thinner.
In a positive displacement pipette, the aspiration strength remains constant, so the tip fills evenly. Also, the piston slides along the internal sides of the pipette tip and pushes the total volume out, so no liquid is left behind.
Volatile liquids
Volatile liquids such as acetone, hexane, and methanol, evaporate continuously in air displacement pipettes. Some volatile liquids expand so quickly that they expand the air column in the pipette, which causes leakage: The pipette will lose drops and dispense liquid imprecisely. As drops leak out, they can contaminate the bench, ultimately causing cross-contamination from sample to sample. These drops can also produce a health hazard.
Because there is no air cushion in a positive displacement pipette, liquids do not evaporate or leak. Drops will not fall from the tip, and vapors will not contaminate the internal parts of the pipette. Also, the capillary/piston (CP) tips used for positive displacement pipetting are disposable.
Hot or cold liquids
In an air displacement pipette, the ambient temperature is correlated with the volume of the air cushion and affects the aspiration volume. Cold liquids, such as a suspension of restriction enzymes, which are usually handled at 0°C, cause the air cushion to shrink and the pipette to aspirate more liquid than expected, making the pipette over-deliver. Hot samples, such as mammalian cell cultures at body temperature or polymerase chain reaction solutions at 60°C or higher, will cause the air cushion to expand, causing the pipette to aspirate less liquid than expected and making the pipette under-deliver.
Positive displacement pipettes do not have an air cushion and are less affected by liquid temperature, yielding greater pipetting accuracy.
Corrosive and hazardous liquids
Corrosive and radioactive liquids may damage the piston, seal, and tip holder in an air displacement pipette. Positive displacement pipettes use a disposable capillary/piston (CP) tip, so the pipette is not affected by corrosive samples over its lifetime. Since there is no contact between the sample and the pipette, there is little risk of contamination.
Pipetting technique
Positive displacement pipettes operate very similarly to air displacement pipettes.
Steps for operating a positive displacement pipette
Set the pipetting volume.
Attach a CP tip onto the pipette.
Hold the pipette vertically and press the plunger to the first stop.
Put the CP tip into the sample and slowly release the plunger, allowing the button to return to the home position.
Press the plunger to the first stop again to dispense the sample.
Press the plunger to the second stop to eject the CP tip.
References
Laboratory equipment
Volumetric instruments | Positive displacement pipette | [
"Technology",
"Engineering"
] | 807 | [
"Volumetric instruments",
"Measuring instruments"
] |
56,095,710 | https://en.wikipedia.org/wiki/Pixel%20Imaging%20Mass%20Spectrometry%20camera | The Pixel Imaging Mass Spectrometry camera (PImMS) is an ultrafast imaging sensor designed for time-of-flight particle imaging. It was invented by professors of chemistry at the University of Oxford, Mark Brouard and Claire Vallance, Renato Turchetta from IMASENIC (formerly at the STFC Rutherford Appleton Laboratory), and Andrei Nomerotski from Brookhaven National Labs (formerly at the Department of Physics, University of Oxford). The camera and accompanying software have been further developed by Iain Sedgwick (STFC Rutherford Appleton Laboratory), Jaya John John (Department of Physics, University of Oxford), and Jason Lee (Department of Chemistry, University of Oxford). The camera has been used for studies in chemical reaction dynamics, imaging mass spectrometry, and neutron time-of-flight imaging.
References
External links
Chemistry
Cameras
Mass spectrometry | Pixel Imaging Mass Spectrometry camera | [
"Physics",
"Chemistry",
"Technology"
] | 186 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Cameras",
"Recording devices",
"Matter"
] |
56,096,452 | https://en.wikipedia.org/wiki/Myo-Inositol%20trispyrophosphate | myo-Inositol trispyrophosphate (ITPP) is an inositol phosphate, a pyrophosphate, a drug candidate, and a putative performance-enhancing substance, which exerts its biological effects by increasing tissue oxygenation.
Chemistry
ITPP is a pyrophosphate derivative of phytic acid with the molecular formula C6H12O21P6.
Biological effects
ITPP is a membrane-permeant allosteric regulator of hemoglobin that mildly reduces its oxygen-binding affinity, which shifts the oxygen-hemoglobin dissociation curve to the right and thereby increases oxygen release from the blood into tissue. Phytic acid, in contrast, is not membrane-permeant due to its charge distribution.
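The consequence of such a right shift can be illustrated with the Hill equation for hemoglobin saturation; the P50 values and Hill coefficient in this sketch are typical textbook numbers used for illustration, not measured parameters for ITPP:

```python
def hb_saturation(pO2, p50, n=2.7):
    """Hill-equation estimate of hemoglobin O2 saturation at partial pressure pO2 (mmHg)."""
    return pO2**n / (pO2**n + p50**n)

tissue_pO2 = 40.0                        # typical tissue-level partial pressure
for p50 in (26.0, 32.0):                 # normal vs right-shifted (reduced affinity)
    s = hb_saturation(tissue_pO2, p50)
    print(f"P50={p50}: saturation at 40 mmHg = {s:.2f}")
# The right-shifted curve is less saturated at tissue pO2,
# i.e. more oxygen has been released to the tissue.
```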
Rodent studies in vivo demonstrated increased tissue oxygenation and dose-dependent increases in endurance during physical exercise, in both healthy mice and transgenic mice expressing a heart failure phenotype.
The substance is believed to have a high potential for use in athletic doping, and liquid chromatography–mass spectrometry tests have been developed to detect ITPP in urine tests. Its use as a performance-enhancing substance in horse racing has also been suspected, and similar tests have been developed for horses.
ITPP has been studied for potential adjuvant use in the treatment of cancer in conjunction with chemotherapy, due to its effects in reducing tissue hypoxia. Human clinical trials were registered in 2014 under the compound number OXY111A. The substance has also been examined in the context of other illnesses involving hypoxia, such as cardiovascular disease and dementia.
See also
Phytic acid
Inositol
Inositol phosphate
Inositol trisphosphate
myo-Inositol
References
Phospholipids
Inositol
Signal transduction | Myo-Inositol trispyrophosphate | [
"Chemistry",
"Biology"
] | 380 | [
"Phospholipids",
"Inositol",
"Signal transduction",
"Biochemistry",
"Neurochemistry"
] |
56,103,252 | https://en.wikipedia.org/wiki/Padmakar%E2%80%93Ivan%20index | In chemical graph theory, the Padmakar–Ivan (PI) index is a topological index of a molecule, used in biochemistry. The Padmakar–Ivan index is a generalization introduced by Padmakar V. Khadikar and Iván Gutman of the concept of the Wiener index, introduced by Harry Wiener. The Padmakar–Ivan index of a graph G is the sum over all edges uv of G of number of edges which are not equidistant from u and v.
Let G be a graph and e = uv an edge of G. Here $n_{eu}(e|G)$ denotes the number of edges lying closer to the vertex u than to the vertex v, and $n_{ev}(e|G)$ is the number of edges lying closer to the vertex v than to the vertex u. The Padmakar–Ivan index of a graph G is defined as
$$PI(G) = \sum_{e=uv \in E(G)} \left[ n_{eu}(e|G) + n_{ev}(e|G) \right].$$
The PI index is very important in the study of quantitative structure–activity relationship for the classification models used in the chemical, biological sciences, engineering, and nanotechnology.
Examples
The PI index of a dendrimer nanostar can be calculated by a closed-form formula.
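More generally, the index can be evaluated directly from the definition. A small computational sketch (using the networkx library; the brute-force approach and the test graphs are illustrative, not taken from the references), where the distance from an edge to a vertex is taken as the smaller of the distances from the edge's endpoints, and edges equidistant from u and v are not counted:

```python
import networkx as nx

def pi_index(G):
    """Edge Padmakar–Ivan index: for every edge e = uv, count the edges that are
    strictly closer to u than to v or vice versa (edges equidistant from u and v,
    including e itself, contribute nothing), then sum over all edges."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    total = 0
    for u, v in G.edges():
        for x, y in G.edges():
            d_u = min(dist[x][u], dist[y][u])   # distance from edge xy to u
            d_v = min(dist[x][v], dist[y][v])   # distance from edge xy to v
            if d_u != d_v:                      # not equidistant, so it is counted
                total += 1
    return total

print(pi_index(nx.cycle_graph(4)))   # 8  = n(n-2) for an even cycle
print(pi_index(nx.path_graph(5)))    # 12 = (n-1)(n-2) for a path on n vertices
```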
References
Mathematical chemistry
Cheminformatics
Graph invariants | Padmakar–Ivan index | [
"Chemistry",
"Mathematics"
] | 225 | [
"Drug discovery",
"Applied mathematics",
"Graph theory",
"Molecular modelling",
"Mathematical chemistry",
"Computational chemistry",
"Theoretical chemistry",
"Graph invariants",
"nan",
"Cheminformatics",
"Mathematical relations"
] |
65,906,702 | https://en.wikipedia.org/wiki/Quantum%20clock%20model | The quantum clock model is a quantum lattice model. It is a generalisation of the transverse-field Ising model. It is defined on a lattice with $N$ states on each site. The Hamiltonian of this model is
$$H = -J \sum_{\langle i,j \rangle} \left( Z_i^\dagger Z_j + Z_j^\dagger Z_i \right) - Jg \sum_i \left( X_i + X_i^\dagger \right)$$
Here, the subscripts refer to lattice sites, and the sum is done over pairs of nearest neighbour sites $i$ and $j$. The clock matrices $X_i$ and $Z_i$ are generalisations of the Pauli matrices satisfying
$$Z_j X_k = e^{\frac{2\pi i}{N}\delta_{j,k}} X_k Z_j$$
and
$$X_j^N = Z_j^N = 1,$$
where $\delta_{j,k}$ is 1 if $j$ and $k$ are the same site and zero otherwise. $J$ is a prefactor with dimensions of energy, and $g$ is another coupling coefficient that determines the relative strength of the external field compared to the nearest neighbour interaction.
The model obeys a global $\mathbb{Z}_N$ symmetry, which is generated by the unitary operator $\prod_i X_i$, where the product is over every site of the lattice. In other words, $\prod_i X_i$ commutes with the Hamiltonian.
When $N=2$ the quantum clock model is identical to the transverse-field Ising model. When $N=3$ the quantum clock model is equivalent to the quantum three-state Potts model. When $N=4$, the model is again equivalent to the Ising model. When $N \geq 5$, strong evidence has been found that the phase transitions exhibited in these models should be certain generalizations of the Kosterlitz–Thouless transition, whose physical nature is still largely unknown.
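As a quick numerical check of these conventions, the following sketch (illustrative code, not taken from a reference) builds the $N=3$ clock matrices and verifies the algebra stated above:

```python
import numpy as np

N = 3
omega = np.exp(2j * np.pi / N)

# Clock matrices: Z is diagonal with the N-th roots of unity,
# X is the cyclic shift ("clock hand") operator.
Z = np.diag([omega**k for k in range(N)])
X = np.roll(np.eye(N), 1, axis=0)          # X |k> = |k+1 mod N>

I = np.eye(N)
assert np.allclose(np.linalg.matrix_power(Z, N), I)   # Z^N = 1
assert np.allclose(np.linalg.matrix_power(X, N), I)   # X^N = 1
assert np.allclose(Z @ X, omega * X @ Z)              # same-site relation Z X = ω X Z
# On different sites the matrices act on different tensor factors, so they commute.
```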
One-dimensional model
There are various analytical methods that can be used to study the quantum clock model specifically in one dimension.
Kramers–Wannier duality
A nonlocal mapping of clock matrices known as the Kramers–Wannier duality transformation can be done as follows:
$$\tilde{Z}_j = \prod_{k \leq j} X_k, \qquad \tilde{X}_j = Z_j^\dagger Z_{j+1}$$
Then, in terms of the newly defined clock matrices with tildes, which obey the same algebraic relations as the original clock matrices, the Hamiltonian is simply
$$H = -Jg \sum_j \left( \tilde{Z}_j^\dagger \tilde{Z}_{j+1} + \tilde{Z}_{j+1}^\dagger \tilde{Z}_j \right) - J \sum_j \left( \tilde{X}_j + \tilde{X}_j^\dagger \right).$$
This indicates that the model with coupling parameter $g$ is dual to the model with coupling parameter $1/g$, and establishes a duality between the ordered phase and the disordered phase.
Note that there are some subtle considerations at the boundaries of the one dimensional chain; as a result of these, the degeneracy and symmetry properties of phases are changed under the Kramers–Wannier duality. A more careful analysis involves coupling the theory to a gauge field; fixing the gauge reproduces the results of the Kramers Wannier transformation.
Phase transition
For $N \leq 4$, there is a unique phase transition from the ordered phase to the disordered phase at $g=1$. The model is said to be "self-dual" because the Kramers–Wannier transformation maps the Hamiltonian to itself. For $N \geq 5$, there are two phase transition points at $g_1 < 1$ and $g_2 > 1$. Strong evidence has been found that these phase transitions should be a class of generalizations of the Kosterlitz–Thouless transition. The KT transition predicts that the free energy has an essential singularity of the form $e^{-c/\sqrt{|g-g_c|}}$, while perturbative studies found that the essential singularity behaves as $e^{-c/|g-g_c|^{\sigma}}$ with an exponent $\sigma$ that varies with $N$. The physical pictures of these phase transitions are still not clear.
Jordan–Wigner transformation
Another nonlocal mapping known as the Jordan Wigner transformation can be used to express the theory in terms of parafermions.
References
Mathematical modeling
Quantum lattice models | Quantum clock model | [
"Physics",
"Mathematics"
] | 618 | [
"Applied mathematics",
"Mathematical modeling",
"Quantum mechanics",
"Quantum lattice models"
] |
68,748,543 | https://en.wikipedia.org/wiki/Aeralis%20Advanced%20Jet%20Trainer | The Aeralis Advanced Jet Trainer (ADJ) is an advanced jet trainer aircraft designed by Aeralis in the United Kingdom. It is the initial variant of a family of modular aircraft which are reconfigurable to cover a variety of roles, including operational training, basic jet training, aerobatics/display and light combat.
Work on the ADJ began during the early 2010s; the project was publicly announced in June 2015 under the initial name of Dart. Funding was sought from various sources, both within Britain and internationally; in February 2021, the Rapid Capabilities Office of the Royal Air Force (RAF) awarded a three-year contract for the further development of the aircraft; the service is reviewing the aircraft for various purposes, including the Future Combat Air System (FCAS) initiative and as a potential replacement for its aging BAE Systems Hawk aircraft. Aeralis had partnered with various organisations to develop the ADJ, including the engineering consultancy company Atkins, the multinational propulsion specialist International Turbine Engine Company (ITEC), and the German conglomerate Siemens.
During October 2022, wind tunnel testing was performed by Airbus UK. Aeralis plan to carry out a first flight of the advanced jet trainer variant in 2024.
Development
The Advanced Jet Trainer (AJT) project originated in the work of Tristan Crawford during the early 2010s. Crawford sought to develop a capable new trainer aircraft that would be suited to a variety of purposes via the use of modular sections. It has been claimed that the modular approach would achieve a 30 percent reduction in both acquisition and maintenance costs in comparison to traditional flight training counterparts. Early market research was collected from Royal Air Force (RAF) pilots, Fielding Aerospace Consultants, and the British government via UK Trade & Investment; additional expertise was intentionally sought outside of the conventional players in the British aerospace sector, such as Formula One suppliers.
During June 2015, the existence of the project was revealed to the public, at which point it was referred to as the Dart Jet. In 2018, Aeralis sought £1 million ($1.32 million) via crowdfunding to fund the design of a concept fuselage demonstrator to be presented at trade shows. During September 2019, it was announced that Aeralis had partnered with engineering and design consultancy firm Atkins to work on two out of three planned variants of the aircraft: the advanced jet trainer and the basic jet trainer.
During February 2021, Aeralis was awarded a three-year contract with the RAF's Rapid Capabilities Office for the further development of the aircraft. Additional external funding was also actively being sought to accelerate the program; according to a spokesperson, the company aimed to reach the preproduction stage prior to the middle of the decade. In March 2021, the company signed a teaming agreement with Thales UK for the latter to support development of training and simulation systems.
In September 2021, Aeralis showcased a number of potential future variants of the type, including an uncrewed combat model and an uncrewed refuelling aircraft. The company also announced that it had received a £10.5 million cash injection from an unnamed Middle Eastern nation, later revealed to be the Qatar-based Barzan Holdings, which it stated was an indication that the AJT was gaining international interest. During that same month, Aeralis also signed a Memorandum of Understanding (MoU) with Rolls-Royce to supply engines for the aircraft; under this agreement, the preproduction aircraft will be powered by Rolls-Royce powerplants. Atkins and Siemens also agreed to collaborate with Aeralis on Aerside, the aircraft's digital system. Also in September 2021, Aeralis stated that it was scheduled to perform the first flight of the ADJ sometime during 2024.
During March 2022, a pair of full-scale mock-ups of the aircraft were presented at the DIMDEX conference in Qatar; these mock-ups were unveiled by the Emir of Qatar, Sheikh Tamim bin Hamad Al Thani, in a ceremony attended by representatives of Barzan Holdings and senior figures of the British and Qatari governments; international delegations from India, South Korea and Indonesia were also in attendance. Two months later, Aeralis received another significant investment from the RAF, initiating Phase 2 of the programme which evaluated the potential of PYRAMID, the UK Ministry of Defence's (MOD) open mission architecture. Aeralis also engaged with the MOD and its procurement arm, Defence Equipment and Support (DE&S) to explore the potential of Aeralis within the framework of the RAF's Future Combat Air System (FCAS) initiative. During July 2022, the company signed a MoU with Ascent Flight Training to develop a future flying training system and explore collaboration opportunities in the provision of military flying training.
During October 2022, wind tunnel testing of a scale model of the AJT was performed by Airbus UK at Filton. Two months later, Aeralis was awarded a £9 million (US$11 million) contract from the MOD to provide digital engineering services. During June 2023, Aeralis signed a MoU with International Turbine Engine Company (ITEC), a joint venture between Honeywell and Aerospace Industrial Development Corporation (AIDC), to develop powerplant solutions for the AJT; the agreement also covers collaboration on the designing of electrical and thermal management systems.
Design
The Aeralis Advanced Jet Trainer (AJT) is the initial variant of a family of light jet aircraft which share approximately 85% of their components, including avionics, digital systems and core fuselage. The rest of the aircraft, including engine pods, wings and tail, can be interchanged to fulfil different roles. According to Aeralis, this system of modularity and fleet rationalisation is intended to deliver lower costs and increased flexibility to its end-user. The roles deliverable by the Aeralis system include advanced jet trainer, basic jet trainer, operational trainer, aerobatics/display and light combat. In a basic trainer configuration, it is to be fitted with straight wings and straight tailplanes, possess a maximum take-off weight (MTOW) of around 7,700 pounds, and be capable of a maximum speed of 350 knots; in an advanced configuration, the aircraft is fitted with swept wings and tailplanes, and has an MTOW of roughly 11,000 pounds and a top speed of Mach 0.90.
It is intended for a range of powerplants to be available for the AJT, delivering different thrust outputs and other performance criteria to suit the diverse mission roles of the operator. Aeralis has formed agreements with multiple engine manufacturers, such as Rolls-Royce and the International Turbine Engine Company (ITEC), to provide propulsion and other systems for the AJT.
References
External links
Aeralis website
PYRAMID Programme
Proposed aircraft of the United Kingdom
Proposed military aircraft | Aeralis Advanced Jet Trainer | [
"Engineering"
] | 1,386 | [
"Proposed military aircraft",
"Military projects"
] |
68,756,978 | https://en.wikipedia.org/wiki/Virosphere | Virosphere (virus diversity, virus world, global virosphere) was coined to refer to all those places in which viruses are found or which are affected by viruses. However, more recently virosphere has also been used to refer to the pool of viruses that occurs in all hosts and all environments, as well as viruses associated with specific types of hosts (prokaryotic virosphere, archaeal virosphere, Invertebrate virosphere), type of genome (RNA virosphere, dsDNA virosphere) or ecological niche (marine virosphere).
Viral genome diversity
The scope of viral genome diversity is enormous compared to cellular life. All cellular life forms, including all known organisms, have double-stranded DNA genomes, whereas viruses have one of at least 7 different types of genetic material, namely dsDNA, ssDNA, dsRNA, ssRNA+, ssRNA-, ssRNA-RT and dsDNA-RT. Each type of genome has its specific manner of mRNA synthesis, and the Baltimore classification is a system providing an overview of these mechanisms for each type of genome. Moreover, in contrast to cellular organisms, viruses do not have universally conserved sequences in their genomes by which they can be compared.
Viral genome size varies approximately 1000-fold. The smallest viruses may have a genome of only 1–2 kb coding for 1 or 2 genes, which is enough for them to evolve successfully and travel through space and time by infecting and replicating (making copies of themselves) in their hosts. The two most basic viral genes are the replicase gene and the capsid protein gene; once a virus has these, it represents a biological entity able to evolve and reproduce in cellular life forms. Some viruses may have only a replicase gene and use the capsid gene of another virus, e.g. an endogenous virus. Most viral genomes are 10-100 kb, and bacteriophages tend to have larger genomes carrying parts of the genome translation machinery genes from their host. In contrast, RNA viruses have smaller genomes, with a maximum of about 35 kb in coronaviruses. RNA genomes have a higher mutation rate, which is why they have to be small enough not to harbour too many mutations that would disrupt the essential genes or their parts. The function of the vast majority of viral genes remains unknown and approaches to study them have yet to be developed. The total number of viral genes is much higher than the total number of genes of the three domains of life all together, which practically means viruses encode most of the genetic diversity on the planet.
Viral host diversity
Viruses are cosmopolites; they are able to infect every cell and every organism on planet Earth. However, different viruses infect different hosts. Viruses are host-specific, as they need to replicate (reproduce) within a host cell. In order to enter the cell, the viral particle needs to interact with a receptor on the surface of its host cell. For the process of replication many viruses use their own replicases, but for protein synthesis they are dependent on their host cell's protein synthesis machinery. Thus, host specificity is a limiting factor for viral reproduction.
Some viruses have an extremely narrow host range and are able to infect only a single strain of one bacterial species, whereas others are able to infect hundreds or even thousands of different hosts. For example, cucumber mosaic virus (CMV) can use more than 1000 different plant species as hosts. Members of viral families like Rhabdoviridae infect hosts from different kingdoms, e.g. plants and vertebrates, and members of the genera Psimunavirus and Myohalovirus infect hosts from different domains of life, e.g. bacteria and archaea.
Viral capsid diversity
The capsid is the outer protective shell or scaffold of a viral genome. A capsid enclosing the viral nucleic acid makes up the viral particle, or virion. The capsid is made of protein and sometimes has a lipid layer acquired from the host cell while exiting it. Capsid proteins are highly symmetrical and assemble on their own within a host cell, because the assembled capsid is a more thermodynamically favourable state than separate, randomly floating proteins. Most viral capsids have icosahedral or helical symmetry, whereas bacteriophages have a complex structure consisting of an icosahedral head and a helical tail, including a baseplate and fibers important for host cell recognition and penetration. Viruses of archaea that infect hosts living in extreme environments, such as boiling water or highly saline or acidic environments, have totally different capsid shapes and structures. The variety of capsid structures of archaeal viruses includes the lemon-shaped viruses of the family Bicaudaviridae and the genus Salterprovirus, the spindle-shaped Fuselloviridae, the bottle-shaped Ampullaviridae, and the egg-shaped Guttaviridae.
The capsid size of a virus differs dramatically depending on its genome size and capsid type. Icosahedral capsids are measured by diameter, whereas helical and complex capsids are measured by length and diameter. Viruses differ in capsid size over a spectrum from 10 to more than 1000 nm. The smallest viruses are ssDNA viruses like Parvovirus, with icosahedral capsids approximately 14 nm in diameter, whereas the biggest currently known viruses are Pithovirus, Mamavirus and Pandoravirus. Pithovirus is a flask-shaped virus 1500 nm long and 500 nm in diameter, Pandoravirus is an oval-shaped virus 1000 nm (1 micron) long, and Mamavirus is an icosahedral virus reaching approximately 500 nm in diameter. How capsid size depends on the size of the viral genome can be shown by comparing icosahedral viruses: the smallest are 15-30 nm in diameter and have genomes in the range of 5 to 15 kb (kilobases or kilobase pairs, depending on the type of genome), while the biggest are near 500 nm in diameter and their genomes are also the largest, exceeding 1 Mb (a million base pairs).
Viral evolution
Viral evolution, or the evolution of viruses, presumably started at the beginning of the second age of the RNA world, when different types of viral genomes arose through the transition from RNA via reverse transcription (RT) to DNA, which also emphasises that viruses played a critical role in the emergence of DNA and predate LUCA. The abundance and variety of viral genes also imply that their origin predates LUCA. As viruses do not share unifying common genes, they are considered to be polyphyletic, or having multiple origins, as opposed to the one common origin that all cellular life forms have. Virus evolution is also more complex, as it is highly prone to horizontal gene transfer, genetic recombination and reassortment. Moreover, viral evolution should always be considered as a process of co-evolution with the host, as a host cell is essential for virus reproduction and hence evolution.
Viral abundance
Viruses are the most abundant biological entities; there are an estimated 10^31 viruses on our planet. Viruses are capable of infecting all organisms on Earth and are able to survive in much harsher environments than any cellular life form. As viruses cannot be included in the tree of life, there is no separate structure illustrating viral diversity and evolutionary relationships. However, viral ubiquity can be imagined as a virosphere covering the whole tree of life.
Nowadays we are entering a phase of exponential viral discovery. Genome sequencing technologies, including high-throughput methods, allow fast and cheap sequencing of environmental samples. The vast majority of sequences from any environment, both from wild nature and from human-made reservoirs, are new. This practically means that during over 100 years of virus research, from the discovery of bacteriophages (the viruses of bacteria) in 1917 until the present time, we have only scratched the surface of a great viral diversity. The classic methods like viral culture used previously allowed researchers to observe physical virions or viral particles using an electron microscope and to gather information about their physical and molecular properties. The new methods deal only with the genetic information of viruses.
See also
virus
virology
Virome
viral evolution
virus classification
list of virus families
list of virus genera
list of virus species
References
External links
Welcome to the Virosphere
Virolution
A Planet of Viruses, Carl Zimmer
Pathogen genomics
Viruses | Virosphere | [
"Biology"
] | 1,677 | [
"Viruses",
"Tree of life (biology)",
"Molecular genetics",
"DNA sequencing",
"Microorganisms",
"Pathogen genomics"
] |
68,760,591 | https://en.wikipedia.org/wiki/Time%20in%20Togo | Time in Togo is given by Greenwich Mean Time (GMT; UTC+00:00). Togo has never observed daylight saving time and adopted this time zone in 1907.
IANA time zone database
In the IANA time zone database, Togo is given one zone in the file zone.tab – Africa/Lome. "TG" refers to the country's ISO 3166-1 alpha-2 country code. Data for Togo directly from zone.tab of the IANA time zone database; columns marked with * are the columns from zone.tab itself:
References
External links
Current time in Togo at Time.is
Time in Togo at TimeAndDate.com
Time by country
Geography of Togo
Time in Africa | Time in Togo | [
"Physics"
] | 146 | [
"Spacetime",
"Physical quantities",
"Time",
"Time by country"
] |
54,532,965 | https://en.wikipedia.org/wiki/Landscape%20%28magazine%29 | Landscape was a magazine of human geography founded by J.B. Jackson in 1951 and published three times a year in Berkeley, California until 1999.
The magazine's original subtitle was "Human Geography of the Southwest"; this was later dropped.
The first five issues consisted largely of Jackson's own essays. Jackson was the magazine's publisher and editor until 1968. Publication was suspended from 1971–1974.
Its ISSN was 0023-8023.
Notes
Human geography
Urban planning
Landscape architecture
Magazines established in 1951
Defunct magazines published in the United States
Magazines published in California
Magazines disestablished in 1999
Triannual magazines published in the United States
Architecture magazines
Mass media in Berkeley, California | Landscape (magazine) | [
"Engineering",
"Environmental_science"
] | 140 | [
"Landscape architecture",
"Environmental social science stubs",
"Urban planning",
"Environmental social science",
"Human geography",
"Architecture"
] |
54,533,486 | https://en.wikipedia.org/wiki/Differential%20testing | Differential testing, also known as differential fuzzing, is a software testing technique that detects bugs by providing the same input to a series of similar applications (or to different implementations of the same application) and observing differences in their execution. Differential testing complements traditional software testing because it is well-suited to find semantic or logic bugs that do not exhibit explicit erroneous behaviors like crashes or assertion failures. Differential testing is also called back-to-back testing.
Differential testing finds semantic bugs by using different implementations of the same functionality as cross-referencing oracles, pinpointing differences in their outputs over the same input: any discrepancy between the program behaviors on the same input is marked as a potential bug.
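A minimal sketch of the idea in Python, using two implementations of the same functionality as cross-referencing oracles; the functions, inputs and the deliberately planted bug are invented for illustration:

```python
import random
import string

def lower_reference(s: str) -> str:
    return s.lower()                      # reference implementation

def lower_buggy(s: str) -> str:
    # "implementation under test": only handles ASCII letters correctly
    return "".join(chr(ord(c) + 32) if "A" <= c <= "Z" else c for c in s)

def random_input(max_len=8):
    alphabet = string.ascii_letters + "ÄÖÜ"
    return "".join(random.choice(alphabet) for _ in range(random.randint(0, max_len)))

def differential_test(iterations=10_000):
    for _ in range(iterations):
        s = random_input()
        a, b = lower_reference(s), lower_buggy(s)
        if a != b:                        # any discrepancy is a potential bug
            return s, a, b
    return None

print(differential_test())   # e.g. ('Ä', 'ä', 'Ä') - the buggy version misses non-ASCII letters
```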
Application domains
Differential testing has been used to find semantic bugs successfully in diverse domains like SSL/TLS implementations, C compilers, JVM implementations, Web application firewalls, security policies for APIs, antivirus software, and file systems. Differential testing has also been used for automated fingerprint generation from different network protocol implementations.
Input generation
Unguided
Unguided differential testing tools generate test inputs independently across iterations without considering the test program’s behavior on past inputs. Such an input generation process does not use any information from past inputs and essentially creates new inputs at random from a prohibitively large input space. This can make the testing process highly inefficient, since large numbers of inputs need to be generated to find a single bug.
An example of a differential testing system that performs unguided input generation is "Frankencerts". This work synthesizes Frankencerts by randomly combining parts of real certificates. It uses syntactically valid certificates to test for semantic violations of SSL/TLS certificate validation across multiple implementations. However, since the creation and selection of Frankencerts are completely unguided, it is significantly inefficient compared to the guided tools.
Guided
Guided input generation process aims to minimize the number of inputs needed to find each bug by taking program behavior information for past inputs into account.
Domain-specific evolutionary guidance
An example of a differential testing system that performs domain-specific coverage-guided input generation is Mucerts. Mucerts relies on the knowledge of the partial grammar of the X.509 certificate format and uses a stochastic sampling algorithm to drive its input generation while tracking the program coverage.
Another line of research builds on the observation that the problem of new input generation from existing inputs can be modeled as a stochastic process. An example of a differential testing tool that uses such a stochastic process modeling for input generation is Chen et al.’s tool. It performs differential testing of Java virtual machines (JVM) using Markov chain Monte Carlo (MCMC) sampling for input generation. It uses custom domain-specific mutations by leveraging detailed knowledge of the Java class file format.
Domain-independent evolutionary guidance
NEZHA is an example of a differential testing tool that has a path selection mechanism geared towards domain-independent differential testing. It uses specific metrics (dubbed as delta-diversity) that summarize and quantify the observed asymmetries between the behaviors of multiple test applications. Such metrics that promote the relative diversity of observed program behavior have shown to be effective in applying differential testing in a domain-independent and black-box manner.
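One way to picture such guidance is the following toy sketch (a simplification, not NEZHA's actual algorithm): summarize each program's observable behavior on an input coarsely, keep an input for further mutation only if the tuple of behaviors across all programs is new, and report inputs on which that tuple shows disagreement:

```python
import random

def behavior(program, data):
    """Coarse black-box behavior summary: the return value, or the exception type."""
    try:
        return ("ok", program(data))
    except Exception as exc:
        return ("err", type(exc).__name__)

def delta_guided_fuzz(programs, seeds, mutate, rounds=2000):
    """Keep an input for further mutation only when the tuple of per-program behaviors
    has not been seen before (a crude stand-in for delta-diversity), and report inputs
    on which the programs disagree."""
    corpus, seen, discrepancies = list(seeds), set(), []
    for _ in range(rounds):
        data = mutate(random.choice(corpus))
        tup = tuple(behavior(p, data) for p in programs)
        if tup not in seen:
            seen.add(tup)
            corpus.append(data)
            if len(set(tup)) > 1:          # the programs disagree on this input
                discrepancies.append(data)
    return discrepancies

# toy targets: two integer "parsers" that differ on strings such as "7."
parsers = [lambda s: int(s), lambda s: int(float(s))]
mutate = lambda s: s + random.choice(" .0123456789")
print(delta_guided_fuzz(parsers, seeds=["1"], rounds=500))   # e.g. ['1.', '1.2']
```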
Automata-learning-based guidance
For applications, such as cross-site scripting (XSS) filters and X.509 certificate hostname verification, which can be modeled accurately with finite-state automata (FSA), counter-example-driven FSA learning techniques can be used to generate inputs that are more likely to find bugs.
Symbolic-execution-based guidance
Symbolic execution is a white-box technique that executes a program symbolically, computes constraints along different paths, and uses a constraint solver to generate inputs that satisfy the collected constraints along each path. Symbolic execution can also be used to generate input for differential testing.
The inherent limitation of symbolic-execution-assisted testing tools—path explosion and scalability—is magnified especially in the context of differential testing where multiple test programs are used. Therefore, it is very hard to scale symbolic execution techniques to perform differential testing of multiple large programs.
See also
Software testing
Software diversity
References
Software testing | Differential testing | [
"Engineering"
] | 883 | [
"Software engineering",
"Software testing"
] |