id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
68,065,196 | https://en.wikipedia.org/wiki/Percolation%20surface%20critical%20behavior | Percolation surface critical behavior concerns the influence of surfaces on the critical behavior of percolation.
Background
Percolation is the study of connectivity in random systems, such as electrical conductivity in random conductor/insulator systems, fluid flow in porous media, gelation in polymer systems, etc. At a critical fraction of connectivity or porosity, long-range connectivity can take place, leading to long-range flow. The point where that connectivity takes place is called the percolation threshold, and a considerable amount of work has been undertaken in finding those critical values for systems of various geometries, and in characterizing the mathematical behavior of observables near that point. This leads to the study of critical behavior and the percolation critical exponents. These exponents allow one to describe the behavior as the threshold is approached.
The behavior of the percolating network near a surface differs from that in the main part of a system, called the "bulk." For example, exactly at the percolation threshold, the percolating network in the system is a fractal with large voids and a ramified structure. The surface interrupts this structure, so the percolating cluster is less likely to come in contact with the surface. As an example, consider bond percolation on a lattice (percolation along the bonds or edges of the lattice). If the lattice is cubic and p is the probability that a bond is occupied (conducting), then the bulk percolation threshold is known to be pc ≈ 0.2488. At the surface, the lattice reduces to a simple square lattice, whose bond threshold is simply 1/2. Therefore, when the bulk of the system is at its threshold, the surface is far below its own threshold, and the only way to have long-range connections along the surface is via a path that goes from the surface into the bulk, conducts through the fractal percolation network, and then returns to the surface again. This produces a critical behavior different from that in the bulk, and different from the critical behavior of a two-dimensional surface at its own threshold.
In the most common model for surface critical behavior in percolation, all bonds (or sites) are assigned the same occupation probability p, and the behavior is studied at the bulk threshold pc — the value quoted in this case, 0.311608, being the site-percolation threshold of the simple cubic lattice. In another model for surface behavior, the surface bonds are occupied with a different probability ps, while the bulk is kept at the normal bulk value. When ps is increased to a sufficiently high value, a new "special" critical point is reached, which has a different set of critical exponents.
Surface phase transitions
In percolation, we can choose to occupy the sites or bonds at the surface with a probability ps different from the bulk probability p. Different surface phase transitions can then occur depending on the values of the bulk occupation probability p and the surface occupation probability ps. The simplest case is the ordinary transition, which occurs when p is at the critical probability for the bulk phase transition. Here both the bulk and the surface start percolating, regardless of the value of ps, since there will typically be a path connecting the surface boundaries through the percolating bulk. Then there is the surface transition, where the bulk probability is below the bulk threshold, but the surface probability is at the percolation threshold for percolation in one lower dimension (i.e. the dimension of the surface). Here the surface undergoes a percolation transition while the bulk remains disconnected. If we enter this region of the phase diagram, where the surface is ordered while the bulk is disordered, and then increase the bulk probability, we eventually encounter the extraordinary transition, where the bulk undergoes a percolation transition with the surface already percolating. Finally, there is the special phase transition, an isolated point where the phase boundaries of the ordinary, surface, and extraordinary transitions meet.
In general the different surface transitions will be in distinct surface universality classes, with different critical exponents. Given an exponent, say β, we label the relevant surface exponent at the ordinary, surface, extraordinary, and special transitions by β^ord, β^surf, β^ex, and β^sp respectively.
Surface percolation thresholds
Surface critical exponents
The probability that a surface site is connected to the infinite (percolating) cluster, for an infinite system and p > pc, is given by Psurf ~ (p − pc)^(βsurf),
where the surface exponent βsurf differs from the bulk exponent β for the order parameter.
As a function of the time t in an epidemic process (or of the chemical distance), we have at p = pc a power law Psurf(t) ~ t^(−δsurf),
where the surface exponent δsurf differs from the bulk dynamical exponent δ.
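The following is a minimal Monte Carlo sketch (not from the article; it assumes numpy and scipy, and uses site percolation, whose simple-cubic bulk threshold pc ≈ 0.311608 is the value quoted above; the function name and parameters are illustrative). It estimates the fraction of surface sites attached to a cluster spanning the lattice; fitting such estimates to (p − pc)^(βsurf) for p slightly above pc is how surface exponents are measured in practice. Finite-size effects are substantial, so the numbers are purely illustrative.

```python
import numpy as np
from scipy.ndimage import label  # cluster finding for site percolation

def surface_connection_probability(L, p, trials=10, seed=None):
    """Fraction of sites on the z=0 face belonging to a cluster that
    spans the lattice in the z direction, averaged over random samples."""
    rng = np.random.default_rng(seed)
    fractions = []
    for _ in range(trials):
        occupied = rng.random((L, L, L)) < p
        labels, _ = label(occupied)          # default: 6-connected clusters
        top, bottom = labels[:, :, 0], labels[:, :, -1]
        spanning = np.intersect1d(top, bottom)
        spanning = spanning[spanning > 0]    # label 0 marks empty sites
        fractions.append(np.isin(top, spanning).mean())
    return float(np.mean(fractions))

for p in (0.32, 0.34, 0.36):                 # slightly above pc ~ 0.3116
    print(p, surface_connection_probability(L=40, p=p, seed=1))
```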
Scaling relations
The critical exponents satisfy the following scaling relations:
(Deng and Blöte)
See also
Percolation
Percolation theory
Percolation critical exponents
References
Percolation theory
Critical phenomena | Percolation surface critical behavior | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 971 | [
"Physical phenomena",
"Phase transitions",
"Critical phenomena",
"Percolation theory",
"Combinatorics",
"Condensed matter physics",
"Statistical mechanics",
"Dynamical systems"
] |
68,066,476 | https://en.wikipedia.org/wiki/Carbonate%20oxalate | The carbonate oxalates are mixed anion compounds that contain both carbonate (CO3) and oxalate (C2O4) anions. Most compounds incorporate large trivalent metal ions, such as the rare earth elements. Some carbonate oxalate compounds of variable composition are formed by heating oxalates.
Formation
One method to form carbonate oxalates is to heat a metal salt with ascorbic acid, which decomposes to oxalate and carbonate and combines with the metal.
Reactions
When heated, carbonate oxalates decompose, releasing carbon monoxide and leaving carbonates, which form oxides at higher temperatures.
List
References
Carbonates
Mixed anion compounds
Oxalates | Carbonate oxalate | [
"Physics",
"Chemistry"
] | 141 | [
"Ions",
"Matter",
"Mixed anion compounds"
] |
68,067,905 | https://en.wikipedia.org/wiki/Oxalate%20phosphate | The oxalate phosphates are chemical compounds containing oxalate and phosphate anions. They are also called oxalatophosphates or phosphate oxalates. Some oxalate-phosphate minerals found in bat guano deposits are known. Oxalate phosphates can form metal organic framework compounds.
Related compounds include the arsenate oxalates, phosphite oxalates, oxalatomethylphosphonates, and potentially other oxalate phosphonates.
List
References
Oxalates
Phosphates
Mixed anion compounds | Oxalate phosphate | [
"Physics",
"Chemistry"
] | 116 | [
"Matter",
"Mixed anion compounds",
"Salts",
"Phosphates",
"Ions"
] |
63,770,785 | https://en.wikipedia.org/wiki/Construction%20robots | Construction robots are a subset of industrial robots used for building and infrastructure construction at site. Despite being traditionally slow to adopt new technologies, 55% of construction companies in the United States, Europe, and China now say they use robots on job sites. Most of the robots working on jobsites today are designed to remove strains on humans, e.g., excavating and lifting heavy objects. Robots that survey and layout markers, tie rebar, and install drywall are also now on the market.
Other robots are being developed to perform tasks such as finishing exteriors, placing steel, constructing masonry walls, reinforcing concrete, etc. The main challenge to using robots on site is the limited workspace.
Features
General features include:
It must be able to move.
It must be able to handle components of variable size and weight.
It must be able to adjust to a changing environment.
It must be able to interact with its surroundings.
It must be able to perform multiple tasks.
Capabilities
Construction robots have been tested to carry out the following:
Building walls
Monitoring construction progress
Inspecting infrastructure, mainly at dangerous locations
Notable construction by robots
The 30-story Rail City Building in Yokohama, Japan, was constructed by an automated system.
A concrete floor finishing robot was used by the Kajima and Tokimec companies in Japan.
Obayashi Corporation in Japan has developed and used a system to lay concrete layers in dam construction.
Social impact
Use of construction robots in the USA is rare, mainly due to opposition from labour unions. In Japan, however, these robots are viewed positively.
See also
Industrial robots
References
Robotics | Construction robots | [
"Engineering"
] | 330 | [
"Robotics",
"Automation"
] |
63,780,293 | https://en.wikipedia.org/wiki/Tire-derived%20aggregate | Tire-derived aggregate (TDA) is a building material made of recycled tires, which are shredded into pieces of varying sizes. It is commonly used in construction projects because it is sustainable and lightweight, along with being less expensive than many competing available materials. In 2007, an estimated 561.6 thousand tons (about 509 metric tons) of TDA were produced. This accounted for about 12 percent of the total recycled tire material used. Particle sizes less than 12mm are considered crumb rubber.
Applications:
stormwater management due to high permeability
road and parking-lot fill, improving weak soil and reducing frost heave in cold climates
landfill construction, where its permeability aids leachate collection and gas collection
site, slope, and landslide stabilization, due to lower hydrostatic pressure than soil
vibration mitigation due to absorption capacity
backfill for driveways, septic tanks, sidewalks, basements, etc.
soft surfaces for walking paths, playgrounds, etc.
References
Tire industry
Building materials | Tire-derived aggregate | [
"Physics",
"Engineering"
] | 201 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
72,476,496 | https://en.wikipedia.org/wiki/Hegerfeldt%27s%20theorem | Hegerfeldt's theorem is a no-go theorem that demonstrates the incompatibility of the existence of spatially localized discrete particles with the combination of the principles of quantum mechanics and special relativity. A crucial requirement is that the states of single particle have positive energy. It has been used to support the conclusion that reality must be described solely in terms of field-based formulations. However, it is possible to construct localization observables in terms of positive-operator valued measures that are compatible with the restrictions imposed by the Hegerfeldt theorem.
Specifically, Hegerfeldt's theorem refers to a free particle whose time evolution is determined by a positive Hamiltonian. If the particle is initially confined to a bounded spatial region, then the spatial region where the probability of finding the particle does not vanish expands superluminally, violating Einstein causality by exceeding the speed of light. Boundedness of the initial localization region can be weakened to a suitably fast exponential decay of the localization probability at the initial time. The localization threshold is given by twice the Compton length of the particle. In fact, the theorem rules out the Newton–Wigner localization.
The theorem was developed by Gerhard C. Hegerfeldt and first published in 1974.
See also
Wave–particle duality
Local realism
References
No-go theorems
Quantum field theory
Theory of relativity
Theorems in quantum mechanics | Hegerfeldt's theorem | [
"Physics",
"Mathematics"
] | 291 | [
"Theorems in quantum mechanics",
"Quantum field theory",
"No-go theorems",
"Equations of physics",
"Quantum mechanics",
"Theorems in mathematical physics",
"Theory of relativity",
"Physics theorems"
] |
72,477,621 | https://en.wikipedia.org/wiki/Diversity%20%28mathematics%29 | In mathematics, a diversity is a generalization of the concept of metric space. The concept was introduced in 2012 by Bryant and Tupper,
who call diversities "a form of multi-way metric". The concept finds application in nonlinear analysis.
Given a set X, let F(X) be the set of finite subsets of X.
A diversity is a pair (X, δ) consisting of a set X and a function δ : F(X) → R satisfying
(D1) δ(A) ≥ 0, with δ(A) = 0 if and only if |A| ≤ 1,
and
(D2) if B ≠ ∅ then δ(A ∪ C) ≤ δ(A ∪ B) + δ(B ∪ C).
Bryant and Tupper observe that these axioms imply monotonicity; that is, if A ⊆ B, then δ(A) ≤ δ(B). They state that the term "diversity" comes from the appearance of a special case of their definition in work on phylogenetic and ecological diversities. They give the following examples:
Diameter diversity
Let (X, d) be a metric space. Setting δ(A) = max over a, b in A of d(a, b) = diam(A) for all finite A ⊆ X defines a diversity.
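A minimal sketch (assumed, not from the source) of the diameter diversity on points in the plane, spot-checking the axioms (D1)-(D2) above, and the implied monotonicity, on random data:

```python
import itertools
import math
import random

def diam(A):
    """Diameter diversity: the maximum pairwise Euclidean distance,
    with diam(A) = 0 when |A| <= 1."""
    return max((math.dist(p, q) for p, q in itertools.combinations(A, 2)),
               default=0.0)

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(6)]
A, B, C = pts[:2], pts[2:4], pts[4:]

# (D1): non-negativity, vanishing exactly on empty or singleton sets.
assert diam([]) == 0.0 and diam(A[:1]) == 0.0 and diam(A) > 0.0
# (D2): for nonempty B, delta(A u C) <= delta(A u B) + delta(B u C).
assert diam(A + C) <= diam(A + B) + diam(B + C) + 1e-12
# Monotonicity (implied by the axioms): delta(A) <= delta(A u B).
assert diam(A) <= diam(A + B)
```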
L1 diversity
For X = R^n and all finite A ⊆ X, if we define δ(A) as the sum over coordinates i of the maximum over a, b in A of |a_i − b_i|, then (X, δ) is a diversity.
Phylogenetic diversity
Let T be a phylogenetic tree with taxon set X. For each finite A ⊆ X, define δ(A)
as the length of the smallest subtree of T connecting the taxa in A. Then (X, δ) is a (phylogenetic) diversity.
Steiner diversity
Let (X, d) be a metric space. For each finite A ⊆ X, let δ(A) denote
the minimum length of a Steiner tree within X connecting the elements of A. Then (X, δ) is a
diversity.
Truncated diversity
Let (X, δ) be a diversity. For all finite A ⊆ X define
δ^(k)(A) = max{δ(B) : B ⊆ A, |B| ≤ k}. Then, if k ≥ 2, (X, δ^(k)) is a diversity.
Clique diversity
If G = (X, E) is a graph, and δ(A) is defined for any finite A as the size of the largest clique of A, then (X, δ) is a diversity.
References
Metric spaces | Diversity (mathematics) | [
"Mathematics"
] | 303 | [
"Mathematical structures",
"Space (mathematics)",
"Metric spaces"
] |
72,480,205 | https://en.wikipedia.org/wiki/J%C3%BCrgen%20Kirschner | Jürgen Kirschner (born April 18, 1945) is a German solid state physicist and a director at the Max Planck Institute of Microstructure Physics. Kirschner is known for his research in electron spectroscopy, including instrument development and the study of magnetic materials.
Education and career
Kirschner was born in Arendsee (Altmark) and studied physics at the Technical University of Munich, where he obtained his PhD in 1974. He then held a research position at Forschungszentrum Jülich and obtained his habilitation at RWTH Aachen University in 1982. From 1988 to 1991, he was a professor of experimental physics at the Free University of Berlin. From 1992 to 2015, he was at the Max Planck Institute of Microstructure Physics in Halle, and from 1993 he was also a professor at the University of Halle. He retired in 2015.
Honors and awards
Kirschner has been a member of the German National Academy of Sciences Leopoldina since 2002. He has led several programs of the German Research Foundation.
Bibliography
References
1945 births
People from Altmarkkreis Salzwedel
Technical University of Munich alumni
Academic staff of RWTH Aachen University
Academic staff of the Free University of Berlin
Academic staff of the University of Halle
Max Planck Institute directors
Max Planck Society people
German experimental physicists
Condensed matter physicists
German materials scientists
Members of the German National Academy of Sciences Leopoldina
Living people | Jürgen Kirschner | [
"Physics",
"Materials_science"
] | 282 | [
"Condensed matter physicists",
"Condensed matter physics"
] |
72,482,500 | https://en.wikipedia.org/wiki/Subdivision%20%28simplicial%20complex%29 | A subdivision (also called refinement) of a simplicial complex is another simplicial complex in which, intuitively, one or more simplices of the original complex have been partitioned into smaller simplices. The most commonly used subdivision is the barycentric subdivision, but the term is more general. The subdivision is defined in slightly different ways in different contexts.
In geometric simplicial complexes
Let K be a geometric simplicial complex (GSC). A subdivision of K is a GSC L such that:
|K| = |L|, that is, the union of simplices in K equals the union of simplices in L (they cover the same region in space).
each simplex of L is contained in some simplex of K.
As an example, let K be a GSC containing a single triangle {A,B,C} (with all its faces and vertices). Let D be a point on the face AB. Let L be the complex containing the two triangles {A,D,C} and {B,D,C} (with all their faces and vertices). Then L is a subdivision of K, since the two triangles {A,D,C} and {B,D,C} are both contained in {A,B,C}, and similarly the faces {A,D}, {D,B} are contained in the face {A,B}, and the face {D,C} is contained in {A,B,C}.
Subdivision by starring
One way to obtain a subdivision of K is to pick an arbitrary point x in |K|, remove each simplex s in K that contains x, and replace it with the closure of the set of simplices of the form x * t, where t is a face of s not containing x and x * t is the join of the point x and the face t. This process is called starring at x.
A stellar subdivision is a subdivision obtained by sequentially starring at different points.
A derived subdivision is a subdivision obtained by the following inductive process.
Star each 1-dimensional simplex (a segment) at some internal point;
Star each 2-dimensional simplex at some internal point, over the subdivision of the 1-dimensional simplices;
... Star each k-dimensional simplex at some internal point, over the subdivision of the (k-1)-dimensional simplices.
The barycentric subdivision is a derived subdivision where the points used for starring are always barycenters of simplices. For example, if D, E, F, G are the barycenters of {A,B}, {A,C}, {B,C}, {A,B,C} respectively, then the first barycentric subdivision of {A,B,C} is the closure of {A,D,G}, {B,D,G}, {A,E,G}, {C,E,G}, {B,F,G}, {C,F,G}.
Iterated subdivisions can be used to attain arbitrarily fine triangulations of a given polyhedron.
In abstract simplicial complexes
Let K be an abstract simplicial complex (ASC). The face poset of K is a poset made of all nonempty simplices of K, ordered by inclusion (which is a partial order). For example, the face-poset of the closure of {A,B,C} is the poset with the following chains:
{A} < {A,B} < {A,B,C}
{A} < {A,C} < {A,B,C}
{B} < {A,B} < {A,B,C}
{B} < {B,C} < {A,B,C}
{C} < {A,C} < {A,B,C}
{C} < {B,C} < {A,B,C}
The order complex of a poset P is an ASC whose vertices are the elements of P and whose simplices are the chains of P.
The first barycentric subdivision of an ASC K is the order complex of its face poset. The order complex of the above poset is the closure of the following simplices:
{ {A} , {A,B} , {A,B,C} }
{ {A} , {A,C} , {A,B,C} }
{ {B} , {A,B} , {A,B,C} }
{ {B} , {B,C} , {A,B,C} }
{ {C} , {A,C} , {A,B,C} }
{ {C} , {B,C} , {A,B,C} }
Note that this ASC is isomorphic to the ASC {A,D,G}, {B,D,G}, {A,E,G}, {C,E,G}, {B,F,G}, {C,F,G}, with the assignment: A={A}, B={B}, C={C}, D={A,B}, E={A,C}, F={B,C}, G={A,B,C}.
The geometric realization of the subdivision of K is always homeomorphic to the geometric realization of K.
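As an illustration of this construction, here is a minimal sketch (not from the article; the function names are illustrative) that computes the first barycentric subdivision of an ASC, given by its maximal simplices, as the order complex of its face poset: the maximal simplices of the subdivision are the maximal chains of faces.

```python
from itertools import combinations

def faces(maximal_simplices):
    """All nonempty faces of the ASC spanned by the given maximal simplices."""
    out = set()
    for s in maximal_simplices:
        s = tuple(sorted(s))
        out.update(frozenset(c) for r in range(1, len(s) + 1)
                   for c in combinations(s, r))
    return out

def barycentric_subdivision(maximal_simplices):
    """Maximal chains of the face poset = maximal simplices of the subdivision."""
    fs = faces(maximal_simplices)

    def extend(chain):
        # Covers in the face poset add exactly one vertex to the top face.
        covers = [f for f in fs
                  if chain[-1] < f and len(f) == len(chain[-1]) + 1]
        if not covers:
            yield tuple(chain)
        for f in covers:
            yield from extend(chain + [f])

    vertices = [f for f in fs if len(f) == 1]
    return [c for v in vertices for c in extend([v])]

# The triangle {A,B,C}: yields the six 2-simplices listed in the text,
# e.g. ({A}, {A,B}, {A,B,C}).
for simplex in barycentric_subdivision([("A", "B", "C")]):
    print([sorted(f) for f in simplex])
```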
Simplicial sets
References | Subdivision (simplicial complex) | [
"Mathematics"
] | 1,170 | [
"Basic concepts in set theory",
"Families of sets",
"Simplicial sets"
] |
75,251,847 | https://en.wikipedia.org/wiki/HD%207977 | HD 7977 (also designated as TYC 4034-1077-1 or USNO-A2 1500-01356484) is a G-type main-sequence star located in the constellation of Cassiopeia, around 246.9 light-years away from Earth. HD 7977 is notable for its close flyby of the Solar System 2.8 million years ago. Its flyby may have taken it close enough to the Sun that it penetrated deep into the Oort Cloud and disturbed the population of Oort Cloud bodies and long-period comets there. Its mass is equivalent to 1.07 times the Sun's mass.
References
Cassiopeia (constellation)
7977 | HD 7977 | [
"Astronomy"
] | 147 | [
"Cassiopeia (constellation)",
"Constellations"
] |
75,251,889 | https://en.wikipedia.org/wiki/Aprocitentan | Aprocitentan, sold under the brand name Tryvio, is a medication used to treat hypertension (high blood pressure). It is developed by Idorsia. It is taken by mouth.
Aprocitentan is a receptor antagonist that targets both endothelin A and endothelin B receptors.
Aprocitentan was approved for medical use in the United States in March 2024. It is the first endothelin receptor antagonist to be approved by the US Food and Drug Administration (FDA) to treat systemic hypertension. The FDA considers it to be a first-in-class medication.
Medical uses
Aprocitentan is indicated for the treatment of hypertension in combination with other antihypertensive drugs, to lower blood pressure in adults who are not adequately controlled on other medications.
Adverse effects
Aprocitentan may cause hepatotoxicity (liver damage), edema (fluid retention), anemia (reduced hemoglobin), and decreased sperm count.
Contraindications
Data from animal reproductive toxicity studies with other endothelin receptor antagonists indicate that use is contraindicated in pregnant women.
Mechanism of action
Aprocitentan is an endothelin receptor antagonist that inhibits the protein endothelin-1 from binding to endothelin A and endothelin B receptors. Endothelin-1 mediates various adverse effects via its receptors, such as inflammation, cell proliferation, fibrosis, and vasoconstriction.
Society and culture
Economics
Aprocitentan is developed by Idorsia, which sold it to Janssen and purchased the rights back in 2023.
Legal status
Aprocitentan was approved for medical use in the United States in March 2024.
In April 2024, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Jeraygo, intended for the treatment of resistant hypertension in adults. The applicant for this medicinal product is Idorsia Pharmaceuticals Deutschland GmbH. Aprocitentan was approved for medical use in the European Union in June 2024.
References
Further reading
Endothelin receptor antagonists
Human drug metabolites
Pyrimidines
Bromoarenes
4-Bromophenyl compounds
Glycol ethers
Sulfamides | Aprocitentan | [
"Chemistry"
] | 495 | [
"Chemicals in medicine",
"Human drug metabolites"
] |
75,252,094 | https://en.wikipedia.org/wiki/Flip%20distance | In discrete mathematics and theoretical computer science, the flip distance between two triangulations of the same point set is the number of flips required to transform one triangulation into another. A flip removes an edge between two triangles in the triangulation and then adds the other diagonal in the edge's enclosing quadrilateral, forming a different triangulation of the same point set.
This problem is known to be NP-hard. However, the computational complexity of determining the flip distance between triangulations of convex polygons, a special case of this problem, is unknown. Computing the flip distance between convex polygon triangulations is also equivalent to computing rotation distance, the number of rotations required to transform one binary tree into another.
Definition
Given a family of triangulations of some geometric object, a flip is an operation that transforms one triangulation to another by removing an edge between two triangles and adding the opposite diagonal to the resulting quadrilateral. The flip distance between two triangulations is the minimum number of flips needed to transform one triangulation into another. It can also be described as the shortest path distance in a flip graph, a graph that has a vertex for each triangulation and an edge for each flip between two triangulations. Flips and flip distances can be defined in this way for several different kinds of triangulations, including triangulations of sets of points in the Euclidean plane, triangulations of polygons, and triangulations of abstract manifolds.
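For small instances, the flip distance can be computed directly as a shortest path in the flip graph. The sketch below (an illustration, not an algorithm from the literature; flip_neighbors and flip_distance are hypothetical helper names) does this by breadth-first search for triangulations of a convex n-gon with vertices 0, …, n−1, each triangulation represented as a set of diagonals:

```python
from collections import deque

def flip_neighbors(diags, n):
    """All triangulations reachable from `diags` by one flip (convex n-gon)."""
    edges = set(diags) | {frozenset((i, (i + 1) % n)) for i in range(n)}
    for d in diags:
        a, c = sorted(d)
        # Apexes of the two triangles flanking diagonal (a, c): common
        # neighbors b of a and c with both (a,b) and (b,c) present.
        apexes = [b for b in range(n) if b not in d
                  and frozenset((a, b)) in edges and frozenset((b, c)) in edges]
        inside = [b for b in apexes if a < b < c]
        outside = [b for b in apexes if not a < b < c]
        if len(inside) == 1 and len(outside) == 1:
            new = frozenset((inside[0], outside[0]))
            yield frozenset(set(diags) - {d} | {new})

def flip_distance(t1, t2, n):
    """Shortest path between two triangulations in the flip graph (BFS)."""
    start, goal = frozenset(t1), frozenset(t2)
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        t, dist = queue.popleft()
        if t == goal:
            return dist
        for nb in flip_neighbors(t, n):
            if nb not in seen:
                seen.add(nb)
                queue.append((nb, dist + 1))

# Two fan triangulations of a convex hexagon (diagonals as vertex pairs):
fan0 = {frozenset((0, i)) for i in (2, 3, 4)}   # fan from vertex 0
fan2 = {frozenset((2, i)) for i in (4, 5, 0)}   # fan from vertex 2
print(flip_distance(fan0, fan2, 6))             # prints 2
```

Since the number of triangulations grows as a Catalan number, this exhaustive search is only feasible for small n; it is meant to make the flip-graph definition concrete, not to be efficient.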
Feasibility
The flip distance is well-defined only if any triangulation can be converted to any other triangulation via a sequence of flips. An equivalent condition is that the flip graph must be connected.
In 1936, Klaus Wagner showed that any maximal planar graph on the sphere can be transformed into any other maximal planar graph with the same vertices through flipping. A. K. Dewdney generalized this result to triangulations on the surface of a torus, while Charles Lawson did the same for triangulations of a point set in the 2-dimensional plane.
For triangulations of a point set in dimension 5 or above, there exist examples where the flip graph is disconnected and a triangulation cannot be obtained from other triangulations via flips. Whether all flip graphs of finite 3- or 4-dimensional point sets are connected is an open problem.
Diameter of the flip graph
The maximum number of flips required to transform one triangulation into another is the diameter of the flip graph. The diameter of the flip graph of a convex n-gon was obtained by Daniel Sleator, Robert Tarjan, and William Thurston when n is sufficiently large and by Lionel Pournin for all n. This diameter is equal to 2n − 10 when n > 12.
The diameter of other flip graphs has been studied. For instance, Klaus Wagner provided a quadratic upper bound on the diameter of the flip graph of a set of unmarked points on the sphere. The current upper bound on the diameter is , while the best-known lower bound is . The diameter of the flip graphs of arbitrary topological surfaces with boundary has also been studied, and its exact value is known in several cases.
Equivalence with other problems
The flip distance between triangulations of a convex polygon is equivalent to the rotation distance between two binary trees.
Computational complexity
Computing the flip distance between triangulations of a point set is both NP-complete and APX-hard. However, it is fixed-parameter tractable (FPT), and several FPT algorithms that run in exponential time have been proposed.
Computing the flip distance between triangulations of a simple polygon is also NP-hard.
The complexity of computing the flip distance between triangulations of a convex polygon remains an open problem.
Algorithms
Let n be the number of points in the point set and k be the flip distance. The current best FPT algorithm runs in time exponential in k. A faster FPT algorithm exists for the flip distance between convex polygon triangulations.
If no five points of the point set form an empty pentagon, there exists a polynomial-time algorithm for the flip distance between triangulations of this point set.
See also
Associahedron
Flip graph
Rotation distance
Tamari lattice
References
Triangulation (geometry)
Reconfiguration | Flip distance | [
"Mathematics"
] | 884 | [
"Triangulation (geometry)",
"Reconfiguration",
"Planar graphs",
"Computational problems",
"Planes (geometry)",
"Mathematical problems"
] |
75,252,127 | https://en.wikipedia.org/wiki/Firibastat | Firibastat is a prodrug of two brain aminopeptidase A inhibitors, developed to treat resistant hypertension. It failed to show efficacy in a Phase III trial.
References
Prodrugs
Antihypertensive agents
Sulfonic acids
Diamines
Organic disulfides | Firibastat | [
"Chemistry"
] | 63 | [
"Chemicals in medicine",
"Functional groups",
"Prodrugs",
"Sulfonic acids"
] |
75,252,202 | https://en.wikipedia.org/wiki/Aminopeptidase%20A%20inhibitor | Aminopeptidase A inhibitors are a class of antihypertensive drugs that work by inhibiting the conversion of angiotensin II to angiotensin III by the aminopeptidase A enzyme. The first medication in this class is firibastat. It is hypothesized that the drugs may be more effective in overweight people and those of African descent.
References
Antihypertensive agents | Aminopeptidase A inhibitor | [
"Chemistry"
] | 91 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
75,253,555 | https://en.wikipedia.org/wiki/Soluble%20guanylate%20cyclase%20stimulator | Soluble guanylate cyclase (sGC) stimulators are a class of drugs developed to treat heart failure, pulmonary hypertension, and other diseases. The first-in-class medication was riociguat, approved in 2013 for pulmonary hypertension. They have also been investigated for hypertension, systemic sclerosis, and sickle cell disease.
Background
In 1998, the discovery of the role of nitric oxide (NO) in cardiovascular disease was recognized with the Nobel Prize in Physiology or Medicine. Although NO is still used to treat angina, its side effects, potential for tolerance, short duration of action, and narrow therapeutic index limit its therapeutic use. PDE5 inhibitors potentiate NO signaling and are approved for erectile dysfunction, pulmonary arterial hypertension (PAH), and benign prostatic hyperplasia, but they are less effective in patients in whom NO production is suppressed, such as people with diabetes or obesity. Soluble guanylate cyclase is one of the downstream targets of NO, but the stimulators operate independently of it. sGC activators, another experimental class of drugs, may be more effective than stimulators when oxidative stress is high.
The drugs are also considered to possibly have the potential to treat kidney disease, lung fibrosis, scleroderma, and sickle cell disease.
List of drugs
FDA approved
Riociguat, approved in 2013 for pulmonary hypertension
Vericiguat, approved in 2021 for heart failure
Investigational
Praliciguat was tried in a phase II trial for heart failure with preserved ejection fraction
Olinciguat was developed for sickle cell disease but its development was discontinued in 2020.
References
Soluble guanylate cyclase stimulators
Drugs by mechanism of action | Soluble guanylate cyclase stimulator | [
"Chemistry"
] | 361 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
75,254,721 | https://en.wikipedia.org/wiki/Bene%20Meat%20Technologies | Bene Meat Technologies a.s. (BMT) is a Czech biotechnology start-up focused on research and development of technology for the production of cultivated meat on an industrial scale. It cooperates with scientific institutions and companies in the Czech Republic and abroad. The company has its laboratories on the first floor of the Cube building in Vokovice, Prague.
History
Bene Meat Technologies a.s. was founded in 2020 by Mgr. Roman Kříž, who is the project leader. The main biologist of the scientific team is Jiří Janoušek, and one of the external scientists involved in the ongoing research is the immunologist Prof. RNDr. Jan Černý, Ph.D. In 2022, the BMT research team consisted of 70 scientists.
Research objective
Developing a technology to produce cultured meat by propagating animal cells without using fetal bovine serum, ideally with growth factors from their own production. BMT claims that their final technology will allow its operators to produce and offer the product at prices affordable to consumers.
In March 2023, the company said that the first cultured meat product launched on the market may not be for human consumption, but as pet food. However, BMT states that the creation of meat meant for human consumption is one of their goals.
Progress
BMT is the first company registered in the European Feed Materials Register for the production and sale of laboratory-grown meat for pet food, specifically cat and dog food. BMT claims to be the only entity in the world that can produce and sell this product for the pet food market. By 2024, BMT plans to make several metric tons per day of laboratory-grown meat meant for pet food.
References
Cellular agriculture
Tissue engineering
Meat substitutes
Biotechnology companies
Pet foods
Food and drink companies established in 2020 | Bene Meat Technologies | [
"Chemistry",
"Engineering",
"Biology"
] | 362 | [
"Biological engineering",
"Cloning",
"Chemical engineering",
"Tissue engineering",
"Biotechnology companies",
"Biotechnology organizations",
"Medical technology"
] |
75,257,512 | https://en.wikipedia.org/wiki/Waj%C5%AB | A is a hydraulic engineering and flood control structure unique to the alluvial floodplain of the Kiso Three Rivers in central Japan. It is comparable to the European polder, although a wajū is usually not reclaimed. The hardships endured for centuries by farmers whose lives revolved around the wajū has given rise to the term .
History
Since prehistoric times, sudden freshets along the course of the major rivers of Owari and Mino in late spring, caused by snowmelt in the snow country, especially in the Japanese Alps and Koshi, created great suffering for agricultural communities. The wajū was developed to protect fertile riparian farmland from being submerged by rising water levels during these freshets. Wajū are known to have been in use since at least the 16th century, but some wajū are reputed to be much older, such as one allegedly completed in 1319.
To develop a wajū, an area of land, usually a river island, was enclosed by a levee ring. In the event of a levee failure, most wajū incorporated structures allowing for vertical evacuation.
One evacuation system, used by those who could afford to build it, such as the well-to-do gōnō, was the mizuya, a sort of tower house above the high water line built on a foundation in ishigaki style. For lower-class people, including peasants and rural samurai (gōshi), who could not afford to build mizuya, there was an artificial earthen high ground similar to the terps of Northern Europe or the cattle mounds built on American ranches.
Over the centuries, the wajū suffered numerous failures due to engineering deficiencies. In the 18th century, it was first suggested that the rivers be redirected to relieve water pressure on the wajū and compensate for the inadequacies of the existing system of pressure-regulating aqueducts. In the late 19th century, the wajū were improved and reinforced using technology imported from Europe.
See also
Ōgaki Castle, which is protected by a wajū
Johannis de Rijke
Tatsuta wajū sluice gates
1754 Hōreki River incident
References
Further reading
Water in Japan
Hydraulic engineering
Hydraulic structures
Flood control projects
Hydrology
Artificial landforms
Agriculture in Japan
Irrigation | Wajū | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 462 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Environmental engineering",
"Hydraulic engineering"
] |
75,268,648 | https://en.wikipedia.org/wiki/Olezarsen | Olezarsen, sold under the brand name Tryngolza, is a medication used in the treatment of familial chylomicronemia syndrome. Olezarsen is an apolipoprotein C-III-directed antisense oligonucleotide. It is given by injection under the skin.
Olezarsen was approved for medical use in the United States in December 2024. The US Food and Drug Administration (FDA) considers it to be a first-in-class medication.
Medical uses
Olezarsen is indicated as an adjunct to diet to reduce triglycerides in adults with familial chylomicronemia syndrome.
History
The US Food and Drug Administration (FDA) granted the application of olezarsen orphan drug designation in February 2024.
Society and culture
Legal status
Olezarsen was approved for medical use in the United States in December 2024.
Names
Olezarsen is the international nonproprietary name.
Olezarsen is sold under the brand name Tryngolza.
References
Further reading
External links
Antisense RNA
Hypolipidemic agents
Nucleic acids
Orphan drugs
Triglycerides | Olezarsen | [
"Chemistry"
] | 238 | [
"Biomolecules by chemical classification",
"Nucleic acids"
] |
75,271,665 | https://en.wikipedia.org/wiki/L-H%20mode%20transition | Low to High Confinement Mode Transition, more commonly referred to as L-H transition, is a phenomenon in the fields of plasma physics and magnetic confinement fusion, signifying the transition from less efficient plasma confinement to highly efficient modes. The L-H transition, a milestone in the development of nuclear fusion, enables the confinement of high-temperature plasmas (ionized gases at extremely high temperatures). The transition is dependent on many factors such as density, magnetic field strength, heating method, plasma fueling, and edge plasma control, and is made possible through mechanisms such as edge turbulence, E×B shear, edge electric field, and edge current and plasma flow. Researchers studying this field use tools such as Electron Cyclotron Emission, Thomson Scattering, magnetic diagnostics, and Langmuir probes to gauge the PLH (energy needed for the transition) and seek to lower this value. This confinement is a necessary condition for sustaining the fusion reactions, which involve the combination of atomic nuclei, leading to the release of vast amounts of energy.
Background
Key terms and concepts needed to comprehend L-H Transition include understanding plasma and fusion.
Plasma
Plasma is one of the four fundamental states of matter, the others being solid, liquid, and gas. In contrast to the other states, plasma is composed of ionized gas particles, in which electrons have been separated from atoms or molecules, resulting in an electrically conductive medium. It occurs in phenomena like lightning, stars, and fusion plasmas.
Fusion
Fusion is a nuclear process in which two atomic nuclei combine to form a single bigger nucleus. This phenomenon releases a substantial amount of energy and is the process that powers stars. On Earth, controlled nuclear fusion is being pursued as a clean and virtually limitless energy source. It involves the fusion of isotopes like deuterium (a hydrogen atom with 1 neutron) and tritium (a hydrogen atom with 2 neutrons), and generates energy in the form of the kinetic energy (energy of motion) of released particles, such as neutrons, and intense heat. The principle is based on Einstein's equation E=mc^2: as the resulting helium is marginally lighter than the two original hydrogen nuclei, the difference in mass, known as the mass defect, is converted into energy. It is this energy that can be converted into clean electricity without producing waste.
Overview of Confinement Modes
Plasmas in both L-Mode and H-Mode exhibit distinct characteristics related to turbulence, control, power thresholds, energy efficiency, and confinement durations.
PLH (H-Mode Power Threshold)
PLH
PLH (H-mode power threshold) is an essential parameter in nuclear fusion. It represents the minimum power input required to trigger the transition from a low-confinement mode (L-Mode) to a high-confinement mode (H-Mode) in plasma confinement devices, such as tokamaks or stellarators. The PLH signifies the point at which the plasma attains the conditions necessary for enhanced energy confinement, reduced turbulence, and improved stability characteristic of H-Mode. Controlled nuclear fusion requires understanding and precise control of the PLH in order to facilitate the continuous generation of energy from the fusion process.
Factors Influencing PLH
Plasma Density and Magnetic Field Strength
The H-Mode power threshold (PLH) in experimental controlled fusion is highly dependent on both plasma density and magnetic field intensity: higher plasma densities and stronger magnetic fields correlate with an elevated PLH.
In the associated confinement scaling, τ is the confinement time, n the plasma density, V the volume of the plasma, and B the magnetic field strength.
Higher plasma densities result in increased particle collisions, enhancing the confinement of energy and increasing the plasma's stability. The greater the density, the higher the threshold of power (PLH) required to transition from L-Mode to H-Mode. The increased particle density allows for improved plasma confinement, which is vital for sustaining fusion reactions efficiently.
Similarly, stronger magnetic fields serve to contain and shape the plasma, mitigating its loss and preventing contact with the reactor's walls, which would ultimately lead to the reaction's failure. This magnetic confinement is essential for preventing energy losses and ensuring that the plasma reaches the conditions necessary for the L-Mode to H-Mode transition.
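The article does not give an explicit formula, but a widely used empirical fit for this threshold (an assumption here, not taken from the text) is the ITPA 2008 scaling of Martin et al., in which PLH grows with both density and magnetic field, consistent with the dependence described above. A minimal sketch:

```python
def p_lh_megawatts(density_1e20_m3: float, b_tesla: float,
                   surface_m2: float) -> float:
    """L-H threshold power in MW from the Martin et al. (2008) ITPA
    scaling: P_LH ~ 0.0488 * n^0.717 * B^0.803 * S^0.941, with the
    line-averaged density n in 1e20 m^-3, toroidal field B in tesla,
    and plasma surface area S in m^2."""
    return 0.0488 * density_1e20_m3**0.717 * b_tesla**0.803 * surface_m2**0.941

# Illustrative ITER-like parameters (assumed numbers, not from the article):
print(f"P_LH ~ {p_lh_megawatts(0.5, 5.3, 680):.0f} MW")   # ~52 MW
```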
Heating Method
The heating methods used in fusion devices significantly impact the PLH. Various techniques, such as neutral beam injection (injection of high-energy neutral particles to increase the plasma temperature), radio frequency heating (radiofrequency waves that increase the kinetic energy of the particles), and magnetic confinement (magnetic fields used to control extremely hot plasma), are employed to heat the plasma to the temperatures required for H-Mode. The choice of heating method and the effectiveness of energy transfer to the plasma are key factors in determining the PLH.
Plasma Fueling
Plasma fueling, which involves introducing additional fuel into the plasma, is another factor influencing the PLH. By injecting fuel, researchers can alter the plasma's density and temperature. An efficient and well-calibrated fueling system can elevate the plasma density, increasing the number of particles within the plasma, which is essential for enhancing confinement and stability. Additionally, effective fueling contributes to the rise in plasma temperature, a vital factor in achieving the conditions required for the L-Mode to H-Mode transition.
Edge Plasma Control
Edge plasma control is an important aspect of achieving and maintaining H-Mode in fusion devices. The edge plasma region, located at the outer boundary of the plasma confinement area, is susceptible to instabilities and turbulence.
The edge plasma is sensitive to disturbances because it's close to the magnetic confinement boundaries, where the plasma interacts with the walls of the containment vessel. These disturbances can lead to issues like uneven heat and particle movement or localized turbulence, which affect the transition to H-Mode.
To tackle this, techniques such as magnetic shaping and advanced control tools can be used to stabilize the edge plasma. The aim is to reduce these disturbances and make the edge plasma more stable. By regulating factors such as temperature, density, and impurities in the edge plasma, researchers can influence the PLH (H-Mode Power Threshold). Effective control of these factors ensures that the conditions for transitioning from L-Mode to H-Mode are met and maintained.
Methods for Measuring and Determining PLH
Electron Cyclotron Emissions (ECE)
Electron cyclotron emission (ECE) diagnostics involve observing the radiation emitted by electrons as they undergo cyclotron motion (circular motion around magnetic field lines) in the magnetic field. This technique provides valuable insights into plasma parameters, including electron temperature and density. By analyzing the spectral characteristics of the emitted radiation, researchers can precisely measure these properties, aiding in the determination of the PLH.
Thomson Scattering
Thomson scattering employs laser beams that scatter off plasma electrons. The characteristics of the scattered light reveal the velocity and temperature of these electrons, providing critical information about the plasma's thermal energy.
Magnetic Diagnostics
Magnetic sensors and probes are employed to map the magnetic fields within the plasma confinement device. Knowledge of the magnetic field's strength and configuration is fundamental for determining PLH, as it directly affects plasma stability and confinement.
Langmuir Probes
Langmuir probes are small electrodes inserted into the plasma to measure its properties, including electron temperature, density, and plasma potential. These measurements are critical for evaluating PLH and understanding the behavior of the plasma.
Transition Mechanisms
A few key processes make the L-H transition possible and account for the improved stability of H-mode: edge turbulence, E×B shear, the edge electric field, edge current, and plasma flow.
Mechanisms Driving L-H Transition
Edge Turbulence
The behavior of edge turbulence, a common feature in plasmas, is closely linked to the L-H transition. Researchers study how turbulence responds to changes in parameters like E×B shear, Er gradients, and other variables.
E×B Shear
One of the mechanisms thought to be responsible for triggering the L-H transition is E×B shear stabilization of turbulence. This refers to the rotation of the plasma resulting from the interaction between the electric field (E) and the magnetic field (B). As the plasma approaches the transition point, the E×B shear increases, creating a shearing motion within the plasma that opposes the turbulent transport of particles, heat, and energy by turbulent structures such as eddies and vortices. This suppression of turbulent transport promotes the stability and improved confinement characteristic of H-mode.
Edge Electric Field (Er)
The behavior of the plasma at its edge, specifically the edge electric field (Er), plays a role in the L-H transition. As the transition approaches, there is the emergence of increasingly steep Er gradients near the plasma's edge. These gradient changes are closely associated with the suppression of turbulent transport, which refers to the erratic movement of particles and heat within the plasma. This suppression marks the shift to the H-mode, a state of plasma confinement that is significantly more efficient and stable, making it a key goal in nuclear fusion research.
Edge Current and Plasma Flow
The L-H transition's characteristics are further influenced by edge current and the toroidal flow of plasma. The complex interactions between these two elements can introduce variability in the threshold conditions for the transition to the more efficient H-mode.
Future Implications
L-H transition in nuclear fusion, if understood and used correctly, has the potential for clean energy and sustainable power plants.
Importance of Understanding L-H Transition in Nuclear Fusion
Enhanced Confinement
The transition to H-Mode brings about an improvement in plasma confinement. This leads to increased energy production and more efficient fusion reactions.
Pedestal Formation
H-Mode is associated with the development of a "pedestal" in the plasma profile. This pedestal acts as a protective barrier, preventing the plasma from contacting the reactor walls. The pedestal enhances stability and enables the plasma to reach the conditions necessary for sustained fusion reactions.
PLH Optimization
Achieving and maintaining H-Mode requires reaching the PLH (H-Mode Power Threshold). Understanding the factors that influence PLH, such as plasma density, magnetic field strength, heating methods, and edge plasma control, is essential for ensuring a smooth transition and sustained H-Mode operation.
Future Energy Solutions
Controlled nuclear fusion has the potential to revolutionize the energy sector. It offers a clean and virtually limitless energy source, significantly reducing greenhouse gas emissions and addressing energy demands. The L-H transition is a critical step towards harnessing the immense energy release of fusion reactions.
References
Plasma phenomena | L-H mode transition | [
"Physics"
] | 2,163 | [
"Plasma phenomena",
"Physical phenomena",
"Plasma physics"
] |
68,073,714 | https://en.wikipedia.org/wiki/Dinosauroid | The dinosauroid is a hypothetical species created by Dale A. Russell in 1982. Russell theorized that if a dinosaur such as Stenonychosaurus had not perished in the Cretaceous–Paleogene extinction event, its descendants might have evolved to fill the same ecological niche as humans. While the theory has been met with criticism from other scientists, the dinosauroid has been featured widely in books and documentaries since the theory's inception.
Theory
In 1982, Dale A. Russell, then curator of vertebrate fossils at the National Museum of Canada in Ottawa, conjectured a possible evolutionary path for Stenonychosaurus, had it not perished in the Cretaceous–Paleogene extinction event, suggesting that it could have evolved into intelligent beings similar in body plan to humans. Russell noted that, over geologic time, there had been a steady increase in the encephalization quotient or EQ (the relative brain weight when compared to other species with the same body weight) among the dinosaurs. Russell had discovered the first troodontid skull, and noted that, while its EQ was low compared to humans, it was six times higher than that of other dinosaurs. Russell suggested that if the trend in Stenonychosaurus evolution had continued to the present, its brain case could by now measure 1,100 cm3, comparable to that of a human.
Troodontids had semi-manipulative fingers, able to grasp and hold objects to a certain degree, and binocular vision. Russell proposed that his dinosauroid, like members of the troodontid family, would have had large eyes and three fingers on each hand, one of which would have been partially opposed. Russell also speculated that the dinosauroid would have had a toothless beak. As with most modern reptiles (and birds), he conceived of its genitalia as internal. Russell speculated that it would have required a navel, as a placenta aids the development of a large brain case. However, it would not have possessed mammary glands, and would have fed its young, as some birds do, on regurgitated food. He speculated that its language would have sounded somewhat like bird song.
Reflecting on the dinosauroid theory, Russell said in an interview in 2000:
The theory, perhaps due to its outlandish premise and the dinosauroid's striking image, became a staple in dinosaur books published throughout the 1980s and into the 2000s.
Sculpture
Dale Russell worked in collaboration with taxidermist and artist Ron Seguin to create models of both a Stenonychosaurus and the fictional dinosauroid. While the model of Stenonychosaurus was constructed to reflect the biology of Stenonychosaurus as accurately as possible, the dinosauroid was wholly fabricated. The models were made in tandem, the Stenonychosaurus model taking about seven months to construct and the dinosauroid model taking about three and a half months to construct. The two models were built using similar techniques, built up over a skeleton of the creature, the final sculpt then being recast in fiberglass and filled with sand.
It can be assumed Seguin's sculpture depicts a male dinosauroid since Russell conjectured that males of the species could have had a wattle under their chin.
Reception
Russell's thought experiment has been met with criticism from other paleontologists since the 1980s, many of whom point out that his Dinosauroid is overly anthropomorphic. Gregory S. Paul (1988) and Thomas R. Holtz, Jr., consider it "suspiciously human" and Darren Naish has argued that a large-brained, highly intelligent troodontid would retain a more standard theropod body plan, with a horizontal posture and long tail, and would probably manipulate objects with the snout and feet in the manner of a bird, rather than with human-like "hands".
As Darren Naish explained in a 2012 Scientific American article on the subject of the dinosauroid,
Some authors, however, had more favourable opinions on the dinosauroid, with David Norman remarking in the 1985 book The Illustrated Encyclopedia of Dinosaurs that “Such an idea is an obviously fanciful, though provocative thought.”
The dinosauroid theory, along with the often repeated fact that Troodontids were the most intelligent dinosaurs, may have led to an overestimation of its intelligence among the general public; while Stenonychosaurus did have a larger brain to body ratio when compared to other theropod dinosaurs of its time, its intelligence was likely comparable to that of modern birds such as bustards and emus.
Modernisation
Since Russell's original work in the 1980s, alternate interpretations of intelligence in non-avian dinosaurs have been depicted in art. The collaborative work of Turkish artist C. M. Kosemen and Canadian comic book artist Simon Roy is a particularly notable example of a modern take on the dinosauroid concept. These art pieces from the late 2000s show dinosaurs with anatomy more reflective of modern paleontological understanding and retain more of their ancestral, theropod features. These designs for a dinosauroid were based on Darren Naish's writings on Russell and Seguin's original.
Kosemen and Roy's work expands upon the original concept by creating a wide range of different dinosauroid species and placing them inside of an entirely speculative ecosystem in a world without the Cretaceous–Paleogene extinction. These new dinosauroids are more inspired by birds and their tool use capability, with some inspiration from early hunter-gatherer societies as well.
See also
The New Dinosaurs
Silurian hypothesis
References
Anthropomorphic dinosaurs
Speculative evolution
Thought experiments
Anthropomorphic reptiles | Dinosauroid | [
"Biology"
] | 1,145 | [
"Biological hypotheses",
"Speculative evolution",
"Hypothetical life forms"
] |
68,079,080 | https://en.wikipedia.org/wiki/Szczecin%20water%20pumps | The Szczecin water pumps, colloquially known as Berliners, are historic water pumps in Szczecin, Poland, that are a characteristic object of the city. There were 70 pumps originally made between 1865 and 1895, with 28 surviving to this day, 27 of which hold the status of cultural property.
History
The pumps were manufactured between 1865 and 1895 in the F. Poepck water pump factories located in Szczecin and Chojna. Originally, around 70 pumps were made, of which 28 survive to this day. The pumps were originally painted blue with colorful details depicting the city coat of arms. After World War II, they were repainted green. In the first years after the war, the pumps were very useful to the city's inhabitants, as the water supply network was not yet fully functional. At that time, elements from damaged pumps were relocated to the working ones. At the beginning of the 21st century, the pumps were repainted in their original colours.
In 2000, thanks to the city conservator-restorer Małgorzata Gwiazdowska, 27 of the pumps were listed as cultural property. By the 2010s, all the pumps required restoration to their original state, as they had been damaged and decorative details had been stolen over the years. The Polcast foundry, in cooperation with the West Pomeranian University of Technology, was tasked by the Szczecin Department of Water Supply and Sewage with preparing copies of the missing decorative elements. The copies were made using both traditional methods used in the original manufacturing and modern technologies, such as computer modeling. In 2013, the department was given 25,000 Polish złoty by the city for the restoration efforts.
Many of the pumps no longer work, due to the lowering of the groundwater levels in the 2000s.
Currently, the pumps are a characteristic object of the city, and are popular among tourists.
Description
The pumps are made of iron and take the form of an almost 3-metre-tall (9.8 ft) column with a diameter of 36 cm (14.2 in) and a base in the shape of a square with side dimensions of 61 cm (24 in). The water is pumped with a hand lever and discharged from an opening in the form of a sculpture of a dragon. The pumps are painted blue, with a yellow crown placed at the top and the city coat of arms at the bottom. They were originally painted this way in the 19th century; after World War II they were repainted green, and at the beginning of the 21st century they were returned to their original colours.
The pumps are independent of the city water supply network, instead drawing on groundwater. Many of them no longer work, due to the lowering of groundwater levels in the 2000s. They are under the administration and maintenance of the Szczecin Department of Water Supply and Sewage.
The pumps are colloquially known as Berliners, as they are the same model as the historical water pumps in the city of Berlin, Germany.
There are currently 28 of them in the city, located at:
intersection of Piątego Lipca Street and Noakowskiego Street;
Bazarowa Street;
Concord Square;
intersection of Cieszkowskiego Street and Bojki Street;
intersection of Grodzka Street and Mariacka Street;
intersection of Tkacka Street and Grodzka Street;
intersection of Heleny Street and Karpińskiego Street;
intersection of Pope John Paul II Avenue and Mazurska Street;
intersection of Kaszubska Street and Narutowicza Street;
intersection of Kopernika Street and Krzywoustego Street;
intersection of Królowej Jadwigi Street and Krzywoustego Street;
intersection of Bogusława X Street and Łokietka Street;
8 Malczewskiego Street in Strefan Żeromski Park;
55 Małopolska Street;
intersection of Monte Cassino Street and Jagiellońska Street;
Mściwoja Street at the Hay Market Square;
Independence Avenue;
intersection of Bałuki Street and Św. Wojciecha Street at Anders Park;
Grey Ranks Square;
Grunwald Square;
Św. Piotra i Pawła Street near the Castle Way;
Zawisza the Black Square;
Polish Soldier Square;
intersection of Potulicka Street and Drzymały Street;
intersection of Wyzwolenia Avenue and Felczaka Street;
intersection of Odzieżowa Street and Wyzwolenia Avenue;
intersection of Wyzwolenia Avenue and Rayskiego Street;
and an intersection of Żupańskiego Street and Niemierzyńska Street.
Notes
References
Buildings and structures in Szczecin
Pumps
History of Szczecin
Objects of cultural heritage in Poland
1865 establishments in Prussia
Infrastructure completed in 1865
Buildings and structures completed in 1865 | Szczecin water pumps | [
"Physics",
"Chemistry"
] | 1,009 | [
"Physical systems",
"Hydraulics",
"Turbomachinery",
"Pumps"
] |
68,081,463 | https://en.wikipedia.org/wiki/Asbestos%20Hazard%20Emergency%20Response%20Act | The Asbestos Hazard Emergency Response Act (AHERA) is a US federal law enacted by the 99th United States Congress and signed into law by President Ronald Reagan. It required the EPA to create regulations requiring local educational agencies to inspect school buildings for asbestos-containing building material, prepare asbestos management plans, and perform asbestos response actions to prevent or reduce asbestos hazards. AHERA was enacted in 1986 as an amendment to the Toxic Substances Control Act.
AHERA required the EPA to develop a plan for states for accrediting persons who conduct asbestos inspection and corrective-action activities at schools. Whistleblowers are protected from retribution by the act.
References
Asbestos
Hazardous air pollutants
Carcinogens
IARC Group 1 carcinogens
Occupational safety and health
Industrial minerals
Air pollution in the United States
United States federal environmental legislation
1986 in American law
1986 in the environment
1986 in the United States
Environmental law in the United States | Asbestos Hazard Emergency Response Act | [
"Chemistry",
"Environmental_science"
] | 178 | [
"Carcinogens",
"Toxicology",
"Asbestos"
] |
73,901,890 | https://en.wikipedia.org/wiki/Ernest%20Kempton%20Adams%20Lectures | The Ernest Kempton Adams (EKA) Lectures at Columbia University is a lecture series on physics that originally took place from 1905 to 1913. According to physicist Andrew Millis, the series "marked the beginning of America’s engagement with modern physics," and was the first and only occasion on which several leading European physicists visited or lectured in America. It was originally funded by Edward Dean Adams with a $50,000 endowment in memory of his son, Ernest Kempton Adams, who was an 1897 alumnus of Columbia’s School of Mines. The lecture series was founded by Professor George B. Pegram. The series was revived in 2022 with a lecture by Michael Berry.
List of lectures
1905–06: Vilhelm Bjerknes, "Fields of Force"
1906–07: Hendrik Lorentz, "The Theory of Electrons and its Application to the Phenomena of Light and Radiant Heat"
1909: Max Planck, "Eight Lectures on Theoretical Physics"
1909–10: Carl Runge, "Graphical Methods"
1911: Jacques Hadamard, "Four Lectures on Mathematics"
1913: Robert W. Wood, "Researches in Physical Optics, Part I"
1913: Wilhelm Wien, "Neuere Probleme der theoretischen Physik"
2022: Michael Berry, "Four Geometric Optical Illusions"
See also
Bampton Lectures (Columbia University)
Man's Right to Knowledge Lectures
References
Lecture series at Columbia University
Recurring events established in 1905
1905 establishments in New York City
Physics education
1913 disestablishments in New York (state)
Physics events | Ernest Kempton Adams Lectures | [
"Physics"
] | 318 | [
"Applied and interdisciplinary physics",
"Physics education"
] |
73,907,946 | https://en.wikipedia.org/wiki/Elmira%20Ramazanova | Elmira Mammadamin gizi Ramazanova (; 28 October 1934 – 8 December 2020) was an Azerbaijani geologist who specialized in the efficiency of oil and gas extraction and was a professor at Azerbaijan State Oil and Industry University.
Biography
Elmira Mammadamin gizi Ramazanova was born on 28 October 1934 in Baku, the capital of the Azerbaijan Soviet Socialist Republic (now Azerbaijan), and was educated at School No. 134 in Baku and the Azerbaijan Industrial Institute (now Azerbaijan State Oil and Industry University). In 1965, she received her PhD in Technical Sciences, with her thesis titled "Razrabotka i issledovaniye metodiki rascheta usloviy razdeleniya gaza v gazokondensatnykh sistemakh" ().
After working for some time at the Azerbaijan SSR's energy department, she returned to the Azerbaijan Industrial Institute (by then renamed the Azerbaijan Oil and Chemistry Institute) as a faculty member, eventually being promoted to the rank of professor in 1978. In 1975, she received a Doctorate of Technical Sciences, with her thesis titled "Termodinamicheskiye issledovaniya neftyanykh i gazokondensatnykh mestorozhdeniy na osnove primeneniya metodov adaptatsii" (). In 1992, she moved to the Geotechnological Problems of Oil, Gas and Chemistry Scientific Research Institute, becoming its director that same year, and she remained in that position until her death.
As an academic, Ramazanova specialized in the efficiency of oil and gas extraction. In 1986, she and Fuad Veliyev wrote the book "Prikladnaya termodinamika neftegazokondensatnykh mestorozhdeniy" (). She served on the editorial board of the Azerbaijan Oil Industry Journal. She was vice-president of the country's National Oil Committee and, according to Azerbaijani newspaper İki sahil, "devoted her life and all her work to the development of oil and gas science and the oil and gas industry in Azerbaijan".
Ramazanova won several awards, including Order of the Badge of Honour in 1986, Honored Science Worker of the Azerbaijan SSR (1990), Honored Oilman of the USSR (1990), and the Shohrat Order in 2014 (for her "services in training highly qualified specialists for the oil and gas industry in the Republic of Azerbaijan"). In 2007, Ramazanova was elected to the Azerbaijan National Academy of Sciences as a corresponding member.
Ramazanova died on 8 December 2020, aged 86.
References
1934 births
2020 deaths
Azerbaijani geologists
Women geologists
20th-century geologists
Petroleum engineers
Azerbaijani engineers
Azerbaijani women engineers
20th-century women engineers
Academic staff of Azerbaijan State Oil and Industry University
Azerbaijan State Oil and Industry University alumni
Recipients of the Shohrat Order | Elmira Ramazanova | [
"Engineering"
] | 590 | [
"Petroleum engineers",
"Petroleum engineering"
] |
70,944,233 | https://en.wikipedia.org/wiki/HD%20104237 | HD 104237 is a candidate multiple star system in the southern constellation of Chamaeleon. It has the variable star designation DX Chamaeleontis, abbreviated DX Cha; HD 104237 is the stellar designation from the Henry Draper Catalogue. The system is dimly visible to the naked eye with an apparent visual magnitude that ranges from 6.59 down to 6.70. It is located at a distance of approximately 348 light-years from the Sun based on parallax measurements. The system is positioned just to the north-east of the 5th magnitude star Epsilon Chamaeleontis, and is a member of the ε Cha association of co-moving stars.
N. Houk and A. P. Cowley found a stellar classification of 'B/A peculiar' for this object in 1975. The following year, K. G. Henize catalogued it as a star displaying emission lines. In 1988, J. Y. Hu and associates found it to be a candidate Herbig Ae/Be star. This is a class of pre-main sequence stars that recently formed from a molecular cloud. In particular, the star displays an infrared excess associated with a dusty circumstellar shell, and its spectrum closely resembles other Herbig Ae/Be stars such as AB Aurigae and HR 5999. No characteristic molecular cloud was detected nearby, although there are small molecular clumps in the vicinity that may be the remains of a dissipating cloud.
This is the optically brightest Herbig star known, making it a useful object for investigation. Delta Scuti-like pulsations have been detected with frequencies of 33.29 and 36.61 cycles per day. It is an X-ray source, and the emission may originate in a hot corona. DX Cha displays an ultraviolet excess, which indicates the star is still accreting matter at a rate of ≈ 10−8 M☉·yr−1. This inflow is generating a pair of jets emerging from the poles of the star. The circumstellar disk is being viewed from nearly edge on.
Infrared observations in 1996 showed evidence of an infrared source located at an angular separation of , now designated component B. In 2003, optical observations combined with the Chandra X-ray Observatory indicated that five low mass, pre-main sequence objects lie within , equivalent to a projected distance of from the primary, component A. At least two of these are T Tauri stars. It is uncertain whether all of the nearby companions form a gravitationally bound system with the primary. The close A/B pair display radial velocity variation that indicate this is a double-lined spectroscopic binary with a K-type secondary.
References
Further reading
Herbig Ae/Be stars
Delta Scuti variables
Circumstellar disks
Spectroscopic binaries
T Tauri stars
Pre-main-sequence stars
Multiple star systems
Chamaeleon
Durchmusterung objects
104237
058520
Chamaeleontis, DX | HD 104237 | [
"Astronomy"
] | 622 | [
"Chamaeleon",
"Constellations"
] |
66,564,318 | https://en.wikipedia.org/wiki/Maximum%20energy%20product | In magnetics, the maximum energy product is an important figure-of-merit for the strength of a permanent magnet material. It is often denoted (BH)max and is typically given in units of either kJ/m³ (kilojoules per cubic meter, in SI electromagnetism) or MGOe (mega-gauss-oersted, in Gaussian electromagnetism). 1 MGOe is equivalent to 100/4π kJ/m³ ≈ 7.96 kJ/m³.
During the 20th century, the maximum energy product of commercially available magnetic materials rose from around 1 MGOe (e.g. in KS Steel) to over 50 MGOe (in neodymium magnets). Other important permanent magnet properties include the remanence (Br) and coercivity (Hc); these quantities are also determined from the saturation loop and are related to the maximum energy product, though not directly.
Definition and significance
The maximum energy product is defined based on the magnetic hysteresis saturation loop (B–H curve), in the demagnetizing portion where the B and H fields are in opposition. It is defined as the maximal value of the product of B and H along this curve (actually, the maximum of the negative of the product, −BH, since they have opposing signs):

(BH)max = max(−BH) along the demagnetization curve.
Equivalently, it can be graphically defined as the area of the largest rectangle that can be drawn between the origin and the saturation demagnetization B-H curve (see figure).
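As a rough numerical illustration of this graphical definition, the following sketch digitizes an idealized straight-line demagnetization curve (recoil permeability ≈ 1, as in a good rare-earth magnet) and searches for the largest −BH product; the remanence value of 1.2 T is an assumption chosen for illustration, not a value from this article.

```python
import numpy as np

mu0 = 4e-7 * np.pi
Br = 1.2  # remanence in tesla (illustrative assumption)

# Idealized straight-line demagnetization curve: B = Br + mu0 * H,
# sampled across the second quadrant where H opposes B.
H = np.linspace(-Br / mu0, 0, 1000)
B = Br + mu0 * H

energy_product = -B * H  # -BH is positive in this quadrant
i = np.argmax(energy_product)
print(f"(BH)max ~ {energy_product[i] / 1e3:.0f} kJ/m^3 at B = {B[i]:.2f} T")
# For an ideal linear curve this approaches Br**2 / (4 * mu0), about 286 kJ/m^3.
```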
The significance of (BH)max is that the volume of magnet necessary for any given application tends to be inversely proportional to (BH)max. This is illustrated by considering a simple magnetic circuit containing a permanent magnet of volume Vmag and an air gap of volume Vgap, connected to each other by a magnetic core. Suppose the goal is to reach a certain field strength Bgap in the gap. In such a situation, the total magnetic energy in the gap (volume-integrated magnetic energy density) is directly equal to half the volume-integrated −BH in the magnet:

Bgap²/(2μ0) · Vgap = ½ (−BH) · Vmag
thus, in order to achieve the desired magnetic field in the gap, the required volume of magnet can be minimized by maximizing −BH in the magnet. By choosing a magnetic material with a high (BH)max, and also choosing the aspect ratio of the magnet so that its operating-point −BH is equal to (BH)max, the required volume of magnet to achieve a target flux density in the air gap is minimized. This expression assumes that the permeability of the core connecting the magnetic material to the air gap is infinite, so, contrary to what the equation might imply, one cannot obtain an arbitrarily large flux density in the air gap simply by decreasing the gap distance: a real core will eventually saturate.
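A minimal sketch of this volume argument, assuming the ideal-circuit energy balance above (infinite core permeability, no leakage); the gap field, gap volume, and material (BH)max below are illustrative assumptions:

```python
import math

mu0 = 4e-7 * math.pi

B_gap = 0.5     # target flux density in the gap, tesla (assumed)
V_gap = 2e-6    # gap volume, m^3 (assumed)
BH_max = 286e3  # (BH)max of the magnet material, J/m^3 (assumed)

# Energy balance: B_gap^2/(2*mu0) * V_gap = (1/2) * BH_max * V_mag
# when the magnet operates at its maximum energy product.
V_mag = (B_gap**2 / mu0) * V_gap / BH_max
print(f"required magnet volume ~ {V_mag * 1e6:.2f} cm^3")
```

Doubling (BH)max halves the required magnet volume, which is the sense in which the required volume is inversely proportional to (BH)max.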
See also
Tetrataenite
References
Magnetostatics
Magnetism
Magnetic ordering | Maximum energy product | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 532 | [
"Magnetic ordering",
"Condensed matter physics",
"Electric and magnetic fields in matter",
"Materials science"
] |
66,567,874 | https://en.wikipedia.org/wiki/Tensor%20network | Tensor networks or tensor network states are a class of variational wave functions used in the study of many-body quantum systems and fluids. Tensor networks extend one-dimensional matrix product states to higher dimensions while preserving some of their useful mathematical properties.
The wave function is encoded as a tensor contraction of a network of individual tensors. The structure of the individual tensors can impose global symmetries on the wave function (such as antisymmetry under exchange of fermions) or restrict the wave function to specific quantum numbers, like total charge, angular momentum, or spin. It is also possible to derive strict bounds on quantities like entanglement and correlation length using the mathematical structure of the tensor network. This has made tensor networks useful in theoretical studies of quantum information in many-body systems. They have also proved useful in variational studies of ground states, excited states, and dynamics of strongly correlated many-body systems.
Diagrammatic notation
In general, a tensor network diagram (Penrose diagram) can be viewed as a graph where nodes (or vertices) represent individual tensors, while edges represent summation over an index. Free indices are depicted as edges (or legs) attached to a single vertex only. Sometimes, there is also additional meaning to a node's shape. For instance, one can use trapezoids for unitary matrices or tensors with similar behaviour. This way, flipped trapezoids would be interpreted as complex conjugates to them.
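As an illustration of this node/edge picture (not drawn from the original article), the snippet below contracts a small three-tensor network with numpy.einsum; the tensor names and shapes are invented for the example. Repeated index letters are the internal edges that get summed over, while the remaining letters are the free legs:

```python
import numpy as np

# Three nodes: A has legs (i, j), B has legs (j, k, l), C has legs (l, m).
A = np.random.rand(4, 5)
B = np.random.rand(5, 6, 7)
C = np.random.rand(7, 3)

# Edges j and l are contracted (summed over); i, k, m remain free legs.
T = np.einsum('ij,jkl,lm->ikm', A, B, C)
print(T.shape)  # (4, 6, 3)
```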
History
Foundational research on tensor networks began in 1971 with a paper by Roger Penrose. In “Applications of negative dimensional tensors” Penrose developed tensor diagram notation, describing how the diagrammatic language of tensor networks could be used in applications in physics.
In 1992, Steven R. White developed the Density Matrix Renormalization Group (DMRG) for quantum lattice systems. The DMRG was the first successful tensor network and associated algorithm.
In 2002, Guifre Vidal and Reinhard Werner attempted to quantify entanglement, laying the groundwork for quantum resource theories. This was also the first description of the use of tensor networks as mathematical tools for describing quantum systems.
In 2004, Frank Verstraete and Ignacio Cirac developed the theory of matrix product states, projected entangled pair states, and variational renormalization group methods for quantum spin systems.
In 2006, Vidal developed the multi-scale entanglement renormalization ansatz (MERA). In 2007 he developed entanglement renormalization for quantum lattice systems.
In 2010, Ulrich Schollwock developed the density-matrix renormalization group for the simulation of one-dimensional strongly correlated quantum lattice systems.
In 2014, Román Orús introduced tensor networks for complex quantum systems and machine learning, as well as tensor network theories of symmetries, fermions, entanglement and holography.
Connection to machine learning
Tensor networks have been adapted for supervised learning, taking advantage of similar mathematical structure in variational studies in quantum mechanics and large-scale machine learning. This crossover has spurred collaboration between researchers in artificial intelligence and quantum information science. In June 2019, Google, the Perimeter Institute for Theoretical Physics, and X (company) released TensorNetwork, an open-source library for efficient tensor calculations.
The main interest in tensor networks and their study from the perspective of machine learning is to reduce the number of trainable parameters (in a layer) by approximating a high-order tensor with a network of lower-order ones. Using the so-called tensor train technique (TT), one can reduce an N-order tensor (containing exponentially many trainable parameters) to a chain of N tensors of order 2 or 3, which gives us a polynomial number of parameters.
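A minimal sketch of the tensor-train idea via successive truncated SVDs; the shapes, the maximum bond rank, and the function name are illustrative assumptions rather than a reference implementation:

```python
import numpy as np

def tensor_train(T, max_rank):
    """Split an N-order tensor into a chain of order-3 cores via SVD."""
    dims = T.shape
    cores, rank = [], 1
    M = T.reshape(1, -1)
    for d in dims[:-1]:
        M = M.reshape(rank * d, -1)
        U, S, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(max_rank, len(S))           # truncate the bond dimension
        cores.append(U[:, :r].reshape(rank, d, r))
        M = S[:r, None] * Vt[:r]            # carry the remainder down the chain
        rank = r
    cores.append(M.reshape(rank, dims[-1], 1))
    return cores

T = np.random.rand(3, 4, 5, 6)  # order-4 tensor, 360 entries
cores = tensor_train(T, max_rank=8)
print([c.shape for c in cores])  # [(1, 3, 3), (3, 4, 8), (8, 5, 6), (6, 6, 1)]
```

With a fixed maximum bond rank, the number of stored parameters grows linearly rather than exponentially in the tensor order.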
See also
Tensor
Tensor diagrams
Tensor contraction
Tensor Processing Unit (TPU)
Tensor rank decomposition
Einstein Notation
Spin network
References
External links
tensornetwork.org - a resource for tensor network algorithms, theory, and software
tensors.net - tensor network tutorials, sample implementations and other resources
Tensor Network Contractions: Methods and Applications to Quantum Many-Body Systems
Applied mathematics
Concepts in physics
Quantum states
Applications of artificial intelligence
Lattice field theory | Tensor network | [
"Physics",
"Mathematics"
] | 847 | [
"Applied mathematics",
"Quantum states",
"Quantum mechanics",
"nan"
] |
69,444,595 | https://en.wikipedia.org/wiki/Seismic%20stratigraphy | Seismic stratigraphy is a method for studying sedimentary rock in the deep subsurface based on seismic data acquisition.
History
The term seismic stratigraphy was introduced in 1977 by Vail as an integrated stratigraphic and sedimentologic technique for interpreting seismic reflection data for stratigraphic correlation and for predicting depositional environments and lithology. The technique was initially employed in petroleum exploration and was subsequently developed into sequence stratigraphy by academic institutions.
Basic Concept
Seismic reflection is generated at interfaces that separate media with different acoustic properties, and traditionally these interfaces have been interpreted as lithological boundaries. Vail in 1977, however, recognized that these reflections were, in fact, parallel to the bedding surfaces, and therefore time-equivalent surfaces. Interruption of reflections indicates the disappearance of bedding surfaces. Hence, onlap, downlap, toplap and other depositional features observed on surface outcrops have been demonstrated on seismic profiles. This revolutionary interpretation has been substantiated by Vail's associated industrial drilling results and extensive multichannel seismic data. Furthermore, the most indisputable evidence comes from the progradational dipping reflection pattern associated with advancing delta deposition in shallow marine environments. Lithological boundaries associated with the delta front and slope are nearly horizontal, but are not represented by reflections. Instead, the dipping reflections are a clear indication of depositional surfaces, hence time-plane equivalents.
Methodology
Establishing Sequence Boundary
Sequence boundaries are defined as erosional unconformities, recognized on the seismic profile as reflection surfaces with reflection-termination features such as truncation below and onlap above the surface. A sequence boundary therefore represents a marine regression event, during which the continental shelf is partially exposed to subaerial erosion processes.
A seismic sequence is defined as the stratigraphic interval between two consecutive sequence boundaries, representing two marine regression events with a marine transgression event at the middle. Thus a seismic sequence is further subdivided with a basal unit of regressive systems tract, a transgressive systems tract at the middle, and a regressive systems tract at the top. The transgressive systems tract is marked at the top by a maximum flooding surface.
Describing Seismic Facies
Within a systems tract, each seismic facies is mapped based on reflection geometry, continuity, amplitude, frequency, and interval velocity. The lithology of each facies is then predicted according to known depositional models and nearby drilling results.
Estimating Relative Sea level Changes
Since onlaps on an erosional surface approximate the positions of sea level on a coastal plain, the sea level variation of a marine transgression/regression cycle could be estimated by the onlap positions on seismic profiles. The maximum sea-level rise is represented by the highest onlap position on a sequence boundary and the minimum sea-level fall by the lowest onlap position on the next younger sequence boundary. The difference in depth between the two positions represents the sea level change magnitude of the cycle.
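A toy calculation of the cycle magnitude described above; the onlap depth picks are invented for illustration:

```python
# Depths below a common datum (metres) of the two onlap picks (assumed values).
highest_onlap_depth = 120.0  # highest onlap on a sequence boundary: maximum sea-level rise
lowest_onlap_depth = 185.0   # lowest onlap on the next younger boundary: minimum sea-level fall

sea_level_change = lowest_onlap_depth - highest_onlap_depth
print(f"relative sea-level change for this cycle: {sea_level_change:.0f} m")
```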
See also
Stratigraphy
References
Stratigraphy
Geophysics | Seismic stratigraphy | [
"Physics"
] | 622 | [
"Applied and interdisciplinary physics",
"Geophysics"
] |
69,445,761 | https://en.wikipedia.org/wiki/Liquid%E2%80%93liquid%20phase%20separation%20sequence-based%20predictors | Liquid–liquid phase separation (LLPS) often involves sequence regions that have unique functional characteristics, as well as the presence of prion-like and RNA-binding domains. At present there are only a few methods to predict the propensity of a protein to drive LLPS. The range of biological mechanisms involved in LLPS, the limited knowledge about these mechanisms, and the strongly context-dependent nature of LLPS make this problem challenging. In recent years, despite advances in this field, only a few predictors specific for LLPS have been developed, each trying to relate protein sequence properties to the capability to drive LLPS. Here we review the state-of-the-art sequence-based LLPS predictors, briefly introducing them and explaining which individual protein characteristics they identify in the context of LLPS.
LLPS Simulations
Another important computational resource in the field of LLPS is the theoretical simulation of proteins driving LLPS, particularly intrinsically disordered proteins (IDPs). These simulations are complementary to experiments and provide important insights into the molecular mechanisms of individual proteins driving LLPS. A review by Dignon et al. discussed how these simulations can be applied to interpret experimental results, to explain phase behavior, and to provide predictive frameworks for designing proteins with tunable phase-transition properties. The challenge is the compromise between model resolution and computational efficiency, since all-atom simulations of large systems involving IDPs are still difficult to perform. Moreover, the molecular interactions among IDPs in the droplet state are still poorly understood, and the combination of experimental data and simulations is indispensable to elucidate them. Improvements in sampling and simulation methods in the coming years should help clarify these mechanisms.
See also
Intrinsically disordered proteins
DisProt database
MobiDB database
References
Protein structure
Structural bioinformatics
Proteomics
Neurodegenerative disorders | Liquid–liquid phase separation sequence-based predictors | [
"Chemistry",
"Biology"
] | 397 | [
"Bioinformatics",
"Structural bioinformatics",
"Protein structure",
"Structural biology"
] |
69,449,377 | https://en.wikipedia.org/wiki/Genetically%20modified%20agriculture | Genetically modified agriculture includes:
Genetically modified crops
Genetically modified livestock
Genetic engineering
Genetically modified organisms | Genetically modified agriculture | [
"Chemistry",
"Engineering",
"Biology"
] | 18 | [
"Biological engineering",
"Genetic engineering",
"Genetically modified organisms",
"Molecular biology"
] |
69,449,760 | https://en.wikipedia.org/wiki/Chandrasekhar%E2%80%93Fermi%20method | The Chandrasekhar–Fermi method (CF method), also known as the Davis–Chandrasekhar–Fermi method, is used to calculate the mean strength of the interstellar magnetic field projected on the plane of the sky. The method was described by Leverett Davis Jr in 1951 and independently by Subrahmanyan Chandrasekhar and Enrico Fermi in 1953. According to this method, the magnetic field in the plane of the sky is given by

Bpos = ξ √(4πρ) · δv/δφ,
where ρ is the mass density, δv is the line-of-sight velocity dispersion, δφ is the dispersion of polarization angles, and ξ is an order-unity factor, typically taken to be ξ ≈ 0.5. The method is also employed for prestellar molecular clouds.
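A minimal numerical sketch of the estimate in CGS units; the density and dispersion inputs are invented for illustration, and ξ = 0.5 is the commonly adopted order-unity factor:

```python
import numpy as np

xi = 0.5                    # order-unity correction factor (common choice)
rho = 2.0e-20               # gas mass density, g/cm^3 (assumed)
sigma_v = 1.0e5             # line-of-sight velocity dispersion, cm/s (~1 km/s, assumed)
sigma_phi = np.deg2rad(10)  # polarization-angle dispersion, radians (assumed)

# Davis-Chandrasekhar-Fermi estimate of the plane-of-sky field strength:
B_pos = xi * np.sqrt(4 * np.pi * rho) * sigma_v / sigma_phi  # gauss
print(f"B_pos ~ {B_pos * 1e6:.0f} microgauss")  # ~140 microgauss for these inputs
```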
References
Astrophysics
Equations of astronomy | Chandrasekhar–Fermi method | [
"Physics",
"Astronomy"
] | 153 | [
"Concepts in astronomy",
"Astronomical sub-disciplines",
"Astrophysics",
"Equations of astronomy"
] |
69,456,240 | https://en.wikipedia.org/wiki/Perimyotini | Perimyotini is a tribe of bats in the family Vespertilionidae. It contains two species of bats found in North America, each in their own monotypic genus.
Although this name is already in use by taxonomic authorities, such as the Handbook of the Mammals of the World, ITIS and the American Society of Mammalogists, and was first suggested as a name in a 2009 study, it has not actually been formally described.
Species
There are two genera in the tribe, each with one species:
Genus Parastrellus
Canyon bat, Parastrellus hesperus
Genus Perimyotis
Tricolored bat, Perimyotis subflavus
References
Mammal tribes
Vesper bats
Nomina nuda | Perimyotini | [
"Biology"
] | 148 | [
"Biological hypotheses",
"Nomina nuda",
"Controversial taxa"
] |
76,896,022 | https://en.wikipedia.org/wiki/Chlamydomonas%20moewusii | Chlamydomonas moewusii is a species of unicellular green alga belonging to the genus Chlamydomonas. C. moewusii is typically a freshwater species and occupies a significant position as a model organism for various scientific studies due to its relatively simple cellular structure and ease of cultivation.
Taxonomy
Chlamydomonas moewusii was first published by Gerloff in 1940. In his research, Gerloff examined cultures of Chlamydomonas eugametos sourced from the Berlin Institute of Plant Physiology. His findings contradicted the description provided by Moewus (1933), indicating the presence of a papilla and a significantly thinner membrane than previously described and illustrated by Moewus.
Distribution
Chlamydomonas moewusii is commonly found in freshwater and soil environments worldwide.
Morphology
Chlamydomonas moewusii is a unicellular organism with a characteristic chloroplast-containing cell. Individual cells are typically small, around 20 micrometers in diameter, and have a spherical to ovoid shape. Chlamydomonas moewusii possesses two flagella, which it uses for locomotion and orientation in its aquatic environment. As in other Chlamydomonas species, reproduction in C. moewusii occurs both asexually through cell division and sexually through the formation of gametes.
Reproduction
Chlamydomonas moewusii is a heterothallic species, exhibiting distinct behavioral differences between the gametes of its two mating types. When suspensions containing 'plus' and 'minus' gametes are mixed under light, they form clumps that eventually separate into pairs after a few minutes. These pairs then swim freely for 4–8 hours. Throughout this motile phase, there is no fusion of nuclei or cytoplasm between the cells; instead, they remain connected at their anterior ends by a short protoplasmic bridge, moving consistently in one direction. Despite both gametes retaining their flagella, only one flagellum is actively involved in propulsion. This activity is observable under favorable lighting conditions: one cell's flagellum actively beats while the other's trails behind, occasionally twitching.
Motion
Chlamydomonas moewusii exhibits a unique type of motion propelled by its two flagella. This motility is essential for various biological processes, including navigation towards light sources for photosynthesis, finding optimal environmental conditions, and locating nutrients. The motion of C. moewusii is primarily characterized by a type of swimming known as "flagellar beating." Each cell possesses two flagella of unequal length: a longer anterior flagellum and a shorter posterior flagellum. These flagella beat in a coordinated fashion, generating propulsion for the cell through the surrounding medium, typically water.
References
External link
Chlamydomonadaceae
Plants described in 1940
Chlorophyta species
Model organisms
Freshwater algae | Chlamydomonas moewusii | [
"Biology"
] | 610 | [
"Model organisms",
"Biological models"
] |
76,899,013 | https://en.wikipedia.org/wiki/Lipoprotein%20rotamase%20A | Lipoprotein rotamase A (SlrA), also known as peptidyl prolyl isomerase A (PpiA), functions as a molecular chaperone that operates within the Streptococcus pneumoniae cell membrane-cell wall interface as well as outside the bacteria. SlrA shares homology with the cyclophilin-type peptidyl-prolyl isomerases (PPIases). PPIases accelerate the folding of proteins by catalyzing the cis-trans isomer conversions of peptide bonds in the amino acid proline.
Structure
SlrA is a 29kDa, 267-amino acid long membrane-bound lipoprotein. It is encoded by the S. pneumoniae gene, SP_0771, located at position 729,840–730,643 on the complementary strand. The structure of SlrA is predicted to contain an eight-strand β-bundle and two associated α-helices, similar to the PPIase domains of cyclophilins.
Lipidated forms of SlrA occur in all sequenced streptococcal genomes with the homologs sharing 60-70% amino acid sequence identity. SlrA also shares homology with other Gram-positive cyclophilins such as the membrane-bound PpiA in Lactococcus lactis.
Function
As a PPIase, SlrA functions at the rate-limiting step of protein folding of secreted proteins. The identity of the proteins folded by SlrA and its homologs is still under investigation, but the roles of these proteins can be hypothesized based on the phenotypes observed in mutants lacking SlrA. The SlrA homologs in Streptococcus mutans and Streptococcus gordonii, PpiA, also display anti-phagocytic activity in their respective bacteria. SlrA has been implicated in S. pneumoniae colonization, competence, cell wall integrity, and adhesion to human cells derived from the upper and lower respiratory tract. It is hypothesized that SlrA acts as a protein-folding chaperone for client proteins involved in those key processes. Additionally, SlrA has been shown to indirectly contribute to S. pneumoniae anti-phagocytic activity.
References
Lipoproteins | Lipoprotein rotamase A | [
"Chemistry"
] | 488 | [
"Lipid biochemistry",
"Lipoproteins"
] |
76,901,573 | https://en.wikipedia.org/wiki/CYP109%20family | Cytochrome P450, family 109, also known as CYP109, is a cytochrome P450 monooxygenase family; many of its members are associated with fatty acid hydroxylation. The first genes identified in this family were CYP109A1 and CYP109B1 from Bacillus subtilis. CYP109 is one of only three P450 families shared between bacteria and archaea; the other two are CYP147 and CYP197. Genes in this family are co-present on archaeal plasmids and chromosomes, implying plasmid-mediated horizontal gene transfer of these genes from bacteria to archaea.
References
109
Protein families | CYP109 family | [
"Biology"
] | 152 | [
"Protein families",
"Protein classification"
] |
78,229,079 | https://en.wikipedia.org/wiki/%28R%29-MDMA | (R)-3,4-Methylenedioxy-N-methylamphetamine ((R)-MDMA), also known as (R)-midomafetamine or as levo-MDMA, is the (R)- or levorotatory (l-) enantiomer of 3,4-methylenedioxy-N-methylamphetamine (MDMA; midomafetamine; "ecstasy"), a racemic mixture of (R)-MDMA and (S)-MDMA. Like MDMA, (R)-MDMA is an entactogen or empathogen. It is taken by mouth.
The drug is a serotonin–norepinephrine releasing agent (SNRA) and weak serotonin 5-HT2A receptor agonist. It has substantially less or no significant dopamine-releasing activity compared to MDMA and (S)-MDMA. In preclinical studies, (R)-MDMA shows equivalent therapeutic-like effects to MDMA, such as increased prosocial behavior, but shows reduced psychostimulant-like effects, addictive potential, and serotonergic neurotoxicity. In clinical studies, (R)-MDMA produces similar effects to MDMA and (S)-MDMA, but is less potent and has a longer duration.
(R)-MDMA was first described in enantiopure form by 1978. Under the developmental code names EMP-01 and MM-402, it is under development for the treatment of post-traumatic stress disorder (PTSD), social phobia, and pervasive development disorders (PDDs) such as autism. It is thought that (R)-MDMA might have a better safety profile than MDMA itself whilst retaining its therapeutic benefits.
Pharmacology
Pharmacodynamics
Preclinical studies
MDMA is a well-balanced serotonin–norepinephrine–dopamine releasing agent (SNDRA). (R)-MDMA and (S)-MDMA are both SNDRAs similarly. However, (R)-MDMA is several-fold less potent than (S)-MDMA in vitro and is also less potent than (S)-MDMA in vivo in non-human primates. In addition, whereas MDMA and (S)-MDMA are well-balanced SNDRAs, (R)-MDMA is comparatively much less potent as a dopamine releasing agent (~11-fold less potent in releasing dopamine than serotonin), and could be thought of instead more as a serotonin–norepinephrine releasing agent (SNRA) than as an SNDRA. In non-human primates, (S)-MDMA demonstrated significant dopamine transporter (DAT) occupancy, whereas DAT occupancy with (R)-MDMA was undetectable. Similarly, MDMA and (S)-MDMA were found to increase dopamine levels in the striatum in rodents and non-human primates, whereas (R)-MDMA did not increase striatal dopamine levels. As such, (R)-MDMA may be less psychostimulant-like than MDMA or (S)-MDMA.
In addition to its actions as an SNDRA, MDMA has weak affinity for the serotonin 5-HT2A, 5-HT2B, and 5-HT2C receptors, where it acts as an agonist. (R)-MDMA shows higher affinity for the serotonin 5-HT2A receptor than (S)-MDMA or MDMA. In addition, (R)-MDMA is more potent as an agonist of the serotonin 5-HT2A receptor, acting as a weak partial agonist of this receptor, whereas (S)-MDMA shows very little effect. Conversely, however, (S)-MDMA is more potent as an agonist of the serotonin 5-HT2C receptor. Based on these findings, it has been hypothesized that (R)-MDMA may be more psychedelic-like than (S)-MDMA. However, although (R)-MDMA partially substitutes for lysergic acid diethylamide (LSD) in animal drug discrimination tests, it did not produce the head-twitch response, a behavioral proxy of psychedelic effects, at any tested dose. In any case, findings in this area are conflicting. (R)-MDMA is inactive as an agonist of the human TAAR1, whereas (S)-MDMA shows very weak potency as an agonist of the receptor (EC50 = 74,000 nM).
MDMA is a well-known serotonergic neurotoxin and this has been demonstrated both in animals and in humans. There is evidence that the serotonergic neurotoxicity of MDMA may be driven primarily by (S)-MDMA rather than (R)-MDMA. (R)-MDMA shows substantially lower or potentially no neurotoxicity compared to (S)-MDMA in animal studies. This has been the case even when doses of (R)-MDMA were increased to account for its lower potency than (S)-MDMA. However, more research is needed to confirm this in other species, such as non-human primates. In contrast to (S)-MDMA, (R)-MDMA does not produce hyperthermia in rodents, and this may be involved in its reduced risk of neurotoxicity, as hyperthermia augments and is essential for the serotonergic neurotoxicity of MDMA. The reduced potency of (R)-MDMA as a dopamine releasing agent may also be involved in its reduced neurotoxic potential, as dopamine release is likewise essential for the neurotoxicity of MDMA. The hyperthermia of MDMA may in fact be mediated by dopamine release. As (R)-MDMA is less neurotoxic than (S)-MDMA and MDMA or even non-neurotoxic, it may allow for greater clinical viability and prolonged regimens of drug-assisted psychotherapy.
(R)-MDMA and (S)-MDMA have shown equivalent effects in terms of inducing prosocial behavior in monkeys. However, (S)-MDMA shows higher potency, whereas (R)-MDMA shows greater maximal effects. Conversely, (S)-MDMA does not increase prosocial behavior in mice, whereas both MDMA and (R)-MDMA do so. MDMA and (S)-MDMA increase locomotor activity, a measure of psychostimulant-like effect, in rodents, whereas (R)-MDMA does not do so. (R)-MDMA likewise showed fewer reinforcing effects than (S)-MDMA in non-human primates. These findings further add to (R)-MDMA showing reduced psychostimulant-like and addictive effects compared to MDMA and (S)-MDMA.
Clinical studies
The first modern clinical study of the comparative effects of MDMA, (R)-MDMA, and (S)-MDMA was published in August 2024. It compared 125mg MDMA, 125mg (S)-MDMA, 125 and 250mg (R)-MDMA, and placebo. (R)-MDMA increased any drug effect, good drug effect, drug liking, stimulation, drug high, alteration of vision, and alteration of sense of time ratings similarly to MDMA and (S)-MDMA. However, (S)-MDMA 125mg was more potent in increasing subjective effects, including stimulation, drug high, happy, and open, among others, than (R)-MDMA 125 or 250mg or MDMA 125mg. Ratings of bad drug effect and fear were minimal with MDMA, (R)-MDMA, and (S)-MDMA. In contrast to expectations, (R)-MDMA did not produce more psychedelic-like effects than (S)-MDMA. Besides subjective effects, (R)-MDMA increased heart rate, blood pressure, and body temperature similarly to MDMA and (S)-MDMA, though it was less potent in producing these effects. Body temperature was notably increased to the same extent with (R)-MDMA 250mg as with MDMA 125mg and (S)-MDMA 125mg.
The differences in effects between (R)-MDMA and (S)-MDMA may reflect the higher potency of (S)-MDMA rather than actual qualitative differences between the effects of (S)-MDMA and (R)-MDMA. It was estimated that equivalent effects would be expected with (S)-MDMA 100mg, MDMA 125mg, and (R)-MDMA 300mg. The findings of the study were overall regarded as not supporting the hypothesis that (R)-MDMA would produce equivalent therapeutic effects as (S)-MDMA or MDMA whilst reducing safety concerns. However, more clinical studies were called for to assess the revised estimated equivalent doses of MDMA, (R)-MDMA, and (S)-MDMA.
Pharmacokinetics
The elimination half-life of (S)-MDMA is 4.1hours, whereas the half-life of (R)-MDMA is 12 to 14hours. In the case of racemic MDMA administration, the half-life of (S)-MDMA is 5.1hours and the half-life of (R)-MDMA is 11hours. (R)-MDMA shows cytochrome P450 CYP2D6 inhibition and lower levels of the metabolite 4-hydroxy-3-methoxymethamphetamine (HMMA) than (S)-MDMA.
History
(R)-MDMA was first described in the scientific literature in enantiopure form by 1978. It was described in a paper authored by Alexander Shulgin, David E. Nichols, and other colleagues.
Clinical development
(R)-MDMA is under development separately by Empath Biosciences (EmpathBio) and MindMed. It is being developed by Empath Biosciences for the treatment of PTSD and social phobia and it is being developed by MindMed for the treatment of PDDs or autism. As of 2024, the drug is in phase 1 clinical trials for both PTSD, social phobia, and PDDs/autism.
See also
List of investigational hallucinogens and entactogens
List of investigational autism and pervasive developmental disorder drugs
List of investigational social anxiety disorder drugs
References
5-HT2A agonists
Benzodioxoles
Enantiopure drugs
Entactogens and empathogens
Entheogens
Experimental entactogens
Experimental hallucinogens
Experimental psychiatric drugs
Methamphetamines
Serotonin-norepinephrine releasing agents
Serotonin receptor agonists
Substituted amphetamines
VMAT inhibitors | (R)-MDMA | [
"Chemistry"
] | 2,405 | [
"Stereochemistry",
"Enantiopure drugs"
] |
78,231,730 | https://en.wikipedia.org/wiki/MSP-1014 | MSP-1014 is a serotonergic psychedelic which is under development for the treatment of major depressive disorder, other depressive disorders, and anxiety disorders.
It is a prodrug of psilocin similarly to psilocybin, and hence acts as a non-selective serotonin receptor agonist, including of the serotonin 5-HT2A receptor.
The drug is under development by Mindset Pharma and Otsuka America Pharmaceutical. As of January 2024, it is in phase 2 clinical trials for major depressive disorder and is in the preclinical stage of development for anxiety disorders and other depressive disorders. The chemical structure of MSP-1014 does not yet seem to have been disclosed. However, Mindset Pharma patented psilocin derivatives and prodrugs in 2022.
References
5-HT2A agonists
Drugs with undisclosed chemical structures
Experimental antidepressants
Experimental hallucinogens
Prodrugs
Psychedelic tryptamines
Serotonin receptor agonists | MSP-1014 | [
"Chemistry"
] | 217 | [
"Chemicals in medicine",
"Prodrugs"
] |
78,237,247 | https://en.wikipedia.org/wiki/Poison%20exon | Poison exons (PEs), also called premature termination codon (PTC) exons or nonsense-mediated decay (NMD) exons, are a class of cassette exons that contain PTCs. Inclusion of a PE in a transcript targets the transcript for degradation via NMD. PEs are generally highly conserved elements of the genome and are thought to have important regulatory roles in biology. Targeting PE inclusion or exclusion in certain transcripts is being evaluated as a therapeutic strategy.
Discovery
In 2002, a model termed regulated unproductive splicing and translation (RUST) was proposed based on the finding that many (~one-third) alternatively spliced transcripts contain PEs. In this model, coupling alternative splicing to NMD (AS-NMD) is thought to tune transcript levels to regulate protein expression. Alternative splicing may also lead to NMD via other pathways besides PE inclusion, e.g., intron retention.
PEs were initially characterized in RNA-binding proteins from the SR protein family. Genes for other RNA-binding proteins (RBPs), such as those for heterogeneous nuclear ribonucleoproteins (hnRNPs), also contain PEs. Numerous chromatin regulators also contain PEs, though these are less conserved than PEs within RBPs such as the SR proteins. Multiple spliceosomal components contain PEs. Certain PEs may occur only in specific tissues.
PE-containing transcripts generally represent a minority of the overall transcript population, in part due to their active degradation via NMD, though this relative abundance can be elevated upon inhibition of NMD or certain biological states. Certain PE-containing transcripts are resistant to NMD and may be translated into truncated proteins.
Regulation
Cis-regulatory elements neighboring PEs have been found to affect PE inclusion.
Many proteins whose corresponding genes contain PEs autoregulate PE inclusion in their respective transcripts and thereby control their own levels via a feedback loop. Cross-regulation of PE inclusion has also been observed.
Differential splicing of PEs is implicated in biological processes such as differentiation, neurodevelopment, dispersal of nuclear speckles during hypoxia, tumorigenesis, organism growth, and T cell expansion.
Protein kinases that regulate phosphorylation of splicing factors can affect splicing processes, thus kinase inhibitors may affect inclusion of PEs. For example, CMGC kinase inhibitors and CDK9 inhibitors have been found to induce PE inclusion in RBM39.
Small molecules that modulate chromatin accessibility can affect PE inclusion.
Mutations in splicing factors can lead to inclusion of PEs in certain transcripts.
PE inclusion can be regulated by external variables such as temperature and electrical activity. For example, PE inclusion in RBM3 transcript is lowered during hypothermia. This is mediated by temperature-dependent binding of the splicing factor HNRNPH1 to the RBM3 transcript. The neuronal RBPs NOVA1/2 are translocated from the nucleus to the cytoplasm during pilocarpine-induced seizure in mice, and it was found that NOVA1/2 regulates the expression of cryptic PEs. The glycosyltransferase O-GlcNAc transferase is responsible for installing the O-GlcNAc post-translational modification and contains a PE. It has been frequently observed that pharmacological or genetic perturbations that elevate cellular O-GlcNAc levels increase PE inclusion in the OGT transcript.
Disease
Proper regulation of PE inclusion and exclusion is important for health. Genetic mutations can affect inclusion of PEs and cause disease. For example, loss of CCAR1 leads to PE inclusion in the FANCA transcript, resulting in a Fanconi anemia phenotype.
Dysregulation of components of the splicing machinery can also cause dysregulation of PE inclusion. Mutations in the splicing factor SF3B1 have been found to promote PE inclusion in BRD9, reducing BRD9 mRNA and protein levels and leading to melanomagenesis. Mutations in U2AF1 promote PE inclusion in EIF4A2, leading to impaired global mRNA translation and acute myeloid leukemia (AML) chemoresistance through the integrated stress response pathway. The splicing factor SRSF6 contains a PE whose skipping is connected to T cell acute lymphoblastic leukemia (T-ALL), and PE inclusion in SRSF10 is linked to acute lymphoblastic leukemia (ALL).
Intronic mutations can lead to PE inclusion, such as in the case of SCN1A, where mutations within intron 20 promote inclusion of the nearby PE 20N, leading to Dravet syndrome-like phenotypes in mouse models. An intronic mutation in FLNA has been found to impair binding of the splicing regulator PTBP1, leading to inclusion of a poison exon in FLNA transcripts that causes a brain-specific malformation. In RAD50, TGAGT deletion is associated with a cryptic poison exon that occurs 30 nucleotides downstream within intron 21 mediated by altered U2AF recognition.
Differential inclusion of PEs in various splicing factor and hnRNP genes has been reported in type 1 diabetes. SRSF2 mutations have been found to promote PE inclusion in the epigenetic regulator EZH2, resulting in impaired hematopoietic differentiation.
The TRA2B PE is essential for male fertility and meiotic cell division in mouse models. Deletion of this PE leads to an azoospermia phenotype.
Clinical relevance
Diagnostics
With the advent of next-generation sequencing technologies, diagnostic genetic testing has emerged as a powerful tool to diagnose afflictions associated with specific genetic variants. Many diagnostic genetic testing efforts have focused on exome sequencing. PE annotations may improve the diagnostic yield of these tests for certain diseases. For example, variants that affect PE inclusion in sodium channel genes (SCN1A, SCN2A, and SCN8A) have been found to be associated with epilepsies, and analogous variants in SNRPB have been found to be associated with cerebrocostomandibular syndrome.
Therapeutic discovery
As PE inclusion results in transcript degradation, targeted PE inclusion or exclusion is being evaluated as a therapeutic strategy. This strategy may prove especially applicable towards targets whose gene products are not easily ligandable, such as "undruggable" proteins. Targeting PE inclusion/exclusion has been demonstrated with both small molecules and antisense oligonucleotides (ASOs). Small molecules may modulate splicing by stabilizing alternative splice sites. ASOs may block specific splice sites or target certain cis-regulatory elements to promote splicing at other sites. These ASOs may also be referred to as splice-switching oligonucleotides (SSOs). ASO walks, tiling different ASOs across a gene sequence, may be necessary to identify ASOs that have the desired effect on PE inclusion.
Stoke Therapeutics is evaluating a strategy termed Targeted Augmentation of Nuclear Gene Output (TANGO). Targeting exon 20N in SCN1A mRNA with the antisense oligonucleotide zorevunersen (STK-001) blocks inclusion of this PE, leading to elevated levels of the productive SCN1A transcript and the gene product sodium channel protein 1 subunit alpha (NaV1.1). In mouse models of Dravet syndrome, which is driven by mutations in SCN1A, zorevunersen was able to reduce incidence of electrographic seizures and sudden unexpected death in epilepsy and prolong survival. As of October 2024, zorevunersen is being evaluated in phase 2 clinical trials (NCT04740476). Zorevunersen received FDA Breakthrough Therapy Designation in December 2024. Also in December 2024, Stoke Therapeutics disclosed that zorevunersen is generally well tolerated and shows substantial and sustained reductions in convulsive seizure frequency. Stoke Therapeutics expects to launch a phase 3 clinical trial in 2025 evaluating zorevunersen for reduction in seizure frequency as the primary endpoint and cognition and behavioral changes as secondary endpoints.
Stoke Therapeutics is also evaluating the ASO STK-002 for treatment of autosomal dominant optic atrophy (ADOA). STK-002 promotes removal of a PE in the transcript of OPA1, leading to elevated OPA1 protein levels.
Remix Therapeutics developed REM-422, which is an oral small molecule that promotes PE inclusion in the oncogene MYB. REM-422 was discovered through a screening campaign for molecules that promote PE inclusion in MYB. Subsequent in vitro experiments showed that REM-422 selectively facilitates binding of the U1 snRNP complex to oligonucleotides containing the MYB 5' splice site sequence. In various AML cell lines, REM-422 leads to degradation of MYB mRNA and lower MYB protein levels. REM-422 demonstrated antitumor activity in mouse xenograft models of acute myeloid leukemia. As of October 2024, REM-422 is being evaluated in phase 1 clinical trials (NCT06118086, NCT06297941). The splicing modulator small molecule risdiplam, originally developed to promote exon 7 inclusion in the SMN2 transcript for treatment of spinal muscular atrophy, dose-dependently promotes PE inclusion in the MYB transcript as well.
Rgenta Therapeutics has also developed RGT-61159, an oral small molecule that promotes PE inclusion in MYB, as a potential treatment for adenoid cystic carcinoma (ACC). RGT-61159 is being evaluated in phase 1 clinical trials (NCT06462183).
PTC Therapeutics is evaluating the oral small molecule PTC518 as a treatment for Huntington's disease. PTC518 was well-tolerated and showed dose-dependent decreases in HTT mRNA and HTT protein levels in a phase 1 clinical trial. As of October 2024, PTC518 is being evaluated in phase 2 clinical trials (NCT05358717). In December 2024, Novartis entered a global license and collaboration agreement with PTC Therapeutics for PTC518 with an upfront payment of $1.0 billion and up to $1.9 billion in development, regulatory, and sales milestones.
Therapeutic targeting of poison exon inclusion/exclusion has also been proposed for oncogenic splicing factors, BRD9 (for treatment of cancer), SYNGAP1, RBM3 (for treatment of neurodegeneration), and CFTR (for treatment of cystic fibrosis).
References
Genetics
Medicine
Spliceosome
Gene expression
RNA
RNA splicing
Drug discovery
Nucleic acids
Molecular biology
Cellular processes
Cell biology
Biology | Poison exon | [
"Chemistry",
"Biology"
] | 2,302 | [
"Medicine",
"Biomolecules by chemical classification",
"Cell biology",
"Life sciences industry",
"Drug discovery",
"Genetics",
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Medicinal chemistry",
"Molecular biology",
"Biochemistry",
"Nucleic acids"
] |
78,243,798 | https://en.wikipedia.org/wiki/K-factor%20%28metallurgy%29 | The K-factor characterizes the bending behavior of sheet metal, specifically the position of the neutral axis within a bend relative to the material thickness, and by extension the formulae used to calculate it. Mathematically it is an engineering aspect of geometry. Its application in precision sheet metal bending (with press brakes in particular) is so intricate that its proper use in engineering has been termed an art.
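A short sketch of how the K-factor typically enters a bend-allowance calculation; this is the common textbook formulation rather than anything specified in this article, and the numbers are illustrative:

```python
import math

def bend_allowance(angle_deg, inside_radius, thickness, k_factor):
    """Arc length along the neutral axis for one bend.

    The K-factor locates the neutral axis as a fraction of the material
    thickness, measured from the inside surface of the bend.
    """
    return math.radians(angle_deg) * (inside_radius + k_factor * thickness)

# Illustrative values: 90 degree bend, 2.0 mm inside radius,
# 1.5 mm sheet, K ~ 0.44 (a typical air-bending assumption).
print(f"bend allowance ~ {bend_allowance(90, 2.0, 1.5, 0.44):.2f} mm")
```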
See also
Bending (metalworking)
Engineering
Metal
Technology
References
Metallurgy
Metal forming | K-factor (metallurgy) | [
"Chemistry",
"Materials_science",
"Engineering"
] | 82 | [
"Metallurgy",
"Materials science stubs",
"Materials science",
"nan"
] |
63,783,750 | https://en.wikipedia.org/wiki/Gap%20junction%20modulation | Gap junction modulation describes the functional manipulation of gap junctions, specialized channels that allow direct electrical and chemical communication between cells without exporting material from the cytoplasm. Gap junctions play an important regulatory role in various physiological processes including signal propagation in cardiac muscles and tissue homeostasis of the liver. Modulation is required, since gap junctions must respond to their environment, whether through an increased expression or permeability. Impaired or altered modulation can have significant health implications and are associated with the pathogenesis of the liver, heart and intestines.
Modulation is achieved by endogenous chemicals, growth factors, hormones and proteins that affect gap junction expression, structure, degradation and permeability. Natural forms of modulation include voltage gating and chemical modulation. Voltage-gating is a relatively fast modulation categorized into Vj gating and slow voltage gating, which are further influenced by calcium ions (Ca2+), pH and calmodulin. Chemical modulation entails the addition or removal of a functional group or protein from the connexin subunits of gap junctions; this can alter gap junction expression and structure.
Voltage gating
The molecular structure of gap junctions makes them sensitive and responsive to intercellular currents. This sensitivity allows the channel to alter its size and structure according to electrical signals. The two types of voltage gating, Vj gating and slow voltage gating, are similar in their mechanisms, but react to different electrical magnitudes. The electrical signals that modulate gap junctions release Ca2+ which induces a positive feedback with voltage gating. This calcium modulation is also influenced by pH and calmodulin.
Mechanisms
Vj gating
Vj gating governs the size of the gap junction and is able to reduce the channel size by up to 40% from its fully open state. The sensitivity to voltage is largely attributed to the gap junction's cytoplasmic NH2-terminal domain, which is responsive to small voltages (2-3 mV). Voltage-gating modulation is associated with the charge of the connexin; positively charged connexins close with hyperpolarization and negatively charged connexins close with depolarization. Beyond connexin charge, Vj gating is also regulated by the concentrations of Ca2+, H+ and calmodulin.
Slow voltage gating
Slow voltage gating is hypothesized to be similar to Vj gating in terms of mechanism, but unlike Vj gating, fully closes the channel to a non-conducting state. This modulation is slower than the prior gating method, as it occurs in response to Vj gating. The temporal voltage regulation is also subject to higher voltage (10-30mV), various natural factors–such as lipophiles and low pH–and the docking of two hemichannels. The exact mechanisms of both Vj gating and slow voltage gating remain unknown, but it is predicted that change in charge causes the cytoplasmic NH2-terminal domain to move toward the cytoplasm to decrease the pore size.
Factors
Calcium
Calcium exists in organisms as the ion Ca2+ and is an effective modulator of gap junctions, having a close relationship with voltage gating. An increase in intracellular calcium ion concentration to above 500 nM causes channel permeability to decrease rapidly. This modulation via calcium is known to be protective, as it prevents dead cells from inducing apoptosis in neighboring cells. Yet high Ca2+ concentrations are rarely seen, as this gating method is self-inhibiting. Ca2+ concentration is a crucial determinant of voltage gating, as the influx and movement of Ca2+ is required for depolarization.
pH
Gap junction permeability is further influenced by their environment’s pH. The pH sensitivity depends on the type of connexin composing the gap junction, but the channels generally close at a pH of 6.4-6.2. Under weak acidic conditions, the gap junction’s channels are observed to remain closed despite voltage changes, while under strong acidic conditions, the channels do open with voltage, but close immediately.
Reports further indicate a synergistic relationship between hydrogen ions and the intracellular concentration of calcium in reducing gap junction permeability. Studies on cardiac cells noted that acidosis (decreased pH) by itself had a limited effect in reducing dye diffusion between cells; the reduction was elevated significantly with an increase in intracellular calcium concentration.
Calmodulin
Calmodulin (CaM) is a protein composed of 148 amino acids that plays both an intermediary and direct role in moderating gap junctions. Calmodulin acts as a regulator on membrane channels including both small and intermediate Ca2+-activated potassium ion channels, L-type Ca2+ channels, P/Q-type Ca2+ channels and sodium ion channels. All of these membrane channels can further influence cation concentrations, determining the electrochemical gradient of the cellular membrane, and affecting voltage gating.
Calmodulin also acts directly on gap junctions through its two Ca2+ binding sites. With the binding of Ca2+, calmodulin goes through a conformational change that eventually blocks the gap junction’s channel, preventing the passage of cytoplasmic material. Likewise, while the inhibition of calmodulin expression increases the probability of gap junction closure, CaM-antagonist and CaM-blockers promote the opening of gap junctions.
Chemical modification
Chemical modification takes place on connexin proteins after their translation and typically involves changes in phosphorylation and ubiquitination, although nitrosylation, deamidation and hydroxylation have also been noted to be modifying processes. The implications of chemical modification vary widely depending on the functional group or protein that was added and the connexin proteins involved. Typically the changes occur in the development and lifecycle of the connexin protein or in the gating and structure of gap junctions themselves.
Mechanisms
Phosphorylation
Phosphorylation, the addition of a phosphate group, plays an important role in regulating both gap junctions and the subunits that form them. The gap junction protein connexin generally possesses a number of phosphorylation sites (connexin Cx43 has 21). The binding of phosphate to these sites can bring about various effects that influence aspects of the protein's lifecycle. For example, phosphorylation of Cx43's phosphorylation sites promotes its trafficking from the Golgi apparatus to the plasma membrane. The subsequent oligomerization of this protein into hemichannels, and of the hemichannels into gap junctions, is also induced by phosphorylation. Likewise, phosphorylation can initiate degradation as well as changes in gating, which determines the permeability of gap junctions.
Phosphorylation of gap junctions and their subunits is typically achieved through protein kinases, enzymes that add phosphates to the amino acids of proteins. Serine/threonine kinases, which phosphorylate the hydroxyl group of serine or threonine residues, form the bulk of the connexin-phosphorylating kinases. These include protein kinase C (PKC), protein kinase G (PKG), Ca2+/calmodulin-dependent kinase II (CaMKII), cAMP-dependent protein kinase A (PKA), MAP kinase (MAPK) and casein kinase (CK). Src is the lone tyrosine kinase that has been observed to phosphorylate connexins. Protein kinases vary in their targeted connexins, specific sites of phosphorylation and phosphorylation effects.
For example, PKA phosphorylation impacts both hemichannel and connexin activity: neuronal hemichannel activity is suppressed through reduced permeability, while connexins show increased trafficking and assembly into gap junctions. PKA activity is largely associated with an increased cAMP concentration. On the other hand, PKB phosphorylation can prevent the binding of the zonula occludens-1 protein, resulting in increased gap junction size and hemichannel permeability. Its activity usually occurs in response to physiological changes such as wounding or hypoxia.
Ubiquitination
Ubiquitin is a small, long-lived, globular protein that covalently bonds to lysine residues of target proteins in a process known as ubiquitination. Much like phosphorylation, it acts as a post-translational regulator for many proteins, including connexin. Ubiquitination has been observed to be most involved in the final stages of the connexin protein's lifecycle, regulating both gap junction endocytosis and connexin degradation. However, details of the specific pathways and proteins involved are still being studied.
The distinct effects of ubiquitination tend to vary widely, depending on the tissues and subcellular location where it occurs and the type of ubiquitin involved. For example, newly synthesized Cx43 in the endoplasmic reticulum can undergo polyubiquitination, resulting in recognition by proteasomes that carry out endoplasmic reticulum associated protein degradation (ERAD). Ubiquitination of Cx43 that is at the plasma membrane and organized into gap junctions will result in internalization, or endocytosis, followed by degradation of Cx43 by lysosomes.
Nitrosylation
Nitrosylation, the addition of a nitric oxide (NO) group, has been demonstrated to play a substantial role in the post-translational modification of both gap junction proteins and hemichannels. Nitrosylation can be induced either on the connexin proteins themselves or on proteins that further regulate connexin, such as kinases. The type of nitrosylation that occurs is S-nitrosylation, the addition of a nitric oxide group to a cysteine thiol of a protein.
Experiments regarding S-nitrosylation and the lifecycle of gap junctions suggest it has a role in regulating hemichannel trafficking and gap junction formation; addition of NO rapidly increased the level of Cx40 and Cx43 connexin at the plasma membrane, as well as the formation of gap junctions in endothelial cells. The mechanism behind this phenomenon is still unknown, but the pro-oxidant conditions induced by NO are thought to modulate the properties of the Golgi apparatus, which is responsible for modifying and sorting proteins.
Related diseases
Arrhythmogenic cardiomyopathy
Electrical coupling among cardiac cells is crucial for a healthy heart, allowing the cardiac muscle fibers to contract normally. This coupling is mediated by gap junctions, which permit the passive diffusion of materials – such as ions – from the cytoplasm of one cell to another; this enables proper propagation of electrical impulses along cardiac cells.
The genetic cardiac disease arrhythmogenic cardiomyopathy (ACM) is marked by a reduced expression or number of the heart's gap junctions, which can further lead to impaired function and ventricular arrhythmia. This disease results from altered expression of proteins, including neural cadherin (CDH2) and plakophilin-2 (PKP2), which naturally promote gap junction expression. Decreased CDH2 is found to reduce the expression of connexin 43 (Cx43), a major protein that promotes gap junction synthesis, further leading to a reduced conduction velocity of electrical impulses. A decrease in PKP2 also limits Cx43 expression, but only with a concurrent reduction in N-cadherin.
Liver diseases
As gap junctions play a major role in regulating the homeostasis of the liver, abnormal expression of gap junctions can be a major contributor to liver failure. In cirrhosis and acute-on-chronic liver failure (ACLF), for example, increased expression of hepatic connexin 43 is associated with severe inflammation. Conditions are worsened as the increased expression of Cx43 rapidly propagates death signals to neighboring cells, causing them to undergo apoptosis.
Gastrointestinal diseases
Just as in the heart, gap junctions play a significant role in mediating electrical signals within the intestines. Electrical signals are necessary for the synchronization of smooth muscle, buffering substrate concentrations, and mediating inflammation. As such, dysfunction of gap junctions contributes to numerous conditions, such as gastrointestinal infections and inflammatory bowel disease.
The role of gap junctions in pathogenesis varies between diseases. In inflammatory bowel disease, a decrease in gap junction expression disrupts junctional complexes among intestinal cells, leading to symptoms such as diarrhea and abdominal cramps. Less is known about the mechanism by which gap junctions contribute to gastrointestinal infections, but the correlation is clear: infections are marked by increased Cx43 levels and abnormal Cx43 localization.
See also
Gap junction
Junctional complex
Vinnexin
References
Cell communication | Gap junction modulation | [
"Biology"
] | 2,714 | [
"Cell communication",
"Cellular processes"
] |
65,323,969 | https://en.wikipedia.org/wiki/Metal%E2%80%93metal%20bond | In inorganic chemistry, metal–metal bonds describe attractive interactions between metal centers. The simplest examples are found in bimetallic complexes. Metal–metal bonds can be "supported", i.e. be accompanied by one or more bridging ligands, or "unsupported". They can also vary according to bond order. The topic of metal–metal bonding is usually discussed within the framework of coordination chemistry, but the topic is related to extended metallic bonding, which describes interactions between metals in extended solids such as bulk metals and metal subhalides.
Unsupported metal–metal bonds
An example of a metal–metal bond is found in dimanganese decacarbonyl, Mn2(CO)10. As confirmed by X-ray crystallography, a pair of Mn(CO)5 units are linked by a bond between the Mn atoms. The Mn-Mn distance (290 pm) is short. Mn2(CO)10 is a simple and clear case of a metal-metal bond because no other atoms tie the two Mn atoms together.
When several metals are linked by metal-metal bonds, the compound or ion is called a metal cluster. Many metal clusters contain several unsupported M–M bonds. Some examples are M3(CO)12 (M = Ru, Os) and Ir4(CO)12.
A subclass of unsupported metal–metal bonded arrays are linear chain compounds. In such cases the M–M bonding is weak as signaled by longer M–M bonds and the tendency of such compounds to dissociate in solution.
Supported metal–metal bonds
In many compounds, metal-metal bonds are accompanied by bridging ligands. In those cases, it is difficult to state unequivocally that the metal-metal bond is the cohesive force binding the two metals together. Diiron nonacarbonyl is such an example. Another example of a supported metal–metal bond is cyclopentadienyliron dicarbonyl dimer, [(C5H5)Fe(CO)2]2. In the predominant isomers of this complex, the two Fe centers are joined not only by an Fe–Fe bond, but also by bridging CO ligands. The related cyclopentadienylruthenium dicarbonyl dimer features an unsupported Ru–Ru bond. Many metal clusters contain several supported M–M bonds. Further examples are Fe3(CO)12 and Co4(CO)12.
Multiple metal–metal bonds
In addition to M–M single bonds, metal pairs can be linked by double, triple, quadruple, and in a few cases, quintuple bonds. Isolable complexes with multiple bonds are most common among the transition metals in the middle of the d-block, such as rhenium, tungsten, technetium, molybdenum and chromium. Typically, the coligands are π-donors, not π-acceptors. Well-studied examples are the tetraacetates, such as dimolybdenum tetraacetate (quadruple bond) and dirhodium tetraacetate (single bond). Mixed-valence diruthenium tetraacetates have fractional M–M bond orders, i.e., 2.5 for [Ru2(OAc)4(H2O)2]+.
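As a worked illustration of how a fractional bond order arises, consider the electron count for the Ru25+ core of [Ru2(OAc)4(H2O)2]+; the σ2π4δ2(δ*π*)3 configuration used here is the common textbook assignment, stated for illustration:

bond order = (bonding electrons − antibonding electrons) / 2 = (8 − 3) / 2 = 2.5

The eight electrons in the σ, π and δ orbitals are bonding, while the three electrons in the δ* and π* orbitals are antibonding, leaving a net bond order of 2.5.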
The complexes Nb2X6(SR2)3 adopt face-sharing bioctahedral structures (X = Cl, Br; SR2 = thioether). As dimers of Nb(III), they feature double metal–metal bonds, the maximum possible for a pair of metals with d2 configuration.
Hexa(tert-butoxy)ditungsten(III) is a well studied example of a complex with a metal–metal triple bond.
References
Cluster chemistry | Metal–metal bond | [
"Chemistry"
] | 819 | [
"Cluster chemistry",
"Organometallic chemistry"
] |
65,327,603 | https://en.wikipedia.org/wiki/Sakura%20Pascarelli | Sakura Pascarelli is an Italian physicist and the scientific director at the European XFEL. Her research focuses on the study of matter at extreme conditions of pressure, temperature and magnetic fields, in particular using X-ray absorption spectroscopy (XAS) and X-ray Magnetic Linear and Circular Dichroism (XMCD).
Early life and education
Pascarelli was born in Japan. She received a Laurea in Physics from La Sapienza (Rome, Italy) and a PhD degree in Physics at the Joseph Fourier University (Grenoble, France). She is an accomplished swimmer.
Research and career
Pascarelli was the head of the Matter at Extremes Group within the Experiment Division of the European Synchrotron Radiation Facility in Grenoble, France, and in charge of the X-ray absorption spectroscopy beamlines. She joined the European XFEL in Hamburg, Germany, as scientific director.
Pascarelli is a member of the scientific advisory committee of SLAC's Stanford Synchrotron Radiation Lightsource.
References
External links
Italian women physicists
Condensed matter physicists
Sapienza University of Rome alumni
Living people
Year of birth missing (living people)
21st-century Italian physicists | Sakura Pascarelli | [
"Physics",
"Materials_science"
] | 248 | [
"Condensed matter physicists",
"Condensed matter physics"
] |
65,327,793 | https://en.wikipedia.org/wiki/Pascal%20Elleaume | Pascal Elleaume (1956–2011) was a French physicist and a pioneer in the field of synchrotron radiation and synchrotron light sources, where his work on radiation from insertion devices was pivotal. He died in 2011.
Education and career
Elleaume studied at the Ecole Normale Superieure in Paris, France, where he completed his PhD on turbulence in helium and obtained his agrégation in 1978. After completing his PhD, he spent a year as a visiting scholar at Berkeley, then joined the French Alternative Energies and Atomic Energy Commission (CEA), where he began working on free-electron lasers with Yves Petroff.
He joined the European Synchrotron Radiation Facility (ESRF) in 1986, where he became the director of the accelerator division.
Life and family
Elleaume married in October 1992 and had three children. He died in an avalanche in the French Alps in 2011.
References
Free-electron lasers
Turbulence
Helium
Governmental nuclear organizations
École Normale Supérieure alumni
University of California, Berkeley alumni
Particle accelerators
Tunisian physicists
Particle physicists
1956 births
2011 deaths | Pascal Elleaume | [
"Physics",
"Chemistry",
"Engineering"
] | 224 | [
"Turbulence",
"Nuclear organizations",
"Governmental nuclear organizations",
"Particle physics",
"Particle physicists",
"Fluid dynamics"
] |
65,333,038 | https://en.wikipedia.org/wiki/Hexa%28tert-butoxy%29ditungsten%28III%29 | Hexa(tert-butoxy)ditungsten(III) is a coordination complex of tungsten(III). It is one of the homoleptic alkoxides of tungsten. A red, air-sensitive solid, the complex has attracted academic attention as the precursor to many organotungsten derivatives. It is an example of a charge-neutral complex featuring a W≡W bond, arising from the coupling of a pair of d3 metal centers.
Synthesis
Hexa(tert-butoxy)ditungsten(III) was first discovered by M. H. Chisholm and M. Extine in 1975. They synthesized it by reacting tungsten(III) dialkylamides with t-BuOH in organic solvents. They also found that W2(O-t-Bu)6 reacts with carbon dioxide in toluene at room temperature to form green W2(O-t-Bu)4(O2CO-t-Bu)2. Under a CO2 atmosphere, this compound can be isolated in pure form from cooled toluene. In the absence of CO2, W2(O-t-Bu)4(O2CO-t-Bu)2 reverts to W2(O-t-Bu)6.
W2(O-t-Bu)6 can also be synthesized from NaW2Cl7(THF)5 in THF with the addition of NaO-t-Bu at ambient temperature over 18 hours. After the reaction, the solvent is removed, leaving a red slurry. Cooling (−35 °C) followed by decantation or vacuum filtration separates red crystalline W2(O-t-Bu)6. The salt metathesis reaction from the THF complex of ditungsten heptachloride is as follows:
NaW2Cl7(THF)5 + 6 NaO-t-Bu → W2(O-t-Bu)6 + 7 NaCl + 5 THF
Characteristics
These needle-like red crystals are highly unstable toward oxygen and water and dissolve in most organic solvents, such as diethyl ether and pentane. The compound exists as a dimer in which two tungsten(III) centers are joined by a triple bond. The two W(III) centers are pseudotetrahedral and adopt a staggered, ethane-like conformation, similar to the dimolybdenum analogue. The structure of the compound was investigated by Chisholm and his team using single-crystal X-ray diffraction, performed on a C-centered monoclinic crystal. In the C2/c space group, there is one half-molecule on an inversion center and one whole molecule in a general position. There are several orientations for each position, which leads to apparent W–W lengths ranging from 1.74 to 2.53 Å. Of the t-butyl groups on each W, one is directed away from the W–W axis (distal) and two lie over it (proximal). This arrangement had been calculated to be the best at minimizing steric repulsion.
The compound decomposes at 200 °C into WO2, t-BuOH, and isobutylene, with a trace amount of water. It reacts readily with alkynes or nitriles to generate RC≡W(O-t-Bu)3, or both RC≡W(O-t-Bu)3 and N≡W(O-t-Bu)3. With an excess of nitrile, only N≡W(O-t-Bu)3 is formed, along with RC≡CR. RC≡W(O-t-Bu)3 is an important catalyst for alkyne metathesis, while N≡W(O-t-Bu)3 is a catalyst for nitrogen exchange of nitriles. The C≡W bond in RC≡W(O-t-Bu)3 was concluded to behave as a polarized C(−)≡W(+). Thus, the catalytic metathesis reaction begins with the tungsten center acting as an electrophile attacking the acetylene, followed by the alkylidyne carbon acting as a nucleophile attacking the acetylenic carbon atom.
Reactivity
Carbon monoxide adds to W2(O-t-Bu)6 to form W2(O-t-Bu)6(CO). The carbonyl group is a bridging ligand. This compound can further react with i-PrOH to generate W4(μ-CO)2(O-i-Pr)12.
Alkynes
C≡C bonds are cleaved by hexa(tert-butoxy)ditungsten(III), giving a pair of tungsten alkylidyne complexes:

W2(O-t-Bu)6 + RC≡CR → 2 RC≡W(O-t-Bu)3 (R = Me, Et, Si(CH3)3)

Although the reaction applies to many alkynes, PhC≡CPh and Me3SiC≡CSiMe3 do not react.
This reaction proceeds through an alkyne adduct at the μ-perpendicular site, which lengthens both the W–W bond and the C–C (alkyne) bond. This intermediate can be regarded as analogous to a dimetallatetrahedrane and reacts further, via an internal redox reaction, to give RC≡W(O-t-Bu)3. The resulting RC≡W(O-t-Bu)3 is a catalyst for metathesis reactions: it reacts with internal alkynes in metathesis reactions, and with terminal alkynes in both metathesis reactions and polymerizations.
Besides simple metathesis reactions, W2(O-t-Bu)6 also reacts with 3-hexyne in a 1:1 molar ratio to form the triangular tritungsten complex [W3(O-t-Bu)5(μ-O)(μ-CEt)O]2. This reaction takes about 3 days at 75–80 °C in toluene and proceeds by a two-step mechanism: first a C≡C/W≡W metathesis reaction, followed by formal addition of the carbyne (W≡C) to the alkoxide (W2):
W2(O-t-Bu)6 + RC≡CR → 2[RC≡W(O-t-Bu)3]
W2(O-t-Bu)6 + RC≡W(O-t-Bu)3 → W3(O-t-Bu)5(μ-O)(μ-CEt)O → [W3(O-t-Bu)5(μ-O)(μ-CEt)O]2
W2(O-t-Bu)6 also reacts with EtC≡CC≡CEt to form (t-Bu-O)3W≡CC≡W(O-t-Bu)3:
W2(O-t-Bu)6 + EtC≡CC≡CEt → (t-Bu-O)3W≡CC≡W(O-t-Bu)3 + EtC≡CEt
This compound, however, does not act as a metathesis catalyst.
Nitriles
Similar to the reaction with alkynes, W2(O-t-Bu)6 cleaves nitriles to give the alkylidyne and nitride:
W2(O-t-Bu)6 + RC≡N → RC≡W(O-t-Bu)3 + N≡W(O-t-Bu)3
Although W2(O-t-Bu)6 reacts with nitriles, it does not react with dinitrogen (N≡N).
When both C≡C and C≡N bonds are present, W2(O-t-Bu)6 reacts more rapidly with the C≡N bond than with the C≡C bond. An example is the reaction of W2(O-t-Bu)6 with EtC≡CCN in the presence of quinuclidine (quin):
W2(O-t-Bu)6 + EtC≡CCN + quin → EtC≡CC≡W(O-t-Bu)3(quin) + N≡W(O-t-Bu)3
On the other hand, the metathesis catalyst MeC≡W(O-t-Bu)3 reacts more rapidly with the C≡C bond than with the C≡N bond. A similar reaction with EtC≡CCN and quinuclidine produces a different product:
MeC≡W(O-t-Bu)3 + EtC≡CCN + quin → NCC≡W(O-t-Bu)3(quin) + EtC≡CMe
Other reactions
W2(O-t-Bu)6 cleaves nitrosobenzene to give [W(O-t-Bu)2(NPh)]2(μ-O)(μ-O-t-Bu)2.
Allenes
W2(O-t-Bu)6 can also react with allene (H2C=C=CH2) to form adducts. In a 1:1 ratio, the allene adds across the W2 unit to form a v-shaped bridging structure:
W2(O-t-Bu)6 + H2C=C=CH2 → W2(O-t-Bu)6(C3H4)
This compound is synthesized at 0 °C in hexane and crystallized at −72 °C. It decomposes readily in solution at 0 °C and in the crystalline state at ~25 °C, but is very stable at about −20 °C. The bridging allene lies parallel to the W–W bond. In a 1:2 ratio, the additional allene binds to a single metal center in a typical bonding mode:
W2(O-t-Bu)6(C3H4) + 2H2C=C=CH2 → W2(O-t-Bu)6(C3H4)2
The 1:1 adduct can further react with carbon monoxide to form a structure similar to that of the 1:2 adduct, but with carbon monoxide in place of the second allene:
W2(O-t-Bu)6(C3H4) + 2CO → W2(O-t-Bu)6(C3H4)(CO)2
The analogous reaction using methylallene (MeHC=C=CH2) in place of allene is also feasible, forming similar structures.
References
Alkoxides
Tungsten(III) compounds | Hexa(tert-butoxy)ditungsten(III) | [
"Chemistry"
] | 2,220 | [
"Bases (chemistry)",
"Alkoxides",
"Functional groups"
] |
65,333,929 | https://en.wikipedia.org/wiki/Three-wave%20equation | In nonlinear systems, the three-wave equations, sometimes called the three-wave resonant interaction equations or triad resonances, describe small-amplitude waves in a variety of non-linear media, including electrical circuits and non-linear optics. They are a set of completely integrable nonlinear partial differential equations. Because they provide the simplest, most direct example of a resonant interaction, have broad applicability in the sciences, and are completely integrable, they have been intensively studied since the 1970s.
Informal introduction
The three-wave equation arises by consideration of some of the simplest imaginable non-linear systems. Linear differential systems have the generic form

Dψ = 0

for some differential operator D. The simplest non-linear extension of this is to write

Dψ − εψ² = 0.

How can one solve this? Several approaches are available. In a few exceptional cases, there might be known exact solutions to equations of this form. In general, these are found in some ad hoc fashion after applying some ansatz. A second approach is to assume that ε ≪ 1 and use perturbation theory to find "corrections" to the linearized theory. A third approach is to apply techniques from scattering matrix (S-matrix) theory.
In the S-matrix approach, one considers particles or plane waves coming in from infinity, interacting, and then moving out to infinity. Counting from zero, the zero-particle case corresponds to the vacuum, consisting entirely of the background. The one-particle case is a wave that comes in from the distant past and then disappears into thin air; this can happen when the background is absorbing, deadening or dissipative. Alternately, a wave appears out of thin air and moves away. This occurs when the background is unstable and generates waves: one says that the system "radiates". The two-particle case consists of a particle coming in, and then going out. This is appropriate when the background is non-uniform: for example, an acoustic plane wave comes in, scatters from an enemy submarine, and then moves out to infinity; by careful analysis of the outgoing wave, characteristics of the spatial inhomogeneity can be deduced. There are two more possibilities: pair creation and pair annihilation. In this case, a pair of waves is created "out of thin air" (by interacting with some background), or disappear into thin air.
Next on this count is the three-particle interaction. It is unique, in that it does not require any interacting background or vacuum, nor is it "boring" in the sense of a non-interacting plane-wave in a homogeneous background. Writing ψ1, ψ2, ψ3 for these three waves moving from/to infinity, this simplest quadratic interaction takes the form of

∂ψ1/∂t = η1 ψ2* ψ3*

and cyclic permutations thereof. This generic form can be called the three-wave equation; a specific form is presented below. A key point is that all quadratic resonant interactions can be written in this form (given appropriate assumptions). For time-varying systems where ψ can be interpreted as energy, one may write

dψ1/dt = η1 ψ2* ψ3*

for a time-dependent version.
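For illustration, the time-dependent system can be integrated numerically. The following minimal Python sketch is not taken from the literature: the sign choice η = (+1, −1, −1) and the initial amplitudes are assumptions made for demonstration, and the printed quantities are Manley–Rowe-type invariants that remain constant under that sign choice.

# Sketch: integrate d(psi_i)/dt = eta_i * conj(psi_j) * conj(psi_k), (i, j, k) cyclic.
import numpy as np
from scipy.integrate import solve_ivp

eta = np.array([1.0, -1.0, -1.0])  # assumed interaction coefficients

def rhs(t, y):
    a = y[:3] + 1j * y[3:]  # repack the real state vector into complex amplitudes
    da = np.array([
        eta[0] * np.conj(a[1]) * np.conj(a[2]),
        eta[1] * np.conj(a[2]) * np.conj(a[0]),
        eta[2] * np.conj(a[0]) * np.conj(a[1]),
    ])
    return np.concatenate([da.real, da.imag])

a0 = np.array([1.0, 0.5, 0.1], dtype=complex)  # assumed initial amplitudes
sol = solve_ivp(rhs, (0.0, 10.0), np.concatenate([a0.real, a0.imag]),
                rtol=1e-10, atol=1e-12)
a = sol.y[:3] + 1j * sol.y[3:]

# With eta = (+1, -1, -1), |a1|^2 + |a2|^2 and |a1|^2 + |a3|^2 are conserved;
# the printed variations should be near machine precision.
print(np.ptp(np.abs(a[0])**2 + np.abs(a[1])**2))
print(np.ptp(np.abs(a[0])**2 + np.abs(a[2])**2))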
Review
Formally, the three-wave equation is

∂ψ1/∂t + v1 · ∇ψ1 = η1 ψ2* ψ3*

together with its cyclic permutations in (1, 2, 3), where vi is the group velocity for the wave having wave-vector ki and angular frequency ωi, and ∇ is the gradient, taken in flat Euclidean space in n dimensions. The ηi are the interaction coefficients; by rescaling the waves, they can be taken to be ηi = ±1. By cyclic permutation, there are four classes of solutions. Writing η = η1η2η3, one has η = ±1. The η = +1 solutions are all equivalent under permutation. In 1+1 dimensions, there are three distinct η = −1 solutions, termed "explosive", "stimulated backscatter", and "soliton exchange". These correspond to very distinct physical processes. One interesting solution is termed the simulton; it consists of three comoving solitons, moving at a velocity v that differs from all three group velocities v1, v2, v3. This solution has a possible relationship to the "three sisters" observed in rogue waves, even though deep water does not have a three-wave resonant interaction.
The lecture notes by Harvey Segur provide an introduction.
The equations have a Lax pair, and are thus completely integrable. The Lax pair is a 3×3 matrix pair, to which the inverse scattering method can be applied, using techniques by Fokas. The class of spatially uniform solutions is known; these are given by the Weierstrass elliptic ℘-function. The resonant interaction relations are in this case called the Manley–Rowe relations; the invariants that they describe are easily related to the modular invariants g2 and g3.
That these appear is perhaps not entirely surprising, as there is a simple intuitive argument. Subtracting one wave-vector from the other two, one is left with two vectors that generate a period lattice. All possible relative positions of two vectors are given by Klein's j-invariant, thus one should expect solutions to be characterized by this.
A variety of exact solutions for various boundary conditions are known. A "nearly general solution" to the full non-linear PDE for the three-wave equation has recently been given. It is expressed in terms of five functions that can be freely chosen, and a Laurent series for the sixth parameter.
Applications
Some selected applications of the three-wave equations include:
In non-linear optics, tunable lasers covering a broad frequency spectrum can be created by parametric three-wave mixing in quadratic (χ(2)) nonlinear crystals.
Surface acoustic waves and electronic parametric amplifiers.
Deep water waves do not in themselves have a three-wave interaction; however, this is evaded in multiple scenarios:
Deep-water capillary waves are described by the three-wave equation.
Acoustic waves couple to deep-water waves in a three-wave interaction,
Vorticity waves couple in a triad.
A uniform current (necessarily spatially inhomogeneous by depth) has triad interactions.
These cases are all naturally described by the three-wave equation.
In plasma physics, the three-wave equation describes coupling in plasmas.
References
Nonlinear optics
Nonlinear systems
Differential equations | Three-wave equation | [
"Mathematics"
] | 1,234 | [
"Mathematical objects",
"Differential equations",
"Equations",
"Nonlinear systems",
"Dynamical systems"
] |
75,274,295 | https://en.wikipedia.org/wiki/SCQ1 | (S)-SCQ1 is a drug which acts as a potent and selective antagonist for the 5-HT2B and 5-HT2C serotonin receptors, but with only modest affinity for the closely related 5-HT2A receptor and other targets such as 5-HT7. Since most currently available 5-HT2-class ligands have relatively poor selectivity and bind to all three subtypes, the selectivity of (S)-SCQ1 is expected to be useful for studying 5-HT2A receptor-mediated responses in the absence of 5-HT2B and 5-HT2C activation.
See also
SB-206553
SB-242,084
Z3517967757
References
5-HT2C antagonists
Benzochromenes
Ketones
Spiro compounds
Quinuclidines | SCQ1 | [
"Chemistry"
] | 183 | [
"Organic compounds",
"Ketones",
"Functional groups",
"Spiro compounds"
] |
75,277,275 | https://en.wikipedia.org/wiki/QLever | QLever (pronounced , as in "clever") is an open-source triplestore and graph database developed by a team at the University of Freiburg led by Hannah Bast. QLever performs high-performance queries of Semantic Web knowledge bases, including full-text search within text corpora. A specialized user interface for QLever predictively autocompletes SPARQL queries.
Characteristics
A 2023 study found that, compared to other triplestores, QLever achieved fast execution of successful queries but offered limited support for complex SPARQL constructs.
Contents
The official QLever instance provides API endpoints for querying the following datasets:
Wikidata
Wikimedia Commons
Freebase
OpenStreetMap
OpenHistoricalMap
UniProt
PubChem
DBLP
OpenCitations
IMDb
Integrated Authority File
YAGO
DBpedia
Wallscope Olympics database
For OpenStreetMap and OpenHistoricalMap data, the QLever engine supports a limited subset of GeoSPARQL functions, supplemented by a precomputed subset of GeoSPARQL relationships stored as dedicated triples.
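For illustration, a QLever endpoint can be queried like any server implementing the standard SPARQL protocol. In the minimal Python sketch below, the endpoint URL and the example query are assumptions made for demonstration; substitute the address of the instance you use.

# Sketch: send a SPARQL query to a QLever endpoint and print the result bindings.
import requests

ENDPOINT = "https://qlever.cs.uni-freiburg.de/api/wikidata"  # assumed URL

QUERY = """
PREFIX wd: <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
SELECT ?item WHERE { ?item wdt:P31 wd:Q146 . } LIMIT 5
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
response.raise_for_status()
for binding in response.json()["results"]["bindings"]:
    print(binding["item"]["value"])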
Adoption
Besides the official instance, the QLever engine also powers the official SPARQL endpoint of DBLP. QLever is one of the candidates to replace Blazegraph as the triplestore for the Wikidata Query Service.
See also
List of SPARQL implementations
References
Further reading
External links
Triplestores
Graph databases
University of Freiburg | QLever | [
"Mathematics"
] | 312 | [
"Graph databases",
"Mathematical relations",
"Graph theory"
] |
75,278,006 | https://en.wikipedia.org/wiki/Jaktinib | Jaktinib is a Janus kinase inhibitor under development for myelofibrosis. It is a deuterated analog of momelotinib.
References
Janus kinase inhibitors
Deuterated compounds
Morpholines | Jaktinib | [
"Chemistry"
] | 49 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
75,279,549 | https://en.wikipedia.org/wiki/Antimicrobial%20photodynamic%20therapy | Antimicrobial photodynamic therapy (aPDT), also referred to as photodynamic inactivation (PDI), photodisinfection (PD), or photodynamic antimicrobial chemotherapy (PACT), is a photochemical antimicrobial method that has been studied for over a century. Supported by in vitro, in vivo and clinical studies, aPDT offers a broad-spectrum treatment option for infections, particularly in the context of rising antimicrobial resistance. Its multi-target mode of action makes aPDT a viable therapeutic strategy against drug-resistant microorganisms. The procedure involves the application of photosensitizing compounds, also called photoantimicrobials, which, upon activation by light, generate reactive oxygen species (ROS). These ROS oxidize cellular components of a wide array of microbes, including pathogenic bacteria, fungi, protozoa, algae, and viruses.
Historical perspective
In the early 20th century, decades before the first chemical antibiotics were developed, Dr. Niels Finsen discovered that blue light could be used to treat skin infections. In the following years, Finsen's phototherapy was used in many European medical institutions as a topical antimicrobial. In 1903, he was awarded the Nobel Prize in Physiology or Medicine "in recognition of his contribution to the treatment of diseases, especially lupus vulgaris, with concentrated light radiation, whereby he has opened a new avenue for medical science".
Similarly, at the beginning of the 20th century, Oscar Raab, a German medical student supervised by Professor Herman von Tappeiner, serendipitously observed the antimicrobial effects of light-activated dyes. While conducting experiments on the viability of motile protozoa, Raab noticed that fluorescent dyes, like some acridine and xanthene dyes, could kill stained microbes when sunlight was directed onto the stained samples. These effects were particularly pronounced during the summer, when sunlight is brightest. This chance observation highlighted the ability of certain fluorescent compounds, now termed "photosensitizers" (PS), to artificially induce light sensitivity in microorganisms and enhance the known antimicrobial effects of sunlight. Shortly thereafter, von Tappeiner and Jodlbauer discovered that oxygen was crucial for light-mediated reactions, leading to the creation of the term "photodynamische Wirkung" (photodynamic effect).
However, it was not until the 1970s that researchers began to systematically explore the potential of photodynamic therapy for medical applications. Since then, significant progress has been made in understanding the underlying mechanisms and optimizing the efficacy of photodynamic therapy (PDT) for the treatment of cancers and age-related macular degeneration. Today, the branch of PDT focused on killing microbial cells is considered an option to prevent and treat infectious diseases in a manner that avoids the emergence of antimicrobial drug resistance.
Mechanism of action
The photochemical principle underlying antimicrobial photodynamic therapy involves the activation of a photosensitizer, a light-sensitive compound that can locally generate reactive products, such as radicals and reactive oxygen species (ROS), upon exposure to specific wavelengths of light. An ideal photosensitizer selectively accumulates in the target microbial cells, where it remains inactive and non-toxic until it is activated by irradiation with light of a specific wavelength. This activation promotes the photosensitizer molecules to a short-lived excited state that possesses different chemical reactivity relative to its ground-state counterpart. When the photosensitizer molecule is in an excited triplet state, it can induce local Type 1 photodynamic reactions by direct contact with molecular oxygen, inorganic ions or biological targets. These redox reactions (Type 1) involve charge transfer, by donation of an electron (e−) or hydrogen ion (H+), to form radicals and ROS, such as the superoxide radical anion, hydrogen peroxide and hydroxyl radicals. The excited triplet-state photosensitizer can also transfer energy to triplet-state molecular oxygen, producing singlet oxygen via Type 2 photodynamic reactions. The photoinduced burst of reactive species affects cellular redox regulation and can cause oxidative damage to vital structures made of proteins, lipids, carbohydrates and nucleic acids, leading to localized cellular death.
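Schematically, simplifying the description above (PS = photosensitizer, ISC = intersystem crossing), the two pathways can be summarized as:

PS + hν → 1PS* → 3PS* (ISC)
Type 1: 3PS* + substrate → radical intermediates → O2•−, H2O2, HO• (in the presence of O2)
Type 2: 3PS* + 3O2 → PS + 1O2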
Efficacy against drug-resistant pathogens
The efficacy of antimicrobial photodynamic therapy, using various distinct photosensitizers, has been studied since the 1990s. Most studies have yielded positive outcomes, often achieving disinfection levels, as defined by infection control guidelines, exceeding 5 log10 (99.999%) microbial inactivation. Over the past decade, a collection of novel photoantimicrobials has been developed, exhibiting improved efficiency of antimicrobial photodynamic action against various bacterial species. These studies have primarily focused on the inactivation of planktonic cultures, which are free-floating bacterial cells. This method serves as a convenient approach for high-throughput antimicrobial screening of multiple compounds, such as evaluating whether minor chemical modifications to a given photosensitizer can enhance antimicrobial efficacy. However, when present in biofilms, microbial populations can exhibit distinct characteristics compared to their planktonic counterparts, including significantly higher tolerance towards antimicrobials (up to 1,000-fold). Among the various factors contributing to this enhanced tolerance is the biofilm matrix, composed of extracellular polymeric substance (EPS). The EPS can shield constituent bacteria from antimicrobials through dual mechanisms: 1) by impeding the penetration of antimicrobial substances throughout the biofilm, owing to interactions between positively charged agents and negatively charged EPS residues, and 2) through redox processes and π-π interactions involving aromatic surfaces, which generally act to dismutate the incoming active substance. EPS must be considered in the rational design of antimicrobial photosensitizers, because the densely cross-linked matrix may also obstruct diffusion of the photosensitizer into deeper biofilm layers.
The multi-target mechanisms of aPDT avoid antimicrobial resistance, which continues to be a major global health concern. The likelihood of developing resistance in pathogens is higher for antimicrobial strategies that have a specific target structure, following the key-lock principle, embodied in many antibiotics or antiseptics. In such cases, pathogens can evade the antimicrobial challenge through specific mutations, upregulation of efflux pumps, or production of enzymes that deactivate antimicrobials. In contrast, aPDT acts through a variety of non-specific oxidative mechanisms targeting multiple structures and pathways simultaneously, making the technique far less prone to resistance. The possibility of bacteria developing tolerance to aPDT has therefore been deemed highly unlikely. Several studies have demonstrated the efficacy of aPDT against various drug-resistant pathogens, including the World Health Organization (WHO) priority pathogens, such as Staphylococcus aureus, Pseudomonas aeruginosa, Klebsiella pneumoniae, Acinetobacter baumannii, Enterococcus faecium, Candida auris, Escherichia coli and many others.
Light sources
Light is required to excite the photosensitizer, which leads to the photochemical production of ROS. To efficiently transfer photon energy to the electronic structure of the photosensitizer, the wavelength of the light source must be matched to the absorption spectrum of the photosensitizer. Different light sources have been used in aPDT, such as lamps (e.g. tungsten filament, xenon arc and fluorescent lamps), lasers and light-emitting diodes (LEDs). Lamps typically emit white light, but a filter can be used to select the appropriate wavelength to be absorbed by the photosensitizer and to avoid undesired thermal effects. In contrast, lasers are monochromatic light sources that can be easily coupled to optical fibers to access non-surface regions. LEDs are also monochromatic light sources, although their spectral emission bands are wider than those of lasers. However, the coupling of LEDs to optical fibers is not efficient, resulting in significant loss of light. More recently, organic LEDs (OLEDs) have been used in aPDT as wearable light sources, because they can be made more flexible, thinner, and lighter than conventional LEDs. Sunlight can also serve as a light source for aPDT; however, exact illumination parameters may be difficult to reproduce precisely.
Light dosimetry
aPDT results depend on the interplay of three physical quantities: irradiance, radiant exposure and exposure time. Irradiance is defined as the optical power of the light source in watts divided by the illuminated tissue area, conventionally described in square meters or centimeters (W/m2 or W/cm2). The irradiance, as a photodynamic parameter, is limited by the onset of adverse thermal effects in exposed tissue, or by degradative consequences to the sensitizer itself (commonly referred to as "photobleaching"). Radiant exposure, commonly termed the light dose, is given by the product of the irradiance and the exposure time in seconds (J/cm2); equivalently, it is the delivered energy divided by the illuminated area. This parameter is often limited by acceptable treatment times, because lengthy treatments can be unacceptable in a point-of-care setting. Fluence is a different physical quantity often used by aPDT practitioners, which accounts for the backscattering flux of the light-tissue interaction causing re-entry of photons back into the treated area.
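The relationships among these quantities are simple arithmetic, as the following minimal Python sketch shows. The numbers (0.5 W over 5 cm2 for 600 s) are illustrative assumptions, not clinical parameters.

# Sketch: compute irradiance (W/cm^2) and radiant exposure, or light dose (J/cm^2).
def irradiance_w_per_cm2(power_w: float, area_cm2: float) -> float:
    """Optical power delivered per unit illuminated area."""
    return power_w / area_cm2

def radiant_exposure_j_per_cm2(irradiance: float, seconds: float) -> float:
    """Light dose: irradiance multiplied by the exposure time."""
    return irradiance * seconds

E = irradiance_w_per_cm2(power_w=0.5, area_cm2=5.0)   # 0.1 W/cm^2
H = radiant_exposure_j_per_cm2(E, seconds=600.0)      # 60 J/cm^2
print(f"irradiance = {E:.2f} W/cm^2, radiant exposure = {H:.0f} J/cm^2")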
Photosensitizers
Photodynamic action relies on absorption of electromagnetic radiation by the photosensitizing compound and conversion of this energy into redox chemical reactions or transfer to ground-state oxygen, producing the highly oxidizing species, singlet oxygen. Consequently, the photosensitizer can be considered a photocatalyst, but it is also true that the sensitizer directly interacts with target moieties such as microbes to establish, for example, molecular targeting. This explains why not all photosensitizers are useful as photoantimicrobials.
The most effective photosensitizer molecules carry a positive charge (cationic). This promotes electrostatic attraction with negatively charged groups found on microbial cell surfaces (e.g. phosphate, carboxylate, sulfate), thus ensuring that during illumination, production of reactive oxygen species occurs in close contact with the targeted cellular population. Consequently, negatively charged photosensitizers are less effective, particularly against gram-negative bacterial cells that carry a strongly negative zeta potential.
The most widely employed photosensitizer in clinical practice is the phenothiazine derivative, methylene blue, which carries a +1 charge. Methylene blue is also favored due to its long record of safe use in patients, both in surgical staining and the systemic treatment of methemoglobinemia. Many other photosensitizers have been suggested, from various chemical classes, such as porphyrins, phthalocyanines and xanthenes, but the requirement for cationic nature and proven safety for human/animal use represents a high barrier to new chemical entity development.
aPDT Enhancement by inorganic salts and gold nanoparticles
It was discovered in 2015 that the addition of inorganic salts can potentiate aPDT by several orders of magnitude, and may even allow oxygen-independent photoinactivation to take place. Potassium iodide (KI) is the most relevant example. Other inorganic salts such as potassium thiocyanate (KSCN), potassium selenocyanate (KSeCN), potassium bromide (KBr), sodium nitrite (NaNO2) and even sodium azide (NaN3, toxic) have also been shown to increase the killing of a broad range of pathogens by up to one million times.
The addition of KI at concentrations up to 100 mM allows gram-negative bacteria to be killed by photosensitizers that have no effect on their own, and this was shown to be effective in several animal models of localized infections. KI was shown to be effective in human AIDS patients with oral candidiasis who were treated with methylene blue aPDT. Oral consumption of saturated KI solution (4–6 g KI/day) is a standard treatment for some deep fungal infections of the skin.
The photochemical mechanisms of action are complex. KI can react with singlet oxygen to form free molecular iodine plus hydrogen peroxide, which show synergistic and long-lived antimicrobial effects, as well as forming short-lived, reactive iodine radicals. Type 1 photosensitizers can carry out direct electron transfer to form iodine radicals, even in the absence of oxygen. KSCN reacts with singlet oxygen to form sulfur trioxide radicals, while KSeCN forms semi-stable selenocyanogen. KBr reacts with TiO2 photocatalysis to form hypobromite, while NaNO2 reacts with singlet oxygen to form unstable peroxynitrate. NaN3 quenches singlet oxygen so it can only react by electron transfer to form azide radicals. Relatively high concentrations of salts are necessary to trap the short-lived reactive species produced during aPDT.
The presence of gold nanoparticles is able to enhance the antimicrobial effectiveness of photosensitizers such as toluidine blue. Covalently linking nanoparticles to a photosensitizer also results in enhanced antimicrobial activity. The gold nanoparticles have two roles: firstly, they enhance light capture by the dye, and secondly, they help direct the decay pathway of the dye, encouraging a non-radiative process through the formation of excess bactericidal radical species.
Incorporation of photosensitizers into polymers
Photosensitizers can be incorporated into polymers resulting in materials that can kill microbes on their surfaces when activated by visible light. Such polymers have been shown to be effective in killing bacteria in a clinical environment. These self-disinfecting materials could, therefore, be used to coat surfaces in order to reduce the spread of disease-causing microbes in clinical environments as well as in food-processing and food-handling premises.
Advances in medicine and surgery have led to increasing reliance on a variety of medical devices, of which the catheter is the most widely used. Unfortunately, the non-shedding surfaces of catheters can be colonized by microbes, resulting in biofilm formation and, consequently, infection. Such catheter-related infections are a major cause of morbidity and mortality. Photosensitizers such as methylene blue and toluidine blue have been incorporated into silicone, the main polymer used in the manufacture of catheters, and the resulting composites have been shown to exert an antimicrobial effect when exposed to light of a suitable wavelength. Suitable irradiation of such materials has been shown to significantly reduce biofilm accumulation on their surfaces. This approach has potential for reducing the morbidity and mortality associated with catheter-associated infections.
Microbial resistance to aPDT
The generation of reactive oxygen species (ROS) in neutrophils, macrophages, and eosinophils is one of the primary means by which the human immune system combats infecting microbes. Highly adaptable microbes have evolved some protective strategies against these reactive molecules, upregulating antioxidant enzymes when exposed to ROS, suggesting one method by which microbes could develop increased resistance to aPDT. However, these biochemical responses are limited when compared to the magnitude of oxidative stress placed on the microbe by aPDT. Numerous investigations involving the repeated exposure of microorganisms to sublethal doses of antimicrobial photodynamic therapy (aPDT), followed by analysis of the resilience of the surviving cultured cells, consistently reveal no significant indication of the development of resistance in these microorganisms. In fact, in a study using methylene blue as a photosensitizer (PS) against MRSA, a series of aPDT exposures followed by re-cultivation tests, conducted over multiple years, showed that the microorganism's sensitivity to aPDT remained unchanged. In contrast, significant resistance to oxacillin emerged in fewer than twelve cycles.
Virulence inhibition by aPDT
Pathogenic microbes cause harm to their hosts and evade host defense mechanisms through a range of virulence factors, which include elements like exotoxins, endotoxins, capsules, adhesins, invasins, and proteases. While antibiotics can inactivate microbes and thereby prevent further production of host-damaging virulence factors, few have any effect on pre-existing virulence factors or those which are released during the bactericidal process. These factors can continue to produce damaging effects even after the offending microbial cells have been inactivated.
Unlike most antimicrobial drugs, antimicrobial photodynamic therapy (aPDT) is typically capable of neutralizing or diminishing the effectiveness of microbial virulence factors, or it can reduce their expression. The ability to inhibit microbial virulence is of particular interest because it could be related to accelerated infection site healing when compared to standard antimicrobial chemotherapy that only relies on bacteriostatic or bactericidal effects. Secreted virulence factors normally contain peptides, and it is well known that some amino acids (e.g. histidine, cysteine, tyrosine, tryptophan and methionine) are highly vulnerable to oxidation. Photodynamic reactions have demonstrated significant effectiveness in diminishing the harmful activity of lipopolysaccharides (LPS), proteases, and various other microbial toxins. The capability to not only eliminate the microbes causing an infection but also to inhibit expression of various molecules that lead to host tissue damage offers a significant benefit over traditional antimicrobial drugs.
Nasal decolonization
Nasal decolonization is recognized as a primary preventive intervention against the development of hospital-acquired infections (HAIs), especially surgical site infections (SSIs). HAIs represent a serious public health concern worldwide, with approximately 2.5 million HAIs annually in the United States leading to high morbidity and mortality (e.g. 30,000 deaths per year directly attributable to HAIs). HAIs affect one in every 31 hospitalized patients in the USA. Staphylococcus aureus, a gram-positive bacterium, is the most common cause of nosocomial pneumonia and surgical site infections and the second-most common cause of bloodstream, cardiovascular, and eye, ear, nose, and throat infections. S. aureus is by far the leading cause of skin and soft tissue HAIs, which can lead to potentially lethal bacteremia. SSIs are among the most common healthcare-associated infections, with substantial morbidity and mortality. An analysis of the 2005 Nationwide Inpatient Sample Database showed that S. aureus infections in inpatients tripled the duration of hospital stay, increasing the length of stay by an average of 7.5 days for surgical site infections. The anterior nares have been identified as the most consistent site of S. aureus colonization. Asymptomatic S. aureus nasal carriage has been reported in 20–55% of healthy individuals, increasing the risk of surgical site infection almost 4-fold. Critically, a growing proportion of these bacterial populations exhibit antibiotic resistance.
Nasal decolonization of S. aureus to reduce the incidence of SSIs is becoming part of the standard of care in both intensive care units (ICUs) and presurgical settings. Various decolonization strategies have been used in hospitals in an effort to reduce transmission of bacteria and decrease the overall infection rate. Decolonization acts both directly and indirectly by reducing the overall bioburden when broadly administered within an acute care setting. There is the added benefit of effects that go beyond the treated patients, extending to healthcare workers and other patients.
Several clinical studies performed using the current standard of care – intranasal mupirocin 2% antibiotic ointment – in surgical patients, concluded that this treatment significantly decreased the rate of hospital-acquired infections. One study found a 44% reduction in bloodstream infection rates when universal decolonization was used (e.g. intranasal mupirocin ointment and chlorhexidine body wash) in a trial involving 73,256 hospital patients. In addition, researchers have demonstrated that eradicating S. aureus from the anterior nares also utilizing intranasal mupirocin ointment reduced surgical site infection rates up to 58% in hospitalized patients who were nasal carriers. However, widespread use of mupirocin is associated with development of mupirocin-resistant strains of MRSA, with one hospital in Canada experiencing an increase from 2.7% to 65% resistant strains in three years. A targeted – as opposed to universal – decolonization approach is sometimes recommended because of increasing levels of mupirocin resistance. Currently, only universal decolonization with mupirocin has been demonstrated to be an effective control measure and therefore selective administration of mupirocin is contraindicated.
Nasal aPDT addresses the issues of antibiotic-induced resistance in multiple ways. As a site-specific therapy, it does not interfere with the overall microbiome, because it is not systemically administered. Moreover, phenothiazinium photosensitizers can target negatively charged bacterial cells while leaving zwitterionic host tissues unharmed. Treatment of the nose specifically targets the respiratory outlet, which is a key source of microbial colonization and dissemination through touch or normal respiration. Yet the nonspecific mechanisms of action effectively prevent the development of resistance.
The first large-scale study of aPDT for nasal decolonization, initially conducted exclusively on specific surgery types, demonstrated a significant 42% reduction in surgical site infections. The most significant reductions in SSI rates were in orthopedic and spinal surgeries. Currently, the use of nasal photodisinfection has been expanded to encompass a wide range of surgeries, resulting in an increased effect size with an approximate efficacy of 80%. The technique has been deployed in multiple Canadian hospitals since that time, and is undergoing clinical trials in the US for the same purpose.
Specialty-specific studies have also been carried out, especially in high-risk surgery of the spine. One large Canadian study found that the spine-surgery SSI rate decreased by 5.6 percentage points (from 7.2% to 1.6%) because of nasal aPDT combined with chlorhexidine bathing, saving on average $45–55 CAD per treated patient ($4.24 million CAD annually). This study concluded that "CSD/nPDT is both efficacious and cost-effective in preventing surgical site infections". No adverse events were reported.
Skin infections
Three main types of skin infection in humans have been treated with aPDT: 1) fungal infections, 2) mycobacterial infections and 3) cutaneous leishmaniasis. The most clinically used photosensitizers are methylene blue and curcumin, as well as the protoporphyrin IX precursors aminolevulinic acid (ALA) and methyl-ALA.
Fungal infections treated with aPDT have included both dermatophytosis and sporotrichosis. Infections with filamentous fungi such as Trichophyton spp., which express keratinase enzymes, usually affect the toenails (onychomycosis) but can also affect the skin (tinea). In onychomycosis (tinea unguium), efforts are often made to increase the penetration of photosensitizers into the toenail matrix before the application of light. Cutaneous tinea infections affecting the foot, scalp or crotch have been treated with ALA-aPDT. Sporotrichosis is a zoonosis caused by the dimorphic fungus Sporothrix spp., often transmitted by animal bites or scratches. It has been treated with aPDT mediated by ALA or methylene blue.
Skin infections can be caused by non-tuberculous mycobacteria, including rapidly growing species such as Mycobacterium marinum (swimmers' granuloma) and Mycobacterium avium complex. Some of these infections have been treated with aPDT using ALA in combination with conventional antibiotics.
Leishmaniasis is an intracellular parasitic infection caused by single-celled protozoa of the genus Leishmania. It is transmitted by the bites of infected sand flies found in both the Old World (Southern Europe and the Middle East) and the New World (Central and South America). Each year there are up to 2 million new cases and 70,000 deaths worldwide. Leishmaniasis infections can be cutaneous, mucosal, or visceral, with the latter type being the deadliest. Cutaneous leishmaniasis has been treated with aPDT mediated by either ALA or methylene blue, because the standard treatments, using systemic amphotericin B or topical pentavalent antimonial preparations, have several drawbacks.
Chronic wounds
Chronic wounds are those that do not heal within months of treatment. They are classified into three main types, i.e. venous, diabetic, and pressure ulcers, and are frequently sites of microbial infection that become a major impediment to patient recovery. aPDT offers a treatment option for chronic wounds because of its lethal action against drug-resistant microorganisms.
Diabetic foot ulcers (DFU) affect 10 to 25% of diabetic patients during their lives, requiring long and intensive hospitalization. The economic impact of DFU on worldwide health care systems is significant. DFU are frequently infected with a combination of fungi and bacteria, including the genera Serratia, Morganella, Proteus, Haemophilus, Acinetobacter, Enterococcus, and Staphylococcus. In addition, there is an increased likelihood of contracting resistant strains of these and other microorganisms in hospital settings. DFU patients commonly respond poorly to antibiotic therapy. Consequently, amputation becomes indicated to prevent other complications, such as osteonecrosis, thrombosis and more disseminated types of bacteremia.
aPDT has been successfully used to treat the diabetic foot, reducing the incidence of amputation in DFU patients. DFU patients treated with aPDT had only a 2.9% chance of amputation, compared with 100% in the control group (classical antibiotic therapy without aPDT). Using an initial cohort study of 62 patients, and subsequently of 218 patients, Tardivo and colleagues developed the Tardivo algorithm as a prognostic score to determine the risk of amputation and to predict the ideal therapeutic options for the treatment of DFU by aPDT. The score is based on three factors: Wagner's classification, signs of peripheral arterial disease (PAD), and the location of the foot ulcers. Values for the independent parameters are multiplied together, and for patients with scores below 16, treatment with aPDT is associated with approximately an 85% (95% CI) chance of recovery.
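A minimal Python sketch of the scoring logic described above follows. The integer encodings of the three clinical parameters are hypothetical placeholders introduced only for illustration; the published algorithm defines its own value tables for each factor.

# Sketch: multiply the three prognostic parameters and compare with the cutoff of 16.
def tardivo_score(wagner_grade: int, pad_signs: int, ulcer_location: int) -> int:
    """Product of the three independent prognostic parameters (hypothetical encodings)."""
    return wagner_grade * pad_signs * ulcer_location

score = tardivo_score(wagner_grade=2, pad_signs=1, ulcer_location=3)  # hypothetical inputs
if score < 16:
    print(f"score {score}: favorable prognosis with aPDT (~85% recovery reported)")
else:
    print(f"score {score}: high amputation risk")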
Oral infections
In the early 1990s, Emeritus Professor Michael Wilson of University College London (UCL) initiated scientific investigations into the potential of aPDT to combat bacteria of interest in dentistry. Since then, aPDT has been explored for various oral conditions, such as periodontal disease (gum disease), dental caries (cavities), endodontic treatment (root canal treatment), oral herpes and oral candidiasis. Research and clinical studies have shown promising results in reducing microbial load and treating infections. However, the efficacy of aPDT can vary based on factors like the type and concentration of photosensitizer used, light parameters, and the specific infection being treated.
While aPDT can be considered an adjunctive treatment to the standard of care, it is not currently intended to replace conventional therapies. This may change in the future, as drug-resistance patterns in the oral microbiome develop over time, making aPDT monotherapy increasingly necessary.
Some advantages of aPDT in oral infections include its broad-spectrum action: aPDT can target a wide range of microorganisms (e.g. bacteria, fungi, and viruses), including antibiotic-resistant strains, which matters because oral biofilm is composed of a wide variety of microorganisms. Another advantage is localized treatment, which can target specific infected areas, minimizing damage to healthy tissues and preserving the normal microbiota. To date, no significant adverse events associated with intraoral aPDT have been reported.
aPDT offers the dental practitioner an intraoral decontamination therapy that combines a minimally invasive nature, broad-spectrum action, rapid microbicidal effect, reduced antibiotic use, patient comfort, high compliance rates, activity against resistant strains, and minimal selection for microbial resistance.
Disinfection of blood-products
During the 1980s, the realization of the presence of the human immunodeficiency virus (HIV) in the global supply of donated blood led to the development of both thorough hemovigilance and of methods for the safe disinfection of microbial species in donated blood and blood products.
Blood is a mixture of cells and proteins and is routinely separated into its constituent parts for use in various therapies: platelets, red cells, and plasma might be used in specific replacement therapies, while proteins (typically clotting factors) derived from the plasma fraction are provided for the treatment of hemophilia, for example. Viruses, such as HIV, might be associated with the cellular components or suspended extracellularly, thus representing a threat of recipient infection whichever of these fractions is used. However, treatments aimed at viral inactivation/destruction must preserve cell/protein function, and this represents a barrier, particularly to cellular disinfection.
In terms of the use of photosensitizers, both methylene blue and riboflavin are employed for the photodisinfection of plasma, using visible or long-wave ultraviolet illumination respectively, while riboflavin is also used for disinfection of platelets. However, neither approach is employed for red blood cell concentrates. Among related approaches, the psoralen derivative Amotosalen, activated by long-wavelength UV light, is used in Europe for disinfection of plasma and platelets. However, this represents a photochemical reaction between the psoralen nucleus and viral nucleic acids, rather than a purely photodynamic effect.
Veterinary applications
In small animal practice, aPDT has been investigated for the treatment of different dermatological diseases with positive results. Although there are limited scientific data in this field, successful applications include otitis externa caused by multidrug-resistant Pseudomonas aeruginosa, dermatophytosis caused by Microsporum canis, and in association with itraconazole for sporotrichosis.
aPDT can also be used as a non-antibiotic platform for the treatment of infectious diseases in food-producing animals. Indeed, overuse of antimicrobials in these animals may lead to contamination of meat and milk by antibiotic-resistant bacteria or antibiotic residues. In this regard, aPDT has proven effective in the treatment of caseous lymphadenitis and streptococcal abscesses in sheep, and is demonstrably more effective than oxytetracycline (gold standard treatment) for bovine digital dermatitis. Other applications of aPDT include the treatment of mastitis in dairy cattle and sheep, and sole ulcers and surgical wound healing in cattle.
Exotic, zoo, and wildlife medicine is challenging and stands out as another field of possibility for aPDT. In this regard, aPDT has been successfully used to treat penguins suffering from pododermatitis and snakes with infectious stomatitis caused by gram-negative bacteria. Additionally, aPDT has been deployed as an adjuvant endodontic treatment for a traumatic tusk fracture in an elephant.
Food decontamination
The ever-increasing demand for food decontamination technologies has resulted in several studies focusing on the evaluation of the antimicrobial efficacy of aPDT in food and its effect on the organoleptic properties of the food products.
aPDT has shown antimicrobial efficacy against microbes on fruits, vegetables, seafood, and meat. The efficacy of aPDT used in this way depends on several factors, including the wavelength of light, temperature, and food-related factors such as acidity, surface properties, and water activity. Endogenous porphyrins, light-absorbing compounds located within certain bacteria, produce photosensitized reactions when exposed to light in the blue region of the spectrum (400-500 nm), which shows better antimicrobial efficacy than other wavelengths in the visible spectrum (e.g. green and red, 500-700 nm) in the absence of an exogenous photosensitizer.
The acidity of the food being disinfected plays an important role, as gram-positive bacteria have been found to be more sensitive to aPDT under acidic conditions, while gram-negative bacteria are more sensitive under alkaline conditions. Since aPDT is a surface decontamination technology, the surface characteristics of the treated material also play an important role. The irregular surfaces of products like pet food pellets can lead to a shadowing effect, where microorganisms hide in food crevices and are shielded from the light treatment. Flat surfaces can show better aPDT efficacy than spherical or irregular surfaces. Moreover, high water activity conditions contribute to the success of aPDT compared to low water activity conditions, due to the limited penetration of light in more desiccated foods. Other factors like irradiance, treatment time (or dose), microbial strain, and distance of the product from the light source also play a major role in the microbicidal efficacy of food-based aPDT.
A recent study demonstrated that appropriate concentrations of a photosensitizer potentially useful for food-based disinfection, combined with light at the appropriate peak absorption wavelength, resulted in upwards of a 99.999% (5 log10) reduction in MRSA counts and a complete kill of Salmonella. In addition to bacteria, aPDT has shown efficacy against fungal species. Optimization of the factors influencing antimicrobial efficacy, and the scalability of aPDT, are required for successful application in the food industry.
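For reference, the log-reduction figures quoted above convert to percentages as in this short sketch:

```python
import math

def log10_reduction(count_before: float, count_after: float) -> float:
    """Log10 reduction between pre- and post-treatment viable counts."""
    return math.log10(count_before / count_after)

def percent_kill(logs: float) -> float:
    """Percentage of cells killed for a given log10 reduction."""
    return 100.0 * (1.0 - 10.0 ** (-logs))

print(log10_reduction(1e7, 1e2))  # 5.0 logs
print(percent_kill(5.0))          # 99.999, the MRSA reduction quoted above
```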
References
External links
Academic journals focused on photodynamic science and technology
Journal of Photochemistry and Photobiology A: Chemistry
Journal of Photochemistry and Photobiology B: Biology
Photodiagnosis and Photodynamic Therapy
Photochemistry and Photobiology
Photochemical and Photobiological Sciences
Journal of Biophotonics
Lasers in Surgery and Medicine
Lasers in Medical Science
Professional associations promoting research on photodynamic therapy
International Photodynamic Association (IPA)
European Society for Photobiology (ESP)
American Society for Photobiology (ASP)
International Society for Optics and Photonics (SPIE)
Antimicrobials
Antibiotics | Antimicrobial photodynamic therapy | [
"Biology"
] | 7,386 | [
"Antibiotics",
"Biocides",
"Antimicrobials",
"Biotechnology products"
] |
75,281,541 | https://en.wikipedia.org/wiki/Anusha%20Shah | Anusha Shah is an Indian-born civil engineer. Elected the 159th President of the Institution of Civil Engineers, she became the third woman and first person of colour to hold the position, taking office in November 2023.
Early life and education
Shah grew up in Kashmir. She studied civil engineering at Jamia Millia Islamia in New Delhi, India. In 1999, after winning a Commonwealth scholarship, she studied for an MSc in water and environmental engineering at the University of Surrey in the UK.
Professional career
Shah specialised in water and environmental engineering from the late 1990s. After completing her first degree, she worked as a project engineer for New Delhi-based Development Alternatives, overseeing production of compressed earth building blocks, and sparking a career interest in sustainable development. She then joined IramConsult, a local partner of Royal HaskoningDHV, to work in Kashmir on rehabilitating a lake. After completing her masters, she was seconded by Black & Veatch to work for Clancy Docwra as a design engineer on United Utilities' Haweswater scheme in the UK's Lake District. In 2008, Shah moved to Jacobs, becoming technical director for sustainable solutions and utilities in 2010, and a director of the firm in 2018. In 2019, she moved to Arcadis, becoming senior director for resilient cities and UK climate adaptation lead. She is currently seconded to the Eiffage, Kier, Ferrovial and BAM Nuttall joint venture on High Speed 2 as senior director of environmental consents.
Institutional and board roles
Shah is a Fellow of the Institution of Civil Engineers. Prior to succeeding Keith Howells and becoming President of the Institution of Civil Engineers in November 2023, she served on the Thomas Telford board, the ICE Executive Board, ICE's Fairness, Inclusion and Respect panel, the ICE research and development panel and the ICE qualifications panel.
Shah is a non-executive director of the Met Office, UK and a Green Alliance trustee. She represents Arcadis at the London Climate Change Partnership and 50L Home Initiative of the World Business Council for Sustainable Development. She is a past chair of the Thames Estuary Partnership Board, which works towards sustainable management of the River Thames. Shah has been a chair and also a judge of the Ofwat Water Breakthrough Challenge for two consecutive terms.
Academia
In 2021, Shah was made an honorary professor by the University of Wolverhampton for knowledge transfer. In the same year, she received an honorary doctorate from the University of East London for her contributions to climate change in engineering. Shah is a visiting professor at the University of Edinburgh and is a Royal Academy of Engineering visiting professor at King's College London.
Awards
Shah won the Civil Engineering Contractors Association Fairness Inclusion and Respect Inspiring Engineers Award 2019, and was honoured in New Civil Engineer's 2019 Recognising Women in Engineering awards for her contributions to gender diversity. In 2020, she was named as one of Climate Reframe's leading BAME voices on climate change in the UK. In 2023, she was selected by the Women's Engineering Society as one of the UK's Top 50 Women in Sustainability.
References
Civil engineering
Year of birth missing (living people)
Living people
People from Jammu and Kashmir
Presidents of the Institution of Civil Engineers
21st-century Indian people
Jamia Millia Islamia alumni
Alumni of the University of Surrey | Anusha Shah | [
"Engineering"
] | 681 | [
"Civil engineering",
"Civil engineers"
] |
75,287,425 | https://en.wikipedia.org/wiki/Mitiperstat | Mitiperstat (AZD4831) is an irreversible inhibitor of myeloperoxidase and experimental drug in development for heart failure with preserved ejection fraction. It is being developed by AstraZeneca.
References
Enzyme inhibitors
Drugs developed by AstraZeneca
Pyrrolopyrimidines
Chloroarenes
Amines
Thioureas | Mitiperstat | [
"Chemistry"
] | 80 | [
"Pharmacology",
"Functional groups",
"Medicinal chemistry stubs",
"Amines",
"Pharmacology stubs",
"Bases (chemistry)"
] |
66,573,106 | https://en.wikipedia.org/wiki/Gaussian%20distribution%20on%20a%20locally%20compact%20Abelian%20group | Gaussian distribution on a locally compact Abelian group is a distribution on a second
countable locally compact Abelian group which satisfies the
conditions:
(i) is an infinitely divisible distribution;
(ii) if , where is the generalized
Poisson distribution, associated with a finite measure , and
is an infinitely divisible distribution, then the measure
is degenerated at zero.
This definition of the Gaussian distribution for the group $X = \mathbb{R}^n$ coincides with the classical one. The support of a Gaussian distribution $\gamma$ is a coset of a connected subgroup of $X$.
Let $Y$ be the character group of the group $X$. A distribution $\gamma$ on $X$ is Gaussian ($\gamma \in \Gamma(X)$) if and only if its characteristic function can be represented in the form

$\hat{\gamma}(y) = (x, y)\exp\{-\varphi(y)\}$,

where $x \in X$, $(x, y)$ is the value of a character $y \in Y$ at an element $x \in X$, and $\varphi$ is a continuous nonnegative function on $Y$ satisfying the equation $\varphi(u + v) + \varphi(u - v) = 2[\varphi(u) + \varphi(v)]$, $u, v \in Y$.
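As a sanity check in the classical case $X = Y = \mathbb{R}$ (assuming the standard form of the functional equation above), the normal distribution with mean $m$ and variance $\sigma^2$ fits this representation with

$\hat{\gamma}(y) = e^{imy}\exp\{-\tfrac{1}{2}\sigma^{2}y^{2}\}, \qquad (x, y) = e^{ixy}, \qquad \varphi(y) = \tfrac{1}{2}\sigma^{2}y^{2},$

and indeed $\varphi(u+v) + \varphi(u-v) = \tfrac{1}{2}\sigma^{2}[(u+v)^{2} + (u-v)^{2}] = \sigma^{2}(u^{2}+v^{2}) = 2[\varphi(u) + \varphi(v)]$.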
A Gaussian distribution is called symmetric if $x = 0$. Denote by $\Gamma(X)$ the set of Gaussian distributions on the group $X$, and by $\Gamma^{s}(X)$ the set of symmetric Gaussian distributions on $X$. If $\gamma \in \Gamma^{s}(X)$, then $\gamma$ is a continuous homomorphic image of a Gaussian distribution in a real linear space. This space is either finite dimensional ($\mathbb{R}^{n}$) or infinite dimensional (the space $\mathbb{R}^{\infty}$ of all sequences of real numbers in the product topology).
If a distribution $\mu$ can be embedded in a continuous one-parameter semigroup $(\mu_t)$, $t \ge 0$, of distributions on $X$, then $\mu \in \Gamma(X)$ if and only if

$\lim_{t \to 0} \dfrac{\mu_t(X \setminus U)}{t} = 0$

for any neighbourhood $U$ of zero in the group $X$.
Let $X$ be a connected group, and $\gamma \in \Gamma(X)$. If $X$ is not locally connected, then $\gamma$ is singular (with respect to a Haar measure on $X$). If $X$ is locally connected and has finite dimension, then $\gamma$ is either absolutely continuous or singular. The question of the validity of a similar statement on locally connected groups of infinite dimension is open, although on such groups it is possible to construct both absolutely continuous and singular Gaussian distributions.
It is well known that two Gaussian distributions in a linear space
are either mutually absolutely continuous or mutually singular. This
alternative is true for Gaussian distributions on connected groups
of finite dimension ().
The following theorem is valid, which can be considered as an analogue of Cramér's theorem on the decomposition of the normal distribution for locally compact Abelian groups.
Cramér's theorem on the decomposition of the Gaussian distribution for locally compact Abelian groups
Let $\xi$ be a random variable with values in a locally compact Abelian group $X$ with a Gaussian distribution, and let $\xi = \xi_1 + \xi_2$, where $\xi_1$ and $\xi_2$ are independent random variables with values in $X$. The random variables $\xi_1$ and $\xi_2$ are Gaussian if and only if the group $X$ contains no subgroup topologically isomorphic to the circle group $\mathbb{T}$, i.e. the multiplicative group of complex numbers whose modulus is equal to 1.
References
Probability distributions | Gaussian distribution on a locally compact Abelian group | [
"Mathematics"
] | 556 | [
"Functions and mappings",
"Mathematical relations",
"Mathematical objects",
"Probability distributions"
] |
66,574,331 | https://en.wikipedia.org/wiki/Bellman%20filter | The Bellman filter is an algorithm that estimates the value sequence of hidden states in a state-space model. It is a generalization of the Kalman filter, allowing for nonlinearity in both the state and observation equations. The principle behind the Bellman filter is an approximation of the maximum a posteriori estimator, which makes it robust to heavy-tailed noise. It is in general a very fast method, since at each iteration only the very last state value is estimated. The algorithm owes its name to the Bellman equation, which plays a central role in the derivation of the algorithm.
References
Control theory
Nonlinear filters
Signal estimation | Bellman filter | [
"Mathematics"
] | 129 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
66,579,401 | https://en.wikipedia.org/wiki/Living%20technology | Living technology is the field of technology that derives its functionality and usefulness from the properties that make natural organisms alive (see life). It may be seen as a technological subfield of both artificial life and complex systems and is relevant beyond biotechnology to nanotechnology, information technology, artificial intelligence, environmental technology and socioeconomic technology for managing human society.
Overview
Living technology is broadly defined as technology that derives its usefulness primarily from its life-like properties.
Living technologies are "characterized by robustness, autonomy, energy efficiency, sustainability, local intelligence, self-repair, adaptation, self-replication and evolution, all properties current technology lack, but living systems possess." Thus, the potential usefulness of technologies that are engineered to become more life-like stem from the properties of life itself.
The word “technology,” from the Greek techne, usually evokes physical technologies like artificial intelligence, smartphones or genetically engineered organisms. But there is an older meaning. By Jacob Bigelow’s 1829 definition, technology can describe a process that benefits society. In that sense, social institutions, like governments and healthcare systems, can be seen, and studied, as technologies. Physical technologies may be defined as tools for transforming matter, energy or information in pursuit of our goals, while social technologies are tools for organizing people in pursuit of our goals. Under this definition, our social institutions, economy, and laws are technologies that, like physical technologies, can be studied and improved. In the broadest sense, living technologies are technologies that possess properties characteristic of living processes.
History
The term "living technology" was coined by Mark Bedau, John McCaskill, Norman Packard and Steen Rasmussen in 2001, in a pitch to form a center for living technology. The ideas mainly grew out of the conceptual foundations of Artificial Life and Complex Systems, but with an engineering focus where engineering aims at developing technologies with life-like properties mainly using bottom up design approaches.
Based on the living technology ideas a number of projects were initiated, including the European Commission sponsored project Programmable Artificial Cell Evolution (PACE), which in part co-sponsored the European Centre for Living Technology (ECLT) in Venice, Italy in 2004. The Protocell Assembly project at Los Alamos National Laboratory, USA, was also based on these ideas and likewise funded in 2004. A number of successive EC sponsored projects followed, including an EC call for proposals on Living Technology in 2009. In 2007 the Center for Fundamental Living Technology (FLinT)
was established at the University of Southern Denmark co-sponsored by the Danish National Science Foundation (Grundforskningsfonden).
An EC Flagship project based on further developing living technologies, Sustainable Programmable Living Technologies (SPLiT) was submitted in 2010 and ranked within the top 15 proposals, but did not obtain funding.
Technology, particularly in recent years, has become both more life-like and more intelligent. This enables technology both to become more powerful and to meet the societal challenges of being less disruptive to the environment, more sustainable, less subject to failure, and better matched to human needs and accepted modes of interaction. This development is only expected to continue.
Research and range of living technology
The research perspectives and methods for living technologies are usually bottom up, in opposition to top down. Thus, there is a focus on engineering design without an explicit blueprint, which means the desired system properties emerge from the subsystem interactions. It is an ambition for the engineering of living technologies to create systems that are adaptive and can develop in an open-ended way over time, as seen in ecological systems. The development of living technologies poses a number of ethical issues that have to be addressed partly in the engineering design process and partly through legislation.
As with biotechnology, there is a range of technology that might be considered as versions of living technology. Below is a list, beginning with rather trivial versions and ending with more modern, sophisticated versions. Generally the term is understood to apply not merely to technology that has living properties or involves life, but rather to technology that derives its principal functionality from its living properties.
Use of living organisms for functionality unrelated to life-like properties (e.g., guiding growth of a tree to become a bridge).
Use of living organisms without modification for functionality that intrinsically uses life-like properties (e.g., brewing).
Modification of living organisms for new functionality (biotechnology, bioengineering, genetic engineering, synthetic biology)
Creation of new technology independent of existing living organisms, whose functionality depends on life-like properties.
Protocells, spanning a range of realizations:
Assembly of nonliving matter to form a living cell (still an unachieved research vision).
Construction of vesicles with intrinsic life-like properties such as metabolism and motility.
Construction of vesicles filled with components harvested from living cells.
Modifying existing cells with a complete programmable genome.
Synergetic combinations of electronic, chemical, and biological components
Social and socio-technical systems
Organizations and institutions with focus on their life-like properties
Non-biochemical instantiations of technology with life-like properties, e.g. the World Wide Web
Open problems
Ethical issues with living technology
Ethical issues in living technology are of several kinds:
(i) issues related to the creation of life-like or living entities like artificial cells
(ii) safety issues related to the release of entities potentially capable of proliferation into the environment
(iii) ecological issues related to preservation of biodiversity, natural wilderness and privacy
(iv) issues of ownership and responsibility for actions involving ongoing processes rather than material objects
The first issue was given careful consideration during the PACE project, resulting in a guideline document
Engineering living technology
bottom up vs. top down
design with no blueprint
engineering open endedness
References
Artificial life
Complex dynamics
Emergence | Living technology | [
"Mathematics"
] | 1,174 | [
"Complex dynamics",
"Dynamical systems"
] |
66,580,704 | https://en.wikipedia.org/wiki/Perfect%20month | A perfect month or a rectangular month designates a month whose number of days is divisible by the number of days in a week and whose first day corresponds to the first day of the week. This causes the arrangement of the days of the month to resemble a rectangle. In the Gregorian calendar, this arrangement can only occur for the month of February.
Constraints
To satisfy such an arrangement in the Gregorian calendar, the number of days in the month must be divisible by seven. Only the month of February of a common year can meet this constraint as the month has 28 days, a multiple of 7.
For a February to be a perfect month, the month must start on the first day of the week (usually considered to be Sunday or Monday). For Sunday-first calendars, this means that the year must start on a Thursday, and for Monday-first calendars, the year must start on a Friday. It must also occur in a common year, as the phenomenon does not occur when February has 29 days.
Occurrence
In the Gregorian calendar, the phenomenon occurs every six or eleven years, following a 6-11-11, 11-6-11, or 11-11-6 sequence until the end of the 21st century. The most recent perfect months were February 2015 (Sunday-first) and February 2021 (Monday-first). Because of the Gregorian century rules, the years 1700, 1800, and 1900 are not leap years, causing a shift in the sequence with a spacing of twelve years between 1698 and 1710, 1795 and 1807, and 1897 and 1909 respectively; however 2094, 2100 and 2106 will all feature perfect months with spacings of six years on Monday-first calendars.
The next perfect months will be February 2026 (Sunday-first) and February 2027 (Monday-first).
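These dates are easy to verify with Python's standard library; the snippet below simply checks, for each year, that February has 28 days and that February 1 falls on the chosen first day of the week:

```python
import calendar
from datetime import date

def perfect_februaries(start, end, first_weekday=calendar.MONDAY):
    """Years in [start, end] whose February is 'perfect': a common year
    (28-day February) in which February 1 is the first day of the week."""
    return [year for year in range(start, end + 1)
            if not calendar.isleap(year)
            and date(year, 2, 1).weekday() == first_weekday]

print(perfect_februaries(2010, 2030))                   # Monday-first: 2010, 2021, 2027
print(perfect_februaries(2010, 2030, calendar.SUNDAY))  # Sunday-first: 2015, 2026
```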
Attributes
The calendar arrangement brings together notions of harmony and organization.
See also
Palindrome#Dates
Perfectionism (psychology)
Perfectionism (philosophy)
References
Calendars
February
Months | Perfect month | [
"Physics"
] | 411 | [
"Spacetime",
"Calendars",
"Physical quantities",
"Time"
] |
66,581,545 | https://en.wikipedia.org/wiki/Jermain%20G.%20Porter | Jermain Gildersleeve Porter (January 8, 1852 - April 14, 1933) was an American astronomer and opponent of the theory of relativity.
Porter was born at Buffalo, New York. He studied at Hamilton College, was employed by the United States Coast and Geodetic Survey in 1878, and from 1884 to 1930 was director of the Cincinnati Observatory and professor at the University of Cincinnati. He observed comets and nebulae, but gained a name mainly through his three star catalogs (1895–1905) and through his studies of stellar motions, collected in the Catalog of Proper Motion Stars, I–IV (1915–18), Publications of the Cincinnati Observatory No. 18.
He also authored Variation of Latitude 1899–1906 (1908) and The Stars in Song and Legend (1901). Together with Elliott Smith, he published the Catalog of 4683 Stars of the Epoch 1900, Publications of the Cincinnati Observatory, No. 193, in which he also presented his own proper-motion determinations. He also made a name for himself as an opponent of the theory of relativity.
Selected publications
Historical Sketch of the Cincinnati Observatory 1843-1893 (1893)
The Stars in Song and Legend (1901)
The Overthrow of Newton's Theory of Gravitation (1920)
The Relativity Deflection of Light: Facts versus Theory (1929)
Recent Textbooks and Relativity (1927)
References
1852 births
1933 deaths
American astronomers
Hamilton College (New York) alumni
Relativity critics
University of Cincinnati faculty | Jermain G. Porter | [
"Physics"
] | 296 | [
"Relativity critics",
"Theory of relativity"
] |
72,484,166 | https://en.wikipedia.org/wiki/HR%201217 | HR 1217 is a variable star in the constellation Eridanus. It has the variable star designation DO Eridani, but this seldom appears in the astronomical literature; it is usually called either HR 1217 or HD 24712. At its brightest, HR 1217 has an apparent magnitude of 5.97, making it very faintly visible to the naked eye for an observer with excellent dark-sky conditions.
HR 1217 is one of the best-studied rapidly oscillating Ap (roAp) stars. Inspired by the 1978 discovery of the rapid (12 minute period) brightness variability of Przybylski's Star (an Ap star), in 1980 D. W. Kurtz observed the Ap star HR 1217, and found clear 6.15 minute oscillations, the amplitude of which slowly changed over the course of several days. The next year, high-speed photometric observations of the star revealed six nearly equally spaced pulsation periods ranging from 6.126 minutes (strongest) to 5.966 minutes (weakest). In 1989 it was found that the amplitudes of these pulsations are modulated over a period equal to the star's rotation period. By 2019, ten pulsation frequencies had been found in the TESS data.
HR 1217 is a chemically peculiar star, with particular over-abundances of copper, europium, and chromium in its spectrum. At the same time, lines of other metals such as iron are weaker than expected for an A9 star, which is typical of an Ap star. In 2009, Shulyak et al. computed a model atmosphere for the star which showed how the elemental abundances vary as a function of atmospheric height. In 2015, Doppler imaging was used to produce maps of both the star's magnetic field and the distribution of several chemical elements across the star's surface. It was the first roAp star to be mapped in this way.
References
Eridanus (constellation)
Eridani, DO
024712
1217
018339
Rapidly oscillating Ap stars
-12 752 | HR 1217 | [
"Astronomy"
] | 437 | [
"Eridanus (constellation)",
"Constellations"
] |
72,489,376 | https://en.wikipedia.org/wiki/Beta-tungsten | Beta-tungsten (β-W) is a metastable phase of tungsten widely observed in tungsten thin films. While the commonly existing stable alpha-tungsten (α-W) has a body-centered cubic (A2) structure, β-W adopts the topologically close-packed A15 structure containing eight atoms per unit cell, and it irreversibly transforms to the stable α phase through thermal annealing of up to 650 °C. It has been found that β-W possesses the giant spin Hall effect, wherein the applied charge current generates a transverse spin current, and this leads to potential applications in magnetoresistive random access memory devices.
History
β-W was first observed by Hartmann et al. in 1931 as part of the dendritic metallic deposit formed on the cathode after electrolysis of phosphate melts below 650 °C. In the early stages of research into β-W, oxygen was commonly found to promote the formation of the β-W structure, so there was a long-standing debate over whether the β-W structure is a phase of single-element tungsten or a tungsten suboxide; since the 1950s, however, considerable experimental evidence has shown that the oxygen in β-W thin films is in a zero-valence state, and thus the structure is a true allotrope of tungsten.
While the initial interest in β-W thin films was driven by their superconducting properties at low temperatures, the discovery of the giant spin Hall effect in β-W thin films by Buhrman et al. in 2012 has generated new interest in the material for potential applications in spintronic magnetic random access memories and spin-logic devices.
Structure
β-W has a cubic A15 structure with space group Pm-3n (No. 223), which belongs to the Frank–Kasper phases family. Each unit cell contains eight tungsten atoms. The structure can be seen as a cubic lattice with one atom at each corner, one atom in the center, and two atoms on each face. There are two inequivalent tungsten sites, corresponding to Wyckoff positions 2a and 6c, respectively. On the first site, Wyckoff position 2a, each tungsten atom is bonded to twelve equivalent W atoms to form a mixture of edge- and face-sharing WW12 cuboctahedra. On the second site, Wyckoff position 6c, each tungsten atom is bonded to fourteen neighboring tungsten atoms, and there is a spread of W–W bond lengths ranging from 2.54 to 3.12 Å. The experimentally measured lattice parameter of β-W is 5.036 Å, while the DFT calculated value is 5.09 Å.
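The quoted bond-length spread can be checked from the ideal A15 geometry; the fractional coordinates below are the standard Wyckoff 2a and 6c positions for space group Pm-3n, combined with the DFT lattice parameter quoted above:

```python
import math

a = 5.09  # DFT lattice parameter of beta-W, in angstroms

def distance(p, q):
    """Separation of two sites given in fractional coordinates, in angstroms."""
    return a * math.sqrt(sum((u - v) ** 2 for u, v in zip(p, q)))

chain_atom = (0.25, 0.0, 0.5)                  # a 6c (chain) site
print(distance(chain_atom, (0.75, 0.0, 0.5)))  # along the chain: a/2, about 2.55
print(distance(chain_atom, (0.0, 0.0, 0.0)))   # to the nearest 2a site: about 2.85
print(distance(chain_atom, (0.5, 0.25, 0.0)))  # to a crossing chain: about 3.12
```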
Properties
Two key properties of β-W have been well-established: the high electrical resistivity and the giant spin Hall effect.
Although the exact value depends on the preparation conditions, β-W has an electrical resistivity at least five to ten times higher than that of α-W (5.3 μΩ·cm), and this high resistivity remains almost unchanged over a temperature range of 5 to 380 K, making β-W a potential thin film resistor while α-W is a thin film conductor.
Thin films of β-W display a giant spin Hall effect with a spin Hall angle of 0.30 ± 0.02 and a spin-diffusion length of around 3.5 nm. In contrast, α-W exhibits a much smaller spin Hall angle of less than 0.07 and a comparable spin-diffusion length. In the spin Hall effect, the application of a longitudinal electric current through a nonmagnetic material generates a transverse spin current due to the spin–orbit interaction, and the spin Hall angle is defined as the ratio of the transverse spin current density and the longitudinal electric current density. The spin Hall angle of β-W is large enough to generate spin torques capable of flipping or setting the magnetization of adjacent magnetic layers into precession by means of the spin Hall effect.
Preparation
While there have been some reports of preparing β-W by chemical methods such as hydrogen reduction, almost all β-W reported in the last thirty years has been prepared through sputter deposition, an atom-by-atom physical vapor deposition (PVD) technique. In sputter deposition, a tungsten target is bombarded with ionized gas molecules (usually Ar), causing tungsten atoms to be “sputtered” off into the plasma. These vaporized atoms are then deposited as they condense into a thin film on the substrate to be coated. The formation of β-W through sputter deposition depends on the base pressure, Ar pressure, substrate temperature, impurity gas, deposition rate, film thickness, substrate type, etc. It has been widely observed that an oxygen or nitrogen gas flow can assist and may be necessary for the formation of β-W, but recently there have also been reports of preparing β-W without introducing any impurity gas during deposition.
References
Tungsten
Allotropes | Beta-tungsten | [
"Physics",
"Chemistry"
] | 1,025 | [
"Periodic table",
"Properties of chemical elements",
"Allotropes",
"Materials",
"Matter"
] |
73,915,200 | https://en.wikipedia.org/wiki/Quantum%20robotics | Quantum robotics is an interdisciplinary field that investigates the intersection of robotics and quantum mechanics. This field, in particular, explores the applications of quantum phenomena such as quantum entanglement within the realm of robotics. Examples of its applications include quantum communication in multi-agent cooperative robotic scenarios, the use of quantum algorithms in performing robotics tasks, and the integration of quantum devices (e.g., quantum detectors) in robotic systems.
Introduction
Free-space quantum communication between mobile platforms was proposed for reconfigurable quantum key distribution (QKD) applications using drones in 2017. This technology was later advanced in various respects on mobile drone and vehicle platforms, in configurations such as drone-to-drone, drone-to-moving-vehicle, and vehicle-to-vehicle systems. Related demonstrations include communication system technology for BB84 quantum key distribution in optical aircraft downlinks and an airborne demonstration of a quantum key distribution receiver payload.
Other researchers contributed a low size, weight and power quantum key distribution system for small-form unmanned aerial vehicles, the characterization of a polarization-based receiver for mobile free-space optical QKD, and optical-relayed entanglement distribution using drones as mobile nodes. The topic of free-space quantum communication between mobile platforms, initially implemented to fulfill the need for free-space QKD and entanglement distribution using mobile nodes, was brought into the robotics domain as an emerging interdisciplinary mechatronics topic investigating the interface between quantum technologies and robotic systems. The main advantage of such integrated technology is the guaranteed security of communication between multi-agent and cooperative autonomous systems. Although the area is newly emerging, other benefits are anticipated from future research as fast-growing quantum advantages become accessible. However, such progress can only be made after a foundation is laid in what is referred to as “quantum robotics” and “quantum mechatronics”. Work in this direction provides the complementary background needed for integrating free-space quantum communication into the robotics field. Other contributions include modernizing the mechatronics discipline with quantum engineering for educational purposes, introducing quantum engineering topics needed to train and prepare the future engineering workforce to succeed in a rapidly changing industry. In particular, topics on quantum mechanics fundamentals, such as quantum entanglement, cryptography, teleportation, and the Bell test, have been proposed as suitable for engineering curricula and university projects.
Alice and Bob Robots
In the realm of quantum mechanics, the names Alice and Bob are frequently employed to illustrate various phenomena, protocols, and applications. These include their roles in quantum cryptography, quantum key distribution, quantum entanglement, and quantum teleportation. The terms "Alice Robot" and "Bob Robot" serve as analogous expressions that merge the concepts of Alice and Bob from quantum mechanics with mechatronic mobile platforms (such as robots, drones, and autonomous vehicles). For example, the Alice Robot functions as a transmitter platform that communicates with the Bob Robot, housing the receiving detectors.
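For orientation, the BB84 protocol that such Alice/Bob platforms implement optically can be sketched as a purely classical simulation; everything below (bases encoded as '+'/'x', the intercept-resend attacker) is an illustrative toy, not any group's flight implementation:

```python
import random

def bb84_sift(n, eve=False, seed=1):
    """Toy BB84 round between an Alice (transmitter) and a Bob (receiver):
    returns the two sifted keys. With an intercept-resend eavesdropper,
    about 25% of the sifted bits disagree, which is how the attack is
    detected."""
    rng = random.Random(seed)
    key_a, key_b = [], []
    for _ in range(n):
        bit_a, base_a = rng.randint(0, 1), rng.choice("+x")
        photon_bit, photon_base = bit_a, base_a
        if eve:                                  # Eve measures and resends
            base_e = rng.choice("+x")
            if base_e != photon_base:            # wrong basis: random outcome
                photon_bit = rng.randint(0, 1)
            photon_base = base_e
        base_b = rng.choice("+x")                # Bob picks a random basis
        bit_b = photon_bit if base_b == photon_base else rng.randint(0, 1)
        if base_b == base_a:                     # bases compared publicly
            key_a.append(bit_a)
            key_b.append(bit_b)
    return key_a, key_b

a, b = bb84_sift(20000, eve=True)
errors = sum(x != y for x, y in zip(a, b)) / len(a)
print(len(a), round(errors, 3))                  # ~10000 sifted bits, QBER ~0.25
```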
The schematic representation of the experimental setup for achieving quantum entanglement through the spontaneous parametric down-conversion process is shown in the figure.
The experimental setup, including the laser source and the Alice and Bob robots with their corresponding components, is shown in the figure below.
The schematic representation of the Alice and Bob robots when sharing entangled photons in a quantum communication or quantum key distribution experimental setup between moving robotic platforms is shown in the figure.
The nomenclature used in the figure:
AL: Alignment Laser
DMSP: Shortpass dichroic mirror
FSM: Fast steering mirror
FFC: Fixed focus collimator
HWP: Half-wave plate
M: Mirror
MTC: Motion tracking camera
MTC & M: Motion tracking camera and mirror
NBF: Narrowband filter
NPBS: Non-Polarizing beamsplitter cube (50:50)
PABBO: Paired Barium borate (BBO) Crystal (Type I SPDC crystals)
PBS: Polarizing beamsplitter cube
PSD: Position sensing detector
QP: Quartz plate
QRC: QR code
SL: Source Laser
SPCM: Single photon counter module
References
Robotics | Quantum robotics | [
"Engineering"
] | 893 | [
"Robotics",
"Automation"
] |
73,919,334 | https://en.wikipedia.org/wiki/Chakr%20Innovation | Chakr Innovation is a cleantech startup based in India specializing in material science technology. The company was founded by graduates from IIT Delhi and works in the fields of air and environmental protection. Chakr Innovation is the first company in India to receive type approval certification for their retrofit emission control device (RECD) from labs approved by the Central Pollution Control Board (CPCB). They have over 15 patents filed, and their work has been recognized across the globe by reputed organizations like the United Nations, WWF, Forbes, and the like.
History
Chakr Innovation was founded in 2016 by Kushagra Srivastava, Arpit Dhupar and Bharti Singhla, graduates of IIT Delhi, to reduce pollution with the help of innovation and technology. The idea began when a group of friends having sugarcane juice noticed that the shop's wall had turned black from soot particles in the exhaust of the diesel generator used for crushing sugarcane.
Chakr Innovation launched Chakr Shield in 2017, one year after its incorporation. The device could reduce particulate matter 2.5 (PM2.5) emissions from a diesel generator by up to 90%. In 2022, the company introduced a dual fuel kit that allows a diesel generator to run on fossil fuel and natural gas simultaneously in a 30:70 ratio.
Products
Chakr Shield is a patented Retrofit Emission Control Device (RECD) by Chakr Innovation. It was also the first in India to get a Type Approval Certification from CPCB-certified labs like ICAT and ARAI for its capability to reduce the pollution from diesel generators by up to 90%.
The Chakr Dual Fuel Kit allows a diesel generator set to operate on a mixture of gas and diesel, with 70% natural gas and 30% fossil fuel, making it suited as a conversion kit for industries with access to gas pipeline networks. With the launch of this product, Chakr Innovation reportedly became the only turnkey provider in India of solutions to control emissions from diesel generators.
In 2020, Chakr Innovation launched a decontamination cabinet for N95 masks with the help of ozone gas. Ozone is a strong oxidizing agent that destroys viruses and bacteria by diffusing through their protein coats. Chakr DeCoV reportedly inactivated SARS-CoV-2 and reduced the bacterial load by 99.9999%, allowing N95 masks to be reused up to 10 times.
Awards
2016: Winner of Urban Labs Innovation Challenge – University of Chicago
2017: Climate solver award – World Wide Fund for Nature (WWF)
2017: Echoing Green Fellowship
2017: Champions of Change – NITI Aayog
2017: Recipient of "Start-up in Oil & Gas Sector" award – Federation of Indian Petroleum Industry(FIPI)
2018: Winner of Young Champions of the Earth – United Nations Environment Programme (Asia Pacific)
2018: 30 Under 30 Social Entrepreneurs – Forbes
2019: Winner of Maharashtra Startup Week Award – Maharashtra State Innovation Society
References
Pollution
Pollution control technologies
Technology companies established in 2016
Indian companies established in 2016
Manufacturing companies based in Delhi
Manufacturing companies established in 2016
Diesel engine components | Chakr Innovation | [
"Chemistry",
"Engineering"
] | 638 | [
"Pollution control technologies",
"Environmental engineering"
] |
73,923,536 | https://en.wikipedia.org/wiki/Peucemycin | Peucemycin is a polyketide produced by Streptomyces peucetius, a Gram-positive filamentous bacteria that also produces the anticancer compounds daunorubicin and doxorubicin. This compound was elucidated from a cryptic biosynthetic gene cluster and is produced under temperature-specific conditions for bacterial growth (metabolite is present at 18 °C but not 28 °C). Peucemycin has demonstrated bioactivity against growth of S. aureus, P. hauseri, and S. enterica and also is weakly active against cancer cell lines. Peucemycin is biosynthesized through a Type 1 PKS system.
Biosynthesis
Peucemycin is synthesized through a type 1 polyketide synthase with 8 proposed modules encoded by 5 PKS-related genes (peuA–peuE). The first gene, peuA, encodes an initiation module and two elongation modules. The ketosynthase enzyme in the initiation module carries a mutation of a cysteine residue to a glutamine residue, which allows decarboxylation of the starting material without condensation. The next gene, peuB, encodes modules 3 and 4. Sequencing data for module 4 indicate the presence of a dehydratase domain, but amino acid mutations leave this enzyme inactivated, so a single hydroxyl group is generated at carbon 9. Module 5 is encoded by peuC, and an additional gene, peuI, is proposed to introduce butylmalonyl-CoA in this module. Modules 6 and 7 are encoded by peuD and peuE, respectively. The growing polyketide chain is hydroxylated by the cytochrome P450 enzymes peuH and peuG, and the terminal thioesterase domain in module 7 catalyzes product release and macrocycle formation to form peucemycin.
References
Polyketides
Lactones
Heterocyclic compounds with 2 rings
Twelve-membered rings
Diols | Peucemycin | [
"Chemistry",
"Biology"
] | 450 | [
"Biomolecules by chemical classification",
"Bacteria stubs",
"Natural products",
"Polyketides",
"Bacteria"
] |
70,956,038 | https://en.wikipedia.org/wiki/Polyakov%20loop | In quantum field theory, the Polyakov loop is the thermal analogue of the Wilson loop, acting as an order parameter for confinement in pure gauge theories at nonzero temperatures. In particular, it is a Wilson loop that winds around the compactified Euclidean temporal direction of a thermal quantum field theory. It indicates confinement because its vacuum expectation value must vanish in the confined phase due to its non-invariance under center gauge transformations. This also follows from the fact that the expectation value is related to the free energy of individual quarks, which diverges in this phase. Introduced by Alexander M. Polyakov in 1975, they can also be used to study the potential between pairs of quarks at nonzero temperatures.
Definition
Thermal quantum field theory is formulated in Euclidean spacetime with a compactified imaginary temporal direction of length $\beta$. This length corresponds to the inverse temperature of the field, $\beta = 1/T$. Compactification leads to a special class of topologically nontrivial Wilson loops that wind around the compact direction, known as Polyakov loops. In $\mathrm{SU}(N)$ theories a straight Polyakov loop on a spatial coordinate $\vec x$ is given by

$\Phi(\vec x) = \frac{1}{N}\,\mathrm{tr}\,\mathcal{P}\exp\left[ig\int_0^\beta A_0(\vec x, \tau)\,d\tau\right]$

where $\mathcal{P}$ is the path-ordering operator and $A_0$ is the Euclidean temporal component of the gauge field. In lattice field theory this operator is reformulated in terms of temporal link fields $U_0(\vec n, t)$ at a spatial position $\vec n$ as

$\Phi(\vec n) = \frac{1}{N}\,\mathrm{tr}\prod_{t=0}^{N_\tau - 1} U_0(\vec n, t).$

The continuum limit of the lattice must be taken carefully to ensure that the compact direction has fixed extent. This is done by ensuring that the finite number of temporal lattice points $N_\tau$ is such that $\beta = a N_\tau$ is constant as the lattice spacing $a$ goes to zero.
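A minimal numerical sketch of the lattice definition follows; the random SU(2) temporal links serve only to show the bookkeeping (the trace of the ordered product divided by $N$), not to represent a thermalized gauge configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su2():
    """Uniform random SU(2) matrix built from a normalized quaternion."""
    q = rng.normal(size=4)
    q /= np.linalg.norm(q)
    return np.array([[q[0] + 1j * q[1],  q[2] + 1j * q[3]],
                     [-q[2] + 1j * q[3], q[0] - 1j * q[1]]])

def polyakov_loop(temporal_links, n_colors=2):
    """Phi(n) = (1/N) tr prod_t U_0(n, t) at one spatial site."""
    prod = np.eye(n_colors, dtype=complex)
    for link in temporal_links:      # ordered product around the time circle
        prod = prod @ link
    return np.trace(prod) / n_colors

n_tau = 8                            # temporal lattice points
links = [random_su2() for _ in range(n_tau)]
print(polyakov_loop(links))          # a complex number of modulus <= 1
```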
Order parameter
Gauge fields need to satisfy the periodicity condition $A_\mu(\vec x, \beta) = A_\mu(\vec x, 0)$ in the compactified direction. Meanwhile, gauge transformations only need to satisfy this up to a group center term as $\Omega(\vec x, \beta) = z\,\Omega(\vec x, 0)$. A change of basis can always diagonalize this so that $z = c\,\mathbb{1}$ for a complex number $c$ with $c^N = 1$. The Polyakov loop is topologically nontrivial in the temporal direction, so unlike other Wilson loops it transforms as $\Phi \rightarrow c\,\Phi$ under these transformations. Since this makes the loop gauge dependent for $c \neq 1$, by Elitzur's theorem non-zero expectation values $\langle \Phi \rangle \neq 0$ imply that the center group must be spontaneously broken. This makes the Polyakov loop an order parameter for confinement in thermal pure gauge theory, with a confining phase occurring when $\langle \Phi \rangle = 0$ and a deconfining phase when $\langle \Phi \rangle \neq 0$. For example, lattice calculations of quantum chromodynamics with infinitely heavy quarks that decouple from the theory show that the deconfinement phase transition occurs at a temperature of around 270 MeV. Meanwhile, in a gauge theory with quarks, these break the center group and so confinement must instead be deduced from the spectrum of asymptotic states, the color neutral hadrons.
For gauge theories that lack a nontrivial group center that could be broken in the confining phase, the Polyakov loop expectation values are nonzero even in this phase. They are however still a good indicator of confinement since they generally experience a sharp jump at the phase transition. This is the case for example in the Higgs model with the exceptional gauge group $G_2$.
The Nambu–Jona-Lasinio model lacks local color symmetry and thus cannot capture the effects of confinement. However, Polyakov loops can be used to construct the Polyakov-loop-extended Nambu–Jona-Lasinio model which treats both the chiral condensate and the Polyakov loops as classical homogeneous fields that couple to quarks according to the symmetries and symmetry breaking patters of quantum chromodynamics.
Quark free energy
The free energy $\Delta F$ of a set of quarks and antiquarks located at positions $\vec x_i$ and $\vec y_j$, subtracting out the vacuum energy, is given in terms of the correlation functions of Polyakov loops

$e^{-\beta \Delta F} = \Big\langle \prod_i \Phi(\vec x_i)\prod_j \Phi^\dagger(\vec y_j)\Big\rangle.$

This free energy is another way to see that the Polyakov loop acts as an order parameter for confinement, since the free energy of a single quark is given by $e^{-\beta F_q} = \langle \Phi(\vec x)\rangle$. Confinement of quarks means that it would take an infinite amount of energy to create a configuration with a single free quark, therefore its free energy must be infinite and so the Polyakov loop expectation value must vanish in this phase, in agreement with the center symmetry breaking argument.
The formula for the free energy can also be used to calculate the potential between a pair of infinitely massive quarks spatially separated by $r = |\vec x - \vec y|$. Here the potential $V(r)$ is the first term in the free energy, so that the correlation function of two Polyakov loops is

$\langle \Phi(\vec x)\Phi^\dagger(\vec y)\rangle = e^{-\beta V(r)}\big(1 + O(e^{-\beta \Delta E})\big),$

where $\Delta E$ is the energy difference between the potential and the first excited state. In the confining phase the potential is linear, $V(r) = \sigma r$, where the constant of proportionality $\sigma$ is known as the string tension. The string tension acquired from the Polyakov loop is always bounded from above by the string tension acquired from the Wilson loop.
See also
Quark–gluon plasma
't Hooft loop
References
Gauge theories
Quantum chromodynamics
Lattice field theory
Phase transitions | Polyakov loop | [
"Physics",
"Chemistry"
] | 987 | [
"Physical phenomena",
"Phase transitions",
"Phases of matter",
"Critical phenomena",
"Statistical mechanics",
"Matter"
] |
70,958,960 | https://en.wikipedia.org/wiki/Atmospheric%20circulation%20of%20exoplanets | Atmospheric circulation of a planet is largely specific to the planet in question and the study of atmospheric circulation of exoplanets is a nascent field as direct observations of exoplanet atmospheres are still quite sparse. However, by considering the fundamental principles of fluid dynamics and imposing various limiting assumptions, a theoretical understanding of atmospheric motions can be developed. This theoretical framework can also be applied to planets within the Solar System and compared against direct observations of these planets, which have been studied more extensively than exoplanets, to validate the theory and understand its limitations as well.
The theoretical framework first considers the Navier–Stokes equations, the governing equations of fluid motion. Then, limiting assumptions are imposed to produce simplified models of fluid motion specific to large scale motion atmospheric dynamics. These equations can then be studied for various conditions (i.e. fast vs. slow planetary rotation rate, stably stratified vs. unstably stratified atmosphere) to see how a planet's characteristics would impact its atmospheric circulation. For example, a planet may fall into one of two regimes based on its rotation rate: geostrophic balance or cyclostrophic balance.
Atmospheric motions
Coriolis force
When considering atmospheric circulation we tend to take the planetary body as the frame of reference. In fact, this is a non-inertial frame of reference which has acceleration due to the planet's rotation about its axis. The Coriolis force is the force that acts on objects moving within the planetary frame of reference, as a result of the planet's rotation. Mathematically, the acceleration due to the Coriolis force can be written as:

$\mathbf{a}_C = -2\,\boldsymbol{\Omega}\times\mathbf{v}$

where
$\mathbf{v}$ is the flow velocity
$\boldsymbol{\Omega}$ is the planet's angular velocity vector

This force acts perpendicular to the flow velocity and to the planet's angular velocity vector, and comes into play when considering the atmospheric motion of a rotating planet.
Mathematical models
Navier-Stokes momentum equation
Conservation of momentum for a flow is given by the following equation:

$\dfrac{D\mathbf{v}}{Dt} = -\dfrac{1}{\rho}\nabla p + \mathbf{g} - 2\,\boldsymbol{\Omega}\times\mathbf{v} - \boldsymbol{\Omega}\times(\boldsymbol{\Omega}\times\mathbf{r}) + \mathbf{F}$

where
$D/Dt$ is the material derivative
$p$ is the pressure
$\rho$ is the density
$\mathbf{g}$ is the gravitational acceleration
$\mathbf{r}$ is the vector from the rotation axis
$\mathbf{F}$ is the force of friction (per unit mass)

The term $-\boldsymbol{\Omega}\times(\boldsymbol{\Omega}\times\mathbf{r})$ is the centripetal acceleration due to the rotation of the planet.
Simplified model for large-scale motion
The above equation can be simplified to a form suitable for large-scale atmospheric motion. First, the velocity vector is split into the three components of wind:

$\mathbf{v} = (u, v, w)$

where
$u$ is the zonal wind
$v$ is the meridional wind
$w$ is the vertical wind

Next, we ignore friction and the vertical wind. Thus, the equations for zonal and meridional wind simplify to:

$\dfrac{Du}{Dt} - \dfrac{uv\tan\phi}{a} = -\dfrac{1}{\rho}\dfrac{\partial p}{\partial x} + fv$

$\dfrac{Dv}{Dt} + \dfrac{u^{2}\tan\phi}{a} = -\dfrac{1}{\rho}\dfrac{\partial p}{\partial y} - fu$

and the equation in the vertical direction simplifies to the hydrostatic equilibrium equation:

$\dfrac{\partial p}{\partial z} = -\rho g$

where the parameter $g$ has absorbed the vertical component of the centripetal force. In the above equations: $f = 2\Omega\sin\phi$ is the Coriolis parameter, $\phi$ is the latitude and $a$ is the radius of the planet.
Key drivers of circulation
Thermodynamics
Temperature gradients are one of the drivers of circulation, as one effect of atmospheric flow is to transport heat from places of high temperature to those of low temperature in an effort to reach thermal equilibrium. Generally, planets have stably stratified atmospheres. This means that motion due to the temperature gradient in the vertical direction is opposed by the pressure gradient in the vertical direction. In this case, it is the horizontal temperature gradients (on constant pressure surfaces) which drive circulation. Such temperature gradients are typically maintained by uneven heating/cooling throughout a planet's atmosphere. On Earth, for example, at the equator, the atmosphere absorbs more net energy from the Sun than it does at the poles.
Planetary rotation
As noted previously, planetary rotation is important when it comes to atmospheric circulation, as Coriolis and centripetal forces arise as a result of planetary rotation. When considering a steady version of the simplified equations for large-scale motion presented above, both Coriolis and centripetal forces work to balance out the horizontal pressure gradients. Depending on the rotation rate of the planet, one of these forces will dominate and affect the atmospheric circulation accordingly.
Geostrophic balance
For a planet with rapid rotation, the Coriolis force is the dominant force that balances the pressure gradient. In this case the equations for large-scale motion further simplify to:

$f u_g = -\dfrac{1}{\rho}\left(\dfrac{\partial p}{\partial y}\right)_z, \qquad f v_g = \dfrac{1}{\rho}\left(\dfrac{\partial p}{\partial x}\right)_z$

where the subscript $z$ denotes evaluation on a constant altitude surface and the subscript $g$ denotes the geostrophic wind. Note that in this case, the geostrophic wind is perpendicular to the pressure gradient. This is due to the fact that the Coriolis force acts perpendicularly to the direction of the wind. Therefore, since a pressure gradient induces a wind parallel to the gradient, the Coriolis force will act perpendicularly to the pressure gradient. As the Coriolis force dominates in this regime, the resulting winds are perpendicular to the pressure gradient.
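A small numerical illustration of geostrophic balance, with Earth-like parameters assumed purely for concreteness:

```python
import math

def geostrophic_wind(dp_dx, dp_dy, rho, lat_deg, omega=7.292e-5):
    """Geostrophic wind (u_g, v_g) in m/s from a horizontal pressure
    gradient (Pa/m) on a constant-altitude surface; omega defaults to
    Earth's rotation rate in rad/s."""
    f = 2.0 * omega * math.sin(math.radians(lat_deg))  # Coriolis parameter
    u_g = -dp_dy / (rho * f)
    v_g = dp_dx / (rho * f)
    return u_g, v_g

# 1 hPa increase per 100 km northward, near-surface air density, 45 deg latitude:
u_g, v_g = geostrophic_wind(0.0, 1e-3, 1.2, 45.0)
print(u_g, v_g)  # about (-8.1, 0.0): the wind blows parallel to the isobars
```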
Cyclostrophic balance
For a planet with a low rotation rate and negligible Coriolis force, the pressure gradient may instead be balanced by the centripetal acceleration. In this case the equations for large-scale motion further simplify to:

$\dfrac{u^{2}\tan\phi}{a} = -\dfrac{1}{\rho}\left(\dfrac{\partial p}{\partial y}\right)_z$

for a prevailing wind in the east-west direction.
See also
Exometeorology
Extraterrestrial atmospheres
References
Exoplanetology
Equations of astronomy | Atmospheric circulation of exoplanets | [
"Physics",
"Astronomy"
] | 1,054 | [
"Concepts in astronomy",
"Equations of astronomy"
] |
70,959,045 | https://en.wikipedia.org/wiki/Amanda%20Paulovich | Amanda Grace Paulovich is an oncologist, and a pioneer in proteomics using multiple reaction monitoring mass spectrometry to study tailored cancer treatment.
Education
Paulovich received a BS in Biological Sciences from Carnegie Mellon University in 1988 and a PhD in Genetics from the University of Washington in 1996, under the direction of Leland Hartwell. She also received an MD from the University of Washington in 1998. Following her residency in Internal Medicine at Massachusetts General Hospital, she completed a Postdoctoral Fellowship in Computational Biology at the Massachusetts Institute of Technology Whitehead Center for Genomic Research in 2003, and a Fellowship in Medical Oncology at the Dana Farber Cancer Institute in 2004.
Career
Paulovich is a Professor in Clinical Research, an Aven Foundation Endowed Chair, and the Director of the Early Detection Initiative at the Fred Hutchinson Cancer Research Center. She was inducted into the American Society for Clinical Investigation in 2012.
Paulovich is an expert in proteomics. Her targeted proteomics method uses multiple reaction monitoring mass spectrometry to target cancer biomarkers with ongoing clinical trials, and was named Method of the Year in 2012 by Nature Methods. She founded Precision Assays in 2016, whose rights to targeted assays were acquired by CellCarta in 2022.
Awards
2014 Life Science Innovation Northwest Woman to Watch in Life Science Award
2015 Human Proteome Organization (HUPO) Distinguished Achievement in Proteomic Sciences Award
Patent applications
Identification and use of biomarkers for detection and quantification of the level of radiation exposure in a biological sample (2011) US 20130052668 A1
Compositions and methods for reliably detecting and/or measuring the amount of a modified target protein in a sample (2011) US 20130052669 A1
References
Living people
University of Washington alumni
Carnegie Mellon University alumni
American oncologists
Mass spectrometrists
University of Washington faculty
20th-century American women scientists
Year of birth missing (living people)
Fred Hutchinson Cancer Research Center people | Amanda Paulovich | [
"Physics",
"Chemistry"
] | 398 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
70,961,697 | https://en.wikipedia.org/wiki/Power-voltage%20curve | Power-voltage curve (also P-V curve) describes the relationship between the active power delivered to the electrical load and the voltage at the load terminals in an electric power system under a constant power factor. When plotted with power as a horizontal axis, the curve resembles a human nose, thus it is sometimes called a nose curve. The overall shape of the curve (similar to a parabola placed on its side) is defined by the basic electrical equations and does not change much when the characteristics of the system vary: leading power factor lead stretches the "nose" further to the right and upwards, while the lagging one shrinks the curve. The curve is important for voltage stability analysis, as the coordinate of the tip of the nose defines the maximum power that can be delivered by the system.
As the load increases from zero, the power-voltage point travels from the top left part of the curve to the tip of the "nose" (power increases, but the voltage drops). The tip corresponds to the maximum power that can be delivered to the load (as long as sufficient reactive power reserves are available). Past this "collapse" point, additional load causes a drop in both voltage and power, as the power-voltage point travels to the bottom left corner of the plot. Intuitively, this result can be explained by considering a load that consists entirely of resistors: as the load increases (its resistance thus lowers), more and more of the generator power dissipates inside the generator itself (which has its own fixed resistance connected in series with the load). Operation on the bottom part of the curve (where the same power is delivered with lower voltage, and thus higher current and losses) is not practical, as it corresponds to the "uncontrollability" region.
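The resistive intuition above is easy to trace numerically; this sketch sweeps the load resistance of an assumed per-unit source and reports the tip of the nose:

```python
import numpy as np

# Toy nose curve: an ideal source E behind a fixed series resistance r feeds
# a purely resistive load R. Sweeping R from large to small traces the upper
# branch, reaches the nose tip at R = r, then the lower "uncontrollable" branch.
E, r = 1.0, 0.1                      # per-unit source voltage and resistance
R = np.logspace(2, -3, 500)          # load resistance swept from 100 to 0.001 pu
V = E * R / (R + r)                  # load voltage
P = V ** 2 / R                       # power delivered to the load
tip = P.argmax()
print(f"max power {P[tip]:.3f} pu at V = {V[tip]:.2f} pu")  # E**2/(4r) = 2.5 at E/2
```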
If sufficient reactive power is not available, the limit of the load power will be reached before the power-voltage point gets to the tip of the "nose". The operator must maintain a sufficient margin between the operating point on the P-V curve and this maximum loading condition; otherwise, a voltage collapse can occur.
A similar curve for the reactive power is called the Q-V curve.
References
Sources
Electrical engineering | Power-voltage curve | [
"Engineering"
] | 445 | [
"Electrical engineering"
] |
70,966,557 | https://en.wikipedia.org/wiki/Radium%20silk | Radium silk was a commonly used name for a type of lightweight, lustrous silk primarily used in women's clothing and undergarments from the middle of the first decade of the 1900s until the term went out of vogue in the 1920s. Although the name references radium, a radioactive element first discovered in 1898, the substance is not contained in the fabric. As the deleterious effects of radium on the human body became better known in the middle-1920s, the use of the word as an adjective gained a negative connotation and fell out of favor among advertisers and consumers alike.
History
Origin
The descriptive term "radium silk" began to be used no later than 1903. The term apparently originated in the fashion markets of Paris in association with particularly lustrous fabrics, with reference made to the newly discovered element radium, first identified in 1898. The material was notable both for its gloss and strength.
"Radium silk" did not contain radium or any other radioactive material. Rather, it was named for its tendency to shimmer in light, bringing to mind the phosphorescence of the world's newly discovered heavy metal.
The term was successfully trademarked in the United States by the Gilbert Company of New York City, which registered the mark in August 1905. Despite this proprietary claim, the phrase "radium silk" was generally used in a generic context throughout the United States. The word "radium" had a very positive connotation in this period and was used as a vapid qualifying adjective, similar to the way that the words "platinum" and "titanium" are bandied about for products not containing either metal today.
By the fall of 1906, "radium silk" had truly arrived in the fashion world of the United States. A September 1906 syndicated plate, appearing in dozens of newspapers around the country, enthusiastically described the "exquisitely toned material which has had such vogue in Paris for the last few months."
The article continued:
Surely there are few fabrics which can better stand popular favor. There is a delicacy, luster, and wonderful color to the radium silks that makes them peculiarly satisfying to a refined taste.
Akin to the best foulards and liberty gauzes is it, with the best qualities of both. Heavier and finer weaves than the latter, it has all its graceful clinginess, with greater durability, while the softness and simple patterns of the former are enhanced by a high sheen, caused by being woven of organzine so fine that the single thread is barely visible.
But the chief beauty of the radium silks is their opalescent coloring, so indescribably lovely. A pink will have the soft blush of the heart of a shell; the tint of the sky shining through a fleeting cloud on a sunny day is seen in the blues, while the lavenders, greens, yellows, and even the darker colors all have the soft undertones that gives them a beautiful iridescent effect.
Downfall
Although many consumer products of this era were made containing radium, which was initially believed to have highly salutary properties, bright and shiny radium silk did not contain it.
The turning point for "radium silk" came in 1925, when the New York Times broke the news of five deaths of watch face-painters from radiation poisoning, developed by handling radium on the job. Descriptions of the new ailment, referred to as radium necrosis, were grisly, and the disintegrating jaws and cancers developed by these Radium Girls attracted national attention. By the end of the decade the term "radium" was no longer viable as a positive descriptor, and it was hastily abandoned.
See also
Radium Girls
Footnotes
Further reading
Taylor Orci, "How We Realized Putting Radium in Everything Was Not the Answer," The Atlantic, March 7, 2013.
Animal glandular products
Biomaterials
Insect products
Woven fabrics | Radium silk | [
"Physics",
"Biology"
] | 809 | [
"Biomaterials",
"Materials",
"Matter",
"Medical technology"
] |
76,906,020 | https://en.wikipedia.org/wiki/Discharge%20regime | Discharge regime, flow regime, or hydrological regime (commonly termed river regime, but that term is also used for other measurements) is the long-term pattern of annual changes to a stream's discharge at a particular point. Hence, it shows how the discharge of a stream at that point is expected to change over the year. It is thus the hydrological equivalent of climate. The main factor affecting the regime is climate, along with relief, bedrock, soil and vegetation, as well as human activity.
Rivers with similar discharge patterns can be grouped together into certain named groups, either by what causes the pattern and the part of the year in which it occurs (most classifications) or by the climate in which it most commonly appears (Beckinsale classification). There are many different classifications; however, most of them are localized to a specific area and cannot be used to classify all the rivers of the world.
When interpreting such records of discharge, it is important to factor in the timescale over which the average monthly values were calculated. It is particularly difficult to establish a typical annual discharge regime for rivers with high interannual variability in monthly discharge and/or significant changes in the catchment's characteristics (e.g. tectonic influences or the introduction of water management practices).
Overview
Maurice Pardé was the first to classify river regimes more thoroughly. His classification was based on the primary causes of the discharge pattern and on how many of them there are. Accordingly, he termed three basic types:
Simple regimes, where there is only one dominant factor.
Mixed or double regimes, where there are two dominant factors.
Complex regimes, where there are multiple dominant factors.
Pardé split the simple regimes further into temperature-dependent (glacial, mountain snow-melt, plain snow-melt; the latter two often called "nival") and rainfall-dependent or pluvial (equatorial, intertropical, temperate oceanic, mediterranean) categories.
Beckinsale later more clearly defined the distinct simple regimes based on climate present in the catchment area and thus splitting the world into "hydrological regions". His main inspiration was the Köppen climate classification, and he also devised strings of letters to define them. However, the system was criticised as it based the regimes on climate instead of purely on discharge pattern and also lacked some patterns.
Another attempt to provide a classification of world regimes was made in 1988 by Haines et al., which was based purely on the discharge pattern and classified all patterns into one of 15 categories; however, the determination is sometimes contradictory and quite complex, and the distinction does not differentiate between simple, mixed or complex regimes, as it determines the regime solely on the main peak, which contradicts the system commonly used in the Alpine region. Hence, rivers with nivo-pluvial regimes are commonly split into two different categories, while most pluvio-nival regimes are all grouped into a single category along with complex regimes – the uniform regime – despite showing a quite pronounced and regular yearly pattern. Moreover, it does not differentiate between temperature-dependent and rainfall-dependent regimes. Nonetheless, it added one new regime that was not present in Beckinsale's classification, the moderate mid-autumn regime with a peak in November (Northern Hemisphere) or May (Southern Hemisphere). This system, too, is very rarely used.
In later years, most of the research was only done in the region around the Alps, so that area is much more thoroughly researched than others, and most names for subclasses of regimes are for those found there. These were mostly further differentiated from Pardé's distinction. The most common names given, although they might be defined differently in different publications, are:
Glacial, for regimes where most of water is due to melting of snow and ice and the peak occurs in late summer.
Nival, with a peak in late spring or early summer and still high importance of snow-melt.
Pluvial, which is (almost) purely based on seasonal rainfall and not on snow. A peak is usually in winter, although it can occur at any point along the year. If it occurs in the time of monsoons, it is sometimes called tropical pluvial.
Nivo-pluvial, with a nival peak in late spring and a pluvial peak in the fall. The main minimum is in winter.
Pluvio-nival, which is similar to nivo-pluvial, but the nival peak is earlier (March/April on the Northern Hemisphere) and the main minimum is in summer, not in winter.
Nivo-glacial, for regimes sharing characteristics of glacial and nival regimes and a peak in mid summer.
Pardé's differentiation of simple regimes from mixed regimes is sometimes considered to be based on the number of peaks rather than the number of factors, as that is more objective. Most nival and even glacial regimes have some influence of rainfall, and regimes considered pluvial have some influence of snowfall in regions with continental climate; see the coefficient of nivosity. The distinction between both classifications can be seen with the nivo-glacial regime, which is sometimes considered a mixed regime, but is often considered a simple regime in more detailed studies. However, many groupings of multiple pluvial or nival peaks are still considered a simple regime in some sources.
Measurement of river regimes
River regimes, similarly to the climate, are compounded by averaging the discharge data for several years; ideally that should be 30 years or more, as with the climate. However, the data is much scarcer, and sometimes data for as few as eight years are used. If the flow is regular and shows a very similar year-to-year pattern, that could be enough, but for rivers with irregular patterns or for those that are dry most of the time, that period has to be much longer for accurate results. This is especially a problem with wadis, as they often have both traits. The discharge pattern is specific not only to a river, but also to a point along a river, as it can change with new tributaries and an increase in the catchment area.
This data is then averaged for each month separately. Sometimes, the average maximum and minimum for each month are also added. But unlike climate, rivers can drastically range in discharge, from small creeks with mean discharges less than 0.1 cubic meters per second to the Amazon River, which has an average monthly discharge of more than 200,000 cubic meters per second at its peak in May. For regimes, the exact discharge of a river in one month is not as important as its relation to the other monthly discharges measured at the same point along a particular river. And although discharge is still often used for showing seasonal variation, two other forms are more commonly used: the percentage of yearly flow and the Pardé coefficient.
Percentage of yearly flow represents how much of the total yearly discharge the month contributes and is calculated by the following formula:
$$p_m = \frac{\bar{Q}_m}{12\,\bar{Q}} \times 100\%,$$
where $\bar{Q}_m$ is the mean discharge of a particular month and $\bar{Q}$ is the mean yearly discharge. The discharge of an average month is $100\%/12 \approx 8.3\%$, and the total of all months should add to 100% (or rather roughly so, due to rounding).
Even more common is the Pardé coefficient, discharge coefficient or simply the coefficient, which is more intuitive as an average month would have a value of 1. Anything above that means there is bigger discharge than average and anything lower means that there is lower discharge than the average. It is calculated by the following equation:
$$k_m = \frac{\bar{Q}_m}{\bar{Q}},$$
where $\bar{Q}_m$ is the mean discharge of a particular month and $\bar{Q}$ is the mean yearly discharge. Pardé coefficients for all months should add to 12 and are without a unit.
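Both measures are simple to compute from a year of monthly means. The following Python sketch (the monthly values are invented purely for illustration) computes the percentage of yearly flow and the Pardé coefficients for one station.

```python
import numpy as np

# Invented monthly mean discharges in m^3/s, January..December.
q_month = np.array([3.1, 3.4, 5.9, 7.2, 5.8, 4.0, 2.6, 2.1, 2.4, 3.0, 3.6, 3.3])
q_year = q_month.mean()                           # mean yearly discharge

percent_of_flow = q_month / (12 * q_year) * 100   # sums to 100 % of the yearly flow
parde = q_month / q_year                          # Parde coefficients, sum to 12

print(percent_of_flow.round(1))
print(parde.round(2))
```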
The data is often presented in a special diagram called a hydrograph or, more specifically, an annual hydrograph, as it shows monthly discharge variation over a year but no rainfall pattern. The units used in a hydrograph can be either discharge, monthly percentage or Pardé coefficients. The shape of the graph is the same in any case; only the scale needs to be adjusted. From the hydrograph, maxima and minima are easy to spot and the regime can be determined more easily. Hence, hydrographs are as vital for river regimes as climographs are for climate.
Yearly coefficient
Similarly to Pardé's coefficient, there are also other coefficients that can be used to analyze the regime of a river. One possibility is to look at how many times the discharge during the peak is larger than the discharge during the minimum, rather than comparing to the mean as with Pardé's coefficient. It is sometimes called the yearly coefficient and is defined as
$$C_a = \frac{\bar{Q}_{\max}}{\bar{Q}_{\min}},$$
where $\bar{Q}_{\max}$ is the mean discharge of the month with the highest discharge and $\bar{Q}_{\min}$ is the mean discharge of the month with the lowest discharge. If $\bar{Q}_{\min}$ is 0, then the coefficient is undefined.
Annual variability
Annual variability shows how much the peaks on average deviate from a perfectly uniform regime. It is calculated as the standard deviation of the monthly mean discharges from the mean yearly discharge, divided by the mean yearly discharge and multiplied by 100%, i.e.:
$$V = \frac{100\%}{\bar{Q}} \sqrt{\frac{1}{12} \sum_{m=1}^{12} \left(\bar{Q}_m - \bar{Q}\right)^2}.$$
The most uniform regimes have a value below 10%, while it can reach more than 150% for rivers with the most drastic peaks.
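Both the yearly coefficient and the annual variability follow directly from the same monthly means. A minimal Python sketch (continuing the invented example above) is:

```python
import numpy as np

# Same invented monthly mean discharges as in the sketch above.
q_month = np.array([3.1, 3.4, 5.9, 7.2, 5.8, 4.0, 2.6, 2.1, 2.4, 3.0, 3.6, 3.3])
q_year = q_month.mean()

# Yearly coefficient: highest monthly mean over lowest monthly mean
# (undefined when the minimum is 0).
yearly_coefficient = q_month.max() / q_month.min()

# Annual variability: standard deviation of the monthly means from the yearly
# mean, expressed as a percentage of the yearly mean.
annual_variability = q_month.std() / q_year * 100

print(round(yearly_coefficient, 2), round(annual_variability, 1))
```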
Grimm coefficients
Grimm coefficients, used in Austria, are not defined for a single month, but for 'doppelmonats', i.e., for two consecutive months. The mean flow of both months – January and February, February and March, March and April, and so on – is added, still conserving 12 different values throughout the year. This is done since for nival regimes, this better correlates to different types of peak (nival, nivo-glacial, glacial etc.). They are defined as follows:
$$G_m = \frac{\bar{Q}_m + \bar{Q}_{m+1}}{\bar{Q}} \quad \text{(initial definition)}$$
$$G_m = \frac{\bar{Q}_m + \bar{Q}_{m+1}}{2\,\bar{Q}} \quad \text{(adapted definition, so values are closer to Pardé's; version used on Wikipedia)},$$
where $\bar{Q}_{13} = \bar{Q}_1$.
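A short Python sketch of the adapted definition (reusing the invented monthly means from the earlier sketches):

```python
import numpy as np

q_month = np.array([3.1, 3.4, 5.9, 7.2, 5.8, 4.0, 2.6, 2.1, 2.4, 3.0, 3.6, 3.3])
q_year = q_month.mean()

# Adapted Grimm coefficients for the twelve "doppelmonats": np.roll pairs each
# month with the following one and wraps December around to January.
grimm = (q_month + np.roll(q_month, -1)) / (2 * q_year)
print(grimm.round(2))   # an average doppelmonat yields a value of about 1
```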
Coefficient of nivosity
Pardé and Beckinsale determined whether a peak is pluvio-nival, nivo-pluvial, nival or glacial based on what percentage of the discharge during the warm season is contributed by melt-water, and not by the time of the peak, as is common today. However, this percentage has been calculated only for a few rivers. The values are the following:
0–6%: pluvial
6–14%: pluvio-nival
15–25%: nivo-pluvial
26–38%: transition to nival
39–50%: pure nival to nivo-glacial
more than 50%: glacial
Factors affecting river regimes
There are multiple factors that determine when a river will have a greater discharge and when a smaller one. The most obvious factor is precipitation, since most rivers get their water supply that way. However, temperature also plays a significant role, as do the characteristics of the catchment area, such as altitude, vegetation, bedrock, soil and lake storage. An important factor is also the human factor, as humans may either fully control the water supply by building dams and barriers, or partially control it by diverting water for irrigation, industrial and personal use. The factor that most differentiates the classification of river regimes from that of climate is that rivers can change their regime along their path due to a change of conditions and new tributaries.
Climate
The primary factor affecting river regimes is the climate of its catchment area, both by the amount of rainfall and by the temperature fluctuations throughout the year. This has led Beckinsale to classify regimes based primarily on the climate. Although there is correlation, climate is still not fully reflected in a river regime. Moreover, a catchment area can span through more than one climate and lead to more complex interactions between the climate and the regime.
A discharge pattern can closely resemble the rainfall pattern since rainfall in a river's catchment area contributes to its water flow, rise of the underground water and filling of lakes. There is some delay between the peak rainfall and peak discharge, which is also dependent on the type of soil and bedrock, since the water from rain must reach the gauging station for the discharge to be recorded. The time is naturally longer for bigger catchment areas.
If the water from precipitation is frozen, such as snow or hail, it has to melt first, leading to longer delays and shallower peaks. The delay becomes heavily influenced by the temperature, since temperatures below zero cause the snow to stay frozen until it becomes warmer in the spring, when temperatures rise and melt the snow, leading to a peak, which might again be a bit delayed. The time of the peak is determined by when the midday temperature soars sufficiently above freezing, which is usually considered to be when the average daily temperature rises above 0 °C. In the mildest continental climates, bordering the oceanic climate, the peak is usually in March on the Northern Hemisphere or September on the Southern Hemisphere, but can be as late as August/February on the highest mountains and ice caps, where the flow also varies heavily throughout the day.
Melting of glaciers alone can also supply large amounts of water even in areas where there is little to no precipitation, as in ice cap climate and cold dry and semi-dry climates.
On the other hand, high temperatures and sunny weather lead to a significant increase in evapotranspiration, either directly from the river or from moist soil and plants, so that less precipitation reaches the river and plants consume more water. For terrain in darker colors, the rate of evaporation is higher than for terrain in lighter colors due to lower albedo.
Relief
Relief often determines how sharp and how wide the nival peaks are, which already led Pardé to classify mountain nival and plain nival regimes separately. If the relief is rather flat, the snow will melt everywhere in a short period of time due to similar conditions, leading to a sharp peak about three months wide. However, if the terrain is hilly or mountainous, snow located in the lowlands will melt first, with the temperature gradually decreasing with altitude (on the order of 6.5 °C per 1000 m). Hence the peak is wider, and especially the decrease after the peak can extend all the way to late summer, when the temperatures are highest. Due to this phenomenon, the precipitation in lowland areas might fall as rain but as snow in higher areas, leading to one peak quickly after the rainfall and another when the temperatures start to melt the snow.
Another important aspect is altitude. At exceptionally high altitudes, the atmosphere is thinner, so the solar insolation is much greater, which is why Beckinsale differentiates mountain nival and glacial regimes from similar regimes found at higher latitudes.
Additionally, steeper slopes lead to faster surface runoff, leading to more prominent peaks, while flat terrain allows for lakes to spread, which regulate the discharge of the river downstream. Larger catchment areas also lead to shallower peaks.
Vegetation
Vegetation in general decreases surface runoff and consequently discharge of a river, and leads to greater infiltration. Forests dominated by trees that shed their leaves during winter have an annual pattern of the extent of water interception, which shapes the pattern in its own way. The impact of vegetation is noticeable in all areas but the driest and coldest, where vegetation is scarce. Vegetation growing in the river beds can drastically hinder the flow of water, especially in the summer, leading to smaller discharges.
Soil and bedrock
The most important aspect of the ground in this regard is the permeability and water-holding capacity of the rocks and soils in the discharge basin. In general, the more permeable the ground, the less pronounced the maxima and minima are, since the rocks accumulate water during the wet season and release it during the dry season; the lag time is also longer, since there is less surface runoff. If the wet season is very pronounced, the rocks become saturated and fail to infiltrate excess water, so all rainfall is quickly released into the stream. On the other hand, if the rocks are too permeable, as in karst terrain, rivers might have a notable discharge only when the rocks are saturated or the groundwater level rises, and would otherwise be dry, with all the water accumulating in subterranean rivers or disappearing into ponors. Examples of rocks with high water-holding capacity include limestone, sandstone and basalt, while materials used in urban areas (such as asphalt and concrete) have very low permeability, leading to flash floods.
Human activity
Human factors can also greatly change the discharge of a river. On one hand, water can be extracted either directly from a river or indirectly from groundwater for the purposes of drinking and irrigation, among others, lowering the discharge. For the latter, the consumption usually spikes during the dry season or during crop growth (i.e., summer and spring). On the other hand, waste waters are released into streams, increasing the discharge; however, they are more or less constant all year round, so they do not impact the regime as much.
Another important factor is the construction of dams, where a lot of water accumulates in a lake, making the minima and maxima less pronounced. In addition, the discharge of water is often in large part regulated in regard to other human needs, such as electricity production, meaning that the discharge of a river downstream of a dam can be completely different than upstream.
An example is the Aswan Dam: the yearly coefficient is lower at the dam than upstream, showing the regulating effect of the dam.
Simple regimes
Simple regimes are hence, strictly speaking, those that have exactly one peak; however, cases where both peaks are nival or both are pluvial are often still grouped with the simple regimes. They are grouped into five categories: pluvial, tropical pluvial, nival, nivo-glacial and glacial.
Pluvial regime
Pluvial regimes occur mainly in oceanic and mediterranean climates, such as the UK, New Zealand, the southeastern USA, South Africa and the Mediterranean regions. Generally, peaks occur in the colder season, from November to May on the Northern Hemisphere (although April and May peaks occur in a small area near Texas) and from June to September on the Southern Hemisphere. Pardé had two different types for this category – the temperate pluvial and the Mediterranean regimes. The peak is due to rainfall in the colder period, and the minimum is in summer due to higher evapotranspiration and usually less rainfall.
The temperate pluvial regime (Beckinsale symbol CFa/b) usually has a milder minimum and the discharge is quite high also during the summer.
Meanwhile, the Mediterranean regime (Beckinsale symbol CS) has a more pronounced minimum due to a lack of rainfall in the region, and rivers have a noticeably smaller discharge during summer, or even dry up completely.
Beckinsale distinguished another pluvial regime, with a peak in April or May, which he denoted CFaT as it occurs almost solely around Texas, Louisiana and Arkansas.
Tropical pluvial regime
The name for the regime is misleading; the regime commonly occurs anywhere the main rainfall is during summer. This includes the intertropical region, but also includes parts influenced by monsoon, extending north even to Russia and south to central Argentina. It is characterized by a strong peak during the warm period, with a maximum from May to December on the Northern Hemisphere and from January to June on the Southern Hemisphere. The regime therefore allows for a lot of variation, both in terms of when the peak occurs and how low the minimum is.
Pardé additionally differentiated this category into two subtypes and Beckinsale split it into four. The most common such regime is Beckinsale's regime AM (for monsoon, as in Köppen classification), which is characterized by a period of low discharge for up to four months. It occurs in western Africa, the Amazon basin, and southeastern Asia.
In more arid areas, the period of low water increases to six, seven months and up to nine, which Beckinsale classified as AW. The peak is hence narrower and greater.
In dry climates, there are ephemeral streams with irregular year-to-year patterns. Such a stream is dry most of the time and only carries discharge during flash floods. Beckinsale classifies it as BW, but only mentions it briefly. Due to the irregularity, the peak might be spread out or split into multiple peaks, and could resemble other regimes.
The previous three regimes are all called intertropical by Pardé, but the next one is differentiated by him as well, since it has two maxima instead of one. He termed it the equatorial regime, while Beckinsale used the symbol AF. It occurs in Africa around Cameroon and Gabon, and in Asia in Indonesia and Malaysia, with one peak in October/November/December and another in April/May/June, roughly mirrored between the hemispheres. Interestingly, the same pattern is not observed in South America.
Nival regime
Nival regime is characterized by a maximum which is contributed by the snow-melt as the temperatures increase above the melting point. Hence, the peaks occur in spring or summer. They occur in regions with continental and polar climate, which is on the Southern Hemisphere mostly limited to the Andes, Antarctica and minor outlying islands.
Pardé split the regimes into two groups, the mountain nival and the plain nival regimes, which Beckinsale later expanded. Plain regimes have maxima that are more pronounced and narrow, usually up to three months wide, while the minimum is milder and mostly not much lower than in the months apart from the peak. The minimum, if the regime is not transitioning to a pluvio-nival regime, is usually shortly after the maximum, while for mountain regimes it is often right before. Such regimes are exceptionally rare on the Southern Hemisphere.
Nival regimes are commonly intermittent in subarctic climate where the river freezes during winter.
Plain nival regime
Beckinsale differentiates six plain nival to nivo-pluvial regimes, mainly based on when the peak occurs. If the peak occurs in March or April, Beckinsale called this a DFa/b regime, which correlates to Mader's transitional pluvial regime. There, it is defined more precisely: the peak is in March or April, with the second-highest discharge in the other of those months, not in February or May. This translates to a peak in September or October on the Southern Hemisphere. This regime occurs in most European plains and parts of the St. Lawrence River basin.
If the nivo-pluvial peak occurs later, in April or May (October or November on the Southern Hemisphere), followed by the discharge of the other month, the regime is transitional nival or DFb/c. This regime is rarer and occurs mostly in parts of Russia and Canada, but also at some plains at higher altitudes.
In parts of Russia and Canada and on elevated plains, the peak can be even later, in May or June (November or December on the Southern Hemisphere). Beckinsale denoted this regime with DFc.
Beckinsale also added another category, Dwd, for rivers that completely diminish during the winter due to cold conditions with a sharp maximum in the summer. Such rivers occur in Siberia and northern Canada. The peak can be from May to July on Northern Hemisphere or from November to January on Southern Hemisphere.
Apart from that, he also added another category for regimes with pluvio-nival or nivo-pluvial maxima where the pluvial maximum corresponds to a Texan or early tropical pluvial regime, not the usual temperate pluvial one. This regime occurs in parts of China and around Kansas.
If this peak happens later, Beckinsale classified it as DWb/c. The peak can occur as late as September on the Northern Hemisphere or March on the Southern Hemisphere.
Mountain nival regime
Pardé and Beckinsale both assigned only one category to the mountain nival regime (symbol HN), but Mader distinguishes several of them. If the peak occurs in April or May on Northern Hemisphere and October or November on Southern Hemisphere with the discharge of the other of those two months following, it is called transitional nival, common for lower hilly areas.
If the peak is in May or June on the Northern Hemisphere, or November or December on Southern Hemisphere, followed by the other of those two, the regime is called mild nival.
The regime which Mader just calls 'nival' is when the highest discharge is in June/December, followed by July/January, and then May/November.
Nivo-glacial regime
The nivo-glacial regime occurs in areas where seasonal snow meets the permanent ice of glaciers on top of mountains or at higher latitudes. Therefore, both melting of snow and melting of glacier ice contribute to produce a maximum in early or mid summer. In principle, plain and mountain variants could still be distinguished, but that distinction is rarely made despite being quite obvious. The regime is also characterized by great diurnal changes and a sharp maximum. Pardé and Beckinsale did not distinguish this regime from the glacial and nival regimes. Mader defines it as having a peak in June or July, followed by the other of the two, and then by August's discharge, which translates to a peak in December or January, followed by the other of the two and then February, for the Southern Hemisphere. Such regimes occur in the Alps, the Himalayas, the Coast Mountains and the southern Andes.
Plain nivo-glacial regimes occur on Greenland, northern Canada and Svalbard.
Glacial regime
The glacial regime is the most extreme variety of the temperature-dependent regimes and occurs in areas where more than 20% of the catchment area is covered by glaciers. This is usually at high altitudes, but it can also happen in polar climates, which was not explicitly mentioned by Pardé, who grouped both categories together. Rivers with this regime also experience great diurnal variations.
The discharge is heavily dominated by the melting of glaciers, leading to a strong maximum in late summer and a very pronounced minimum during the rest of the year, unless the river has major lake storage, such as the Rhône below Lake Geneva or the Baker River. Mader defines it as having the highest discharge in July or August, followed by the other of those months.
In the most extreme cases (mostly in Antarctica), there can also be a plain glacial regime.
Mixed regimes
Mixed or double regimes are regimes where one peak is due to a temperature-dependent factor (snow or ice melt) and one is due to rainfall. There are many possible combinations, but only some have been studied in more detail. They can also be split into two categories – plain (versions of Beckinsale's plain nival regimes with another peak) and mountain. They can in general be thought of as combinations of two simple regimes, but the cold-season pluvial peak is usually in autumn, not in late winter as is common for the temperate pluvial regime.
Mixed regimes are usually split into two other categories: the nivo-pluvial and pluvio-nival regimes, the first having a nival peak in late spring (April to June on Northern Hemisphere, October to December on Southern Hemisphere) and the biggest minimum in the winter while the latter usually has a nival peak in early spring (March or April on Northern Hemisphere, September or October on Southern Hemisphere) and the biggest minimum in the summer.
Plain mixed regime
Beckinsale did not really classify the regimes by the number of factors contributing to the discharge, so such regimes are grouped with simple regimes in his classification as they appear in close proximity to those regimes. For all of his six examples, mixed regimes can be found, although for DFa and DWd, that is quite rare. In the majority of cases, they are nivo-pluvial with the main minimum in winter, except for DFa/b.
Mountain mixed regime
Mountain mixed regimes are thoroughly researched and quite common in the Alps, and rivers with such regimes rise in most mountain chains. Beckinsale does not distinguish them from plain regimes; however, newer sources classify them rather differently from his system.
Mader classifies mixed regimes with the nival peaks corresponding to mild nival or Mader's nival as 'winter nival' and 'autumn nival', depending on the pluvial peak. The winter peak is usually small. In monsoonal areas, the peak can be in summer as well.
Mader denoted only those regimes with nival peaks corresponding to transitional nival as 'nivo-pluvial'. Hrvatin in his distinction also differentiated between 'high mountain Alpine nivo-pluvial regime' and 'medium mountain Alpine nivo-pluvial regime', the first showing significant difference between the minima and the other not, although some regimes in his classification also have mild nival peaks. In Japan, the pluvial peak is in the summer.
In Mader's classification, any regime with a transitional pluvial peak is pluvio-nival. Hrvatin also defines it further with a major overlap to Mader's classification. If minima are rather mild, then it is classified as 'Alpine pluvio-nival regime', if minima are more pronounced but the peaks are mild, then it is classified as 'Dinaric-Alpine pluvio-nival regime' and if the peaks are also pronounced, then it is 'Dinaric pluvio-nival regime'. His 'Pannonian pluvio-nival regime' corresponds to a plain mixed regime. Japan has mixed regimes with tropical pluvial peak.
Complex regimes
Complex regimes are the catch-all category for all rivers whose discharge is influenced by many different factors occurring at different times of the year. For rivers that flow through many different climates and have many tributaries from different climates, the regime can become unrepresentative of any one part of the river's catchment area. Many of the world's longest rivers have such regimes, including the Nile, the Congo, the St. Lawrence River and the Rhône. A special form of such regimes is the uniform regime, where all peaks and minima are extremely mild.
References
Bibliography
Rivers
Hydrology | Discharge regime | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 6,133 | [
"Hydrology",
"Environmental engineering"
] |
76,906,393 | https://en.wikipedia.org/wiki/Freya%20Blekman | Freya Blekman is a Dutch professor at the University of Hamburg and a lead scientist at Deutsches Elektronen-Synchrotron (DESY). She contributed to the discovery of the Higgs boson at CERN and has been awarded a Helmholtz Distinguished Professorship. Her own research specializes in the physics and experimental aspects of elementary particles and fields, specifically probing the top quark sector with precision measurements.
Early life and education
Blekman was born in the Netherlands. She grew up in a family focused largely on the arts and social sciences. Her grandmother was more mathematically oriented, but was forced to quit studying after high school to support her brother. This history pushed Blekman to go to university herself and pursue the sciences.
As an undergraduate, she was part of the University of Amsterdam Science Fair team that presented its experiments during the CERN 40th anniversary event in 1994, jumpstarting her future career at the laboratory. While originally studying biology, Blekman ran into her physics teacher in the street, which convinced her to switch to studying physics. Following her undergraduate degree in physics, she received a Master of Science from the University of Amsterdam in 2000 for R&D work on the LHCb experiment at CERN. In 2005, she received her PhD from the University of Amsterdam, although most of the research was based at the D0 experiment at Fermilab in the United States. Her thesis addressed the measurement of top quark pair production in the all-hadronic channel using the D0 experiment.
Career and research
After receiving her PhD, Blekman joined the Compact Muon Solenoid (CMS) experiment in 2005 as a postdoctoral researcher, first at Imperial College London from 2005 to 2007 and then at Cornell University from 2007 to 2010. As the only woman among 100 people at Imperial College London, Blekman focused on CMS software and triggers, particle flow, and tau identification. At Cornell she worked on pixel detector commissioning and the measurement of the top quark pair production cross section using the Large Hadron Collider (LHC), which was turning on for the first time.
Following her postdoc, Blekman was an assistant professor at Vrije Universiteit Brussel from 2010 to 2014 before becoming an associate professor there until 2018. She then served as a full professor at the university from 2019 to 2021, where she researched physics beyond the standard model in the top sector. During her academic career, Blekman has been a member of many notable societies, including the German Physical Society, the Institute of Physics (UK), and the Belgian and Dutch physics societies.
Blekman made considerable contributions within the team that worked to answer questions about dark matter, gravitational waves, quantum theories, and the physics of the Higgs boson. Blekman and her colleagues in the CMS collaboration at CERN contributed to the discovery of the Higgs boson, earning them the Science magazine Breakthrough of the Year Award in 2012.
Blekman has led multiple large physics groups. Most prominently, she created the physics outreach activities of the CMS Collaboration, where she still holds the role of the collaboration's first Physics Communication Officer. In this role, Blekman is responsible for outreach and communication for the 4000+ international scientists at CERN who publish 130+ scientific papers per year. She is also the convener of the "Beyond-Two-Generations" (B2G) physics group, which has over 250 members worldwide and has published over 60 journal articles. The group works to analyze new particles using heavy quarks and W/Z/H bosons.
Since 2021 Blekman has been a lead scientist at Deutsches Elektronen-Synchrotron (DESY), with a joint appointment at the University of Hamburg via the Helmholtz Distinguished Professor Recruitment Initiative. In addition to her full-time professorship, Blekman is also a visiting professor at Vrije Universiteit Brussel and the University of Oxford. Her future work includes searching for signs of new physics using the Large Hadron Collider (LHC) and the Future Circular e+e- Collider (FCC-ee) at CERN.
Publications
Blekman has a Hirsch index of 210, with over 1500 published works. The majority of her published work is within the CMS or D0 collaborations. Her works currently have 62,439 citations. The most cited are:
Chatrchyan, S., et al. (2012). Observation of a new boson at a mass of 125 GeV with the CMS experiment at the LHC. Physics Letters. Part B, 716(1), 30–61. https://doi.org/10.1016/j.physletb.2012.08.021
The CMS Collaboration. (2008). The CMS experiment at the CERN LHC. Journal of Instrumentation: An IOP and SISSA Journal, 3(08), S08004–S08004. https://doi.org/10.1088/1748-0221/3/08/s08004
Individually produced papers:
Blekman, F. (2005). Top Quark Pair Production in Proton Anti-Proton Collisions. OSTI.GOV. https://doi.org/10.2172/15017276
Blekman, F. (2012). Search for same-sign top production at the LHC. Societa Italiana di Fisica. 173–176. https://doi.org/10.1393/ncc/i2012-11241-6
Blekman, F. (2013). Measurement of the top pair invariant mass distribution and search for New Physics (CMS). Proceedings of Science. https://doi.org/10.22323/1.174.0212
Blekman, F. (2007). Studies for Semileptonic B Decays from B0. Proceedings of Science. https://doi.org/10.22323/1.021.0208
Blekman, F. (2007). Measurement of inclusive differential cross sections for Upsilon(1S) production in ppbar collisions at \sqrt s=1.96 TeV. Proceedings of Science. https://doi.org/10.22323/1.021.0234
Honors and awards
2011 – Odysseus II Grant by the Flemish funding agency FWO
2012 – Science Magazine/Breakthrough of the Year 2012 Award (for discovery of Higgs Boson)
2013 – USA Department of Energy LHC Physics Center Fellow
2013 – European Physics Society High Energy and Particle Physics (HEPP) prize for discovery of Higgs-boson (for CMS collaboration)
2016 – Jaarprijs Science Communication award of the Royal Flemish Academy of Belgium for the Arts and Sciences (KVAB) for promotion of particle physics on social media
2019 – European Physics Society High Energy and Particle Physics (HEPP) prize for physics of the Top quark (for D0 collaboration)
2019 – 2021 – USA Department of Energy LHC Physics Center Senior Distinguished Researcher
2021 – Helmholtz Distinguished Professorship
Memberships
1999 – Member of CERN student program
1999 – Chairperson of ATLAS-Canada Standing Review Committee, Natural Sciences and Engineering Research Council of Canada (NSERC), and Canada IPPP Durham Advisory Board
2016 – 2019 – Member and co-chairperson of the CMS publication committee on Supersymmetry
2014 – 2018 – President of the PR and Outreach Council, VUB Faculty of Science and Bioengineering
2016 – 2018 – Co-convener of the Top physics group for the future electron-positron collider FCC-ee study
2012–present – Convener of Beyond-Two-Generations (B2G) physics group
2018–present – First ever CMS Physics Communication Officer
2021–present – Lead Scientist at DESY
Social media coverage
Blekman is very active on X under the username @freyablekman, where she shares advancements in her work on particle physics. She was also featured at CERN in a YouTube series about the hadron collider restart in 2015. This promotion of particle physics on social media won her the Jaarprijs Science Communication award of the Royal Flemish Academy of Belgium for the Arts and Sciences (KVAB) in 2016.
References
Living people
University of Amsterdam alumni
Academic staff of Vrije Universiteit Brussel
Academic staff of the University of Hamburg
Dutch physicists
Dutch women physicists
Particle physicists
Year of birth missing (living people) | Freya Blekman | [
"Physics"
] | 1,801 | [
"Particle physicists",
"Particle physics"
] |
76,906,612 | https://en.wikipedia.org/wiki/Quantum%20metrological%20gain | The quantum metrological gain is defined in the context of carrying out a metrological task using a quantum state of a multiparticle system. It is the sensitivity of parameter estimation using the state compared to what can be reached using separable states, i.e., states without quantum entanglement. Hence, the quantum metrological gain is given as the ratio of the sensitivity achieved by the state and the maximal sensitivity achieved by separable states. The best separable state is often the trivial fully polarized state, in which all spins point in the same direction. If the metrological gain is larger than one, then the quantum state is more useful for making precise measurements than separable states. Clearly, in this case the quantum state is also entangled.
Background
The metrological gain is, in general, the gain in sensitivity of a quantum state compared to a product state. Metrological gains up to 100 are reported in experiments.
Let us consider a unitary dynamics with a parameter $\theta$ generated by a Hamiltonian $H$ from the initial state $\varrho_0$,
$$\varrho(\theta) = \exp(-iH\theta)\,\varrho_0\,\exp(+iH\theta).$$
The quantum Fisher information $F_Q[\varrho_0, H]$ constrains the achievable precision in the statistical estimation of the parameter $\theta$ via the quantum Cramér–Rao bound as
$$(\Delta\theta)^2 \ge \frac{1}{m\,F_Q[\varrho_0, H]},$$
where $m$ is the number of independent repetitions. From the formula, one can see that the larger the quantum Fisher information, the smaller the uncertainty of the parameter estimation can be.
For a multiparticle system of $N$ spin-1/2 particles,
$$F_Q[\varrho, J_l] \le N$$
holds for separable states, where $F_Q$ is the quantum Fisher information,
$$J_l = \sum_{n=1}^{N} j_l^{(n)},$$
and $j_l^{(n)}$ is a single particle angular momentum component. Thus, the metrological gain can be characterized by
$$g = \frac{F_Q[\varrho, J_l]}{N}.$$
The maximum for general quantum states is given by
$$F_Q[\varrho, J_l] \le N^2.$$
Hence, quantum entanglement is needed to reach the maximum precision in quantum metrology. Moreover, for quantum states with an entanglement depth $k$,
$$F_Q[\varrho, J_l] \le s k^2 + r^2$$
holds, where $s = \lfloor N/k \rfloor$ is the largest integer smaller than or equal to $N/k$ and $r = N - sk$ is the remainder from dividing $N$ by $k$. Hence, higher and higher levels of multipartite entanglement are needed to achieve a better and better accuracy in parameter estimation. It is possible to obtain a weaker but simpler bound,
$$F_Q[\varrho, J_l] \le k N.$$
Hence, a lower bound on the entanglement depth is obtained as
$$k \ge \frac{F_Q[\varrho, J_l]}{N}.$$
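To make these bounds concrete, the following Python sketch (an illustration, not from the source; the state and conventions are assumed) evaluates the quantum Fisher information of an $N$-qubit Greenberger–Horne–Zeilinger (GHZ) state for the collective spin component $J_z$, using the pure-state identity $F_Q = 4(\langle J_z^2\rangle - \langle J_z\rangle^2)$. The resulting gain $F_Q/N = N$ saturates the general bound and certifies entanglement depth $N$.

```python
import numpy as np

N = 4
dim = 2 ** N

# GHZ state (|0...0> + |1...1>) / sqrt(2)
psi = np.zeros(dim)
psi[0] = psi[-1] = 1 / np.sqrt(2)

# Collective J_z = (1/2) * sum_n sigma_z^(n) is diagonal in the computational
# basis; basis state |i> has eigenvalue (N - 2 * popcount(i)) / 2.
jz = np.array([(N - 2 * bin(i).count("1")) / 2 for i in range(dim)])

# Pure-state quantum Fisher information: F_Q = 4 * variance of J_z.
f_q = 4 * (psi @ (jz**2 * psi) - (psi @ (jz * psi)) ** 2)

print(f_q, f_q / N)   # F_Q = N**2 = 16.0 and gain g = N = 4.0 (Heisenberg scaling)
```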
Mathematical definition for a system of qudits
The situation for qudits with a dimension larger than 2 is more complicated. In this more general case, the metrological gain for a given Hamiltonian is defined as the ratio of the quantum Fisher information of a state and the maximum of the quantum Fisher information for the same Hamiltonian for separable states,
$$g_{\mathcal{H}}(\varrho) = \frac{F_Q[\varrho, \mathcal{H}]}{\max_{\varrho_{\rm sep}} F_Q[\varrho_{\rm sep}, \mathcal{H}]},$$
where the Hamiltonian is
$$\mathcal{H} = \sum_{n=1}^{N} h_n,$$
and $h_n$ acts on the $n$th subsystem. The maximum of the quantum Fisher information for separable states is given as
$$\max_{\varrho_{\rm sep}} F_Q[\varrho_{\rm sep}, \mathcal{H}] = \sum_{n=1}^{N} \left[\lambda_{\max}(h_n) - \lambda_{\min}(h_n)\right]^2,$$
where $\lambda_{\max}(h_n)$ and $\lambda_{\min}(h_n)$ denote the maximum and minimum eigenvalues of $h_n$, respectively.
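As a quick illustration of this separable bound (the local Hamiltonians below are assumed purely for the example), the maximum only requires the eigenvalue spread of each single-site operator:

```python
import numpy as np

# Five spin-1 sites with h_n = j_z = diag(1, 0, -1), chosen only for illustration.
h_list = [np.diag([1.0, 0.0, -1.0]) for _ in range(5)]

# Separable maximum of the QFI: sum over sites of (lambda_max - lambda_min)^2.
sep_max = sum(
    (np.linalg.eigvalsh(h).max() - np.linalg.eigvalsh(h).min()) ** 2 for h in h_list
)
print(sep_max)   # (1 - (-1))**2 = 4 per site -> 4 * 5 = 20.0
```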
We also define the metrological gain optimized over all local Hamiltonians as
$$g(\varrho) = \max_{\mathcal{H}\ \mathrm{local}} g_{\mathcal{H}}(\varrho).$$
The case of qubits is special. In this case, if the local Hamiltonians are chosen to be
$$h_n = \vec{c}_n \cdot \vec{\sigma}^{(n)} + c_n^{(0)} \mathbb{1},$$
where the $\vec{c}_n$ are unit vectors, the $c_n^{(0)}$ are real numbers and $\vec{\sigma}^{(n)}$ is the vector of Pauli matrices acting on the $n$th qubit, then
$$\max_{\varrho_{\rm sep}} F_Q[\varrho_{\rm sep}, \mathcal{H}] = 4N,$$
independently from the concrete values of $\vec{c}_n$ and $c_n^{(0)}$. Thus, in the case of qubits, the optimization of the gain over the local Hamiltonians can be simpler. For qudits with a dimension larger than 2, the optimization is more complicated.
Relation to quantum entanglement
If the gain is larger than one,
$$g(\varrho) > 1,$$
then the state is entangled, and it is more useful metrologically than separable states. In short, we call such states metrologically useful.
If all $h_n$ have identical lowest and highest eigenvalues, then
$$g(\varrho) > k$$
implies metrologically useful $(k+1)$-partite entanglement. If
$$g(\varrho) > N - 1$$
holds for the gain, then the state has metrologically useful genuine multipartite entanglement. In general,
$$g(\varrho) \le N$$
holds for quantum states.
Properties of the metrological gain
The metrological gain cannot increase if we add an ancilla to a subsystem or we provide an additional copy of the state.
The metrological gain is convex in the quantum state.
Numerical determination of the gain
There are efficient methods to determine the metrological gain via an optimization over local Hamiltonians. They are based on a see-saw method that iterates two steps alternately.
References
Quantum information science
Quantum optics | Quantum metrological gain | [
"Physics"
] | 819 | [
"Quantum optics",
"Quantum mechanics"
] |
76,913,611 | https://en.wikipedia.org/wiki/Milnor%20conjecture%20%28Ricci%20curvature%29 | In 1968 John Milnor conjectured that the fundamental group of a complete manifold is finitely generated if its Ricci curvature stays nonnegative. In an oversimplified interpretation, such a manifold has a finite number of "holes". A version for almost-flat manifolds holds by work of Gromov.
In two dimensions, a complete manifold $M$ with nonnegative Ricci curvature has a finitely generated fundamental group as a consequence of the fact that if $\operatorname{Ric} \ge 0$ for noncompact $M$, then $M$ is flat or diffeomorphic to $\mathbb{R}^2$, by work of Cohn-Vossen from 1935.
In three dimensions the conjecture holds, since a noncompact $M$ with $\operatorname{Ric} \ge 0$ is either diffeomorphic to $\mathbb{R}^3$ or has a universal cover that splits isometrically. The diffeomorphism part is due to Schoen–Yau (1982), while the splitting part is by Liu (2013). Another proof of the full statement has been given by Pan (2020).
In 2023 Bruè, Naber and Semola disproved in two preprints the conjecture for six or more dimensions by constructing counterexamples that they described as "smooth fractal snowflakes". The status of the conjecture for four or five dimensions remains open.
References
Differential geometry
Riemannian manifolds
Disproved conjectures | Milnor conjecture (Ricci curvature) | [
"Mathematics"
] | 253 | [
"Space (mathematics)",
"Riemannian manifolds",
"Metric spaces",
"Topology stubs",
"Topology"
] |
68,090,839 | https://en.wikipedia.org/wiki/Matter%20%28journal%29 | Matter is a peer-reviewed scientific journal that covers the general field of materials science. It is published by Cell Press and the editor-in-chief is Steven W. Cranford.
External links
Academic journals established in 2019
Cell Press academic journals
Monthly journals
English-language journals
Materials science journals | Matter (journal) | [
"Materials_science",
"Engineering"
] | 60 | [
"Materials science stubs",
"Materials science journals",
"Materials science journal stubs",
"Materials science"
] |
68,091,202 | https://en.wikipedia.org/wiki/Chromatin%20variant | A chromatin variant (also known as an epigenetic lesion, epimutation or epigenetic alteration) corresponds to a section of the genome that differs in chromatin state across cell types/states within an individual (intra-individual) or between individuals for a given cell type/state (inter-individual). Chromatin variants distinguish DNA sequences that differ in their function in one cell type/state versus another. Chromatin variants are found across the genome, inclusive of repetitive and non-repetitive DNA sequences, and range in size. The smallest chromatin variants cover a few hundred DNA base pairs, such as those seen at promoters, enhancers or insulators. The largest chromatin variants capture a few thousand DNA base pairs, such as those seen at Large Organized Chromatin Lysine domains (LOCKs) and Clusters Of Cis-Regulatory Elements (COREs), such as super-enhancers.
References
Gene expression
Molecular genetics | Chromatin variant | [
"Chemistry",
"Biology"
] | 199 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
68,092,222 | https://en.wikipedia.org/wiki/Institute%20for%20Safe%20Medication%20Practices | The Institute for Safe Medication Practices (ISMP) is an American 501(c)(3) organization focusing on the prevention of medication errors and promoting safe medication practices. It is affiliated with ECRI.
Activities
Among others, ISMP maintains and disseminates a list of "do not crush" medications, as well as clinical best practices. The ISMP's Medication Safety Self-Assessment tool has been used in surveys of medication safety in hospitals in the United States and elsewhere.
The ISMP frequently investigates and reports on medication errors that have occurred in practice. These investigations are often published in the peer-reviewed journal Hospital Pharmacy.
References
Pharmacology
Drug safety
Patient safety | Institute for Safe Medication Practices | [
"Chemistry"
] | 139 | [
"Pharmacology",
"Drug safety",
"Medicinal chemistry stubs",
"Medicinal chemistry",
"Pharmacology stubs"
] |
68,105,840 | https://en.wikipedia.org/wiki/Nitro-Mannich%20reaction | The nitro-Mannich reaction (or aza-Henry reaction) is the nucleophilic addition of a nitroalkane (or the corresponding nitronate anion) to an imine, resulting in the formation of a beta-nitroamine. With the reaction involving the addition of an acidic carbon nucleophile to a carbon-heteroatom double bond, the nitro-Mannich reaction is related to some of the most fundamental carbon-carbon bond forming reactions in organic chemistry, including the aldol reaction, Henry reaction (nitro-aldol reaction) and Mannich reaction.
Although extensive research has been conducted into the aforementioned reactions, the nitro-Mannich reaction has been studied to a far lesser extent, even though it has been known for well over 100 years. Significant attention only started to develop after the report of Anderson and co-workers at the turn of the century, which has since resulted in a wide range of novel methodologies. The interest in the nitro-Mannich reaction stems from the synthetic utility of the beta-nitroamine products. They can be further manipulated by various methods: reductive removal of the nitro group allows access to monoamines, reduction of the nitro group affords 1,2-diamines, and conversion of the nitro group into a carbonyl functionality furnishes beta-aminocarbonyl compounds.
History
Early Examples of the Nitro-Mannich Reaction
The first nitro-Mannich reaction was reported by Henry in 1896. In this report, Henry described the addition of nitroalkanes to an imine derived from a hemiaminal. Elimination of water forms an imine in situ, which then reacts with the nitroalkane (as its nitronate anion) to form a beta-nitroamine that can subsequently react further, forming one of two adducts. Although this is the first report of the nitro-Mannich reaction, no yields of the products were given.
After Henry's seminal report, Mousset and Duden made contributions to the field by studying the addition of branched nitroalkanes to hemiaminals using the same procedures reported by Henry. An example of nitro group reduction to an amine using SnCl2 and HCl was also disclosed by Duden and co-workers, representing the first use of the nitro-Mannich reaction to prepare polyamines. The next report did not appear until 1931, when Cerf de Mauny conducted a thorough study of Henry's original work using hemiaminals. The scope of the reaction was extended to higher-order nitroalkanes, affording beta-nitroamines in excellent yields.
The next contributions appeared in 1946, when Senkus and Johnson independently reported their studies into the nitro-Mannich reaction. Senkus and co-workers illustrated that nitroalkanes may react with methanal (formaldehyde) and substituted primary amines in the presence of sodium sulfate (Na2SO4) to afford a variety of substituted beta-nitroamines in moderate to good yields. When using primary nitroalkane substrates, double addition of the nitroalkane to the imine was observed, but this could be avoided by employing secondary nitroalkanes. The study reported by Johnson and co-workers also employed formaldehyde, but this was used in conjunction with a selection of secondary amines, furnishing the corresponding beta-nitroamines in moderate to good yields. Both authors also reduced the nitro group to an amine functionality using Raney Nickel.
Up until this point, all of the nitro-Mannich methodologies reported had used imines that were formed in situ from an aldehyde and an amine. In 1950, Hurd and Strong reported the first nitro-Mannich reaction using a preformed imine. Exposing an imine to a nitroalkane afforded substituted beta-nitroamines in moderate yields. The moderate yields obtained when using the preformed imine could possibly be attributed to a competing decomposition pathway of the imine or the product.
These early nitro-Mannich methodologies have been used by a number of groups for the synthesis of a variety of heterocyclic products, conjugated nitroalkenes (via elimination of the amino group) and dinitroamines.
Non-Enantioselective Nitro-Mannich Reactions
Although the nitro-Mannich reaction enables access to synthetically useful beta-nitroamine motifs, the lack of selectivity in their synthesis remained a significant problem. Interest in the field started to increase considerably after Anderson and co-workers reported the first diastereoselective acyclic nitro-Mannich reaction. A nitroalkane and n-butyllithium (nBuLi) were combined at -78 °C to give the corresponding nitronate ions. A selection of N-PMB imines were then added to the reaction mixture and after quenching with acetic acid, the beta-nitroamine products were afforded in good yields with moderate to good diastereoselectivities.
The authors then converted the beta-nitroamines into unprotected 1,2-diamines via a two step procedure. Firstly, the nitro group was reduced to amines using samarium iodide, followed by PMB removal in the presence of ceric ammonium nitrate (CAN). The same group later reported improvements to this methodology and expanded these preliminary results in further publications.
In 2000, Anderson and co-workers reported the racemic nitro-Mannich reaction of a TMS-protected nitronate with N-PMB or N-PMP imines catalysed by Sc(OTf)3. The authors first attempted the nitro-Mannich reaction using lithium nitronates; however, no product was formed under these conditions. As a result, the TMS-protected nitronate was used in conjunction with scandium(III) trifluoromethanesulfonate [Sc(OTf)3] (4 mol%) to afford the beta-nitroamine products in moderate to excellent yields for a range of alkyl and aryl N-PMB and N-PMP protected imines.
Following Anderson's report, Qian and co-workers described the ytterbium(III) isopropoxide [Yb(OiPr)3] catalysed nitro-Mannich reaction of N-sulfonyl imines and nitromethane. Using mild reaction conditions, the β-nitroamines bearing electron-rich and electron-poor aryl substituents were furnished in excellent yields after short reaction times.
Direct Metal Catalysed Enantioselective Nitro-Mannich Reactions
The first enantioselective metal catalysed nitro-Mannich reaction was reported by Shibasaki and co-workers in 1999. The authors used a binaphthol ligated Yb/K heterobimetallic complex to induce enantiocontrol in the reaction, furnishing β-nitroamines in moderate to good yields with good enantioselectivities. However, nitromethane was the only nitroalkane that could be used with the heterobimetallic complex and the reactions were very slow (2.5–7 days) even when using a relatively high catalyst loading of 20 mol%.
Building on the work of Shibasaki, Jørgensen and co-workers reported the asymmetric nitro-Mannich reaction of nitroalkanes and N-PMP-α-iminoesters. Catalysed by Cu(II)-BOX 52 and triethylamine (Et3N), the reaction afforded β-nitro-α-aminoesters in good yields with excellent enantiocontrol (up to 99% ee). The reaction tolerates a selection of nitroalkanes but is limited exclusively to N-PMP-α-iminoesters. The authors propose that the reaction proceeds via a chair-like transition structure in which both the N-PMP-α-iminoester and the nitronate anion bind to the Cu(II)-BOX complex.
In 2007, Feng and co-workers reported that CuOTf used in conjunction with a chiral N-oxide ligand and DIPEA is an efficient catalytic system for the enantioselective nitro-Mannich reaction of nitromethane with N-sulfonyl imines. Combining all of the reagents in THF at –40 °C resulted in the formation of β-nitroamines in excellent yields (up to 99%) and good enantioselectivities for a variety of substituted aryl groups. The postulated intermediate complex is similar to the transition structure proposed by Jørgensen and co-workers, with the ligated copper species binding to the N-sulfonyl imine. A hydrogen-bonding interaction is proposed to exist between the amide NH and the nitronate species.
Around the same time as Feng’s report, Shibasaki and co-workers reported one of the most successful enantioselective nitro-Mannich reactions, catalysed by a Cu/Sm heterobimetallic complex. Combining N-Boc protected imines and nitroalkanes resulted in moderate to excellent yields and good to excellent enantioselectivities of the products. Interestingly, the nitro-Mannich reaction catalysed by this complex affords syn-β-nitroamines, whereas most other enantioselective methodologies favour anti-β-nitroamines. The authors later reported an improved version of the protocol and proposed a mechanistic rationale to account for the observed syn diastereoselectivity.
Organocatalysed Enantioselective Nitro-Mannich Reactions
Since the inception of organocatalysis, numerous accounts of organocatalysed enantioselective nitro-Mannich reactions have been reported. These include examples using Brønsted base catalysts, Brønsted acid catalysts, bifunctional Brønsted base/H-bond donor catalysts and phase-transfer catalysts.
Bifunctional Brønsted Base/H-Bond Donor Organocatalysis
Small chiral molecule H-bond donors can be used as a powerful tool for enantioselective synthesis. These low molecular weight entities containing structural frameworks with distinct H-bond donor motifs can catalyse a wide range of carbon-carbon and carbon-heteroatom bond-forming reactions, occurring via H-bond donor activation of the reaction partners as well as through organisation of their spatial arrangement. This area of organic chemistry received limited attention until the seminal work of Jacobsen and Sigman, in which they reported a highly enantioselective Strecker reaction using a H-bond donor organocatalyst.
Building on the work of Jacobsen, it was recognised that H-bond donor motifs can be linked via a chiral scaffold to Brønsted basic moieties, creating a new class of bifunctional organocatalysts. The incorporation of these two functionalities allows the simultaneous activation of the nucleophile (via deprotonation by the Brønsted base) and the electrophile (via H-bond donation), thus allowing the development of novel enantioselective reactions through new activation modes.
Based on this concept, Takemoto and co-workers reported the first bifunctional Brønsted base/H-bond donor thiourea organocatalyst 62 in 2003. This organocatalyst, based on the trans-1,2-cyclohexanediamine scaffold, imparts high levels of enantiocontrol in the Michael addition of dimethyl malonate to a variety of nitrostyrenes. After this seminal report, numerous other bifunctional organocatalysts were developed from the readily available cinchona alkaloid scaffold. The quinidine-derived bifunctional organocatalyst 63 (first reported by Deng and co-workers) acts as a proficient catalyst for Michael addition reactions. In this organocatalytic system, the H-bonding interaction arising from the quinoline alcohol is thought to be crucial for achieving high enantioselectivities.
The bifunctional thioureas 64 and 65, likewise derived from the cinchona alkaloids, are also very effective catalysts in Michael addition reactions. The bifunctional thiourea 66 is able to impart high levels of enantiocontrol in the nitro-aldol (Henry) reaction; it differs structurally from thioureas 64 and 65 in that the thiourea moiety is attached to the quinoline ring of the cinchona scaffold instead of the central stereocentre. Numerous other bifunctional organocatalyst systems have also been described, further expanding the range of reactions that can be conducted using bifunctional (thio)urea organocatalysis.
References
Carbon-carbon bond forming reactions
Multiple component reactions
Name reactions | Nitro-Mannich reaction | [
"Chemistry"
] | 2,833 | [
"Name reactions",
"Carbon-carbon bond forming reactions",
"Organic reactions"
] |
68,109,605 | https://en.wikipedia.org/wiki/Secondary%20amino%20acid | In organic chemistry, secondary amino acids are amino acids that do not contain a primary amino group (–NH2) but rather a secondary amine (>NH). Secondary amino acids can be classified into cyclic amino acids, such as proline, and acyclic N-substituted amino acids.
In nature, proline, hydroxyproline, pipecolic acid and sarcosine are well-known secondary amino acids. Proline is the only proteinogenic secondary amino acid; the other secondary amino acids are non-proteinogenic. Hydroxyproline is incorporated into proteins by hydroxylation of proline. Pipecolic acid, a heavier analog of proline, is found in efrapeptin. Sarcosine is N-methylated glycine, and its methyl group is used in many biochemical reactions. Azetidine-2-carboxylic acid, a smaller homolog of proline, is found in plants.
Properties
Proline and its higher homolog pipecolic acid affect the secondary structure of proteins. A D-α-amino acid–L-α-amino acid sequence can induce a beta hairpin. The replacement of pipecolic acid by N-methyl-L-alanine in efrapeptin C suggested that, within proteins, acyclic secondary amino acids are more flexible than cyclic ones.
Ninhydrin tests of proline and hydroxyproline give yellow results.
In enzymology, N-methyl-L-amino-acid oxidase is an oxidase that acts on one subtype of secondary amino acids.
See also
Imino acid
Imidic acid
Secondary amine
References
External links
Amino acids | Secondary amino acid | [
"Chemistry"
] | 342 | [
"Amino acids",
"Biomolecules by chemical classification"
] |
63,801,136 | https://en.wikipedia.org/wiki/Titanium%28II%29%20bromide | Titanium(II) bromide is the inorganic compound with the formula TiBr2. It is a black micaceous solid. It adopts the cadmium iodide structure, featuring octahedral Ti(II) centers. It arises via the reaction of the elements:
Ti + Br2 → TiBr2
The compound reacts with caesium bromide to give the linear chain compound CsTiBr3.
References
Titanium(II) compounds
Bromides
Titanium halides | Titanium(II) bromide | [
"Chemistry"
] | 98 | [
"Bromides",
"Inorganic compounds",
"Inorganic compound stubs",
"Salts"
] |
63,805,510 | https://en.wikipedia.org/wiki/Electron%20orbital%20imaging | Electron orbital imaging is an X-ray synchrotron technique used to produce images of electron (or hole) orbitals in real space. It utilizes the technique of X-ray Raman scattering (XRS), also known as Non-resonant Inelastic X-Ray Scattering (NIXS) to inelastically scatter electrons off a single crystal. It is an element specific spectroscopic technique for studying the valence electrons of transition metals.
Background
Pictures of electron wavefunctions are commonplace in most quantum mechanics textbooks. However, the orbital shapes shown in such images are entirely mathematical constructs. As a purely experimental technique, electron orbital imaging has the ability to solve some problems in condensed matter physics without the use of complementary theoretical approaches. Theoretical approaches, while indispensable, invariably rely on several underlying assumptions, which vary depending on the approach used. The motivation for developing orbital imaging stemmed from the desire to omit the complex theoretical calculations needed to model experimental spectra, and instead simply “see” the relevant occupied and unoccupied electron orbitals.
Experimental setup
The non-resonant inelastic x-ray scattering cross section is orders of magnitude smaller than that of photoelectric absorption. Therefore, high-brilliance synchrotron beamlines with efficient spectrometers that are able to span a large solid angle of detection are required. XRS spectrometers are usually based on spherically curved analyzer crystals that act as focusing monochromators after the sample. The energy resolution is on the order of 1 eV for photon energies on the order of 10 keV.
Briefly put, the technique measures the density of electron holes in the valence band in the direction of the momentum transfer vector q, which is defined as the difference in momentum between the incoming qin and outgoing qout photons. The sample is rotated between subsequent measurements (by some angle θ) such that the momentum transfer vector traverses a plane in the crystal. Because holes are simply the inverse of the electron occupation, the occupied (electrons) and unoccupied (holes) orbitals in a given plane can be imaged. In practice, photons of ~10 keV are used in order to achieve a sufficiently large q (needed to access dipole-forbidden transitions; see Theoretical basis below). The scattered photons are detected at a constant energy, while the incident photon energy is swept above that over a range corresponding to the binding energy of the relevant excitation. For example, if the energy of the photons detected is 10 keV, and the nickel 3s excitation (binding energy of 111 eV) is of interest, then the incident photons are swept in a range around 10.111 keV. In this manner the energy transferred to the sample is measured. The intensity of a core level electron excitation (such as 3s→3d) is integrated for various directions of the momentum transfer vector q relative to the crystal being measured. An s orbital is the most convenient to utilize because it is spherical, and therefore the technique is sensitive only to the shape of the final wavefunction. As such, the integrated intensity of the resulting spectrum is proportional to the hole density in the direction of q.
Theoretical basis
The technique hinges on its ability to access dipole-forbidden electronic transitions.
The double differential cross section for a NIXS measurement is given by

d²σ/(dΩ dω) = (dσ/dΩ)Th · S(q,ω),

where (dσ/dΩ)Th is the Thomson scattering cross-section (representing the elastic scattering of electromagnetic waves off electrons) and S(q,ω) is the dynamic structure factor, which contains the physics of the material being measured and is given by

S(q,ω) = Σf |⟨f| Σj e^(iq·rj) |i⟩|² δ(Ef − Ei − ℏω),

where q = kf − ki is the momentum transfer and the delta function δ conserves energy: ω is the photon energy loss and Ei and Ef are the energies of the initial and final states of the system, respectively. If q is small then the Taylor expansion of the transition operator e^(iq·r) implies that only the first (dipole) term in the expansion is important. Orbital imaging relies on the fact that as the momentum transfer increases (~4 to 15 Å−1) further terms in the expansion become relevant, which allows the experimenter to observe higher multipole transitions (quadrupole, octupole, etc.).
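For reference, the multipole argument can be made explicit with the standard Rayleigh plane-wave expansion (a textbook identity, quoted here with jl the spherical Bessel functions and Ylm the spherical harmonics):

e^(iq·r) = 4π Σl Σm i^l jl(qr) Y*lm(q̂) Ylm(r̂).

Since jl(qr) only becomes appreciable once qr ≳ l, a momentum transfer of several Å−1 gives the l = 2, 3, … (quadrupole, octupole, …) terms significant weight at the radius of a 3d orbital, whereas in the small-q limit the expansion reduces to 1 + iq·r, whose nontrivial part is the dipole term.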
Applications
Electron orbital imaging has applications in solid state physics wherein the primary goal is to understand the observed bulk properties of a given material—whether electronic or magnetic—from the atomic perspective of the constituent electrons. In many materials there is a delicate balance of competing interactions that together stabilize a particular orbital state, which in turn determines the physical properties. Electron orbital imaging allows scientists to directly image the valence electron orbitals in real space. This has the advantage of bypassing theoretical modelling of experimental spectra (which is often an intractable problem), and observing the relevant orbitals directly.
The first application of the technique was published in 2019 and showed the 3d orbitals (specifically the holes, which are the inverse of the electrons) of nickel(II) oxide. The shape of the eg orbitals was imaged in real space through a cross-sectional cut of a single crystal of NiO.
It has also been applied to the Ising magnetic material Ca3Co2O6 in order to show specifically that it is the sixth electron on the high-spin trigonally coordinated cobalt site that gives rise to the observed large bulk orbital magnetic moment.
References
X-ray scattering
X-ray spectroscopy
Raman scattering | Electron orbital imaging | [
"Physics",
"Chemistry"
] | 1,139 | [
"Spectrum (physical sciences)",
"X-ray scattering",
"Scattering",
"X-ray spectroscopy",
"Spectroscopy"
] |
63,811,183 | https://en.wikipedia.org/wiki/Resource%20smoothing | In project management, resource smoothing is defined by A Guide to the Project Management Body of Knowledge (PMBOK Guide) as a "resource optimization technique in which free and total float are used without affecting the critical path" of a project. Resource smoothing as a resource optimization technique was only introduced in the Sixth Edition of the PMBOK Guide (2017) and did not exist in its previous revisions. It is posed as a distinct alternative resource optimization technique beside resource leveling.
The main difference between resource leveling and resource smoothing is that resource leveling uses the available float and thus may change the critical path, whereas resource smoothing uses only free and total float without affecting any of the critical paths. Thus, while resource leveling addresses a hard resource-supply constraint (for example, not over-working certain human resources), resource smoothing is a useful method when the resource constraint is more flexible and the project deadline is the stronger constraint.
Just like resource leveling, a resource smoothing problem could be formulated as an optimization problem. The problem could be solved by different optimization algorithms such as exact algorithms or metaheuristics.
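A toy sketch of such a formulation (illustrative assumptions throughout: the activity tuples, the single shared resource, and the brute-force search are invented for the example and are not a standard algorithm). It shifts each activity only within its total float, so the completion date, and hence the critical path, is untouched, and it keeps the schedule with the lowest resource peak:

```python
from itertools import product

# Each activity: (earliest start, duration, total float, resource use/day).
activities = [(0, 3, 0, 2), (0, 2, 4, 3), (2, 2, 3, 3)]

def peak_usage(starts):
    """Peak of the daily resource-usage profile for the given start times."""
    horizon = max(s + dur for s, (_, dur, _, _) in zip(starts, activities))
    load = [0] * horizon
    for s, (_, dur, _, res) in zip(starts, activities):
        for t in range(s, s + dur):
            load[t] += res
    return max(load)

# Enumerate every allowed shift combination (fine for a toy instance):
# shifting within total float never delays the project finish date.
best = min(
    ([es + d for (es, _, _, _), d in zip(activities, shifts)]
     for shifts in product(*(range(fl + 1) for (_, _, fl, _) in activities))),
    key=peak_usage,
)
print(best, peak_usage(best))   # smoothed start times and the reduced peak
```

For this instance the search flattens the profile from a peak of 5 resource units to 3 without moving the critical (zero-float) activity; an exact solver or metaheuristic would replace the enumeration on realistic problem sizes.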
See also
Resource allocation
Resource leveling
References
Further reading
Schedule (project management) | Resource smoothing | [
"Physics"
] | 257 | [
"Spacetime",
"Physical quantities",
"Time",
"Schedule (project management)"
] |
65,338,486 | https://en.wikipedia.org/wiki/Amnon%20Aharony | Amnon Aharony (Hebrew: אמנון אהרוני; born: 7 January 1943) is an Israeli Professor (Emeritus) of Physics in the School of Physics and Astronomy at Tel Aviv University, Israel and in the Physics Department of Ben Gurion University of the Negev, Israel. After years of research on statistical physics (critical phenomena, random systems, fractals, percolation), his current research focuses on condensed matter theory, especially in mesoscopic physics and spintronics.
He is a member of the Israel Academy of Sciences and Humanities, a Foreign Honorary Member of the American Academy of Arts and Sciences and of several other academies. He also received several prizes, including the Rothschild Prize in Physical Sciences, and the Gunnar Randers Research Prize, awarded every other year by the King of Norway.
Early life and education
Amnon Aharony was born in Jerusalem, and grew up in Netanya, Israel. He received his B.Sc. in Physics and Mathematics in 1964 from the Hebrew University of Jerusalem. His M.Sc. thesis, under the supervision of Gideon Rakavy, was on the distorted wave Born approximation for direct nuclear reactions (1965), from the same university. He received his doctorate in 1972 from Tel Aviv University, under the supervision of Yuval Ne'eman. Thesis title: Aspects of time reversal symmetry violation.
Career
Aharony was a senior researcher in the Israel Army and Ministry of Defense during 1965–1972. In those years he was also a teaching instructor in Tel Aviv University. Aharony was a postdoctoral student at Cornell University with Michael Fisher, and also at Harvard University, the University of California, San Diego and at Bell Laboratories in Murray Hill.
He returned to Israel in 1975 to become an associate professor of physics in Tel Aviv University, and a full professor in 1979. From 1990 he held the Moyses Nussenzveig Chair in Statistical Physics. Aharony retired from the university as Professor Emeritus in 2006. At that year he joined Ben Gurion University of the Negev, where he became Distinguished Professor Emeritus during 2013–2020.
During the years, Aharony was a visiting professor at Harvard University, MIT, Boston University, University of Tokyo, NTT Japan, the International Institute of Physics, UFRN, Natal, Brazil, the Institute for Advanced Studies in Jerusalem, the Institute of theoretical physics of the Chinese Academy of Sciences in Beijing. He was also a Distinguished Professor at the National Cheng Kung University, Taiwan and a visiting scientist at the IBM Research laboratories in Yorktown Heights and in Zürich, the US National Laboratories in Argonne and the National Institute of Standards and Technology (NIST), the Beijing Computational Science Research Center and the Institute for Basic Science in Daejeon, Korea.
Aharony was also an adjunct professor in the University of Oslo, Norway during the years 1987–2012, and a Consultant at IBM Research, MIT and the Weizmann Institute of Science (1987–present).
Research
Phase transitions: Aharony applied the renormalization group to identify and classify universality classes of critical (e.g. cubic, dipolar) and multicritical points.
His work on random systems involved systems with random fields
and the general issues of self-averaging. Aharony introduced fractal geometry into several branches of statistical physics, especially in connection with the many fractal sub-structures of dilute percolating systems, with applications to oil recovery.
Quantum magnetism: Aharony explained the structures and phase diagrams of magnetic oxide systems. This includes the magnetic structures of the high temperature superconducting parent cuprates, the prediction of the spin glass phase there, the discovery of a special symmetry in the Dzyaloshinskii–Moriya interaction (now called the Shekhtman-Entin-Wohlman-Aharony symmetry) and the ordered phases of various multiferroic materials.
Mesoscopic physics: Aharony participated in critical discussions of the Aharonov-Bohm interferometer.
In recent years, he concentrates on the effects of the spin-orbit interaction on transport in mesoscopic spintronic systems, including proposals of spin filters which may be relevant to quantum information processing.
Publications
Aharony is the author of 8 books and more than 450 articles. According to Google Scholar (September 2023) he has more than 50,000 citations and his h-index is 87.
Selected books
A. Aharony and J. Feder, editors. Fractals in Physics. Proceedings of a Conference, Vence, France (North Holland, Amsterdam, 1989)
D. Stauffer and A. Aharony. Introduction to Percolation Theory. Taylor and Francis, London (1992); revised 2nd edition (1994); German translation: Perkolationstheorie, Eine Einführung, VCH, Weinheim (1995); Japanese translation: PA-KO RE-SHON NO KI HON GEN RI, Yoshiokashoten, Kyoto (2001).
A. Aharony and O. Entin-Wohlman, editors. Perspectives of Mesoscopic Physics. World Scientific, Singapore (2010)
A. Aharony and O. Entin-Wohlman. Introduction to Solid State Physics. In Hebrew, Open University, Israel (2018), 600 pages; English translation: World Scientific, Singapore (2018)
Honors and awards
Fulbright Fellowship, US, 1972
Fellow, American Physical Society, USA, 1985, "for contributions to the theory of new critical and multicritical points, of random field systems and their experimental realization and of using fractals in statistical physics and in percolation"
Foreign Member, Norwegian Academy of Science and Letters, Oslo, Norway, 1988
Member, Royal Norwegian Society of Sciences and Letters, Trondheim, Norway, 1993
Foreign Honorary Member, American Academy of Arts and Sciences, Cambridge, MA, USA, 2002
Honorary fellow, Institute of Physics, UK, 2011
Elected member, Israel Academy of Sciences and Humanities, 2012
Notable students
Joan Adler
Serge Galam
Yigal Meir
Personal life
Aharony is the father of Professor of Physics Ofer Aharony, psychologist Dr. Tamar Aharony and Professor of music Iddo Aharony.
References
External links
Amnon Aharony, Ben Gurion University
Prof. Amnon Aharony, Tel Aviv University
A lecture by Amnon Aharony: Quantum theory: does God play dice? (in Hebrew), YouTube
Members of the Norwegian Academy of Science and Letters
Royal Norwegian Society of Sciences and Letters
Condensed matter physicists
Israeli physicists
Tel Aviv University alumni
Hebrew University of Jerusalem alumni
Academic staff of Ben-Gurion University of the Negev
Fellows of the American Physical Society
1943 births
Living people
Weizmann Prize recipients | Amnon Aharony | [
"Physics",
"Materials_science"
] | 1,399 | [
"Condensed matter physicists",
"Condensed matter physics"
] |
65,343,961 | https://en.wikipedia.org/wiki/THEMATICS | Theoretical Microscopic Anomalous Titration Curve Shapes (THEMATICS) is a computational method for predicting the biochemically active amino acids in a protein three-dimensional structure.
The method was developed by Mary Jo Ondrechen, James Clifton, and Dagmar Ringe. It is based on computed electrostatic and chemical properties of the individual amino acids in a protein structure. Specifically it identifies anomalous shapes in the theoretical titration curves of the ionizable amino acids. Biochemically active amino acids tend to have wide buffer ranges and non-sigmoidal titration patterns.
While the method predicts biochemically active amino acids successfully, it also provides input features to a machine learning predictor, Partial Order Optimum Likelihood (POOL).
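THEMATICS itself computes the titration curves from the electrostatics of the full protein structure, but the "wide buffer range" signature it looks for can be illustrated with a toy calculation (a sketch with hypothetical functions and values; the flattened curve below simply mimics a perturbed, non-sigmoidal site):

```python
import numpy as np

def hh_fraction(pH, pKa):
    """Ideal (sigmoidal) Henderson-Hasselbalch protonation fraction."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

def buffer_range(pH, curve, lo=0.1, hi=0.9):
    """pH interval over which the protonation fraction falls from hi to lo."""
    # curve decreases with pH, so reverse both arrays for np.interp
    return (np.interp(lo, curve[::-1], pH[::-1])
            - np.interp(hi, curve[::-1], pH[::-1]))

pH = np.linspace(0, 14, 1401)
normal = hh_fraction(pH, 6.0)                                  # ordinary residue
coupled = 0.5 * (hh_fraction(pH, 4.0) + hh_fraction(pH, 8.0))  # perturbed site

print(buffer_range(pH, normal))    # ~1.9 pH units for an ideal sigmoid
print(buffer_range(pH, coupled))   # ~5 pH units: flagged as anomalous
```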
References
Computational chemistry | THEMATICS | [
"Chemistry"
] | 161 | [
"Theoretical chemistry stubs",
"Theoretical chemistry",
"Computational chemistry",
"Computational chemistry stubs",
"Physical chemistry stubs"
] |
65,344,295 | https://en.wikipedia.org/wiki/Human%20Medicines%20Regulations%202012 | The Human Medicines Regulations 2012 in the United Kingdom were created in 2012 under the statutory authority of the European Communities Act 1972 and the Medicines Act 1968. The body responsible for their upkeep is the Medicines and Healthcare products Regulatory Agency. The regulations partially repealed the Medicines Act 1968 in line with EU legislation.
Amendments
In October 2020, the regulations were amended to expand the workforce eligible to administer COVID-19 vaccines, so enabling additional healthcare professionals to vaccinate the public. This was a temporary provision, but in January 2022 it was announced that this would be made permanent as would the provision for community pharmacy contractors to provide COVID-19 and flu vaccines “away from their normal registered premises”.
Regulation 174
Regulation 174 provides an exemption from the authorisation requirement of Regulation 46, allowing the sale or supply of any medicinal product to be temporarily authorised by the licensing authority (MHRA) in response to the suspected or confirmed spread of pathogenic agents, toxins, chemical agents or nuclear radiation.
References
External links
National Health Service
Pharmaceutics
Life sciences industry
Pharmacy
2012 establishments in the United Kingdom
Department of Health and Social Care
Medical regulation in the United Kingdom
Biotechnology
Statutory instruments of the United Kingdom
Health law in the United Kingdom | Human Medicines Regulations 2012 | [
"Chemistry",
"Biology"
] | 249 | [
"Pharmacology",
"Life sciences industry",
"Pharmacy",
"Biotechnology",
"nan"
] |
65,349,345 | https://en.wikipedia.org/wiki/Sterbenz%20lemma | In floating-point arithmetic, the Sterbenz lemma or Sterbenz's lemma is a theorem giving conditions under which floating-point differences are computed exactly: if x and y are floating-point numbers such that y/2 ≤ x ≤ 2y, then x − y is itself a floating-point number, so the subtraction incurs no rounding error.
It is named after Pat H. Sterbenz, who published a variant of it in 1974.
The Sterbenz lemma applies to IEEE 754, the most widely used floating-point number system in computers.
Proof
Let β be the radix of the floating-point system and p the precision.
Consider several easy cases first:
If x is zero then x − y = −y, and if y is zero then x − y = x, so the result is trivial because floating-point negation is always exact.
If x = y the result is zero and thus exact.
If x < 0 then we must also have y < 0 (two numbers within a factor of two of each other have the same sign), so −x > 0 and −y > 0 satisfy the hypothesis with their roles exchanged. In this case, x − y = (−y) − (−x), so the result follows from the theorem restricted to positive x and y.
If x ≤ y, we can write x − y = −(y − x) with x/2 ≤ y ≤ 2x (the hypothesis is symmetric in x and y), so the result follows from the theorem restricted to x ≥ y.
For the rest of the proof, assume 0 < y ≤ x ≤ 2y without loss of generality.
Write x and y in terms of their positive integral significands mx, my ≤ β^p − 1 and minimal exponents ex, ey:

x = mx·β^ex, y = my·β^ey.

Note that x and y may be subnormal—we do not assume mx, my ≥ β^(p−1).
The subtraction gives:

x − y = mx·β^ex − my·β^ey = (mx·β^(ex−ey) − my)·β^ey.

Let m′ = mx·β^(ex−ey) − my.
Since 0 < y ≤ x we have:
ey ≤ ex, so ex − ey ≥ 0, from which we can conclude β^(ex−ey) is an integer and therefore so is m′; and
x − y ≥ 0, so m′ ≥ 0.
Further, since x ≤ 2y, we have x − y ≤ y, so that

m′·β^ey = x − y ≤ y = my·β^ey,

which implies that

0 ≤ m′ ≤ my ≤ β^p − 1.

Hence

x − y = m′·β^ey, with integral significand m′ ≤ β^p − 1 and exponent ey no smaller than the minimum exponent,

so x − y is a floating-point number.
Note: Even if x and y are normal, i.e., mx, my ≥ β^(p−1), we cannot prove that m′ ≥ β^(p−1), and therefore cannot prove that x − y is also normal.
For example, the two smallest positive normal floating-point numbers differ by β^(1−p) times the smaller of them, a value which is necessarily subnormal.
In floating-point number systems without subnormal numbers, such as CPUs in nonstandard flush-to-zero mode instead of the standard gradual underflow, the Sterbenz lemma does not apply.
Relation to catastrophic cancellation
The Sterbenz lemma may be contrasted with the phenomenon of catastrophic cancellation:
The Sterbenz lemma asserts that if x and y are sufficiently close floating-point numbers then their difference x − y is computed exactly by floating-point arithmetic, with no rounding needed.
The phenomenon of catastrophic cancellation is that if x̂ and ŷ are approximations to true numbers x and y—whether the approximations arise from prior rounding error or from series truncation or from physical uncertainty or anything else—the relative error of the computed difference x̂ − ŷ from the desired difference x − y is inversely proportional to x − y. Thus, the closer x̂ and ŷ are, the worse x̂ − ŷ may be as an approximation to x − y, even if the subtraction itself is computed exactly.
In other words, the Sterbenz lemma shows that subtracting nearby floating-point numbers is exact, but if the numbers one has are approximations then even their exact difference may be far off from the difference of numbers one wanted to subtract.
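Both statements can be seen in a few lines of Python (a minimal sketch, assuming CPython floats are IEEE 754 doubles; the values 1.9 and 1.3 are arbitrary choices satisfying y/2 ≤ x ≤ 2y):

```python
from fractions import Fraction

# Sterbenz lemma: 1.3/2 <= 1.9 <= 2*1.3, so the subtraction is exact:
x, y = 1.9, 1.3
assert Fraction(x - y) == Fraction(x) - Fraction(y)   # no rounding occurred

# Catastrophic cancellation: x and y are only *approximations* of the
# decimals 1.9 and 1.3 (floats round them on input), and the exactly
# computed difference inherits that input error:
print(x - y == 0.6)   # False - off by one unit in the last place
```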
Use in numerical analysis
The Sterbenz lemma is instrumental in proving theorems on error bounds in numerical analysis of floating-point algorithms.
For example, Heron's formula

A = √(s·(s − a)·(s − b)·(s − c))

for the area of a triangle with side lengths a, b, and c, where s = (a + b + c)/2 is the semi-perimeter, may give poor accuracy for long narrow triangles if evaluated directly in floating-point arithmetic.
However, for a ≥ b ≥ c, the alternative formula

A = (1/4)·√((a + (b + c))·(c − (a − b))·(c + (a − b))·(a + (b − c)))

can be proven, with the help of the Sterbenz lemma, to have low forward error for all inputs.
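A small numerical contrast (a sketch only; the needle-triangle side lengths are arbitrary, and Python floats are assumed to be IEEE 754 doubles):

```python
import math

def heron_naive(a, b, c):
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def heron_stable(a, b, c):
    # Rearranged form quoted above; requires a >= b >= c, and the
    # parentheses must be kept exactly as written.
    a, b, c = sorted((a, b, c), reverse=True)
    return 0.25 * math.sqrt((a + (b + c)) * (c - (a - b))
                            * (c + (a - b)) * (a + (b - c)))

# Long narrow triangle: the naive s - a cancels catastrophically.
a, b, c = 1.0, 1.0, 1e-8
print(heron_naive(a, b, c))    # accurate to only about 8 significant digits
print(heron_stable(a, b, c))   # close to the true area of ~c/2 = 5e-9
```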
References
Computer arithmetic
Floating point
Numerical analysis | Sterbenz lemma | [
"Mathematics"
] | 679 | [
"Computational mathematics",
"Computer arithmetic",
"Arithmetic",
"Mathematical relations",
"Numerical analysis",
"Approximations"
] |
65,350,604 | https://en.wikipedia.org/wiki/TM%20%28triode%29 | The TM (from Télégraphie Militaire, also marketed as TM Fotos and TM Metal) was a triode vacuum tube for amplification and demodulation of radio signals, manufactured in France from November 1915 to around 1935. The TM, developed for the French Army, became the standard small-signal radio tube of the Allies of World War I, and the first truly mass-produced vacuum tube. Wartime production in France is estimated at no less than 1.1 million units. Copies and derivatives of the TM were mass-produced in the United Kingdom as Type R, in the Netherlands as Type E, in the United States and in Soviet Russia as P-5 and П7.
Development
Development of the TM was initiated by colonel Gustave-Auguste Ferrié, chief of French long-distance military communications (Télégraphie Militaire). Ferrié and his closest associate Henri Abraham were well informed about American research in radio and vacuum technology. They knew that Lee de Forest's audion and the British gas-filled lamp designed by H. J. Round were too unstable and unreliable for military service, and that Irving Langmuir's pliotron was too complex and expensive for mass production.
Shortly after the outbreak of World War I, a former Telefunken employee returning from the United States briefed Ferrié on the progress made in Germany and delivered samples of the latest American triodes, but again none of them met the demands of the Army. The problems were traced to insufficiently hard vacuum. Following suggestions made by Langmuir, Ferrié made a strategically correct decision to refine industrial vacuum pump technology that could guarantee sufficiently hard vacuum in mass production. The future French triode needed to be reliable, reproducible and inexpensive.
In October 1914 Ferrié dispatched Abraham and Michel Peri to Grammont incandescent lamp plant in Lyon. Abraham and Peri started with copying American designs. As was expected, the audion was unreliable and unstable, the pliotron and the first three original French prototypes were too complex. By trial and error, Abraham and Peri developed a simpler and inexpensive configuration. Their fourth prototype, which had vertically placed electrode assembly, was selected for mass production and was manufactured by Grammont from February to October of 1915. This triode, known as the Abraham tube, did not pass the test of field service: many tubes were damaged during transportation.
Ferrié instructed Peri to fix the problem, and two days later Peri and Jacques Biguet presented a modified design, with horizontally placed electrode assembly and the novel four-pin Type A socket (the original Abraham tube used an Edison screw with two additional flexible wires). In November 1915 the new triode was pressed into production and became known as the TM after the French service that developed it. Work by Ferrié and Abraham was nominated for the 1916 Nobel Prize in Physics. However, the patent was granted solely to Peri and Biguet, causing future legal disputes.
Design and specifications
The electrode assembly of the TM has nearly perfect cylindrical shape. The anode is a nickel cylinder, 10 mm in diameter and 15 mm long. Grid diameter varies from 4.0 to 4.5 mm; the Lyon plant made grids of pure molybdenum, the plant in Ivry-sur-Seine used nickel. The directly-heated cathode filament is a straight wire of pure tungsten, 0.06 mm in diameter.
The pure tungsten cathode reached its proper emission level when heated to white incandescence, which required a heating current of over 0.7 A at 4 V. The filament was so bright that in 1923 Grammont replaced the clear glass envelope with dark blue cobalt glass. There were rumours that the company tried to discourage the alleged use of radio tubes in place of lightbulbs, or that they tried to protect the eyes of radio operators. Most likely, however, dark glass was used to mask harmless but unsightly metal particles that were inevitably sputtered onto the inner surface of the bulb.
A typical single-tube radio receiver of World War I used 40 V plate power supply (B battery) and zero bias on the grid (no C battery required). In this mode, the tube operated at 2 mA standing anode current, and had transconductance of 0.4 mA/V, gain (μ) of 10 and anode impedance of 25 kOhm. At higher voltages (i.e. 160 V on the anode and -2 V on the grid), standing plate current rose to 3...6 mA, with reverse grid current up to 1 μA. High grid currents, an inevitable consequence of primitive technology of the 1910s, simplified grid leak biasing.
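As a quick consistency check on these figures (using the textbook triode relation, not a value from the period documentation): the amplification factor equals transconductance times anode impedance, μ = gm·ra = 0.4 mA/V × 25 kOhm = 10, in agreement with the stated gain.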
The TM and its immediate clones were general-purpose tubes. In addition to their original radio receiving function, they were successfully employed in radio transmitters. A single Soviet-made P-5 configured as a class C radio frequency generator withstood 500 to 800 volts of plate voltage and could deliver up to 1 W into the antenna, while a class A circuit could only deliver 40 mW. Audio frequency amplification in class A was feasible using arrays of parallel-connected TMs.
Lifetime of a genuine French-made TM, built in strict compliance with the design, did not exceed 100 hours. During the war, factories inevitably had to use substandard raw materials which resulted in substandard tubes. These were usually marked with a cross and suffered from unusually high noise levels and random early failures due to cracks in their glass envelopes.
Production history
In the course of World War I the TM became the tube of choice of allied armies. Demand exceeded capacity of the Lyon plant, so additional production was delegated to La Compagnie des Lampes plant in Ivry-sur-Seine. Total production volume is unknown, but it was certainly very high for the period. Estimates of daily wartime production vary from one thousand units (Lyon plant alone) to six thousand units. Estimates of total wartime production vary from 1.1 million units (0.8 million in Lyon and 0.3 million in Ivry-sur-Seine) to 1.8 million units for the Lyon plant alone.
British authorities quickly realized the benefits of the TM over domestic designs. In 1916 British Thomson-Houston developed necessary technology and tooling, and Osram-Robertson (which would later merge into Marconi-Osram Valve) began large-scale production. The British variants became known collectively as type R. In 1916-1917 the Osram plant produced two visually identical triode types: "hard" (high vacuum) R1, almost exactly copying the French original, and "soft" nitrogen-filled R2. The R2 was the last in the line of British gas-filled tubes; all subsequent designs from R3 to R7 were high vacuum tubes. Variants of Type R triodes were made to British order in the United States by Moorhead Laboratories. After the war, Philips launched production of the TM in the Netherlands as Type E. Cylindrical construction patented by Peri and Biguet became a standard feature of British high-power tubes, all the way to the 800-Watt T7X.
When the United States entered the war, annual output of the three largest American manufacturers could barely reach 80 thousand tubes of all types. This was too low for a fighting army; soon after deployment in France American Expeditionary Forces outran the quota and had to adopt French radio equipment. Thus, the AEF relied primarily on French-made tubes.
In Russia, Mikhail Bonch-Bruevich launched small-scale production of the TM in 1917. In 1923 Soviet authorities purchased French technology and tooling, and launched large-scale production at the Leningrad Electro-Vacuum Plant which would later merge into Svetlana. Soviet clones of the TM were named P-5 and П7, a high-efficiency thoriated-cathode variant was named Микро (Micro).
After World War I the general-purpose TM was gradually supplanted by new, specialized receiving and amplifying tubes. In the developed countries of the West the change was largely completed by the end of the 1920s, at which point it had only just begun in less developed countries like the Soviet Union. There is no certain information on the end of production; according to Robert Champeix, production in France probably continued until 1935. In the late 20th century, replicas of the TM were released at least twice, by Rudiger Waltz in Germany (1980s) and by Ricardo Kron in the Czech Republic (1992).
References
Sources
(Based on Champeix paper)
(Based on Champeix paper)
Vacuum tubes
French inventions
1915 in France
1915 in technology
1915 in radio
History of radio | TM (triode) | [
"Physics"
] | 1,797 | [
"Vacuum tubes",
"Vacuum",
"Matter"
] |
66,590,304 | https://en.wikipedia.org/wiki/Su%E2%80%93Schrieffer%E2%80%93Heeger%20model | In condensed matter physics, the Su–Schrieffer–Heeger (SSH) model or SSH chain is a one-dimensional lattice model that presents topological features. It was devised by Wu-Pei Su, John Robert Schrieffer, and Alan J. Heeger in 1979, to describe the increase in electrical conductivity of the polyacetylene polymer chain when doped, based on the existence of solitonic defects. It is a quantum mechanical tight binding approach that describes the hopping of spinless electrons in a chain with two alternating types of bonds. Electrons in a given site can only hop to adjacent sites.
Depending on the ratio between the hopping energies of the two possible bonds, the system can be either in a metallic (conductive) phase or in an insulating phase. The finite SSH chain can behave as a topological insulator, depending on the boundary conditions at the edges of the chain. For the finite chain, there exists an insulating phase that is topologically non-trivial and allows for the existence of edge states localized at the boundaries.
Description
The model describes a half-filled one-dimensional lattice with two sites per unit cell, A and B; half filling corresponds to a single electron per unit cell. In this configuration each electron can either hop inside the unit cell or hop to an adjacent cell through nearest neighbor sites. As with any 1D model with two sites per cell, there will be two bands in the dispersion relation (usually called optical and acoustic bands). If the bands do not touch, there is a band gap. If the gap lies at the Fermi level, then the system is considered to be an insulator.
The tight binding Hamiltonian in a chain with N unit cells can be written as

H = v Σn (c†A,n cB,n + h.c.) + w Σn (c†B,n cA,n+1 + h.c.),

where h.c. denotes the Hermitian conjugate, c†α,n (cα,n) creates (annihilates) an electron on site α = A, B of cell n, v is the energy required to hop from site A to B inside the unit cell, and w is the energy required to hop between unit cells. Here the Fermi energy is fixed to zero.
Bulk solution
The dispersion relation for the bulk can be obtained through a Fourier transform. Taking periodic boundary conditions cα,n+N = cα,n, where α = A, B, we pass to k-space by doing

cα,n = (1/√N) Σk e^(ikn) cα,k, with k = 2πm/N, m = 0, 1, …, N − 1,

which results in the following Hamiltonian

H = Σk (c†A,k, c†B,k) h(k) (cA,k, cB,k)ᵀ, h(k) = [[0, v + w·e^(−ik)], [v + w·e^(ik), 0]],

where the eigenenergies are easily calculated as

E±(k) = ±|v + w·e^(ik)| = ±√(v² + w² + 2vw·cos k),

and the corresponding eigenstates are

|±, k⟩ = (1/√2)·(±e^(−iφ(k)), 1)ᵀ,

where

φ(k) = arg(v + w·e^(ik)).
The eigenenergies are symmetrical under the swap v ↔ w, and the dispersion relation is gapped (insulator) except when v = w (metal). Judging by the energies alone, the problem appears symmetric about v = w: the pair (v, w) has the same dispersion as (w, v). Nevertheless, not all properties of the system are symmetrical; for example, the eigenvectors are very different under the swap v ↔ w. It can be shown, for example, that the Berry connection

A(k) = i⟨u±(k)|∂k u±(k)⟩ = (1/2)·∂kφ(k),

integrated over the Brillouin zone k ∈ (−π, π], produces different winding numbers:

ν = (1/π) ∮ dk A(k) = (1/2π) ∮ dk ∂kφ(k) = 0 for v > w, 1 for v < w,

showing that the two insulating phases, v > w and v < w, are topologically different (small changes in v and w change A(k) but not its integral ν over the Brillouin zone). The winding number remains undefined for the metallic case v = w. This difference in topology means that one cannot pass from one insulating phase to the other without closing the gap (passing through the metallic phase). This phenomenon is called a topological phase transition.
Finite chain solution and edge states
The physical consequences of having different winding number become more apparent for a finite chain with an even number of lattice sites. It is much harder to diagonalize the Hamiltonian analytically in the finite case due to the lack of translational symmetry.
Dimerized cases
There exist two limiting cases for the finite chain: either v = 0, w ≠ 0, or w = 0, v ≠ 0. In both of these cases the chain is clearly an insulator, as the chain is broken into dimers (dimerized). However, the case w = 0 consists of N/2 dimers, while the case v = 0 consists of N/2 − 1 dimers and two unpaired sites at the edges of the chain (N being the number of lattice sites). In the latter case, as there is no on-site energy, if an electron finds itself on either of the two edge sites its energy will be zero. So the case v = 0 necessarily has two eigenstates with zero energy, while the case w = 0 has no zero-energy eigenstates. Contrary to the bulk case, the two limiting cases are not symmetrical in their spectrum.
Intermediate values
By plotting the eigenstates of the finite chain as a function of position, one can show that there are two distinct kinds of states. For non-zero eigenenergies, the corresponding wavefunctions are delocalized along the chain, while the zero-energy eigenstates portray localized amplitudes at the edge sites. The latter are called edge states. Even if the eigenenergies lie in the gap, the edge states are localized and correspond to an insulating phase.
By plotting the spectrum as a function of v for a fixed value of w, the spectrum is divided into two insulating regions separated by the metallic intersection at v = w. The spectrum is gapped in both insulating regions, but the region v < w shows zero-energy eigenstates while the region v > w does not, corresponding to the dimerized cases. The existence of edge states in one region and not in the other demonstrates the difference between the insulating phases, and it is the sharp transition at v = w that corresponds to a topological phase transition.
Correspondence between finite and bulk solutions
The bulk case allows one to predict which insulating region presents edge states, depending on the value of the bulk winding number. For the region where the bulk winding number is ν = 1 (v < w), the corresponding finite chain with an even number of sites presents edge states, while for the region where the bulk winding number is ν = 0 (v > w), it does not. This relation between winding numbers in the bulk and edge states in the finite chain is called the bulk-edge correspondence.
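A minimal numerical sketch of this correspondence (illustrative only: the chain length, hopping values, and the 1e-3 zero-energy threshold are arbitrary choices, consistent with the single-particle Hamiltonian written above):

```python
import numpy as np

def ssh_hamiltonian(n_cells, v, w):
    """Single-particle SSH Hamiltonian of an open chain with n_cells
    unit cells (2*n_cells sites): bonds alternate v (intracell, A-B)
    and w (intercell, B-A)."""
    n_sites = 2 * n_cells
    h = np.zeros((n_sites, n_sites))
    for i in range(n_sites - 1):
        h[i, i + 1] = h[i + 1, i] = v if i % 2 == 0 else w
    return h

for v, w in [(1.0, 0.5), (0.5, 1.0)]:        # trivial vs topological phase
    energies, states = np.linalg.eigh(ssh_hamiltonian(20, v, w))
    zero_modes = np.abs(energies) < 1e-3
    print(f"v={v}, w={w}: {zero_modes.sum()} zero-energy states")
    for idx in np.flatnonzero(zero_modes):
        amp = states[:, idx] ** 2
        # weight on the four outermost sites -> edge localization
        print("  edge weight:", round(amp[:2].sum() + amp[-2:].sum(), 3))
```

The v > w chain reports no in-gap states, while the v < w chain reports two near-zero modes whose weight sits almost entirely on the outermost sites, as the bulk winding numbers predict.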
See also
Kitaev chain
Peierls transition
References
Condensed matter physics | Su–Schrieffer–Heeger model | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,196 | [
"Phases of matter",
"Condensed matter physics",
"Matter",
"Materials science"
] |
66,600,012 | https://en.wikipedia.org/wiki/Quantum%20engineering | Quantum engineering is the development of technology that capitalizes on the laws of quantum mechanics. This type of engineering uses quantum mechanics to develop technologies such as quantum sensors and quantum computers.
Devices that rely on quantum mechanical effects such as lasers, MRI imagers and transistors have revolutionized many areas of technology. New technologies are being developed that rely on phenomena such as quantum coherence and on progress achieved in the last century in understanding and controlling atomic-scale systems. Quantum mechanical effects are used as a resource in novel technologies with far-reaching applications, including quantum sensors and novel imaging techniques, secure communication (quantum internet) and quantum computing.
History
The field of quantum technology was explored in a 1997 book by Gerard J. Milburn. It was then followed by a 2003 article by Milburn and Jonathan P. Dowling, and a separate publication by David Deutsch in the same year.
The application of quantum mechanics was evident in several technologies. These include laser systems, transistors and semiconductor devices, as well as other devices such as MRI imagers. The UK Defence Science and Technology Laboratory (DSTL) grouped these devices as 'quantum 1.0' to differentiate them from what it dubbed as 'quantum 2.0'. This is a definition of the class of devices that actively create, manipulate, and read out quantum states of matter using the effects of superposition and entanglement.
From 2010 onwards, multiple governments have established programmes to explore quantum technologies, such as the UK National Quantum Technologies Programme, which created four quantum 'hubs'; the Centre for Quantum Technologies in Singapore; and QuTech, a Dutch centre aiming to develop a topological quantum computer. In 2016, the European Union introduced the Quantum Technology Flagship, a €1 billion, 10-year-long megaproject, similar in size to earlier European Future and Emerging Technologies Flagship projects.
In December 2018, the United States passed the National Quantum Initiative Act, which provides a US$1 billion annual budget for quantum research. China is building the world's largest quantum research facility with a planned investment of 76 billion yuan (approx. €10 billion). The Indian government has also invested 8,000 crore rupees (approx. US$1.02 billion) over 5 years to boost quantum technologies under its National Quantum Mission.
In the private sector, large companies have made multiple investments in quantum technologies. Organizations such as Google, D-wave systems, and University of California Santa Barbara have formed partnerships and investments to develop quantum technology.
Applications
Secure communications
Quantum secure communication refers to methods that are expected to be 'quantum safe' with the advent of quantum computing systems that could break current cryptography systems using methods such as Shor's algorithm. These methods include quantum key distribution (QKD), a method of transmitting information using entangled light in a way that makes any interception of the transmission obvious to the user. Another method is the quantum random number generator, which is capable of producing truly random numbers, unlike non-quantum algorithms that merely imitate randomness.
Computing
Quantum computers are expected to have a number of important uses in computing fields such as optimization and machine learning. They are perhaps best known for their expected ability to carry out Shor's algorithm, which can be used to factorize large numbers and is an important process in the securing of data transmissions.
Quantum simulators are types of quantum computers intended to simulate a real world system, such as a chemical compound. Quantum simulators are simpler to build as opposed to general purpose quantum computers because complete control over every component is not necessary. Current quantum simulators under development include ultracold atoms in optical lattices, trapped ions, arrays of superconducting qubits, and others.
Sensors
Quantum sensors are expected to have a number of applications in a wide variety of fields including positioning systems, communication technology, electric and magnetic field sensors, gravimetry as well as geophysical areas of research such as civil engineering and seismology.
Education programs
Quantum engineering is evolving into its own engineering discipline. The quantum industry requires a quantum-literate workforce, a missing resource at the moment. Currently, scientists in the field of quantum technology have mostly either a physics or engineering background and have acquired their ”quantum engineering skills” by experience. A survey of more than twenty companies aimed to understand the scientific, technical, and “soft” skills required of new hires into the quantum industry. Results show that companies often look for people that are familiar with quantum technologies and simultaneously possess excellent hands-on lab skills.
Several technical universities have launched education programs in this domain. For example, ETH Zurich has initiated a Master of Science in Quantum Engineering, a joint venture between the electrical engineering department (D-ITET) and the physics department (D-PHYS), EPFL offers a dedicated Master’s program in Quantum Science and Engineering, combining coursework in quantum physics and engineering with research opportunities, and the University of Waterloo has launched integrated postgraduate engineering programs within the Institute for Quantum Computing. Similar programs are being pursued at Delft University, Technical University of Munich, MIT, CentraleSupélec and other technical universities.
In the realm of undergraduate studies, opportunities for specialization are sparse. Nevertheless, some institutions have begun to offer programs. The Université de Sherbrooke offers a bachelor of science in quantum information, University of Waterloo offers a quantum specialization in its electrical engineering program, and the University of New South Wales offers a bachelor of quantum engineering.
Students are trained in signal and information processing, optoelectronics and photonics, integrated circuits (bipolar, CMOS) and electronic hardware architectures (VLSI, FPGA, ASIC). In addition, they are exposed to emerging applications such as quantum sensing, quantum communication and cryptography and quantum information processing. They learn the principles of quantum simulation and quantum computing, and become familiar with different quantum processing platforms, such as trapped ions, and superconducting circuits. Hands-on laboratory projects help students to develop the technical skills needed for the practical realization of quantum devices, consolidating their education in quantum science and technologies.
See also
Quantum supremacy
Noisy intermediate-scale quantum era
Timeline of quantum computing and communication
References
Engineering disciplines
Quantum mechanics | Quantum engineering | [
"Physics",
"Engineering"
] | 1,266 | [
"nan",
"Applied and interdisciplinary physics",
"Quantum mechanics",
"Applications of quantum mechanics"
] |
69,460,867 | https://en.wikipedia.org/wiki/Contour%20currents | The term contour current was first introduced by Heezen et al. in 1966 to describe bottom currents along the continental shelf driven by Coriolis effects and temperature/salinity-dependent density gradients. Generally, the currents flow along depth contours, hence the name contour currents. Sediments deposited and shaped by contour currents are called contourites, which are commonly observed on the continental rise.
Depositional Processes
Since contour currents generally flow at speeds of 2–20 cm/s, their capacity to carry sediments is limited to fine-grained particles already in suspension. Redistribution of sediments by contour currents has, however, been reported, as evidenced by sea floor morphological features parallel to regional isobaths.
Turbidity currents, on the other hand, flow downslope across regional isobaths and are mainly responsible for supplying terrigenous sediment across continental margins to deep-water environments, such as the continental rise, where fine particles are further carried in suspension by contour currents. The joint depositional processes of the two current systems are among the dominant factors influencing the morphology of the lower continental margins.
References
Gradient methods
Oceanography
Geology | Contour currents | [
"Physics",
"Environmental_science"
] | 229 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
69,470,433 | https://en.wikipedia.org/wiki/Surplus%20sharing | Surplus sharing is a kind of a fair division problem where the goal is to share the financial benefits of cooperation (the "economic surplus") among the cooperating agents. As an example, suppose there are several workers such that each worker i, when working alone, can gain some amount ui. When they all cooperate in a joint venture, the total gain is u1+...+un+s, where s>0. This s is called the surplus of cooperation, and the question is: what is a fair way to divide s among the n agents?
When the only available information is the ui, there are two main solutions:
Equal sharing: each agent i gets ui+s/n, that is, each agent gets an equal share of the surplus.
Proportional sharing: each agent i gets ui+(s*ui/Σui), that is, each agent gets a share of the surplus proportional to his external value (similar to the proportional rule in bankruptcy). In other words, ui is considered a measure of the agent's contribution to the joint venture.
Kolm calls the equal sharing "leftist" and the proportional sharing "rightist".
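A minimal sketch of the two rules in code (the function names and the numbers in the example are invented for illustration; u lists the stand-alone gains ui and s is the surplus):

```python
def equal_sharing(u, s):
    """Each agent receives u_i plus an equal share s/n of the surplus."""
    return [ui + s / len(u) for ui in u]

def proportional_sharing(u, s):
    """Each agent receives u_i plus a share of s proportional to u_i."""
    total = sum(u)
    return [ui + s * ui / total for ui in u]

u, s = [10, 30, 60], 20
print(equal_sharing(u, s))         # [16.67, 36.67, 66.67] (rounded)
print(proportional_sharing(u, s))  # [12.0, 36.0, 72.0]
```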
Chun presents a characterization of the proportional rule.
Moulin presents a characterization of the equal and proportional rule together by four axioms (in fact, any three of these axioms are sufficient):
Separability - the division of surplus within any coalition T should depend only on the total amount allocated to T, and on the opportunity costs of agents within T.
No advantageous reallocation - no coalition can benefit from redistributing its ui among its members (this is a kind of strategyproofness axiom).
Additivity - for each agent i, the allocation to i is a linear function of the total surplus s.
Path independence - for each agent i, the allocation to i from surplus s is the same as allocating a part of s, updating the ui, and then allocating the remaining part of s.
Any pair of these axioms characterizes a different family of rules, which can be viewed as a compromise between equal and proportional sharing.
When there is information about the possible gains of sub-coalitions (e.g., it is known how much agents 1,2 can gain when they collaborate in separation from the other agents), other solutions become available, for example, the Shapley value.
See also
Bankruptcy problem - a similar problem in which the goal is to share losses (negative gains).
Cost-sharing mechanism - a similar problem in which the goal is to share costs.
Frederic G. Mather, Both sides of profit sharing: an 1896 article about the need to share the surplus of work fairly between employees and employers.
References
Fair division | Surplus sharing | [
"Mathematics"
] | 564 | [
"Recreational mathematics",
"Game theory",
"Fair division"
] |
70,970,743 | https://en.wikipedia.org/wiki/List%20of%20carbon%20capture%20and%20storage%20projects | This List of carbon capture and storage projects provides documentation of global, industrial-scale projects for carbon capture and storage. According to the Global CCS Institute, in 2020 some 40 million tons of CO2 per year of CCS capacity was in operation, with 50 million tons per year in development. The world emits about 38 billion tonnes of CO2 every year, so CCS captured about one thousandth of the 2020 total.
Algeria
In Salah was an operational onshore gas field with CO2 injection. CO2 was separated from produced gas and reinjected into the Krechba geologic formation at a depth of 1,900 m. Since 2004, about 3.8 Mt of CO2 had been captured during natural gas extraction and stored. Injection was suspended temporarily in June 2011 due to concerns about the integrity of the seal, the potential for fracture and leakage into the caprock, and movement of CO2 outside of the Krechba hydrocarbon lease. Injection has not restarted, and no leakage of CO2 was reported during the lifetime of the project.
Australia
In the early 2020s the government allocated over A$300 million for CCS both onshore and offshore.
Canada
Canadian governments committed $1.8 billion fund CCS projects over the 2008-2018 period. The main programs are the federal government's Clean Energy Fund, Alberta's Carbon Capture and Storage fund, and the governments of Saskatchewan, British Columbia, and Nova Scotia. Canada works closely with the United States through the U.S.–Canada Clean Energy Dialogue launched by the Obama administration in 2009.
Alberta
Alberta committed $170 million in 2013/2014 – and a total of $1.3 billion over 15 years – to fund two large-scale CCS projects.
The CAN $1.2 billion Alberta Carbon Trunk Line Project (ACTL), pioneered by Enhance Energy, became fully operational in June 2020. It is now the world's largest carbon capture and storage system consisting of a 240 km pipeline that collects CO2 industrial emissions from the Agrium fertilizer plant and North West Sturgeon Refinery in Alberta. The capture is then delivered to the matured Clive oil reservoir for use in EOR (enhanced oil recovery) and permanent storage. At full capacity, it can capture 14.6 million tonnes of CO2 per year. For perspective, that translates into capturing CO2 from 2.6 million cars plus.
The Quest Carbon Capture and Storage Project was developed by Shell Canada for use in the Athabasca Oil Sands Project. It is cited as being the world's first commercial-scale CCS project. Construction began in 2012 and ended in 2015. The capture unit is located at the Scotford Upgrader in Alberta, Canada, where hydrogen is produced to upgrade bitumen from oil sands into synthetic crude oil. The steam methane units that produce the hydrogen emit CO2 as a byproduct. The capture unit captures the CO2 from the steam methane unit using amine absorption technology, and the captured CO2 is then transported to Fort Saskatchewan where it is injected into a porous rock formation called the Basal Cambrian Sands. From 2015 to 2018, the project stored 3 Mt CO2 at a rate of 1 Mtpa.
Entropy, a subsidiary of Advantage Energy, runs a sequestration project at the Glacier plant near Valhalla, Alberta, storing 0.2 Mt of CO2 per year as of 2022.
In 2022, Alberta Energy granted 25 CO2 sequestration evaluation licenses covering a total area of 10 million hectares.
Saskatchewan
Boundary Dam Power Station Unit 3 Project
Boundary Dam Power Station, owned by SaskPower, is a coal-fired station originally commissioned in 1959. In 2010, SaskPower committed to retrofitting the lignite-powered Unit 3 with a carbon capture unit. The project was completed in 2014. The retrofit utilized a post-combustion amine absorption technology. The captured CO2 was to be sold to Cenovus to be used for Enhanced Oil Recovery (EOR) in the Weyburn field. Any CO2 not used for EOR was planned to be used by the Aquistore project and stored in deep saline aquifers. Many complications kept Unit 3 and this project from operating as much as expected, but between August 2017 and August 2018, Unit 3 was online 65% of the time on average. The project has a nameplate capture capacity of 1 Mtpa. The other units are to be phased out by 2024. The future of the one retrofitted unit is unclear.
Great Plains Synfuel Plant and Weyburn-Midale Project
The Great Plains Synfuel Plant, owned by Dakota Gas, is a coal gasification operation that produces synthetic natural gas and various petrochemicals from coal. The plant began operation in 1984, while CCS began in 2000. In 2000, Dakota Gas retrofitted the plant and planned to sell the CO2 to Cenovus and Apache Energy for EOR in the Weyburn and Midale fields in Canada. The Midale fields are injected with 0.4 Mtpa and the Weyburn fields with 2.4 Mtpa, for a total injection capacity of 2.8 Mtpa. The Weyburn-Midale Carbon Dioxide Project (or IEA GHG Weyburn-Midale CO2 Monitoring and Storage Project) was conducted there. Injection continued even after the study concluded. Between 2000 and 2018, over 30 Mt CO2 was injected.
China
As of 2019 coal accounted for around 60% of China's energy production. The majority of CO2 emissions come from coal-fired power plants or coal-to-chemical processes (e.g. the production of synthetic ammonia, methanol, fertilizer, natural gas, and CTLs). According to the IEA, around 385 out of China's 900 gigawatts of coal-fired power capacity are near locations suitable for CCS. As of 2017 three CCS facilities are operational or in late stages of construction, drawing CO2 from natural gas processing or petrochemical production. At least eight more facilities are in early planning and development, most of which target power plant emissions, with an injection target of EOR.
China's largest carbon capture and storage plant at Guohua Jinjie coal power station was completed in January 2021. The project is expected to prevent 150,000 tons of carbon dioxide emission annually at a 90% capture rate.
CNPC Jilin Oil Field
China's first carbon capture project was the Jilin oil field in Songyuan, Jilin Province. It started as a pilot EOR project in 2009, and developed into a commercial operation for the China National Petroleum Corporation (CNPC). The final development phase was completed in 2018. The source of CO2 is the nearby Changling gas field, from which natural gas with about 22.5% CO2 content is extracted. After separation at the natural gas processing plant, the CO2 is transported to Jilin via pipeline and injected for a 37% enhancement in oil recovery at the low-permeability oil field. At commercial capacity, the facility injects 0.6 Mt CO2 per year, and it has injected a cumulative total of over 1.1 million tonnes over its lifetime.
Sinopec Qilu Petrochemical CCS Project
Sinopec is developing a carbon capture unit whose first phase was to be operational in 2019. The facility is located in Zibo City, Shandong Province, where a fertilizer plant produces CO2 from coal/coke gasification. The CO2 is captured by cryogenic distillation and transported via pipeline to the nearby Shengli oil field for EOR. Construction of the first phase began by 2018, and the unit was expected to capture and inject 0.4 Mt CO2 per year.
Yanchang Integrated CCS Project
Yanchang Petroleum is developing carbon capture facilities at two coal-to-chemical plants in Yulin City, Shaanxi Province. The first capture plant is capable of capturing 50,000 tonnes per year and was finished in 2012. Construction on the second plant started in 2014 and was expected to be finished in 2020, with a capacity of 360,000 tonnes per year. This CO2 will be transported to the Ordos Basin, one of China's largest coal, oil, and gas-producing regions with a series of low- and ultra-low permeability oil reservoirs. Scarcity of water has limited the use of water flooding for EOR, so the CO2 is used instead to increase production.
Germany
From 2008 until 2014 the Schwarze Pumpe power station, south of the city of Spremberg, was home to the world's first demonstration CCS coal plant. The mini pilot plant was run by an Alstom-built oxy-fuel boiler and was also equipped with a flue gas cleaning facility to remove fly ash and sulfur dioxide. The Swedish company Vattenfall AB invested some €70 million in the two-year project, which began operation 9 September 2008. The power plant, which was rated at 30 megawatts, was a pilot project to serve as a prototype for future full-scale power plants. 240 tonnes a day of CO2 were trucked away to be injected into an empty gas field. Germany's BUND group called it a "fig leaf". For each tonne of coal burned, 3.6 tonnes of CO2 was produced. The CCS program at Schwarze Pumpe ended in 2014 due to nonviable costs and energy use.
As of 2007, the German utility RWE operated a pilot-scale CO2 scrubber at the lignite-fired Niederaußem power station, built in cooperation with BASF (supplier of the scrubbing solvent) and Linde engineering.
Japan
The Tomakomai CCS Demonstration Project is an ongoing project led by Japan CCS Co., Ltd. (JCCS) in Tomakomai, Hokkaido prefecture. Funded by METI and commissioned by NEDO, JCCS has been leading CCS-related research, including CO2 capture, injection, and geological measurements at its Tomakomai site since 2012. CO2 injection was concluded on November 22, 2019, after reaching 300,012 tonnes of injected CO2, slightly above the initially proposed 300,000 tonnes.
The source of the CO2 was Idemitsu Kosan's nearby oil refinery, which was connected to the Tomakomai CCS site via a 1.4 km (0.87 mi) pipeline. Amine gas treating achieved a CO2 purity of 99% or higher; the CO2 was then sent to the injection facility, where it was compressed and injected into two separate undersea reservoirs. The reservoirs are located in the Lower Quaternary Moebetsu formation (which consists of sandstone) and the Miocene Takinoue formation (which consists of volcanic and volcaniclastic rocks), located 1,000 to 1,200 m (3,280 to 3,940 ft) and 2,400 to 3,000 m (7,875 to 9,840 ft) deep respectively. In the future the facility may serve as a trial site for transferring liquefied CO2 from vessels directly into the reservoirs.
After the 2018 Hokkaido Eastern Iburi earthquake, a survey conducted by JCCS revealed that the reservoirs did not sustain any detectable damage, as well as no direct link between the earthquake and the CCS facility could be established.
Netherlands
Researchers in the Netherlands have developed an electrocatalytic process in which a copper complex helps reduce CO2 to oxalic acid.
Norway
In Norway, the CO2 Technology Centre (TCM) at Mongstad began construction in 2009 and was completed in 2012. It includes two capture technology plants (one advanced amine and one chilled ammonia), both capturing flue gas from two sources: a gas-fired power plant and refinery cracker flue gas (similar to coal-fired power plant flue gas).
In addition to this, the Mongstad site was also planned to have a full-scale CCS demonstration plant. The project was delayed to 2014, then 2018, and then indefinitely, while the project cost rose to US$985 million. In October 2011, Aker Solutions wrote off its investment in Aker Clean Carbon, declaring the carbon sequestration market to be "dead".
On 1 October 2013, Norway asked Gassnova, its Norwegian state enterprise for carbon capture and storage, not to sign any contracts for carbon capture and storage outside Mongstad.
In 2015 Norway was reviewing feasibility studies and hoping to have a full-scale carbon capture demonstration project by 2020.
In 2020, it then announced "Longship" ("Langskip" in Norwegian). This 2.7 billion CCS project will capture and store the carbon emissions of Norcem's cement factory in Brevik. It also plans to fund Fortum Oslo Varme's waste incineration facility. Finally, it will fund the transport and storage project "Northern Lights", a joint project between Equinor, Shell and Total. This latter project will transport liquid CO2 from capture facilities to a terminal at Øygarden in Vestland County. From there, CO2 will be pumped through pipelines to a reservoir beneath the seabed. The first two CO2 carrier ships for the Øygarden terminal were under construction at Dalian Shipbuilding in China in 2022. They are being equipped with rotor sails estimated to cut emissions by 5%. Øygarden is the world's first open-access CO2 transport and storage infrastructure.
Sleipner CO2 Injection
Sleipner is a fully operational offshore gas field with CO2 injection initiated in 1996. CO2 is separated from produced gas and reinjected in the Utsira saline aquifer (800–1000 m below the ocean floor), above the hydrocarbon reservoir zones. This aquifer extends much further north of the Sleipner facility, which sits at its southern extreme. The large size of the reservoir accounts for why 600 billion tonnes of CO2 are expected to be stored there, long after the Sleipner natural gas project has ended. The Sleipner facility is the first project to inject its captured CO2 into a geological feature purely for the purpose of storage rather than for economically motivated EOR.
United Arab Emirates
After the success of their pilot plant operation in November 2011, the Abu Dhabi National Oil Company and Abu Dhabi Future Energy Company moved to create the first commercial CCS facility in the iron and steel industry. CO2 is a byproduct of the iron making process. It is transported via a 50 km pipeline to Abu Dhabi National Oil Company oil reserves for EOR. The facility's capacity is 800,000 tonnes per year. As of 2013, more than 40% of gas emitted by the crude oil production process is recovered within the oil fields for EOR.
United Kingdom
The government aims to capture and store 20-30 Mtpa by 2030, and over 50 Mtpa by 2035 (for comparison greenhouse gas emissions by the United Kingdom were 425 Mt in 2021). The 2020 budget allocated 800 million pounds to attempt to create CCS clusters by 2030, to capture CO2 from heavy industry and a gas-fired power station and store it under the North Sea. The Crown Estate is responsible for storage rights on the UK continental shelf and it has facilitated work on offshore CO2 storage technical and commercial issues, and the North Sea Transition Authority has awarded 6 undersea storage licences including to BP and Equinor.
A trial of bio-energy with carbon capture and storage (BECCS) at a wood-fired unit in Drax power station in the UK started in 2019. If successful this could remove one tonne per day of CO2 from the atmosphere, and the company aims for operations to start in 2027.
In the UK CCS is under consideration to help with industry and heating decarbonization, and it is hoped that building small modular units to fit to existing factories will lower the cost below the carbon price on the UK Emissions Trading Scheme, which was around 80 GBP per tonne in early 2022. Direct air capture is also still being considered, but as of 2022 is much too expensive.
In May 2022, it was announced that Nuada (formerly MOF Technologies) had partnered with HeidelbergCement, Buzzi Unicem and Cementir Holding to build a point source carbon capture plant to further the decarbonization of hard-to-abate industries.
United States
In addition to individual carbon capture and sequestration projects, various programs work to research, develop, and deploy CCS technologies on a broad scale. These include the National Energy Technology Laboratory's (NETL) Carbon Sequestration Program, regional carbon sequestration partnerships and the Carbon Sequestration Leadership Forum (CSLF).
In September 2020, the U.S. Department of Energy awarded $72 million in federal funding to support the development and advancement of carbon capture technologies. Under this cost-shared program, DOE awarded $51 million to nine new projects for coal and natural gas power and industrial sources.
The nine projects were to conduct initial engineering design studies for carbon capture technologies at industrial sites. The projects selected are:
Enabling Production of Low Carbon Emissions Steel Through CO2 Capture from Blast Furnace Gases — ArcelorMittal USA
LH CO2MENT Colorado Project — Electricore
Engineering Design of a Polaris Membrane CO2 Capture System at a Cement Plant — Membrane Technology and Research (MTR) Inc.
Engineering Design of a Linde-BASF Advanced Post-Combustion CO2 Capture Technology at a Linde Steam Methane Reforming H2 Plant — Praxair
Initial Engineering and Design for CO2 Capture from Ethanol Facilities — University of North Dakota Energy & Environmental Research Center
Chevron Natural Gas Carbon Capture Technology Testing Project — Chevron USA, Inc.
Engineering-scale Demonstration of Transformational Solvent on NGCC Flue Gas — ION Clean Energy Inc.
Engineering-Scale Test of a Water-Lean Solvent for Post-Combustion Capture — Electric Power Research Institute Inc.
Engineering Scale Design and Testing of Transformational Membrane Technology for CO2 Capture — Gas Technology Institute (GTI)
$21 million was also awarded to 18 projects for technologies that remove CO2 from the atmosphere. The focus was on the development of new materials for use in direct air capture, along with field testing. The projects:
Direct Air Capture Using Novel Structured Adsorbents — Electricore
Advanced Integrated Reticular Sorbent-Coated System to Capture CO2 from the Atmosphere — GE Research
MIL-101(Cr)-Amine Sorbents Evaluation Under Realistic Direct Air Capture Conditions — Georgia Tech Research Corporation
Demonstration of a Continuous-Motion Direct Air Capture System — Global Thermostat Operations, LLC
Experimental Demonstration of Alkalinity Concentration Swing for Direct Air Capture of CO2 — Harvard University
High-Performance, Hybrid Polymer Membrane for CO2 Separation from Ambient Air — InnoSense, LLC
Transformational Sorbent Materials for a Substantial Reduction in the Energy Requirement for Direct Air Capture of CO2 — InnoSepra, LLC
A Combined Water and CO2 Direct Air Capture System — IWVC, LLC
TRAPS: Tunable, Rapid-uptake, AminoPolymer Aerogel Sorbent for Direct Air Capture of CO2 — Palo Alto Research Center
Direct Air Capture Using Trapped Small Amines in Hierarchical Nanoporous Capsules on Porous Electrospun Hollow Fibers — Rensselaer Polytechnic Institute
Development of Advanced Solid Sorbents for Direct Air Capture — RTI International
Direct Air Capture Recovery of Energy for CCUS Partnership (DAC RECO2UP) — Southern States Energy Board
Membrane Adsorbents Comprising Self-Assembled Inorganic Nanocages (SINCs) for Super-fast Direct Air Capture Enabled by Passive Cooling — SUNY
Low Regeneration Temperature Sorbents for Direct Air Capture of CO2 — Susteon Inc.
Next Generation Fiber-Encapsulated Nanoscale Hybrid Materials for Direct Air Capture with Selective Water Rejection — The Trustees of Columbia University in the City of New York
Gradient Amine Sorbents for Low Vacuum Swing CO2 Capture at Ambient Temperature — The University of Akron
Electrochemically-Driven CO2 Separation — University of Delaware
Development of Novel Materials for Direct Air Capture of CO2 — University of Kentucky Research Foundation
Kemper Project, MS 2010-2021
The Kemper Project was a gas-fired power plant constructed in Kemper County, Mississippi. It was originally planned as a coal-fired plant. Mississippi Power, a subsidiary of Southern Company, began construction in 2010. Had it become operational as a coal plant, the Kemper Project would have been a first-of-its-kind electricity plant to employ gasification and carbon capture technologies at this scale. The emission target was to reduce CO2 to the same level an equivalent natural gas plant would produce. However, in June 2017 the proponents – Southern Company and Mississippi Power – announced that the plant would only burn natural gas.
Construction was delayed and the scheduled opening was pushed back over two years, while the cost increased to $6.6 billion—three times the original estimate. According to a Sierra Club analysis, Kemper is the most expensive power plant ever built for the watts of electricity it will generate.
In October 2021, the coal gasification portion of the plant was demolished.
Terrell Natural Gas Processing Plant
Opened in 1972, the Terrell plant in Texas, United States was the oldest operating industrial CCS project as of 2017. CO2 is captured during gas processing and transported primarily via the Val Verde pipeline, where it is eventually injected at the Sharon Ridge oil field and other secondary sinks for use in EOR. The facility captures an average of somewhere between 0.4 and 0.5 million tons of CO2 per annum.
Enid Fertilizer
Beginning in 1982, the facility owned by the Koch Nitrogen company is the second oldest large scale CCS facility still in operation. The CO2 that is captured is a high purity byproduct of nitrogen fertilizer production. The process is made economical by transporting the CO2 to oil fields for EOR.
Shute Creek Gas Processing Facility
7 million metric tonnes of CO2 are recovered annually from ExxonMobil's Shute Creek gas processing plant near La Barge, Wyoming, and transported by pipeline to various oil fields for EOR. Started in 1986, as of 2017 this project had the second largest CO2 capture capacity in the world.
Petra Nova (2017-2020)
The Petra Nova project is a billion dollar endeavor undertaken by NRG Energy and JX Nippon to partially retrofit their jointly owned W.A. Parish coal-fired power plant with post-combustion carbon capture. The plant, which is located in Thompsons, Texas (just outside of Houston), entered commercial service in 1977. Carbon capture began on 10 January 2017. The WA Parish unit 8 generates 240 MW, and 90% of its CO2 emissions (about 1.4 million tonnes per year) were captured. The CO2 (99% purity) is compressed and piped about 82 miles to the West Ranch Oil Field, Texas, for EOR. The field has a capacity of 60 million barrels of oil, and its production increased from 300 barrels per day to 4,000 barrels daily. On 1 May 2020, NRG shut down Petra Nova, citing low oil prices during the COVID-19 pandemic. The plant had also reportedly suffered frequent outages and missed its carbon sequestration goal by 17% over its first three years of operation. In 2021 the plant was mothballed.
Illinois Industrial, Decatur IL
The Illinois Industrial Carbon Capture and Storage project in Decatur, Illinois is dedicated to geological CO2 storage. The public-private research project, spearheaded by Archer Daniels Midland Co, received a $171 million investment from the DOE and over $66 million from the private sector. The CO2 is a byproduct of the fermentation process of corn ethanol production and is stored 7,000 feet underground in the Mt. Simon Sandstone saline aquifer. Sequestration began in April 2017, with a carbon capture capacity of 1 Mt/a.
NET Power Demonstration Facility, La Porte TX
The NET Power Demonstration Facility in La Porte, TX was an oxy-combustion natural gas power plant that operated on the Allam power cycle. The plant was able to reduce its air emissions to zero by producing a near-pure stream of CO2; it was first fired in May 2018.
Century Plant, TX
Occidental Petroleum, along with SandRidge Energy, operated a West Texas hydrocarbon gas processing plant and related pipeline infrastructure that provides CO2 for Enhanced Oil Recovery (EOR). With a CO2 capture capacity of 8.4 Mt/a, the Century plant was the largest single industrial source CO2 capture facility in the world.
Developing projects by several countries
ANICA - Advanced Indirectly Heated Carbonate Looping Process
The ANICA Project focused on developing economically feasible carbon capture technology for lime and cement plants, which are responsible for 8% of total anthropogenic carbon dioxide emissions. In 2019, a consortium of 12 partners from Germany, the United Kingdom and Greece began working on integrating the indirectly heated carbonate looping (IHCaL) process in cement and lime production. The project aimed at lowering the energy penalty and CO2 avoidance costs for CO2 capture from lime and cement plants.
Port of Rotterdam CCUS Backbone Initiative
Expected to begin operation in 2021, the Port of Rotterdam CCUS Backbone Initiative aimed to implement a "backbone" of shared CCS infrastructure for use by businesses located around the Port of Rotterdam in Rotterdam, Netherlands. The project is overseen by the Port of Rotterdam, natural gas company Gasunie, and the EBN. It intends to capture and sequester 2 million tons of CO2 per year and increase this number in future years. Although dependent on the participation of companies, the goal of this project is to greatly reduce the carbon footprint of the industrial sector of the Port of Rotterdam and establish a successful CCS infrastructure in the Netherlands following the recently canceled ROAD project. CO2 captured from local chemical plants and refineries will be sequestered in the North Sea seabed. The possibility of a CCU initiative has also been considered, in which the captured CO2 would be sold to horticultural firms, who would use it to speed up plant growth, as well as to other industrial users.
Climeworks Direct Air Capture Plant and CarbFix2 Project
Climeworks opened the first commercial direct air capture plant near Zürich, Switzerland in 2017. Its process captures CO2 from ambient air using a patented filter, isolates the CO2 at high heat, and transports it to a nearby greenhouse as a fertilizer. The plant is built near a waste recovery facility that provides excess heat to power the plant.
Climeworks is also working with Reykjavik Energy on the CarbFix2 project with EU funding. This project, called "Orca," is located in Hellisheidi, Iceland. It uses direct air capture technology in conjunction with a large geothermal power plant. Once CO2 is captured using Climeworks' filters, it is heated using heat from the geothermal plant and used to carbonate water. The geothermal plant then pumps the carbonated water into underground rock formations where the CO2 reacts with basaltic bedrock and forms carbonate minerals for permanent storage.
OPEN100
The OPEN100 project, launched in 2020 by the Energy Impact Center (EIC), is the world's first open-source blueprint for nuclear power plant deployment. The Energy Impact Center and OPEN100 aim to reverse climate change by 2040 and believe that nuclear power is the only feasible energy source to power CCS without the compromise of releasing new CO2.
This project intends to bring together researchers, designers, scientists, engineers, think tanks, etc. to help compile research and designs that will eventually evolve into a blueprint that is available to the public and can be utilized in the development of future nuclear plants.
Nuada
MOF Technologies have developed Nuada, a modular point source carbon capture technology, which uses metal-organic frameworks (MOFs) to deliver energy-efficient removal at a fraction of the cost of conventional amines. After having been selected by the Global Cement and Concrete Association via their Innovandi Open Challenge, Nuada will partner with HeidelbergCement, Buzzi Unicem and Cementir Holding to build pilot plants in 2022.
References
External links
DOE Fossil Energy Department of Energy programs in CO2 capture and storage
US Department of Energy
Zero Emissions Platform - technical adviser to the EU Commission on the deployment of CCS and CCU
National Assessment of Geologic CO2 Storage Resources: Results United States Geological Survey
MIT Carbon Capture and Sequestration Project Database, until 2016
MIT Carbon Capture and Sequestration private public project
List
Bright green environmentalism
Emissions reduction
Gas technologies | List of carbon capture and storage projects | [
"Chemistry",
"Engineering"
] | 5,866 | [
"Greenhouse gases",
"Geoengineering",
"Carbon capture and storage",
"Emissions reduction"
] |
76,920,619 | https://en.wikipedia.org/wiki/Ground-based%20interferometric%20gravitational-wave%20search | Ground-based interferometric gravitational-wave search refers to the use of extremely large interferometers built on the ground to passively detect (or "observe") gravitational wave events from throughout the cosmos. Most recorded gravitational wave observations have been made using this technique; the first detection, revealing the merger of two black holes, was made in 2015 by the LIGO sites.
The major detectors are the two LIGO sites in the United States, Virgo in Italy and KAGRA in Japan, which are all part of the second generation of operational detectors. Developing projects include LIGO-India as part of the second generation, and the Einstein Telescope and Cosmic Explorer forming a third generation. Space-borne interferometers such as LISA are also planned, with a similar concept but targeting different kinds of sources and using very different technologies.
History
While gravitational waves were first formulated as part of general relativity by Einstein in 1916, there were no real attempts to detect them until the 1960s, when Joseph Weber created the first of so-called "Weber bars". While these proved unable to reach the required sensitivity for detecting gravitational waves, many research groups focused on this topic were created at that time. While a lot of efforts were dedicated to improving the resonant bar design, the idea of using a large interferometer for gravitational wave detection was formulated in the 1970s and began to gain traction in the 1980s, leading to the foundation of LIGO in 1984 and Virgo in 1989.
Most of the current large interferometers started construction in the 1990s and finished in the early 2000s (1999 for LIGO, 2003 for Virgo, 2002 for GEO 600). After a few years of observation and improvements to reach their target sensitivity, it became clear that a detection was unlikely and that further upgrades were required, leading to large projects now labelled as the "second generation of detectors" (Advanced LIGO and Advanced Virgo), with important sensitivity gains. This period also marked the beginning of joint observing periods between the different detectors, which are crucial to confirm the validity of a signal, and sparked collaborations between the different teams.
The second generation upgrades were made during the early 2010s, lasting from 2010 to 2014 for LIGO and 2011 to 2017 for Virgo. In parallel, the KAGRA project was launched in Japan in 2010. In 2015, soon after restarting observations, the two LIGO detectors achieved the first direct observation of gravitational waves. This marked the beginning of the still ongoing series of gravitational wave observation periods, labelled O1 through O5; Virgo joined the observations in 2017, near the end of the O2 period, quickly leading to the first three-detector observation, and a few days later the GW170817 event, which is the only one to date to have been observed both with gravitational waves and electromagnetic radiation. KAGRA was completed in 2020, but has so far observed only for brief periods due to its low sensitivity.
The O4 observing run is currently ongoing, and expected to last until June 2025. More than 90 confirmed detections have been published; the collaborations now also produce live alerts when signals are detected, with more than 100 significant alerts already emitted during O4.
Principle
In general relativity, a gravitational wave is a space-time perturbation which propagates at the speed of light. It thus slightly curves space-time, which locally changes the light path. Mathematically speaking, if h is the amplitude (assumed to be small) of the incoming gravitational wave and L the length of the optical cavity in which the light is in circulation, the change δL of the optical path due to the gravitational wave is given by the formula:

δL = F × h × L

with F being a geometrical factor which depends on the relative orientation between the cavity and the direction of propagation of the incoming gravitational wave. In other terms, the change in length is proportional both to the length of the cavity and to the amplitude of the gravitational wave.
Interferometer
In a typical configuration, the detector is a Michelson interferometer whose mirrors are suspended. A laser is divided into two beams by a beam splitter tilted by 45 degrees. The two beams propagate in the two perpendicular arms of the interferometer, are reflected by mirrors located at the end of the arms, and recombine on the beam splitter, generating interferences which are detected by a photodiode. An incoming gravitational wave changes the optical path of the laser beams in the arms, which then changes the interference pattern recorded by the photodiode.
This means the various mirrors of the interferometer must be "frozen" in position: when they move, the optical cavity length changes and so does the interference signal read at the instrument output port. The mirror positions relative to a reference and their alignment are monitored accurately in real time, with a precision better than a tenth of a nanometre for the lengths and a few nanoradians for the angles. The more sensitive the detector, the narrower its optimal working point. Reaching that working point from an initial configuration in which the various mirrors are moving freely is a control system challenge; a complex series of steps is required to coordinate all the steerable parts of the interferometer. Once the working point is achieved, corrections are continuously applied to keep it in the optimal configuration.
The signal induced by a potential gravitational wave is thus "embedded" in the light intensity variations detected at the interferometer output. Yet, several external causes—globally denoted as noise—change the interference pattern perpetually and significantly. Should nothing be done to remove or mitigate them, the expected physical signals would be buried in noise and would then remain undetectable. The design of detectors like Virgo and LIGO thus requires a detailed inventory of all noise sources which could impact the measurement, allowing a strong and continuing effort to reduce them as much as possible.
Using an interferometer rather than a single optical cavity allows one to significantly enhance the detector's sensitivity to gravitational waves. Indeed, in this configuration based on an interference measurement, the contributions from some experimental noises are strongly reduced: instead of being proportional to the length of the single cavity, they depend in that case on the length difference between the arms (so equal arm length cancels the noise). In addition, the interferometer configuration benefits from the differential effect induced by a gravitational wave in the plane transverse to its direction of propagation: when the length of an optical path changes by a quantity δL, the perpendicular optical path of the same length changes by −δL (same magnitude but opposite sign). And the interference at the output port of a Michelson interferometer depends on the difference of length between the two arms: the measured effect is hence amplified by a factor of 2 compared to a simple cavity.
The optimal working point of an interferometric detector of gravitational waves is slightly detuned from the "dark fringe", a configuration in which the two laser beams recombined on the beam splitter interfere in a destructive way: almost no light is detected at the output port.
Detectors
LIGO
LIGO is composed of two different detectors, one in Hanford, Washington and one in Livingston, Louisiana (they are thus separated by around 3000 km); the two detectors have very similar design, with 4 km long arms, although there are minor differences between the two. They were part of the first generation of detectors, and were completed in 2002; in 2010, they were shut down for an important set of upgrades, termed "Advanced LIGO", making the improved detector a part of the second generation. These upgrades were finished in early 2015, following which the two detectors made the first detection of gravitational waves.
Virgo
Virgo is a single detector located near Pisa, Italy, with 3 km long arms. It was part of the first generation of detectors, following its completion in 2003; it was shut down in 2011 to prepare for the "Advanced Virgo" second-generation upgrades. The upgrades were completed in 2017, allowing it to join the "O2" run, quickly making the first three-detector detection jointly with LIGO.
KAGRA
KAGRA (formerly known as LCGT) is a single interferometer with 3 km long arms, based in the Kamioka Observatory in Japan, which is part of the second generation of detectors. It was first made operational in 2020, although it has not been able to make a detection yet. Although the base design is similar to LIGO and Virgo, it is built underground and integrates cryogenic mirrors, which is why it has often been referred to as a "2.5 generation detector".
Other detectors
GEO600 was initially designed as a British-German effort to build an interferometer with 3 km long arms; it was later downscaled to 600 m due to funding reasons. It was completed in 2002 and is located near Hanover, Germany. Although it has limited capacities (especially in the lower frequency range), making a detection unlikely, it plays a key role in the gravitational wave network as a testbed for many new technologies.
TAMA 300 (and its predecessor, the prototype TAMA 20) was a Japanese detector with 300 m arms, built at the Mitaka campus of the National Astronomical Observatory of Japan. It was partly designed as a stepping stone for larger detectors (including KAGRA), and operated between 1999 and 2004. It has now been repurposed as a testbed for new technologies. The CLIO detector, with 100 m arms and located in the Kamioka mine, is another test detector, specifically designed to test the cryogenic technology used in KAGRA.
LIGO-Australia is a defunct project which was envisioned to be built on the model of the LIGO detector in Australia, but was finally not funded by the Australian government; the project was later relocated to become LIGO-India.
The Fermilab Holometer, with its 39 m long arms, probes a very different frequency range from other interferometers, aiming at the MHz range.
Future detectors
LIGO-India
LIGO-India is a project for a single interferometer based in Aundha, India, following a design very similar to LIGO (with support from the LIGO collaboration). It received approval from the Indian government in 2023, and is planned to be completed around 2030.
Cosmic Explorer
Cosmic Explorer is a project for a third-generation detector, featuring two interferometers with respectively 40 km and 20 km long arms located in two different places in the United States. It relies on a design similar to LIGO, leveraging the experience from the two LIGO detectors, scaled to the much longer arm length. It is currently going through the process of approval by the NSF. If approved, it should be completed by the end of the 2030s.
Einstein Telescope
Einstein Telescope is a European project for a third-generation detector; it is currently planned to use a design with three 10 km arms arranged in an equilateral triangle (effectively acting as 3 interferometers), which would be built underground; it would also use cryogenic mirrors. It is currently planned to be completed around 2035, with construction starting in 2026.
Science case
Ground-based detectors are designed to study gravitational waves from astrophysical sources. By design, they can only detect waves with a frequency ranging from a few Hz to a few thousand Hz. The main known gravitational-wave emitting systems within this range are: black hole and/or neutron star binary mergers, rotating neutron stars, bursts and supernova explosions, and even the gravitational wave background generated in the instants following the Big Bang. Moreover, gravitational radiation may also lead to the discovery of unexpected, as well as theoretically predicted, exotic objects.
Transient sources
Coalescences of black holes and neutron stars
When two massive and compact objects such as black holes and neutron stars orbit each other in a binary system, they emit gravitational radiation and, therefore, lose energy. Hence, they begin to get closer to each other, increasing the frequency and the amplitude of the gravitational waves; this first phase of the coalescence phenomenon, called the "inspiral", can last for millions of years. This culminates in the merger of the two objects, eventually forming a single compact object (generally a black hole). The part of the waveform corresponding to the merger has the largest amplitude and highest frequency, and can only be modeled by performing numerical relativity simulations of these systems. In the case of black holes, a signal is still emitted for a few seconds after the merger, while the new black hole "settles in"; this signal is known as the "ringdown". Current detectors are only sensitive to the late stages of the coalescence of black hole and neutron star binaries: only the last seconds of the whole process can currently be observed (including the end of the inspiral phase, the merger itself and part of the ringdown). The typical shape of the detectable signal is known as the "chirp", as it resembles the sound emitted by some birds, with a rapid increase in amplitude and frequency. All the gravitational wave signals detected so far originate from black hole or neutron star mergers.
Bursts
Any signal lasting from a few milliseconds to a few seconds is considered a gravitational wave burst.
Supernova explosions—the gravitational collapse of massive stars at the end of their lives—emit gravitational radiation that may be seen by current interferometers. A multi-messenger detection (electromagnetic and gravitational radiation, and neutrinos) would help to better understand the supernova process and the formation of black holes.
Other possible burst candidates include perturbations in neutron stars, black hole encounters, "memory" effects arising from the non-linearity of general relativity or cosmic strings. Some phenomena may also generate "long" bursts (longer than 1 second), like instabilities in a black hole accretion disk, or in newly formed black holes and neutron stars when some of the matter ejected during the supernova falls back towards the compact object.
Continuous sources
The main expected sources of continuous gravitational waves are neutron stars, very compact objects resulting from the collapse of massive stars. In particular, pulsars are special cases of neutron stars that emit light pulses periodically: they can spin up to hundreds of times per second (the fastest spinning pulsar currently known is PSR J1748−2446ad, which spins 716 times per second). Any small deviation from axial symmetry (a tiny "mountain" on the surface) will generate long duration periodic gravitational waves. A number of potential mechanisms have been identified which could generate some "mountains" due to thermal, mechanical, or magnetic effects; accretion may also induce a break in axial symmetry.
Another possible source of continuous waves in the current detection range could be more exotic objects, such as dark matter candidates. Axions rotating around a black hole or binary systems consisting of a primordial low-mass black hole and another compact object have in particular been suggested as potential sources. Some possible types of dark matter may also be detected by the interferometers directly, by interacting with optical elements of the device.
Stochastic background
Several physical phenomena may be the source of a gravitational wave stochastic background, an additional source of noise of astrophysical and/or cosmological origin. It represents a (usually) continuous source of gravitational waves, but unlike other continuous wave sources (like rotating neutron stars), it comes from large regions of the sky instead of a single location.
The cosmic microwave background (CMB) is the earliest signal of the Universe that can be observed in the electromagnetic spectrum. However, cosmological models predict the emission of gravitational waves generated instants after the Big Bang. Because gravitational waves interact very weakly with matter, detecting such a background would give more insight into the cosmological evolution of our Universe. In particular, it could provide evidence for inflation, from gravitational waves emitted either by the process of inflation itself (according to some theories) or at the end of inflation; first-order phase transitions may also produce gravitational waves. Primordial black holes, which may form during the early universe, are also a potential source of a stochastic background for that period.
Moreover, current detectors may be able to detect an astrophysical background resulting from the superposition of all faint and distant sources emitting gravitational waves at all times, which would help to study the evolution of astrophysical sources and star formation. The most likely sources to contribute to the astrophysical background are binary neutron stars, binary black holes, or neutron star-black hole binaries. Other possible sources include supernovae and pulsars. It is expected that this type of background will be the first kind to be detected by the current ground interferometers.
Finally, cosmic strings may represent a source of gravitational wave background, whose detection could provide proof that cosmic strings actually exist.
Exotic sources
Non-conventional, alternative models of compact objects have been proposed by physicists. Some examples of these models can be described within general relativity (quark and strange stars, boson and Proca stars, Kerr black holes with scalar and Proca hair), others arise from some approaches to quantum gravity (cosmic strings, fuzzballs, gravastars), or come from alternative theories of gravity (scalarised neutron stars or black holes, wormholes). Theoretically predicted exotic compact objects could now be detected and would help to elucidate the true nature of gravity or discover new forms of matter. Furthermore, completely unexpected phenomena may be observed, unveiling new physics.
Fundamental properties of gravity
Gravitational wave polarization
Gravitational waves are expected to have two "tensor" polarizations, nicknamed "plus" and "cross" due to their effects on a ring of particles. A single gravitational wave is usually a superposition of these two polarizations, depending on the orientation of the source.
In addition, some theories of gravity allow for additional polarizations to exist: the two "vector" polarizations (x and y), and the two "scalar" polarizations ("breathing" and "longitudinal"). Detecting these additional polarizations could provide evidence for physics beyond general relativity.
The polarizations can only be distinguished using several detectors; they could only be properly probed after Virgo was introduced, as the two LIGO detectors are almost co-aligned. They can be measured from compact binary coalescences, but also from the stochastic background and continuous waves. With the combination of the current detectors, it is possible to determine the presence or absence of the additional polarizations, but not their nature; a total of 5 independent detectors would be required to fully separate all the polarizations (except for the longitudinal and breathing polarizations, which cannot be distinguished from each other by current detector designs).
Lensed gravitational waves
General relativity predicts that a gravitational wave should be subject to gravitational lensing, just as light waves are; that is, the trajectory of a gravitational wave will be curved by the presence of a massive object (typically a galaxy or a galaxy cluster) near its path. This can result in an increase in the amplitude of the wave, or even multiple observations of the event at different times, as we currently observe for the light of supernovae. Such events are predicted to be common enough to be detected by the current detectors in the near future. Microlensing effects are also predicted. Detecting a lensed event would allow for a very precise localization, as well as further tests of the speed of gravity and of the polarization.
Cosmological measurements
Gravitational waves also provide a new way to measure some cosmological parameters, and in particular the Hubble constant H0, which represents the rate of the expansion of the universe and whose value is currently disputed due to conflicting measurements from different methods. The main benefit of this method is that the source luminosity distance measured from the gravitational wave signal does not rely on other measurements or assumptions, as is usually the case. There are two main possibilities for measuring H0 with gravitational waves in current detectors:
Multi-messenger events with both a gravitational wave and an electromagnetic signal can be used, by measuring the source distance with the gravitational wave signal and their recession velocity by identifying the galaxy in which the event took place, and applying Hubble's law.
A statistical treatment can be applied to the observed population of binary black hole mergers (often called "dark sirens" in this context), constraining both their mass distribution and H0; an external galaxy catalog can also be added to the analysis to identify possible hosts for the sources and improve the measurement. A toy numerical illustration of the first, multi-messenger approach is sketched after this list.
Testing general relativity
The measurement of gravitational wave signals offers a unique perspective for testing results from general relativity, as they are produced in environments where the gravitational field is very strong (e.g., near black holes). Such tests may uncover physics beyond general relativity, or possible issues in the models.
These tests include:
Looking for a residual signal in the data after subtracting models of the signal, which may indicate that some of the signal is not correctly modelled by general relativity.
Checking that the signal from a merger satisfies some basic assumptions, such as verifying that the estimated parameters of the system are consistent across the different phases of the signal ("inspiral-merger-ringdown consistency test").
Introducing perturbations in the models for simulating gravitational waves to see if they fit the data.
Investigating possible dispersion (absent in general relativity but not in alternative theories).
Analyzing the remnant of a merger, by measuring the post-merger phase of the signal ("ringdown"), which is supposed to be fully determined by the mass and spin of the remnant. Such measurements can test the predictions for the energy lost to gravitational waves during the merger and for the nature of the remnant object; some hypothetical objects may also feature "echoes" of the ringdown signal.
Looking for non-standard polarizations (as seen above).
Data analysis
The detection of gravitational waves within the output of the detectors (typically known as the "strain") is a complex process. Currently, most of the data processing is done within the LIGO-Virgo-KAGRA (LVK) collaboration; teams outside of the collaboration also produce results on the data once it is released publicly.
The data from the current detectors is initially only available to LVK members; segments of data around detected events are released at the time of publication of the related paper, and the full data is released after a proprietary period, currently lasting 18 months. During the third observing run (O3), this resulted in two separate data releases (O3a and O3b), corresponding to the first six months and last six months of the run respectively. The data is then available for anyone on the Gravitational Wave Open Science Center (GWOSC) platform.
Transient searches
Event detection pipelines
The various software used for the analysis of gravitational wave signals are usually referred to as "search pipelines", as they often encompass many steps of the data processing. During the O3 run, five different pipelines were used to identify event candidates within the data and collect a list of observations of short-lived ("transient") gravitational wave signals in a catalog publication. Four of them (GstLAL, PyCBC, MBTA, and SPIIR) were dedicated to the detection of compact binary coalescences (CBC, the only type of event detected so far), while the fifth one (cWB) was designed to detect any transient signal. All five pipelines were used during the run ("online") as part of the low-latency alert system, and after the run ("offline") to reassess the significance of the candidates and spot events which may have been missed (except for SPIIR, which was only run online). The oLIB pipeline, also looking for generic "burst" signals, has also been used to generate alerts, but not for the catalogs. In addition, two other pipelines have been used specifically for burst searches after the run, as they are too computationally expensive to be run online: BayesWave, a pipeline using Bayesian techniques which was used to further investigate events detected by cWB, and STAMPS-AS, which is designed to look specifically for long-duration bursts (more than 1 second).
The four CBC pipelines all rely on the concept of matched filtering, a technique used to search for a known signal within noisy data in an optimal way. This technique requires some knowledge of what the signal looks like, and is thus dependent on the model used to simulate it. Although reasonable models exist, the complexity of the equations governing the dynamics of a compact merger makes the generation of accurate waveforms challenging; the development of new waveforms is still an active field of research. In addition, the sources cover a wide range of possible parameters (masses and spins of the two objects, location in the sky) which will yield different waveforms, instead of having one specific signal. This prompts the researchers to generate "template banks" containing a large number of different waveforms corresponding to different parameters; a compromise has to be made between how tight the bank is (maximizing the number of detections) and the limited computational resources available to carry out the search with all the templates. How to generate such template banks efficiently is also an active field of research. During the search, the matched filtering is performed on every waveform within the (pre-calculated) template bank.
Although the four searches use the same technique, they all have different optimizations and specificities on how they handle the data. In particular, they use different techniques for estimating the significance of an event, for discriminating between real events and glitches, and for combining the data from the different detectors; they also use different template banks.
The cWB (coherent wave burst) pipeline uses a different approach: it works by grouping the data from the different detectors and carrying out a joint analysis to look for coherent signals appearing in several detectors at once. Although its sensitivity for binary mergers is lower than that of the dedicated CBC pipelines, its strength lies in being able to detect signals from any kind of source, as it does not require any assumption on the shape of the signal (which is why it is often referred to as an "unmodeled" search).
Low-latency
The low-latency system is designed to produce alerts for astronomers when gravitational events are detected, with the hope that an electromagnetic counterpart can be observed. This is achieved by centralizing the event candidates from the different analysis pipelines in the gravitational-wave candidate event database (GraceDB), from which the data is processed. If an event is deemed significant enough, a rapid sky localization is produced and preliminary alerts are sent autonomously within the span of a few minutes; after a more precise evaluation of the source parameters, as well as human vetting, a new alert or a retraction notice is sent within a day. The alerts are sent through the GCN, which also centralizes alerts from gamma-ray and neutrino telescopes, as well as SciMMA. A total of 78 alerts were sent during the O3 run, of which 23 were later retracted.
Parameter estimation
After an event has been detected by one of the event detection pipelines, a deeper analysis is performed to get a more precise estimation of the parameters of the source and the measurement uncertainty. During the O3 run, this was carried out using several different pipelines, including Bilby and RIFT. These pipelines employ Bayesian methods to quantify the uncertainty, including MCMC and nested sampling.
Search for counterparts
While many astronomers try to follow up the low-latency alerts from gravitational wave detectors, the reverse also exists: electromagnetic events expected to have an associated gravitational wave emission are subjected to a deeper search. One of the prime targets for these are gamma-ray bursts; these are thought to be associated with supernovae ("long" bursts, lasting more than 2 seconds) and with compact binary coalescences involving neutron stars ("short" bursts). The merger of two neutron stars in particular has been confirmed to be associated with both a gamma-ray burst and gravitational waves with the GW170817 event.
Searches targeted toward gamma-ray burst observations have been performed on data from the past runs using the pyGRB pipeline for CBC, using methods similar to the regular searches, but centered around the time of the bursts and targeting only the sky area found by gamma-ray observatories. An unmodelled search was also carried out using the X-pipeline package, in a similar fashion to the regular unmodelled searches.
In addition to these searches, several pipelines are looking for coincidences between alerts from gravitational waves and alerts from other detectors. In particular, the RAVEN pipeline is part of the low-latency infrastructure and analyzes the coincidence with gamma-ray burst events and other sources. The LLAMA pipeline is also dedicated to identifying such coincidences with neutrino events, predominantly from IceCube.
Continuous wave searches
Searches dedicated to periodic gravitational waves—such as the ones generated by rapidly rotating neutron stars—are generally referred to as continuous wave searches. These can be divided in three categories: all-sky searches, which look for unknown signals from any direction, directed searches, which aim for objects with known positions but unknown frequency, and targeted searches, which hunt for signals from sources where both the position and the frequency are known. The directed and targeted searches are motivated by the fact that all-sky searches are extremely computationally expensive, and thus require trade-offs that limit their sensitivity.
The principal challenge in continuous wave searches is that the signal is much weaker than currently detected transients, meaning that one must observe over a long period to accumulate enough data to detect it, as the signal-to-noise ratio scales with the square root of the observing time (intuitively, the signal will add up over the observing duration while the noise will not). The issue is that over such long periods of time, the frequency from the source will evolve, and the motion of the Earth around the Sun will affect the frequency via the Doppler effect. This greatly increases the computational cost of the search, even more so when the frequency is unknown. Although there are mitigation strategies, such as semi-coherent searches, where the analysis is performed separately on segments of the data rather than the full data, these result in a loss of sensitivity. Other approaches include cross-correlation, inspired by stochastic wave searches, which takes advantage of having multiple detectors to look for a correlated signal in a pair of detectors.
Stochastic wave searches
The stochastic gravitational wave background is another target for data analysis teams. By definition, it can be seen as a source of noise in the detectors; the main challenge is to separate it from the other sources of noise, and measure its power spectral density. The easiest method for solving this issue is to look for correlations within a network of several detectors; the idea being that the noise related to the gravitational wave background will be identical in all detectors, while the instrumental noise will (in principle) not be correlated across the detectors. Another possible approach would be to look for excess power not accounted by other noise sources; however, this proves impractical for current interferometers as the noise is not known well enough compared to the expected power of the stochastic background. Only searches based on cross-correlation between detectors are currently in use by the LVK collaboration, although other types of searches are also developed.
This kind of search must also account for factors such as the detectors' antenna patterns, the motion of the Earth, and the distance between the detectors. Assumptions also have to be made about some properties of the background; it is common to assume that it is Gaussian and isotropic, but searches for anisotropic, non-Gaussian, and more exotic backgrounds also exist.
Gravitational wave properties searches
A number of software packages have been developed to investigate the physics surrounding gravitational waves. These analyses are generally performed offline (after the run) and often rely on the results of the other searches (currently mostly CBC searches).
Several analyses look for events observed multiple times due to gravitational lensing, first by trying to match all known events against one another, and then by performing a joint analysis for the most promising pairs of events; these analyses have been performed using the LALInference and HANABI software. Additional searches for events that may have been missed by the regular CBC searches are also performed, reusing the existing CBC pipelines.
Software designed to estimate the Hubble constant has also been developed. The gwcosmo pipeline performs a Bayesian analysis to determine a distribution of possible values of the constant, using both "dark sirens" (CBC events without an electromagnetic counterpart), which can be correlated with a galaxy catalog, and events with an electromagnetic counterpart, for which a direct estimate can be made from the distance measured with gravitational waves and the redshift of the identified host galaxy. This requires assuming a specific population of black holes, which may be a significant source of bias; recent analyses have tried to circumvent this issue by fitting the population and the Hubble constant simultaneously.
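For an event with an electromagnetic counterpart, the basic idea reduces, at low redshift, to Hubble's law: the gravitational-wave signal gives the luminosity distance and the host galaxy gives the redshift. The numbers below are hypothetical values loosely resembling a nearby binary neutron star merger, not the output of gwcosmo, and the one-line estimate ignores peculiar velocities and all the population and selection effects the full Bayesian analysis must model.

```python
C_KM_S = 299_792.458  # speed of light in km/s

z_host = 0.0098   # redshift of the identified host galaxy (hypothetical value)
d_gw = 43.8       # luminosity distance from the GW signal, in Mpc (hypothetical)

# Low-redshift Hubble law: v = c * z ~ H0 * d
h0 = C_KM_S * z_host / d_gw
print(f"H0 ~ {h0:.1f} km/s/Mpc")  # ~67 km/s/Mpc for these inputs
```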
References
Interferometric gravitational-wave instruments
Gravitational-wave astronomy | Ground-based interferometric gravitational-wave search | [
"Physics",
"Astronomy"
] | 6,811 | [
"Astronomical sub-disciplines",
"Gravitational-wave astronomy",
"Astrophysics"
] |
76,926,283 | https://en.wikipedia.org/wiki/GUN%20%28graph%20database%29 | GUN (Graph Universe Node) is an open source, offline-first, real-time, decentralized graph database written in JavaScript for the web browser.
The database is implemented as a peer-to-peer network distributed across "Browser Peers" and optional "Runtime Peers". It employs multi-master replication with a custom commutative replicated data type (CRDT).
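The commutative-merge idea behind CRDTs can be illustrated with a minimal last-write-wins register. This is a generic sketch, not GUN's own conflict-resolution algorithm (GUN implements a custom CRDT); it only shows the property such replication relies on: merging is order-independent, so peers that exchange updates in any order converge to the same state.

```python
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class LWWRegister:
    """Last-write-wins register, a simple commutative replicated data type."""
    value: Any
    timestamp: float
    node_id: str  # deterministic tie-breaker for equal timestamps

    def merge(self, other: "LWWRegister") -> "LWWRegister":
        # Commutative, associative, and idempotent, so replicas converge
        # no matter how (or how often) updates are exchanged.
        return max(self, other, key=lambda r: (r.timestamp, r.node_id))

a = LWWRegister("draft", timestamp=1.0, node_id="peer-A")
b = LWWRegister("final", timestamp=2.0, node_id="peer-B")
assert a.merge(b) == b.merge(a)  # order-independent
print(a.merge(b).value)          # -> "final"
```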
GUN is currently used in the decentralized version of the Internet Archive.
References
External links
Official website
Graph databases
Database engines
Peer-to-peer computing
Mesh networking
Distributed computing architecture | GUN (graph database) | [
"Mathematics",
"Technology"
] | 119 | [
"Wireless networking",
"Graph theory",
"Mathematical relations",
"Graph databases",
"Mesh networking"
] |
76,928,066 | https://en.wikipedia.org/wiki/Cytochrome%20P450%20%28individual%20enzymes%29 | In biochemistry, cytochrome P450 enzymes have been identified in all kingdoms of life: animals, plants, fungi, protists, bacteria, and archaea, as well as in viruses. More than 300,000 distinct CYP proteins are known.
P450s in humans
Human P450s are primarily membrane-associated proteins located either in the inner membrane of mitochondria or in the endoplasmic reticulum of cells. P450s metabolize thousands of endogenous and exogenous chemicals. Some P450s metabolize only one (or a very few) substrates, such as CYP19 (aromatase), while others may metabolize multiple substrates. Both of these characteristics account for medicinal interest. Cytochrome P450 enzymes play roles in hormone synthesis and breakdown (including estrogen and testosterone synthesis and metabolism), cholesterol synthesis, and vitamin D metabolism. Cytochrome P450 enzymes also function to metabolize potentially toxic compounds, including drugs and products of endogenous metabolism such as bilirubin, principally in the liver.
The Human Genome Project has identified 57 human genes coding for the various cytochrome P450 enzymes.
Drug metabolism
P450s are the major enzymes involved in drug metabolism, accounting for about 75% of the total metabolism. Most drugs undergo deactivation by P450s, either directly or by facilitated excretion from the body. However, many substances are bioactivated by P450s to form their active compounds, as with the antiplatelet drug clopidogrel and the opiate codeine.
The CYP450 enzyme superfamily comprises 57 active members, seven of which play roles in the metabolism of most pharmaceuticals. The fluctuation in the amount of CYP450 enzymes (CYP1A2, CYP2C8, CYP2C9, CYP2C19, CYP2D6, CYP3A4, and CYP3A5) in phase 1 (detoxification) can have varying effects on individuals, as genetic expression varies from person to person. This variation is due to the enzymes' genetic polymorphism, which leads to variability in their function and expression. To optimize drug metabolism in individuals, genetic testing can be conducted to determine functional foods and specific phytonutrients that suit the individual's CYP450 polymorphism. Understanding these genetic variations can help personalize drug therapies for improved effectiveness and reduced adverse reactions.
Drug interaction
Many drugs may increase or decrease the activity of various P450 isozymes either by inducing the biosynthesis of an isozyme (enzyme induction) or by directly inhibiting the activity of the P450 (enzyme inhibition). Classical examples include anti-epileptic drugs such as phenytoin, which induces CYP1A2, CYP2C9, CYP2C19, and CYP3A4.
Effects on P450 isozyme activity are a major source of adverse drug interactions, since changes in P450 enzyme activity may affect the metabolism and clearance of various drugs. For example, if one drug inhibits the P450-mediated metabolism of another drug, the second drug may accumulate within the body to toxic levels. Hence, these drug interactions may necessitate dosage adjustments or choosing drugs that do not interact with the P450 system.
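The clinical consequence of inhibition can be illustrated with the standard one-compartment pharmacokinetic relationship, in which average steady-state concentration is inversely proportional to clearance. The numbers below are illustrative only, not values for any particular drug: they simply show that an inhibitor which halves P450-mediated clearance doubles the steady-state exposure, which for a drug with a narrow therapeutic index may reach toxic levels.

```python
def steady_state_avg(dose_mg: float, interval_h: float,
                     clearance_l_per_h: float, bioavailability: float = 1.0) -> float:
    """Average steady-state plasma concentration (mg/L) for repeated dosing:
    C_ss,avg = F * dose / (CL * tau)."""
    return bioavailability * dose_mg / (clearance_l_per_h * interval_h)

baseline_cl = 10.0               # L/h, assumed clearance without interaction
inhibited_cl = baseline_cl / 2   # co-administered inhibitor halves clearance

print(steady_state_avg(200, 12, baseline_cl))    # ~1.67 mg/L
print(steady_state_avg(200, 12, inhibited_cl))   # ~3.33 mg/L (doubled exposure)
```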
Many substrates for CYP3A4 are drugs with a narrow therapeutic index, such as amiodarone or carbamazepine. Because these drugs are metabolized by CYP3A4, the mean plasma levels of these drugs may increase because of enzyme inhibition or decrease because of enzyme induction.
Interaction of other substances
Naturally occurring compounds may also induce or inhibit P450 activity. For example, bioactive compounds found in grapefruit juice and some other fruit juices, including bergamottin, dihydroxybergamottin, and paradicin-A, have been found to inhibit CYP3A4-mediated metabolism of certain medications, leading to increased bioavailability and thus a strong possibility of overdosing. Because of this risk, avoiding grapefruit juice and fresh grapefruit entirely while taking such drugs is usually advised.
Other examples:
Saint-John's wort, a common herbal remedy, induces CYP3A4 but also inhibits CYP1A1 and CYP1B1.
Tobacco smoking induces CYP1A2 (example CYP1A2 substrates are clozapine, olanzapine, and fluvoxamine).
At relatively high concentrations, starfruit juice has also been shown to inhibit CYP2A6 and other P450s. Watercress is also a known inhibitor of the cytochrome P450 CYP2E1, which may result in altered drug metabolism for individuals on certain medications (e.g., chlorzoxazone).
Tributyltin inhibits cytochrome P450, leading to masculinization of mollusks.
Goldenseal, with its two notable alkaloids berberine and hydrastine, has been shown to alter P450-marker enzymatic activities (involving CYP2C9, CYP2D6, and CYP3A4).
Other specific P450 functions
Steroid hormones
A subset of cytochrome P450 enzymes play roles in the synthesis of steroid hormones (steroidogenesis) by the adrenals, gonads, and peripheral tissue:
CYP11A1 (also known as P450scc or P450c11a1) in adrenal mitochondria carries out "the activity formerly known as 20,22-desmolase" (steroid 20α-hydroxylase, steroid 22-hydroxylase, cholesterol side-chain cleavage).
CYP11B1 (encoding the protein P450c11β) found in the inner mitochondrial membrane of adrenal cortex has steroid 11β-hydroxylase, steroid 18-hydroxylase, and steroid 18-methyloxidase activities.
CYP11B2 (encoding the protein P450c11AS), found only in the mitochondria of the adrenal zona glomerulosa, has steroid 11β-hydroxylase, steroid 18-hydroxylase, and steroid 18-methyloxidase activities.
CYP17A1, in endoplasmic reticulum of adrenal cortex has steroid 17α-hydroxylase and 17,20-lyase activities.
CYP21A2 (P450c21) in adrenal cortex conducts 21-hydroxylase activity.
CYP19A (P450arom, aromatase) in endoplasmic reticulum of gonads, brain, adipose tissue, and elsewhere catalyzes aromatization of androgens to estrogens.
Polyunsaturated fatty acids and eicosanoids
Certain cytochrome P450 enzymes are critical in metabolizing polyunsaturated fatty acids (PUFAs) to biologically active, intercellular cell signaling molecules (eicosanoids) and/or metabolize biologically active metabolites of the PUFA to less active or inactive products. These CYPs possess cytochrome P450 omega hydroxylase and/or epoxygenase enzyme activity.
CYP1A1, CYP1A2, and CYP2E1 metabolize endogenous PUFAs to signaling molecules: they metabolize arachidonic acid (i.e. AA) to 19-hydroxyeicosatetraenoic acid (i.e. 19-HETE; see 20-hydroxyeicosatetraenoic acid); eicosapentaenoic acid (i.e. EPA) to epoxyeicosatetraenoic acids (i.e. EEQs); and docosahexaenoic acid (i.e. DHA) to epoxydocosapentaenoic acids (i.e. EDPs).
CYP2C8, CYP2C9, CYP2C18, CYP2C19, and CYP2J2 metabolize endogenous PUFAs to signaling molecules: they metabolize AA to epoxyeicosatetraenoic acids (i.e. EETs); EPA to EEQs; and DHA to EDPs.
CYP2S1 metabolizes PUFA to signaling molecules: it metabolizes AA to EETs and EPA to EEQs.
CYP3A4 metabolizes AA to EET signaling molecules.
CYP4A11 metabolizes endogenous PUFAs to signaling molecules: it metabolizes AA to 20-HETE and EETs; it also hydroxylates DHA to 22-hydroxy-DHA (i.e. 22-HDHA).
CYP4F2, CYP4F3A, and CYP4F3B (see CYP4F3 for latter two CYPs) metabolize PUFAs to signaling molecules: they metabolize AA to 20-HETE. They also metabolize EPA to 19-hydroxyeicosapentaenoic acid (19-HEPE) and 20-hydroxyeicosapentaenoic acid (20-HEPE) as well as metabolize DHA to 22-HDA. They also inactivate or reduce the activity of signaling molecules: they metabolize leukotriene B4 (LTB4) to 20-hydroxy-LTB4, 5-hydroxyeicosatetraenoic acid (5-HETE) to 5,20-diHETE, 5-oxo-eicosatetraenoic acid (5-oxo-ETE) to 5-oxo-20-hydroxy-ETE, 12-hydroxyeicosatetraenoic acid (12-HETE) to 12,20-diHETE, EETs to 20-hydroxy-EETs, and lipoxins to 20-hydroxy products.
CYP4F8 and CYP4F12 metabolize PUFAs to signaling molecules: they metabolize EPA to EEQs and DHA to EDPs. They also metabolize AA to 18-hydroxyeicosatetraenoic acid (18-HETE) and 19-HETE.
CYP4F11 inactivates or reduces the activity of signaling molecules: it metabolizes LTB4 to 20-hydroxy-LTB4, (5-HETE) to 5,20-diHETE, (5-oxo-ETE) to 5-oxo-20-hydroxy-ETE, (12-HETE) to 12,20-diHETE, (15-HETE) to 15,20-diHETE, EETs to 20-hydroxy-EETs, and lipoxins to 20-hydroxy products.
CYP4F22 ω-hydroxylates extremely long "very long chain fatty acids", i.e. fatty acids that are 28 or more carbons long. The ω-hydroxylation of these special fatty acids is critical to creating and maintaining the skin's water barrier function; autosomal recessive inactivating mutations of CYP4F22 are associated with the lamellar ichthyosis subtype of congenital ichthyosiform erythroderma in humans.
CYP families in humans
Humans have 57 genes and more than 59 pseudogenes divided among 18 families of cytochrome P450 genes and 43 subfamilies. This is a summary of the genes and of the proteins they encode. See the homepage of the cytochrome P450 Nomenclature Committee for detailed information.
P450s in other species
Animals
Other animals often have more P450 genes than humans do. Reported numbers range from 35 genes in the sponge Amphimedon queenslandica to 235 genes in the cephalochordate Branchiostoma floridae. Mice have genes for 101 P450s, and sea urchins have even more (perhaps as many as 120 genes).
Most CYP enzymes are presumed to have monooxygenase activity, as is the case for most mammalian CYPs that have been investigated (except for, e.g., CYP19 and CYP5). Gene and genome sequencing is far outpacing biochemical characterization of enzymatic function, though many genes with close homology to CYPs with known function have been found, giving clues to their functionality.
The classes of P450s most often investigated in non-human animals are those either involved in development (e.g., retinoic acid or hormone metabolism) or involved in the metabolism of toxic compounds (such as heterocyclic amines or polyaromatic hydrocarbons). Often there are differences in gene regulation or enzyme function of P450s in related animals that explain observed differences in susceptibility to toxic compounds (e.g., canines' inability to metabolize xanthines such as caffeine). Some drugs are metabolized by different enzymes in different species, resulting in different metabolites, while others are metabolized in one species but excreted unchanged in another. For this reason, one species's reaction to a substance is not a reliable indication of the substance's effects in humans. The Sonoran Desert fly Drosophila mettleri uses upregulated expression of the CYP28A1 gene to detoxify rotting cactus tissue; flies of this species evolved this upregulation in response to exposure to high levels of alkaloids in their host plants.
P450s have been extensively examined in mice, rats, dogs, zebrafish, and turkeys. CYP1A5 and CYP3A37 in turkeys were found to be very similar to the human CYP1A2 and CYP3A4 respectively, in terms of their kinetic properties as well as in the metabolism of aflatoxin B1.
CYPs have also been extensively studied in insects, often to understand pesticide resistance. For example, CYP6G1 is linked to insecticide resistance in DDT-resistant Drosophila melanogaster and CYP6M2 in the mosquito malaria vector Anopheles gambiae is capable of directly metabolizing pyrethroids. Other cytochromes, such as those in Anopheles gambiae, are under preliminary research for their potential role in pesticide resistance, infectious diseases, and malaria.
Microbial
Microbial cytochromes P450 are often soluble enzymes and are involved in diverse metabolic processes. In bacteria the distribution of P450s is very variable, with many bacteria having no identified P450s (e.g., E. coli), while some, predominantly actinomycetes, have numerous P450s. Those identified so far are generally involved either in the biotransformation of xenobiotic compounds (e.g., CYP105A1 from Streptomyces griseolus metabolizes sulfonylurea herbicides to less toxic derivatives) or in specialised metabolite biosynthetic pathways (e.g., CYP170B1 catalyses production of the sesquiterpenoid albaflavenone in Streptomyces albus). Although no P450 has yet been shown to be essential in a microbe, the CYP105 family is highly conserved, with a representative in every streptomycete genome sequenced so far. Because bacterial P450 enzymes are soluble, they are generally regarded as easier to work with than the predominantly membrane-bound eukaryotic P450s. This, combined with the remarkable chemistry they catalyse, has led to many studies using heterologously expressed proteins in vitro. Few studies have investigated what P450s do in vivo, what their natural substrates are, and how P450s contribute to the survival of bacteria in the natural environment. Three examples that have contributed significantly to structural and mechanistic studies are listed here, but many different families exist.
Cytochrome P450 cam (CYP101A1) originally from Pseudomonas putida has been used as a model for many cytochromes P450 and was the first cytochrome P450 three-dimensional protein structure solved by X-ray crystallography. This enzyme is part of a camphor-hydroxylating catalytic cycle consisting of two electron transfer steps from putidaredoxin, a 2Fe-2S cluster-containing protein cofactor.
Cytochrome P450 eryF (CYP107A1) originally from the actinomycete bacterium Saccharopolyspora erythraea is responsible for the biosynthesis of the antibiotic erythromycin by C6-hydroxylation of the macrolide 6-deoxyerythronolide B.
Cytochrome P450 BM3 (CYP102A1) from the soil bacterium Bacillus megaterium catalyzes the NADPH-dependent hydroxylation of several long-chain fatty acids at the ω–1 through ω–3 positions. Unlike almost every other known CYP (except CYP505A1, cytochrome P450 foxy), it constitutes a natural fusion protein between the CYP domain and an electron donating cofactor. Thus, BM3 is potentially very useful in biotechnological applications.
Cytochrome P450 119 (CYP119A1), isolated from the thermophilic archaeon Sulfolobus solfataricus, has been used in a variety of mechanistic studies. Because thermophilic enzymes evolved to function at high temperatures, they tend to function more slowly at room temperature (if at all) and are therefore excellent mechanistic models.
Fungi
The commonly used azole class of antifungal drugs works by inhibition of the fungal cytochrome P450 14α-demethylase.
Plants
Cytochromes P450 are involved in a variety of processes of plant growth, development, and defense. It is estimated that P450 genes make up approximately 1% of the plant genome. These enzymes produce various fatty acid conjugates, plant hormones, secondary metabolites, lignins, and a variety of defensive compounds.
Cytochromes P450 play roles in plant defense through involvement in phytoalexin biosynthesis, hormone metabolism, and the biosynthesis of diverse secondary metabolites. The expression of cytochrome P450 genes is regulated in response to environmental stresses, indicating a critical role in plant defense mechanisms.
The biosynthesis of phytoalexins, antimicrobial compounds produced by some plants, involves the P450 enzymes CYP79B2, CYP79B3, CYP71A12, CYP71A13, and CYP71B15. The first step of camalexin biosynthesis produces indole-3-acetaldoxime (IAOx) from tryptophan and is catalyzed by either CYP79B2 or CYP79B3. IAOx is then immediately converted to indole-3-acetonitrile (IAN), a step controlled by either CYP71A13 or its homolog CYP71A12. The last two steps of the camalexin biosynthesis pathway are catalyzed by CYP71B15. In these steps, dihydrocamalexic acid (DHCA) is formed from cysteine-indole-3-acetonitrile (Cys(IAN)), followed by the biosynthesis of camalexin. Some intermediate steps within the pathway remain unclear, but it is well understood that cytochrome P450 is pivotal in camalexin biosynthesis and that this phytoalexin plays a major role in plant defense mechanisms.
Cytochromes P450 are largely responsible for the synthesis of jasmonic acid (JA), a common hormonal defense against abiotic and biotic stresses in plant cells. For example, the P450 CYP74A is involved in the dehydration reaction that produces an unstable allene oxide from a hydroperoxide. JA chemical reactions are critical in the presence of biotic stresses such as plant wounding, as shown in the plant Arabidopsis. As a prohormone, jasmonic acid must be converted to the JA-isoleucine (JA-Ile) conjugate, through catalysis by JAR1, in order to be activated. JA-Ile synthesis then leads to the assembly of the co-receptor complex composed of COI1 and several JAZ proteins. Under low JA-Ile conditions, the JAZ protein components act as transcriptional repressors to suppress downstream JA genes. Under adequate JA-Ile conditions, however, the JAZ proteins are ubiquitinated and degraded through the 26S proteasome, resulting in functional downstream effects. Furthermore, several CYP94s (CYP94C1 and CYP94B3) are involved in JA-Ile turnover, showing that JA-Ile oxidation status affects plant signaling in a catabolic manner. Cytochrome P450 hormonal regulation in response to extracellular and intracellular stresses is critical for a proper plant defense response, as shown through thorough analysis of various P450s in the jasmonic acid and phytoalexin pathways.
Cytochrome P450 aromatic O-demethylase, which is made of two distinct promiscuous parts, a cytochrome P450 protein (GcoA) and a three-domain reductase, is significant for its ability to convert lignin, the aromatic biopolymer common in plant cell walls, into renewable carbon chains through a catabolic set of reactions. In short, it facilitates a critical step in lignin conversion.
InterPro subfamilies
InterPro subfamilies:
Cytochrome P450, B-class
Cytochrome P450, mitochondrial
Cytochrome P450, E-class, group I
Cytochrome P450, E-class, group II
Cytochrome P450, E-class, group IV
Aromatase
The CYP1 family metabolizes drugs such as clozapine, imipramine, paracetamol, and phenacetin, as well as heterocyclic aryl amines. CYP1A2 is inducible, and roughly 5-10% of people are CYP1A2 deficient. These enzymes oxidize uroporphyrinogen to uroporphyrin (CYP1A2) in heme metabolism, but they may have additional undiscovered endogenous substrates. They are inducible by some polycyclic hydrocarbons, some of which are found in cigarette smoke and charred food. These enzymes are of interest because, in assays, they can activate compounds to carcinogens. High levels of CYP1A2 have been linked to an increased risk of colon cancer; since the 1A2 enzyme can be induced by cigarette smoking, this links smoking with colon cancer.
See also
Steroidogenic enzyme
CYP11 family
References
External links
EC 1.14
Pharmacokinetics
Metabolism
Integral membrane proteins | Cytochrome P450 (individual enzymes) | [
"Chemistry",
"Biology"
] | 5,008 | [
"Pharmacology",
"Pharmacokinetics",
"Cellular processes",
"Biochemistry",
"Metabolism"
] |
78,262,701 | https://en.wikipedia.org/wiki/Maya%20Trotz | Maya Trotz is a Guyanese environmental engineer and academic at the University of South Florida.
Early life
Maya Trotz was born to Ulric and Marilyne Trotz on 25 February 1973 in Kitty, Georgetown, Guyana. Her father Ulric was a chemist and was involved in the establishment of the Institute of Applied Science and Technology in Guyana.
Her mother died of Covid-19 in 2021, after which Trotz urged officials to take further precautions to improve the Covid-19 response in the Caribbean.
Education
Trotz's primary education was at St Margaret's Primary, and she then went on to Queen's College. While Trotz had an interest in the arts, she decided to pursue the sciences.
Trotz majored in Chemical Engineering at the Massachusetts Institute of Technology (MIT) with a minor in Theatre. It was during this time that she discovered Environmental Engineering leading to her post-graduate study of the subject at Stanford University.
Career
On completion of her doctoral research, Trotz joined the University of South Florida as an assistant professor in 2004. She is now a professor in Civil and Environmental Engineering there. Trotz has undertaken research with organisations such as the World Wide Fund for Nature, the Inter-American Development Bank, and Guyana Water Incorporated. Trotz's research centres on environmental engineering and on education as a tool for sustainable development and widening participation. She has participated in research focused on the Americas, and the Caribbean in particular.
In 2013, Trotz was one of the founders of the Caribbean Science Foundation; originally started in Barbados, the foundation is now active in twelve Caribbean countries. She also served on the foundation's governing council. During this time she was also part of the Sagicor Visionaries Challenge, named in recognition of Sagicor's financial contribution to the foundation. Trotz is also a board member of Fragments of Hope Corp, a non-governmental organisation dedicated to scalable solutions for coral reef restoration in Belize.
In 2018, Trotz became the president of the Association of Environmental Engineering and Science Professors (AEESP), becoming the first African American woman to do so.
Awards
In 2014 Trotz received the AEESP Award for Outstanding Contribution to Environmental Engineering and Science Education. In 2021 Trotz received the Steven K. Dentel AEESP Award for Global Outreach due to her "extensive" and "sustained portfolio of accomplishments".
References
Guyanese women academics
Environmental engineers
21st-century women engineers
Living people
1973 births
People from Georgetown, Guyana
Massachusetts Institute of Technology alumni
Stanford University alumni
University of South Florida faculty
Association of Environmental Engineering and Science Professors
Guyanese emigrants to the United States | Maya Trotz | [
"Chemistry",
"Engineering"
] | 548 | [
"Environmental engineers",
"Environmental engineering"
] |
78,263,771 | https://en.wikipedia.org/wiki/Defense%20Resources%20Act | The Defense Resources Act (DRA) was draft emergency legislation of the United States Government. A 1983 submission to Congress confirmed the existence of the plan. The present and prior status of the DRA proposal is not immediately certain.
Background
Reagan Administration officials presented a draft plan of legislative text that could be submitted to Congress for approval during a national emergency, such as nuclear war. The decision to make such a request would be the prerogative of the president. Congress would be free to accept, modify or reject the proposals. If enacted into law, the DRA would amend the Defense Production Act (DPA).
These DRA and DPA enactments collectively would form the basis of a substantial part of Reagan's nuclear war and emergency plans. After enactment, the president would be free to sign orders and directives employing the approved legislation as a legal and statutory basis for presidential emergency powers. Such plans for legislation would coexist with a portfolio of other emergency action papers, including Presidential Emergency Action Documents and Other than a Plan D situation documents.
The table of contents of the proposed legislation calls for several Titles that would form the basis of key emergency authorities. Title X authorized the president to direct federal officials to employ limited international censorship of communications entering and leaving the USA. Other draft Titles authorized seizure of industrial plants and economic stabilization via price controls.
The draft proposals prepared by Reagan administration staff employed significant constitutional safeguards. They did not contain a request for Congress to suspend any part of the Constitution per se. Sections of the law specifically address Fifth Amendment safeguards. Title X limits censorship to information entering or leaving the USA; it did not call for press censorship of domestic news. It should be remembered that these plans contemplated nuclear war, in which many freedoms might be in grave wartime jeopardy as the nation fought for its existence.
Whether the legislation was enacted in a war emergency or another, non-military event, Congress would at all times retain the constitutional authority to amend, revoke, or modify these authorities, by veto override should the contingency arise. The proposals did not limit oversight by Congress, the federal judiciary, or the Supreme Court of the United States, nor did they give the president unlimited, perpetual, unilateral, or unconstitutional power. However, scholars will need to examine this material in light of current jurisprudence.
The question of resolving conflicts between Congress and the White House over authority in these matters would presumably be up to federal courts. The president has inherent constitutional powers that may overlap some proposed plans and that do not derive from Congress. Similarly, presidential powers with respect to the Emergency Alert System would not exist without statute laws of Congress that authorize EAS.
Congress could cancel EAS if desired, leaving no FCC rules to carry out EAS presidential messages. Further, the president does not require approval from Congress as to his Commander in Chief powers, but Congress can limit presidential powers by revocation of funding. DPA amendments approved via DRA would simply grant or revoke authorization for the president to act, which he might or might not do. Further, the president is constrained by impeachment, should Congress decide the president abused his authorities. Lastly, all such plans require funding, the source and quantity of which in nuclear war is questionable. Parts of the DPA proposal address the Fifth Amendment mandate to compensate owners for seizure of private property.
References
Reagan Era
Nuclear warfare | Defense Resources Act | [
"Chemistry"
] | 674 | [
"Radioactivity",
"Nuclear warfare"
] |
78,267,384 | https://en.wikipedia.org/wiki/SOS%3A%20The%20San%20Onofre%20Syndrome | SOS - The San Onofre Syndrome: Nuclear Power’s Legacy is a documentary film that investigates the management of radioactive waste at the San Onofre Nuclear Generating Station in California. The film highlights the station's proximity to the ocean, at only 108 feet from the rising sea, and addresses concerns about the oversight of radioactive materials at nuclear facilities in the United States and beyond. It was directed by James Heddle, Mary Beth Brangan, and Morgan Peterson.
The film has been featured at several cinema festivals and has earned many accolades. It received the Grand Jury Award for Documentary Feature at the 2023 Awareness Film Festival in Los Angeles, California, as well as the Best Educational Documentary Award at the 2024 International Uranium Film Festival in Rio de Janeiro.
Synopsis
SOS: The San Onofre Syndrome delves into the efforts of Southern California residents to address safety concerns about the condition of the San Onofre Nuclear Generating Station, whose final shutdown came in 2013. The film also examines the residents' realization of a new threat: the presence of nuclear waste stored close to the sea, focusing on its long-term radioactivity. SOS is a documentary that raises awareness of the global challenges in nuclear waste management and discusses different approaches to handling these issues. Filmed over 12 years, it includes interviews with residents, activists, engineers, and nuclear energy experts to document public concerns and community responses to the facility.
The film documents former Japanese Prime Minister Naoto Kan's visit to San Diego on June 4, 2013, to participate in a panel entitled “Fukushima: Ongoing Lessons for California”. The panel also featured former Nuclear Regulatory Commission chairman Gregory Jaczko, former NRC Commissioner Peter A. Bradford, and nuclear engineer Arnie Gundersen, who discussed topics related to nuclear energy and safety. The producer Mary Beth Brangan stated in an interview that the Fukushima accident catalyzed her and her life partner James Heddle to make the film.
Awards
The documentary has been recognized at several international film festivals and has received awards for its impact and social awareness. Notable awards include:
2023 - Grand Jury Award For Documentary Feature at the Awareness Film Festival in Los Angeles, California.
2024 - Best Educational Documentary Award at the International Uranium Film Festival in Rio de Janeiro, Brazil.
2024 - Outstanding Excellence Award for Best Documentary at the Documentaries Without Borders Film Festival.
2024 - Outstanding Excellence Award (Environmental) at the Nature Without Borders International Film Festival.
2024 - Best Actuality Subject In a Documentary at the Global Nonviolent Film Festival.
Featured cast
The following individuals were featured in the film:
External links
References
Documentary films about nuclear technology
2023 films
American documentary films
Films shot in California
American educational films
English-language documentary films
Nuclear power plants in California
Former nuclear power stations in the United States
Anti-nuclear protests in the United States
Environmental issues in California
Buildings and structures in San Diego County, California
History of San Diego County, California
2013 disestablishments in California
Former power stations in California
Nuclear energy
Nuclear power
Nuclear energy in the United States
Nuclear power in the United States
Anti–nuclear power activists
Nuclear engineers
Nuclear reactors
Nuclear power stations in North America
Nuclear energy policy
Non-renewable resource companies established in 1968 | SOS: The San Onofre Syndrome | [
"Physics",
"Chemistry"
] | 650 | [
"Nuclear power",
"Physical quantities",
"Power (physics)",
"Nuclear energy",
"Nuclear physics",
"Radioactivity"
] |
78,270,875 | https://en.wikipedia.org/wiki/XW10508 | XW10508 is an orally active prodrug of esketamine, an NMDA receptor antagonist, which is under development for the treatment of major depressive disorder and chronic pain. It is taken by mouth.
The drug is a novel esketamine analogue and conjugate that acts as a prodrug of esketamine. Esketamine, and by extension XW10508, is an NMDA receptor antagonist and indirect AMPA receptor activator. XW10508 is being developed as once-daily orally administered extended-release and immediate-release formulations with misuse resistance.
As of August 2024, XW10508 is in phase 2 clinical trials for major depressive disorder and is in phase 1 clinical trials for chronic pain. However, no recent development has been reported for these indications. The drug is being developed by XWPharma, which was previously known as XW Laboratories. It is being developed in Australia. The chemical structure of XW10508 does not yet seem to have been disclosed.
References
External links
Pipeline - XWPharma
Arylcyclohexylamines
Dissociative drugs
Drugs with undisclosed chemical structures
Enantiopure drugs
Experimental antidepressants
Experimental drugs
Experimental hallucinogens
NMDA receptor antagonists
Prodrugs | XW10508 | [
"Chemistry"
] | 279 | [
"Chemicals in medicine",
"Stereochemistry",
"Enantiopure drugs",
"Prodrugs"
] |
78,270,894 | https://en.wikipedia.org/wiki/Ascona%20B-DNA%20Consortium | The Ascona B-DNA Consortium (ABC) is a collaborative international research initiative founded in 2001 to investigate the sequence-dependent mechanical properties of DNA using molecular dynamics (MD) simulations. The consortium has contributed significantly to the understanding of DNA structure and dynamics over the past two decades, from the atomic level to larger chromatin structures. The ABC's work includes the development of simulation standards, force fields, and data libraries for DNA, enabling the systematic study of sequence effects across different nucleotide configurations.
History
The ABC was founded in 2001 during an informal meeting of a group of scientists attending the "Atomistic to Continuum Models for Long Molecules" conference in Ascona, Switzerland. The consortium began by joining the efforts of nine laboratories with expertise in DNA molecular dynamics and sequence-dependent DNA effects. The initial aim was to conduct state-of-the-art MD simulations to establish standards for DNA modeling and to analyze the effects of sequence on DNA's structure and flexibility.
Phase I and II
In its initial phase, known as Phase I (2004–2005), the ABC conducted 15-nanosecond simulations of 10 different 15-mer DNA sequences using the parm94 force field. This study, which analyzed sequence effects at the dinucleotide level, marked the first systematic approach to DNA simulation in the field.
Following improvements in force fields, the consortium launched Phase II between 2007 and 2009, re-running the initial simulations using the parmbsc0 force field (developed at the Barcelona Supercomputing Center) and extending simulation times to 50 nanoseconds for a set of 39 DNA sequences. This phase allowed the first comprehensive study of all 136 unique tetranucleotide combinations.
µABC, miniABC, and hexABC
To address limitations in simulation times, the µABC project (2010–2014) pushed simulations into the microsecond range with 39 B-DNA 18-mer sequences containing at least 3 copies of all the unique tetranucleotides, facilitating studies of convergence. Results from this study were key to the creation of the parmbsc1 force field, a state-of-the-art set of parameters for the simulation of DNA alone or in complex with other biomolecules. Using this refined force field, a project known as miniABC involved simulations of a minimal library of 13 B-DNA sequences under diverse salt conditions, which enabled further analysis of tetranucleotides and allowed the extension and refinement of the Calladine–Dickerson rules, including subtle conformational polymorphisms of DNA structure.
Currently, the hexABC project seeks to advance DNA conformational studies by simulating 950 20-mer sequences over the sub-millisecond timescale. This project aims to investigate the effects of next-to-nearest neighbor interactions, covering all 2080 unique hexanucleotide combinations with the latest force fields parmbsc1 and OL15. HexABC is the joint effort of 14 research institutions: EPFL Lausanne, Kaunas University of Technology, Gdańsk University of Technology, IRB Barcelona, Jülich Supercomputing Center, Louisiana Tech University, University of Cambridge, University of Florida, University of Leeds, University of Nottingham, University of the Republic of Uruguay, University of Utah, University of York and ENS Paris-Saclay.
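The tetranucleotide and hexanucleotide counts quoted above follow from the reverse-complement symmetry of duplex DNA: a k-mer and its reverse complement describe the same double-stranded step. The short sketch below enumerates the equivalence classes directly and reproduces the figures of 136 unique tetranucleotides and 2080 unique hexanucleotides.

```python
from itertools import product

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def unique_kmers(k: int) -> int:
    """Count k-mers up to reverse-complement equivalence."""
    seen = set()
    for letters in product("ACGT", repeat=k):
        seq = "".join(letters)
        seen.add(min(seq, revcomp(seq)))  # canonical representative
    return len(seen)

print(unique_kmers(4))  # 136 tetranucleotides
print(unique_kmers(6))  # 2080 hexanucleotides
```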
2023 meeting
In April 2023, the ABC celebrated its 22nd anniversary by hosting a conference back in Ascona, Switzerland. This event brought together consortium members and collaborators to discuss recent theoretical and experimental developments in DNA structure and dynamics, including sequence effects on DNA interactions within chromatin.
The conference, funded by the Centre Européen de Calcul Atomique et Moléculaire and the Congressi Stefano Francini, featured three keynote presentations, 39 oral communications, and two poster sessions.
A special issue of the Biophysical Reviews journal edited by Prof Wilma Olson and published by Springer Nature was devoted to some of the studies presented at the conference.
Current members
The current members of the ABC consortium, as of 2024, are active contributors to the consortium's ongoing projects:
EPFL Lausanne - John Maddocks, Rahul Sharma
Kaunas University of Technology - Daiva Petkevičiūtė-Gerlach
Gdańsk University of Technology - Jacek Czub
IRB Barcelona - Modesto Orozco, Juan Pablo Arcón, Federica Battistini, Adam Hospital, Genís Bayarri, Subhamoy Deb, Milosz Wieczo
Jülich Supercomputing Center - Paolo Carloni, Katya Ahmad
Louisiana Tech University - Thomas Bishop, Ran Sun
University of Cambridge - Rosana Collepardo, Jorge R. Espinosa
University of Florida - Alberto Pérez
University of Leeds - Sarah A. Harris
University of Nottingham - Charles A. Laughton
University of the Republic of Uruguay - Pablo D. Dans, Gabriela da Rosa
University of Utah - Thomas Cheatham III, Rodrigo Galindo-Murillo
University of York - Agnes Noy
ENS Paris-Saclay - Marco Pasi
References
Molecular dynamics | Ascona B-DNA Consortium | [
"Physics",
"Chemistry"
] | 1,046 | [
"Molecular dynamics",
"Computational chemistry",
"Molecular physics",
"Computational physics"
] |
78,274,196 | https://en.wikipedia.org/wiki/Bexirestrant | Bexirestrant is a selective estrogen receptor degrader (SERD) which is being evaluated for the treatment of breast cancer. This orally bioavailable compound has demonstrated potent activity against both wild-type and mutant forms of the estrogen receptor (ER), addressing a critical need in overcoming resistance to current endocrine therapies.
It is structurally characterized by an E-alkene linked to an azetidine core.
References
Antineoplastic drugs
Selective estrogen receptor degraders
Azetidines
Benzopyrans
Fluoroalkanes
Fluorobenzene derivatives
Phenols
Vinylbenzenes | Bexirestrant | [
"Chemistry"
] | 133 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
78,274,217 | https://en.wikipedia.org/wiki/Caleicine | Caleicine is a unique sesquiterpene compound found exclusively in Calea ternifolia, a Mexican flowering plant known for its potential psychoactive properties. This compound has garnered interest in the field of ethnopharmacology and natural product chemistry due to its putative role as a prodrug of eugenol, a potent GABA positive modulator.
Caleicine is the p-coumaric acid ester of junenol; it has no lactone moiety, making it distinct from the other sesquiterpene lactones in Calea ternifolia.
Chemistry
Caleicine is a sesquiterpene with a phenylpropanoid moiety bonded to junenol.
In one investigation, lab mice were administered an aqueous solution of Calea ternifolia in doses of 200, 400, and 800 mg and subjected to a forced swim test. At the 400 and 800 mg doses, the mice showed depressive-like effects.
Theorised mechanism of action
The mechanisms of Calea ternifolia-induced somnolence are not well understood; however, caleicine could play a role through its potential metabolism.
Caleicine contains a p-coumaric acid moiety. In the body, p-coumaric acid is biosynthesised into many lignols and phenylpropanoids, including eugenol.
Eugenol acts as a positive allosteric modulator of the GABAA receptor, a property common amongst oneirogens. In addition, eugenol inhibits both MAO-A and MAO-B, slowing the metabolism of serotonin, melatonin, and dopamine.
Eugenol is one of many potential metabolites of caleicine, and the mechanisms of both caleicine and Calea ternifolia remain poorly understood.
Caleicine is one of many GABAergic compounds found in Calea ternifolia and putatively acts as a prodrug of the bioactive and potent eugenol. Caleicine is a strong candidate for being responsible for the effects of Calea ternifolia, as the GABA modulation exhibited by eugenol matches that attributed to the plant.
Calea ternifolia's negative side effects (nausea, vomiting, and delirium-based hallucinations) are the same as those of eugenol and other GABAergic compounds.
Positive allosteric modulation of GABA receptors is found in many sedative substances, such as methaqualone, propofol, ethanol, and zolpidem. Substances that positively modulate GABA typically produce anxiolytic, anticonvulsant, oneirogenic, sedative, hypnotic, euphoriant, and muscle-relaxant effects.
See also
GABA receptor
Germacranolides
Myristicin
References
Sesquiterpenes
Esters
4-Hydroxyphenyl compounds
Decalins
Isopropyl compounds | Caleicine | [
"Chemistry"
] | 622 | [
"Organic compounds",
"Esters",
"Functional groups"
] |
78,274,310 | https://en.wikipedia.org/wiki/Rintodestrant | Rintodestrant is an orally bioavailable selective estrogen receptor degrader (SERD) developed by G1 Therapeutics for the treatment of estrogen receptor-positive (ER+) breast cancer. Structurally inspired by the 6-OH-benzothiophene scaffold used in arzoxifene and raloxifene, rintodestrant selectively binds to the estrogen receptor and inhibits ER signaling, demonstrating efficacy in endocrine-resistant tumors.
A phase I clinical trial evaluated rintodestrant as monotherapy and in combination with the CDK4/6 inhibitor palbociclib in patients with ER+/HER2- advanced breast cancer.
References
Antineoplastic drugs
Selective estrogen receptor degraders
Diaryl ethers
Benzothiophenes
Enoic acids
Fluorobenzene derivatives
Ketones
Phenols | Rintodestrant | [
"Chemistry"
] | 185 | [
"Pharmacology",
"Ketones",
"Functional groups",
"Medicinal chemistry stubs",
"Pharmacology stubs"
] |
66,612,002 | https://en.wikipedia.org/wiki/Evolutionary%20Classification%20of%20Protein%20Domains | The Evolutionary Classification of Protein Domains (ECOD) is a biological database that classifies protein domains available from the Protein Data Bank. The ECOD tries to determine the evolutionary relationships between proteins.
Similar to Pfam, CATH, and SCOP, ECOD compiles domains instead of whole proteins. However, ECOD focuses more heavily on evolutionary relationships: instead of grouping proteins by folds, which may simply reflect convergent evolution, ECOD groups proteins by demonstrable homology only.
References
Protein structure
Protein classification
Protein databases
Protein superfamilies | Evolutionary Classification of Protein Domains | [
"Chemistry",
"Biology"
] | 114 | [
"Protein structure",
"Protein superfamilies",
"Structural biology",
"Protein classification"
] |