| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
47,623,626 | https://en.wikipedia.org/wiki/Multi%20expression%20programming | Multi Expression Programming (MEP) is an evolutionary algorithm for generating mathematical functions that describe a given set of data. MEP is a Genetic Programming variant that encodes multiple solutions in the same chromosome. The MEP representation is not fixed (multiple representations have been tested). In the simplest variant, MEP chromosomes are linear strings of instructions, a representation inspired by Three-address code. The strength of MEP lies in its ability to encode multiple solutions to a problem in the same chromosome, allowing larger zones of the search space to be explored. For most problems, this advantage comes with no running-time penalty compared with genetic programming variants that encode a single solution per chromosome.
Representation
MEP chromosomes are arrays of instructions represented in Three-address code format.
Each instruction contains a variable, a constant, or a function. If the instruction is a function, then its arguments (given as addresses of earlier instructions) are also encoded.
Example of MEP program
Here is a simple MEP chromosome (labels on the left side are not a part of the chromosome):
1: a
2: b
3: + 1, 2
4: c
5: d
6: + 4, 5
7: * 3, 5
Fitness computation
When the chromosome is evaluated, it is not fixed in advance which instruction will provide the output of the program. In many cases, a set of programs is obtained, some of them completely unrelated (they share no common instructions).
For the above chromosome, here is the list of possible programs obtained during decoding:
E1 = a,
E2 = b,
E3 = a + b,
E4 = c,
E5 = d,
E6 = c + d,
E7 = (a + b) * d.
Each instruction is evaluated as a possible output of the program.
The fitness (or error) is computed in a standard manner. For instance, in the case of symbolic regression, the fitness is the sum of differences (in absolute value) between the expected output (called target) and the actual output.
Fitness assignment process
Which expression will represent the chromosome? Which one will give the fitness of the chromosome?
In MEP, the best of them (the one with the lowest error) represents the chromosome. This is different from other GP techniques: in Linear genetic programming the last instruction gives the output, whereas in Cartesian Genetic Programming the gene providing the output is evolved like all other genes.
Note that, for many problems, this evaluation has the same complexity as in the case of encoding a single solution in each chromosome. Thus, there is no penalty in running time compared to other techniques.
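To make the decoding and fitness assignment concrete, here is a minimal sketch in Python (illustrative only, not from the article): the chromosome layout, function set, and dataset are simplified assumptions, reusing the example chromosome from the section above.

```python
# Minimal sketch of MEP decoding and fitness assignment (symbolic regression).
# The gene layout and dataset are illustrative assumptions, not a reference implementation.

def evaluate_chromosome(chromosome, variables):
    """Evaluate every instruction once; each value is a candidate program output."""
    values = []
    for gene in chromosome:
        if isinstance(gene, tuple):          # function gene: (op, addr1, addr2);
            op, i, j = gene                  # addresses point to earlier instructions
            left, right = values[i - 1], values[j - 1]
            values.append(left + right if op == '+' else left * right)
        else:                                # terminal gene: a variable name
            values.append(variables[gene])
    return values

def fitness(chromosome, dataset):
    """Lowest summed absolute error among ALL programs encoded in the chromosome."""
    errors = [0.0] * len(chromosome)
    for variables, target in dataset:
        for k, output in enumerate(evaluate_chromosome(chromosome, variables)):
            errors[k] += abs(target - output)
    return min(errors)   # the best-encoded solution represents the chromosome

# The chromosome from the article: instruction 7 encodes E7 = (a + b) * d.
chromosome = ['a', 'b', ('+', 1, 2), 'c', 'd', ('+', 4, 5), ('*', 3, 5)]
data = [({'a': 1, 'b': 2, 'c': 0, 'd': 4}, 12)]   # target: (1 + 2) * 4 = 12
print(fitness(chromosome, data))                   # 0.0 -- E7 matches exactly
```

Note that every instruction is evaluated exactly once per data point, so scoring all encoded programs costs no more than evaluating a chromosome with a single designated output, which is the running-time claim made above.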
Software
MEPX
MEPX is a free, cross-platform (Windows, macOS, and Ubuntu Linux) software package for the automatic generation of computer programs. It can be used for data analysis, particularly for solving symbolic regression, statistical classification and time-series problems.
libmep
libmep is a free and open-source library implementing the Multi Expression Programming technique. It is written in C++.
hmep
hmep is an open-source library implementing the Multi Expression Programming technique in the Haskell programming language.
See also
Genetic programming
Cartesian genetic programming
Gene expression programming
Grammatical evolution
Linear genetic programming
Notes
External links
Multi Expression Programming website
Multi Expression Programming source code
Machine learning algorithms
Regression and curve fitting software
Software that uses wxWidgets | Multi expression programming | [
"Biology"
] | 681 | [
"Genetics techniques",
"Genetic programming"
] |
47,626,347 | https://en.wikipedia.org/wiki/Setipiprant | Setipiprant (INN; developmental code names ACT-129968, KYTH-105) is an investigational drug developed for the treatment of asthma and scalp hair loss. It was originally developed by Actelion and acts as a selective, orally available antagonist of the prostaglandin D2 receptor 2 (DP2). The drug is being developed as a novel treatment for male pattern baldness by Allergan.
Medical uses
Scalp hair loss
Acting through DP2, PGD2 can inhibit hair growth, suggesting that this receptor is a potential target for the treatment of baldness. A phase 2a study to evaluate the safety, tolerability, and efficacy of oral setipiprant relative to a placebo in 18- to 49-year-old males with androgenetic alopecia was completed in May 2018 and did not find statistically significant improvement.
Allergic conditions
Setipiprant proved to be well tolerated and reasonably effective in reducing allergen-induced airway responses in clinical trials with asthmatic patients. However, while the drug supported the concept that DP2 contributes to asthmatic disease, it did not show sufficient advantage over existing drugs and was discontinued from further development for this application.
Adverse effects
Data from phase II and III clinical trials did not detect any severe adverse effects to setipiprant. The authors were unable to identify any pattern of adverse effects that differ from placebo, including subjective reporting of symptoms and objective laboratory monitoring.
Interactions
While setipiprant mildly induces the drug metabolizing enzyme CYP3A4 in vitro, the interaction appears to not be clinically relevant.
Pharmacology
Mechanism of action
Allergic conditions
Setipiprant binds to the DP2 receptor with a dissociation constant of 6 nM, representing potent antagonism of the receptor. The DP2 receptor, also called the CRTh2 receptor, is a G-protein-coupled receptor (GPCR) that is expressed on certain inflammatory cells, such as eosinophils, basophils, and certain lymphocytes. For its mechanism of action in the treatment of allergic conditions, setipiprant's DP2 antagonism prevents the action of prostaglandin D2 (PGD2) on these receptors. The DP2 receptor mediates the activation of type 2 helper T (Th2) cells, eosinophils, and basophils in the lungs, which are white blood cells implicated in producing the inflammatory response that characterizes allergic conditions. Activation of DP2 on Th2 cells by PGD2 induces the secretion of inflammatory cytokines (interleukin (IL) 4, IL-5, and IL-13), which cause an increase of eosinophils in the blood, remodeling of lung tissue, and hypersensitivity of lung tissue to allergens.
Setipiprant does not antagonize the thromboxane receptor (TP). The bronchoconstricting properties of PGD2 are not inhibited by setipiprant, since these are mediated by the TP receptor. As a point of contrast, ramatroban is a selective TP antagonist and DP2 receptor antagonist.
Setipiprant does not appreciably inhibit the activity of the enzyme cyclooxygenase 1 (COX-1), which is responsible for the synthesis of prostaglandins (including PGD2).
Scalp hair loss
Prostaglandin D2 synthase (PTGDS) is an enzyme that produces PGD2. In men with androgenic alopecia, the enzyme PTGDS is elevated in the bald scalp tissue, as well as its product PGD2. PGD2 inhibits the growth of hair follicles through its activity on the DP2 receptor, but not the DP1 receptor. Theoretically, setipiprant's DP2 receptor antagonism may counteract the activity of PGD2 in hair follicles, thereby stimulating hair growth.
Pharmacokinetics
The oral bioavailability of setipiprant is 44% in rats and 55% in dogs, which suggests that it should be orally bioavailable in humans. The half-life of setipiprant in humans is about 11 hours. The maximum concentration in plasma (Cmax) is 6.04 and 6.44 mcg/mL for setipiprant tablets and capsules respectively, with an area under the curve of 31.88 and 31.50 mcg×hours/mL for tablets and capsules respectively. Cmax was reached between 1.8 and 4 hours after oral administration. The tablet and capsule formulations are bioequivalent.
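As an illustrative calculation (not stated in the source), assuming standard first-order elimination kinetics, the reported half-life corresponds to an elimination rate constant of

ke = ln 2 / t1/2 ≈ 0.693 / 11 h ≈ 0.063 h^-1

so roughly 6% of the drug present in plasma is eliminated per hour.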
Chemistry
Setipiprant appears as a light yellow to yellow colored solid. Based on general guidelines, the powder form is considered stable for 2 years at 4 degrees C, and for 3 years at -20 degrees C. When dissolved in a solvent, setipiprant is stable for 1 month at -20 degrees C, and 6 months at -80 degrees C. It is considered soluble in DMSO at concentrations ≥ 36 mg/mL.
History
Setipiprant was initially researched by Actelion as a treatment for allergies and inflammatory disorders, particularly asthma, but despite being well tolerated in clinical trials and showing reasonable efficacy against allergen-induced airway responses in asthmatic patients, it failed to show sufficient advantages over existing drugs and was discontinued from further development in this application.
However, following the discovery in 2012 that the prostaglandin D2 receptor (DP/PGD2) is expressed at high levels in the scalp of men affected by male pattern baldness, the rights to setipiprant were acquired by Kythera to develop the drug as a novel treatment for baldness. The favorable pharmacokinetics and relative lack of side effects seen in earlier clinical trials mean that fresh clinical trials for this new application can be conducted fairly quickly. Setipiprant is currently under development by Allergan for the prevention of androgenic alopecia following its acquisition of Kythera.
See also
Prostaglandin DP2 receptor
Fevipiprant
Ramatroban
References
External links
Setipiprant - AdisInsight
Prostaglandins
Receptor antagonists
1-Naphthyl compounds | Setipiprant | [
"Chemistry"
] | 1,333 | [
"Neurochemistry",
"Receptor antagonists"
] |
68,807,012 | https://en.wikipedia.org/wiki/Protein%20aggregation%20predictors | Protein aggregation predictors are computational methods that use protein sequence and/or protein structure to predict protein aggregation. The table below shows the main features of software for the prediction of protein aggregation.
Table
See also
PhasAGE toolbox
Amyloid
Protein aggregation
References
Protein structure
Structural bioinformatics software
Proteomics
Neurodegenerative disorders | Protein aggregation predictors | [
"Chemistry"
] | 63 | [
"Protein structure",
"Structural biology"
] |
65,961,732 | https://en.wikipedia.org/wiki/Substrate%20inhibition%20in%20bioreactors | Substrate inhibition in bioreactors occurs when the concentration of substrate (such as glucose, salts, or phenols) exceeds the optimal range and reduces the growth rate of the cells within the bioreactor. This is often confused with substrate limitation, which describes environments in which cell growth is limited by low substrate concentration. Limited conditions can be modeled with the Monod equation; however, the Monod equation is no longer suitable in substrate-inhibiting conditions. A Monod deviation, such as the Haldane (Andrew) equation, is more suitable for substrate-inhibiting conditions. These cell growth models are analogous to equations that describe enzyme kinetics, although, unlike enzyme kinetics parameters, cell growth parameters are generally estimated empirically.
General Principles
Cell growth in bioreactors depends on a wide range of environmental and physiological conditions such as substrate concentration. With regards to bioreactor cell growth, substrate refers to the nutrients that the cells consume and is contained within the bioreactor medium. Cell growth can either be substrate limited or inhibited depending on whether the substrate concentration is too low or too high, respectively. The Monod equation accurately describes limiting conditions, but substrate inhibition models are more complex.
Substrate inhibition occurs when the rate of microbial growth decreases because the concentration of substrate is high. Inhibition at high substrate concentrations is usually caused by osmotic issues, viscosity, or inefficient oxygen transport. By slowly adding substrate into the medium, fed-batch bioreactor systems can help alleviate substrate inhibition. Substrate inhibition is also closely related to enzyme kinetics, which is commonly modeled by the Michaelis–Menten equation. If an enzyme that is part of a rate-limiting step of microbial growth is substrate-inhibited, then cell growth will be inhibited in the same manner. However, the mechanisms are often more complex, and parameters for a model equation need to be estimated from experimental data. Additionally, information on inhibitory effects caused by mixtures of compounds is limited, because most studies have been performed with single-substrate systems.
Types of Inhibition
Enzyme Kinetics Overview
One of the most well-known equations describing single-substrate enzyme kinetics is the Michaelis-Menten equation. This equation relates the initial rate of reaction to the concentration of substrate present, and deviations from the model can be used to describe competitive inhibition and non-competitive inhibition. The model takes the form of the following equation:
v = Vmax[S] / (Km + [S]) (Michaelis-Menten equation)
Where
Km is the Michaelis constant
v is the initial reaction rate
Vmax is the maximum reaction rate
If the inhibitor is different from the substrate, then competitive inhibition will increase Km while Vmax remains the same, and non-competitive will decrease Vmax while Km remains the same. However, under substrate inhibiting effects where two of the same substrate molecules bind to the active sites and inhibitory sites, the reaction rate will reach a peak value before decreasing. The reaction rate will either decrease to zero under complete inhibition, or it will decrease to a non-zero asymptote during partial inhibition. This can be described by the Haldane (or Andrew) equation, which is a common deviation of the Michaelis-Menten equation, and takes the following form:
v = Vmax[S] / (Km + [S] + [S]^2/KI) (Haldane equation for single-substrate inhibition of enzymatic reaction rate)
Where
KI is the inhibition constant
Cell Growth in Bioreactors
Bioreactor cell growth kinetics is analogous to the equations presented in enzyme kinetics. Under non-inhibiting single-substrate conditions, the specific growth rate of biomass can be modeled by the well-known Monod equation. The Monod equation models the growth of organisms during substrate-limiting conditions, and its parameters are determined through experimental observation. The Monod equation is based on a single substrate-consuming enzyme system that follows the Michaelis-Menten equation, and takes the following familiar form:
μ = μmax S / (Ks + S) (Monod equation)
Where:
Ks is the saturation constant
μ is the specific growth rate
μmax is the maximum specific growth rate
Under single-substrate inhibiting conditions, the Monod equation is no longer suitable, and the most common Monod derivative is once again in the form of the Haldane equation. As in enzyme kinetics, the growth rate initially increases as substrate is increased, before reaching a peak and decreasing at high substrate concentrations. Reasons for substrate inhibition in bioreactor cell growth include osmotic issues, viscosity, and inefficient oxygen transport due to overly concentrated substrate in the bioreactor medium. Substrates that are known to cause inhibition include glucose, NaCl, and phenols, among others. Substrate inhibition is also a concern in wastewater treatment, where among the most studied biodegradation substrates are the toxic phenols. Due to their toxicity, there is a large interest in the bioremediation of phenols, and it is well known that phenol inhibition can be modeled by the following Haldane equation:
μ = μmax S / (Ks + S + S^2/KI) (Haldane equation for single-substrate inhibition of cell growth)
Where:
KI is the inhibition constant
Several equations have been developed to describe substrate inhibition. The two equations listed below are referred to as the non-competitive and competitive substrate inhibition models, respectively, by Shuler and Kargi in Bioprocess Engineering: Basic Concepts. Note that the Haldane equation above is a special case of the following non-competitive substrate inhibition model, obtained when KI >> Ks.
μ = μmax / [(1 + Ks/S)(1 + S/KI)] (non-competitive single-substrate inhibition)
μ = μmax S / [Ks(1 + S/KI) + S] (competitive single-substrate inhibition)
These equations also have enzymatic counterparts, where they commonly describe the interactions between substrate and inhibitors at the active and inhibitory sites. The concept of competitive and non-competitive substrate inhibition is more clearly defined in enzyme kinetics, but these analogous equations also apply to cell growth models.
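To make the contrast concrete, the following sketch (illustrative only; the parameter values are assumptions, not from any cited study) compares the Monod and Haldane growth models. Setting dμ/dS = 0 in the Haldane form gives the optimal substrate concentration S* = sqrt(Ks·KI):

```python
import math

# Illustrative parameters (assumed values, not experimental data)
mu_max = 0.5    # 1/h, maximum specific growth rate
Ks = 0.2        # g/L, saturation constant
KI = 20.0       # g/L, inhibition constant

def monod(S):
    """Monod kinetics: growth rate rises monotonically and saturates."""
    return mu_max * S / (Ks + S)

def haldane(S):
    """Haldane kinetics: growth rate peaks, then falls (substrate inhibition)."""
    return mu_max * S / (Ks + S + S**2 / KI)

S_opt = math.sqrt(Ks * KI)   # optimum from d(mu)/dS = 0; here 2.0 g/L
for S in (0.5, S_opt, 10.0, 40.0):
    print(f"S = {S:5.1f} g/L   Monod mu = {monod(S):.3f}   Haldane mu = {haldane(S):.3f}")
# Beyond S*, the S^2/KI term dominates and the Haldane rate declines,
# while the Monod rate keeps approaching mu_max.
```

Operating at or below this optimum is precisely what the fed-batch feeding strategies described below try to achieve.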
Overcoming Substrate Inhibition in Bioreactors
Substrate inhibition can be characterized by a high substrate concentration and a decreased growth rate, resulting in decreased bioreactor outputs. The most common solution is to change the growth from a batch process to a fed-batch process. Other methods to overcome substrate inhibition include adding another substrate type to develop alternative metabolic pathways, immobilizing the cells, or increasing the biomass concentration.
Utilizing Fed-Batch
A fed-batch process is the most common way to decrease the effects of substrate inhibition. Fed-batch processes are characterized by the continual addition of bioreactor media (which includes the substrate) into the inoculum (cellular solution). The addition of media increases the overall volume within the reactor along with the substrate and other growth materials. The substrate/cell/product mixture can then be collected to retrieve the desired product. Fed-batch is a good way to overcome substrate inhibition because the rate of substrate addition can be changed at various points in the growth process. This allows the bioreactor operator to provide the cells with the amount of substrate they need, rather than providing too much or too little.
Other methods
Other methods to overcome substrate inhibition include the use of Two Phase Partitioning Bioreactors, the immobilization of cells, and increasing the biomass concentration in the bioreactor.
Two-phase partitioning bioreactors are able to reduce the aqueous-phase substrate concentration by storing substrate in an alternative phase, from which it can be re-released into the biomass phase according to metabolic demand. The cell immobilization method works by encapsulating the cells in a material that makes the removal of inhibitory compounds easier; the matrix surrounding the cells can act as a protective barrier against the inhibitory effects of toxic materials. The cell concentration can be increased by supporting the cellular material on a scaffold to create a biofilm. Biofilms allow extremely high cell concentrations while protecting cells from otherwise inhibitory substrate concentrations.
Impact On Product Production
The impact on product production depends on how the product is created. Substrate inhibition affects products produced by enzymatic reactions differently from growth-associated product formation. Substrate inhibition of enzymatic product production inhibits the enzyme's activity, which lowers the reaction rate and reduces the rate of product formation. However, if a product is produced by growing cells, then substrate inhibition reduces product formation by limiting the growth of the cells.
Growth Associated Products
There are multiple relationships that may exist between the rate of product formation, the specific rate of substrate consumption, and the specific growth rate. The following equations demonstrate the relationship between cell growth and product production for growth-associated production. The parameters qp and μ (the specific rate of product formation and the specific growth rate, respectively) are defined below.
qp = (1/X) dP/dt (specific rate of product formation)
μ = (1/X) dX/dt (specific growth rate)
Where X is the cell concentration, and P is the product concentration.
The product formation and cell growth are both directly linked to the amount of substrate consumed through the yield coefficients, YP/S and YX/S respectively. These coefficients can be combined to define a yield coefficient, YP/X = YP/S / YX/S, that relates the product production to cell growth.
This yield coefficient can be further used to directly relate the rate of change of product to the rate of change of cell growth:
dP/dt = YP/X dX/dt
Rearranging this equation gives the following relationship between the specific rate of product formation and the specific growth rate of the cells for growth-associated products:
qp = YP/X μ
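As a numeric illustration (the values here are hypothetical, not from the source): if YP/S = 0.3 g product per g substrate and YX/S = 0.5 g cells per g substrate, then

YP/X = 0.3 / 0.5 = 0.6 g product per g cells, so qp = 0.6 μ

and any reduction in μ caused by substrate inhibition reduces qp proportionally.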
The above relationships demonstrate that, for a growth-associated product, the specific growth rate is directly proportional to the specific rate of product formation. Furthermore, substrate inhibition limits the specific growth rate, which reduces the final biomass concentration. Increasing the substrate concentration may increase the viscosity of the media, lower the rate of oxygen diffusion, and affect the osmolarity of the system. These effects can be detrimental to cell growth and, by extension, the yield of product.
References
Bioreactors | Substrate inhibition in bioreactors | [
"Chemistry",
"Engineering",
"Biology"
] | 1,963 | [
"Bioreactors",
"Biological engineering",
"Chemical reactors",
"Biochemical engineering",
"Microbiology equipment"
] |
62,194,268 | https://en.wikipedia.org/wiki/Robert%20Kennedy%20%28chemist%29 | Robert Travis Kennedy is an American chemist specializing in bioanalytical chemistry, including liquid chromatography, capillary electrophoresis, and microfluidics. He is currently the Hobart H. Willard Distinguished University Professor of Chemistry and the chair of the department of chemistry at the University of Michigan. He holds joint appointments with the Department of Pharmacology and the Department of Macromolecular Science and Engineering. Kennedy is an Associate Editor of Analytical Chemistry and ACS Measurement Science Au.
Early life and education
Kennedy was born on November 11, 1962, in Sault Ste. Marie, Michigan. He earned a Bachelor of Science degree in chemistry at the University of Florida in 1984 and a Ph.D. from the University of North Carolina-Chapel Hill (UNC) in 1988 while working under James Jorgenson. He was an NSF post-doctoral fellow at UNC from 1989-1991 with R. Mark Wightman.
Academic career and research interests
Kennedy became a professor of chemistry at the University of Florida in 1991. After 11 years, he moved to the University of Michigan. Approximately 80 graduate students have completed their degrees under his supervision. Kennedy's research focuses on developing analytical instrumentation and methods that can help solve biological problems. He is considered a leader in the field of analytical chemistry, and an expert in endocrinology, neurochemistry, and high-throughput analysis. Major contributions to analytical chemistry include affinity probe capillary electrophoresis, in vivo neurochemical measurements, and ultra-high pressure liquid chromatography. He has been a Lilly Analytical Research Fellow, Alfred P. Sloan Fellow, NSF Presidential Faculty Fellow, and AAAS Fellow.
Honors and awards
EAS Award for Outstanding Achievements in the Fields of Analytical Chemistry (2023)
Martin Medal (2019)
ACS Award in Chromatography (2017)
CASSS Award for Outstanding Achievements in Separation Science (2017)
Ralph N. Adams Award in Bioanalytical Chemistry (2016)
Marcel Golay Award for Lifetime Achievement in Capillary Chromatography (2012)
Eastern Analytical Symposium Award for Separation Science (2012)
McKnight Award for Technical Innovations in Neuroscience (2010)
Rackham Distinguished Faculty Achievement Award (2009)
American Microchemical Society's Benedetti-Pichler Memorial Award (2001)
References
American chemists
University of Michigan faculty
Electrophoresis
Chromatography
1962 births
Living people
University of Florida alumni
University of North Carolina at Chapel Hill alumni
Fellows of the American Association for the Advancement of Science
Recipients of the Presidential Early Career Award for Scientists and Engineers
People from Sault Ste. Marie, Michigan
University of Florida faculty
Chemists from Michigan | Robert Kennedy (chemist) | [
"Chemistry",
"Biology"
] | 537 | [
"Chromatography",
"Separation processes",
"Instrumental analysis",
"Biochemical separation processes",
"Molecular biology techniques",
"Electrophoresis"
] |
62,198,576 | https://en.wikipedia.org/wiki/Karen%20Aplin | Karen Aplin is a British atmospheric and space physicist. She is currently a professor at the University of Bristol. Aplin has made significant contributions to interdisciplinary aspects of space and terrestrial science, in particular the importance of electrical effects on planetary atmospheres. She was awarded the 2021 James Dungey Lectureship of the Royal Astronomical Society.
Education and research career
After attending The High School, Gloucester, Aplin completed a BSc in Natural Sciences at Durham University in 1997. She was president of Durham University Orchestral Society and received the Norah C. Bowes bequest for the arts. She completed her PhD in experimental atmospheric physics in the Department of Meteorology at the University of Reading in 2000. She took up research posts at the University of Hertfordshire and the STFC Rutherford Appleton Laboratory, working on aspects of space and atmospheric instrumentation, before becoming head of the physics laboratories at Oxford University in 2009. In 2018 she moved to the University of Bristol.
Work on atmospheric electricity
Aplin's research has focussed on innovative instrumentation applied to problems in space and atmospheric science, in particular electrical effects and measurements. She currently maintains the Snowdon space-weather observatory. She has performed experimental work on the atmospheric effects of ions formed by cosmic rays, but has been keen to stress that the particles formed "are too small to act as cloud condensation nuclei", and thus there is unlikely to be a strong cosmic-ray link to global cloud cover.
Her work on atmospheric electricity also extends to the link between volcanoes, lightning and radon gas, and to other solar system bodies, in particular the ultraviolet and galactic cosmic ray effects on Neptune's atmosphere.
In a similarly interdisciplinary spirit, Aplin has researched the influence of the climate and weather on western orchestral composers.
Awards and recognition
2021: James Dungey Lectureship of the Royal Astronomical Society.
2019: Visiting professor at the University of Bath (previously visiting senior research fellow)
2015 – present: Editor of the Journal of Electrostatics
2009 – present: Editor for the open-access journal History of Geo- and Space Sciences
References
Living people
British space scientists
Atmospheric electricity
Academics of the University of Bristol
Alumni of the University of Reading
Year of birth missing (living people)
British women physicists
Alumni of Trevelyan College, Durham | Karen Aplin | [
"Physics"
] | 454 | [
"Physical phenomena",
"Electrical phenomena",
"Atmospheric electricity"
] |
51,758,505 | https://en.wikipedia.org/wiki/Q-system%20%28genetics%29 | The Q-system is a genetic tool for expressing transgenes in a living organism. Originally developed for use in the vinegar fly Drosophila melanogaster, the Q-system was rapidly adapted for use in cultured mammalian cells, zebrafish, worms and mosquitoes. The Q-system utilizes genes from the qa cluster of the bread fungus Neurospora crassa, and consists of four components: the transcriptional activator (QF/QF2/QF2w), the enhancer QUAS, the repressor QS, and the chemical de-repressor quinic acid. Like GAL4/UAS and LexA/LexAop, the Q-system is a binary expression system that allows reporters or effectors (e.g. fluorescent proteins, ion channels, toxins and other genes) to be expressed in a defined subpopulation of cells, in order to visualise these cells or alter their function. In addition, GAL4/UAS, LexA/LexAop and the Q-system function independently of each other and can be used simultaneously to achieve a desired pattern of reporter expression, or to express several reporters in different subsets of cells.
Origin
The Q-system is based on two of the seven genes of the qa gene cluster of the bread fungus Neurospora crassa. The genes of the qa cluster are responsible for the catabolism of quinic acid, which is used by the fungus as a carbon source in conditions of low glucose. The cluster contains a transcriptional activator, qa-1F, a transcriptional repressor, qa-1S, and five structural genes. qa-1F binds to a specific DNA sequence found upstream of the qa genes. The presence of quinic acid disrupts the interaction between qa-1F and qa-1S, thus de-repressing the transcriptional activity of qa-1F.
Genes qa-1F, qa-1S and the DNA binding sequence of qa-1F form the basis of the Q-system. The genes were renamed to simplify their use as follows: transcriptional activator qa-1F as QF, repressor qa-1S as QS, and the DNA binding sequence as QUAS. The quinic acid represents the fourth component of the Q-system.
The original transactivator QF appeared to be toxic when expressed broadly in Drosophila. To overcome this problem, two new transactivators were developed: QF2 and QF2w.
Use in Drosophila
Basic use
The Q-system functions similarly to, and independently of, the GAL4/UAS and the LexA/LexAop systems. QF, QF2 and QF2w are analogous to GAL4 and LexA, and their expression is usually under the control of cell-type specific promoter, such as nsyb (to target neurons) or tubulin (to target all cells). QUAS is analogous to UAS and LexAop, and is placed upstream of an effector gene, such as GFP. QS is analogous to GAL80, and may be driven by any promoter (e.g. tubulin-QS). Quinic acid is a unique feature of the Q-system, and it must be fed to the flies or maggots in order to alleviate the QS-induced repression. In some ways, quinic acid is analogous to temperature in the case of GAL80ts.
In its basic form, two transgenic fly lines, one containing a QF transgene and the other containing a QUAS transgene, are crossed together. Progeny that carry both a QF transgene and a QUAS transgene will express a reporter gene in a subset of cells (e.g. nsyb-QF2, QUAS-GFP flies express GFP in all neurons). If a fly also expresses QS in some of the cells, the activity of QF will be repressed in these cells, but it may be restored if the fly is fed quinic acid (e.g. a nsyb-QF2, QUAS-GFP, tub-QS fly expresses no GFP when its diet doesn't contain quinic acid, and expresses GFP in its neurons when fed quinic acid). The use of the QS repressor and quinic acid allows fine-tuning of the temporal control of transgene expression.
Chimeric transactivators
Chimeric transactivators GAL4QF and LexAQF allow the three binary expression systems to be used in combination. GAL4QF binds to UAS, and may be repressed by QS while being unaffected by GAL80. Similarly, LexAQF binds to LexAop, and may be repressed by QS. LexAQF represents a useful extension of the LexA/LexAop system, which doesn't have its own repressor.
Intersectional expression
A variety of expression patterns may be achieved by combining the three binary expression systems with FLP/FRT or other recombinases. Expression patterns may be constructed as AND, OR, NOR and other logic gates, e.g. to narrow down the expression patterns of available GAL4 lines. The resulting expression pattern depends in part on the developmental timing of activation of the transcription factors.
Use in other organisms
The Q-system has worked successfully in a variety of organisms. It has been used to drive expression of luciferase, as a proof of principle, in cultured mammalian cells. In zebrafish the Q-system has been successfully used with several tissue-specific promoters, and was shown to work independently of the GAL4/UAS system when expressed in the same cell. In C. elegans the Q-system has been shown to work in muscles and in neuronal tissue. In 2016, the Q-system was used to target, for the first time, the olfactory neurons of the malaria mosquito Anopheles gambiae. In 2019, the Q-system in Anopheles mosquitoes was used to examine the functional responses of olfactory neurons to odors. Also in 2019, the Q-system was introduced into the Aedes aegypti mosquito to capture tissue-specific expression patterns. These successes make the Q-system a system of choice when developing genetic tools for other organisms. Currently the main shortcoming of the Q-system is the low number of available transgenic lines, but this will be overcome as the scientific community creates and shares these resources, for example through the GAL4>QF2 HACK system, which converts existing GAL4 transgenic insertions to QF2. The DNA-binding domain of QF2 fused with the VP16 transcriptional activator domain has been successfully applied in Penicillium to gain scalable control over the penicillin-producing secondary metabolite gene cluster.
References
Genetics | Q-system (genetics) | [
"Biology"
] | 1,471 | [
"Genetics"
] |
51,758,582 | https://en.wikipedia.org/wiki/Cell%20division%20orientation | Cell division orientation is the direction along which new daughter cells are formed. Cell division orientation is important for morphogenesis, cell fate and tissue homeostasis. Abnormalities in cell division orientation lead to malformations during development and to cancerous tissues. Factors that influence cell division orientation include cell shape, anisotropic localization of specific proteins and mechanical tension.
Implication for morphogenesis
Cell division orientation is one of the mechanisms that shapes tissue during development and morphogenesis. Along with cell shape changes, cell rearrangements, apoptosis and growth, oriented cell division modifies the geometry and topology of living tissue in order to create new organs and shape the organism. Reproducible patterns of oriented cell divisions have been described during morphogenesis of Drosophila embryos, Arabidopsis thaliana embryos, Drosophila pupae, zebrafish embryos and mouse early embryos. Oriented cell divisions contribute to tissue elongation and to the release of mechanical stress. In the first case, oriented cell division acts as an active contributor to morphogenesis, while the latter is a passive response to external mechanical tension.
Implication for tissue homeostasis
In several tissues, such as columnar epithelia, cells divide along the plane of the epithelium. Such divisions insert newly formed cells into the epithelial layer. Dysregulation of the orientation of cell divisions results in cells being formed outside the epithelium, and is observed at the initial stages of cancer.
Regulation
More than a century ago, Oskar Hertwig proposed (1884) that cell division orientation is determined by the shape of the cell, a principle known as Hertwig's rule. In epithelia, a cell 'reads' its shape through specific cell junctions called tricellular junctions (TCJs). TCJs provide mechanical and geometric cues to the spindle apparatus to ensure that the cell divides along its long axis. Several factors can regulate cell shape and therefore the orientation of cell division. Among these factors is anisotropic mechanical stress. This stress can be the result of external mechanical deformation, or can be generated intracellularly by the non-isotropic localization of specific proteins.
References
Cell biology
Cell anatomy
Cell cycle
Cellular processes
Developmental biology
Morphology (biology) | Cell division orientation | [
"Biology"
] | 473 | [
"Behavior",
"Developmental biology",
"Cell biology",
"Morphology (biology)",
"Reproduction",
"Cellular processes",
"Cell cycle"
] |
51,766,099 | https://en.wikipedia.org/wiki/Steve%20Granick | Steve Granick is an American scientist and educator. In 2023 he joined the University of Massachusetts-Amherst as the Robert Barrett Endowed Chair of Polymer Science and Engineering, with joint appointment in the Chemistry, Physics, and Chemical Engineering Departments after serving as director of the Institute for Basic Science Center for Soft and Living Matter, an interdisciplinary blue-sky research center in Ulsan, South Korea that pursues basic science research. Until 2015 he was professor at the University of Illinois at Urbana-Champaign. He is a member of the American Academy of Arts and Sciences and the U.S. National Academy of Sciences.
Education
Granick obtained his B.A. in sociology from Princeton University in 1978 by correspondence, after initially dropping out during his junior year. He earned his Ph.D. in chemistry from the University of Wisconsin in 1982 with John D. Ferry. He did postdoctoral work at the University of Minnesota with M. V. Tirrell and at the Collège de France with Nobel laureate Pierre-Gilles de Gennes.
Academic career
Granick joined the faculty of the University of Illinois at Urbana-Champaign in 1985 and rose through the ranks to become Racheff Chair Professor of Materials Science and Engineering and concurrently professor of physics and biophysics, professor of chemistry, and professor of chemical and biomolecular engineering. In 2014, after thirty years at the University of Illinois, he moved to South Korea to join the Institute for Basic Science (IBS), founding the Center for Soft and Living Matter with additional appointments as professor of chemistry and physics at UNIST. In 2023 he joined the University of Massachusetts-Amherst as the Robert Barrett Endowed Chair of Polymer Science and Engineering, with joint appointment in the chemistry, physics, and chemical engineering departments.
Research and achievements
Granick is the author of more than 300 scientific articles and has made fundamental contributions to the chemistry and physics of soft materials. By early 2023, his publications had received over 30,000 citations with h-index of 93.
His research interests range from the study of active matter to the chemistry and physics of visualized macromolecules, vesicles, and supracolloidal materials. The early work in Granick's career focused on confined liquids. Granick was a pioneer in the field of nanorheology and molecular tribology. Other early work concerned molecular mobility at polymer surfaces. This progressed to later studies showing how biological membranes interact with their environments.
More recently, Granick and his research team work across disciplines to explore imaging, assembly, behavior and interactions of molecules, colloidal particles, and their assemblies. He made the first measurements of polymer surface diffusion in the key limit of dilute concentration and he identified the important class of physical problems where diffusion is anomalous yet Brownian. His laboratory became interested in many instances of molecular mobility measured at the single-molecule level, including active matter and transport in living cells.
The other principal current area of Granick's research concerns Janus colloidal particles and their self-assembly, both at rest and when driven out of equilibrium. The scientific importance is to understand natural selection in the colloid world.
Public service and international experience
Steve Granick served as Chair of the Department of Energy (DOE) Council on Materials Panel on Polymers at Interfaces and Chair of the Division of Polymer Physics of the American Physical Society (APS). He holds or has held honorary or visiting positions at numerous international universities.
Honors and awards
Granick was elected Member of the U.S. National Academy of Sciences in 2015, and Member of the American Academy of Arts and Sciences in 2016. He is a Fellow of the American Physical Society. He is the recipient of numerous international awards, including the APS (American Physical Society) national Prize for Polymer Physics, the ACS (American Chemical Society) national Prize for Surface and Colloid Science, and the Paris-Sciences Medal.
References
External links
Steve Granick’s group at the University of Illinois
IBS Center for Soft and Living Matter in South Korea
Living people
1953 births
21st-century American physicists
20th-century American physicists
21st-century American chemists
Princeton University alumni
University of Wisconsin–Madison College of Letters and Science alumni
University of Illinois Urbana-Champaign faculty
Members of the United States National Academy of Sciences
American expatriates in South Korea
Institute for Basic Science
Tribologists
Fellows of the American Physical Society
University of Massachusetts Amherst faculty | Steve Granick | [
"Materials_science"
] | 889 | [
"Tribology",
"Tribologists"
] |
64,463,469 | https://en.wikipedia.org/wiki/Basic%20Number%20Theory | Basic Number Theory is an influential book by André Weil, an exposition of algebraic number theory and class field theory with particular emphasis on valuation-theoretic methods. Based in part on a course taught at Princeton University in 1961–62, it appeared as Volume 144 in Springer's Grundlehren der mathematischen Wissenschaften series. The approach handles all 'A-fields' or global fields, meaning finite algebraic extensions of the field of rational numbers and of the field of rational functions of one variable with a finite field of constants. The theory is developed in a uniform way, starting with topological fields, properties of Haar measure on locally compact fields, the main theorems of adelic and idelic number theory, and class field theory via the theory of simple algebras over local and global fields. The word 'basic' in the title is closer in meaning to 'foundational' than to 'elementary', and is perhaps best interpreted as meaning that the material developed is foundational for the development of the theories of automorphic forms, representation theory of algebraic groups, and more advanced topics in algebraic number theory. The style is austere, with a narrow concentration on a logically coherent development of the theory required, and essentially no examples.
Mathematical context and purpose
In the foreword, the author explains that instead of the “futile and impossible task” of improving on Hecke's classical treatment of algebraic number theory, he “rather tried to draw the conclusions from the developments of the last thirty years, whereby locally compact groups, measure and integration have been seen to play an increasingly important role in classical number theory”. Weil goes on to explain a viewpoint that grew from work of Hensel, Hasse, Chevalley, Artin, Iwasawa, Tate, and Tamagawa in which the real numbers may be seen as but one of infinitely many different completions of the rationals, with no logical reason to favour it over the various p-adic completions. In this setting, the adeles (or valuation vectors) give a natural locally compact ring in which all the valuations are brought together in a single coherent way in which they “cooperate for a common purpose”. Removing the real numbers from a pedestal and placing them alongside the p-adic numbers leads naturally – “it goes without saying” to the development of the theory of function fields over finite fields in a “fully simultaneous treatment with number-fields”. In a striking choice of wording for a foreword written in the United States in 1967, the author chooses to drive this particular viewpoint home by explaining that the two classes of global fields “must be granted a fully simultaneous treatment […] instead of the segregated status, and at best the separate but equal facilities, which hitherto have been their lot. That, far from losing by such treatment, both races stand to gain by it, is one fact which will, I hope, clearly emerge from this book.”
After World War II, a series of developments in class field theory diminished the significance of the cyclic algebras (and, more generally, the crossed product algebras) which are defined in terms of the number field in proofs of class field theory. Instead cohomological formalism became a more significant part of local and global class field theory, particularly in work of Hochschild and Nakayama, Weil, Artin, and Tate during the period 1950–1952.
Alongside the desire to consider algebraic number fields alongside function fields over finite fields, the work of Chevalley is particularly emphasised. In order to derive the theorems of global class field theory from those of local class field theory, Chevalley introduced what he called the élément idéal, later called idèle, at Hasse's suggestion. The idèle group of a number field was first introduced by Chevalley in order to describe global class field theory for infinite extensions, but several years later he used it in a new way to derive global class field theory from local class field theory. Weil mentioned this (unpublished) work as a significant influence on some of the choices of treatment he uses.
Reception
The 1st edition was reviewed by George Whaples for Mathematical Reviews and Helmut Koch for Zentralblatt. Later editions were reviewed by Fernando Q. Gouvêa for the Mathematical Association of America and by Koch for Zentralblatt; in his review of the second edition Koch makes the remark "Shafarevich showed me the first edition in autumn 1967 in Moscow and said that this book will be from now on the book about class field theory". The coherence of the treatment, and some of its distinctive features, were highlighted by several reviewers, with Koch going on to say "This book is written in the spirit of the early forties and just this makes it a valuable source of information for everyone who is working about problems related to number and function fields."
Contents
Roughly speaking, the first half of the book is modern in its consistent use of adelic and idèlic methods and the simultaneous treatment of algebraic number fields and rational function fields over finite fields. The second half is arguably pre-modern in its development of simple algebras and class field theory without the language of cohomology, and without the language of Galois cohomology in particular. The author acknowledges this as a trade-off, explaining that “to develop such an approach systematically would have meant loading a great deal of unnecessary machinery on a ship which seemed well equipped for this particular voyage; instead of making it more seaworthy, it might have sunk it.” The treatment of class field theory uses analytic methods on both commutative fields and simple algebras. These methods show their power in giving the first unified proof that if K/k is a finite normal extension of A-fields, then any automorphism of K over k is induced by the Frobenius automorphism for infinitely many places of K. This approach also allows for a significantly simpler and more logical proof of algebraic statements, for example the result that a simple algebra over an A-field splits (globally) if and only if it splits everywhere locally. The systematic use of simple algebras also simplifies the treatment of local class field theory. For instance, it is more straightforward to prove that a simple algebra over a local field has an unramified splitting field than to prove the corresponding statement for 2-cohomology classes.
Chapter I
The book begins with Witt's formulation of Wedderburn's proof that a finite division ring is commutative ('Wedderburn's little theorem'). Properties of Haar measure are used to prove that 'local fields' (commutative fields locally compact under a non-discrete topology) are completions of A-fields. In particular – a concept developed later – they are precisely the fields whose local class field theory is needed for the global theory. The non-discrete non-commutative locally compact fields are then division algebras of finite dimension over a local field.
Chapter II
Finite-dimensional vector spaces over local fields and division algebras under the topology uniquely determined by the field's topology are studied, and lattices are defined topologically; an analogue of Minkowski's theorem is proved in this context, and the main theorems about character groups of these vector spaces, which in the commutative one-dimensional case reduce to 'self-duality' for local fields, are shown.
Chapter III
Tensor products are used to study extensions of the places of an A-field to places of a finite separable extension of the field, with the more complicated inseparable case postponed to later.
Chapter IV
This chapter introduces the topological adele ring and idèle group of an A-field, and proves the 'main theorems' as follows:
both the adele ring and the idèle group are locally compact;
the A-field, when embedded diagonally, is a discrete and co-compact subring of its adele ring;
the adele ring is self dual, meaning that it is topologically isomorphic to its Pontryagin dual, with similar properties for finite-dimensional vector spaces and algebras over local fields.
The chapter ends with a generalized unit theorem for A-fields, describing the units in valuation terms.
Chapter V
This chapter departs slightly from the simultaneous treatment of number fields and function fields. In the number field setting, lattices (that is, fractional ideals) are defined, and the Haar measure volume of a fundamental domain for a lattice is found. This is used to study the discriminant of an extension.
Chapter VI
This chapter is focused on the function field case; the Riemann-Roch theorem is stated and proved in measure-theoretic language, with the canonical class defined as the class of divisors of non-trivial characters of the adele ring which are trivial on the embedded field.
Chapter VII
The zeta and L-functions (and similar analytic objects) for an A-field are expressed in terms of integrals over the idèle group. Decomposing these integrals into products over all valuations and using Fourier transforms gives rise to meromorphic continuations and functional equations. This gives, for example, analytic continuation of the Dedekind zeta-function to the whole plane, along with its functional equation. The treatment here goes back ultimately to a suggestion of Artin, and was developed in Tate's thesis.
Chapter VIII
Formulas for local and global different and discriminants, ramification theory, and the formula for the genus of an algebraic extension of a function field are developed.
Chapter IX
A brief treatment of simple algebras is given, including explicit rules for cyclic factor sets.
Chapters X and XI
The zeta-function of a simple algebra over an A-field is defined, and used to prove further results on the norm group and groupoid of maximal ideals in a simple algebra over an A-field.
Chapter XII
The reciprocity law of local class field theory over a local field in the context of a pairing of the multiplicative group of a field and the character group of the absolute Galois group of the algebraic closure of the field is proved. Ramification theory for abelian extensions is developed.
Chapter XIII
The global class field theory for A-fields is developed using the pairings of Chapter XII, replacing multiplicative groups of local fields with idèle class groups of A-fields. The pairing is constructed as a product over places of local Hasse invariants.
Third edition
Some references are added, some minor corrections made, some comments added, and five appendices are included, containing the following material:
A character version of the (local) transfer theorem and its extension to the global transfer theorem.
Šafarevič's theorem on the structure of Galois groups of local fields using the theory of Weil groups.
Theorems of Tate and Sen on the Herbrand distribution.
Examples of L-functions with Grössencharacter.
Editions
References
Mathematics books
Algebraic number theory | Basic Number Theory | [
"Mathematics"
] | 2,241 | [
"Algebraic number theory",
"Number theory"
] |
64,474,040 | https://en.wikipedia.org/wiki/Hiroshi%20Fujita | Hiroshi Fujita (born 7 December 1928 in Osaka) is a retired Japanese mathematician who worked in partial differential equations. He obtained his Ph.D. at the University of Tokyo, under the supervision of Tosio Kato.
Mathematical contributions
His most widely cited paper, published in 1966, studied the partial differential equation

∂u/∂t = Δu + u^(1+α)

and showed that α0 = 2/n (where n is the number of space variables) is a threshold value: α > 2/n implies the existence of nonconstant solutions which exist for all positive t and all real values of the space variables. By contrast, if α is between 0 and 2/n, then such solutions cannot exist. This paper initiated the study of similar and analogous phenomena for various parabolic and hyperbolic partial differential equations. The impact of Fujita's paper is described by the well-known survey articles of Levine (1990) and Deng & Levine (2000).
In collaboration with Kato, Fujita applied the semigroup approach in evolutionary partial differential equations to the Navier–Stokes equations of fluid mechanics. They found the existence of unique locally defined strong solutions under certain fractional derivative-based assumptions on the initial velocity. Their approach has been adopted by other influential works, such as Giga & Miyakawa (1985), to allow for different assumptions on the initial velocity. The full understanding of the smoothness and maximal extension of such solutions is currently considered as a major problem of partial differential equations and mathematical physics.
Selected publications
Tosio Kato and Hiroshi Fujita. On the nonstationary Navier-Stokes system. Rend. Sem. Mat. Univ. Padova 32 (1962), 243–260.
Hiroshi Fujita and Tosio Kato. On the Navier-Stokes initial value problem. I. Arch. Rational Mech. Anal. 16 (1964), 269–315.
Hiroshi Fujita. On the blowing up of solutions of the Cauchy problem for u_t = Δu + u^(1+α). J. Fac. Sci. Univ. Tokyo Sect. I 13 (1966), 109–124.
Mathematical theory of sedimentation analysis (book)
Functional-Analytic Methods for Partial Differential Equations (1990, Springer), Proceedings of a Conference and a Symposium held in Tokyo, Japan, July 3–9, 1989. Edited by Hiroshi Fujita, Teruo Ikebe and Shige T. Kuroda.
Proceedings of the Ninth International Congress on Mathematical Education, Edited by Hiroshi Fujita et al.
References
Japanese mathematicians
1928 births
People from Osaka
Mathematical analysts
University of Tokyo alumni
Living people | Hiroshi Fujita | [
"Mathematics"
] | 486 | [
"Mathematical analysis",
"Mathematical analysts"
] |
54,594,383 | https://en.wikipedia.org/wiki/Perfect%20obstruction%20theory | In algebraic geometry, given a Deligne–Mumford stack X, a perfect obstruction theory for X consists of:
a perfect two-term complex E = [E^(-1) → E^0], concentrated in degrees -1 and 0, in the derived category of quasi-coherent étale sheaves on X, and
a morphism φ: E → L_X, where L_X is the cotangent complex of X, that induces an isomorphism on h^0 and an epimorphism on h^(-1).
The notion was introduced by Kai Behrend and Barbara Fantechi for an application to the intersection theory on moduli stacks; in particular, to define a virtual fundamental class.
Examples
Schemes
Consider a regular embedding i: Z → W fitting into a cartesian square

X → V
↓    ↓
Z → W

where V and W are smooth, and write g: X → Z for the induced map. Then, the complex

E = [g*(N_{Z/W})^∨ → Ω_V|_X]
(in degrees -1, 0)

forms a perfect obstruction theory for X. The map to the cotangent complex comes from the composition

g*(N_{Z/W})^∨ → I/I^2 → Ω_V|_X

where I is the ideal sheaf of X in V. This is a perfect obstruction theory because the complex comes equipped with a map to the truncated cotangent complex [I/I^2 → Ω_V|_X], coming from the maps g*(N_{Z/W})^∨ → I/I^2 and the identity on Ω_V|_X. Note that the associated virtual fundamental class is

[X]^vir = i^![V]
Example 1
Consider a smooth projective variety X. If we set E = L_X, then the perfect obstruction theory is

[0 → Ω_X]
(in degrees -1, 0)

and the associated virtual fundamental class is the ordinary fundamental class

[X]^vir = [X]

In particular, if X is a smooth local complete intersection then the perfect obstruction theory is the cotangent complex (which is the same as the truncated cotangent complex).
Deligne–Mumford stacks
The previous construction also works with Deligne–Mumford stacks.
Symmetric obstruction theory
By definition, a symmetric obstruction theory is a perfect obstruction theory together with a nondegenerate symmetric bilinear form.
Example: Let f be a regular function on a smooth variety (or stack). Then the set of critical points of f carries a symmetric obstruction theory in a canonical way.
Example: Let M be a complex symplectic manifold. Then the (scheme-theoretic) intersection of Lagrangian submanifolds of M carries a canonical symmetric obstruction theory.
Notes
References
See also
Behrend function
Gromov–Witten invariant
Differential topology
Symplectic geometry
Hamiltonian mechanics
Smooth manifolds | Perfect obstruction theory | [
"Physics",
"Mathematics"
] | 385 | [
"Theoretical physics",
"Classical mechanics",
"Hamiltonian mechanics",
"Topology",
"Differential topology",
"Dynamical systems"
] |
57,880,587 | https://en.wikipedia.org/wiki/List%20of%20sequenced%20algae%20genomes | This list of sequenced algal genomes contains algal species known to have publicly available complete genome sequences that have been assembled, annotated and published. Unassembled genomes are not included, nor are organelle-only sequences. For plant genomes see the list of sequenced plant genomes. For plastid sequences, see the list of sequenced plastomes. For all kingdoms, see the list of sequenced genomes.
Dinoflagellates (Alveolata)
See also List of sequenced protist genomes.
Cryptomonad
Glaucophyte
Green algae
Haptophyte
Heterokonts/Stramenopiles
Red algae (Rhodophyte)
Rhizaria
References
Plant | List of sequenced algae genomes | [
"Engineering",
"Biology"
] | 157 | [
"Lists of sequenced genomes",
"DNA sequencing",
"Genetic engineering",
"Genome projects"
] |
57,881,967 | https://en.wikipedia.org/wiki/Trinitrogen | Trinitrogen, also known as the azide radical, is an unstable molecule composed of three nitrogen atoms (N3). Two arrangements are known: a linear form with double bonds and charge transfer, and a cyclic form. Both forms are highly unstable, though the linear form is the more stable of the two. More-stable derivatives exist, such as when it acts as a ligand, and it may participate in azido nitration, a reaction between sodium azide and ammonium cerium nitrate.
The linear form of N3 was discovered in 1956 by B. A. Thrush by photolysis of hydrogen azide. As a linear and symmetric molecule, it has D∞h symmetry, with a nitrogen–nitrogen bond length averaging 1.18115 Å. The first excited electronic state, A²Σᵤ, is 4.56 eV above the ground state.
The cyclic form was identified in 2003 by N. Hansen and A. M. Wodtke using ultraviolet photolysis of chlorine azide. Although the reaction yielded mostly the linear form, about 20% of the molecules were cyclic. The ring has C2v symmetry—an isosceles triangle—in contrast to the linear form, which has equal N–N bond lengths.
References
External links
Homonuclear triatomic molecules
Nitrogen compounds
Allotropes of nitrogen | Trinitrogen | [
"Chemistry"
] | 271 | [
"Allotropes of nitrogen",
"Allotropes"
] |
67,444,568 | https://en.wikipedia.org/wiki/Diatoma | Diatoma is a genus of diatoms belonging to the family Fragilariaceae.
The genus has cosmopolitan distribution.
Species:
Diatoma angusticostata
Diatoma arcuatum
Diatoma auritum
Diatoma elongata
References
Diatoms
Diatom genera | Diatoma | [
"Biology"
] | 61 | [
"Diatoms",
"Algae"
] |
67,447,218 | https://en.wikipedia.org/wiki/Huff%20model | In spatial analysis, the Huff model is a widely used tool for predicting the probability of a consumer visiting a site, as a function of the distance of the site, its attractiveness, and the relative attractiveness of alternatives. It was formulated by David Huff in 1963. It is used in marketing, economics, retail research and urban planning, and is implemented in several commercially available GIS systems.
Its relative ease of use and applicability to a wide range of problems contribute to its enduring appeal.
The formula is given as:

P_ij = (A_j^α · D_ij^(−β)) / Σ_k (A_k^α · D_ik^(−β)), with the sum running over all stores k = 1, …, n

where:
P_ij is the probability of the consumer at location i travelling to store j
A_j is a measure of the attractiveness of store j
D_ij is the distance from the consumer's location, i, to store j
α is an attractiveness parameter
β is a distance decay parameter
n is the total number of stores, including store j
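To make the computation concrete, here is a minimal sketch in Python (the function name and example numbers are illustrative, not taken from any GIS implementation):

```python
def huff_probabilities(attractiveness, distances, alpha=1.0, beta=2.0):
    """Huff-model probabilities P_ij for a single consumer location i.

    attractiveness[j] is A_j (e.g., store floor area), distances[j] is D_ij,
    alpha is the attractiveness exponent and beta the distance decay exponent.
    """
    weights = [(a ** alpha) * (d ** -beta) for a, d in zip(attractiveness, distances)]
    total = sum(weights)
    return [w / total for w in weights]

# Two candidate stores: the larger, nearer store captures about 89% of trips.
print(huff_probabilities([2000.0, 1000.0], [1.0, 2.0]))  # [0.888..., 0.111...]
```

Increasing beta makes patronage fall off more steeply with distance, which is the usual way the model is calibrated against observed shopping trips.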
References
Retail analytics
Economics models
Spatial analysis | Huff model | [
"Physics"
] | 163 | [
"Spacetime",
"Space",
"Spatial analysis"
] |
71,775,593 | https://en.wikipedia.org/wiki/Resilience%20engineering | Resilience engineering is a subfield of safety science research that focuses on understanding how complex adaptive systems cope when encountering a surprise. The term resilience in this context refers to the capabilities that a system must possess in order to deal effectively with unanticipated events. Resilience engineering examines how systems build, sustain, degrade, and lose these capabilities.
Resilience engineering researchers have studied multiple safety-critical domains, including aviation, anesthesia, fire safety, space mission control, military operations, power plants, air traffic control, rail engineering, health care, and emergency response to both natural and industrial disasters. Resilience engineering researchers have also studied the non-safety-critical domain of software operations.
Whereas other approaches to safety (e.g., behavior-based safety, probabilistic risk assessment) focus on designing controls to prevent or mitigate specific known hazards (e.g., hazard analysis), or on assuring that a particular system is safe (e.g., safety cases), resilience engineering looks at a more general capability of systems to deal with hazards that were not known until they were encountered.
In particular, resilience engineering researchers study how people are able to cope effectively with complexity to ensure safe system operation, especially when they are experiencing time pressure. Under the resilience engineering paradigm, accidents are not attributable to human error. Instead, the assumption is that humans working in a system are always faced with goal conflicts and limited resources, requiring them to constantly make trade-offs while under time pressure. When failures happen, they are understood as being due to the system temporarily being unable to cope with complexity. Hence, resilience engineering is related to other perspectives in safety that have reassessed the nature of human error, such as the "new look", the "new view", "safety differently", and Safety-II.
Resilience engineering researchers ask questions such as:
What can organizations do in order to be better prepared to handle unforeseeable challenges?
How do organizations adapt their structure and behavior to cope effectively when faced with an unforeseen challenge?
Because incidents often involve unforeseen challenges, resilience engineering researchers often use incident analysis as a research method.
Resilience engineering symposia
The first symposium on resilience engineering was held in October 2004 in Söderköping, Sweden. It brought together fourteen safety science researchers with an interest in complex systems.
A second symposium on resilience engineering was held in November 2006 in Sophia Antipolis, France. The symposium had eighty participants. The Resilience Engineering Association, an association of researchers and practitioners with an interest in resilience engineering, continues to hold biennial symposia.
These symposia led to a series of books being published (see Books section below).
Themes
This section discusses aspects of the resilience engineering perspective that are different from traditional approaches to safety.
Normal work leads to both success and failure
The resilience engineering perspective assumes that the nature of work which people do within a system that contributes to an accident is fundamentally the same as the work that people do that contributes to successful outcomes. As a consequence, if work practices are only examined after an accident and are only interpreted in the context of the accident, the result of this analysis is subject to selection bias.
Fundamental surprise
The resilience engineering perspective posits that a significant number of failure modes are literally inconceivable before they happen, because the environments that systems operate in are very dynamic and the perspectives of the people within the system are always inherently limited. These sorts of events are sometimes referred to as fundamental surprise. Contrast this with the approach of probabilistic risk assessment, which focuses on evaluating conceivable risks.
Human performance variability as an asset
The resilience engineering perspective holds that human performance variability has positive effects as well as negative ones, and that safety is increased by amplifying the positive effects of human variability as well as adding controls to mitigate the negative effects. For example, the ability of humans to adapt their behavior based on novel circumstances is a positive effect that creates safety. As a consequence, adding controls to mitigate the effects of human variability can reduce safety in certain circumstances.
The centrality of expertise and experience
Expert operators are an important source of resilience inside of systems. These operators become experts through previous experience at dealing with failures.
Risk is unavoidable
Under the resilience engineering perspective, operators are always required to trade off risks. As a consequence, in order to create safety, it is sometimes necessary for a system to take on some risk.
Bringing existing resilience to bear vs generating new resilience
The researcher Richard Cook distinguishes two separate kinds of work that tend to be conflated under the heading resilience engineering:
Bringing existing resilience to bear
The first type of resilience engineering work is determining how best to take advantage of the resilience that is already present in the system. Cook uses the example of setting a broken bone as this type of work: the resilience is already present in the physiology of bone, and setting the bone uses this resilience to achieve better healing outcomes.
Cook notes that this first type of resilience work does not require a deep understanding of the underlying mechanisms of resilience: humans have been setting bones long before the mechanism by which bone heals was understood.
Generating new resilience
The second type of resilience engineering work involves altering mechanisms in the system in order to increase the amount of the resilience. Cook uses the example of new drugs such as Abaloparatide and Teriparatide, which mimic Parathyroid hormone-related protein and are used to treat osteoporosis.
Cook notes that this second type of resilience work requires a much deeper understanding of the underlying existing resilience mechanisms in order to create interventions that can effectively increase resilience.
Hollnagel perspective
The safety researcher Erik Hollnagel views resilient performance as requiring four systemic potentials:
The potential to respond
The potential to monitor
The potential to learn
The potential to anticipate.
This has been described in a Eurocontrol white paper on Systemic Potentials Management: https://skybrary.aero/bookshelf/systemic-potentials-management-building-basis-resilient-performance
Woods perspective
The safety researcher David Woods considers the following two concepts in his definition of resilience:
graceful extensibility: the ability of a system to develop new capabilities when faced with a surprise that cannot be dealt with effectively with a system's existing capabilities
sustained adaptability: the ability of a system to continue to keep adapting to surprises, over long periods of time
These two concepts are elaborated in Woods's theory of graceful extensibility.
Woods contrasts resilience with robustness, which is the ability of a system to deal effectively with potential challenges that were anticipated in advance.
The safety researcher Richard Cook argued that bone should serve as the archetype for understanding what resilience is in the Woods perspective. Cook notes that bone has both graceful extensibility (has a soft boundary at which it can extend function) and sustained adaptability (bone is constantly adapting through a dynamic balance between creation and destruction that is directed by mechanical strain).
In Woods's view, there are three common patterns to the failure of complex adaptive systems:
decompensation: exhaustion of capacity when encountering a disturbance
working at cross purposes: when individual agents in a system behave in a way that achieves local goals but goes against global goals
getting stuck in outdated behaviors: relying on strategies that were previously adaptive but are no longer so due to changes in the environment
Resilient health care
In 2012, growing interest in resilience engineering gave rise to the sub-field of resilient health care. This led to a series of annual conferences on the topic that are still ongoing, as well as a series of books on resilient health care, and in 2022 to the establishment of the Resilient Health Care Society (registered in Sweden) (https://rhcs.se/).
Books
Resilience Engineering: Concepts and Precepts by David Woods, Erik Hollnagel, and Nancy Leveson, 2006.
Resilience Engineering in Practice: A Guidebook by Jean Pariès, John Wreathall, and Erik Hollnagel, 2013.
Resilient Health Care, Volume 1: Erik Hollnagel, Jeffrey Braithwaite, and Robert L. Wears (eds), 2015.
Resilient Health Care, Volume 2: The Resilience of Everyday Clinical Work by Erik Hollnagel, Jeffrey Braithwaite, Robert Wears (eds), 2015.
Resilient Health Care, Volume 3: Reconciling Work-as-Imagined and Work-as-Done by Jeffrey Braithwaite, Robert Wears, and Erik Hollnagel (eds), 2016.
Resilience Engineering Perspectives, Volume 1: Remaining Sensitive to the Possibility of Failure by Erik Hollnagel, Christopher Nemeth, and Sidney Dekker (eds.), 2016.
Resilience Engineering Perspectives, Volume 2: Remaining Sensitive to the Possibility of Failure by Christopher Nemeth, Erik Hollnagel, and Sidney Dekker (eds.), 2016.
Governance and Control of Financial Systems: A Resilience Engineering Perspective by Gunilla Sundström and Erik Hollnagel, 2018.
References
Safety engineering
Hazard analysis | Resilience engineering | [
"Engineering"
] | 1,966 | [
"Safety engineering",
"Systems engineering",
"Hazard analysis",
"Reliability engineering"
] |
71,780,252 | https://en.wikipedia.org/wiki/Tank%20cascade%20system | The tank cascade system () is an ancient irrigation system spanning the island of Sri Lanka. It is a network of thousands of small irrigation tanks () draining to large reservoirs that store rainwater and surface runoff for later use. They make agriculture possible in the dry-zone, where periods of drought and flooding otherwise make it difficult to support paddy fields and livestock.
Originating in the 1st millennium BCE, the system was designated as a Globally Important Agricultural Heritage System by the United Nations Food and Agriculture Organization in 2017. Centralized bureaucratic management of large-scale systems was implemented from the 3rd to the 13th centuries. Small-scale systems continued to be well-maintained up until the abolishment of compulsory labor, following British consolidation of control over the island. Efforts since independence to rehabilitate the tanks have resulted in much of the system being restored, as well as the addition and integration of new reservoirs. The reservoirs total to 2.7% of the country's surface area and have a significant effect on the ecology of the island.
Etymology
A catchment site within the system is referred to as a wewa in Sinhala, and this term is translated into English as "tank".
These tanks are connected in a series, referred to as a cascade, so that an ephemeral waterflow can be used, stored for future use, or conveyed elsewhere. The native term in Sinhala for a cascade is ellangawa, a compound word combining ellan ("hanging") and gawa ("next to one another").
Geography
The tank cascade system is largely located in the semi-arid north-central section of the island, which experiences equatorial heat, limited freshwater, and erratic rainfall patterns. The monsoon cycle in the region, coupled with low water retention in the soils of the region, results in minimal groundwater storage capacity, high rates of evaporation, and low or variable precipitation, meaning that "in this hard rock region...no stable human settlement would have been possible without recourse to the storage of surface water in small tanks." Granite and charnockite underlie in this area, decreasing permeability. The "undulating topography" of the island's dry zone is also appropriate for pond or reservoir construction, with small dams being able to create large reservoirs.
Overall, Sri Lanka has 80 major dams and 18,000 extant tanks. Between 10,000 and 14,000 tanks are in active use as irrigation sources; the majority of these hold water in the north-central lowland dry zone. The total surface area of all reservoirs in Sri Lanka was estimated in 1988 at about 2.7% of the country's area. Of this, 39,000 hectares correspond to just 44 major ancient reservoirs.
History
Whereas the agriculture of the Fertile Crescent arose from stored water in low bottomland soil, and the agriculture of ancient Egypt was dependent on retained Nile River flood waters, ancient Sri Lankans used a chain of reservoir systems as their water source. Sri Lanka has been called a "hydraulic civilization." Similar ancient water engineering projects in tropical and subtropical climates include the qanats of Iran, oases in the Near East and North Africa, and the Gurganj Dam on the Amu Darya.
Researchers theorise that the evolution of the tank cascade began with rain-fed agriculture and then became increasingly sophisticated beginning with diverting rivulets, then permanent rivers, followed by a leap forward with the construction of spillways, weirs and ultimately sluices, then the construction of reservoirs, until, at the apogee of development, ancient Sri Lankans were able to successfully dam up perennial rivers and use the water as they saw fit. Historic uses of the tank cascade system included human needs (drinking water, sanitation, food production), ecosystem enrichment, urban development, administrative boundary setting ("water cordons"), and natural disaster mitigation.
Rainwater reservoirs were being constructed on the island as early as 300 BCE—there are assertions that Sorabora Wewa in Mahiyangana was constructed by the yaksha spirits even before the postulated Indo-Aryan migration to the island—and an estimated total of 30,000 tanks have been built over the history of Sri Lanka.
The existence of what is now called the tank cascade system is recorded in the Dīpavaṃsa and the two Mahāvaṃsa chronicles, which describe tanks, ponds, water holes, dams, canals, irrigation funding grants, irrigation income, irrigation taxes, and irrigation laws.
An estimated 15,000 tanks were built between 300 and 1300 CE, during the Anuradhapura Kingdom (437 CE–845 CE) and Polonnaruwa Kingdom (846 CE–1302 CE) eras. Sri Lankan irrigation engineers of this period were reportedly summoned or hired by other kingdoms for their expertise.
In the 9th century, bureaucracy to organise the irrigation system included a committee known as the Twelve Great Reservoirs.
The most famous surviving exemplars of the irrigation infrastructure used by Sri Lankan elites are the Abhayavapi rainwater reservoir in Anuradhapura built by Pandukabhaya (437–366 BCE) and the "lion rock" fortress Sigiriya, a UNESCO World Heritage Site. The only possible source of water at Sigiriya (which sits 360 meters atop the plain) is rainwater, which was cunningly managed through a network of pools, underground channels and drains.
Other historic landmarks of Sri Lanka water engineering include the lion pond of Mihinthale, the stone lotus pond of Polonnaruva, and the architecture of Kumara Pokuna, the royal baths of Parakramabahu the Great.
Thousands of modest tanks with hyperlocal catchment areas were built at the same time as "the larger and more impressive network of irrigation systems that [were]…controlled and directed by the kings and other higher echelons of the irrigation bureaucracy." The extensive tank cascade infrastructure incorporated local and regional Buddhist monasteries by providing them with their own irrigation access and related incomes. In contemporary Sri Lanka, "Buddhist monks of any given village…are often consulted on water management decisions and lead agro-based cultural festivities."
Eventually the tank cascade system entered a period of decline and partial abandonment. Maintenance of the system between the 1200s and the 1700s CE, considered the "dark ages of tank civilization," is poorly understood. Very little is known of this period as the historical record is thin, but the Rājākariya labour system may have been involved. Dutch colonial administrators (1640–1796 CE) mostly concerned themselves with cultivation of coastal areas and lucrative crops like cinnamon and seem to have ignored the inland tank cascade systems. During the British colonial period, the Rājākariya system was abolished and the tank cascade system seemingly suffered as a result.
In the late 1800s an effort was made to reclaim and reorganise the surviving remnants of the tank cascade system; water sluices were replaced on several hundred tanks, and restoration projects were initiated for larger elements including the Yodha Ela canal, Kala Wewa tank, Kantale tank, Giant's Tank and Minneriya-Elahara. British records also tell of village irrigation managers creating sluices from hollow tree trunks or clay pots turned into pipes.
The Sri Lankan Department of Agricultural Services has overseen irrigation-management groups, called Farmers Organizations, since 1979. Sri Lanka's current water management plan seeks to preserve the ecosystem and cultural benefits of the system while making large-scale investments in drinking water systems, sewage treatment plants, and commercial-industrial water infrastructure. In addition to the tank cascade system, surface irrigation has been used on the island since the mid-20th century. One source says "the tanks have been largely untouched since the 1970s with the development of large irrigation and hydropower schemes."
Similar historic tank cascade systems can be found in Tamil Nadu state in southern India and West Bengal state in eastern India.
Hydrology and function
Village tanks and cascades are "naturalized" and generally built with permeable natural materials rather than concreted in place. Tanks can be any size from small vernal pools to huge perennial lakes "thousands of hectares in surface area."
These tanks are connected into a series, the "cascade" or ellangawa, so that an ephemeral waterflow can be used, stored for future use, or conveyed elsewhere. The water flows through channels and spillways within a small or medium-sized drainage area (called kiul ela, ranging in size from 13 to 26 km², with an average size of 20 km²).
The cascade network draws from or serves to a variety of reservoirs: pahala wewa (village tank), kulu wewa (forest tanks), pin wewa (temple tanks), olagam wewa (supplementary tanks), ilaha wewa (storage tanks), et al. Tanks are edged with earthen embankments (or bund) called wekandas with integrated water gates called kuto sorowwas, horowwas (sluice) or bisokotuwas (valve pit) that release water into the canal system. The extent or expanse of water in the reservoir is called diyagiluma; the “dry lakebed” or “meadow” or parkland that the cascade potentially fills with water is wew pitiya. Village livestock congregate at the wew pitiya in the dry season. The upland stream channels are called diya para, the drainage channel exiting a village tank and paddy field is called kiwul ela.
The upstream edge of the tank is usually planted with a protective treeline called gasgommana and a reed bed for filtration, called perahana; the downstream edge is planted with biodiverse "interceptor" vegetation called kattakaduwa, intended as a bioremediation trap for salts and other contaminants. The gasgommana may be planted with indigenous species including Bassia longifolia, Terminalia arjuna, Crateva adansonii and Diospyros malabarica. Herbs and medicinal plants are grown in the upper thaulla area of the system, and vegetables are often grown on the mounded barriers that separate paddy fields.
Some upstream elements of the system were designed to trap sediment that could eventually block the canals, while other upstream "forest tanks" serve as watering holes to keep wildlife out of the human water supply. Still other tank elements are engineered to recharge the aquifer. Studies of similar tank cascade systems in India found that they increase well recharge by 40 per cent and decrease surface runoff by 75 per cent.
The cascade network can be understood as an integrated, human-managed ecosystem "where water and land resources are organized within the micro-catchments of the dry zone landscape, providing basic needs to human, floral and faunal communities through water, soil, air and vegetation."
Use
The system remains an important part of the modern Sri Lankan irrigation network, and supports much of the agriculture in the country. The stored water is mainly used for paddy field cultivation of Asian rice (Oryza sativa). The paddy fields are called wela; the fields closest to the water gate are called purara wela or purana vela, depending on transliteration (meaning old fields). The purara wela were originally communal. Fields further away are called akkara wela (acre field); these were often developed during the European colonial period, are privately owned, and have a less favourable water supply.
The farmers of the Sri Lankan paddy fields originally grew heritage rice varieties like Suwandal but have now largely transitioned to Green Revolution strains of rice.
There are more than 7,500 village-scale tanks in use today, along with many other reservoirs that are either larger or no longer used for traditional purposes.
Locals coordinate water use through Farmers Organizations and "appoint a person called Jala Palaka [water controller], who is supposed to release water according to the requirement of the farmers and the domestic users. The normal practice is that the water controller retains some water in the tank for domestic purposes."
Village water management practices vary and depend on the social structure of the community and "locally evolved" systems.
Historic village tanks had strict codes surrounding the use of the various bodies of water in the tank cascade system, with designated areas for bathing, cleaning, watering animals, laundry and so forth. In many districts, the village tank system provides drinking water through well recharge; the existence of a small to moderately sized tank raises the groundwater levels in the immediate environment. Farmers capitalise on this by digging a series of wells near the tank body, which they use to extract water for drinking and washing.
Larger reservoirs may have buildings or huts built along the shore, and may be used for freshwater fishing, hunting or poaching, and lotus flower picking, in addition to the typical agricultural and pastoral uses.
Development agencies hope that revitalising the system could both mitigate some of the negative effects of climate change and restore some of the comity lost to the Sri Lankan Civil War, although the system (which originated during a golden age of Sinhalese culture) may evoke less nostalgia among neighbours of Tamil ethnicity or Muslim faith.
Kidney disease
Some districts of Sri Lanka have epidemic rates of Chronic Kidney Disease of Unknown Etiology (CKDu). Pollution of groundwater by chemical-agricultural runoff is a suspected factor; men are more likely than women to develop the condition.
Kidney disease rates are highest in areas that use water diverted from the Mahaweli River.
Ecological and sociological dimensions
Benefits of the tank cascade system include creating cooler microclimates that serve as wildlife habitats, encouraging biodiversity through the establishment of many ecological niches and ecotones, and establishing conditions for a "unique decentralized social system in Sri Lanka where farmers have held the highest social rank."
The tanks and connecting channels are used as water sources and habitat by both domestic livestock and indigenous wildlife, including Sri Lankan elephants.
A biodiversity survey of just one tank cascade system in the Malwathu Oya river watershed found that it supported approximately 400 plant and animal species.
The local tank cascade systems persisted and stabilised local communities even when changing regimes on the national level led to the decline of the "large-scale centrally managed" tank cascade systems.
Farmers who were interviewed about their relationship with the tank cascade system referenced the Theravada Buddhist principle of Pratītyasamutpāda, suggesting that the "concept of a plurality of causes directly underpins the interconnected eco-systems approach that farmers of the tank cascade system apply to water."
Active restoration of a tank cascade system to historic standards can be observed at Alisthana, at the 112-kilometre post on the A9 road.
Gallery
See also
Sri Lanka dry-zone dry evergreen forests
Qanat (Middle East and North Africa)
Johad (Northern India)
Minneriya National Park
Yala National Park
Kaudulla National Park
Pidurangala Vihara
Notes
References
External links
United Nations Development Programme: Ancient water tanks of Sri Lanka to adapt to a changing climate
P.B. Dharmasena agriculture and water management teaching slideshows
Google Scholar Vindanage small tank papers
Proposal - Globally Important Agricultural Heritage System (GIAHS) Designation: The Cascaded Tank Village System (CTVS) in the Dry Zone of Sri Lanka - report by Sri Lanka Ministry of Agriculture & FAO UN
Irrigation in Sri Lanka
Permaculture
Rainwater harvesting
Water supply
Environment of Sri Lanka
Globally Important Agricultural Heritage Systems
Dams in Sri Lanka
History of dams | Tank cascade system | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 3,156 | [
"Hydrology",
"Water supply",
"Environmental engineering"
] |
71,781,428 | https://en.wikipedia.org/wiki/Robert%20P.%20Lattimer | Robert P. Lattimer (born February 2, 1945) is a retired chemist who worked for Lubrizol as an Advanced Materials research and development technical fellow. He is an advocate for including intelligent design in public school science curricula.
Education
Lattimer attended the University of Missouri where he earned a B.S. in chemistry. He obtained his doctoral degree in 1971 in physical/analytical chemistry from the University of Kansas.
Career
Lattimer worked for B.F. Goodrich and later Noveon and Lubrizol as a research chemist. He retired as a Senior Technical Fellow following nearly 40 years of service. His published work on mass spectrometry and polymer characterization and degradation has been widely cited. He is a past Vice-President of the American Society for Mass Spectrometry. Lattimer was Vice-Chairman of the 1985 Gordon Research Conference on Analytical Pyrolysis. His most cited work treated the subject of mass spectrometry of transition metal macrocycles.
Political advocacy
Lattimer is a board member for the Eagle Forum of Ohio. He has advocated for pro-family issues in the state, and he has been the Science Issues Chairman. He advocated for including Intelligent Design in the Ohio Board of Education's state science curriculum. Lattimer was a founder of the advocacy group Science Excellence for All Ohioans (SEAO). He co-authored a book titled The Evolution Controversy. He is a signer of A Scientific Dissent from Darwinism.
Awards and recognition
1990 - Sparks–Thomas award
2008 - Melvin Mooney Distinguished Technology Award from the ACS Rubber Division
He is a recipient of an Eagle Award from Eagle Forum and a Wedge of Truth Award from IDnet.
References
1945 births
Polymer scientists and engineers
20th-century American engineers
Living people
Intelligent design advocates
University of Missouri alumni
University of Kansas alumni
Mass spectrometrists | Robert P. Lattimer | [
"Physics",
"Chemistry",
"Materials_science"
] | 379 | [
"Physical chemists",
"Spectrum (physical sciences)",
"Biochemists",
"Mass spectrometrists",
"Mass spectrometry",
"Polymer chemistry",
"Polymer scientists and engineers"
] |
56,140,139 | https://en.wikipedia.org/wiki/List%20object | In category theory, an abstract branch of mathematics, and in its applications to logic and theoretical computer science, a list object is an abstract definition of a list, that is, a finite ordered sequence.
Formal definition
Let C be a category with finite products and a terminal object 1.
A list object over an object A of C is:
an object L_A,
a morphism o_A : 1 → L_A, and
a morphism s_A : A × L_A → L_A
such that for any object B of C with maps b : 1 → B and t : A × B → B, there exists a unique f : L_A → B such that the following diagram commutes:
where 〈id, f〉 denotes the arrow induced by the universal property of the product when applied to id (the identity on A) and f. The notation A* (à la Kleene star) is sometimes used to denote lists over A.
Equivalent definitions
In a category with a terminal object 1, binary coproducts (denoted by +), and binary products (denoted by ×), a list object over A can be defined as the initial algebra of the endofunctor that acts on objects by X ↦ 1 + (A × X) and on arrows by f ↦ id_1 + (id_A × f).
Examples
In Set, the category of sets, list objects over a set A are simply finite lists with elements drawn from A. In this case, o_A picks out the empty list and s_A corresponds to appending an element to the head of the list.
In the calculus of inductive constructions or similar type theories with inductive types (or heuristically, even strongly typed functional languages such as Haskell), lists are types defined by two constructors, nil and cons, which correspond to o_A and s_A, respectively. The recursion principle for lists guarantees they have the expected universal property.
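As a concrete illustration of the recursion principle, here is a minimal sketch in Python that models lists over A as nested pairs in Set (the names nil, cons and fold are conventional choices, not part of any library):

```python
# o : 1 -> L_A picks out the empty list; s : A x L_A -> L_A prepends an element.
nil = ()

def cons(a, rest):
    return (a, rest)

def fold(b, t, l):
    """The unique morphism f : L_A -> B determined by b : 1 -> B and
    t : A x B -> B, i.e. f(nil) = b and f(cons(a, rest)) = t(a, f(rest))."""
    if l == nil:
        return b
    a, rest = l
    return t(a, fold(b, t, rest))

# Length (see Properties below) collapses a list to a natural number:
# take b = 0 and t(a, n) = 1 + n.
xs = cons("a", cons("b", cons("c", nil)))
print(fold(0, lambda a, n: 1 + n, xs))  # prints 3
```

The uniqueness clause of the universal property is what guarantees that any function satisfying the two fold equations agrees with this one.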
Properties
Like all constructions defined by a universal property, lists over an object are unique up to canonical isomorphism.
The object L_1 (lists over the terminal object) has the universal property of a natural number object. In any category with lists, one can define the length of a list to be the unique morphism ℓ : L_A → L_1 which makes the following diagram commute:
References
See also
Natural number object
F-algebra
Initial algebra
Objects (category theory)
Topos theory | List object | [
"Mathematics"
] | 449 | [
"Objects (category theory)",
"Mathematical structures",
"Category theory",
"Topos theory"
] |
56,141,841 | https://en.wikipedia.org/wiki/Stains-all | Stains-all is a carbocyanine dye, which stains anionic proteins, nucleic acids, anionic polysaccharides and other anionic molecules.
Properties
Stains-all is metachromatic: its color changes depending on the molecules it contacts. The detection limit for phosphoproteins is below 1 ng after one hour of staining, and for anionic polysaccharides between 10 and 500 ng. Highly anionic proteins are stained blue, proteoglycans purple and anionic proteins pink. RNA is stained bluish-purple with a detection limit of 90 ng, and DNA is stained blue with a detection limit of 3 ng.
Stains-all is light sensitive; staining is therefore performed in the absence of light and photographed immediately. Staining of proteins can be improved by a subsequent silver stain. The analogue Ethyl-Stains-all has properties similar to those of Stains-all, with differences in solubility and staining behavior.
Applications
Stains-all stains nucleic acids, anionic proteins, anionic polysaccharides such as alginate and pectinate, hyaluronic acid and dermatan sulfate, heparin, heparan sulfate and chondroitin sulfate. It is used in SDS-PAGE, agarose gel electrophoresis and histologic staining, e.g. staining of growth lines in bones.
References
Thiazole dyes
Biochemistry methods
Histology | Stains-all | [
"Chemistry",
"Biology"
] | 313 | [
"Biochemistry methods",
"Histology",
"Biochemistry",
"Microscopy"
] |
56,144,090 | https://en.wikipedia.org/wiki/Drosophila%20circadian%20rhythm | Drosophila circadian rhythm is a daily 24-hour cycle of rest and activity in the fruit flies of the genus Drosophila. The biological process was discovered and is best understood in the species Drosophila melanogaster. Many behaviors are under circadian control including eclosion, locomotor activity, feeding, and mating. Locomotor activity is maximum at dawn and dusk, while eclosion is at dawn.
Biological rhythms were first studied in Drosophila pseudoobscura. The Drosophila circadian rhythm has paved the way for understanding circadian behaviour and diseases related to sleep-wake conditions in other animals, including humans, because circadian clocks are fundamentally similar. The Drosophila circadian rhythm was discovered in 1935 by the German zoologists Hans Kalmus and Erwin Bünning. The American biologist Colin S. Pittendrigh provided an important experiment in 1954, which established that circadian rhythm is driven by a biological clock. The genetics was first understood in 1971, when Seymour Benzer and Ronald J. Konopka reported that mutations in specific genes change or stop the circadian behaviour. They discovered the gene called period (per), mutations of which alter the circadian rhythm. It was the first gene known to control behaviour. After a decade, Konopka, Jeffrey C. Hall, Michael Rosbash, and Michael W. Young discovered novel genes including timeless (tim), Clock (Clk), cycle (cyc), and cry. These genes and their protein products play a key role in the circadian clock. The research conducted in Benzer's lab is narrated in Time, Love, Memory by Jonathan Weiner.
For their contributions, Hall, Rosbash and Young received the Nobel Prize in Physiology or Medicine in 2017.
History
During the process of eclosion, by which an adult fly emerges from the pupa, Drosophila exhibits regular locomotor activity (by vibration) that occurs in 8–10 hour intervals starting just before dawn. The existence of this circadian rhythm was independently discovered in D. melanogaster in 1935 by two German zoologists, Hans Kalmus at the Zoological Institute of the German University in Prague (now Charles University), and Erwin Bünning at the Botanical Institute of the University of Jena. Kalmus discovered in 1938 that the brain area is responsible for the circadian activity. Kalmus and Bünning were of the opinion that temperature was the main factor. But it was soon realized that the circadian rhythm could remain unchanged even at different temperatures. In 1954, Colin S. Pittendrigh at Princeton University discovered the importance of light-dark conditions in D. pseudoobscura. He demonstrated that the eclosion rhythm was delayed but not stopped when temperature was decreased. He concluded that temperature influenced only the peak hour of the rhythm and was not the principal factor. It was then known that the circadian rhythm was controlled by a biological clock. But the nature of the clock was then a mystery.
After almost two decades, the existence of the circadian clock was discovered by Seymour Benzer and his student Ronald J. Konopka at the California Institute of Technology. They discovered that mutations in the X chromosome of D. melanogaster could produce abnormal circadian activity. When a specific part of the chromosome was absent (inactivated), there was no circadian rhythm; in one mutation (called perS, "S" for short or shortened) the rhythm was shortened to ~19 hours, whereas in another mutation (perL, "L" for long or lengthened) the rhythm was extended to ~29 hours, as opposed to the normal 24-hour rhythm. They published the discovery in 1971. They named the gene locus period (per for short), as it controls the period of the rhythm. In opposition, other scientists maintained that genes could not control behaviors as complex as circadian activity.
Another circadian behavior in Drosophila is courtship between the male and female during mating. Courtship involves a song accompanied by a ritual locomotory dance in males. The main flight activity generally takes place in the morning and another peak occurs before sunset. Courtship song is produced by the male's wing vibration and consists of pulses of tone produced at intervals of approximately 34 msec in D. melanogaster (48 msec in D. simulans). In 1980, Jeffrey C. Hall and his student Charalambos P. Kyriacou, at Brandeis University in Waltham, discovered that courtship activity is also controlled by the per gene. In 1984, Konopka, Hall, Michael Rosbash and their team reported in two papers that the per locus is the centre of the circadian rhythm, and that loss of per stops circadian activity. At the same time, Michael W. Young's team at the Rockefeller University reported similar effects of per, and that the gene covers a 7.1-kilobase (kb) interval on the X chromosome and encodes a 4.5-kb poly(A)+ RNA. In 1986, they sequenced the entire DNA fragment and found that the gene encodes the 4.5-kb RNA, which produces a protein, a proteoglycan, composed of 1,127 amino acids. At the same time, Rosbash's team showed that PER protein is absent in mutant per. In 1994, Young and his team discovered the gene timeless (tim), which influences the activity of per. In 1998, they discovered doubletime (dbt), which regulates the amount of PER protein.
In 1990, Konopka, Rosbash, and colleagues identified a new gene called Clock (Clk), which is vital for the circadian period. In 1998, they found a new gene, cycle (cyc), which acts together with Clk. In late 1998, Hall and Rosbash's team discovered cryb, a gene for sensitivity to blue light. They simultaneously identified the protein CRY as the main light-sensitive (photoreceptor) system. The activity of cry is under circadian regulation, and influenced by other genes such as per, tim, clk, and cyc. The gene product CRY is a major photoreceptor protein belonging to a class of flavoproteins called cryptochromes. They are also present in bacteria and plants. In 1998, Hall and Jae H. Park isolated a gene encoding a neuropeptide named pigment dispersing factor (PDF), based on one of the roles it plays in crustaceans. In 1999, they discovered that pdf is expressed by the ventral lateral neurone clusters (LNv), indicating that the PDF protein is the major circadian neurotransmitter and that the LNv neurones are the principal circadian pacemakers. In 2001, Young and his team demonstrated that the glycogen synthase kinase-3 (GSK-3) ortholog shaggy (SGG) is an enzyme that regulates TIM maturation and accumulation in the early night, by causing phosphorylation.
Hall, Rosbash, and Young shared the Nobel Prize in Physiology or Medicine 2017 “for their discoveries of molecular mechanisms controlling the circadian rhythm”.
Mechanism
In Drosophila there are two distinct groups of circadian clocks: the clock neurons and the clock genes. They act concertedly to produce the 24-hour cycle of rest and activity. Light is the source of activation of the clocks. The compound eyes, ocelli, and Hofbauer-Buchner eyelets (HB eyelets) are the direct external photoreceptor organs. But the circadian clock can work in constant darkness. Nonetheless, the photoreceptors are required for measuring the day length and detecting moonlight. The compound eyes are important for differentiating long days from constant light, and for the normal masking effects of light, such as the induction of activity by light and its inhibition by darkness. There are two distinct activity peaks, termed the M (for morning) peak, happening at dawn, and the E (for evening) peak, at dusk. They monitor the different day lengths in different seasons of the year. The light-sensitive proteins in the eye, called rhodopsins (rhodopsin 1 and 6), are crucial in activating the M and E oscillations. When environmental light is detected, approximately 150 neurones (of about 100,000 neurones in the Drosophila brain) regulate the circadian rhythm. The clock neurons are located in distinct clusters in the central brain. The best-understood clock neurons are the large and small lateral ventral neurons (l-LNvs and s-LNvs) at the base of the optic lobe. These neurons produce pigment dispersing factor (PDF), a neuropeptide that acts as a circadian neuromodulator between different clock neurons.
The Drosophila circadian clock keeps time via daily fluctuations of clock-related proteins, which interact in a transcription-translation feedback loop. The core clock mechanism consists of two interdependent feedback loops, namely the PER/TIM loop and the CLK/CYC loop. The CLK/CYC loop occurs during the day, when both the Clock and Cycle proteins are produced. CLK/CYC heterodimers act as transcription factors and bind together to initiate the transcription of the per and tim genes by binding to a promoter element called the E box, around mid-day. DNA is transcribed to produce PER mRNA and TIM mRNA. PER and TIM proteins are synthesized in the cytoplasm and exhibit a smooth increase in levels over the day. Their RNA levels peak early in the evening and protein levels peak around daybreak. But their protein levels are maintained at constantly low levels until dusk, because daylight also activates the double-time (dbt) gene. DBT protein induces post-translational modifications, that is, phosphorylation and turnover of monomeric PER proteins. As PER is translated in the cytoplasm, it is actively phosphorylated by DBT (casein kinase 1ε) and casein kinase 2 (synthesized by And and Tik) as a prelude to premature degradation. The actual degradation is through the ubiquitin-proteasome pathway and is carried out by a ubiquitin ligase called Slimb (supernumerary limbs). At the same time, TIM is itself phosphorylated by shaggy, whose activity declines after sunset. DBT gradually disappears, and the withdrawal of DBT promotes PER molecules to be stabilized by physical association with TIM. Hence, maximum production of PER and TIM occurs at dusk. At the same time, CLK/CYC also directly activates vri and Pdp1 (the gene for PAR domain protein 1). VRI accumulates first, 3-6 hours earlier, and starts to repress Clk; but the incoming PDP1 creates competition by activating Clk. PER/TIM dimers accumulate in the early night, translocate in an orchestrated fashion into the nucleus several hours later, and bind to CLK/CYC dimers. Bound PER completely stops the transcriptional activity of CLK and CYC.
In the early morning, the appearance of light causes PER and TIM proteins to break down in a network of transcriptional activation and repression. First, light activates the cry gene in the clock neurons. Although CRY is produced deep inside the brain, it is sensitive to UV and blue light, and thus it readily signals the onset of light to the brain cells. It irreversibly and directly binds to TIM, causing it to break down through proteasome-dependent ubiquitin-mediated degradation. CRY's photolyase homology domain is used for light detection and phototransduction, whereas the carboxyl-terminal domain regulates CRY stability, CRY-TIM interaction, and circadian photosensitivity. The ubiquitination and subsequent degradation are aided by a different protein, JET. Thus the PER/TIM dimer dissociates, and the unbound PER becomes unstable. PER undergoes progressive phosphorylation and ultimately degradation. The absence of PER and TIM allows activation of the clk and cyc genes. Thus, the clock is reset to commence the next circadian cycle.
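The delayed-negative-feedback logic of this loop can be caricatured with a Goodwin-type oscillator. The sketch below is a generic toy model with illustrative, unfitted parameters; it is not one of the published Drosophila clock models:

```python
# Toy transcription-translation feedback loop (Goodwin-type), Euler-integrated.
# M = clock mRNA (think per/tim), P = cytoplasmic protein, N = nuclear repressor.
def simulate(hours=200.0, dt=0.01):
    M, P, N = 0.1, 0.1, 0.1
    v, K, n = 10.0, 1.0, 10       # max transcription, repression threshold, Hill coefficient
    ks, kt = 1.0, 0.1             # translation and nuclear-entry rates
    dm, dp, dn = 0.4, 0.3, 0.4    # degradation rates (cf. DBT-driven PER turnover)
    trace = []
    for step in range(int(hours / dt)):
        dM = v / (1.0 + (N / K) ** n) - dm * M  # transcription repressed by nuclear protein
        dP = ks * M - kt * P - dp * P           # synthesis, nuclear entry, degradation
        dN = kt * P - dn * N
        M += dM * dt
        P += dP * dt
        N += dN * dt
        trace.append((step * dt, M))
    return trace  # M rises and falls rhythmically; the period depends on the chosen rates
```

The steep Hill repression plays the role of PER/TIM shutting down CLK/CYC activity, and the transport step between cytoplasmic and nuclear protein mimics the delayed, orchestrated nuclear entry described above.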
References
Circadian rhythm
circadian | Drosophila circadian rhythm | [
"Biology"
] | 2,564 | [
"Behavior",
"Sleep",
"Circadian rhythm"
] |
56,145,576 | https://en.wikipedia.org/wiki/Vestronidase%20alfa | Vestronidase alfa, sold under the brand name Mepsevii, is a medication for the treatment of Sly syndrome. It is a recombinant form of the human enzyme beta-glucuronidase. It was approved in the United States in November 2017 to treat children and adults with an inherited metabolic condition called mucopolysaccharidosis type VII (MPS VII), also known as Sly syndrome. MPS VII is an extremely rare, progressive condition that affects most tissues and organs.
The most common side effects after treatment with vestronidase alfa include infusion site reactions, diarrhea, rash (urticaria) and anaphylaxis (sudden, severe allergic reaction).
The U.S. Food and Drug Administration (FDA) considers it to be a first-in-class medication. It was approved for use in the European Union in August 2018.
Medical uses
Mepsevii is indicated for the treatment of non-neurological manifestations of Mucopolysaccharidosis VII (MPS VII; Sly syndrome).
History
The safety and efficacy of vestronidase alfa were established in a clinical trial and expanded access protocols enrolling a total of 23 participants ranging from five months to 25 years of age. Participants received treatment with vestronidase alfa at doses up to 4 mg/kg once every two weeks for up to 164 weeks. Efficacy was primarily assessed via the six-minute walk test in ten participants who could perform the test. After 24 weeks of treatment, the mean difference in distance walked relative to placebo was 18 meters. Additional follow-up for up to 120 weeks suggested continued improvement in three participants and stabilization in the others. Two participants in the vestronidase alfa development program experienced marked improvement in pulmonary function. Overall, the results observed would not have been anticipated in the absence of treatment. The effect of vestronidase alfa on the central nervous system manifestations of MPS VII has not been determined.
The FDA approved vestronidase alfa-vjbk based primarily on evidence from one clinical trial (NCT02230566) of 12 participants with mucopolysaccharidosis VII. The trial was conducted at four sites in the United States.
The benefit and side effects of vestronidase alfa were assessed primarily in one trial. Participants were randomly assigned to four groups. Three groups of participants received placebo treatment before starting vestronidase alfa treatment and one group received vestronidase alfa only. Vestronidase alfa or placebo was given once every two weeks as an intravenous (IV) infusion. Neither participants nor healthcare providers knew which treatment was given until after the trial was completed.
The benefit of 24 weeks of vestronidase alfa treatment was primarily evaluated by the 6-minute walking test (6MWT) and compared to placebo treatment in ten participants who could perform the test. The 6MWT measured the distance a patient could walk on a flat surface in 6 minutes. An additional follow-up using 6MWT was done for up to 120 weeks.
The application for vestronidase alfa was granted fast track designation, orphan drug designation, and a rare pediatric disease priority review voucher. This was the twelfth rare pediatric disease priority review voucher issued.
The US Food and Drug Administration (FDA) granted approval of Mepsevii to Ultragenyx Pharmaceutical, Inc, and required the manufacturer to conduct a post-marketing study to evaluate the long-term safety of the product.
References
External links
Orphan drugs
Recombinant proteins | Vestronidase alfa | [
"Biology"
] | 729 | [
"Recombinant proteins",
"Biotechnology products"
] |
74,594,353 | https://en.wikipedia.org/wiki/Diphenyl%20sulfide | Diphenyl sulfide is an organosulfur compound with the chemical formula (C6H5)2S, often abbreviated as Ph2S, where Ph stands for phenyl. It is a colorless liquid with an unpleasant odor. Diphenyl sulfide is an aromatic sulfide. The molecule consists of two phenyl groups attached to a sulfur atom.
Synthesis, reactions, occurrence
Many methods exist for the preparation of diphenyl sulfide. It arises by a Friedel-Crafts-like reaction of sulfur monochloride and benzene. Diphenyl sulfide and its analogues can also be produced by coupling reactions using metal catalysts. It can also be prepared by reduction of diphenyl sulfone.
Diphenyl sulfide is a product of the photodegradation of the fungicide edifenphos.
Diphenyl sulfide is a precursor to triarylsulfonium salts, which are used as photoinitiators. The compound can be oxidized to the sulfoxide with hydrogen peroxide.
References
Aromatic compounds
Cyclic compounds
Organosulfur compounds
Sulfur compounds
Thioethers | Diphenyl sulfide | [
"Chemistry"
] | 225 | [
"Organic compounds",
"Aromatic compounds",
"Organosulfur compounds"
] |
74,595,119 | https://en.wikipedia.org/wiki/Mustard%20cake | Mustard cake is the residue obtained after extraction of oil from mustard seed, and it is used as an organic fertilizer. Mustard cake powder is an excellent organic fertilizer, containing nutrients and growth-promoting substances for herbaceous plants (fruit, flower and vegetable plants). Mustard cake is also very useful as feed for livestock and cattle.
Effectivity
Mustard cake powder is a universal and harmless fertilizer, as it contains no ingredients other than mustard. It can be used both by mixing it into the soil and as a liquid organic fertilizer.
It meets plants' needs for nitrogen, potassium and various macro- and microelements, helping flowers, fruits and plants grow to the right size.
It is a natural and eco-friendly organic fertilizer.
It provides phosphorus to plants.
How to use
Mustard cake powder can be used in two ways.
Direct use
Mustard cake must be finely ground before being applied directly to the soil. The powder can be applied at a distance of half a meter from the base of the plant.
Liquid form
Mustard cake powder needs to be soaked in water for a week to decompose. After a week, the fermented liquid should be diluted 1:10 with fresh water and applied to the soil near the base of the plant.
References
Fertilizers | Mustard cake | [
"Chemistry"
] | 256 | [
"Fertilizers",
"Soil chemistry"
] |
74,597,445 | https://en.wikipedia.org/wiki/Chain%20reactions%20in%20living%20organisms | A chain reaction, in chemistry and physics, is a process that produces products capable of initiating subsequent processes of a similar nature. It is a self-sustaining sequence in which the resulting products continue to propagate further reactions. Examples of chain reactions in living organisms are lipid peroxidation in cell membranes and the propagation of excitation of neurons in epilepsy.
Lipid peroxidation in cell membranes
Nonenzymatic peroxidation occurs through the action of reactive oxygen species (ROS), specifically hydroxyl (HO•) and hydroperoxyl (HO2•) radicals, which initiate the oxidation of polyunsaturated fatty acids. Other initiators of lipid peroxidation include ozone (O3), nitric oxide (NO), nitrogen dioxide (NO2), and sulfur dioxide (SO2). The process of nonenzymatic peroxidation can be divided into three phases: initiation, propagation, and termination.
During the initiation phase, fatty acid radicals are generated, which can propagate peroxidation to other molecules. This occurs when a free radical removes a hydrogen atom from a fatty acid, resulting in a lipid radical (L•) with an unpaired electron.
In the propagation phase, the lipid radical reacts with oxygen (O2) or a transition metal, forming a peroxyl radical (LOO•). This peroxyl radical continues the chain reaction by reacting with a new unsaturated fatty acid, producing a new lipid radical (L•) and lipid hydroperoxide (LOOH). These primary products can further decompose into secondary products.
The termination phase involves the interaction of a radical with an antioxidant molecule, such as α-tocopherol (vitamin E), which inhibits the propagation of chain reactions, thus terminating peroxidation. Another route of termination is the reaction between a lipid radical and a lipid peroxide, or the combination of two lipid peroxide molecules, resulting in stable nonreactive molecules. Isotope-reinforced lipids, which become part of the membrane when consumed in a heavy-isotope diet, also inhibit peroxidation.
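A deliberately simplified rate-equation sketch of the three phases (species, rate constants and initial values are illustrative assumptions, not measured data):

```python
# Toy kinetics: initiation (LH -> L*), propagation (L* -> LOO* -> LOOH + L*),
# termination (LOO* + antioxidant AH -> nonradical products).
def simulate(time=200.0, dt=0.001):
    LH, L, LOO, LOOH, AH = 1.0, 0.0, 0.0, 0.0, 0.01
    k_init = 1e-3   # ROS-driven hydrogen abstraction from LH
    k_o2 = 50.0     # L* + O2 -> LOO* (constant O2 folded into the rate constant)
    k_prop = 10.0   # LOO* + LH -> LOOH + L* (the chain-carrying step)
    k_term = 500.0  # LOO* + AH -> stable products (alpha-tocopherol-like quenching)
    for _ in range(int(time / dt)):
        r_i = k_init * LH
        r_o = k_o2 * L
        r_p = k_prop * LOO * LH
        r_t = k_term * LOO * AH
        LH += (-r_i - r_p) * dt
        L += (r_i - r_o + r_p) * dt
        LOO += (r_o - r_p - r_t) * dt
        LOOH += r_p * dt
        AH = max(AH - r_t * dt, 0.0)
    return LH, LOOH, AH

# While AH lasts it terminates chains; once it is consumed, LOOH accumulates rapidly.
print(simulate())
```

The qualitative behaviour, slow radical build-up, antioxidant-limited termination, then runaway propagation, mirrors the three phases described above.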
Propagation of excitation of neurons in epilepsy
Epilepsy is a neurological condition marked by recurring seizures. It occurs when the brain's electrical activity becomes unbalanced, leading to repeated seizures. These seizures disrupt the normal electrical patterns in the brain, causing sudden and synchronized bursts of electrical energy. As a result, individuals may experience temporary changes in consciousness, movements, or sensations.
Glutamate excitotoxicity is thought to play an important role in the initiation and maintenance of epileptic seizures. The seizure-induced high flux of glutamate overstimulates glutamate receptors, triggering a chain reaction of excitation in glutamatergic networks.
References
Chemical reactions
Biochemistry | Chain reactions in living organisms | [
"Chemistry",
"Biology"
] | 592 | [
"Biochemistry",
"Chemical reaction stubs",
"nan"
] |
74,598,694 | https://en.wikipedia.org/wiki/Reciprocal%20human%20machine%20learning | Reciprocal Human Machine Learning (RHML) is an interdisciplinary approach to designing human-AI interaction systems. RHML aims to enable continual learning between humans and machine learning models by having them learn from each other. This approach keeps the human expert "in the loop" to oversee and enhance machine learning performance and simultaneously support the human expert continue learning.
Background
RHML emerged in the context of the rise of big data analytics and artificial intelligence for intelligent tasks like sense-making and decision-making. As machine learning advanced to take on more roles, researchers realized fully autonomous systems had limitations and needed human guidance.
RHML extends the concept of human-in-the-loop systems by promoting reciprocal learning. Humans learn from their interactions with machine learning models, staying up-to-date on evolving technology. The models also learn from human feedback and oversight. This amplification of learning on both sides is a key focus of RHML.
The approach draws on theories of learning in dyads from education and psychology. It also builds on human-computer interaction and human-centered design principles. Implementing RHML requires developing specialized tools and interfaces tailored to the application.
Applications
RHML has been explored across diverse domains including:
Cybersecurity - Software to enable reciprocal learning between experts and AI models for social media threat detection.
Organizational decision-making - RHML to structure collaboration between humans and AI systems.
Workplace training - Using RHML for workers to learn from AI technologies on the job.
Open science - Using human and AI collaboration to promote open science.
Production and logistics - Turning workers and intelligent machines into teammates.
RHML maintains human oversight and control over AI systems, while enabling cutting-edge machine learning performance. This collaborative approach highlights the importance of keeping the human expert involved in the loop.
References
Machine learning
Human–computer interaction | Reciprocal human machine learning | [
"Engineering"
] | 374 | [
"Artificial intelligence engineering",
"Human–computer interaction",
"Human–machine interaction",
"Machine learning"
] |
76,122,044 | https://en.wikipedia.org/wiki/SmithKline%20Beecham%20Clinical%20Laboratories | SmithKline Beecham Clinical Laboratories (SBCL) was an American-based medical laboratory company that was acquired by Quest Diagnostics in 1999 for $1.3 billion.
Controversies
In 1989, SBCL had to pay a $1.5 million fine for illegal laboratory referral kickbacks.
In 1997, Operation LabScam forced SBCL to agree to pay a $325 million settlement for billing Medicare and Medicaid for tests that physicians were misled into believing were free, violating the 1863 False Claims Act.
In 1998, a phlebotomist at an SBCL facility in Palo Alto, California was exposed as reusing needles to save money. As a result, over 3,600 patients had to receive testing and counseling for HIV and hepatitis. The incident led to phlebotomy licensure in California.
References
Life sciences industry | SmithKline Beecham Clinical Laboratories | [
"Biology"
] | 172 | [
"Life sciences industry"
] |
76,122,969 | https://en.wikipedia.org/wiki/Action%20principles | Action principles lie at the heart of fundamental physics, from classical mechanics through quantum mechanics, particle physics, and general relativity. Action principles start with an energy function called a Lagrangian describing the physical system. The accumulated value of this energy function between two states of the system is called the action. Action principles apply the calculus of variations to the action. The action depends on the energy function, and the energy function depends on the position, motion, and interactions in the system: variation of the action allows the derivation of the equations of motion without vectors or forces.
Several distinct action principles differ in the constraints on their initial and final conditions.
The names of action principles have evolved over time and differ in details of the endpoints of the paths and the nature of the variation. Quantum action principles generalize and justify the older classical principles. Action principles are the basis for Feynman's version of quantum mechanics, general relativity and quantum field theory.
The action principles have applications as broad as physics, including many problems in classical mechanics but especially in modern problems of quantum mechanics and general relativity. These applications built up over two centuries as the power of the method and its further mathematical development rose.
This article introduces the action principle concepts and summarizes other articles with more details on concepts and specific principles.
Common concepts
Action principles are "integral" approaches rather than the "differential" approach of Newtonian mechanics. The core ideas are based on energy, paths, an energy function called the Lagrangian along paths, and selection of a path according to the "action", a continuous sum or integral of the Lagrangian along the path.
Energy, not force
Introductory study of mechanics, the science of interacting objects, typically begins with Newton's laws based on the concept of force, defined by the acceleration it causes when applied to mass: $\vec{F} = m\vec{a}$. This approach to mechanics focuses on a single point in space and time, attempting to answer the question: "What happens next?". Mechanics based on action principles begins with the concept of action, an energy tradeoff between kinetic energy and potential energy, defined by the physics of the problem. These approaches answer questions relating starting and ending points: Which trajectory will place a basketball in the hoop? If we launch a rocket to the Moon today, how can it land there in 5 days? The Newtonian and action-principle forms are equivalent, and either one can solve the same problems, but selecting the appropriate form will make solutions much easier.
The energy function in the action principles is not the total energy (conserved in an isolated system), but the Lagrangian, the difference between kinetic and potential energy. The kinetic energy combines the energy of motion for all the objects in the system; the potential energy depends upon the instantaneous position of the objects and drives the motion of the objects. The motion of the objects places them in new positions with new potential energy values, giving a new value for the Lagrangian.
Using energy rather than force gives immediate advantages as a basis for mechanics. Force mechanics involves 3-dimensional vector calculus, with 3 space and 3 momentum coordinates for each object in the scenario; energy is a scalar magnitude combining information from all objects, giving an immediate simplification in many cases. The components of force vary with coordinate systems; the energy value is the same in all coordinate systems. Force requires an inertial frame of reference; once velocities approach the speed of light, special relativity profoundly affects mechanics based on forces. In action principles, relativity merely requires a different Lagrangian: the principle itself is independent of coordinate systems.
Paths, not points
The explanatory diagrams in force-based mechanics usually focus on a single point, like the center of momentum, and show vectors of forces and velocities. The explanatory diagrams of action-based mechanics have two points with actual and possible paths connecting them. These diagrammatic conventions reiterate the different strong points of each method.
Depending on the action principle, the two points connected by paths in a diagram may represent two particle positions at different times, or the two points may represent values in a configuration space or in a phase space. The mathematical technology and terminology of action principles can be learned by thinking in terms of physical space, then applied in the more powerful and general abstract spaces.
Action along a path
Action principles assign a number—the action—to each possible path between two points. This number is computed by adding an energy value for each small section of the path multiplied by the time spent in that section:
$$S = \int_{t_1}^{t_2} \left[ K(t) - U(t) \right] dt$$
where the form of the kinetic ($K$) and potential ($U$) energy expressions depend upon the physics problem, and their value at each point on the path depends upon relative coordinates corresponding to that point. The energy function is called a Lagrangian; in simple problems it is the kinetic energy minus the potential energy of the system.
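As a minimal worked example (our own illustration, not from the source), consider a free particle of mass $m$ moving at constant speed $v$, so that $K = \frac{1}{2}mv^2$ and $U = 0$:

```latex
% Action of a free particle moving uniformly from t_1 to t_2.
% With U = 0 and constant speed v, the Lagrangian is constant,
% so the integral reduces to multiplying by the elapsed time.
S = \int_{t_1}^{t_2} \tfrac{1}{2} m v^2 \, dt = \tfrac{1}{2} m v^2 \, (t_2 - t_1)
```

Among all paths connecting the same endpoints in the same time, uniform motion is the one that makes $S$ stationary, in agreement with Newton's first law.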
Path variation
A system moving between two points takes one particular path; other similar paths are not taken. Each path corresponds to a value of the action.
An action principle predicts or explains that the particular path taken has a stationary value for the system's action: similar paths near the one taken have very similar action values. This variation in the action value is key to the action principles.
The symbol $\delta$ is used to indicate the path variations, so an action principle appears mathematically as
$$\delta S = 0,$$
meaning that at the stationary point, the variation of the action with some fixed constraints is zero.
For action principles, the stationary point may be a minimum or a saddle point, but not a maximum. Elliptical planetary orbits provide a simple example of two paths with equal action, one in each direction around the orbit; neither can be the minimum or "least action". The path variation implied by $\delta$ is not the same as a differential like $dt$. The action integral depends on the coordinates of the objects, and these coordinates depend upon the path taken. Thus the action integral is a functional, a function of a function.
Conservation principles
An important result from geometry known as Noether's theorem states that any conserved quantity in a Lagrangian implies a continuous symmetry, and conversely. For example, a Lagrangian independent of time corresponds to a system with conserved energy; spatial translation independence implies momentum conservation; angular rotation invariance implies angular momentum conservation.
These examples are global symmetries, where the independence is itself independent of space or time; more general local symmetries having a functional dependence on space or time lead to gauge theory. The observed conservation of isospin was used by Yang Chen-Ning and Robert Mills in 1953 to construct a gauge theory for mesons, leading some decades later to modern particle physics theory.
Distinct principles
Action principles apply to a wide variety of physical problems, including all of fundamental physics. The only major exceptions are cases involving friction or when only the initial position and velocities are given. Different action principles have different meaning for the variations; each specific application of an action principle requires a specific Lagrangian describing the physics. A common name for any or all of these principles is "the principle of least action". For a discussion of the names and historical origin of these principles see action principle names.
Fixed endpoints with conserved energy
When total energy and the endpoints are fixed, Maupertuis's least action principle applies. For example, to score points in basketball the ball must leave the shooter's hand and go through the hoop, but the time of the flight is not constrained. Maupertuis's least action principle is written mathematically as the stationary condition
$$\delta W = 0$$
on the abbreviated action
$$W = \int \mathbf{p} \cdot d\mathbf{q}$$
(sometimes written $S_0$), where $\mathbf{p}$ are the particle momenta or the conjugate momenta of generalized coordinates, defined by the equation
$$p_k = \frac{\partial L}{\partial \dot{q}_k},$$
where $L$ is the Lagrangian. Some textbooks write the variation as $\Delta W$, to emphasize that the variation used in this form of the action principle differs from Hamilton's variation. Here the total energy is fixed during the variation, but not the time, the reverse of the constraints on Hamilton's principle. Consequently, the same path and end points take different times and energies in the two forms. The solutions in the case of this form of Maupertuis's principle are orbits: functions relating coordinates to each other in which time is simply an index or a parameter.
Time-independent potentials; no forces
For a time-invariant system, the action relates simply to the abbreviated action on the stationary path as
$$S = W - E(t_2 - t_1)$$
for energy $E$ and time difference $\Delta t = t_2 - t_1$. For a rigid body with no net force, the actions are identical, and the variational principles become equivalent to Fermat's principle of least time:
$$\delta (t_2 - t_1) = 0.$$
Fixed events
When the physics problem gives the two endpoints as a position and a time, that is as events, Hamilton's action principle applies. For example, imagine planning a trip to the Moon. During your voyage the Moon will continue its orbit around the Earth: it's a moving target. Hamilton's principle for objects at positions $\mathbf{q}(t)$ is written mathematically as
$$\delta S = 0, \quad \text{where } S = \int_{t_1}^{t_2} L(\mathbf{q}, \dot{\mathbf{q}}, t) \, dt.$$
The constraint $\Delta t = t_2 - t_1$ means that we only consider paths taking the same time, as well as connecting the same two points $\mathbf{q}(t_1)$ and $\mathbf{q}(t_2)$. The Lagrangian $L = K - U$ is the difference between kinetic energy and potential energy at each point on the path. Solution of the resulting equations gives the world line $\mathbf{q}(t)$. Starting with Hamilton's principle, the local differential Euler–Lagrange equation can be derived for systems of fixed energy. The action $S$ in Hamilton's principle is the Legendre transformation of the action in Maupertuis' principle.
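For reference, the Euler–Lagrange equation mentioned above takes the following standard form for a single generalized coordinate $q$:

```latex
% Euler-Lagrange equation: the condition delta S = 0 for paths with
% fixed endpoints q(t_1) and q(t_2) implies, at every instant,
\frac{d}{dt}\!\left( \frac{\partial L}{\partial \dot{q}} \right) - \frac{\partial L}{\partial q} = 0
```

For $L = \frac{1}{2}m\dot{q}^2 - U(q)$ this reproduces Newton's second law, $m\ddot{q} = -\,\partial U / \partial q$.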
Classical field theory
The concepts and many of the methods useful for particle mechanics also apply to continuous fields. The action integral runs over a Lagrangian density, but the concepts are so close that the density is often simply called the Lagrangian.
Quantum action principles
For quantum mechanics, the action principles have significant advantages: only one mechanical postulate is needed; if a covariant Lagrangian is used in the action, the result is relativistically correct; and they transition clearly to classical equivalents.
Both Richard Feynman and Julian Schwinger developed quantum action principles based on early work by Paul Dirac. Feynman's integral method was not a variational principle but reduces to the classical least action principle; it led to his Feynman diagrams. Schwinger's differential approach relates infinitesimal amplitude changes to infinitesimal action changes.
Feynman's action principle
When quantum effects are important, new action principles are needed. Instead of a particle following a path, quantum mechanics defines a probability amplitude $\psi(x_k, t)$ at one point $x_k$ and time $t$ related to a probability amplitude at a different point later in time:
$$\psi(x_{k+1}, t + \varepsilon) = \frac{1}{A} \int \exp\!\left( \frac{i S(x_{k+1}, x_k)}{\hbar} \right) \psi(x_k, t) \, dx_k$$
where $S(x_{k+1}, x_k)$ is the classical action.
Instead of a single path with stationary action, all possible paths add (the integral over $x_k$), weighted by a complex probability amplitude $e^{iS/\hbar}$. The phase of the amplitude is given by the action divided by the Planck constant or quantum of action: $S/\hbar$. When the action of a particle is much larger than $\hbar$, $S/\hbar \gg 1$, the phase changes rapidly along the path: the amplitude averages to a small number.
Thus the Planck constant sets the boundary between classical and quantum mechanics.
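A rough order-of-magnitude estimate (our own illustration, not from the source) shows why macroscopic motion is effectively classical. For a thrown ball with $m \approx 0.6\ \mathrm{kg}$ and $v \approx 7\ \mathrm{m/s}$ over $t \approx 1\ \mathrm{s}$:

```latex
% Comparing a macroscopic action with the quantum of action.
S \sim \tfrac{1}{2} m v^2 t \approx \tfrac{1}{2}(0.6)(7)^2(1) \approx 15\ \mathrm{J\,s},
\qquad \frac{S}{\hbar} \approx \frac{15}{1.05 \times 10^{-34}\ \mathrm{J\,s}} \sim 10^{35}
```

With $S/\hbar \sim 10^{35}$, the phase oscillates enormously between neighboring paths, so contributions cancel everywhere except near the stationary-action path.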
All of the paths contribute in the quantum action principle. At the end point, where the paths meet, the paths with similar phases add, and those with phases differing by $\pi$ subtract. Close to the path expected from classical physics, phases tend to align; the tendency is stronger for more massive objects that have larger values of action. In the classical limit, one path dominates: the path of stationary action.
Schwinger's action principle
Schwinger's approach relates variations in the transition amplitudes $\langle q_f, t_f | q_i, t_i \rangle$ to variations in an action matrix element:
$$\delta \langle q_f, t_f | q_i, t_i \rangle = \frac{i}{\hbar} \langle q_f, t_f |\, \delta S \,| q_i, t_i \rangle,$$
where the action operator is
$$S = \int_{t_i}^{t_f} L \, dt.$$
The Schwinger form makes analysis of variation of the Lagrangian itself, for example, variation in potential source strength, especially transparent.
The optico-mechanical analogy
For every path, the action integral builds in value from zero at the starting point to its final value at the end. Any nearby path has similar values at similar distances from the starting point. Lines or surfaces of constant partial action value can be drawn across the paths, creating a wave-like view of the action. Analysis like this connects particle-like rays of geometrical optics with the wavefronts of Huygens–Fresnel principle.
Applications
Action principles are applied to derive differential equations like the Euler–Lagrange equations or as direct applications to physical problems.
Classical mechanics
Action principles can be directly applied to many problems in classical mechanics, e.g. the shape of elastic rods under load,
the shape of a liquid between two vertical plates (a capillary),
or the motion of a pendulum when its support is in motion.
Chemistry
Quantum action principles are used in the quantum theory of atoms in molecules (QTAIM), a way of decomposing the computed electron density of molecules into atoms as a way of gaining insight into chemical bonding.
General relativity
Inspired by Einstein's work on general relativity, the renowned mathematician David Hilbert applied the principle of least action to derive the field equations of general relativity. His action, now known as the Einstein–Hilbert action,
$$S = \frac{1}{2\kappa} \int R \sqrt{-g} \, d^4x,$$
contained a relativistically invariant volume element $\sqrt{-g} \, d^4x$ and the Ricci scalar curvature $R$. The scale factor $\kappa$ is the Einstein gravitational constant.
Other applications
The action principle is so central in modern physics and mathematics that it is widely applied including in thermodynamics, fluid mechanics, the theory of relativity, quantum mechanics, particle physics, and string theory.
History
The action principle is preceded by earlier ideas in optics. In ancient Greece, Euclid wrote in his Catoptrica that, for the path of light reflecting from a mirror, the angle of incidence equals the angle of reflection. Hero of Alexandria later showed that this path has the shortest length and least time.
Building on the early work of Pierre Louis Maupertuis, Leonhard Euler, and Joseph-Louis Lagrange defining versions of the principle of least action,
William Rowan Hamilton and, in tandem, Carl Gustav Jacob Jacobi developed a variational form for classical mechanics known as the Hamilton–Jacobi equation.
In 1915, David Hilbert applied the variational principle to derive Albert Einstein's equations of general relativity.
In 1933, the physicist Paul Dirac demonstrated how this principle can be used in quantum calculations by discerning the quantum mechanical underpinning of the principle in the quantum interference of amplitudes. Subsequently Julian Schwinger and Richard Feynman independently applied this principle in quantum electrodynamics.
References
Dynamics (mechanics)
Classical mechanics | Action principles | [
"Physics"
] | 2,939 | [
"Physical phenomena",
"Classical mechanics",
"Motion (physics)",
"Mechanics",
"Dynamics (mechanics)"
] |
76,126,836 | https://en.wikipedia.org/wiki/Hellings-Downs%20curve | The Hellings-Downs curve (also known as the Hellings and Downs curve) is a theoretical tool used to establish the telltale signature that a galactic-scale pulsar timing array has detected gravitational waves, typically with wavelengths on the order of light-years. The method entails searching for spatial correlations of the timing residuals from pairs of pulsars and comparing the data with the Hellings-Downs curve. When the data fit exceeds the standard 5 sigma threshold, the pulsar timing array can declare detection of gravitational waves. More precisely, the Hellings-Downs curve is the expected correlation of the timing residuals from pairs of pulsars as a function of their angular separation on the sky as seen from Earth. This theoretical correlation function assumes Einstein's general relativity and a gravitational wave background that is isotropic.
Pulsar timing array residuals
Albert Einstein's theory of general relativity predicts that a mass will deform spacetime causing gravitational waves to emanate outward from the source. These gravitational waves will affect the travel time of any light that interacts with them. A pulsar timing residual is the difference between the expected time of arrival and the observed time of arrival of light from pulsars. Because pulsars flash with such a consistent rhythm, it is hypothesised that if a gravitational wave is present, a specific pattern may be observed in the timing residuals from pairs of pulsars. The Hellings-Downs curve is used to infer the presence of gravitational waves by finding patterns of angular correlations in the timing residual data of different pulsar pairings. More precisely, the expected correlations on the vertical axis of the Hellings-Downs curve are the expected values of pulsar-pairs correlations averaged over all pulsar-pairs with the same angular separation and over gravitational-wave sources very far away with noninterfering random phases. Pulsar timing residuals are measured using pulsar timing arrays.
History
Not long after the first suggestions of pulsars being used for gravitational wave detection in the late 1970s, Donald Backer discovered the first millisecond pulsar in 1982. The following year Ron Hellings and George Downs published the foundations of the Hellings-Downs curve in their 1983 paper "Upper Limits on the Isotropic Gravitational Radiation Background from Pulsar Timing Analysis". Donald Backer would later go on to become one of the founders of the North American Nanohertz Observatory for Gravitational Waves (NANOGrav).
Examples in the scientific literature
In 2023, NANOGrav used pulsar timing array data collected over 15 years in their latest publications supporting the existence of a gravitational wave background. A total of 2,211 millisecond pulsar pair combinations (67 individual pulsars) were used by the NANOGrav team to construct their Hellings-Downs plot comparison. The NANOGrav team wrote that "The observation of Hellings–Downs correlations points to the gravitational-wave origin of this signal." The Hellings-Downs curve has also been referred to as the "smoking gun" or "fingerprint" of the gravitational-wave background. These examples highlight the critical role that the Hellings-Downs curve plays in contemporary gravitational wave research.
Equation of the Hellings-Downs curve
Reardon et al. (2023) from the Parkes pulsar timing array team give the following equation for the Hellings-Downs curve, which in the literature is also called the overlap reduction function:
$$\Gamma_{ab}(\zeta) = \frac{3}{2} x_{ab} \ln x_{ab} - \frac{x_{ab}}{4} + \frac{1}{2} + \frac{1}{2} \delta_{ab}$$
where:
$x_{ab} = \left( 1 - \cos \zeta_{ab} \right) / 2$,
$\delta_{ab}$ is the Kronecker delta function,
$\zeta_{ab}$ represents the angle of separation between the two pulsars $a$ and $b$ as seen from Earth,
$\Gamma_{ab}$ is the expected angular correlation function.
This curve assumes an isotropic gravitational wave background that obeys Einstein's general relativity. It is valid for "long-arm" detectors like pulsar timing arrays, where the wavelengths of typical gravitational waves are much shorter than the "long-arm" distance between Earth and typical pulsars.
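A short numerical sketch (illustrative only; the function and variable names are our own) shows how the expression above can be evaluated for pairs of pulsars:

```python
import math

def hellings_downs(zeta_rad: float, same_pulsar: bool = False) -> float:
    """Expected correlation for a pulsar pair separated by angle zeta,
    per the equation above: 1.5*x*ln(x) - x/4 + 1/2 (+ 1/2 if a == b),
    with x = (1 - cos(zeta)) / 2."""
    x = (1.0 - math.cos(zeta_rad)) / 2.0
    # As zeta -> 0, x -> 0 and x*ln(x) -> 0, leaving 1/2.
    gamma = 0.5 if x == 0.0 else 1.5 * x * math.log(x) - x / 4.0 + 0.5
    return gamma + 0.5 if same_pulsar else gamma

# The curve starts at 0.5 for distinct pulsars at zero separation,
# dips negative near 90 degrees, and rises again toward 180 degrees.
for deg in (0, 45, 90, 135, 180):
    print(deg, round(hellings_downs(math.radians(deg)), 3))
```

The characteristic dip and partial recovery of this function across angular separations is the quadrupolar pattern that pulsar timing arrays search for in their cross-correlated residuals.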
References
External links
Pulsars
Functions of space and time
Equations of astronomy | Hellings-Downs curve | [
"Physics",
"Astronomy"
] | 828 | [
"Concepts in astronomy",
"Spacetime",
"Functions of space and time",
"Equations of astronomy"
] |
76,128,123 | https://en.wikipedia.org/wiki/Driver%20safety%20arms%20race | The driver safety arms race is a phenomenon whereby car drivers are incentivized to buy larger vehicles in order to protect themselves against other large vehicles. This has a spiralling effect whereby cars get increasingly larger, which has adverse overall effects on traffic safety. It is an example of a prisoner's dilemma, as it can be individually rational to acquire a larger vehicle even though the outcome is adverse for all traffic users.
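A minimal payoff matrix (purely illustrative numbers, not from the source) makes the prisoner's-dilemma structure explicit: whichever choice the other driver makes, buying the larger car yields the higher individual payoff, yet mutual escalation leaves both drivers worse off than mutual restraint:

```latex
% Illustrative safety payoffs (row player, column player).
% "Large car" strictly dominates "Small car" for each driver,
% but (Large, Large) is worse for both than (Small, Small).
\begin{array}{c|cc}
 & \text{Small car} & \text{Large car} \\ \hline
\text{Small car} & (3,\,3) & (1,\,4) \\
\text{Large car} & (4,\,1) & (2,\,2)
\end{array}
```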
References
Road safety
Moral psychology
Social psychology | Driver safety arms race | [
"Physics"
] | 92 | [
"Physical systems",
"Transport",
"Transport stubs"
] |
76,129,348 | https://en.wikipedia.org/wiki/Emadike%20Shoreline%20project | The Emadike Shoreline project is an ecological intervention effort of the Federal Government of Nigeria to address the coastal and flooding challenges at Emadike and Epebu communities in the Ogbia Local Government Area of Bayelsa State, Nigeria. The project, which was executed through the Ecological Project Office of the Federal Government of Nigeria, involved an 850 m shoreline protection using the sheet pile method to address ocean surge, while also sand-filling up to 16 hectares of land as a reclamation strategy to arrest flood erosion in the community.
History of the Project
The Emadike Shoreline project was awarded in 2022 under the President Muhammadu Buhari administration of the Federal Government of Nigeria. The project was awarded to help address the perennial flooding issues facing the communities over the years.
References
Projects in Africa
Flood control | Emadike Shoreline project | [
"Chemistry",
"Engineering"
] | 162 | [
"Flood control",
"Environmental engineering"
] |
77,567,287 | https://en.wikipedia.org/wiki/Lichenostigma%20svandae | Lichenostigma svandae is a species of lichenicolous (lichen-dwelling) fungus in the family Phaeococcomycetaceae. It was described as a new species in 2007 by Jan Vondrák and Jaroslav Šoun. The authors collected the type specimen from a limestone hill in the protected area Karadagskyj Zapovednik (Feodosia, Crimea) at an elevation of about ; it was growing on the thalli and apothecia (fruiting bodies) of the crustose lichen species Acarospora cervina, which was itself growing on sun-exposed limestone rock. The species epithet honours bus driver Jaroslav Švanda, who drove the bus to the excursion where the type was collected.
References
Arthoniomycetes
Fungus species
Fungi described in 2007
Fungi of Europe
Lichenicolous fungi
Taxa named by Jan Vondrák | Lichenostigma svandae | [
"Biology"
] | 186 | [
"Fungi",
"Fungus species"
] |
60,964,611 | https://en.wikipedia.org/wiki/Plasmodium%20helical%20interspersed%20subtelomeric%20protein | The Plasmodium helical interspersed subtelomeric proteins (PHIST) or ring-infected erythrocyte surface antigens (RESA) are a family of protein domains found in the malaria-causing Plasmodium species. It was initially identified as a short four-helical conserved region in the single-domain export proteins, but the identification of this part associated with a DnaJ domain in P. falciparum RESA (named after the ring stage of the parasite) has led to its reclassification as the RESA N-terminal domain. This domain has been classified into three subfamilies, PHISTa, PHISTb, and PHISTc.
The PHIST proteins are exported to the cytoplasm of the infected erythrocyte. The human malaria parasites P. falciparum and P. vivax have shown a lineage-specific expansion of proteins with this domain. Of the two PHIST genes in the mouse parasite P. berghei, only one is required for infection. The PHIST domain folds into three long helices (forming a bundle) and two smaller N-terminal helices, and is monomeric in solution. It binds PfEMP1 ATS C-terminus and plays a role in "knob" formation.
RESA
The full RESA protein in P. falciparum also contains a few other domains, namely the DnaJ domain and the DnaJ-associated X domain. A part of the X-domain, RESA/P13830 residues 663-670, appears to bind and reinforce the spectrin cytoskeleton so that each erythrocyte only hosts one parasite.
P. falciparum isolate 3D7 encodes three RESA-family proteins, RESA-1 (P13830//PF3D7_0102200), RESA-2 (M91672.1//PF3D7_1149500), and RESA-3 (/PF3D7_1149200). RESA-2 is usually considered a transcribed pseudogene due to a premature stop codon. However, a missense mutation T1526G or T1526C in RESA-2 that removes this stop codon is commonly found. It is associated with increased severity of disease.
Notes
References
Protein domains
Helical interspersed subtelomeric protein
Antigens
Apicomplexan proteins | Plasmodium helical interspersed subtelomeric protein | [
"Chemistry",
"Biology"
] | 516 | [
"Antigens",
"Protein domains",
"Biomolecules",
"Protein classification"
] |
60,974,262 | https://en.wikipedia.org/wiki/Breeding%20blanket | The tritium breeding blanket (also known as a fusion blanket, lithium blanket or simply blanket), is a key part of many proposed fusion reactor designs. It serves several purposes; primarily it is to produce (or "breed") further tritium fuel for the nuclear fusion reaction, which owing to the scarcity of tritium would not be available in sufficient quantities, through the reaction of neutrons with lithium in the blanket. The blanket may also act as a cooling mechanism, absorbing the energy from the neutrons produced by the reaction between deuterium and tritium ("D-T"), and further serves as shielding, preventing the high-energy neutrons from escaping to the area outside the reactor and protecting the more radiation-susceptible portions, such as ohmic or superconducting magnets, from damage.
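The underlying nuclear reactions are standard (not specific to any one blanket design): the D-T reaction supplies a fast neutron, and either lithium isotope can convert that neutron into new tritium:

```latex
% D-T fusion supplies the 14.1 MeV neutron that drives breeding:
\mathrm{D} + \mathrm{T} \longrightarrow {}^{4}\mathrm{He}\ (3.5\ \mathrm{MeV}) + n\ (14.1\ \mathrm{MeV})
% Tritium breeding on the two lithium isotopes in the blanket:
n + {}^{6}\mathrm{Li} \longrightarrow \mathrm{T} + {}^{4}\mathrm{He} \qquad \text{(exothermic, favored for slow neutrons)}
n + {}^{7}\mathrm{Li} \longrightarrow \mathrm{T} + {}^{4}\mathrm{He} + n' \qquad \text{(endothermic, requires a fast neutron)}
```

Because the 7Li reaction re-emits a neutron, it can in principle yield more than one tritium atom per fusion neutron, which matters for achieving a tritium breeding ratio above one.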
Of these three duties, it is only the breeding portion that cannot be replaced by other means. For instance, a large quantity of water makes an excellent cooling system and neutron shield, as in the case of a conventional nuclear reactor. However, tritium is not a naturally occurring resource, and thus is difficult to obtain in sufficient quantity to run a reactor through other means, so if commercial fusion using the D-T cycle is to be achieved, successful breeding of the tritium in commercial quantities is a requirement.
ITER runs a major effort in blanket design and will test a number of potential solutions. Concepts for the breeder blanket include helium-cooled lithium lead (HCLL), helium-cooled pebble bed (HCPB), and water-cooled lithium lead (WCLL) methods. Six different tritium breeding systems, known as Test Blanket Modules (TBM) will be tested in ITER.
Some breeding blanket designs are based on lithium containing ceramics, with a focus on lithium titanate and lithium orthosilicate. These materials, mostly in a pebble form, are used to produce and extract tritium and helium; must withstand high mechanical and thermal loads; and should not become excessively radioactive upon completion of their useful service life.
To date no large-scale breeding system has been attempted, and it is an open question whether such a system is possible to create.
A fast breeder reactor uses a blanket of uranium or thorium.
References
External links
Nuclear fusion
Lithium | Breeding blanket | [
"Physics",
"Chemistry"
] | 470 | [
"Nuclear fission",
"Physical quantities",
"Nuclear power",
"Plasma physics",
"Fusion power",
"Power (physics)",
"Nuclear energy",
"Nuclear physics",
"Nuclear fusion",
"Radioactivity"
] |
70,338,934 | https://en.wikipedia.org/wiki/Kepler-289 | Kepler-289 (PH3) is a rotating variable star slightly more massive than the Sun, with an unknown spectral type, 2370 light-years away from Earth in the constellation of Cygnus. In 2014, three exoplanets were discovered orbiting it.
Planetary system
Kepler-289 hosts four planets, three confirmed (Kepler-289b, Kepler-289c, Kepler-289d) and one unconfirmed candidate (Kepler-289e). The discovery of this system was made using the transit method. The inner three planets were found in 2014 with the Kepler space telescope and the Planet Hunters team, while planet e was discovered by follow-up studies in 2017.
References
Cygnus (constellation)
Planetary systems with three confirmed planets
J19495168+4252582
273234825 | Kepler-289 | [
"Astronomy"
] | 168 | [
"Cygnus (constellation)",
"Constellations"
] |
70,342,229 | https://en.wikipedia.org/wiki/Allium%20pervestitum | Allium pervestitum is a species of wild garlic in the family Amaryllidaceae, mainly found growing in the coastal area of the Sea of Azov. It is a halophyte.
References
pervestitum
Halophytes
Flora of Ukraine
Flora of the Crimean Peninsula
Flora of South European Russia
Plants described in 1950 | Allium pervestitum | [
"Chemistry"
] | 68 | [
"Halophytes",
"Salts"
] |
70,342,891 | https://en.wikipedia.org/wiki/Apiotrichum%20mycotoxinivorans | Apiotrichum mycotoxinivorans (synonym Trichosporon mycotoxinivorans) is a yeast species purportedly useful in the detoxification of various mycotoxins. It was first isolated from the hindgut of the termite Mastotermes darwiniensis. It has been shown to detoxify mycotoxins such as ochratoxin A and zearalenone. It can occasionally become a human pathogen.
References
Further reading
Tremellomycetes
Fungal pathogens of humans
Fungus species | Apiotrichum mycotoxinivorans | [
"Biology"
] | 112 | [
"Fungi",
"Fungus species"
] |
70,343,570 | https://en.wikipedia.org/wiki/Tim%20Hawarden | Timothy George Hawarden (24 December 1943 – 10 November 2009) was a South African astrophysicist known for his pioneering work on passive cooling techniques for space telescopes for which he won NASA's Exceptional Technology Achievement Medal.
Biography
Hawarden was born in Mossel Bay, Cape Province, South Africa. He graduated from the University of Natal in 1966 with a BSc in Physics and Applied Mathematics, and then graduated from the University of Cape Town with an MSc in Astronomy in 1970 and a PhD in 1975 on old open clusters. While undertaking his PhD he worked as an optical astronomer at the Royal Observatory, Cape of Good Hope and then from 1972 as the Deputy Head of the Photometry Department at the South African Astronomical Observatory in Cape Town. In 1975 he worked as the Deputy Astronomer-in-Charge of the UK Schmidt Telescope at the Siding Spring Observatory in New South Wales, Australia.
In 1978 he moved to work at the Royal Observatory in Edinburgh, Scotland, from which he was based for the rest of his career. In 1981 he began working on the United Kingdom Infrared Telescope in Hawaii. In 1987 he moved to Hawaii and led the telescope's ambitious upgrades programme throughout the 1990s. He returned to Edinburgh in 2001 and became the UK Astronomy Technology Centre Project Scientist developing extremely large telescopes (ELT) before retiring in 2006 to care for his wife Frances. He remained active in the field of astronomy until his sudden death in Edinburgh in 2009.
Passive cooling of space telescopes
Hawarden was involved in the development of the Infrared Space Observatory as the Co-Investigator for the infrared camera (ISOCAM) but he considered the cryogenic cooling system "horrendously complicated". The dependency of infrared space telescopes on cryogenic cooling limited the telescope's lifespan as well as adding significant weight. In the early 1980s Hawarden began developing the idea of using passive cooling for infrared space telescopes through a combination of radiators, sunshields, and by locating the telescope further from Earth. Having a telescope orbit the Sun–Earth L2 Lagrange point enables the sunshield to shelter the telescope from the radiant heat of the Sun, the Earth, and the Moon. A passively cooled telescope is significantly lighter and permits much larger optics and instruments.
In 1989 Hawarden proposed such a telescope, the Passively Cooled Orbiting Infrared Observatory Telescope (POIROT), to the European Space Agency but the design was rejected. In 1991 Hawarden and Harley Thronson proposed a similar design to NASA for the Edison project but the proposal was also rejected. The ideas continued to face resistance, though some passive cooling was incorporated into the design of the Spitzer Space Telescope launched in 2003. The ideas were later adopted in full for the James Webb Space Telescope launched in 2021.
In 2010 Hawarden was posthumously awarded the NASA Exceptional Technology Achievement Medal for his work on passive cooling techniques, the award citing "the breakthrough concepts that made possible the James Webb Space Telescope and its successors". The award was accepted on behalf of Hawarden's widow Frances by the Nobel-laureate physicist John C. Mather.
References
Astrophysicists
1943 births
2009 deaths
South African astronomers
Fellows of the Royal Astronomical Society
University of Cape Town alumni
South African emigrants to the United Kingdom
People from Mossel Bay
20th-century astronomers
21st-century astronomers | Tim Hawarden | [
"Physics"
] | 673 | [
"Astrophysicists",
"Astrophysics"
] |
73,212,538 | https://en.wikipedia.org/wiki/Bating%20%28leather%29 | Bating is a technical term used in the tanning industry to denote leather that has been treated with hen or pigeon manure, similar to puering (see puer) where the leather has been treated with dog excrement, and which treatment, in both cases, was performed on the raw hide prior to tanning in order to render the skins, and the subsequent leather, soft and supple. Today, both practices are obsolete and have been replaced in the tanneries with other natural proteolytic enzymes.
Leather processing
Since early times, tanners have made use of either dog fæces, or hen and pigeon manure, in one of the early phases of leather treatment to produce a soft leather. A bath solution containing the animal extracts was made and the raw hide inserted and left there for a few days, which activated the bacteria and enzymes that reacted with the collagen in the animal skin to make the leather soft and supple. This step was followed by drenching, a term denoting skins that were thoroughly washed in a bath solution of bran (usually of barley or rye), or ash bark. This process was thought to open up the fibre, and, if lime (CaO) was used to remove hair before the actual bating, drenching removed excess or residual lime trapped in the leather.
Early inventors who concerned themselves with tanning looked upon bating as a process for removing lime from the skins, and nothing more, and since the use of animal fæces was repulsive, sought to substitute them by inventing artificial bates. What they failed to realize, however, was that bating also acts upon the skin fibres, rendering portions of the skins soluble, bringing about the finished condition. One of the early inventions made to replicate bating was the chemical use of old lime liquors (with high levels of ammonia) neutralized with sulphuric acid. This method more nearly approximates the conditions of the dung.
Experimentation and research
Puering fell into disuse after the enzyme pancreatin began to be produced on an industrial scale between 1895 and 1897. By 1907, it was used by Otto Röhm in the tannery. J.T. Wood, investigating the microbial properties of dog fæces, was able to isolate species of different bacteria, determining that aged dog fæces was more potent (hence, more efficacious) than fresh dog fæces. The bacteria that settle on the excrement release, under the right conditions, the principal enzyme trypsin.
Natural bates
Primitive tanning methods differed from country to country, but the use of puering and bating was not prevalent in all of them, as tanners had moved away from their use and employed vegetable tanning which achieved nearly the same result. In western societies, modern tanning techniques tried to replicate the effect of puering and bating by using a natural bate. Papain, the active proteolytic enzyme found in the latex taken from the skin of the papaya fruit (Carica papaya), is thought to replicate the action of traditional puering and bating. The protein-digesting enzyme is now used extensively in the leather industry, and follows the dehairing of the animal skin, usually with lime and other proteolytic enzymes, and the deliming of the animal hide with mineral acid. This process is thought to release traces of lime still trapped in the hide after the deliming process, in addition to removing unwanted grease, besides aiding in the subsequent tanning process by the alteration of protein.
Today, in the modern tanning industry, where almost all innovations have been made by substituting vegetable tanning agents with chemical agents, bating is the only step in leather processing where the enzymatic process cannot be substituted by chemical processes, as bating gives certain desired characteristics to the finished leather. Large-scale use of microbial enzymes, following the introduction of fermentation technology, has become standard in the tanning industry.
Enzymatic soaking of the raw hides has been shown to loosen the scud, initiate the opening of the fibre structure, and to render a leather product with less wrinkled grain when used at an alkaline pH of less than 10.5. In rabbit skins it improves the softness and elasticity, and increases the surface area yield of the fur by 3.3%. Bating also acts to hydrolyze casein, elastin, albumin, globulin-like proteins, and nonstructural proteins that are not essential for leather making.
Primitive practices
One of the earliest references to puering is found in the old rabbinic Minor tractate, Kallah Rabbati (end of chapter 7): "What is the reason that dogs were privileged to have books of the Law and doorpost scripts prepared from their excrement? It is because it says [of them]: 'not a dog shall bark against any of the people of Israel' (Exo. 11:7)." A record of primitive tanning bequeathed in the 12th century by Abraham ben Isaac of Narbonne (1085–1158) mentions the tanning method employed in his day, in southern France, where the treatment of the rawhide by puering was still in use and done after the hairs of the animal were removed by lime in preparation for writing a Torah scroll and the hide had once again become stiff:
After taking dry [sheep]-skins whose wool had been soaked [in lime water for removal], they leave them in the water for the duration of time needed for them to become soft [=soaking]. Afterwards, they put them inside a pit made for them, and they put therein a little dog fæces, having no prescribed quantity [=puering], and a little salt [is added thereto], and then they seal the mouth of the pit, leaving it there for one day in summer months, and three days in winter months, no longer [than the duration of that time], so that they be not eaten up. They then remove them and check them for holes, and if there be a hole found, they sew it, and then lay them out over a wooden frame that is prepared in advance [for this purpose] and they rinse them thoroughly with running water [=drenching], and then bring out a heaping batch of gallnuts which they then pound or grind thoroughly. They then put on each sheet of leather three litres of the Baghdad measure, and plaster thereon the gallnuts, over its two sides, and sprinkle a little water over them, and they put more gallnuts on that side of the leather where the hairs once were (grain layer) than what they do on the flesh-side [of the leather], doing likewise with each sheet of leather, the application [of gallnuts] made twice daily, while, on the third application, they once more plaster with what remains of the gallnuts [onto the leather] and lay it out in the sun, for the duration of time that it takes for it to whiten, leaving it in that state until it dries [=tanning]. They afterwards shake-off the excess gallnuts and then cut the leather.
Tanners in Egypt in the 12th century and in Yemen of late made use of different methods in varying degrees, yet without the use of puering and bating, and without the use of gallnuts. Rather, after soaking and fleshing, tanners utilized the tannins found in the ground leaves and crushed tender stems of Acacia (Acacia etbaica and Acacia nilotica kraussiana), with which a bath solution was made and the raw hides inserted and left there for about two weeks, constantly stirring and changing the water after one week. In some places in Yemen, the leaves of African rue (Peganum harmala) were used instead of Acacia leaves. In Yemen and Ethiopia, castor-bean oil derived from the castor plant (Ricinus communis) was applied by some tanners to the finished leather product which gave additional softness and suppleness to the leather.
References
Notes
Bibliography
(reprinted in 2015, )
Further reading
External links
The Glasgow Herald, p. 6 ("Chemistry: Leather Manufacture, its Scientific Aspect", by A.E. Caunce), 14 September 1923
Another Important Role Played by Enzymes in Bating, by J.A. Wilson & H.B. Merrill. February 1926
Leathermaking
Manufacturing
Microbiology techniques
Proteases | Bating (leather) | [
"Chemistry",
"Engineering",
"Biology"
] | 1,763 | [
"Microbiology techniques",
"Manufacturing",
"Mechanical engineering"
] |
73,216,226 | https://en.wikipedia.org/wiki/Metallaborane | In chemistry, a metalloborane is a compound that contains one or more metal atoms and one or more boron hydride units. These compounds are related conceptually and often synthetically to the boron-hydride clusters by replacement of BHn units with metal-containing fragments. Often these metal fragments are derived from metal carbonyls or cyclopentadienyl complexes. Their structures can often be rationalized by polyhedral skeletal electron pair theory. The inventory of these compounds is large, and their structures can be quite complex.
Examples
Two simple examples are Fe(CO)3B4H8 and its cobalt analogue (C5H5)CoB4H8. The MB4 cores (M = Fe or Co) of these two compounds adopt structures expected for nido 5-vertex clusters. The iron compound is produced by reaction of diiron nonacarbonyl with pentaborane. Fe(CO)3B4H8 and cyclobutadieneiron tricarbonyl have similar structures.
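The rationalization by polyhedral skeletal electron pair theory works by electron counting. The following is a sketch of the standard Wade–Mingos bookkeeping for a 5-vertex MB4 cage of the kind discussed here (our own worked count, not taken from the source):

```latex
% Skeletal electron count for a (CO)3Fe(B4H8)-type cluster
% (5 vertices: Fe + 4 B). Each BH vertex contributes 2 skeletal
% electrons, each additional H contributes 1, and an Fe(CO)3
% fragment contributes v + x - 12 = 8 + 6 - 12 = 2.
4\,(\mathrm{BH}) \times 2 \;+\; 4\,(\mathrm{H}) \times 1 \;+\; \mathrm{Fe(CO)_3} \times 2 \;=\; 14\ \text{electrons} \;=\; 7\ \text{pairs}
% With n = 5 vertices, 7 = n + 2 skeletal pairs predicts a nido
% geometry, matching the structures described above.
```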
Metallacarboranes
Even greater in scope than metalloboranes are metallacarboranes. These cages have carbon vertices, often CH, in addition to BH and M vertices. A well-developed class of metallacarboranes are prepared from dicarbollides, anions of the formula [C2B9H11]2-. These anions function as ligands for a variety of metals, often forming sandwich complexes.
Some metalloboranes are derived by the metalation of neutral carboranes. Illustrative are the six- and seven-vertex cages prepared from closo-. Reaction of this carborane with iron carbonyl sources gives closo Fe- and Fe2-containing products, according to these idealized equations:
A further example of insertion into a closo carborane is the synthesis of the yellow-orange solid closo-1,2,3-:
A closely related reaction involves the capping of an anionic nido carborane
The last reaction is worked up with acid and air.
References
Cluster chemistry | Metallaborane | [
"Chemistry"
] | 408 | [
"Cluster chemistry",
"Organometallic chemistry"
] |
47,634,064 | https://en.wikipedia.org/wiki/C5H7NO | The molecular formula C5H7NO (molar mass: 97.11 g/mol, exact mass: 97.0528 u) may refer to:
Furfurylamine
Oxazepine | C5H7NO | [
"Chemistry"
] | 57 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
47,634,311 | https://en.wikipedia.org/wiki/Balliol-Trinity%20Laboratories | The Balliol-Trinity Laboratories in Oxford, England, was an early chemistry laboratory at the University of Oxford.
The laboratory was located between Balliol College and Trinity College, hence the name. It was especially known for physical chemistry.
Chemistry was first recognized as a separate discipline at Oxford University in the 19th century. From 1855, a chemistry laboratory existed in a basement at Balliol College. In 1879, Balliol and Trinity agreed to have a laboratory at the boundary of the two colleges. The laboratory became the strongest of the Oxford college research institutions in chemistry. It remained in operation until the Second World War when a new Physical Chemistry Laboratory (PCL) was constructed by Oxford University in the Science Area.
People
The following scientists of note worked in the Balliol-Trinity Laboratories:
E. J. Bowen
Sir John Conroy
Sir Harold Hartley
Sir Cyril Norman Hinshelwood (Nobel Prize winner)
Henry Moseley
See also
Abbot's Kitchen, Oxford, another early chemistry laboratory in Oxford
Department of Chemistry, University of Oxford
Physical Chemistry Laboratory, which replaced the Balliol-Trinity Laboratories
References
1879 establishments in England
1940 disestablishments in England
Buildings and structures completed in 1879
Buildings and structures of the University of Oxford
History of the University of Oxford
University and college laboratories in the United Kingdom
Chemistry laboratories
Demolished buildings and structures in Oxfordshire
Balliol College, Oxford
Trinity College, Oxford
Physical chemistry | Balliol-Trinity Laboratories | [
"Physics",
"Chemistry"
] | 280 | [
"Chemistry laboratories",
"Applied and interdisciplinary physics",
"nan",
"Physical chemistry",
"Physical chemistry stubs",
"Chemistry organization stubs"
] |
47,634,858 | https://en.wikipedia.org/wiki/Meltwater%20pulse%201B | Meltwater pulse 1B (MWP1b) is the name used by Quaternary geologists, paleoclimatologists, and oceanographers for a period of either rapid or just accelerated post-glacial sea level rise that some hypothesize to have occurred between 11,500 and 11,200 years ago at the beginning of the Holocene and after the end of the Younger Dryas. Meltwater pulse 1B is also known as catastrophic rise event 2 (CRE2) in the Caribbean Sea.
Other named, postglacial meltwater pulses are known most commonly as meltwater pulse 1A0 (meltwater pulse 19ka), meltwater pulse 1A, meltwater pulse 1C, meltwater pulse 1D, and meltwater pulse 2. It and these other periods of proposed rapid sea level rise are known as meltwater pulses because their inferred cause was the rapid release of meltwater into the oceans from the collapse of continental ice sheets.
Sea level
There is considerable unresolved disagreement over the significance, timing, magnitude, and even existence of meltwater pulse 1B. It was first recognized by Richard G. Fairbanks in his coral reef studies in Barbados. From the analysis of data from cores of coral reefs surrounding Barbados, he concluded that during meltwater pulse 1B sea level rose rapidly over about 500 years, about 11,300 years ago.
However, in 1996 and 2010, Bard and others published detailed analyses of data from cores from coral reefs surrounding Tahiti. They concluded that meltwater pulse 1B was, at best, just an acceleration of sea level rise at about 11,300 years ago and it was, at worst, not statistically different from a constant rate of sea level rise between 11,500 and 10,200 years ago. They argued that meltwater pulse 1B was certainly not an abrupt jump in sea level, which they would consider to be a meltwater pulse. They also argued that the rise in sea level estimated by Fairbanks from cores is an artifact created by differential tectonic uplift between different sides of a tectonic structure lying between the two Barbados cores used to identify meltwater pulse 1B and calculate its magnitude.
Other differing estimates about the magnitude of meltwater pulse 1B have been published. In 2010, Stanford and others found it to be "robustly expressed" as a multi-millennial interval of enhanced rates of sea-level rise between 11,500 and 8,800 years ago with peak rates of rise of up to 25 mm/yr. In 2004, Liu and Milliman reexamined the original data from Barbados and Tahiti and reconsidered the mechanics and sedimentology of reef drowning by sea level rise. They concluded that meltwater pulse 1B occurred between 11,500 and 11,200 years ago, a 300-year interval during which sea level rose at a mean annual rate of around 40 mm/yr. Other studies have revised the estimated magnitude of meltwater pulse 1B downward.
Source(s) of meltwater pulse 1B
Given the disagreement over its timing, magnitude, and even existence, it has been very difficult to constrain the source of meltwater pulse 1B. In his modeling of global glacial isostatic adjustment, Peltier assumed that the predominant source for MWP-1B was the Antarctic Ice Sheet. However, no justification for this assumption is provided in his papers. In addition, Leventer and others argue that the timing of deglaciation in eastern Antarctica roughly coincides with the onset of meltwater pulse 1B and the Antarctic Ice Sheet is a likely source. Finally, McKay and others suggested that recession of the West Antarctic Ice Sheet may have supplied the meltwater needed to start meltwater pulse 1B.
However, later studies involving the surface exposure dating of glacial erratics, nunataks, and other formerly glaciated exposures using cosmogenic dating contradicted the above arguments and assumptions. These studies tentatively concluded that the actual amount of thinning of the East Antarctic Ice Sheet is too small, and likely too gradual and too late, to have contributed any significant amount of meltwater to meltwater pulse 1B. They also concluded that ice sheet retreat and thinning accelerated for the West Antarctic Ice Sheet only after 7,000 years ago. Although other researchers have concluded that the abrupt decay of the Laurentide Ice Sheet might have been sufficient to have been responsible for meltwater pulse 1B, its sources remain an unresolved mystery. However, recent research in West Antarctica found that sufficient deglaciation contemporaneous with meltwater pulse 1B occurred to readily explain this rapid period of global sea level rise.
Mississippi River superflood events MWF-5
A variety of paleoclimate and paleohydrologic proxies, which can be used to reconstruct the prehistoric discharge of the Mississippi River, can be found in the sediments of the Louisiana continental shelf and slope, including the Orca and Pygmy basins, within the Gulf of Mexico. These proxies have been used by Quaternary geologists, paleoclimatologists, and oceanographers to reconstruct both the duration and the discharge at the mouth of the prehistoric Mississippi River for the Late glacial and postglacial periods, including the time of meltwater pulse 1B. The chronology of flooding events found by the study of cores on the Louisiana continental shelf and slope is in agreement with the timing of meltwater pulses. For example, meltwater pulse 1A in the Barbados coral record matches quite well with a group of two separate Mississippi River meltwater flood events, MWF-3 (12,600) and MWF-4 (11,900). In addition, meltwater pulse 1B in the Barbados coral record matches a cluster of four Mississippi River superflood events, MWF-5, that occurred between 9,900 and 9,100. In 2003, Aharon reported that flood event MWF-5 consists of four separate and distinct superfloods at 9,970-9,870; 9,740-9,660; 9,450-9,290; and 9,160-8,900. The discharge at the mouth of the Mississippi River during three of the four superfloods of MWF-5 is estimated to have varied between 0.07 and 0.08 sverdrups (million cubic meters per second). The superflood at 9,450-9,290 is estimated to have had a discharge of 0.10 sverdrups. This research also shows that the Mississippi superfloods of MWF-5 occurred during the Preboreal. The same research found an absence of either meltwater floods or superfloods discharging into the Gulf of Mexico from the Mississippi River during the preceding thousand years, known as the cessation event, which corresponds with the Younger Dryas stadial.
The Pleistocene deposits blanketing the Louisiana Continental shelf and slope between the mouth of the Mississippi River and Orca and Pygmy basins largely consist of sediments transported down the Mississippi River mixed with variable additions of local biologically generated carbonate. Because of this, the provenance of the meltwater and superfloods can be readily inferred from the sediment's composition. The composition of the sediments brought into the Gulf of Mexico and deposited on the Louisiana continental shelf and slope during the superfloods of MWF-5 reflect an abrupt change in mineralogy, fossil content, organic matter, and amount after 12,900 years ago at the start of the Younger Dryas interval.
First, after 12,900 years ago, smectite-rich sediments from the Missouri River drainage are progressively and quickly replaced by sediments associated with the Great Lakes region and further south along the Mississippi River, as indicated by their clay mineralogy. Second, after 12,900 years ago, the overall quantity of sediment being transported down the Mississippi River abruptly decreases with a corresponding and significantly increased proportion of locally produced biologically generated carbonate and organic matter. Third, after 12,900 years ago, various analyses, e.g. C/N ratio and Rock–Eval pyrolysis, indicate that the type of organic matter present changes from organic matter that was reworked from old formations by glacial erosion to well-preserved Holocene organic matter that is mainly of marine origin. Finally, after 12,900 years ago, reworked nannofossils disappear from sediments accumulating on the Louisiana continental shelf and slope.
The above noted changes in the nature of accumulating sediments indicate that after the start of the Younger Dryas, the southern route for Laurentide Ice Sheet meltwater was largely blocked. On the rare occasions it could flow southward, glacial meltwater flowed through Lake Agassiz and sometimes the Great Lakes to the Mississippi River. As the water moved through either Lake Agassiz or other proglacial lakes, they completely trapped and removed any glacial outwash and the older, reworked organic material and reworked nannofossils that the outwash contained. As a result, the sediment carried by the Mississippi River after the start of the Younger Dryas consisted of illite and chlorite enriched sediments from the Great Lakes region that lacked any reworked nannofossils. These changes argue that the superfloods of MWF-5, which fed meltwater pulse 1B, are related to either rare periods of southerly discharge of meltwater through Lake Agassiz, nonglacial periods of climate-enhanced discharge within the Mississippi River Basin, or a combination of both.
Antarctic iceberg discharge events
In case of the Antarctic Ice Sheet, an equivalent well-dated, high-resolution record of the discharge of icebergs from various parts of the Antarctic Ice Sheet for the past 20,000 years is also available. Research by Weber and others constructed a record from variations in the amount of iceberg-rafted debris versus time and other environmental proxies in two cores taken from the ocean bottom within Iceberg Alley of the Weddell Sea. The cores of ocean bottom sediments within Iceberg Alley provide a spatially integrated signal of the variability of the discharge of icebergs into the marine waters by the Antarctic Ice Sheet because it is a confluence zone in which icebergs calved from the entire Antarctic Ice Sheet drift along currents, converge, and exit the Weddell Sea to the north into the Scotia Sea.
Between 20,000 and 9,000 years ago, Weber and others documented eight well-defined periods of increased iceberg calving and discharge from various parts of the Antarctic Ice Sheet. Five of these periods, AID5 through AID2 (Antarctic Iceberg Discharge events), are comparable in duration and have a repeat time of about 800–900 years. The largest of the Antarctic Iceberg Discharge events is AID2. Its peak intensity at about 11,300 years ago, which is synchronous with meltwater pulse 1B in the Barbados sea-level record, is consistent with a significant Antarctic contribution to meltwater pulse 1B. The lack of a sea-level response in the Tahiti coral record might indicate a regionally specific sea-level response to a deglaciation event only from the Pacific sector of the Antarctic Ice Sheet.
See also
Deglaciation
Holocene glacial retreat
Younger Dryas
References
External links
Gornitz, V. (2007) Sea Level Rise, After the Ice Melted and Today. Science Briefs, NASA's Goddard Space Flight Center. (January 2007)
Gornitz, V. (2012) The Great Ice Meltdown and Rising Seas: Lessons for Tomorrow. Science Briefs, NASA's Goddard Space Flight Center. (June 2012)
Liu, J.P. (2004) Western Pacific Postglacial Sea-level History., River, Delta, Sea Level Change, and Ocean Margin Research Center, Marine, Earth and Atmospheric Sciences, North Carolina State University, Raleigh, NC.
Glaciology
Oceanography
Paleoclimatology
Sea level
10th millennium BC | Meltwater pulse 1B | [
"Physics",
"Environmental_science"
] | 2,452 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
47,635,920 | https://en.wikipedia.org/wiki/DBGp | DBGp (Common DeBugGer Protocol) is used by Xdebug and potentially other implementations. It is a simple protocol for use with language tools and engines for the purpose of debugging applications.
The protocol provides a means of communication between a debugger engine (scripting engine, Virtual Machine, etc.) and a debugger IDE.
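As a rough, hedged illustration of this communication (a sketch, not the authoritative specification), the Python fragment below frames commands and parses responses the way the DBGp documents describe them: IDE-to-engine commands are NUL-terminated ASCII carrying a transaction id, and engine-to-IDE responses are XML documents prefixed by a NUL-separated byte length. The port number and the status command shown are examples; note that in a real session the engine connects to the IDE and sends an init packet first.
import socket
def send_command(sock, command, transaction_id):
    # DBGp commands are ASCII and NUL-terminated; the -i transaction id
    # lets the IDE match asynchronous responses to requests.
    sock.sendall(f"{command} -i {transaction_id}\0".encode("ascii"))
def read_response(sock):
    # Engine responses are framed as: <byte length> NUL <xml payload> NUL
    length = b""
    while (byte := sock.recv(1)) != b"\0":
        length += byte
    remaining, chunks = int(length), []
    while remaining > 0:
        chunk = sock.recv(remaining)
        chunks.append(chunk)
        remaining -= len(chunk)
    sock.recv(1)  # consume the trailing NUL
    return b"".join(chunks).decode("utf-8")
# Minimal session sketch: the IDE listens and the engine dials in
# (9003 is the Xdebug 3 default; other engines may differ).
listener = socket.socket()
listener.bind(("127.0.0.1", 9003))
listener.listen(1)
conn, _ = listener.accept()
print(read_response(conn))        # the engine's <init ...> packet
send_command(conn, "status", 1)
print(read_response(conn))        # <response command="status" ...>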
Criticisms
DBGp has not received widespread adoption as a server protocol. Most implementations are client-side so that IDEs may be compatible specifically with Xdebug, which remains popular.
Criticisms have included:
Performance (DBGp is a text-mode protocol)
Security (DBGp has a complex connection mechanism that could lead to buggy, vulnerable implementations)
Generality (DBGp is designed to be compatible with multiple programming languages rather than being optimized for PHP)
A primary author of the DBGp specification has defended the design.
References
Communications protocols
Debuggers | DBGp | [
"Technology"
] | 186 | [
"Computer standards",
"Communications protocols"
] |
59,545,995 | https://en.wikipedia.org/wiki/Kimberly%20Prather | Kimberly A. Prather is an American atmospheric chemist. She is a distinguished chair in atmospheric chemistry and a distinguished professor at the Scripps Institution of Oceanography and department of chemistry and biochemistry at UC San Diego. Her work focuses on how humans are influencing the atmosphere and climate. In 2019, she was elected a member of the National Academy of Engineering for technologies that transformed understanding of aerosols and their impacts on air quality, climate, and human health. In 2020, she was elected as a member of the National Academy of Sciences. She is also an elected Fellow of the American Philosophical Society, the American Geophysical Union, the American Association for the Advancement of Science, and the American Academy of Arts and Sciences.
Education and early career
Prather was born in Santa Rosa, California. She studied at Santa Rosa Junior College and University of California, Davis, earning a bachelor's degree in 1985 and a PhD in 1990. She served as a postdoctoral fellow at the University of California, Berkeley between 1990 and 1992, working with Nobel Laureate Yuan T. Lee. Prather joined University of California, Riverside as an assistant professor in 1992. During her time at UC Riverside she began to work on aerosol mass spectrometry, developing ways to make it compact and transportable. She patented the technology.
Research
In 2001, Prather joined the faculty at the University of California, San Diego as a member of the Department of Chemistry and Biochemistry and Scripps Institution of Oceanography. Prather's early research focused on determining the major sources of fine particle pollution in California as well as in the Northeastern United States. As part of this research, she explored methods to distinguish between different aerosol sources based on their single particle composition and size. She developed aerosol time-of-flight mass spectrometry (ATOFMS), a technique with high temporal and size resolution. In 1999 she began to work with the University of Rochester studying the health effects of ultrafine particles. She refined the detection technique so that it would precisely measure the size and composition of small particles. The ultrafine ATOFMS was able to examine exhaust particles from gasoline and diesel powered vehicles. She found that alongside the freeway, particles between 50 and 300 nm were mainly due to heavy-duty vehicles (51%) and light-duty vehicles (32%). She used the ultrafine ATOFMS to study atmospheric composition, combining it with ozone and NOx measurements. ATOFMS is now widely used in atmospheric studies around the world.
In 2003, she joined the advisory board of United States Environmental Protection Agency PM2.5 Clean Air. Between 2003 and 2006 Prather studied whether ATOFMS could be used to measure the carbonaceous components of aerosols (including PAHs) and help to understand atmospheric processes, distinguishing between organic (OC) and elemental carbon (EC). Prather showed it was possible to distinguish EC and OC on a single particle level, and investigated their chemical associations with ammonium, nitrate, and sulfate. Her group explored ways to calibrate the ATOFMS data, making real-time apportionment of ambient particles possible. They did this by classifying particles using an artificial neural network (ART-2a). In 2008 she became the co-lead scientist, with F. Martin Ralph, of CalWater, a multi-year interdisciplinary research effort focusing on how aerosols are impacting the water supply in the West Coast of the United States. Her PhD student Kerri Pratt led the Ice in Clouds Experiment - Layer Clouds (ICE-L) study. ICE-L included the first aircraft ATOFMS, named Shirley. Pratt and Prather studied ice crystals in situ on high speed aircraft flying above Wyoming, and found that the particles were mainly composed of dust or biological particles (bacteria, fungal spores or plants). Understanding the composition of airborne particles is imperative to properly evaluate their impact on climate change, as well as provide insight into how aerosols impact cloud formation and precipitation.
In 2010 she became the founding director of the NSF Center for Aerosol Impacts on Climate and the Environment (CAICE). CAICE became a National Science Foundation Phase II Center for Chemical Innovation in 2013. In this role, Prather develops new analytical techniques for studying aerosol chemistry. Her group demonstrated that dust and bioaerosols that travel from as far away as the Sahara can enhance precipitation in the Western United States. Prather's group is studying the microbes that transfer from the ocean, become airborne and contribute to the global temperature. Ocean-in-the-lab experiments are conducted by transferring thousands of gallons of seawater from the Pacific Ocean, producing waves, and adding nutrients to induce the growth of microbes. As part of CAICE, her group was the first to identify the major factors controlling the chemical composition of sea spray, finding that the characteristics depended on the physical forces and ocean biology of the waves. They demonstrated two types of droplets: "film" drops that were full of microbes and organic materials, and "jet" drops that mainly contained sea salt and other biological species. Prather's research team can now explore the impact of carbon dioxide on the global temperature by controlling the amount entering their ocean simulation chamber. The Scripps Ocean Atmosphere Research Simulator (SOARS) became operational in the summer of 2022 and is being used to study how wind, temperature, sunlight and pollution impact the ocean and atmosphere. CAICE funding was extended by the National Science Foundation in 2018, with a second $20 million grant allowing them to investigate the interaction of human pollution with ocean-produced gases and aerosols.
Prather received the 2024 National Academy of Sciences Award in Chemical Sciences for her work furthering the understanding of atmospheric aerosols and their impact on air quality, climate, and human health.
Awards and honors
1994 American Society for Mass Spectrometry Research Award
1994 National Science Foundation Young Investigator
1997 National Science Foundation Special Creativity Award
1998 Gesellschaft für Aerosolforschung Smoluchowski Award
1999 American Association for Aerosol Research Kenneth T. Whitby Award
2000 ACS Analytical Chemistry Arthur F. Findeis Award
2009 UCSD Faculty Sustainability Award
2009 American Association for the Advancement of Science Fellow
2010 American Geophysical Union Fellow
2010 American Academy of Arts and Sciences Fellow
2010 ACS Creative Advances in Environmental Science and Technology
2011 ACS San Diego Distinguished Scientist Award
2015 California Air Resources Board Haagen-Smit Clean Air Award
2018 UC San Diego Chancellor’s Associates Excellence Award in Research in Science and Engineering
2019 Elected to the National Academy of Engineering
2020 ACS Frank H. Field and Joe L. Franklin Award for Outstanding Achievement in Mass Spectrometry
2020 Elected to the National Academy of Sciences
2022 Elected to the American Philosophical Society
2023 Gustavus John Esselen Award for Chemistry in the Public Interest
2023 Analytical Scientist the Power List - Leaders and Advocates
2024 National Academy of Sciences Award in Chemical Sciences
2024 Analytical Scientist the Power List - Plant Protectors
References
American women chemists
Environmental scientists
University of California, Davis alumni
University of California, San Diego faculty
University of California, Riverside faculty
Living people
Year of birth missing (living people)
21st-century American women
Mass spectrometrists | Kimberly Prather | [
"Physics",
"Chemistry",
"Environmental_science"
] | 1,477 | [
"Environmental scientists",
"Spectrum (physical sciences)",
"American environmental scientists",
"Mass spectrometrists",
"Mass spectrometry",
"Biochemists"
] |
59,551,167 | https://en.wikipedia.org/wiki/Mixed%20quantum-classical%20dynamics | Mixed quantum-classical (MQC) dynamics is a class of computational theoretical chemistry methods tailored to simulate non-adiabatic (NA) processes in molecular and supramolecular chemistry. Such methods are characterized by:
Propagation of nuclear dynamics through classical trajectories;
Propagation of the electrons (or fast particles) through quantum methods;
A feedback algorithm between the electronic and nuclear subsystems to recover nonadiabatic information.
Use of NA-MQC dynamics
In the Born-Oppenheimer approximation, the ensemble of electrons of a molecule or supramolecular system can have several discrete states. The potential energy of each of these electronic states depends on the position of the nuclei, forming multidimensional surfaces.
Under usual conditions (room temperature, for instance), the molecular system is in the ground electronic state (the electronic state of lowest energy). In this stationary situation, nuclei and electrons are in equilibrium, and the molecule naturally vibrates nearly harmonically due to the zero-point energy.
Particle collisions and photons with wavelengths in the range from visible to X-ray can promote the electrons to electronically excited states. Such events create a non-equilibrium between nuclei and electrons, which leads to an ultrafast response (picosecond scale) of the molecular system. During the ultrafast evolution, the nuclei may reach geometric configurations where the electronic states mix, allowing the system to transfer to another state spontaneously. These state transfers are nonadiabatic phenomena.
Nonadiabatic dynamics is the field of computational chemistry that simulates such ultrafast nonadiabatic response.
In principle, the problem can be exactly addressed by solving the time-dependent Schrödinger equation (TDSE) for all particles (nuclei and electrons). Methods like the multiconfiguration time-dependent Hartree (MCTDH) method have been developed for this task. Nevertheless, they are limited to small systems with around two dozen degrees of freedom due to the enormous difficulties of developing multidimensional potential energy surfaces and the costs of the numerical integration of the quantum equations.
NA-MQC dynamics methods have been developed to reduce the burden of these simulations by profiting from the fact that the nuclear dynamics is near classical. Treating the nuclei classically allows simulating the molecular system in full dimensionality. The impact of the underlying assumptions depends on each particular NA-MQC method.
Most of NA-MQC dynamics methods have been developed to simulate internal conversion (IC), the nonadiabatic transfer between states of the same spin multiplicity. The methods have been extended, however, to deal with other types of processes like intersystem crossing (ISC; transfer between states of different multiplicities) and field-induced transfers.
NA-MQC dynamics has often been used in theoretical investigations of photochemistry and femtochemistry, especially when time-resolved processes are relevant.
List of NA-MQC dynamics methods
NA-MQC dynamics is a general class of methods developed since the 1970s. It encompasses:
Trajectory surface hopping (TSH; FSSH for fewest switches surface hopping);
Mean-field Ehrenfest dynamics (MFE);
Coherent Switching with Decay of Mixing (CSDM; MFE with Non-Markovian decoherence and stochastic pointer state switch);
Multiple spawning (AIMS for ab initio multiple spawning; FMS for full multiple spawning);
Coupled-Trajectory Mixed Quantum-Classical Algorithm (CT-MQC);
Mixed quantum−classical Liouville equation (QCLE);
Mapping approach;
Nonadiabatic Bohmian dynamics (NABDY);
Multiple cloning (AIMC for ab initio multiple cloning);
Global Flux Surface Hopping (GFSH);
Decoherence Induced Surface Hopping (DISH)
Integration of NA-MQC dynamics
Classical trajectories
The classical trajectories can be integrated with conventional methods, such as the Verlet algorithm. Such integration requires the forces acting on the nuclei. They are proportional to the gradient of the potential energy of the electronic states and can be efficiently computed with diverse electronic structure methods for excited states, like the multireference configuration interaction (MRCI) or the linear-response time-dependent density functional theory (TDDFT).
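As a minimal sketch of that classical propagation step (in Python, with a placeholder force function standing in for the electronic-structure gradient call on the active state):
import numpy as np
def velocity_verlet_step(positions, velocities, masses, force_fn, dt):
    # One velocity Verlet step; force_fn(positions) returns the forces,
    # i.e. minus the gradient of the active electronic state's energy.
    # positions, velocities: arrays of shape (natoms, 3); masses: (natoms,)
    forces = force_fn(positions)
    velocities = velocities + 0.5 * dt * forces / masses[:, None]
    positions = positions + dt * velocities
    velocities = velocities + 0.5 * dt * force_fn(positions) / masses[:, None]
    return positions, velocities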
In NA-MQC methods like FSSH or MFE, the trajectories are independent of each other. In such a case, they can be separately integrated and only grouped afterward for the statistical analysis of the results. In methods like CT-MQC or diverse TSH variants, the trajectories are coupled and must be integrated simultaneously.
Electronic subsystem
In NA-MQC dynamics, the electrons are usually treated by a local approximation of the TDSE, i.e., they depend only on the electronic forces and couplings at the instantaneous position of the nuclei.
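In the adiabatic basis this local approximation is commonly written as an equation of motion for the electronic coefficients c_k along the trajectory R(t); the following is the standard textbook form (not specific to any one NA-MQC method):
$$ i\hbar\,\dot{c}_k(t) = c_k(t)\,E_k\big(\mathbf{R}(t)\big) - i\hbar \sum_{j} c_j(t)\,\dot{\mathbf{R}}(t)\cdot\mathbf{d}_{kj}\big(\mathbf{R}(t)\big) $$
where E_k is the potential energy of state k and d_kj is the nonadiabatic coupling vector between states k and j.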
Nonadiabatic algorithms
There are three basic algorithms to recover nonadiabatic information in NA-MQC methods (a minimal sketch of the hopping variant follows this list):
Spawning - new trajectories are created at regions of large nonadiabatic coupling.
Hopping - trajectories are propagated on a single potential energy surface (PES), but they are allowed to change surface near regions of large nonadiabatic couplings.
Averaging - trajectories are propagated on a weighted average of potential energy surfaces. The weights are determined by the amount of nonadiabatic mixing.
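As referenced above, here is a minimal Python sketch of the hopping decision in its fewest-switches form; the coefficient array, the velocity-coupling matrix, and the omission of post-hop velocity rescaling are simplifying assumptions rather than details of any published implementation.
import numpy as np
def attempt_hop(c, v_dot_d, current, dt, rng=np.random.default_rng()):
    # c        : complex electronic coefficients, shape (nstates,)
    # v_dot_d  : nuclear velocity dotted into the nonadiabatic coupling
    #            vectors, shape (nstates, nstates), adiabatic basis
    # current  : index of the currently occupied state
    pop = abs(c[current]) ** 2
    probs = np.zeros(len(c))
    for j in range(len(c)):
        if j == current:
            continue
        # Population flux from `current` to j over this timestep, clamped at 0.
        flux = -2.0 * np.real(np.conj(c[j]) * c[current] * v_dot_d[j, current])
        probs[j] = max(0.0, flux * dt / pop)
    xi, cumulative = rng.uniform(), 0.0
    for j, p in enumerate(probs):
        cumulative += p
        if p > 0.0 and xi < cumulative:
            return j      # hop to state j (velocity rescaling omitted)
    return current        # no hop this step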
Relation to other nonadiabatic methods
NA-MQC dynamics are approximate methods to solve the time-dependent Schrödinger equation for a molecular system. Methods like TSH, in particular in the fewest switches surface hopping (FSSH) formulation, do not have an exact limit. Other methods like multiple spawning or CT-MQC can in principle deliver the exact non-relativistic solution.
In the case of multiple spawning, it is hierarchically connected to MCTDH, while CT-MQC is connected to the exact factorization method.
Drawbacks in NA-MQC dynamics
The most common approach in NA-MQC dynamics is to compute the electronic properties on-the-fly, i.e., at each timestep of the trajectory integration. Such an approach has the advantage of not requiring pre-computed multidimensional potential energy surfaces. Nevertheless, the costs associated with the on-the-fly approach are significantly high, leading to a systematic downgrade of the electronic-structure level used in the simulations. This downgrade has been shown to lead to qualitatively wrong results.
The local approximation implied by the classical trajectories in NA-MQC dynamics also leads to failures in the description of non-local quantum effects, such as tunneling and quantum interference. Some methods like MFE and FSSH are also affected by decoherence errors. New algorithms have been developed to include tunneling and decoherence effects. Global quantum effects can also be considered by applying quantum forces between trajectories.
Software for NA-MQC dynamics
Survey of NA-MQC dynamics implementations in public software.
References
Computational chemistry | Mixed quantum-classical dynamics | [
"Chemistry"
] | 1,476 | [
"Theoretical chemistry",
"Computational chemistry"
] |
59,552,027 | https://en.wikipedia.org/wiki/Microfluidic%20diffusional%20sizing | Microfluidic diffusional sizing (MDS) is a method to measure the size of particles based on the degree to which they diffuse within a microfluidic laminar flow. It allows size measurements to be taken from extremely small quantities of material (nanograms) and is particularly useful when sizing molecules which may vary in size depending on their environment - e.g. protein molecules which may unfold or become denatured in unfavourable conditions.
Applications
MDS is primarily used in protein analyses, where size, concentration and interactions are important.
Protein size measurement
Measuring the size of a protein molecule is useful as an overall quality indicator, since misfolding, unfolding, oligomerization, aggregation or degradation can all affect size.
The literature specifically demonstrates the use of MDS in sizing protein-nanobody complexes, in monitoring the formation of α-synuclein amyloid fibrils, and in observing protein assembly into oligomers.
MDS can also be used to size membrane proteins, as the use of a protein-specific labelling and detection system allows other species present in the solution (such as free lipid micelles or detergents) to be ignored.
Protein interactions
MDS has been used to characterise interactions between biomolecules under native conditions, and has been demonstrated to detect specific interactions within complex mixtures. It has also been used in detecting and quantifying protein-ligand interactions and protein-lipid interactions.
Protein concentration
The concentration of purified protein solutions in the laboratory is useful in determining yield and measuring the success of a prep. MDS reports concentration as well as size for each test.
Since the detection is not based on inherent fluorescence of tryptophan or tyrosine residues, MDS has been used as an alternative to A280 UV-Vis quantification.
Advantages
If protein-specific labelling is applied, MDS allows membrane proteins to be sized. This is particularly useful as it is an area where other biophysical techniques can struggle - for example, dynamic light scattering (DLS) is of limited use, since free detergent molecules may also scatter light and affect the results.
Furthermore, as the size reported is an average of all detectable species present there is no bias towards large species, as is found in DLS measurements.
Another key advantage is that results can be obtained with very small quantities of material which may be particularly important where samples are scarce or expensive.
With commercially available MDS instruments, testing is very simple and there is no need to input test parameters or sample conditions. This makes it a very repeatable method of testing as most of the functions such as flow rates, detector settings etc. are automated by the instrument rather than set by the operator.
In addition to size, MDS is able to calculate concentration so two parameters can be assessed in one test.
Finally, the method does not require calibration, as it relies on a ratiometric measurement to determine the diffusion rate.
Theory
In an MDS analysis, a stream of liquid containing the particles to be sized is introduced alongside an auxiliary stream in a laminar flow in a microfluidic channel. Because there is no convective mixing of the two streams, the only way particles can move to the auxiliary stream is by diffusion. The rate of this diffusion is dependent on the particle's size, as determined by the Stokes–Einstein equation, so small particles diffuse quicker than large particles.
After a period of diffusion, the original and auxiliary streams are split and the degree of diffusion is fixed. The number of particles in each stream can then be detected (in the case of proteins this is achieved by addition of an amine-reactive fluorogenic dye). The ratio between the two streams is used to determine the diffusion coefficient, which is used to calculate the hydrodynamic radius. The sum of particles in both streams can also be used to measure the concentration of the analyte.
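As a worked example of the final step (in Python; the diffusion coefficient used is an invented value of the order expected for a small protein), the Stokes–Einstein relation r_H = k_B·T / (6π·η·D) converts the measured diffusion coefficient into a hydrodynamic radius:
from math import pi
K_B = 1.380649e-23    # Boltzmann constant, J/K
def hydrodynamic_radius(diffusion_coeff, temperature=298.15, viscosity=8.9e-4):
    # Stokes-Einstein: r_H = k_B * T / (6 * pi * eta * D)
    # diffusion_coeff in m^2/s, viscosity in Pa*s; returns metres.
    return K_B * temperature / (6 * pi * viscosity * diffusion_coeff)
r = hydrodynamic_radius(1.0e-10)   # illustrative D for a small protein
print(f"{r * 1e9:.2f} nm")         # about 2.45 nm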
References
Biochemistry methods | Microfluidic diffusional sizing | [
"Chemistry",
"Biology"
] | 806 | [
"Biochemistry methods",
"Biochemistry"
] |
53,304,994 | https://en.wikipedia.org/wiki/Danoprevir | Danoprevir (INN) is an orally available 15-membered macrocyclic peptidomimetic inhibitor of NS3/4A HCV protease. It contains acylsulfonamide, fluoroisoindole and tert-butyl carbamate moieties. Danoprevir is a clinical candidate based on its favorable potency profile against multiple HCV genotypes 1–6 and key mutants (GT1b, IC50 = 0.2–0.4 nM; replicon GT1b, EC50 = 1.6 nM).
Danoprevir has been evaluated in an open-label, single arm clinical trial in combination with ritonavir for treating COVID-19 and favourably compared to lopinavir/ritonavir in a second trial.
History
Danoprevir was initially developed by Array BioPharma and then licensed to Roche for further development and commercialization. In 2013, Danoprevir was licensed to Ascletis by Roche for development and production in China under the tradename Ganovo.
References
Further reading
Anti–hepatitis C agents
Antiviral drugs
COVID-19 drug development
Macrocycles
NS3/4A protease inhibitors
Carbamates
Cyclopropyl compounds
Organofluorides
Pyrrolidines
Acylsulfonamides | Danoprevir | [
"Chemistry",
"Biology"
] | 281 | [
"Antiviral drugs",
"Drug discovery",
"Organic compounds",
"COVID-19 drug development",
"Macrocycles",
"Biocides"
] |
53,307,338 | https://en.wikipedia.org/wiki/FlowFET | A flowFET is a microfluidic component which allows the rate of flow of liquid in a microfluidic channel to be modulated by the electrical potential applied to it. In this way, it behaves as a microfluidic analogue to the field effect transistor, except that in the flowFET the flow of liquid takes the place of the flow of electric current. Indeed, the name of the flowFET is derived from the naming convention of electronic FETs (e.g. MOSFET, FINFET etc.).
Mechanism of action
A flowFET relies on the principle of electro-osmotic flow (EOF). In many liquid-solid interfaces, there is an electrical double layer that develops due to interactions between the two phases. In the case of a microfluidic channel, this results in a charged layer of liquid on the periphery of the fluid column which surrounds the bulk of the liquid. This electric double layer has an associated potential difference known as the zeta potential. When an appropriately-oriented electrical field is applied to this interfacial double layer (i.e. parallel to the channel and in the plane of the electric double layer), the charged liquid ions experience a motive Lorentz force. Since this layer sheaths the fluid column, and since this layer moves, the entire column of liquid will begin to move with a speed $v$. The velocity of the fluid layer "diffuses" into the bulk of the channel from the periphery towards the centre due to viscous coupling. The speed $v$ is related to the strength of the electric field $E$, the magnitude of the zeta potential $\zeta$, the permittivity $\epsilon$ and the viscosity $\mu$ of the fluid through the Helmholtz–Smoluchowski relation: $$v = -\frac{\epsilon \zeta E}{\mu}$$
In a FlowFET, the zeta potential between the channel walls and the fluid can be altered by applying an electrical field perpendicular to the channel walls. This has the effect of altering the motive force experienced by the mobile ions in the double layer. This change in the zeta potential can be used to control both the magnitude and direction of the electro-osmotic flow in the microchannel.
The controlling voltage need only be in the range of 50 V for a typical microfluidic channel, since this correlates to a gradient of 1.5 MV/cm due to the channel size.
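To give a feel for the magnitudes involved, the short Python sketch below evaluates the Helmholtz–Smoluchowski expression above; the zeta potential, field strength, and fluid properties are illustrative assumptions (roughly glass/water values), not measurements from any particular device.
EPSILON_0 = 8.854e-12    # vacuum permittivity, F/m
def eof_velocity(zeta, e_field, rel_permittivity=80.0, viscosity=1.0e-3):
    # Helmholtz-Smoluchowski: v = -eps * zeta * E / mu, in m/s
    return -rel_permittivity * EPSILON_0 * zeta * e_field / viscosity
v = eof_velocity(zeta=-0.05, e_field=3.0e4)   # -50 mV, 30 kV/m
print(f"{v * 1e3:.2f} mm/s")                  # about 1.06 mm/s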
Operational limitations
Variation of the FlowFET dimensions (e.g. insulating layer thickness between the channel wall and gate electrode) due to the manufacturing process can lead to inexact control of the zeta potential. This can be exacerbated in the case of wall contamination, which can alter the channel wall surface's electrical properties adjacent to the gate electrode. This will affect the local flow characteristics, which may be especially important in chemical synthesis systems whose stoichiometry are directly related to the transport rate of reaction precursors and reaction products.
There are constraints placed on the fluid that can be manipulated in a FlowFET. Since it relies on EOF, only fluids producing an EOF in response to an applied electric field may be used.
While the controlling voltage need only be on the order of 50V, the EOF-producing voltage along the channel axis is larger, on the order of 300V. It is noticed experimentally that electrolysis may occur at the electrode contacts. This water electrolysis can alter the pH in the channel and adversely affect biological cells and biomolecules, while gas bubbles tend to "clog" microfluidic systems.
In further analogy with microelectronic systems, the switching time for a flowFET is inversely proportional to its size. Scaling down a flowFET results in a reduction in the amount of time for the flow to equilibrate to a new flow rate following a change in the applied electrical field. It should be noted, however, that the switching frequency of a flowFET is many orders of magnitude lower than that of an electronic FET.
Applications
A FlowFET sees potential uses in massively parallel microfluidic manipulation, for example in DNA microarrays.
Without using a FlowFET, it is necessary to control the rate of EOF by changing the magnitude of the EOF-producing field (i.e. the field parallel to the channel's axis) while leaving the zeta potential unaltered. In this arrangement, however, simultaneous control of EOF in channels connected with each other cannot easily be accomplished.
A FlowFET provides a way of controlling microfluidic flow that uses no moving parts. This is in stark contrast to other solutions, including pneumatically-actuated peristaltic pumps such as that presented by Wu et al. Fewer moving parts allow less opportunity for mechanical breakdown of a microfluidic device. This may be increasingly relevant as future iterations of large microelectronic fluidic (MEF) arrays continue to increase in size and complexity.
The use of bi-directional electronically-controlled flow has interesting options for particle and bubble cleaning operations.
See also
Fluidics
Microfluidics
Electro-osmosis
Lab-on-a-chip
References
Fluid dynamics
Nanotechnology
Biotechnology | FlowFET | [
"Chemistry",
"Materials_science",
"Engineering",
"Biology"
] | 1,051 | [
"Microfluidics",
"Microtechnology",
"Chemical engineering",
"Materials science",
"Biotechnology",
"nan",
"Piping",
"Nanotechnology",
"Fluid dynamics"
] |
63,107,907 | https://en.wikipedia.org/wiki/Unification%20of%20theories%20in%20physics | Unification of theories about observable fundamental phenomena of nature is one of the primary goals of physics. The two great unifications to date are Isaac Newton’s unification of gravity and astronomy, and James Clerk Maxwell’s unification of electromagnetism; the latter has been further unified with the concept of electroweak interaction. This process of "unifying" forces continues today, with the ultimate goal of finding a theory of everything.
Unification of gravity and astronomy
The "first great unification" was Isaac Newton's 17th century unification of gravity, which brought together the understandings of the observable phenomena of gravity on Earth with the observable behaviour of celestial bodies in space.
His work is credited with laying the foundations of future endeavors for a grand unified theory. For example, it has been stated that "If we have to take any single individual as the originator of the quest for a unified theory of physics, and, by implication, the whole of knowledge, it has to be Newton." Physicist Steven Weinberg stated that "It is with Isaac Newton that the modern dream of a final theory really begins".
Unification of magnetism, electricity, light and related radiation
The ancient Chinese people observed that certain rocks such as lodestone and magnetite were attracted to one another by an invisible force. This effect was later called magnetism, which was first rigorously studied in the 17th century. However, prior to ancient Chinese observations of magnetism, the ancient Greeks knew of other objects, such as amber, that when rubbed with fur would cause a similar invisible attraction between the two. This was also studied rigorously in the 17th century and came to be called electricity. Thus, physics had come to understand two observations of nature in terms of some root cause (electricity and magnetism). However, work in the 19th century revealed that these two forces were just two different aspects of one force: electromagnetism.
The "second great unification" was James Clerk Maxwell's 19th century unification of electromagnetism. It brought together the understandings of the observable phenomena of magnetism, electricity and light (and more broadly, the spectrum of electromagnetic radiation). This was followed in the 20th century by Albert Einstein's unification of space and time, and of mass and energy through his theory of special relativity. Later, Paul Dirac developed quantum field theory, unifying quantum mechanics and special relativity.
Electromagnetism and the weak nuclear force have more recently been unified and are now considered two aspects of the electroweak interaction.
Unification of the remaining fundamental forces: theory of everything
This process of "unifying" forces continues today, with the ultimate goal of finding a theory of everything; it remains perhaps the most prominent of the unsolved problems in physics. There remain four fundamental forces which have not been decisively unified: the gravitational and electromagnetic interactions, which produce significant long-range forces whose effects can be seen directly in everyday life, and the strong and weak interactions, which produce forces at minuscule, subatomic distances and govern nuclear interactions. Electromagnetism and the weak interactions are widely considered to be two aspects of the electroweak interaction. Attempts to unify quantum mechanics and general relativity into a single theory of quantum gravity, a program ongoing for over half a century, have not yet been decisively resolved; current leading candidates are M-theory, superstring theory and loop quantum gravity.
References
Concepts in physics | Unification of theories in physics | [
"Physics"
] | 698 | [
"nan"
] |
63,109,497 | https://en.wikipedia.org/wiki/Kuwait%20Space%20Rocket | The Kuwait Space Rocket (KSR) is a Kuwaiti project to build and launch the first suborbital liquid bi-propellant rocket in Arabia. The project is intended to be the first step towards starting a space industry in the country and a launch service provider in the GCC region. The project is divided into two phases with two separate vehicles: an initial testing phase with KSR-1 as a test vehicle, and a more expansive suborbital test phase with KSR-2, planned to fly to an altitude of around 100 km. On May 16, Ambition-1 launched but suffered a parachute malfunction and crashed in free fall.
History
Conceptual design and planning for the project began in January 2018. The team started the fabrication of KSR-1 in early 2019, and as of January 2020, KSR-1 was fully built.
KSR-1
KSR-1 is a vertically-launched single stage rocket. It uses a liquid bi-propellant rocket engine burning methanol as fuel and nitrous oxide as the oxidizer. KSR-1 is intended to be a test vehicle for the development of KSR-2, the goal of which is to reach space. As such, all the major components and technologies that are expected to be used in KSR-2 are present in KSR-1. The main components of KSR-1 are the engine (consisting of the injector, nozzle, and cooling jacket), fuel and oxidizer tanks, a nitrogen gas tank, and various valves and pressure regulators.
KSR-1 Development Process
Engine Fabrication
The KSR-1 engine was built locally in Kuwait and uses a pressure-fed cycle. The engine uses the nitrous oxide not only as an oxidizer but also as a coolant, which flows around the nozzle and back into the injector.
Cold Flow Testing
The KSR team performed cold flow testing in October 2019 to verify the engine's flow rate and plumbing.
Static Testing
KSR performed static testing of the injector in November 2019.
Structural Assembly
The KSR-1 was fully assembled and presented at the Kuwait Aviation Show 2020.
KSR-2
KSR-2 is a planned liquid bi-propellant suborbital launch vehicle. It is the second installment of the KSR rocket family, composed of a single stage fueled by nitrous oxide and methanol. KSR-2 will have a total length of 4 m, a diameter of 0.4 m and a total mass of 591 kg; its apogee will be around 100 km.
See also
Sub-orbital spaceflight
Launch vehicle
References
Suborbital spaceflight
Aerospace
Research groups | Kuwait Space Rocket | [
"Physics"
] | 561 | [
"Spacetime",
"Space",
"Aerospace"
] |
63,109,976 | https://en.wikipedia.org/wiki/List%20of%20body%20armor%20performance%20standards | Body armor performance standards are lists of requirements, generated by national authorities, for armor to perform reliably, clearly indicating what the armor may and may not defeat. Different countries have different standards, which may include threats that are not present in other countries.
VPAM armor standard (Europe)
The VPAM scale as of 2009 runs from 1 to 14, with 1-5 being soft armor, and 6-14 being hard armor. Tested armor must withstand three hits, spaced apart, of the designated test threat with no more than of back-face deformation in order to pass. Of note is the inclusion of special regional threats such as Swiss P AP from RUAG and .357 DAG. According to VPAM's website, it is apparently used in France and Britain.
The VPAM scale is as follows:
TR armor standard (Germany)
The Technische Richtlinie (TR) Ballistische Schutzwesten is a regulation guide in Germany for body armor. It is mainly issued for body armor used by the German police, but also for the German armed forces and body armor available to civilians. Producers have to meet the criteria of the TR if they want to participate in open competitive bidding conducted by German agencies. The TR specifies different Schutzklassen (SK), or protection classes, which a body armor can have. It specifies five different classes of ballistic protection, ranging from L to 4 (e.g. SK 4). It also gives specifications for additional Stichschutz (ST), protection against knives, using the same classes as the ballistic protection, but giving it the additional ST label (e.g. SK L ST). The ballistic tests to determine a class are now integrated into the VPAM guidelines, so that the tests differ only in minor details and only one test (SK 1) is significantly different as of 2008.
The TR scale is as follows:
The German TR classes are generally comparable to the American NIJ levels, but the German TR usually tests more threat scenarios, as the NIJ includes neither point-blank shots nor police special rounds. In contrast, the NIJ tests bigger calibers with higher stopping power. And while the German TR tests smaller calibers and lighter bullets, it also tests more aggressive rounds, as the first test already uses steel FMJ bullets, while the NIJ uses normal FMJ rounds. In addition, SK 4, the highest protection class, is specified to withstand three hits, while Level IV needs only to withstand one hit - although by a bigger caliber (7.62×63mm).
HOSDB armor standard (United Kingdom)
The Home Office Scientific Development Branch governs standards and testing protocols for police body armor.
BFD (back-face deformation) is to be measured after each shot; the maximum allowed BFD for the HG1/A class is and for the rest.
GOST armor standard (Russia)
GOST R 50744-95 is the Russian Federation standard for body armor. Prior to the 2017 revision, the threat levels ran from 1 to 6. Notably, it included threats with the suffix A, which denotes heightened ratings, as opposed to lowered ratings in the NIJ standard.
The old (pre-2017) standards are as follows:
With the 2017 revision, the standards have changed significantly. Threat classes now range from BR1 to BR6. 'A'-suffixed classes have been eliminated, and their test threats have been either merged into the new categories, such as Classes 6 and 6A being moved into Class BR5, or removed entirely, as in the case of Class 2A. Additionally, several of the threat levels have been increased in difficulty with the introduction of new test threats; most notably is the introduction of Class BR6, which requires the tested armor to survive three hits of 12.7×108mm B32 API. In spite of the more difficult test threats, the back-face deformation limit remains unchanged.
The updated standards from the 2017 revision are as follows:
NIJ armor standard (United States)
Ballistic resistance (before April 2024)
NIJ Standard-0101.06 had specific performance standards for bullet resistant vests used by law enforcement. This rated vests on the following scale against penetration and also blunt trauma protection (deformation):
"Special Threats" were ratings of armor which provide protection against specific projectiles. For example, the NIJ guidelines did not have any specification for armor that can stop M855 armor piercing ammunition. As a result, some manufacturers designated specific armors as "Level III+" (a designation not recognized by the NIJ) to specify armor which had up to level III protection and could protect against special threats like the M855, but did not provide level IV protection.
Ballistic resistance (after April 2024)
In April 2024, NIJ began testing with NIJ Standard-0101.07 in conjunction with NIJ Standard-0123.00. NIJ Standard-0101.07 outlines testing procedures, while NIJ Standard-0123.00 describes ballistic protection levels. These standards completely replaced the NIJ Standard-0101.06. HG is rated for handgun threats and RF is rated for rifle threats.
The ballistic protection levels outlined in NIJ Standard 0123.00 are as follows:
NIJ standards are used for law enforcement armors. Armor used by the United States military is not required to be tested under NIJ standards. Textile armor is tested for both penetration resistance by bullets and for the impact energy transmitted to the wearer.
Backface deformation
Backface deformation is defined in NIJ Standard-0101.07 as "the indentation in the backing material caused by a projectile impact on the test item during testing". It is measured by shooting armor mounted in front of a backing material, typically oil-based modeling clay. The clay is used at a controlled temperature and verified for impact flow before testing. After the armor is impacted with the test bullet, the vest is removed from the clay and the depth of the indentation in the clay is measured.
Conditioned armor
Some armor tested under NIJ Standard-0101.07 is conditioned before testing, meaning it has been subjected to stress factors such as submersion, vibration, or impacts. These stress factors have been shown in some cases to degrade the performance of some armor material. The test-round velocity for conditioned armor is the same as that for unconditioned armor during testing, whereas in the previous standard the velocities would have varied. For example, under NIJ Standard-0101.06, conditioned Level IIIA would have been shot with a .44 Magnum round at , while unconditioned Level IIIA would have been shot at . Under NIJ Standard-0101.07, the velocity used for testing conditioned and unconditioned armor is the same. Armor conditioning procedures are outlined in ASTM E3078 Standard Practice for Conditioning of Hard Armor Test Items.
Generally, textile armor material temporarily degrades when wet. As a result of this, the major test standards call for wet testing of textile armor. Mechanisms for this loss of performance are not known. Neutral water at room temperature has not been shown in testing to negatively affect the performance of para-aramid or UHMWPE, but acidic, basic and some other solutions can permanently reduce para-aramid fiber tensile strength.
From 2003 to 2005, a large study of the environmental degradation of Zylon armor was undertaken by the US-NIJ. This concluded that water, long-term use, and temperature exposure significantly affect tensile strength and the ballistic performance of PBO or Zylon fiber. This NIJ study on vests returned from the field demonstrated that environmental effects on Zylon resulted in ballistic failures under standard test conditions.
Stab resistance
The NIJ's stab resistance standards (Standard–0115.00) define three levels of protection:
Level 1 armor is low-level protection suitable for extended wear and is usually covert. This armor protects against stab threats with a strike energy of 24±0.50 J (17.7±0.36 ft·lbf). The overtest condition for this level is 36±0.60 J (26.6±0.44 ft·lbf).
Level 2 armor is medium-level protection suitable for extended wear and may be either overt or covert. This armor protects against stab threats with a strike energy of 33±0.60 J (24.3±0.44 ft·lbf). The overtest condition for this level is 50±0.70 J (36.9±0.51 ft·lbf).
Level 3 is high-level protection suitable for wear in high risk situations and is usually overt. This armor protects against stab threats with a strike energy of 43±0.60 J (31.7±0.44 ft·lbf). The overtest condition for this level is 65±0.80 J (47.9±0.59 ft·lbf).
For all three levels, the maximum blade or spike penetration allowed is 7 mm (0.28 in), with this limit being determined through research indicating that internal injuries to organs would be extremely unlikely at this depth of penetration. The overtest condition, which is intended to ensure an adequate margin of safety in the armor design, permits a maximum blade or spike penetration of 20 mm (0.79 in).
The standard does not directly address slash resistance and instead notes that, since stab threats are more difficult to defeat, any armor that can defeat a stab threat will also defeat a slash threat.
US military armor standards
Although the US military requirements for body armor mirror the NIJ's on a surface level, the two are very different systems. The two systems share a limit on back-face deformation, but SAPI-series plates increase linearly in protection (with each plate tested against the preceding plate's threats), and require a soft armor backer in order to reach their stated level of protection.
Armor is tested using a standard set of test methods under ARMY MIL-STD-662F and STANAG 2920 Ed2. The Department of Defense armor programs-of-record (Modular Tactical Vest for example) procure armor using these test standards. In addition, special requirements can be defined under this process such as flexible rifle protection, fragment protection for the extremities, etc.
GA141 armor standard (China)
The Chinese Ministry of Public Security has maintained GA141, a standard document for describing the ballistic resistance of police armor, since 1996. The latest revision is GA141-2010. The standard defines the following grades using domestic weapons:
Levels higher than 6 are marked "special". Levels 1 through 5 are to be tested with 6 shots. Level 6 is to be tested with 2 shots.
Annex A describes the use of GA grades against other "common" threats. 9×18mm Makarov is assigned to GA 1, 9×19mm to GA 2, 9×19mm AP (steel) and 5.8×21mm DAP92 AP to GA 4, 5.8×42mm DBP87 to GA 6, and "type 53" 7.62×54mmR API to "special grade".
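For illustration only, the Annex A assignments listed above can be captured as a small lookup table in Python (the dictionary keys are informal threat names, not wording from the standard):
GA_GRADE = {
    "9x18mm Makarov": "GA 1",
    "9x19mm": "GA 2",
    "9x19mm AP (steel)": "GA 4",
    "5.8x21mm DAP92 AP": "GA 4",
    "5.8x42mm DBP87": "GA 6",
    "type 53 7.62x54mmR API": "special grade",
}
print(GA_GRADE["5.8x42mm DBP87"])   # GA 6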
Ballistic testing V50 and V0
Measuring the ballistic performance of armor is based on determining the kinetic energy of a bullet at impact (Ek = ½mv²). Because the energy of a bullet is a key factor in its penetrating capacity, velocity is used as the primary independent variable in ballistic testing. For most users the key measurement is the velocity at which no bullets will penetrate the armor. Measuring this zero penetration velocity (v0) must take into account variability in armor performance and test variability. Ballistic testing has a number of sources of variability: the armor, test backing materials, bullet, casing, powder, primer and the gun barrel, to name a few.
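As a simple worked example of that kinetic-energy relation in Python (the bullet mass and velocity are invented, typical-order values, not figures from any standard):
def kinetic_energy(mass_kg, velocity_ms):
    # Ek = 0.5 * m * v^2, in joules
    return 0.5 * mass_kg * velocity_ms ** 2
print(f"{kinetic_energy(0.008, 360):.0f} J")   # 8.0 g bullet at 360 m/s -> ~518 J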
Variability reduces the predictive power of a determination of v0. If, for example, the v0 of an armor design is measured to be with a 9 mm FMJ bullet based on 30 shots, the test is only an estimate of the real v0 of this armor. The problem is variability. If the v0 is tested again with a second group of 30 shots on the same vest design, the result will not be identical.
Only a single low-velocity penetrating shot is required to reduce the v0 value. The more shots made, the lower the v0 will go. In terms of statistics, the zero penetration velocity is the tail end of the distribution curve. If the variability is known and the standard deviation can be calculated, one can rigorously set the v0 at a confidence interval. Test standards now define how many shots must be used to estimate a v0 for the armor certification. This procedure defines a confidence interval of an estimate of v0. (See "NIJ and HOSDB test methods".)
v0 is difficult to measure, so a second concept has been developed in ballistic testing called the ballistic limit (v50). This is the velocity at which 50 percent of the shots go through and 50 percent are stopped by the armor. US military standard MIL-STD-662F, V50 Ballistic Test, defines a commonly used procedure for this measurement. The goal is to obtain three penetrating shots that are slower than a second, faster group of three shots that are stopped by the armor. These three high stops and three low penetrations can then be used to calculate a v50 velocity.
In practice this measurement of v50 requires 1–2 vest panels and 10–20 shots. A very useful concept in armor testing is the offset velocity between the v0 and v50. If this offset has been measured for an armor design, then v50 data can be used to measure and estimate changes in v0. For vest manufacturing, field evaluation and life testing both v0 and v50 are used. However, as a result of the simplicity of making v50 measurements, this method is more important for control of armor after certification.
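A minimal Python sketch of the six-shot bookkeeping described above (the shot velocities are invented test data; a real MIL-STD-662F determination also constrains the velocity spread of the shots used):
def v50_estimate(stop_velocities, penetration_velocities, n=3):
    # Average the n highest stopped-shot velocities and the n lowest
    # complete-penetration velocities (n = 3 gives the six-shot estimate).
    shots = sorted(stop_velocities)[-n:] + sorted(penetration_velocities)[:n]
    return sum(shots) / len(shots)
stops = [421, 433, 437, 440]     # m/s, stopped by the armor
pens = [431, 436, 444, 452]      # m/s, complete penetrations
print(f"V50 ~ {v50_estimate(stops, pens):.0f} m/s")   # ~437 m/s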
Military testing: fragment ballistics
After the Vietnam War, military planners developed a concept of "Casualty Reduction". The large body of casualty data made clear that in a combat situation, fragments, not bullets, were the most important threat to soldiers. After WWII, vests were being developed and fragment testing was in its early stages. Artillery shells, mortar shells, aerial bombs, grenades, and antipersonnel mines are all fragmentation devices. They all contain a steel casing that is designed to burst into small steel fragments or shrapnel, when their explosive core detonates. After considerable effort measuring fragment size distribution from various NATO and Soviet bloc munitions, a fragment test was developed. Fragment simulators were designed, and the most common shape is a right circular cylinder or RCC simulator. This shape has a length equal to its diameter. These RCC Fragment Simulation Projectiles (FSPs) are tested as a group. The test series most often includes 2 grain (0.13 g), 4 grain (0.263 g), 16 grain (1.0 g), and 64 grain (4.2 g) mass RCC FSP testing. The 2-4-16-64 series is based on the measured fragment size distributions.
The second part of the "Casualty Reduction" strategy is a study of velocity distributions of fragments from munitions. Warhead explosives have blast speeds of to . As a result, they are capable of ejecting fragments at very high speeds of over , implying very high energy (where the energy of a fragment is ½ × mass × velocity², neglecting rotational energy). The military engineering data showed that, like the fragment size, the fragment velocities had characteristic distributions. It is possible to segment the fragment output from a warhead into velocity groups. For example, 95% of all fragments from a bomb blast under have a velocity of or less. This established a set of goals for military ballistic vest design.
The random nature of fragmentation required the military vest specification to trade off mass vs. ballistic benefit. Hard vehicle armor is capable of stopping all fragments, but military personnel can only carry a limited amount of gear and equipment, so the weight of the vest is a limiting factor in vest fragment protection. The 2-4-16-64 grain series at limited velocity can be stopped by an all-textile vest of approximately 5.4 kg/m² (1.1 lb/ft²). In contrast to the design of vests for deformable lead bullets, fragments do not change shape; they are steel and cannot be deformed by textile materials. The FSP (the smallest fragment projectile commonly used in testing) is about the size of a grain of rice; such small, fast-moving fragments can potentially slip through the vest, moving between yarns. As a result, fabrics optimized for fragment protection are tightly woven, although these fabrics are not as effective at stopping lead bullets.
Backing materials for testing
Ballistic
One of the critical requirements in soft ballistic testing is measurement of "back side signature" (i.e. energy delivered to tissue by a non-penetrating projectile) in a deformable backing material placed behind the targeted vest. The majority of military and law enforcement standards have settled on an oil/clay mixture for the backing material, known as Roma Plastilina. Although harder and less deformable than human tissue, Roma represents a "worst case" backing material when plastic deformations in the oil/clay are low (less than ). (Armor placed over a harder surface is more easily penetrated.) The oil/clay mixture of "Roma" is roughly twice the density of human tissue and therefore does not match its specific gravity; however, "Roma" is a plastic material that will not recover its shape elastically, which is important for accurately measuring potential trauma through back side signature.
The selection of test backing is significant because in flexible armor, the body tissue of a wearer plays an integral part in absorbing the high energy impact of ballistic and stab events. However, the human torso has a very complex mechanical behavior. Away from the rib cage and spine, the soft tissue behavior is soft and compliant. In the tissue over the sternum bone region, the compliance of the torso is significantly lower. This complexity requires very elaborate bio-morphic backing material systems for accurate ballistic and stab armor testing. A number of materials have been used to simulate human tissue in addition to Roma. In all cases, these materials are placed behind the armor during test impacts and are designed to simulate various aspects of human tissue impact behavior.
One important factor in test backing for armor is its hardness. Armor is more easily penetrated in testing when backed by harder materials, and therefore harder materials, such as Roma clay, represent more conservative test methods.
Stab
Stab and spike armor standards have been developed using 3 different backing materials. The Draft EU norm calls out Roma clay, the California DOC called out 60% ballistic gelatin, and the current standard for NIJ and HOSDB calls out a multi-part foam and rubber backing material.
Using Roma clay backing, only metallic stab solutions met the 109 joule Calif. DOC ice pick requirement.
Using 10% gelatin backing, all-fabric stab solutions were able to meet the 109 joule Calif. DOC ice pick requirement.
Most recently, the Draft ISO prEN ISO 14876 norm selected Roma as the backing for both ballistics and stab testing.
This history helps explain an important factor in ballistic and stab armor testing: backing stiffness affects armor penetration resistance. The energy dissipation of the armor-tissue system is Energy = Force × Displacement; when testing on backings that are softer and more deformable, the total impact energy is absorbed at lower force. When the force is reduced by a softer, more compliant backing, the armor is less likely to be penetrated. The use of harder Roma materials in the ISO draft norm makes this the most rigorous of the stab standards in use today.
References
Performance standards
Ballistics
Lists of standards
Weapons countermeasures | List of body armor performance standards | [
"Physics"
] | 4,097 | [
"Applied and interdisciplinary physics",
"Ballistics"
] |
63,111,964 | https://en.wikipedia.org/wiki/Petroleum%20industry%20in%20Equatorial%20Guinea | Equatorial Guinea is a significant oil producer in Africa. Crude oil produced by the country is primarily extracted from the Alba, Zafiro, and Ceiba regions. As a result of the recent increase in the extraction of petroleum, the country's economy has grown significantly. In fact, during the period from 1997 to 2001, the country experienced an average GDP growth of 41.6% per year. However, there have been recent accusations of corruption and repression by the government resulting from the nation's newfound wealth.
History
Throughout much of the 1990s, Equatorial Guinea was considered to be a poor country with few prospects of economic growth. Although the country had been rather successful at independence due to its cocoa industry, the repressive Macias regime eventually diminished it to a shell of its former glory. In fact, the cocoa industry accounted for 75% of the nation's GDP in 1968, the year of its independence from Spain. However, just 10 years later, the overall cocoa production had fallen to one-seventh of what it was before.
In 1980, the Spanish petroleum corporation Hispanoil signed an agreement with the Equatoguinean government to form the joint venture GEPSA. Shortly after, the firm successfully drilled a gas well in the Alba region that appeared to be promising. However, the Spanish oil company decided to withdraw from the country in 1990 due to the lack of a viable market for the gas discovered there. As a result, the government allowed for bids from other companies to extract oil.
Walter International was given the rights to drill in the region. Just one year later, a site close to the original drilling site set up by GEPSA began producing oil. Yet, it was not until 1995, when Mobil struck oil in its Zafiro field, that the country truly became a major oil-producing nation. Soon after in 1999, the American oil firm Triton discovered oil at its Ceiba field.
Due to several corporate changes through the early 2000s, the major oil companies that operated in the country came to be owned by American firms. In contrast, there is a notable absence of British oil firms in the country. While Americans dominate the industry, Shell and BP have both yet to explore for oil there. As a result of the prevalent presence of foreign firms in the country, foreign direct investment from around the world has flooded the nation.
Production and exploration
Thanks to the dramatic increase in oil production in recent years, Equatorial Guinea has become the third largest oil producer in Africa. As a result, its GDP per capita is among the highest in the world. In fact, in 2005, the country had an estimated GDP per capita of $50,240, second only to that of Luxembourg. Across the three main oil fields that the nation relies on, over 425,000 barrels were extracted per day that same year.
The true turning point in the nation's oil industry came with Mobil's discovery of oil in the Zafiro region. In just a few years, overall Equatoguinean oil production increased more than fivefold. To the benefit of Mobil and foreign investors in general, the Nigerian and Equatoguinean governments were able to settle a land dispute in the Zafiro region, which paved the way for greater confidence among foreign firms hoping to set up shop in the country. Another important milestone was the development of the Ceiba oil field by Triton. This proved significant because of its location, far to the south of the other two oil-producing regions and away from the Niger Delta. Despite its significant location, it started as a relatively small operation, producing just 40,000 barrels per day.
Operating agreements
As is the case in many other developing countries, the Equatoguinean government maintains a stake in much of the oil operations in the country. However, it is in no way a key player in the industry. For example, it retains only a 3% share in Alba field operations and a 5% share in Zafiro field operations, shares significantly lower than those held by other industry players in the region, which some observers read as a sign of corruption in the negotiation of the major oil agreements and their final conditions.
The American-based Riggs Bank was involved in a corruption scandal in which the US government accused them and Obiang of embezzling millions of dollars from the government treasury into personal bank accounts. These allegations highlight the increased level of corruption by high level officials as a result of the amount of wealth that has been brought to Equatorial Guinea's shores.
Political implications
Economic and demographic changes
The rapid rise of the petroleum industry in Equatorial Guinea has provided money for the government on two fronts: oil profits and foreign aid. It is important to note that both of these come without any strings attached. Unlike other developing countries, which often have to meet certain requirements to receive aid from foreign donors, the control the government retains over its oil industry gives it a bargaining chip against any sort of involvement in its domestic policies. In fact, there has not been a World Bank lending program for the country since 1999. Additionally, its vast oil reserves allow the government to obtain loans backed by future oil revenues. With these financial resources combined, greater investments in patronage and security forces can be implemented.
Having secured these pools of resources, Obiang has made it a priority to increase the country's international legitimacy, using juridical statehood to control the money entering the country and the benefits that flow from it. Specifically, he has personally asked the British government for help in running his government more efficiently and providing more transparency. In return, the government has received praise from many foreign leaders and institutions, as it did from the IMF in 2003. Despite these initiatives, little has actually been done to reduce corruption or improve the lives of the general population.
A number of demographic changes have come with the oil boom. With the increased wealth of the country, people are more inclined to live in urban settings. However, the higher level of spending has in turn led the domestic currency to inflate. This, coupled with a reduction in foreign aid, has led to an overall reduction in the standard of living. Although those directly employed by oil firms are well paid by local standards, they represent a small sliver of the population. Additionally, that money stays with the oil companies, as company town models are quite prevalent. Therefore, many of the luxuries available within the country are accessible only to a select few. This creates a divide between those involved with the oil industry and those who are not. In fact, it is a common policy that only those directly employed by the companies are allowed to live within the walls of these compounds; even servants must come and go daily. Thus, this system calls into question whether oil money trickles down within the country's society.
Corruption and scandals
Certainly, Obiang's clique has benefited from control of the country's new wealth, even though, under the contracts signed with the extractive gas and oil companies, mainly American firms, the government has access to less than 10% of the proceeds of oil production, a fact that several internationally reported corruption cases have made clear. In recent years, however, this wealth has produced conflicts within the elite. Within his Esangui clan there is a dispute over who has the greatest influence on domestic policy, which has traditionally been controlled by his brothers and sons. The public works projects that have been initiated have been inefficient and have served mostly to facilitate corruption, which is why many seek to control the funneling of such funds. A great deal of wealth is also created through the oil contracting companies from which oil firms draw their labor; the nation's biggest ones are controlled by direct family members of Obiang. This highlights the nepotism, coupled with corruption, that cripples the country. From both these sources of wealth, elites have been able to amass a number of luxuries overseas, such as in Washington, D.C., where Obiang and his son are known to frequent and own several opulent properties.
While in many ways Equatorial Guinea can be considered a rentier state, it is not as effective as many states in the Middle East, such as the United Arab Emirates. While it does employ many tactics of repression, it has very little to offer its citizens, and thus the stability of the regime is questionable. Because of this, repressive tactics and human rights abuses have increased since the oil boom. A particular example of this is Black Beach prison in Malabo, which has received worldwide attention for the alleged violence and torture that occur there. This has been particularly useful for coercing or suppressing opposition forces that threaten the current regime.
In order to appease critics of corruption within the country, the government pledged in 2004 to sign onto the Extractive Industries Transparency Initiative. However, this does little to combat corruption, as it only makes government salaries more transparent, not expenditures. Additionally, the country has delayed putting in place all the protocols the initiative calls for. Thus, commitments to tackle the corruption and nepotism that are prevalent in the country have been shallow and for appearances only, and they have become a handicap to the development of democracy in Equatorial Guinea.
References
Energy in Equatorial Guinea
Petroleum industry
Political economy | Petroleum industry in Equatorial Guinea | [
"Chemistry"
] | 1,931 | [
"Petroleum industry",
"Petroleum",
"Chemical process engineering"
] |
65,968,923 | https://en.wikipedia.org/wiki/Entanglement%20depth | In quantum physics, entanglement depth characterizes the strength of multiparticle entanglement. An entanglement depth of k means that the quantum state of a particle ensemble cannot be described under the assumption that particles interacted with each other only in groups having fewer than k particles. It has been used to characterize the quantum states created in experiments with cold gases.
Definition
Entanglement depth appeared in the context of spin squeezing. It turned out that to achieve larger and larger spin squeezing, and thus larger and larger precision in parameter estimation, a larger and larger entanglement depth is needed.
Later it was formalized in terms of convex sets of quantum states, independent of spin squeezing, as follows. Let us consider a pure state that is the tensor product of multi-particle quantum states

|Ψ⟩ = |ψ₁⟩ ⊗ |ψ₂⟩ ⊗ ... ⊗ |ψ_M⟩.

The pure state is said to be k-producible if all |ψ_i⟩ are states of at most k particles. A mixed state is called k-producible if it is a mixture of pure states that are all at most k-producible.

The k-producible mixed states form a convex set.

A quantum state contains multipartite entanglement of at least k+1 particles if it is not k-producible. An N-particle state with N-particle entanglement is called genuine multipartite entangled.

Finally, a quantum state has an entanglement depth of k if it is k-producible but not (k−1)-producible.
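As a concrete illustration (these are standard textbook examples, not drawn from the sources of this article), the N-qubit GHZ state is genuinely N-partite entangled and therefore has the maximal entanglement depth N, while a tensor product of Bell pairs is 2-producible and has entanglement depth 2:

```latex
% Standard illustrative examples of entanglement depth:
\begin{align}
|\mathrm{GHZ}_N\rangle &= \tfrac{1}{\sqrt{2}}\bigl(|0\rangle^{\otimes N} + |1\rangle^{\otimes N}\bigr)
  & &\text{not } (N-1)\text{-producible: depth } N,\\[2pt]
|\Phi^{+}\rangle^{\otimes N/2} &= \Bigl(\tfrac{|00\rangle + |11\rangle}{\sqrt{2}}\Bigr)^{\otimes N/2}
  & &\text{2-producible but not 1-producible: depth } 2.
\end{align}
```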
It has also been possible to detect entanglement depth close to states other than spin-squeezed states. Since there is no general method to detect multipartite entanglement, these methods had to be tailored to experiments with various relevant quantum states.
Thus, entanglement criteria have been developed to detect entanglement close to symmetric Dicke states with zero mean spin, ⟨J_z⟩ = 0.
They are very different from spin-squeezed states, since they do not have a large spin polarization.
They can provide Heisenberg limited metrology, while they are more robust to particle loss than Greenberger-Horne-Zeilinger (GHZ) states.
There are also criteria for detecting the entanglement depth in planar-squeezed states. Planar squeezed states are quantum states that can be used to estimate a rotation angle that is not expected to be small.
Finally, multipartite entanglement can be detected based on the metrological usefulness of the quantum state. The criteria applied are based on bounds on the quantum Fisher information.
Experiments
The entanglement criterion in Ref. has been used in many experiments with cold gases in spin-squeezed states.
There have also been experiments in cold gases for detecting multipartite entanglement in symmetric Dicke states.
There have been also experiments with Dicke states that detected entanglement based on metrological usefulness in cold gases and in photons.
References
Quantum information science
Quantum optics
Optical quantities | Entanglement depth | [
"Physics",
"Mathematics"
] | 580 | [
"Physical quantities",
"Quantum optics",
"Quantity",
"Quantum mechanics",
"Optical quantities"
] |
65,973,341 | https://en.wikipedia.org/wiki/Intermediate%20luminosity%20optical%20transient | An Intermediate Luminosity Optical Transient (ILOT) is an astronomical object which undergoes an optically detectable explosive event with an absolute magnitude (M) brighter than a classical nova (M ~ −8) but fainter than that of a supernova (M ~ −17). That nine magnitude range corresponds to a factor of nearly 4000 in luminosity, so the ILOT class may include a wide variety of objects. The term ILOT first appeared in a 2009 paper discussing the nova-like event NGC 300 OT2008-1. As the term has gained more widespread use, it has begun to be applied to some objects like KjPn 8 and CK Vulpeculae for which no transient event has been observed, but which may have been dramatically affected by an ILOT event in the past. The number of ILOTs known is expected to increase substantially when the Vera C. Rubin Observatory becomes operational.
A very wide variety of objects have been classified as ILOTs in the astronomical literature. Kashi and Soker proposed a model for the outburst of ASASSN-15qi, in which a Jupiter-mass planet is tidally destroyed and accreted onto a young main sequence star. Luminous red novae, believed to be caused by the merger of two stars, are classified as ILOTs. Some luminous blue variables, such as η Car, have been classified as ILOTs. Some objects which have been classified as failed supernovae may be ILOTs. The common thread tying all of these objects together is a transfer of a large amount of mass (0.001 M⊙ to a few M⊙) from a planet or star to a companion star over a short period of time, leading to a massive eruption. That large range in accretion mass explains the large range in ILOT event brightness.
See also
Fast blue optical transient
References
External links
The ILOT Club
Stellar phenomena
Astronomical events | Intermediate luminosity optical transient | [
"Physics",
"Astronomy"
] | 389 | [
"Physical phenomena",
"Stellar phenomena",
"Astronomical events"
] |
65,974,052 | https://en.wikipedia.org/wiki/Annick%20Pouquet | Annick Gabrielle Pouquet (born 24 December 1946) is a computational plasma physicist specializing in plasma turbulence. She was awarded the 2020 Hannes Alfvén Prize for "fundamental contributions to quantifying energy transfer in magneto-fluid turbulence". She currently holds positions in the Laboratory for Atmospheric and Space Physics and National Center for Atmospheric Research at the University of Colorado Boulder.
The Politano–Pouquet relation, an exact scaling law for magnetohydrodynamic turbulence, was partly named after her.
Early life and career
Pouquet graduated with a thèse d'état in astrophysics (French equivalent of Ph.D.) from the Observatoire de Nice in 1976, where she studied turbulence in the presence of a magnetic field using models and direct numerical simulation. She then remained at the observatory upon graduation, eventually becoming the director of the observatory's Cassini Laboratory in 1998.
In 2000, Pouquet joined the National Center for Atmospheric Research (NCAR) at the University of Colorado Boulder as director of the Geophysical Turbulence Program and section head of the Turbulence Numerics Team, where she led efforts to investigate wave-turbulence interactions in the Earth's atmosphere and in space. Pouquet also founded the Earth and Sun System Laboratory as the first acting director in 2004 and became its deputy director between 2006 and 2009.
Pouquet retired in 2013 and became an emeritus Senior Scientist at NCAR. She also became an adjunct professor at the Department of Applied Mathematics, and a visiting scientist of the Laboratory for Atmospheric and Space Physics at the University of Colorado Boulder.
Honours and awards
Pouquet was inducted as a fellow of the American Physical Society in 2004. She was awarded the Hannes Alfvén Prize in 2020.
References
1946 births
Living people
21st-century American physicists
20th-century French physicists
Plasma physicists
Fellows of the American Physical Society
Côte d'Azur University alumni
Paris Diderot University alumni | Annick Pouquet | [
"Physics"
] | 393 | [
"Plasma physicists",
"Plasma physics"
] |
51,774,438 | https://en.wikipedia.org/wiki/Active%20design | Active design is a set of building and planning principles that promote physical activity. Active design in a building, landscape or city design integrates physical activity into the occupants' everyday routines, such as walking to the store or making a photocopy. Active design involves urban planners, architects, transportation engineers, public health professionals, community leaders and other professionals in building places that encourage physical activity as an integral part of life. While not an inherent part of active design, most designers employing "active design" are also concerned with the productive life of their buildings and their building's ecological footprint.
History
In England
Sport England considers that the built environment has a vital role to play to encourage people to be physically active as part of their daily lives, enabling communities to lead more active and healthy lifestyles. In 2007 Sport England and David Lock Associates published Active Design, which provided a set of design guidelines to help promote opportunities for sport and physical activity in the design and layout of new development. The guidance was developed in two phases. Phase one (2005) developed the three key active design objects of improving accessibility, enhancing amenity and increasing awareness ("the 3 A's"). Phase two included two stakeholder sessions (May and October 2006) which expanded "the 3 A's" into a criterion-based approach. These criteria formed the guidance which was published in 2007. The guidance was supported by CABE, Department of Health and Department for Culture Media and Sport.
In 2014, Sport England held a stakeholder session made up of a range of bodies and individuals including urban planning and public health professionals to discuss whether active design was still relevant in the current planning and health context, and they concluded that it was. The guide was revised, retaining "the 3 A's" and refining the criteria-based approach to the ten principles of active design. The revised Active Design was published in 2015, and was supported by Public Health England.
In 2016 Active Design: Planning for Health and Wellbeing through Sport and Physical Activity was shortlisted for an award at the Royal Town Planning Institute (RTPI) Awards for Planning Excellence. Active Design was shortlisted in the category of "Excellence in Planning for Community and Wellbeing".
In 2017 Sport England prepared two animated films, Active Design by Sport England and The Ten Principles of Active Design, in addition to three further case studies.
The active design principles are becoming increasingly embedded into built environment practice and placemaking design, with a growing list of local authorities in England making reference to Sport England's active design guidance in planning policy. In 2018 active design was embedded into the principles of the revised "Essex Design Guide" (prepared by Essex County Council and supported by Sport England).
In New York
Recognizing that physical inactivity was a significant factor in decreased life spans, notably because it promoted obesity, high blood pressure and high blood glucose, all precursors of early death, those responsible for planning in New York City developed a set of guidelines that, inter alia, they hoped would promote health by promoting physical activity. They released these guidelines in January 2010. The guidelines were also based on concerns about building longevity and ecological costs, which is generally known as "sustainable design". Impetus for the guidelines began in 2006 with the NYC Department of Health and Mental Hygiene (DOHMH) who then partnered with the American Institute of Architects New York Chapter to hold a series of conferences known as the "Fit City" conferences.
Four key concepts came out of this process:
Buildings should encourage greater physical movement within them for users and visitors
Cities should provide recreational spaces that are accessible and encourage physical activity for a variety of ages, interests, and abilities
Transportation systems in cities should encourage physical activity and should protect non-motor vehicle use
Cities, market areas and buildings should provide ready access to food and healthy eating environments
From New York City the active design movement spread throughout the United States and the world.
Goals
Illness reduces how efficiently and effectively people work, and an ineffective workforce harms both companies and the wider community. Active design strives to benefit public health not only physically but also mentally and socially. For example, active design in transportation supports a safe and vibrant environment for pedestrians, cyclists and transit riders. It creates buildings that encourage greater physical movement within a building by both users and visitors. The active design of recreation sites shapes play and activity spaces for people of different ages, interests, and abilities. Also, improved food accessibility can improve nutrition in the communities that need it the most.
Effects
There are few studies of the effects of implementing active design concepts, but they are in general agreement that the physical activity of occupants is increased. Moving to an active design building seemed to have physical health benefits for workers, but workers' perceptions of productivity in the new work environment have varied. One study reported that staff who moved into an active design building decreased the time they spent sitting by 1.2 hours per day. There was no significant increase in self-rated quality of work or work-related motivation, but neither was there negative feedback in these areas.
The National Institute for Health and Care Research (NIHR) has published a review of research on public health interventions to prevent obesity. The review covers interventions looking at active travel (including walk and cycle lanes), the impact of new roads, public transport, access to green spaces, blue spaces, and parks, and urban regeneration.
Implementation
Active design concepts may be applied in remodeling or repurposing existing buildings and landscapes. Some elements include widening sidewalks and crosswalks; installing traffic calming elements that slow driving speeds; making stairs that are accessible, visible, attractive, and well-lit; making recreation areas, such as parks, plazas, and playgrounds, more accessible by pedestrians and cyclists. People would be more likely to be active if places for recreation were within walking distance.
There are a number of concerns with the adoption of active design programmes. Developing communities are not always accepting of new forms of architecture and living. Integration of active design may come in conflict with making sure historical culture survives. Vernacular architecture may be abandoned due to it being considered insufficient or uncomfortable.
Future
The future of active design may be to further incorporate requirements into law, as in the city of New York which set active design guidelines to improve public health in the city.
See also
Car-free movement
Cycling infrastructure
Street reclamation
Walkability
Walking audit
References
Further reading
Urban design
Architectural design
Landscape architecture
Public health
Architecture articles needing expert attention | Active design | [
"Engineering"
] | 1,300 | [
"Design",
"Landscape architecture",
"Architectural design",
"Architecture"
] |
51,775,967 | https://en.wikipedia.org/wiki/Real-time%20path%20planning | Real-time path planning is a term used in robotics for motion planning methods that can adapt to real-time changes in the environment. These range from primitive algorithms that stop a robot when it approaches an obstacle to more complex algorithms that continuously take in information from the surroundings and create a plan to avoid obstacles.
These methods are different from something like a Roomba robot vacuum as the Roomba may be able to adapt to dynamic obstacles but it does not have a set target. A better example would be Embark self-driving semi-trucks that have a set target location and can also adapt to changing environments.
The targets of path planning algorithms are not limited to locations alone. Path planning methods can also create plans for stationary robots to change their poses. An example of this can be seen in various robotic arms, where path planning allows the robotic system to change its pose without colliding with itself.
As a subset of motion planning, it is an important part of robotics as it allows robots to find the optimal path to a target. This ability to find an optimal path also plays an important role in other fields such as video games and gene sequencing.
Concepts
In order to create a path from a starting point to a goal point, the various areas within the simulated environment must be classified. This allows a path to be created in a 2D or 3D space in which the robot can avoid obstacles.
Work Space
The work space is an environment that contains the robot and various obstacles. This environment can be either 2-dimensional or 3-dimensional.
Configuration Space
The configuration of a robot is determined by its current position and pose. The configuration space is the set of all configurations of the robot. By containing all the possible configurations of the robot, it also represents all transformations that can be applied to the robot.
Within the configuration sets there are additional sets of configurations that are classified by the various algorithms.
Free Space
The free space is the set of all configurations within the configuration space that does not collide with obstacles.
Target Space
The target space is the configuration that we want the robot to reach.
Obstacle Space
The obstacle space is the set of configurations within the configuration space to which the robot is unable to move.
Danger Space
The danger space is the set of configurations where the robot can move through but does not want to. Oftentimes robots will try to avoid these configurations unless they have no other valid path or are under a time restraint. For example, a robot would not want to move through a fire unless there were no other valid paths to the target space.
Methods
Global
Global path planning refers to methods that require prior knowledge of the robot's environment. Using this knowledge it creates a simulated environment where the methods can plan a path.
Rapidly Exploring Random Tree (RRT)
The rapidly exploring random tree method incrementally builds a tree of feasible motions rooted at the starting configuration: at each step a random configuration is sampled and the nearest node of the tree is extended a short distance toward it. By chaining these motions together, a path is created for the robot to reach the target from the starting configuration.
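The following is a minimal 2D sketch of this idea; the step size, goal tolerance, workspace bounds, and the single circular obstacle are all assumptions chosen for illustration, not parameters of any particular planner.

```python
import math
import random

def rrt(start, target, step=0.5, goal_tol=0.5, max_iters=5000,
        bounds=(0.0, 10.0), obstacle=((5.0, 5.0), 1.5)):
    """Grow a tree from start toward randomly sampled configurations."""
    (ox, oy), orad = obstacle            # one circular obstacle (assumed)
    nodes = [start]
    parent = {start: None}
    for _ in range(max_iters):
        sample = (random.uniform(*bounds), random.uniform(*bounds))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        # Extend the nearest node a fixed step toward the sample.
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if math.dist(new, (ox, oy)) <= orad:
            continue                     # reject nodes in the obstacle space
        nodes.append(new)
        parent[new] = near
        if math.dist(new, target) < goal_tol:
            path = [new]                 # walk back up the tree to the root
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None                          # iteration budget exhausted

path = rrt((1.0, 1.0), (9.0, 9.0))
print(f"found a path with {len(path)} waypoints" if path else "no path found")
```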
Local
Local path planning refers to methods that take in information from the surroundings in order to generate a simulated field where a path can be found. This allows a path to be found in the real-time as well as adapt to dynamic obstacles.
Probabilistic Roadmap (PRM)
The probabilistic roadmap method connects nearby configurations in order to determine a path from the starting to the target configuration. The method is split into two parts: a preprocessing phase and a query phase. In the preprocessing phase, algorithms evaluate various configurations and motions to see whether they lie in free space. Then, in the query phase, the algorithm connects the starting and target configurations through a variety of paths and uses Dijkstra's shortest path query to find the optimal one.
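A compact sketch of both phases follows; the sample count, connection radius, and the single circular obstacle are assumptions for illustration, and the connection test checks only the endpoints rather than the full straight-line segment.

```python
import heapq
import math
import random

def collision_free(p, obstacle=((5.0, 5.0), 1.5)):
    (ox, oy), r = obstacle
    return math.dist(p, (ox, oy)) > r

def prm(start, goal, n_samples=200, connect_radius=1.5, bounds=(0.0, 10.0)):
    # Preprocessing phase: sample free-space configurations, link neighbours.
    nodes = [start, goal] + [
        p for p in ((random.uniform(*bounds), random.uniform(*bounds))
                    for _ in range(n_samples)) if collision_free(p)]
    edges = {n: [] for n in nodes}
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            d = math.dist(a, b)
            if d <= connect_radius:      # naive endpoint-only connection test
                edges[a].append((b, d))
                edges[b].append((a, d))
    # Query phase: Dijkstra's shortest-path search over the roadmap.
    dist, frontier = {start: 0.0}, [(0.0, start)]
    while frontier:
        d, u = heapq.heappop(frontier)
        if u == goal:
            return d                     # length of the shortest roadmap path
        if d > dist.get(u, math.inf):
            continue
        for v, w in edges[u]:
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                heapq.heappush(frontier, (d + w, v))
    return None

print(prm((1.0, 1.0), (9.0, 9.0)))
```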
Evolutionary Artificial Potential Field (EAPF)
The evolutionary artificial potential field method uses a mix of artificial repulsive and attractive forces in order to plan a path for the robot. The attractive forces originate from the target which leads the path to the target in the end. The repulsive forces come from the various obstacles the robot will come across. Using this mix of attractive and repulsive forces, algorithms can find the optimal path.
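The sketch below shows the basic potential-field step that such methods build on; the gains, influence radius, and point obstacle are assumptions, and this is plain gradient descent rather than the full evolutionary variant described above.

```python
import math

def apf_step(robot, target, obstacle, k_att=1.0, k_rep=2.0,
             influence=3.0, lr=0.05):
    # Attractive force pulls the robot straight toward the target.
    fx = k_att * (target[0] - robot[0])
    fy = k_att * (target[1] - robot[1])
    # Repulsive force pushes away from the obstacle inside its influence radius.
    d = math.dist(robot, obstacle)
    if 0 < d < influence:
        scale = k_rep * (1.0 / d - 1.0 / influence) / d**2
        fx += scale * (robot[0] - obstacle[0])
        fy += scale * (robot[1] - obstacle[1])
    return (robot[0] + lr * fx, robot[1] + lr * fy)

pos, target, obstacle = (0.0, 0.0), (10.0, 10.0), (5.0, 4.8)
for _ in range(500):
    pos = apf_step(pos, target, obstacle)
print(f"final position: ({pos[0]:.2f}, {pos[1]:.2f})")  # ends near the target
```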
Indicative Route Method (IRM)
The indicative route method uses a control path toward the target and an attraction point located at the target. Algorithms are often used to find the control path, which is oftentimes the shortest minimum-clearance path. As the robot stays on the control path, the attraction point at the target configuration leads the robot toward the target.
Modified Indicative Routes and Navigation (MIRAN)
The modified indicative routes and navigation method gives various weights to different paths the robot can take from its current position. For example, a rock would be given a high weight such as 50 while an open path would be given a lower weight such as 2. This creates a variety of weighted regions in the environment which allows the robot to decide on a path towards the target.
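As a toy illustration of such weighted regions, the sketch below runs Dijkstra's algorithm over a small grid whose cell weights are assumptions in the spirit of the example above (50 for a rock, 2 for open ground); the cheapest route goes around the high-weight cells.

```python
import heapq

weights = [
    [2, 2, 2, 2],
    [2, 50, 50, 2],   # a high-weight "rock" region
    [2, 2, 2, 2],
]

def cheapest_path_cost(grid, start=(0, 0), goal=(2, 3)):
    rows, cols = len(grid), len(grid[0])
    best, frontier = {start: 0}, [(0, start)]
    while frontier:
        cost, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return cost
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nxt = cost + grid[nr][nc]   # entering a cell costs its weight
                if nxt < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = nxt
                    heapq.heappush(frontier, (nxt, (nr, nc)))
    return None

print(cheapest_path_cost(weights))  # the cheap route avoids the rock cells
```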
Applications
Humanoid Robots
For many robots the number of degrees of freedom is no greater than three. Humanoid robots on the other hand have a similar number of degrees of freedom to a human body which increases the complexity of path planning. For example, a single leg of a humanoid robot can have around 12 degrees of freedom. The increased complexity comes from the greater possibility of the robot colliding with itself. Real-time path planning is important for the motion of humanoid robots as it allows various parts of the robot to move at the same time while avoiding collisions with the other parts of the robot.
For example, if we were to look at our own arms we can see that our hands can touch our shoulders. For a robotic arm this may pose a risk if the parts of the arms were to collide unintentionally with each other. This is why path planning algorithms are needed to prevent these accidental collisions.
Self-Driving Vehicles
Self-driving vehicles are a form of mobile robots that utilizes real-time path planning. Oftentimes a vehicle will first use global path planning to decide which roads to take to the target. When these vehicles are on the road they have to constantly adapt to the changing environment. This is where local path planning methods allow the vehicle to plan a safe and fast path to the target location.
An example of this would be the Embark self-driving semi-trucks, which uses an array of sensors to take in information about their environment. The truck will have a predetermined target location and will use global path planning to have a path to the target. While the truck is on the road it will use its sensors alongside local path planning methods to navigate around obstacles to safely reach the target location.
Video games
Oftentimes in video games there are a variety of non-player characters that are moving around the game which requires path planning. These characters must have paths planned for them as they need to know where to move to and how to move there.
For example, in the game Minecraft there are hostile mobs that track and follow the player in order to kill the player. This requires real-time path planning as the mob must avoid various obstacles while following the player. Even if the player were to add additional obstacles in the way of the mob, the mob would change its path to still reach the player.
References
Robotics engineering | Real-time path planning | [
"Technology",
"Engineering"
] | 1,477 | [
"Computer engineering",
"Robotics engineering"
] |
51,779,233 | https://en.wikipedia.org/wiki/Cosmic%20wind | Cosmic wind is a powerful cosmic stream of charged particles that can push interstellar dust clouds of low density into intergalactic space. Although it easily pushes low-density gas and dust clouds, it cannot easily push high-density clouds. As cosmic winds push the clouds, the clouds begin to separate and stretch out like taffy being pulled apart. The wind is composed primarily of photons ejected from large stars, and sometimes of thermal energy from exploding stars. It can be caused by the orbital motion of gas in a galaxy cluster, or can be ejected from a black hole. Because new stars and planets form from gases, cosmic winds that push the gases away prevent new stars from forming and thus play a role in galaxy evolution.
Description
These winds arise from the thermal expansion of galactic halo gas heated by O and B stars and are further boosted by cosmic rays, which shoot out and help push gas out of the halo and disk of the galaxy. In supernovae, the winds result from the conversion of the supernova's thermal energy into kinetic energy, again boosted by cosmic rays. It is the combination of these hot and cooling flows that produces cosmic wind. In smaller stars, such as the Sun, the wind comes from the Sun's corona and is referred to as solar wind.
Observation
The presence of cosmic wind in the vicinity of a black hole can be detected through careful inspection of absorption line features in the spectra of the accretion disk surrounding the black hole. These features are commonly seen with X-ray telescopes such as the Chandra X-ray Observatory, NuSTAR, and NICER. Before 2007 this was only theorized to occur, but several physicists, including astrophysicist Andrew Robinson, then analyzed the accretion disk of a galaxy about 3 billion light-years away from the Milky Way. Using the William Herschel Telescope, they observed that the light surrounding the accretion disk was rotating at similar speeds, demonstrating that accretion disks do release winds. The origin and regulating mechanisms of the wind remain an active research topic.
Calculations
One method of characterizing these winds uses the absorption lines. At low redshifts, for ultraviolet star-forming galaxies, the outflow velocity and mass loading factor of the wind scale with the star formation rate (SFR) and stellar mass of the galaxy. The mass of the wind can be estimated from the surface area set by its radius: in the case of a spherically symmetric thin shell, the formula is M = 4π C_f R² μ m_H N_H, where C_f is the covering fraction, R the radius, N_H the column density of hydrogen atoms, m_H the mass of the hydrogen atom, and μ the mean molecular weight.
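A minimal numeric evaluation of that thin-shell estimate follows; every input value here is an assumption chosen purely for demonstration.

```python
import math

C_f = 0.5        # covering fraction (assumed)
R = 3.1e18       # shell radius in cm, roughly 1 parsec (assumed)
N_H = 1.0e21     # hydrogen column density in cm^-2 (assumed)
m_H = 1.67e-24   # mass of the hydrogen atom in g
mu = 1.4         # mean molecular weight (assumed)

# Thin-shell wind mass: M = 4*pi * C_f * R^2 * mu * m_H * N_H
M = 4 * math.pi * C_f * R**2 * mu * m_H * N_H
print(f"wind mass ~ {M:.2e} g (~{M / 1.989e33:.0f} solar masses)")
```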
See also
Galactic superwind
Stellar wind
Solar wind
Planetary wind
Stellar-wind bubble
Colliding-wind binary
Pulsar wind nebula
Superwind
References
Astronomical events
Cosmic dust
Interstellar media
Free-floating substellar objects
Black holes | Cosmic wind | [
"Physics",
"Astronomy"
] | 596 | [
"Black holes",
"Physical phenomena",
"Interstellar media",
"Physical quantities",
"Outer space",
"Astronomical events",
"Unsolved problems in physics",
"Astrophysics",
"Density",
"Stellar phenomena",
"Astronomical objects",
"Cosmic dust"
] |
51,779,654 | https://en.wikipedia.org/wiki/Discovery%20and%20development%20of%20NS5A%20inhibitors | Nonstructural protein 5A (NS5A) inhibitors are direct acting antiviral agents (DAAs) that target viral proteins, and their development was a culmination of increased understanding of the viral life cycle combined with advances in drug discovery technology. However, their mechanism of action is complex and not fully understood. NS5A inhibitors were the focus of much attention when they emerged as a part of the first curative treatment for hepatitis C virus (HCV) infections in 2014. Favorable characteristics have been introduced through varied structural changes, and structural similarities between NS5A inhibitors that are clinically approved are readily apparent. Despite the recent introduction of numerous new antiviral drugs, resistance is still a concern and these inhibitors are therefore always used in combination with other drugs.
Hepatitis C virus
HCV is a positive-sense single-stranded RNA virus that has been demonstrated to replicate in the hepatocytes of both humans and chimpanzees. A single HCV polyprotein is translated, and then cleaved by cellular and viral proteases into three structural proteins (core, E1, and E2) and seven nonstructural proteins (p7, NS2, NS3, NS4A, NS4B, NS5A, and NS5B).
HCV is among the leading causes of liver disease around the world. It is transmitted by blood and is most commonly contracted through the use of infected needles. Patients with chronic HCV infection are at significant risk of cirrhosis and hepatocellular carcinoma, which are the leading causes of death for those infected.
The virus has been around for over a millennium and has been classified into six known genotypes, each of which contains numerous subtypes. The seventh remains uncharacterized. The genotype contracted dictates which specific treatments are viable.
NS5A receptor
Basic structure and chemical properties
NS5A is a large hydrophilic phosphoprotein that is essential for the HCV life cycle and is found in association with virus-induced membrane vesicles, termed the membranous web. NS5A is a proline-rich protein composed of approximately 447 amino acids, which is divided into three domains. These domains are linked by two low-complexity sequences that are either serine- or proline-rich. Domain I is a zinc-binding domain, and X-ray crystallography studies have indicated alternative dimer conformations of domain I of NS5A. Domains II and III are unstructured, as shown by NMR studies. Domain I is preceded by an N-terminal amphipathic helix which allows the protein to associate with endoplasmic reticulum-derived membranes. Although X-ray crystallographic studies revealed dimer conformations of NS5A domain I, recent in-solution structural characterization studies have shown that NS5A proteins form higher-order structures from dimeric subunits of NS5A domain I. Moreover, the overall structural model of NS5A highlights the variability of the intrinsic conformations of the D2 and D3 domains between HCV genotypes. Therefore, it is still under debate which conformation(s) of NS5A are functional and targeted by NS5A inhibitors.
NS5A mainly exists in two distinct phosphorylated forms, a hypophosphorylated and a hyperphosphorylated form, but the exact function of the phosphorylation has not been determined.
Function
The NS5A protein plays an important role in viral RNA replication, viral assembly, and complex interactions with cellular functions. The protein has been implicated in the modulation of host defenses, apoptosis, the cell cycle, and stress-responsive pathways. However, its function and complete structure have yet to be elucidated.
NS5A seems to be key in triggering the formation of the membranous web in the absence of other similar nonstructural proteins. Many proteins within the host cell can be affected by NS5A, e.g. phosphatidylinositol 4-kinase IIIα (PI4KIIIα), a kinase required for the replication of HCV. This kinase takes part in the biosynthesis of phosphatidylinositol 4-phosphate (PI4P) by interacting with NS5A, which stimulates its activity and appears to improve the integrity of the membranous web.
Recently, the central role of NS5A in viral proliferation has made it the target for drug development. As a result, new antiviral agents have been introduced for the treatment of HCV.
Mechanism of action
NS5A inhibitors have been developed to target the NS5A protein. These inhibitors have achieved a significant reduction in HCV RNA blood levels and can therefore be considered as potent antivirals. Their mechanism of action is thought to be diverse but the exact mechanism is not fully understood. Most studies assume that NS5A inhibitors act on two essential stages of the HCV life cycle; the replication of the genomic RNA, and virion assembly. Other studies propose an alteration of host cell factors as a possible third mechanism.
The structure of NS5A inhibitors is characterized by dimeric symmetry. This suggests that NS5A inhibitors act on dimers of NS5A. A number of modeling studies have shown that daclatasvir, which is an NS5A inhibitor, only binds to the "back-to-back" NS5A dimer and that the binding has to be symmetrical. Other modeling studies have shown that binding to other conformations of NS5A might be possible, as well as asymmetrical binding. Research has shown that daclatasvir's target is most likely domain I of NS5A. Even though the mechanism is not completely understood, it has been demonstrated that the inhibitors downregulate NS5A hyperphosphorylation, leading to the suppression of HCV replication and its processing of polyproteins, as well as resulting in an unusual protein location. Hitherto, this inhibition was thought to require only NS5A domain I, but not domains II and III. However, recent studies have shown that both domains I and II are relevant to this disruption of RNA replication.
NS5A inhibitors appear to furthermore disrupt the formation of new replicase complexes resulting in a gradual slowing of viral RNA synthesis. Effect on previously formed complexes has yet to be demonstrated.
Available evidence suggests that NS5A inhibitors modify the location of NS5A inside the cell. This may cause abnormal assembly leading to malformed viruses. Some studies have revealed that inhibition of the viral assembly has a more important role in RNA reduction than viral replication reduction.
Studies have shown that NS5A inhibitors block the formation of the membranous web, which protects the viral genome and features the main sites for viral replication and assembly. This mechanism is thought to be independent of RNA replication, but seems to be affected by NS5A inhibitors blocking the formation of the PI4KIIIα-NS5A complex, essential to the synthesis of the PI4P, resulting in decreased integrity of the membranous web and therefore reduced HCV RNA replication.
History
HCV research has taken great strides in recent years with the discovery and clinical development of multiple new HCV drugs. Among those drugs are the DAAs which include NS5A inhibitors.
NS5A inhibitors have been found particularly effective in the treatment of HCV where they have been used in combination with protease inhibitors such as glecaprevir or NS5B inhibitors (e.g., sofosbuvir), pegylated interferons (e.g. peginterferon alfa-2a), and ribonucleic analogs (e.g. ribavirin). The ever present risk of viral strains developing resistance has been a main factor in why they are used in combination with one or more complementary drug.
Adverse effects, and extensive and complicated drug regimens with accompanying low compliance rates, have been a hindrance in the development of antiviral treatments. The combination of NS5A and NS5B inhibitors has produced positive results in this regard.
Drug discovery and development
Discovery
The discovery of NS5A inhibitors took place within the context of a pursuit for a treatment for HCV. NS5A is among the seven nonstructural proteins that form a complex with viral RNA within infected cells to initiate HCV replication. HCV research has produced several DAAs including NS3A, NS4A and NS5B inhibitors, as well as NS5A inhibitors.
Development
The development of antiviral drugs capable of interfering with the proteins responsible for viral replication has been intimately linked with advancements in techniques for establishing the efficient cell culture systems needed to screen for them.
In 1999 a breakthrough came when a full-length consensus genome cloned from HCV RNA was found to replicate at high levels when transfected into a human hepatoma cell line. This method has since been improved upon with the use of cell culture-adaptive mutations that enhance RNA replication.
Screening has now produced a number of NS5A inhibitors, which have been incorporated into treatments for HCV. The first in this new class of drugs was daclatasvir (Daklinza), gaining first global approval from the Japanese Ministry of Health, Labour and Welfare (MHLW) in July 2014 in combination with asunaprevir. Daclatasvir received FDA approval in July 2015. Other drugs have since been approved, among them notably the first FDA-approved NS5A inhibitor ledipasvir, approved October 2014 in combination with sofosbuvir to comprise the HCV drug Harvoni.
Although NS5A inhibitors have proven effective antivirals, they must be used alongside complementary antiviral drugs due to how quickly they lead to the development of resistant mutations when given as a single agent. This has shaped the focus of NS5A inhibitor development, from which asymmetrical variants that metabolize into analogues with complementary resistance profiles have emerged, amongst other discoveries.
Structure-activity relationship
The structural similarities between the inhibitors are readily apparent. The appendages of the central core are typically symmetrical and have an imidazole-proline structure. The natural L-configuration of the proline derivatives was found to be critical for inhibition since the unnatural D-configuration had drastically weaker activity. The potency of the inhibitors was correspondingly sensitive to changes in the amine capping element. These observations suggest that the amine region of the molecules plays an important role in the inhibitory activity.
Favorable characteristics in an NS5A inhibitor include high potency and long plasma half-life in order to achieve a once-daily-dosage. Slightly asymmetrical appendages, as seen in ledipasvir, were found to have distinctive benefits for the optimization of inhibitor potency and pharmacokinetics. The structure of the central core changes the spacing and the projection of the appendages as well as the position of the lipophilicity in the central core, which affects inhibitory activity notably. Structures with fused central rings consistently show greater inhibitory activity, whereas less lipophilic central cores provide weaker activity. Symmetrical bis-imidazol structures, such as daclatasvir, experience a loss in potency when fluorene is substituted for the biaryl group. This substitution also gives rise to some serious stability problems. However, a smaller lipophilic connector such as difluoromethylene generates the most potent inhibitor in an asymmetrical structure. Additionally, it provides improved bioavailability and more favorable plasma half-life. There is also a remarkable increase in potency when phenyl is replaced with naphthyl as a central core. This increase is significantly higher in an asymmetrical structure than it is in a symmetrical structure. In asymmetrical structures, a difference in potency between the phenyl-alkyne inhibitors demonstrates the importance of the position of the lipophilicity. A more centrally located alkyne, which is a less lipophilic connector than phenyl, improves potency.
Resistance
The potential HCV resistance against DAA drugs is a concern. Among the HCV quasispecies there are pre-existing variants with the potential to confer resistance to NS5A inhibitors without having any previous exposure to those drugs. Generally, the replication of these variants happens only in minute quantities, making them undetectable by current techniques. On the other hand, it is possible to selectively grow immune variants in the presence of NS5A inhibitors. HCV resistance is characterized by a certain escape pattern. This pattern is often associated with amino acid substitutions that confer upon the virus a robust drug resistance without impairing the viral fitness. It has been established that NS5A inhibitors possess a relatively low threshold for resistance, and variants that are associated with NS5A resistance have been shown to endure for up to six months in patients following treatment cessation. Therefore, combination therapies produce higher efficacy and shorter treatment periods.
Future research and new generations of NS5A inhibitors
DAA developers face foreseeable challenges in the years to come. Therapeutic gaps for individuals with complicating conditions such as chronic kidney disease and cirrhosis will need to be bridged. Shorter therapies with milder side effects would yield greater adherence, and the ever-present spectre of drug resistance is looming. The highly adaptive HCV has evolved into a number of different genotypes that all need to be adequately treated, preferably with pan-genotypic regimens.
Some of these challenges already have possible solutions in sight. The protease inhibitor ABT-493 and the next-generation NS5A inhibitor ABT-530 are considered active against all HCV genotypes, including the hard to treat genotype 3. In vitro, ABT-530 showed potency against the resistance associated variants which are immune to the first generations of NS5A inhibitors, including ledipasvir, daclatasvir and ombitasvir. Because this drug combination has the additional quality of being hepatically cleared, it holds the promise that patients with chronic kidney disease and HCV could receive a safe, non-sofosbuvir-based treatment in the near future.
At least three drug combinations for the treatment of HCV are in the pipeline to be approved in 2016-2017: Sofosbuvir in combination with velpatasvir, ABT-493 in combination with ABT-530, and grazoprevir in combination with elbasvir, of which velpatasvir, ABT-530 and elbasvir are NS5A inhibitors.
See also
References
NS5A inhibitors
Drug discovery | Discovery and development of NS5A inhibitors | [
"Chemistry",
"Biology"
] | 3,071 | [
"Life sciences industry",
"Medicinal chemistry",
"Drug discovery"
] |
64,479,442 | https://en.wikipedia.org/wiki/Simcenter%20STAR-CCM%2B | Simcenter STAR-CCM+ is a commercial Computational Fluid Dynamics (CFD) based simulation software developed by Siemens Digital Industries Software. Simcenter STAR-CCM+ allows the modeling and analysis of a range of engineering problems involving fluid flow, heat transfer, stress, particulate flow, electromagnetics and related phenomena.
Formerly known as STAR-CCM+, the software was first developed by CD-adapco and was acquired by Siemens Digital Industries Software as part of the purchase of CD-adapco in 2016. It is now a part of the Simcenter Portfolio of software tools.
History
Development work on STAR-CCM+ was started after a decision was taken to design a new, integrated CFD tool to replace the existing product STAR-CD which had been developed during the 1980s and 1990s by Computational Dynamics Ltd, a spin-off company from an Imperial College London CFD research group. STAR-CD was widely used most notably in the automotive industry. STAR-CCM+ aimed to take advantage of more modern programming methods and to provide an expandable framework.
STAR-CCM+ was announced at the 2004 AIAA Aerospace Sciences Conference in Reno, Nevada. A unique feature was a generalized polyhedral cell formulation, allowing the solver to handle any mesh type imported. The first official release included the first commercially available polyhedral mesher, offering faster model convergence compared to an equivalent tetrahedral mesh.
Development
Simcenter STAR-CCM+ is developed according to a continual improvement process, with a new version released every four months. The program uses a client-server architecture, implemented using object-oriented programming.
Capabilities
Simcenter STAR-CCM+ is primarily computational fluid dynamics software which uses the finite element or finite volume method to calculate the transport of physical quantities on a discretized mesh. For fluid flow, the Navier–Stokes equations are solved in each of the cells.
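To make the cell-by-cell idea concrete, here is a minimal one-dimensional finite-volume sketch for a passively advected scalar. This illustrates the method in general; it is not Simcenter STAR-CCM+ code, and all grid and flow parameters are assumptions.

```python
# 1-D finite-volume upwind advection of a scalar phi with periodic boundaries.
n_cells, dx, dt, u = 100, 0.01, 0.005, 1.0          # mesh and flow (assumed)
phi = [1.0 if 20 <= i < 40 else 0.0 for i in range(n_cells)]  # initial pulse

for _ in range(100):                                 # march in time
    flux = [u * phi[i] for i in range(n_cells)]      # upwind face fluxes (u > 0)
    # Update each cell from the net flux through its faces; flux[i-1] wraps
    # around via Python's negative indexing, giving periodic boundaries.
    phi = [phi[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n_cells)]

print(f"total transported quantity: {sum(phi) * dx:.3f}")    # conserved
```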
Simcenter STAR-CCM+ has multiphysics capabilities including:
Fluid flow through porous media
Multiphase flow
Discrete element method
Volume of fluid method
Non-Newtonian fluid
Rheology
Turbulence
Viscoelasticity
Release history
Usage
Prior to CD-adapco's acquisition by Siemens, the customer base was approximately 3,200 accounts with 52% of licence sales attributed to the automotive industry.
See also
Computational fluid dynamics
Computer simulation
Computer-aided design
Computer-aided engineering
References
External links
Simcenter STAR-CCM+ webpage
Simcenter STAR-CCM+ on Simcenter Community
Simcenter STAR-CCM+ Technical Forum
Simulation software
Numerical software
Computational fluid dynamics
Computer-aided engineering
Computer-aided engineering software
Finite element software | Simcenter STAR-CCM+ | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 541 | [
"Computational fluid dynamics",
"Mathematical software",
"Industrial engineering",
"Computational physics",
"Construction",
"Computer-aided engineering",
"Numerical software",
"Fluid dynamics"
] |
64,482,200 | https://en.wikipedia.org/wiki/Selenite%20fluoride | A selenite fluoride is a chemical compound or salt that contains fluoride and selenite anions (F⁻ and SeO₃²⁻). These are mixed anion compounds. Some have third anions, including nitrate (NO₃⁻), molybdate (MoO₄²⁻), oxalate (C₂O₄²⁻), selenate (SeO₄²⁻), silicate (SiO₄⁴⁻) and tellurate.
Naming
A selenite fluoride compound may also be called a fluoride oxoselenate(IV) using IUPAC naming for inorganic compounds.
Production
Rare earth selenite fluorides can be produced by dissolving the rare earth selenate into molten lithium fluoride at over 800°C, whereupon the selenate loses oxygen to become selenite, and crystals of the selenite fluoride can form. Another similar method involves heating the rare earth oxide with a rare earth fluoride and selenium dioxide in a caesium bromide flux. If glass or silica containers are used, they are eaten away by the molten flux and silicates are formed, some of which may be fluoride selenite silicate compounds.
Related
Related to these are the selenite chlorides and selenite bromides by varying the halogenide. Similar compounds by varying the chalcogen also include the sulfite fluorides and tellurite fluorides.
List
SHG=second harmonic generator
References
Selenites
Fluorides
Mixed anion compounds | Selenite fluoride | [
"Physics",
"Chemistry"
] | 307 | [
"Matter",
"Mixed anion compounds",
"Salts",
"Fluorides",
"Ions"
] |
54,606,306 | https://en.wikipedia.org/wiki/Letters%20in%20Organic%20Chemistry | Letters in Organic Chemistry (usually abbreviated as Lett. Org. Chem.), is a peer-reviewed monthly scientific journal, published since 2004 by Bentham Science Publishers. Letters in Organic Chemistry is indexed in: Chemical Abstracts Service (CAS), EBSCOhost, British Library, PubMed, Web of Science, and Scopus.
Letters in Organic Chemistry publishes letters and articles on all areas related to organic chemistry.
According to the Journal Citation Reports, the impact factor of this journal is 0.867 for the year 2020. The Editor-in-Chief is Alberto Marra (University of Montpellier, France), who took over from Gwilherm Evano (Université libre de Bruxelles, Belgium), who resigned in February 2018 after a strong disagreement with Bentham over the scientific management of the journal.
References
Organic chemistry journals
Bentham Science Publishers academic journals | Letters in Organic Chemistry | [
"Chemistry"
] | 180 | [
"Organic chemistry journals"
] |
68,824,618 | https://en.wikipedia.org/wiki/11-Hydroxycannabinol | 11-Hydroxycannabinol (11-OH-CBN) is the main active metabolite of cannabinol (CBN), one of the active components of cannabis, and has also been isolated from cannabis itself. It is more potent than CBN itself, acting as an agonist of CB1 with around the same potency as THC, but is a weak antagonist at CB2.
See also
11-Hydroxyhexahydrocannabinol
11-Hydroxy-THC
11-Hydroxy-Delta-8-THC
Cannabinodiol
References
Cannabinoids
Benzochromenes
Diols
Hydroxyarenes
Human drug metabolites | 11-Hydroxycannabinol | [
"Chemistry"
] | 142 | [
"Chemicals in medicine",
"Human drug metabolites"
] |
47,643,062 | https://en.wikipedia.org/wiki/Centre%20for%20Advanced%20Structural%20Ceramics | The Centre for Advanced Structural Ceramics is a multidisciplinary research centre focusing on materials science and engineering involving ceramic materials for applications such as aerospace, energy and tissue engineering. It is located within Imperial College London in the United Kingdom. The college's Department of Materials is closely involved with the centre's research.
The centre was founded to facilitate research between associated institutions and academics, and the UK's industrial structural ceramics community, with a stated goal to provide a "critical mass of UK expertise in the fundamental understanding of structural ceramics that is highly relevant to key areas of the economy including, energy generation, aerospace and defence, transport and healthcare". Funding initially came through the Engineering and Physical Sciences Research Council but it is now funded through an industrial consortium, including members such as Rolls-Royce Holdings, Morgan Advanced Materials, and Reaction Engines.
The CASC's work draws from skills of multiple fields, including chemistry, physics, materials and earth sciences and business. It supports the dissemination of knowledge of ceramic materials through annual summer schools, industry days, workshops and lectures.
References
External links
Official webpage
Research institutes of Imperial College London
Research institutes in London
Research institutes established in 2008
Materials science institutes | Centre for Advanced Structural Ceramics | [
"Materials_science"
] | 241 | [
"Materials science organizations",
"Materials science institutes"
] |
67,454,670 | https://en.wikipedia.org/wiki/Quantum%20state%20discrimination | The term quantum state discrimination collectively refers to quantum-informatics techniques with the help of which, by performing a small number of measurements on a physical system, its specific quantum state can be identified, provided that the set of states the system may occupy is known in advance and we only need to determine which one it is. This assumption distinguishes such techniques from quantum tomography, which does not impose additional requirements on the state of the system, but requires many times more measurements.
If the set of states in which the investigated system can be is represented by orthogonal vectors, the situation is particularly simple. To unambiguously determine the state of the system, it is enough to perform a quantum measurement in the basis formed by these vectors. The given quantum state can then be flawlessly identified from the measured value. Moreover, it can be easily shown that if the individual states are not orthogonal to each other, there is no way to tell them apart with certainty. Therefore, in such a case, it is always necessary to take into account the possibility of incorrect or inconclusive determination of the state of the system. However, there are techniques that try to alleviate this deficiency. With some exceptions, these techniques can be divided into two groups, namely those based on error minimization and those that allow the state to be determined unambiguously in exchange for lower efficiency.
The first group of techniques is based on the works of Carl W. Helstrom from the 1960s and 1970s and in its basic form consists of a projective quantum measurement, in which the measurement operators are projectors. The second group is based on the conclusions of a scientific article published by I. D. Ivanovic in 1987 and requires the use of generalized measurement, in which the elements of a POVM set are taken as measurement operators. Both groups of techniques are currently the subject of active, primarily theoretical, research, and apart from a number of special cases, there is no general solution that would allow choosing measurement operators in the form of an explicit analytical formula.
More precisely, in its standard formulation, the problem involves performing some POVM $(E_i)_i$ on a given unknown state $\rho$, under the promise that the state received is an element of a collection of states $\{\rho_i\}_i$, with $\rho_i$ occurring with probability $p_i$, that is, $\rho = \rho_i$ with probability $p_i$. The task is then to find the probability of the POVM correctly guessing which state was received. Since the probability of the POVM returning the $i$-th outcome when the given state was $\rho_j$ has the form $P(i|j) = \operatorname{tr}(E_i \rho_j)$, it follows that the probability of successfully determining the correct state is $P_{\text{success}} = \sum_i p_i \operatorname{tr}(E_i \rho_i)$.
Helstrom Measurement
The discrimination of two states $\rho_1, \rho_2$ can be solved optimally using the Helstrom measurement. With two states come two prior probabilities $p_1, p_2$ and a two-element POVM $\{E_1, E_2\}$. Since $E_1 + E_2 = I$ for every POVM, $E_2 = I - E_1$. So the probability of success is:
$P_{\text{success}} = p_1 \operatorname{tr}(E_1 \rho_1) + p_2 \operatorname{tr}(E_2 \rho_2) = p_2 + \operatorname{tr}\big(E_1 (p_1 \rho_1 - p_2 \rho_2)\big).$
To maximize the probability of success, the trace term needs to be maximized. That is accomplished when $E_1$ is a projector onto the positive eigenspace of $p_1 \rho_1 - p_2 \rho_2$, and the maximal probability of success is given by
$P_{\text{success}} = \frac{1}{2}\left(1 + \lVert p_1 \rho_1 - p_2 \rho_2 \rVert_1\right),$
where $\lVert \cdot \rVert_1$ denotes the trace norm.
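A minimal numerical sketch of the Helstrom measurement (in Python with NumPy; the two example states, |0⟩ and |+⟩ with equal priors, are illustrative choices rather than anything from the source):

import numpy as np

# Illustrative example: discriminate |0> from |+> with equal priors.
v1 = np.array([1.0, 0.0])
v2 = np.array([1.0, 1.0]) / np.sqrt(2)
rho1, rho2 = np.outer(v1, v1), np.outer(v2, v2)
p1, p2 = 0.5, 0.5

# The optimal POVM element E1 projects onto the positive eigenspace
# of the weighted difference p1*rho1 - p2*rho2.
delta = p1 * rho1 - p2 * rho2
evals, evecs = np.linalg.eigh(delta)
pos = evecs[:, evals > 0]
E1 = pos @ pos.conj().T          # projector onto the positive eigenspace
E2 = np.eye(2) - E1              # completeness: E1 + E2 = I

p_success = (p1 * np.trace(E1 @ rho1) + p2 * np.trace(E2 @ rho2)).real
bound = 0.5 * (1 + np.abs(evals).sum())   # (1 + ||p1 rho1 - p2 rho2||_1) / 2
print(round(p_success, 4), round(bound, 4))   # both 0.8536

The two printed values agree: the direct success probability of the constructed POVM saturates the Helstrom bound.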
Discriminating between multiple states
If the task is to discriminate between more than two quantum states, there is no general formula for the optimal POVM and success probability. Nonetheless, the optimal success probability for the task of discriminating between the elements of a given ensemble $\{(p_i, \rho_i)\}_i$ can always be written as
$P_{\text{success}} = \max_{\{E_i\}} \sum_i p_i \operatorname{tr}(E_i \rho_i).$
This is obtained by observing that $p_i$ is the a priori probability of getting the $i$-th state, and $\operatorname{tr}(E_i \rho_i)$ is the probability of (correctly) guessing the input to be $\rho_i$, conditioned on having indeed received the state $\rho_i$.
While this expression cannot be given an explicit form in the general case, it can be solved numerically via semidefinite programming. An alternative approach to discriminating between a given ensemble of states is to use the so-called Pretty Good Measurement (PGM), also known as the square root measurement. This is an alternative discrimination strategy that is not in general optimal, but can still be shown to work pretty well.
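A minimal numerical sketch of the PGM (in Python with NumPy; the three-state "trine" ensemble and all names are illustrative choices, not from the source):

import numpy as np

def pgm_povm(priors, states):
    # Pretty Good Measurement: E_i = rho^{-1/2} (p_i rho_i) rho^{-1/2},
    # where rho = sum_i p_i rho_i and the inverse square root is taken
    # on the support of rho (pseudo-inverse).
    rho = sum(p * s for p, s in zip(priors, states))
    evals, evecs = np.linalg.eigh(rho)
    inv_sqrt = np.zeros_like(rho)
    for lam, v in zip(evals, evecs.T):
        if lam > 1e-12:
            inv_sqrt += np.outer(v, v.conj()) / np.sqrt(lam)
    return [inv_sqrt @ (p * s) @ inv_sqrt for p, s in zip(priors, states)]

# Illustrative ensemble: three symmetric "trine" qubit states, equal priors.
states = []
for t in (0.0, 2 * np.pi / 3, 4 * np.pi / 3):
    v = np.array([np.cos(t), np.sin(t)])
    states.append(np.outer(v, v))
priors = [1 / 3] * 3

E = pgm_povm(priors, states)
p_success = sum(p * np.trace(Ei @ si).real
                for p, Ei, si in zip(priors, E, states))
print(round(p_success, 4))   # 0.6667 for the trine ensemble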
References
External links
Interactive demonstration about quantum state discrimination
Quantum information science
Quantum information theory
Quantum measurement
Quantum computing | Quantum state discrimination | [
"Physics"
] | 827 | [
"Quantum measurement",
"Quantum mechanics"
] |
67,461,503 | https://en.wikipedia.org/wiki/Hydrogel%20fiber | Hydrogel fiber is a hydrogel made into a fibrous state, where its width is significantly smaller than its length. The hydrogel's specific surface area in fibrous form is larger than that of the bulk hydrogel, and its mechanical properties also change accordingly. As a result of these changes, hydrogel fiber has a faster matter exchange rate and can be woven into different structures.
As a water-swollen network with usually low toxicity, hydrogel fiber can be used in a variety of biomedical applications such as drug carriers, optical sensors, and actuators.
But the production of hydrogel fiber can be challenging, as the hydrogel is crosslinked and cannot be shaped into a fibrous state after polymerization. To make a hydrogel into a fibrous state, the pregel solution must be made into fibrous form and then crosslinked while maintaining this shape.
Production method
To produce hydrogel fiber, the solidification of the pregel solution is the most important step: the pregel solution needs to be solidified while maintaining its fibrous shape. To achieve this, several methods based on chemical crosslinking, phase change, and rheological property change have been developed.
Physical solidification based
Changes in physical interactions can be utilized for the solidification process, and the fibrous state is usually achieved outside of the extrusion nozzle. Due to the reversibility of those physical interactions, subsequent crosslinking is usually required.
Electrospinning
Hydrogel fiber can be produced by electrospinning, with solidification achieved by evaporation of the solvent. The fibrous state is created by the combination of electrostatic repulsion and the surface tension of the solution, but subsequent crosslinking is usually needed to form a crosslinked network. One advantage of electrospun hydrogel fiber is that its diameter ranges from the nanometer to the micrometer scale, which is desirable for fast matter exchange. However, utilization of a single fiber can be hard to achieve due to the weak mechanical strength of the microscopic fiber and its entanglement after production.
An example of this method is the production of a polyacrylamide (PAAM) semi-interpenetrating network developed by Tahchi et al., in which linear PAAM (providing solidification) was mixed with AAM monomer (forming the subsequent network) and the crosslinker N,N′-methylenebisacrylamide (MBA). During the electrospinning process, the linear PAAM provided the physical properties required for electrospinning, while the AAM monomer and MBA crosslinker were used to form a second, crosslinked network inside the PAAM fiber. Although no crosslinks were formed between the first and second networks, physical entanglement prevents the linear PAAM from leaching out.
Drawspinning
Through supramolecular chemistry, a pregel solution can solidify through reversible supramolecular interactions such as host–guest interactions. Such interactions can be manipulated through mechanical force or temperature. When the energy exerted on the network is high enough, the physical crosslinking points break and the polymer is in a liquid state; after leaving the nozzle, the crosslinks rapidly re-form to solidify the solution.
A case is the host–guest chemistry reported by Scherman et al., in which the formation of an inclusion complex between cucurbit[8]uril and 1-benzyl-3-vinylimidazolium bromide (BVIm) formed the physical crosslinking points of the network. The formation of these physical crosslinking points is controlled by the temperature of the solution. By heating the solution and cooling it down rapidly at the extrusion nozzle, the hydrogel fiber is formed. Subsequent crosslinking is also performed to form a permanent network.
Meltspinning
Some hydrophilic polymers can be made into hydrogel fiber via the melt-spinning method, where solidification is achieved by the phase transition from the molten state. Similar to electrospinning, the pregel solution is kept liquid in the container. After leaving the nozzle in a filament state, the fiber solidifies on encountering cool ambient air and maintains its shape.
An example is the melt-spinning apparatus built by Long et al., with which melt-spinning of polylactic acid (PLA) and polycaprolactone (PCL) fibers was achieved.
Direct ink writing
Similar to the draw-spinning technique, the direct ink writing technique utilizes reversible physical solidification to produce hydrogel fibers. The pregel solution is liquefied through a shear-thinning process, which can be induced by adding microscopic particles such as microgels. After leaving the nozzle, the hydrogel solidifies and retains its shape, and the network is made permanent by crosslinking.
An example is the production of the fiber developed by Lewis et al., in which silk fibroin was used to generate the desired shear-thinning properties, and the network was formed when the solvent was subsequently changed.
Chemical crosslink based
Similar to physical solidification, several chemical crosslinking methods have been developed to produce hydrogel fibers. The key to hydrogel fiber production through chemical crosslinking is the effective separation of the formed network from the tube wall.
Microfluid spinning
Many microfluidic device-based methods have been developed to produce hydrogel fibers.
Crosslinking of alginate
One of the most commonly used fiber production methods is the crosslinking of sodium alginate by CaCl2, where the formed calcium alginate acts as crosslinking points linking the alginate chains together to form the network and solidify the polymer. Afterward, this alginate hydrogel fiber can be used as a template for the polymerization of secondary networks. Additionally, by controlling the fluid dynamics inside the microfluidic device, the diameter and shape of the resulting fiber can be tuned without modifying the device.
An example is the production of alginate fiber reported by Yang et al. They used sodium alginate as the core fluid and CaCl2 as the sheath fluid; the crosslinked network (hydrogel fiber) formed once the two fluids met, and the laminar flow kept its tubular shape during the reaction.
Photoinitiated crosslinking
Other photoinitiated free-radical polymerization reactions can also be used for fiber production. In this case, the sheath fluid is used only to separate the core fluid from the tube wall. Also, to achieve sufficiently rapid solidification, a more concentrated monomer solution is usually used.
An example is the production of 4-hydroxybutyl acrylate fiber reported by Beebe et al. The microfluidic device they used was built with an ethylene-vinyl acetate capillary and PDMS rubber. The core fluid was a mixture of 4-hydroxybutyl acrylate, acrylic acid, ethylene glycol dimethacrylate (crosslinker), and 2,2-dimethoxy-2-phenylacetophenone (photoinitiator). The sheath fluid served only for separation. The crosslinked network was formed by free-radical polymerization when UV light illuminated the core fluid.
Polymerization in tubular molds
Hydrogel fiber can also be produced by polymerizing the hydrogel network inside a tubular mold and forcefully pushing out the fiber. However, the friction increases with increasing length, so only short hydrogel fibers are feasible with this approach.
A case is the production of poly(acrylamide-co-poly(ethylene glycol) diacrylate) fiber reported by Yun et al. The pregel solution was a mixture of AAM, poly(ethylene glycol) diacrylate (PEGDA, crosslinker), and 2-hydroxy-2-methylpropiophenone (photoinitiator). The mixture was injected into a tubular mold and extracted by hydrostatic force afterwards.
Self-lubricate spinning
An interesting phenomenon called self-lubricated spinning can facilitate the demolding of the fiber and enables continuous production of hydrogel fiber from a tubular mold. During the polymerization process, if an inert second polymer is present, it is partially expelled from the forming network and is able to move with relative ease. The linear polymer on the surface of the crosslinked network also retains water due to osmotic pressure; thus, a lubrication layer is formed. The solidified polymer fiber can therefore exit the tube with reduced friction, and continuous production can be achieved.
An example is the production of the PAAM/PAMPS semi-interpenetrating network hydrogel fiber reported by Zhao et al. The pregel solution was a mixture of PAMPS, AAM, PEGDA (crosslinker), and 2-hydroxy-4′-(2-hydroxyethoxy)-2-methylpropiophenone (photoinitiator). The pregel solution was fed into a PTFE tube at a constant speed, with UV light used to initiate the reaction.
Characterization methods
Surface morphology
The surface morphology and the shape of the cross-section can be observed via scanning electron microscope (SEM) imaging after removal of the solvent. An environmental scanning electron microscope (ESEM) can also be used to observe wet hydrogel fibers. Different treatments affect the surface morphology of the hydrogel fiber drastically. If the hydrogel fiber is dried directly, a smooth surface is obtained because the polymer network collapses after removal of the solvent. If the hydrogel fiber is lyophilized, a porous surface is usually found due to the pore-forming effect of ice crystals. ESEM can observe the surface morphology directly; the resulting image usually shows a smooth surface with some wrinkles formed by the gradual loss of water.
Mechanical properties
The mechanical properties of the fibers are tested, but the process can be tricky for practical reasons. The mechanical properties are tested with a universal testing machine by fixing the hydrogel fibers between two holders. However, due to compression by the holders, hydrogel fibers tend to break at the holding point. Loss of water during the test also affects the resulting data, and precautions need to be taken to mitigate this loss. The tensile strength of hydrogel fibers is usually smaller than 1 MPa.
Optical properties
Optical properties are tested for optical sensing-related applications. This can include light attenuation, refractive index, transmission, etc. These optical properties are significantly influenced by the composition of the hydrogel.
Biocompatibility
Cell toxicity tests are performed for applications such as cell growth scaffolds. By growing cells engineered to produce a fluorescent protein, cell growth can be monitored with fluorescence imaging techniques.
Applications
Optical fiber sensors
Transparent hydrogel fibers can be used as optical fibers, and stimuli-responsive functional groups can be grafted on to create optical sensors. For example, in the research done by Yun et al., the glucose-sensitive phenylboronic acid was grafted onto the polymer network. When the glucose concentration changes, the absorption of the phenylboronic acid changes accordingly and can be recorded as the light intensity at a certain wavelength.
Additive manufacture
Although hydrogel fibers suffer from poor mechanical strength, some approaches have been made to construct hydrogel fiber structures with textile methods. The electrospinning, melt-spinning, and DIW methods can also produce hydrogel fiber structures of higher dimensions directly.
Biomedical scaffolds
Hydrogel fiber can be used to fabricate scaffolds for cell growth and drug release.
Actuators
Stimuli-responsive hydrogel fibers can be used as actuators and soft robots. By braiding hydrogel fibers together, the force of a single fiber can be magnified. Also, due to slipping between hydrogel fibers, the strain on bending can be reduced to further enhance the performance.
References
Colloidal chemistry
Polymer chemistry | Hydrogel fiber | [
"Chemistry",
"Materials_science",
"Engineering"
] | 2,530 | [
"Colloidal chemistry",
"Materials science",
"Surface science",
"Colloids",
"Polymer chemistry"
] |
67,464,025 | https://en.wikipedia.org/wiki/2-Iodomelatonin | 2-Iodomelatonin is a melatonin analog used as a radiolabelled ligand for the melatonin receptors, MT1, MT2, and MT3. It acts as a full agonist at both MT1 and MT2 receptors.
References
Tryptamines
Melatonin receptor agonists
Acetamides
Iodoarenes
Methoxy compounds | 2-Iodomelatonin | [
"Chemistry"
] | 78 | [
"Melatonin receptor agonists",
"Drug discovery"
] |
67,465,897 | https://en.wikipedia.org/wiki/Affine%20symmetric%20group | The affine symmetric groups are a family of mathematical structures that describe the symmetries of the number line and the regular triangular tiling of the plane, as well as related higher-dimensional objects. In addition to this geometric description, the affine symmetric groups may be defined in other ways: as collections of permutations (rearrangements) of the integers that are periodic in a certain sense, or in purely algebraic terms as a group with certain generators and relations. They are studied in combinatorics and representation theory.
A finite symmetric group consists of all permutations of a finite set. Each affine symmetric group is an infinite extension of a finite symmetric group. Many important combinatorial properties of the finite symmetric groups can be extended to the corresponding affine symmetric groups. Permutation statistics such as descents and inversions can be defined in the affine case. As in the finite case, the natural combinatorial definitions for these statistics also have a geometric interpretation.
The affine symmetric groups have close relationships with other mathematical objects, including juggling patterns and certain complex reflection groups. Many of their combinatorial and geometric properties extend to the broader family of affine Coxeter groups.
Definitions
The affine symmetric group may be equivalently defined as an abstract group by generators and relations, or in terms of concrete geometric and combinatorial models.
Algebraic definition
One way of defining groups is by generators and relations. In this type of definition, generators are a subset of group elements that, when combined, produce all other elements. The relations of the definition are a system of equations that determine when two combinations of generators are equal. In this way, the affine symmetric group is generated by a set
of elements that satisfy the following relations: when ,
(the generators are involutions),
if is not one of , indicating that for these pairs of generators, the group operation is commutative, and
.
In the relations above, indices are taken modulo , so that the third relation includes as a particular case . (The second and third relations are sometimes called the braid relations.) When , the affine symmetric group is the infinite dihedral group generated by two elements subject only to the relations .
These relations can be rewritten in the special form that defines the Coxeter groups, so the affine symmetric groups are Coxeter groups, with the as their Coxeter generating sets. Each Coxeter group may be represented by a Coxeter–Dynkin diagram, in which vertices correspond to generators and edges encode the relations between them. For , the Coxeter–Dynkin diagram of is the -cycle (where the edges correspond to the relations between pairs of consecutive generators and the absence of an edge between other pairs of generators indicates that they commute), while for it consists of two nodes joined by an edge labeled .
Geometric definition
In the Euclidean space with coordinates , the set of points for which forms a (hyper)plane, an -dimensional subspace. For every pair of distinct elements and of and every integer , the set of points in that satisfy forms an -dimensional subspace within , and there is a unique reflection of that fixes this subspace. Then the affine symmetric group can be realized geometrically as a collection of maps from to itself, the compositions of these reflections.
Inside , the subset of points with integer coordinates forms the root lattice, . It is the set of all the integer vectors such that . Each reflection preserves this lattice, and so the lattice is preserved by the whole group.
The fixed subspaces of these reflections divide into congruent simplices, called alcoves. The situation when is shown in the figure; in this case, the root lattice is a triangular lattice, the reflecting lines divide into equilateral triangle alcoves, and the roots are the centers of nonoverlapping hexagons made up of six triangular alcoves.
To translate between the geometric and algebraic definitions, one fixes an alcove and considers the hyperplanes that form its boundary. The reflections through these boundary hyperplanes may be identified with the Coxeter generators. In particular, there is a unique alcove (the fundamental alcove) consisting of points such that , which is bounded by the hyperplanes ..., and illustrated in the case . For , one may identify the reflection through with the Coxeter generator , and also identify the reflection through with the generator .
Combinatorial definition
The elements of the affine symmetric group may be realized as a group of periodic permutations of the integers. In particular, say that a function is an affine permutation if
it is a bijection (each integer appears as the value of for exactly one ),
for all integers (the function is equivariant under shifting by ), and
, the th triangular number.
For every affine permutation, and more generally every shift-equivariant bijection, the numbers must all be distinct modulo . An affine permutation is uniquely determined by its window notation , because all other values of can be found by shifting these values. Thus, affine permutations may also be identified with tuples of integers that contain one element from each congruence class modulo and sum to .
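A short Python sketch of these window-notation checks; the window [2, 0, 4] is an illustrative choice satisfying both conditions for n = 3:

def is_affine_permutation_window(window):
    # Entries must hit each residue class mod n exactly once and
    # sum to the n-th triangular number n(n+1)/2.
    n = len(window)
    residues_ok = sorted(w % n for w in window) == list(range(n))
    return residues_ok and sum(window) == n * (n + 1) // 2

def apply_affine(window, i):
    # Evaluate w(i) for any integer i via the shift rule w(i + n) = w(i) + n.
    n = len(window)
    q, r = divmod(i - 1, n)
    return window[r] + q * n

print(is_affine_permutation_window([2, 0, 4]))   # True
print(apply_affine([2, 0, 4], 5))                # w(5) = w(2) + 3 = 3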
To translate between the combinatorial and algebraic definitions, for one may identify the Coxeter generator with the affine permutation that has window notation , and also identify the generator with the affine permutation . More generally, every reflection (that is, a conjugate of one of the Coxeter generators) can be described uniquely as follows: for distinct integers , in and arbitrary integer , it maps to , maps to , and fixes all inputs not congruent to or modulo .
Representation as matrices
Affine permutations can be represented as infinite periodic permutation matrices. If is an affine permutation, the corresponding matrix has entry 1 at position in the infinite grid for each integer , and all other entries are equal to 0. Since is a bijection, the resulting matrix contains exactly one 1 in every row and column. The periodicity condition on the map ensures that the entry at position is equal to the entry at position for every pair of integers . For example, a portion of the matrix for the affine permutation is shown in the figure. In row 1, there is a 1 in column 2; in row 2, there is a 1 in column 0; and in row 3, there is a 1 in column 4. The rest of the entries in those rows and columns are all 0, and all the other entries in the matrix are fixed by the periodicity condition.
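A sketch that mechanically builds a finite slice of this infinite periodic matrix from a window; the window [2, 0, 4] reproduces the placements described for the figure (row 1 → column 2, row 2 → column 0, row 3 → column 4), and the slice bounds are arbitrary choices:

import numpy as np

def matrix_slice(window, lo, hi):
    # Rows/columns lo..hi of the infinite matrix with a 1 at (i, w(i)).
    n = len(window)
    size = hi - lo + 1
    M = np.zeros((size, size), dtype=int)
    for i in range(lo, hi + 1):
        q, r = divmod(i - 1, n)
        wi = window[r] + q * n       # periodicity: w(i + n) = w(i) + n
        if lo <= wi <= hi:
            M[i - lo, wi - lo] = 1
    return M

print(matrix_slice([2, 0, 4], 0, 4))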
Relationship to the finite symmetric group
The affine symmetric group contains the finite symmetric group of permutations on elements as both a subgroup and a quotient group. These connections allow a direct translation between the combinatorial and geometric definitions of the affine symmetric group.
As a subgroup
There is a canonical way to choose a subgroup of that is isomorphic to the finite symmetric group .
In terms of the algebraic definition, this is the subgroup of generated by (excluding the simple reflection ). Geometrically, this corresponds to the subgroup of transformations that fix the origin, while combinatorially it corresponds to the window notations for which (that is, in which the window notation is the one-line notation of a finite permutation).
If is the window notation of an element of this standard copy of , its action on the hyperplane in is given by permutation of coordinates: . (In this article, the geometric action of permutations and affine permutations is on the right; thus, if and are two affine permutations, the action of on a point is given by first applying , then applying .)
There are also many nonstandard copies of contained in . A geometric construction is to pick any point in (that is, an integer vector whose coordinates sum to 0); the subgroup of of isometries that fix is isomorphic to .
As a quotient
There is a simple map (technically, a surjective group homomorphism) from onto the finite symmetric group . In terms of the combinatorial definition, an affine permutation can be mapped to a permutation by reducing the window entries modulo to elements of , leaving the one-line notation of a permutation. In this article, the image of an affine permutation is called the underlying permutation of .
The map sends the Coxeter generator to the permutation whose one-line notation and cycle notation are and , respectively.
The kernel of is by definition the set of affine permutations whose underlying permutation is the identity. The window notations of such affine permutations are of the form , where is an integer vector such that , that is, where . Geometrically, this kernel consists of the translations, the isometries that shift the entire space without rotating or reflecting it. In an abuse of notation, the symbol is used in this article for all three of these sets (integer vectors in , affine permutations with underlying permutation the identity, and translations); in all three settings, the natural group operation turns into an abelian group, generated freely by the vectors .
Connection between the geometric and combinatorial definitions
The affine symmetric group has as a normal subgroup, and is isomorphic to the semidirect product
of this subgroup with the finite symmetric group , where the action of on is by permutation of coordinates. Consequently, every element of has a unique realization as a product
where is a permutation in the standard copy of in and is a translation in .
This point of view allows for a direct translation between the combinatorial and geometric definitions of : if one writes where and then the affine permutation corresponds to the rigid motion of defined by
Furthermore, as with every affine Coxeter group, the affine symmetric group acts transitively and freely on the set of alcoves: for each two alcoves, a unique group element takes one alcove to the other. Hence, making an arbitrary choice of alcove places the group in one-to-one correspondence with the alcoves: the identity element corresponds to , and every other group element corresponds to the alcove that is the image of under the action of .
Example:
Algebraically, is the infinite dihedral group, generated by two generators subject to the relations . Every other element of the group can be written as an alternating product of copies of and .
Combinatorially, the affine permutation has window notation , corresponding to the bijection for every integer . The affine permutation has window notation , corresponding to the bijection for every integer . Other elements have the following window notations:
Geometrically, the space on which acts is a line, with infinitely many equally spaced reflections. It is natural to identify the line with the real line , with reflection around the point , and with reflection around the point . In this case, the reflection reflects across the point for any integer , the composition translates the line by , and the composition translates the line by .
Permutation statistics and permutation patterns
Many permutation statistics and other features of the combinatorics of finite permutations can be extended to the affine case.
Descents, length, and inversions
The length of an element of a Coxeter group is the smallest number such that can be written as a product of Coxeter generators of .
Geometrically, the length of an element in is the number of reflecting hyperplanes that separate and , where is the fundamental alcove (the simplex bounded by the reflecting hyperplanes of the Coxeter generators ).
Combinatorially, the length of an affine permutation is encoded in terms of an appropriate notion of inversions: for an affine permutation , the length is
Alternatively, it is the number of equivalence classes of pairs such that and under the equivalence relation if for some integer .
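A Python sketch of two equivalent ways to compute this length from the window notation, assuming the standard conventions: a brute-force count of affine inversions (pairs (i, j) with 1 ≤ i ≤ n, i < j and w(i) > w(j)) and the closed-form floor-quotient sum found, e.g., in Björner and Brenti's treatment. The example window is illustrative:

def length_from_floors(window):
    # Sum over 1 <= i < j <= n of |floor((w(j) - w(i)) / n)|.
    n = len(window)
    return sum(abs((window[j] - window[i]) // n)
               for i in range(n) for j in range(i + 1, n))

def length_from_inversions(window):
    # Count pairs (i, j), 1 <= i <= n, i < j, with w(i) > w(j);
    # only finitely many exist, so a generous bound on j suffices.
    n = len(window)
    def w(i):
        q, r = divmod(i - 1, n)
        return window[r] + q * n
    bound = n * (max(window) - min(window) + n)
    return sum(1 for i in range(1, n + 1)
                 for j in range(i + 1, i + 1 + bound)
                 if w(i) > w(j))

print(length_from_floors([2, 0, 4]), length_from_inversions([2, 0, 4]))  # 2 2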
The generating function for length in is
Similarly, there is an affine analogue of descents in permutations: an affine permutation has a descent in position if . (By periodicity, has a descent in position if and only if it has a descent in position for all integers .)
Algebraically, the descents correspond to the right descents in the sense of Coxeter groups; that is, is a descent of if and only if . The left descents (that is, those indices such that ) are the descents of the inverse affine permutation ; equivalently, they are the values such that occurs before in the sequence .
Geometrically, is a descent of if and only if the fixed hyperplane of separates the alcoves and
Because there are only finitely many possibilities for the number of descents of an affine permutation, but infinitely many affine permutations, it is not possible to naively form a generating function for affine permutations by number of descents (an affine analogue of Eulerian polynomials). One possible resolution is to consider affine descents (equivalently, cyclic descents) in the finite symmetric group . Another is to consider simultaneously the length and number of descents of an affine permutation. The multivariate generating function for these statistics over simultaneously for all is
where is the number of descents of the affine permutation and is the -exponential function.
Cycle type and reflection length
Any bijection partitions the integers into a (possibly infinite) list of (possibly infinite) cycles: for each integer , the cycle containing is the sequence where exponentiation represents functional composition.
For an affine permutation , the following conditions are equivalent: all cycles of are finite, has finite order, and the geometric action of on the space has at least one fixed point.
The reflection length of an element of is the smallest number such that there exist reflections such that . (In the symmetric group, reflections are transpositions, and the reflection length of a permutation is , where is the number of cycles of .) In , the following formula was proved for the reflection length of an affine permutation : for each cycle of , define the weight to be the integer k such that consecutive entries congruent modulo differ by exactly . Form a tuple of cycle weights of (counting translates of the same cycle by multiples of only once), and define the nullity to be the size of the smallest set partition of this tuple so that each part sums to 0. Then the reflection length of is
where is the underlying permutation of .
For every affine permutation , there is a choice of subgroup of such that , , and for the standard form implied by this semidirect product, the reflection lengths are additive, that is, .
Fully commutative elements and pattern avoidance
A reduced word for an element of a Coxeter group is a tuple of Coxeter generators of minimum possible length such that . The element is called fully commutative if any reduced word can be transformed into any other by sequentially swapping pairs of factors that commute. For example, in the finite symmetric group , the element is fully commutative, since its two reduced words and can be connected by swapping commuting factors, but is not fully commutative because there is no way to reach the reduced word starting from the reduced word by commutations.
proved that in the finite symmetric group , a permutation is fully commutative if and only if it avoids the permutation pattern 321, that is, if and only if its one-line notation contains no three-term decreasing subsequence. In , this result was extended to affine permutations: an affine permutation is fully commutative if and only if there do not exist integers such that .
The number of affine permutations avoiding a single pattern is finite if and only if avoids the pattern 321, so in particular there are infinitely many fully commutative affine permutations. These were enumerated by length in .
Parabolic subgroups and other structures
The parabolic subgroups of and their coset representatives offer a rich combinatorial structure. Other aspects of affine symmetric groups, such as their Bruhat order and representation theory, may also be understood via combinatorial models.
Parabolic subgroups, coset representatives
A standard parabolic subgroup of a Coxeter group is a subgroup generated by a subset of its Coxeter generating set. The maximal parabolic subgroups are those that come from omitting a single Coxeter generator. In , all maximal parabolic subgroups are isomorphic to the finite symmetric group . The subgroup generated by the subset consists of those affine permutations that stabilize the interval , that is, that map every element of this interval to another element of the interval.
For a fixed element of , let be the maximal proper subset of Coxeter generators omitting , and let denote the parabolic subgroup generated by . Every coset has a unique element of minimum length. The collection of such representatives, denoted , consists of the following affine permutations:
In the particular case that , so that is the standard copy of inside , the elements of may naturally be represented by abacus diagrams: the integers are arranged in an infinite strip of width , increasing sequentially along rows and then from top to bottom; integers are circled if they lie directly above one of the window entries of the minimal coset representative. For example, the minimal coset representative is represented by the abacus diagram at right. To compute the length of the representative from the abacus diagram, one adds up the number of uncircled numbers that are smaller than the last circled entry in each column. (In the example shown, this gives .)
Other combinatorial models of minimum-length coset representatives for can be given in terms of core partitions (integer partitions in which no hook length is divisible by ) or bounded partitions (integer partitions in which no part is larger than ). Under these correspondences, it can be shown that the weak Bruhat order on is isomorphic to a certain subposet of Young's lattice.
Bruhat order
The Bruhat order on has the following combinatorial realization. If is an affine permutation and and are integers, define
to be the number of integers such that and . (For example, with , one has : the three relevant values are , which are respectively mapped by to 1, 2, and 4.) Then for two affine permutations , , one has that in Bruhat order if and only if for all integers , .
Representation theory and an affine Robinson–Schensted correspondence
In the finite symmetric group, the Robinson–Schensted correspondence gives a bijection between the group and pairs of standard Young tableaux of the same shape. This bijection plays a central role in the combinatorics and the representation theory of the symmetric group. For example, in the language of Kazhdan–Lusztig theory, two permutations lie in the same left cell if and only if their images under Robinson–Schensted have the same tableau , and in the same right cell if and only if their images have the same tableau . In , Jian-Yi Shi showed that left cells for are indexed instead by tabloids, and in he gave an algorithm to compute the tabloid analogous to the tableau for an affine permutation. In , the authors extended Shi's work to give a bijective map between and triples consisting of two tabloids of the same shape and an integer vector whose entries satisfy certain inequalities. Their procedure uses the matrix representation of affine permutations and generalizes the shadow construction, introduced in .
Inverse realizations
In some situations, one may wish to consider the action of the affine symmetric group on or on alcoves that is inverse to the one given above. These alternate realizations are described below.
In the combinatorial action of on , the generator acts by switching the values and . In the inverse action, it instead switches the entries in positions and . Similarly, the action of a general reflection will be to switch the entries at positions and for each , fixing all inputs at positions not congruent to or modulo .
In the geometric action of , the generator acts on an alcove by reflecting it across one of the bounding planes of the fundamental alcove . In the inverse action, it instead reflects across one of its own bounding planes. From this perspective, a reduced word corresponds to an alcove walk on the tessellated space .
Relationship to other mathematical objects
The affine symmetric groups are closely related to a variety of other mathematical objects.
Juggling patterns
In , a correspondence is given between affine permutations and juggling patterns encoded in a version of siteswap notation. Here, a juggling pattern of period is a sequence of nonnegative integers (with certain restrictions) that captures the behavior of balls thrown by a juggler, where the number indicates the length of time the th throw spends in the air (equivalently, the height of the throw). The number of balls in the pattern is the average . The Ehrenborg–Readdy correspondence associates to each juggling pattern of period the function defined by
where indices of the sequence a are taken modulo . Then is an affine permutation in , and moreover every affine permutation arises from a juggling pattern in this way. Under this bijection, the length of the affine permutation is encoded by a natural statistic in the juggling pattern:
where is the number of crossings (up to periodicity) in the arc diagram of a. This allows an elementary proof of the generating function for affine permutations by length.
For example, the juggling pattern 441 has and . Therefore, it corresponds to the affine permutation . The juggling pattern has four crossings, and the affine permutation has length .
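A sketch of the validity check that underlies this correspondence, assuming the Ehrenborg–Readdy map sends a pattern a to the bijection i ↦ i + a_(i mod n); the function names and the example pattern are illustrative:

def is_valid_siteswap(pattern):
    # A period-n pattern is valid iff the landing times i + a_i are
    # pairwise distinct modulo n, i.e. i -> i + a_i is a bijection of Z.
    n = len(pattern)
    return sorted((i + t) % n for i, t in enumerate(pattern)) == list(range(n))

def ball_count(pattern):
    # The number of balls is the average throw height.
    assert sum(pattern) % len(pattern) == 0
    return sum(pattern) // len(pattern)

print(is_valid_siteswap([4, 4, 1]), ball_count([4, 4, 1]))   # True 3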
Similar techniques can be used to derive the generating function for minimal coset representatives of by length.
Complex reflection groups
In a finite-dimensional real inner product space, a reflection is a linear transformation that fixes a linear hyperplane pointwise and negates the vector orthogonal to the plane. This notion may be extended to vector spaces over other fields. In particular, in a complex inner product space, a reflection is a unitary transformation of finite order that fixes a hyperplane. This implies that the vectors orthogonal to the hyperplane are eigenvectors of , and the associated eigenvalue is a complex root of unity. A complex reflection group is a finite group of linear transformations on a complex vector space generated by reflections.
The complex reflection groups were fully classified by : each complex reflection group is isomorphic to a product of irreducible complex reflection groups, and every irreducible either belongs to an infinite family (where , , and are positive integers such that divides ) or is one of 34 other (so-called "exceptional") examples. The group is the generalized symmetric group: algebraically, it is the wreath product of the cyclic group with the symmetric group . Concretely, the elements of the group may be represented by monomial matrices (matrices having one nonzero entry in every row and column) whose nonzero entries are all th roots of unity. The groups are subgroups of , and in particular the group consists of those matrices in which the product of the nonzero entries is equal to 1.
In , Shi showed that the affine symmetric group is a generic cover of the family , in the following sense: for every positive integer , there is a surjection from to , and these maps are compatible with the natural surjections when that come from raising each entry to the th power. Moreover, these projections respect the reflection group structure, in that the image of every reflection in under is a reflection in ; and similarly when the image of the standard Coxeter element in is a Coxeter element in .
Affine Lie algebras
Each affine Coxeter group is associated to an affine Lie algebra, a certain infinite-dimensional non-associative algebra with unusually nice representation-theoretic properties. In this association, the Coxeter group arises as a group of symmetries of the root space of the Lie algebra (the dual of the Cartan subalgebra). In the classification of affine Lie algebras, the one associated to is of (untwisted) type , with Cartan matrix for and
(a circulant matrix) for .
Like other Kac–Moody algebras, affine Lie algebras satisfy the Weyl–Kac character formula, which expresses the characters of the algebra in terms of their highest weights. In the case of affine Lie algebras, the resulting identities are equivalent to the Macdonald identities. In particular, for the affine Lie algebra of type , associated to the affine symmetric group , the corresponding Macdonald identity is equivalent to the Jacobi triple product.
Braid group and group-theoretic properties
Coxeter groups have a number of special properties not shared by all groups. These include that their word problem is decidable (that is, there exists an algorithm that can determine whether or not any given product of the generators is equal to the identity element) and that they are linear groups (that is, they can be represented by a group of invertible matrices over a field).
Each Coxeter group is associated to an Artin–Tits group , which is defined by a similar presentation that omits relations of the form for each generator . In particular, the Artin–Tits group associated to is generated by elements subject to the relations for (and no others), where as before the indices are taken modulo (so ). Artin–Tits groups of Coxeter groups are conjectured to have many nice properties: for example, they are conjectured to be torsion-free, to have trivial center, to have solvable word problem, and to satisfy the conjecture. These conjectures are not known to hold for all Artin–Tits groups, but in it was shown that has these properties. (Subsequently, they have been proved for the Artin–Tits groups associated to affine Coxeter groups.) In the case of the affine symmetric group, these proofs make use of an associated Garside structure on the Artin–Tits group.
Artin–Tits groups are sometimes also known as generalized braid groups, because the Artin–Tits group of the (finite) symmetric group is the braid group on strands. Not all Artin–Tits groups have a natural representation in terms of geometric braids. However, the Artin–Tits group of the hyperoctahedral group (geometrically, the symmetry group of the n-dimensional hypercube; combinatorially, the group of signed permutations of size n) does have such a representation: it is given by the subgroup of the braid group on strands consisting of those braids for which a particular strand ends in the same position it started in, or equivalently as the braid group of strands in an annular region. Moreover, the Artin–Tits group of the hyperoctahedral group can be written as a semidirect product of with an infinite cyclic group. It follows that may be interpreted as a certain subgroup consisting of geometric braids, and also that it is a linear group.
Extended affine symmetric group
The affine symmetric group is a subgroup of the extended affine symmetric group. The extended group is isomorphic to the wreath product . Its elements are extended affine permutations: bijections such that for all integers . Unlike the affine symmetric group, the extended affine symmetric group is not a Coxeter group. But it has a natural generating set that extends the Coxeter generating set for : the shift operator whose window notation is generates the extended group with the simple reflections, subject to the additional relations .
Combinatorics of other affine Coxeter groups
The geometric action of the affine symmetric group places it naturally in the family of affine Coxeter groups, each of which has a similar geometric action on an affine space. The combinatorial description of the may also be extended to many of these groups: in , an axiomatic description is given of certain permutation groups acting on (the "George groups", in honor of George Lusztig), and it is shown that they are exactly the "classical" Coxeter groups of finite and affine types A, B, C, and D. (In the classification of affine Coxeter groups, the affine symmetric group is type A.) Thus, the combinatorial interpretations of descents, inversions, etc., carry over in these cases. Abacus models of minimum-length coset representatives for parabolic quotients have also been extended to this context.
History
The study of Coxeter groups in general could be said to first arise in the classification of regular polyhedra (the Platonic solids) in ancient Greece. The modern systematic study (connecting the algebraic and geometric definitions of finite and affine Coxeter groups) began in work of Coxeter in the 1930s. The combinatorial description of the affine symmetric group first appears in work of , and was expanded upon by ; both authors used the combinatorial description to study the Kazhdan–Lusztig cells of . The proof that the combinatorial definition agrees with the algebraic definition was given by .
References
Notes
Works cited
Coxeter groups
Reflection groups
Permutation groups
Symmetry
Representation theory | Affine symmetric group | [
"Physics",
"Mathematics"
] | 6,132 | [
"Euclidean symmetries",
"Reflection groups",
"Fields of abstract algebra",
"Geometry",
"Representation theory",
"Symmetry"
] |
67,468,472 | https://en.wikipedia.org/wiki/Estonian%20Air%20Sports%20Federation | Estonian Air Sports Federation (abbreviation EASF) is one of the sport governing bodies in Estonia that deals with air sports.
EASF is a member of the World Air Sports Federation (FAI) and of the Estonian Olympic Committee.
References
External links
Sports governing bodies in Estonia
Aviation in Estonia
Fédération Aéronautique Internationale | Estonian Air Sports Federation | [
"Engineering"
] | 64 | [
"Fédération Aéronautique Internationale",
"Aeronautics organizations"
] |
67,469,176 | https://en.wikipedia.org/wiki/Reverse%20complement%20polymerase%20chain%20reaction | Reverse complement polymerase chain reaction (RC-PCR) is a modification of the polymerase chain reaction (PCR). It is primarily used to generate amplicon libraries for DNA sequencing by next generation sequencing (NGS). The technique permits both the amplification of targets and the appending of sequences or functional domains of choice independently to either end of the generated amplicons, in a single closed-tube reaction. RC-PCR was invented in 2013 by Daniel Ward and Christopher Mattocks at Salisbury NHS Foundation Trust, UK.
Principles
In RC-PCR, no target-specific primers are present in the reaction mixture. Instead, target-specific primers are formed as the reaction proceeds. A typical reaction employing the approach requires four oligonucleotides. The oligonucleotides interact with each other in pairs: one oligonucleotide probe and one universal primer (containing functional domains of choice), which hybridize with each other at their 3' ends. Once hybridized, the universal primer can be extended, using the oligonucleotide probe as the template, to yield fully formed, target-specific primers, which are then available to amplify the template in subsequent rounds of thermal cycling as per a standard PCR reaction.
The oligonucleotide probe may also be blocked at the 3' end, preventing equivalent extension of the probe, but this is not essential. The probe is not consumed; it remains available to act as a template on which the universal primer is 'converted' into target-specific primer throughout successive PCR cycles. This generation of target-specific primer occurs in parallel with standard PCR amplification under standard PCR conditions.
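A small Python sketch of the 3'-end pairing at the heart of the method: two oligos can prime off each other in this way when the 3'-terminal bases of one are the reverse complement of the 3'-terminal bases of the other. All sequences below are hypothetical placeholders, not sequences from the source:

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    # Reverse complement of a DNA sequence written 5' -> 3'.
    return seq.translate(COMPLEMENT)[::-1]

def can_anneal_at_3_prime_ends(oligo_a, oligo_b, k):
    # True if the last k bases (the 3' ends) of the two oligos are
    # mutually complementary in antiparallel orientation.
    return oligo_a[-k:] == reverse_complement(oligo_b[-k:])

# Hypothetical universal primer (tail + 3' pairing region) and probe.
universal = "ACACTCTTTCCCTACACGAC" + "GCTCTTCC"
probe = "TTGGAGCTGGTGGCGTAGGC" + reverse_complement("GCTCTTCC")
print(can_anneal_at_3_prime_ends(universal, probe, 8))   # True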
Advantages
RC-PCR provides significant advantages over other amplicon library preparation methods. Most significantly, it is a single closed-tube reaction; this eliminates the cross-contamination associated with two-step PCR approaches, uses less reagent, and requires less labour to perform.
The technique also provides the significant advantage of flexibility in appending any desired sequence or functional domain to either end of any amplicon. This is currently most advantageous in modern next generation sequencing (NGS) laboratories, where a single target-specific probe pair can be used with a whole library of universal primers. This benefit is exploited in NGS applications to apply sample-specific indexes independently to each end of the amplicon construct. A laboratory employing this approach requires only a single set of index primers, which can be used with all target-specific probes compatible with that index set. This significantly reduces the number and length of oligonucleotides required by the laboratory compared with using full-length pre-synthesised indexed target-specific primers.
The generation of the target-specific primer in the reaction as it progresses also leads to more balanced reaction components. Concentrations of target-specific primer are more closely aligned with the target molecule concentration, reducing the potential for both off-target priming and primer dimerisation.
Variations
Multiplex RC-PCR – where two or more universal primer probe sets are present in the reaction mixture to amplify two or more targets simultaneously.
RT-RC-PCR – This modification is used when the template material supplied in the reaction is RNA rather than DNA. In this modification the reaction mixture also contains reverse transcriptase enzymes and reverse transcription primers as well as the universal primers and Reverse complement probes of the method. This approach permits reverse transcription of the provided RNA template, the formation of tailed target specific primers and the amplification of the desired targets in a single closed tube reaction.
Single-ended RC-PCR – This variation of the method is used when only one complementary universal primer–probe pair is provided in the reaction to generate one target-specific primer. The other target-specific primer is provided as a traditional primer, as in standard PCR.
History
Following the invention of RC-PCR in 2013, the technique was clinically validated and employed diagnostically in the Wessex Regional Genetics Laboratory (WRGL), Salisbury, UK, for a range of inherited diseases such as hemochromatosis and thrombophilia, as well as somatically acquired disorders including myeloproliferative neoplasms and acute myeloid leukemia. More recently, work has been undertaken to utilise the technology in the fight against the SARS-CoV-2 pandemic.
The patent application was filed in the UK in 2015 and awarded in 2020. Patent applications have been filed in other jurisdictions worldwide and are currently pending.
In May 2019, the intellectual property was licensed to Nimagen B.V. to develop, manufacture and market kits exploiting the technology. Currently, commercially available kits employing the technology include those for human identification and for whole-genome sequencing of the SARS-CoV-2 virus for variant identification, tracking and treatment response. In August 2022, Nimagen officially launched a range of products employing the RC-PCR technology for human forensics applications under the trademark IDseek®. The short tandem repeat version of the kit is validated by the Netherlands Forensic Institute as an improved method for routine massively parallel sequencing of short tandem repeats.
The RC-PCR approach is becoming more widely used in human health, and several CE-IVD kits are available for human clinical diagnostics, including BRCA, TP53, PALB2 and CFTR analysis. The technique has also proven a useful and powerful tool in identifying the causative infectious pathogen in patients suspected of having a bacterial infection; in this setting it has been shown to significantly increase the number of clinical samples in which a potentially clinically relevant pathogen is identified, compared with the commonly used 16S Sanger method. It has also been shown to provide similar advantages over traditional methods in the deconvolution of microbial communities in environmental samples.
References
External links
RC-PCR animation
WIPO patent filing information page
Polymerase chain reaction
SARS-CoV-2
DNA sequencing methods
Molecular biology techniques
DNA profiling techniques
Laboratory techniques
Amplifiers
British inventions | Reverse complement polymerase chain reaction | [
"Chemistry",
"Technology",
"Biology"
] | 1,241 | [
"Biochemistry methods",
"Genetics techniques",
"DNA profiling techniques",
"Polymerase chain reaction",
"DNA sequencing methods",
"Molecular biology techniques",
"DNA sequencing",
"nan",
"Molecular biology",
"Amplifiers"
] |
56,151,614 | https://en.wikipedia.org/wiki/Sirius%20%28synchrotron%20light%20source%29 | Sirius is a diffraction-limited storage ring synchrotron light source at the Brazilian Synchrotron Light Laboratory (LNLS) in Campinas, São Paulo State, Brazil. It has a circumference of , a diameter of , and an electron energy of 3 GeV. The produced synchrotron radiation covers the range of infrared, optical, ultraviolet and X-ray light.
Costing R$1.8 billion, it was funded by the Ministry of Science, Technology, Innovation and Communications (Brazil) and the São Paulo Research Foundation. Discussion started in 2008, and initial funding of R$2 million was granted in 2009. Construction started in 2015 and was finished in 2018. The first electron loop around the storage ring was achieved in November 2019. Its first experiments were performed during the COVID-19 pandemic at the MANACÁ beamline, dedicated to macromolecular crystallography.
Sirius is the second synchrotron light source constructed in Brazil. The first one, UVX, was a second-generation machine operated by LNLS from 1997 to 2019.
History
In 2008, former LNLS director José Antônio Brum asked for a preliminary design of a new accelerator, which was then shown to the minister of science Sérgio Machado Rezende. Construction began in 2014 under the Dilma Rousseff government. Sirius is the second operational particle accelerator in Brazil, the first being UVX.
The first part of the complex was inaugurated on 14 November 2018 by then-president Michel Temer and included the main building and two of the three accelerators. The second part included the third accelerator, the storage ring and the commissioning of the first beamlines. Sirius currently operates at 100 mA in top-up mode and has 6 beamlines open to external researchers.
Characteristics
Sirius is used to understand the atomic structure of molecules, which can help in the development of new drugs and new materials used in construction, in oil exploration, and in many other areas. The 68,000-square-meter building houses a ring-shaped facility about 500 meters in circumference. To protect people from the radiation released by operation of the machine, which was designed to be the most advanced of its kind in the world, the whole facility is shielded by about 1 kilometer of concrete walls. Around R$1.8 billion was invested in the project, which makes it the most ambitious scientific project ever undertaken in Brazil.
Beamlines
Currently, Sirius has 9 operational beamlines, 1 in scientific commissioning, 2 in the assembly phase and 1 in the design phase.
References
External links
Synchrotron radiation facilities
Science and technology in Brazil | Sirius (synchrotron light source) | [
"Physics",
"Materials_science"
] | 536 | [
"Particle physics stubs",
"Materials testing",
"Particle physics",
"Synchrotron radiation facilities"
] |
77,580,922 | https://en.wikipedia.org/wiki/White%20phosphorus | White phosphorus, yellow phosphorus, or simply tetraphosphorus (P4) is an allotrope of phosphorus. It is a translucent waxy solid that quickly yellows in light (due to its photochemical conversion into red phosphorus), and impure white phosphorus is for this reason called yellow phosphorus. White phosphorus is the first allotrope of phosphorus to be discovered, and in fact the first elementary substance to be discovered that was not known since ancient times. It glows greenish in the dark (when exposed to oxygen) and is highly flammable and pyrophoric (self-igniting) upon contact with air. It is toxic, causing severe liver damage on ingestion and phossy jaw from chronic ingestion or inhalation. Its combustion has a characteristic garlic odor, and samples are commonly coated with white "diphosphorus pentoxide", which consists of P4O10 tetrahedra with oxygen inserted between the phosphorus atoms and at their vertices. White phosphorus is only slightly soluble in water and can be stored under water. It is soluble in benzene, oils, carbon disulfide, and disulfur dichloride.
Structure
White phosphorus exists as molecules of four phosphorus atoms in a tetrahedral structure, joined by six phosphorus–phosphorus single bonds. The tetrahedral arrangement results in ring strain and instability. Although both are called "white phosphorus", in fact two different crystal allotropes are known, interchanging reversibly at 195.2 K. The element's standard state is the body-centered cubic α form, which is actually metastable under standard conditions. The β form is believed to have a hexagonal crystal structure.
Molten and gaseous white phosphorus also retain the tetrahedral molecules until they start decomposing into P2 molecules at high temperature. The molecule in the gas phase has a P–P bond length of rg = 2.1994(3) Å, as was determined by gas electron diffraction. The β form of white phosphorus contains three slightly different P4 molecules, i.e. 18 different P–P bond lengths, between 2.1768(5) and 2.1920(5) Å. The average P–P bond length is 2.183(5) Å.
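This geometry can be checked numerically: four alternating vertices of a cube form a regular tetrahedron, so six equal P–P separations follow directly from the bond length alone. A minimal sketch in Python, assuming only the gas-phase bond length quoted above:

```python
import itertools
import math

# Four alternating vertices of a cube form a regular tetrahedron.
# For a cube of half-edge a, each vertex pair is separated by 2*a*sqrt(2),
# so a is chosen to reproduce the measured P-P bond length.
BOND = 2.1994  # gas-phase P-P bond length in angstroms (value quoted above)
a = BOND / (2 * math.sqrt(2))
atoms = [(a, a, a), (a, -a, -a), (-a, a, -a), (-a, -a, a)]

# Every pair of the four atoms is bonded: C(4, 2) = 6 P-P single bonds.
for p, q in itertools.combinations(atoms, 2):
    print(f"P-P distance: {math.dist(p, q):.4f} angstroms")
```

All six pairwise distances print as 2.1994 Å, reflecting the six equivalent single bonds of the P4 cage.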
Chemical properties
Although white phosphorus is not the most stable allotrope of phosphorus, its molecular nature allows it to be easily purified, and it is therefore defined to have zero enthalpy of formation.
In base, white phosphorus spontaneously disproportionates to phosphine and various phosphorus oxyacid salts.
Many reactions of white phosphorus involve insertion into the P-P bonds, such as the reaction with oxygen, sulfur, phosphorus tribromide and the NO+ ion.
It ignites spontaneously in air at about , and at much lower temperatures if finely divided (due to melting-point depression). Phosphorus reacts with oxygen, usually forming two oxides depending on the amount of available oxygen: P4O6 (phosphorus trioxide) when reacted with a limited supply of oxygen, and P4O10 when reacted with excess oxygen. On rare occasions, P4O7, P4O8, and P4O9 are also formed, but in small amounts. This combustion gives phosphorus(V) oxide:
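A standard balanced form of this reaction is

\[
\mathrm{P_4 + 5\,O_2 \rightarrow P_4O_{10}},
\]

while with a limited supply of oxygen the analogous idealized reaction is \(\mathrm{P_4 + 3\,O_2 \rightarrow P_4O_6}\).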
Production and applications
The white allotrope can be produced using several methods. In the industrial process, phosphate rock is heated in an electric or fuel-fired furnace in the presence of carbon and silica. Elemental phosphorus is then liberated as a vapour and can be collected under phosphoric acid. An idealized equation for this carbothermal reaction is shown for calcium phosphate (although phosphate rock contains substantial amounts of fluoroapatite, which would also form silicon tetrafluoride):
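For pure calcium phosphate, a standard balanced form of this idealized equation is

\[
\mathrm{2\,Ca_3(PO_4)_2 + 6\,SiO_2 + 10\,C \rightarrow 6\,CaSiO_3 + 10\,CO + P_4}.
\]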
In this way, an estimated 750,000 tons were produced in 1988.
Most (83% in 1988) white phosphorus is used as a precursor to phosphoric acid, half of which is used for food or medical products where purity is important. The other half is used for detergents. Much of the remaining 17% is mainly used for the production of chlorinated compounds phosphorus trichloride, phosphorus oxychloride, and phosphorus pentachloride:
Other products derived from white phosphorus include phosphorus pentasulfide and various metal phosphides.
Other polyhedrane analogues
Although white phosphorus forms the tetrahedron, the simplest possible Platonic solid, no other polyhedral phosphorus clusters are known. White phosphorus converts to the thermodynamically more stable red allotrope, but that allotrope does not consist of isolated polyhedra.
A cubane-type cluster, in particular, is unlikely to form, and the closest approach is the half-phosphorus compound , produced from phosphaalkynes. Other clusters are more thermodynamically favorable, and some have been partially formed as components of larger polyelemental compounds.
Safety
White phosphorus is acutely toxic, with a lethal dose of 50–100 mg (about 1 mg/kg body weight). Its mode of action is thought to involve its reducing properties. It is metabolized to phosphate, which is not toxic.
White phosphorus is used as a weapon because it is pyrophoric. For the same reason, it is dangerous to handle, and measures are taken to protect samples from air. There are anecdotal reports of injuries to beachcombers who collect washed-up samples while unaware of their true nature.
See also
Red phosphorus
Allotropes of phosphorus
Phosphorus
References
Allotropes | White phosphorus | [
"Physics",
"Chemistry"
] | 1,140 | [
"Periodic table",
"Properties of chemical elements",
"Allotropes",
"Materials",
"Matter"
] |
77,581,037 | https://en.wikipedia.org/wiki/PCB-Investigator | PCB-Investigator is a software tool used for the analysis, visualization, and optimization of printed circuit boards (PCBs). It is used for tasks such as PCB design validation and quality assurance. It is developed by EasyLogix, which is owned by Schindler & Schill GmbH, a German company specializing in electronic design automation (EDA) software.
History
The software was introduced in 2008 to meet the increasing demand for tools that support PCB analysis and visualization. Since its release, PCB-Investigator has been updated regularly to include new features and support for various PCB file formats. The currently available version is 15.1.
Features
PCB-Investigator provides features for PCB design and manufacturing, supporting various file formats like ODB++, Gerber, and IPC-2581. It includes tools for design rule checks (DRC) and other analyses to verify compliance with industry standards. The software also offers 3D visualization of PCBs and supports scripting for task automation and workflow customization. In 2017, PCB-Investigator introduced browser-based support for PCB design reviews.
See also
Comparison of EDA software
Altium Designer
KiCad
Cadence Allegro
References
External links
Printed circuit board manufacturing | PCB-Investigator | [
"Engineering"
] | 250 | [
"Electrical engineering",
"Electronic engineering",
"Printed circuit board manufacturing"
] |
77,581,443 | https://en.wikipedia.org/wiki/Aron%20Walsh | Aron Walsh (born February 28, 1983) is a chemist known for his research in the fields of computational chemistry and materials science.
Early life and education
Walsh received his undergraduate degree in chemistry from Trinity College Dublin. He went on to complete his PhD in computational chemistry at the same institution. His postdoctoral research included a Marie Curie Fellowship at University College London and a fellowship at the National Renewable Energy Laboratory in the United States.
Academic career
Walsh began his academic career as a Royal Society University Research Fellow at the University of Bath, where he also served as a professor of Materials Theory. He holds a faculty position at Imperial College London leading the Materials Design Group.
Research contributions
Walsh's research integrates quantum mechanics with data-driven machine learning and multi-scale modeling approaches.
Awards and honours
Royal Society of Chemistry Harrison-Meldola Memorial Prize (2013)
Marsh Prize for Best Chemistry Publication (2014), awarded by the University of Bath.
Publications and editorial work
Walsh has written or co-written over 500 research articles. Additionally, he serves as an Associate Editor for the Journal of the American Chemical Society (JACS).
References
1983 births
Living people
Alumni of Trinity College Dublin
Computational chemists
Materials scientists and engineers
Academics of Imperial College London
Academics of the University of Bath | Aron Walsh | [
"Chemistry",
"Materials_science",
"Engineering"
] | 253 | [
"Materials science",
"Computational chemists",
"Computational chemistry",
"Theoretical chemists",
"Materials scientists and engineers"
] |
77,582,898 | https://en.wikipedia.org/wiki/Type%20IIA%20supergravity | In supersymmetry, type IIA supergravity is the unique supergravity in ten dimensions with two supercharges of opposite chirality. It was first constructed in 1984 by a dimensional reduction of eleven-dimensional supergravity on a circle. The other supergravities in ten dimensions are type IIB supergravity, which has two supercharges of the same chirality, and type I supergravity, which has a single supercharge. In 1986 a deformation of the theory was discovered which gives mass to one of the fields and is known as massive type IIA supergravity. Type IIA supergravity plays a very important role in string theory as it is the low-energy limit of type IIA string theory.
History
After supergravity was discovered in 1976 with pure 4D supergravity, significant effort was devoted to understanding other possible supergravities that can exist with various numbers of supercharges and in various dimensions. The discovery of eleven-dimensional supergravity in 1978 led to the derivation of many lower dimensional supergravities through dimensional reduction of this theory. Using this technique, type IIA supergravity was first constructed in 1984 by three different groups, by F. Giani and M. Pernici, by I.C.G. Campbell and P. West, and by M. Huq and M. A. Namazie. In 1986 it was noticed by L. Romans that there exists a massive deformation of the theory. Type IIA supergravity has since been extensively used to study the low-energy behaviour of type IIA string theory. The terminology of type IIA, type IIB, and type I was coined by J. Schwarz, originally to refer to the three string theories that were known of in 1982.
Theory
Ten dimensions admits both N = 1 and N = 2 supergravity, depending on whether there are one or two supercharges. Since the smallest spinorial representations in ten dimensions are Majorana–Weyl spinors, the supercharges come in two types depending on their chirality, giving three possible supergravity theories. The theory formed using two supercharges of opposite chiralities is denoted by N = (1, 1) and is known as type IIA supergravity.
This theory contains a single multiplet, known as the ten-dimensional nonchiral multiplet. The fields in this multiplet are the metric corresponding to the graviton, together with 3-form, 2-form, and 1-form gauge fields, the 2-form being the Kalb–Ramond field. There is also a Majorana gravitino and a Majorana spinor, both of which decompose into a pair of Majorana–Weyl spinors of opposite chiralities. Lastly, there is a scalar field, the dilaton.
This nonchiral multiplet can be decomposed into the ten-dimensional N = 1 multiplet, along with four additional fields. In the context of string theory, the bosonic fields in the first multiplet consist of NSNS fields, while the remaining bosonic fields are all RR fields. The fermionic fields are meanwhile in the NSR sector.
Algebra
The superalgebra for supersymmetry is given by
where all terms on the right-hand side besides the first one are the central charges allowed by the theory. Here the anticommutator is taken between the spinor components of the Majorana supercharges, and the charge conjugation operator also appears. Since the anticommutator is symmetric, the only matrices allowed on the right-hand side are ones that are symmetric in the spinor indices. In ten dimensions the relevant matrices are symmetric only for particular ranks modulo four, with the chirality matrix behaving as just another matrix, except with no index. Going only up to five-index matrices, since the rest are equivalent up to Poincaré duality, yields the set of central charges described by the above algebra.
The various central charges in the algebra correspond to different BPS states allowed by the theory. In particular, the 0-, 2-, and 4-index charges correspond to the D0, D2, and D4 branes. Another central charge corresponds to the NSNS 1-brane, which is equivalent to the fundamental string, while a five-index charge corresponds to the NS5-brane.
Action
The type IIA supergravity action is given up to four-fermion terms by
Here each field strength tensor corresponds to a p-form gauge field. The 3-form gauge field has a modified field strength tensor obeying a non-standard Bianchi identity. Meanwhile, the remaining quantities appearing in the action are various fermion bilinears given by
The first line of the action has the Einstein–Hilbert action, the dilaton kinetic term, the 2-form field strength tensor. It also contains the kinetic terms for the gravitino and spinor , described by the Rarita–Schwinger action and Dirac action, respectively. The second line has the kinetic terms for the 1-form and 3-form gauge fields as well as a Chern–Simons term. The last line contains the cubic interaction terms between two fermions and a boson.
Supersymmetry transformations
The supersymmetry variations that leave the action invariant are given up to three-fermion terms by
They are useful for constructing the Killing spinor equations and finding the supersymmetric ground states of the theory since these require that the fermionic variations vanish.
Related theories
Massive type IIA supergravity
Since type IIA supergravity has p-form field strengths of even dimensions, it also admits a nine-form gauge field . But since is a scalar and the free field equation is given by , this scalar must be a constant. Such a field therefore has no propagating degrees of freedom, but does have an energy density associated to it. Working only with the bosonic sector, the ten-form can be included in supergravity by modifying the original action to get massive type IIA supergravity
where is equivalent to the original type IIA supergravity up to the replacement of and . Here is known as the Romans mass and it acts as a Lagrange multiplier for . Often one integrates out this field strength tensor resulting in an action where acts as a mass term for the Kalb–Ramond field.
Unlike in the regular type IIA theory, which has a vanishing scalar potential , massive type IIA has a nonvanishing scalar potential. While the supersymmetry transformations appear to be realised, they are actually formally broken since the theory corresponds to a D8-brane background. A closely related theory is Howe–Lambert–West supergravity which is another massive deformation of type IIA supergravity, but one that can only be described at the level of the equations of motion. It is acquired by a compactification of eleven-dimensional MM theory on a circle.
Relation to 11D supergravity
Compactification of eleven-dimensional supergravity on a circle, keeping only the zero Fourier modes that are independent of the compact coordinate, results in type IIA supergravity. Writing eleven-dimensional supergravity in terms of its graviton, gravitino, and 3-form gauge field, the 11D metric decomposes into the 10D metric, the 1-form, and the dilaton as
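In one common string-frame convention (other conventions differ by field redefinitions), this reduction ansatz reads

\[
ds_{11}^2 = e^{-2\phi/3}\, g_{\mu\nu}\, dx^{\mu} dx^{\nu} + e^{4\phi/3} \left( dz + C_{\mu}\, dx^{\mu} \right)^2,
\]

where \(z\) is the compact coordinate, \(g_{\mu\nu}\) is the ten-dimensional string-frame metric, \(C_{\mu}\) is the 1-form, and \(\phi\) is the dilaton.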
Meanwhile, the 11D 3-form decomposes into the 10D 3-form and the 10D 2-form . The ten-dimensional modified field strength tensor directly arises in this compactification from .
Dimensional reduction of the fermions must generally be done in terms of the flat coordinates, using the 11D vielbein. In that case the 11D Majorana gravitino decomposes into the 10D Majorana gravitino and the Majorana fermion, although the exact identification is given by
where this is chosen to make the supersymmetry transformations simpler. The ten-dimensional supersymmetry variations can also be directly acquired from the eleven-dimensional ones by setting .
Relation to type IIA string theory
The low-energy effective field theory of type IIA string theory is given by type IIA supergravity. The fields correspond to the different massless excitations of the string, with the metric, 2-form , and dilaton being NSNS states that are found in all string theories, while the 3-form and 1-form fields correspond to the RR states of type IIA string theory. Corrections to the type IIA supergravity action come in two types, quantum corrections in powers of the string coupling , and curvature corrections in powers of . Such corrections often play an important role in type IIA string phenomenology. The type IIA superstring coupling constant corresponds to the vacuum expectation value of , while the string length is related to the gravitational coupling constant through .
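In standard conventions these relations take the form

\[
g_s = e^{\langle \phi \rangle}, \qquad 2\kappa^2 = (2\pi)^7 \alpha'^4 g_s^2, \qquad \ell_s = \sqrt{\alpha'},
\]

where \(\alpha'\) is the Regge slope; the exact numerical factors depend on the normalization chosen.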
When string theory is compactified to acquire four-dimensional theories, this is often done at the level of the low-energy supergravity. Reduction of type IIA on a Calabi–Yau manifold yields an theory in four dimensions, while reduction on a Calabi–Yau orientifold further breaks the symmetry down to give the phenomenologically viable four-dimensional supergravity. Type IIA supergravity is automatically anomaly free since it is a non-chiral theory.
Notes
References
Supersymmetric quantum field theory
Theories of gravity
String theory | Type IIA supergravity | [
"Physics",
"Astronomy"
] | 1,941 | [
"Astronomical hypotheses",
"Supersymmetric quantum field theory",
"Theoretical physics",
"Theories of gravity",
"String theory",
"Supersymmetry",
"Symmetry"
] |
77,583,575 | https://en.wikipedia.org/wiki/Type%20IIB%20supergravity | In supersymmetry, type IIB supergravity is the unique supergravity in ten dimensions with two supercharges of the same chirality. It was first constructed in 1983 by John Schwarz and independently by Paul Howe and Peter West at the level of its equations of motion. While it does not admit a fully covariant action due to the presence of a self-dual field, it can be described by an action if the self-duality condition is imposed by hand on the resulting equations of motion. The other types of supergravity in ten dimensions are type IIA supergravity, which has two supercharges of opposing chirality, and type I supergravity, which has a single supercharge. The theory plays an important role in modern physics since it is the low-energy limit of type IIB string theory.
History
After supergravity was discovered in 1976, there was a concentrated effort to construct the various possible supergravities that were classified in 1978 by Werner Nahm. He showed that there exist three types of supergravity in ten dimensions, later named type I, type IIA and type IIB. While both type I and type IIA can be realised at the level of the action, type IIB does not admit a covariant action. Instead it was first fully described through its equations of motion, derived in 1983 by John Schwarz, and independently by Paul Howe and Peter West. In 1995 it was realised that one can effectively describe the theory using a pseudo-action where the self-duality condition is imposed as an additional constraint on the equations of motion. The main application of the theory is as the low-energy limit of type IIB strings, and so it plays an important role in string theory, type IIB moduli stabilisation, and the AdS/CFT correspondence.
Theory
Ten-dimensional supergravity admits both N = 1 and N = 2 supergravities, which differ by the number of Majorana–Weyl spinor supercharges that they possess. The type IIB theory has two supercharges of the same chirality, equivalent to a single Weyl supercharge, and is sometimes denoted as the ten-dimensional N = (2, 0) supergravity. The field content of this theory is given by the ten-dimensional chiral supermultiplet, consisting of the metric corresponding to the graviton, together with 4-form, 2-form, and 0-form gauge fields, the Kalb–Ramond field, and the dilaton. There is also a single left-handed Weyl gravitino, equivalent to two left-handed Majorana–Weyl gravitinos, and a single right-handed Weyl fermion, also equivalent to two right-handed Majorana–Weyl fermions.
Algebra
The superalgebra for ten-dimensional supersymmetry is given by
Here with are the two Majorana–Weyl supercharges of the same chirality. They therefore satisfy the projection relation where is the left-handed chirality projection operator and is the ten-dimensional chirality matrix.
The matrices allowed on the right-hand side are fixed by the fact that they must be representations of the R-symmetry group of the type IIB theory, which only allows for the identity, the antisymmetric matrix, and trace-free symmetric matrices. Since the anticommutator is symmetric under an exchange of the spinor and R-symmetry indices, the maximally extended superalgebra can only have terms with the same chirality and symmetry property as the anticommutator. The terms are therefore a product of one of these matrices with a gamma-matrix structure involving the charge conjugation operator. In particular, when the spinor matrix is symmetric it multiplies the symmetric R-symmetry matrices, while when it is antisymmetric it multiplies the antisymmetric one. In ten dimensions the gamma-matrix structures are symmetric for certain ranks modulo four and antisymmetric for the others. Since the projection operator is a sum of the identity and a gamma matrix, the symmetric combination contributes for one set of ranks and the antisymmetric combination for the complementary set. This yields all the central charges found in the superalgebra up to Poincaré duality.
The central charges are each associated with various BPS states that are found in the theory. Two central charges correspond to the fundamental string and the D1-brane, another is associated with the D3-brane, while the remaining ones give three 5-form charges. One is the D5-brane, another the NS5-brane, and the last is associated with the KK monopole.
Self-dual field
For the supergravity multiplet to have an equal number of bosonic and fermionic degrees of freedom, the four-form has to have 35 degrees of freedom. This is achieved when the corresponding field strength tensor is self-dual , eliminating half of the degrees of freedom that would otherwise be found in a 4-form gauge field.
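The counting can be made explicit: a p-form gauge field in D spacetime dimensions carries \(\binom{D-2}{p}\) physical degrees of freedom, so for the 4-form in ten dimensions

\[
\binom{8}{4} = 70 \quad \longrightarrow \quad \tfrac{70}{2} = 35 \quad \text{once } F_5 = \star F_5 \text{ is imposed},
\]

matching the 35 bosonic degrees of freedom required of the 4-form in the multiplet.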
This presents a problem when constructing an action since the kinetic term for the self-dual 5-form field vanishes. The original way around this was to only work at the level of the equations of motion where self-duality is just another equation of motion. While it is possible to formulate a covariant action with the correct degrees of freedom by introducing an auxiliary field and a compensating gauge symmetry, the more common approach is to instead work with a pseudo-action where self-duality is imposed as an additional constraint on the equations of motion. Without this constraint the action cannot be supersymmetric since it does not have an equal number of fermionic and bosonic degrees of freedom. Unlike for example type IIA supergravity, type IIB supergravity cannot be acquired as a dimensional reduction of a theory in higher dimensions.
Pseudo-action
The bosonic part of the pseudo-action for type IIB supergravity is given by
Here and are modified field strength tensors for the 2-form and 4-form gauge fields, with the resulting Bianchi identity for the 5-form being given by . The notation employed for the kinetic terms is where are the regular field strength tensors associated to the gauge fields. Self-duality has to be imposed by hand onto the equations of motion, making this a pseudo-action rather than a regular action.
The first line in the action contains the Einstein–Hilbert action, the dilaton kinetic term, and the Kalb–Ramond field strength tensor . The first term on the second line has the appropriately modified field strength tensors for the three gauge fields, while the last term is a Chern–Simons term. The action is written in the string frame which allows one to equate the fields to type IIB string states. In particular, the first line consists of kinetic terms for the NSNS fields, with these terms being identical to those found in type IIA supergravity. The first integral on the second line meanwhile consists of the kinetic term for the RR fields.
Global symmetry
Type IIB supergravity has a global noncompact SL(2,R) symmetry. This can be made explicit by rewriting the action in the Einstein frame and defining the axio-dilaton complex scalar field. Introducing the matrix
and combining the two 3-form field strength tensors into a doublet , the action becomes
This action is manifestly invariant under SL(2,R) transformations, which transform the 3-forms and the axio-dilaton as
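In one common convention, writing the group element as a real 2×2 matrix with unit determinant, the axio-dilaton undergoes a fractional linear (Möbius) transformation

\[
\tau \rightarrow \frac{a\tau + b}{c\tau + d}, \qquad ad - bc = 1,
\]

while the doublet of 3-form field strengths rotates linearly under the same group element (the precise matrix convention varies between references).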
Both the metric and the self-dual field strength tensor are invariant under these transformations. The invariance of the 3-form field strength tensors follows from the fact that .
Supersymmetry transformations
The equations of motion acquired from the supergravity action are invariant under the following supersymmetry transformations
Here are the field strength tensors associated with the gauge fields, including all their magnetic duals for , while . Additionally, when is even and when it is odd. The type IIB pseudo-action can also be reformulated in a way that treats all RR fluxes equally in the so-called democratic formulation. Here the action is expressed in terms of all even fluxes up to , with a duality constraint imposed on all of them to get the correct number of degrees of freedom.
Relation to string theory
Type IIB supergravity is the low-energy limit of type IIB string theory. The fields of the supergravity in the string frame are directly related to the different massless states of the string theory. In particular, the metric, Kalb–Ramond field, and dilaton are NSNS fields, while the three p-forms are RR fields. Meanwhile, the gravitational coupling constant is related to the Regge slope through .
The global SL(2,R) symmetry of the supergravity is not a symmetry of the full type IIB string theory, since it would mix the two 2-form fields. This does not happen in the string theory since one of these is an NSNS field and the other an RR field, with these having different physics, such as the former coupling to strings but the latter not. The symmetry is instead broken to the discrete subgroup SL(2,Z), which is believed to be a symmetry of the full type IIB string theory.
The quantum theory is anomaly free, with the gravitational anomalies cancelling exactly. In string theory the pseudo-action receives much-studied corrections that are classified into two types. The first are quantum corrections in powers of the string coupling, and the second are stringy corrections in powers of the Regge slope. These corrections play an important role in many moduli stabilisation scenarios.
Dimensional reduction of type IIA and type IIB supergravities necessarily results in the same nine-dimensional theory since only one superalgebra of this type exists in this dimension. This is closely linked to the T-duality between the corresponding string theories.
Notes
References
Supersymmetric quantum field theory
Theories of gravity
String theory | Type IIB supergravity | [
"Physics",
"Astronomy"
] | 2,024 | [
"Astronomical hypotheses",
"Supersymmetric quantum field theory",
"Theoretical physics",
"Theories of gravity",
"String theory",
"Supersymmetry",
"Symmetry"
] |
77,590,024 | https://en.wikipedia.org/wiki/WA93%20experiment | The WA93 experiment (also known as the Light Universal Detector, or LUD) was a detector experiment conducted at CERN for studying the correlations between photons and charged particles. It was part of the research programme at the Super Proton Synchrotron (SPS) and was conducted mainly by the Indian high-energy heavy-ion physics team at CERN-SPS. A Photon Multiplicity Detector was used in the experiment to measure the multiplicity and the rapidity and azimuthal distributions of photons in ultra-relativistic heavy-ion collisions. The experiment was led by the Indian physicist Y. P. Viyogi, and Hans H. Gutbrod was the spokesperson of the project. The project was approved on 22 November 1990, and the experiment was completed on 9 May 2002.
Description
The WA93 experiment was a high-energy physics experiment conducted at CERN's Super Proton Synchrotron (SPS). Its primary goal was to study the properties of quark–gluon plasma (QGP), a hypothetical state of matter believed to have existed in the early universe. According to the Big Bang theory, the entire universe was filled with quark–gluon plasma before matter as we know it was created.
In the experiment, heavy ions such as sulfur and gold were collided at extremely high energies to create conditions similar to those shortly after the Big Bang.
Components of experimental setup
The major components of the WA93 experimental setup were beam counter, large magnet dipole, Multi Step Avalanche Chambers, Silicon Drift Detector, Photon Multiplicity Detector, Lead-Glass Spectrometer, Streamer-Tube Detectors, Mid-rapidity Calorimeter, Zero-Degree Calorimeter, Trigger System and Charged-Particle Spectrometer.
References
Experiments
Physics experiments
CERN experiments | WA93 experiment | [
"Physics"
] | 368 | [
"Experimental physics",
"Physics experiments"
] |
77,590,592 | https://en.wikipedia.org/wiki/Palam%C3%B3s%20Canyon | The Palamós Canyon, also known as La Fonera Canyon, is a submarine canyon located off the coast of Palamós, in the province of Girona, Catalonia. It is a significant geological feature of the Balearic Sea and plays an important role in the region's marine biodiversity.
Geography
The Palamós Canyon is a prominent submarine canyon located in the Northwestern Mediterranean Sea, off the coast of Catalonia, Spain. It is situated near the town of Palamós, from which it derives its name. The canyon begins at a depth of approximately 200 meters and extends down to depths exceeding 2,000 meters. The Palamós Canyon was formed through a combination of tectonic activity and sedimentary processes. It is part of the larger Catalan margin, which is characterized by its steep slopes and complex geological structures. The canyon plays a significant role in the transfer of sediments from the continental shelf to the deep sea.
Ecological importance
Submarine canyons like the Palamós Canyon are known for their high biodiversity and productivity. The unique topography and hydrography of the canyon create habitats for a variety of marine species, including commercially important fish and invertebrates. The canyon also serves as a conduit for organic matter, supporting deep-sea ecosystems, and is home to a great diversity of marine species. Benthic organisms inhabiting the canyon include corals, sponges, and a variety of other invertebrates, and the canyon is a passage area for several species of pelagic fish and marine mammals, including dolphins. Dense cold-water corals have recently been discovered on its walls, living at cold temperatures. The canyon acts as an oasis of biodiversity for many crustaceans and fish, since its rocky walls shelter an immense variety of organisms, some of which, like corals, sponges and gorgonians, are protected and in danger of extinction.
Research and exploration
The Palamós Canyon has been the subject of various scientific studies, particularly in the fields of oceanography and marine biology. Research has shown that the canyon’s sedimentary dynamics are significantly impacted by both natural and anthropogenic factors. Studies have used various methods, including autonomous hydrographic profilers and near-bottom current meters, to monitor sediment transport and water turbidity within the canyon. Research efforts have focused on understanding sediment dynamics, current patterns, and the ecological significance of the canyon. Notably, the CANYONS project deployed multiple moorings equipped with sediment traps and current meters to study the canyon’s sedimentary processes.
Human impact
Human activities, such as fishing and pollution, have impacted the Palamós Canyon. Efforts are being made to mitigate these impacts through sustainable fishing practices and conservation initiatives. Collaborative projects between scientists and local fishing communities aim to preserve the ecological integrity of the canyon.
See also
Blanes Canyon
Catalan Sea
Submarine canyon
References
External links
Submarine canyons
Oceanography | Palamós Canyon | [
"Physics",
"Environmental_science"
] | 609 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
57,886,751 | https://en.wikipedia.org/wiki/NGC%203840 | NGC 3840 is a spiral galaxy located about 320 million light-years away in the constellation Leo. The galaxy was discovered by astronomer Heinrich d'Arrest on May 8, 1864. NGC 3840 is a member of the Leo Cluster. The galaxy is rich in neutral atomic hydrogen (H I) and is not interacting with its environment.
NGC 3840 is likely to be a low-luminosity AGN (LLAGN).
See also
List of NGC objects (3001–4000)
References
External links
3840
36477
6702
Leo (constellation)
Leo Cluster
Spiral galaxies
Astronomical objects discovered in 1864
Active galaxies | NGC 3840 | [
"Astronomy"
] | 127 | [
"Leo (constellation)",
"Constellations"
] |
57,899,033 | https://en.wikipedia.org/wiki/Breakthrough%20Laminar%20Aircraft%20Demonstrator%20in%20Europe | The Breakthrough Laminar Aircraft Demonstrator in Europe (BLADE) is an Airbus project within the European Clean Sky framework to flight-test experimental laminar-flow wing sections on an A340 from September 2017.
Design
Natural laminar flow is opposed to hybrid laminar flow artificially induced through hardware.
It is difficult to industrialise a wing smooth enough to sustain laminar flow in operation, as this demands very tight design and manufacturing tolerances and rules out leading-edge retractable slats and fasteners, while the wing must also be aerodynamically robust enough to withstand surface deformations as well as dirt, de-icing fluid, and rain-droplet contamination.
The metallic outboard section with a carbon fiber reinforced plastic upper laminar flow surface is isolated from the rest of the wing and has two ailerons on each side.
Its wing sweep is around 20° for a Mach 0.75 cruise, instead of 30° for a Mach 0.82–0.84 cruise.
Laminar flow is expected along 50% of the chord length instead of just aft of the leading edge, halving the wing friction drag, reducing the overall aircraft drag by 8% and saving up to 5% in fuel on an 800 nmi (1,480 km) sector.
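These figures can be cross-checked with simple sweep theory and drag bookkeeping. The sketch below uses the usual M·cos(sweep) rule of thumb, and the 16% wing-friction share is inferred from the quoted numbers rather than stated by Airbus:

```python
import math

# Simple sweep theory: the wing section effectively sees M * cos(sweep).
# A 20 deg wing cruising at Mach 0.75 has roughly the same effective
# Mach number as a 30 deg wing cruising at Mach 0.82, which is why the
# lower sweep pairs with the slower cruise speed.
for mach, sweep_deg in [(0.75, 20.0), (0.82, 30.0), (0.84, 30.0)]:
    eff = mach * math.cos(math.radians(sweep_deg))
    print(f"M={mach:.2f} at {sweep_deg:.0f} deg sweep -> effective M={eff:.3f}")

# Drag bookkeeping: if halving wing friction drag removes 8% of total
# drag, wing friction must account for about 16% of total aircraft drag.
implied_wing_friction_share = 2 * 0.08
print(f"Implied wing-friction share: {implied_wing_friction_share:.0%}")
```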
Development
The demonstrator took off on 26 September 2017.
In April 2018, after 66 flight hours, drag reduction was better than expected at 10%, and laminar flow was more stable than anticipated, including when the wing twisted and flexed.
Both wings, with their carbon-fibre upper surfaces, reliably generate the desired effect, while the carbon-fibre left-wing leading edge and the metallic right-wing leading edge show small differences in aerodynamic effect.
The aerodynamic benefits could be sustained at Mach 0.78, up from Mach 0.75, and next-generation single-aisle aircraft could use the technology from the late 2020s.
Tests will continue until 2019 and will include wing contamination and a fixed Krüger flap.
Morphing flaps should be flight tested from May 2020.
References
Further reading
2010s international experimental aircraft
Aerodynamics
Air pollution organizations
Airbus A340
European Commission projects
European Union and science and technology
International aviation organizations
Pan-European trade and professional organizations
Research and development in Europe
Research projects | Breakthrough Laminar Aircraft Demonstrator in Europe | [
"Chemistry",
"Engineering"
] | 452 | [
"Aerospace engineering",
"Aerodynamics",
"Fluid dynamics"
] |
71,800,629 | https://en.wikipedia.org/wiki/Nondestructive%20Evaluation%204.0 | Nondestructive Evaluation 4.0 (NDE 4.0) has been defined by Vrana et al. as "the concept of cyber-physical non-destructive evaluation (including nondestructive testing) arising from Industry 4.0 digital technologies, physical inspection methods, and business models. It seeks to enhance inspection performance, integrity engineering and decision making for safety, sustainability, and quality assurance, as well as provide timely and relevant data to improve design, production, and maintenance characteristics."
NDE 4.0 arose in response to the emergence of the Fourth Industrial Revolution, which can be traced to the development of a high-tech strategy for the German government in 2015, under the term Industrie 4.0. The term became widely known in 2016 following its adoption as the theme of the World Economic Forum annual meeting in Davos.
The concept gained strength following the opening of the Center for the Fourth Industrial Revolution in 2016 in San Francisco. NDE 4.0 evolved in conjunction with Industry 4.0. It is recognized as a future goal by several global NDE organizations: the International Committee for Nondestructive Testing (ICNDT) has a Specialist international Group (SIG) on NDE 4.0, and the European Federation for Nondestructive Testing (EFNDT) created a working group designated as "EFNDT Working Group 10: NDE 4.0" (WG10). The importance of NDE 4.0 is reflected in the activities of NDE organizations throughout the world, including the American Society of Nondestructive Testing (ASNT), the British Institute of Non-Destructive Testing (BINDT), and the German Society for Non-Destructive Testing (DGZfP), through publications and training.
History
The developments leading to NDE 4.0, just like those leading to Industry 4.0, are divided into prior revolutions based on distinct technological and historical markers. These are usually defined for industry and hence carry over to nondestructive evaluation.
NDE 1.0
The first revolution in nondestructive evaluation coincides with the first industrial revolution and refers to the period between approximately 1770 (following the invention of Watt's steam engine in 1769) and 1870. The transition from hand and artisanal production and “muscle power” to mechanized production and steam- and hydro-power necessitated the introduction of nondestructive testing. Prior to this period, people had tested objects for thousands of years through simple methods based on human sensory perception: feeling, smelling, listening and observing as appropriate.
The developments of the first industrial revolution gave birth to non-destructive inspection through the introduction of tools that sharpened the human senses, and through tentative attempts at standardized procedures. Simple tools such as lenses, stethoscopes, and tap-and-listen procedures improved detection capabilities by enhancing human senses. Establishing procedures made the outcome of the inspection comparable over time. At the same time, industrialization also made it necessary to expand quality assurance measures, a process that continues to this day.
NDE 2.0
The second revolution in NDE commonly refers to the period between 1870, with the appearance of the first means of mass production, marked by the introduction of the conveyor belt, and 1969. As with the second industrial revolution, it is characterized by the use of physical, chemical, mechanical and electrical knowledge to improve testing and evaluation.
The transformation of electromagnetic and acoustic waves, which lie outside the range of human perception, into signals that can be interpreted by humans resulted in means of interrogating components for better visualization of material inhomogeneities at or close to the surface. Following the discovery of X-rays in 1895, radiography became a dominant method for testing, followed by gamma-ray testing and, later, electromagnetic means of testing.
With the introduction of the transistor into electronics, testing methods such as ultrasound developed further into lighter, portable systems suitable for field testing. The first detectors for infrared and terahertz detection were invented around the same time and the first eddy current devices became available. Although these are critical methods of testing that persist to this day, further breakthroughs had to wait until digitization and digital electronics developed in the third NDE revolution.
NDE 3.0
The third revolution in NDE parallels the advent of microelectronics, digital technologies and computers. It is usually thought of as the period starting in 1969, marked by the introduction of the first programmable logic controller (PLC), and ending in 2016. Digital inspection equipment, such as X-ray detectors, digital ultrasonic and eddy current equipment, and digital cameras, became integral parts of the system of testing and evaluation. Robotics led to automated processes, improving convenience, safety, speed and repeatability.
Digital technologies offered leaps in managing inspection data acquisition, storage, processing, 2D and 3D imaging, interpretation, and communication. Data processing and sharing became the norm. At the same time, these developments created new challenges and opportunities such as data security and integrity and introduced new concepts such as value of data and its monetization.
NDE 4.0
Whereas prior revolutions focused on improving testing and evaluation by taking advantage of the tools, methods and developments available in the respective periods, the fourth NDE revolution is characterized by integration: the integration of tools, testing methods, digital technologies, and communication into coherent closed-loop systems that allow both feedback and feed-forward to manufacturing. The purpose is improvement in testing and evaluation taking advantage of current and emerging production technologies and communication and information systems.
At the heart of NDE 4.0 are digitalization, networking, information transparency, communication and processing tools such as artificial intelligence and machine learning. One of the primary added values in NDE 4.0 is the possibility of product design and concurrent nondestructive evaluation through use of digital twins and digital threads, so that both design and testing can influence each other continuously. Another is the ability to serve emerging trends such as testing in custom manufacturing, remote testing and predictive maintenance over the lifetime of products.
NDE 4.0 is not a fixed set of rules and concepts but rather an evolving progression of ideas, tools and procedures brought about by advances in production, communication and processing. Its global purpose is to serve the needs of industry and respond to changes brought about by the emergence of new opportunities.
Drivers and components
The primary driver of NDE 4.0 is the same as that of the fourth industrial revolution – the integration of digital tools and physical methods, driven by current digital technologies through introduction of new ways of digitalization of specific steps in NDE processes, with a promise of overall efficiency and reliability. There are three recognizable components of NDE 4.0. First, Industry 4.0 emerging digital technologies can be used to enhance NDE capabilities in what has been termed “Industry 4.0 for NDE”. Second, statistical analysis of NDE data provides insight into product performance and reliability. This is a valuable data source for Industry 4.0 to continuously improve the product design in the “NDE for Industry 4.0” process. Third, immersive training experiences, remote operation, intelligence augmentation, and data automation can enhance the NDE value proposition in terms of inspector safety and human performance in the third component of NDE 4.0 – the “Human Consideration”.
International Conference on NDE 4.0
The International Conference on NDE 4.0 was initiated by the ICNDT Specialist international Group (SIG) on NDE 4.0 and is planned to be organized biannually (this plan has been altered due to the COVID-19 pandemic):
14/15 & 20/21 April 2021: Virtual Conference with 4 keynotes, 26 invited presentations and four panel discussions organized (video recordings available online) by DGZfP and co-sponsored by ICNDT
24 – 27 October 2022 in Berlin, Germany with 4 keynotes and 15 technical sessions (including Artificial intelligence, Digital twin, Additive Manufacturing, Extended Reality, Reliability, and Predictive Maintenance). This conference was organized by DGZfP and co-sponsored by ICNDT. At this conference the Kurzweil Award for High Impact in NDE 4.0 (named after Ray Kurzweil) was initiated and awarded to Prof. Dr. Norbert Meyendorf and Prof. Dr. Bernd Valeske for their work "Starting the Field of NDE 4.0".
3–6 March 2025 in Taj Yeshwantpur, India. This conference is organized by the Indian Society for Non-destructive Testing (ISNT) and co-sponsored by ICNDT.
Further reading
Peer-reviewed publications on the topic of NDE 4.0 were covered in multiple special issues and books:
2020: NDE 4.0 (Special Issue of Materials Evaluation)
2020: NDE 4.0 (Special Issue of Research in Nondestructive Evaluation)
2020/2021: Trends in NDE 4.0: Purpose, Technology, and Application (Topical Collection in Journal of Nondestructive Evaluation)
2021/2022: NDE 4.0: Technical Basics, Applications and Role of Societies (Topical Collection in Journal of Nondestructive Evaluation)
2022: Handbook of Nondestructive Evaluation 4.0 (Major Reference Work)
2022/2023: NDE 4.0 Creating success stories, building the eco-system and continuing research (Topical Collection in Journal of Nondestructive Evaluation)
References
Sources
https://www.gartner.com/en/information-technology/glossary/data-monetization
Industrial automation
Industrial computing
Internet of things
Technology forecasting
Big data
Fourth Industrial Revolution
Knowledge economy
Digital technology
Maintenance
Quality control | Nondestructive Evaluation 4.0 | [
"Materials_science",
"Technology",
"Engineering"
] | 1,974 | [
"Information and communications technology",
"Industrial automation",
"Industrial engineering",
"Automation",
"Nondestructive testing",
"Materials testing",
"Digital technology",
"Data",
"Big data",
"Mechanical engineering",
"Maintenance",
"Industrial computing"
] |
53,319,015 | https://en.wikipedia.org/wiki/Official%20Medicines%20Control%20Laboratory | Official Medicines Control Laboratory (OMCL) is the term coined in Europe for a public institute in charge of controlling the quality of medicines and, depending on the country, other similar products (for example, medical devices). They are part of or report to national competent authorities (NCAs).
By testing medicines independently of manufacturers (that is, without any conflict of interest and with guaranteed impartiality), OMCLs play a fundamental role in ensuring the quality and contributing to the safety and efficacy of medicines, whether already on the market or not, for human and veterinary use.
OMCLs assess human and veterinary medicines to determine whether they meet the relevant requirements for content, purity, etc., as specified in the marketing authorisation dossier or an official pharmacopoeia. They can also check whether packaging and labelling comply with legal requirements, and provide support during quality assessment, good manufacturing practice (GMP) inspections and investigations of quality defects and pharmacovigilance. Investigations may also be carried out on products suspected of being falsified, in support of police, customs, health or judicial authorities. OMCLs also actively contribute to the development and verification of pharmacopoeial methods.
To take into account the cross-border and global dimension of medicines markets, OMCLs co-operate actively at the European level and beyond. They do so through the General European OMCL Network (GEON), which was set up jointly by the Council of Europe and the European Commission (EC) in 1995. A number of non-European OMCLs have joined the network as associate members.
The GEON, which comprises over 70 OMCLs from over 40 different countries, is co-ordinated by the Strasbourg-based European Directorate for the Quality of Medicines & HealthCare (EDQM) of the Council of Europe, an international organisation upholding human rights, democracy and the rule of law in Europe. A list of network members is publicly available on the EDQM homepage.
The network supports laboratories across Europe in making the best use of their expertise, technical capacity and financial resources, in order to ensure the appropriate control of medicines in Europe. This is done by organising co-ordinated testing programmes, meetings, training, audits and tailored Proficiency Testing Schemes (PTSs) and by providing the necessary (IT) infrastructure. The activities of the GEON are co-funded by the Council of Europe and the European Union.
OMCLs play an essential role in the Official Control Authority Batch Release (OCABR) procedure, which is foreseen in EU legislation. Under this procedure, each batch of vaccine for human use, medicinal product derived from human blood or plasma (e.g. clotting factors, human albumin) or immunological veterinary medicinal product (e.g. veterinary vaccine) undergoes independent quality control, including testing, by an OMCL after release by the manufacturer and before it reaches the patient. The legislation requires mutual recognition of test results among the member states (EU/EEA), so the OMCLs involved work together as a network to ensure that any batch is tested in only one OMCL, under agreed conditions, for the benefit of all.
See also
Public health laboratory
External links
General European OMCL Network, Council of Europe
OMCL - Official Medicines Control Laboratories, Health Canada
References
Laboratory types | Official Medicines Control Laboratory | [
"Chemistry"
] | 689 | [
"Laboratory types"
] |
53,320,788 | https://en.wikipedia.org/wiki/Henry%20Royce%20Institute | The Henry Royce Institute (often referred to as ‘Royce’) is the UK’s national institute for advanced materials research and innovation.
Vision
Royce's vision is to identify challenges and stimulate innovation in advanced materials research to support sustainable growth and development. Royce aims to be a "single front door" to the UK’s materials research community. Its stated mission is to “support world-recognised excellence in UK materials research, accelerating commercial exploitation of innovations, and delivering positive economic and societal impact for the UK.”
Operating from its Hub at the University of Manchester, Royce is a partnership of eleven leading UK institutions. Royce operates as a hub and spoke collaboration between the University of Manchester (the hub), and the spokes of the founding Partners National Nuclear Laboratory, UK Atomic Energy Authority, Imperial College London, University of Cambridge, University of Leeds, University of Liverpool, University of Oxford and the University of Sheffield. Royce also has two Associate Partners, Cranfield University and the University of Strathclyde.
Aims
Royce aims to fulfil its mission by:
Enabling national materials research foresighting, collaboration and strategy
Providing access to the latest equipment facilities and capabilities
Catalysing industrial collaboration and exploitation of materials research
Fostering materials science skills development, innovation training, and outreach.
History
In 2014, Chancellor George Osborne announced the launch of the Henry Royce Institute for advanced materials science in his Autumn Statement in 2014. He pledged "a quarter of a billion" to support his proposals from June 2014 on creating a Northern Powerhouse. Royce was then established through a grant from the Engineering and Physical Sciences Research Council (EPSRC), which has been used to fund construction and refurbishment of buildings, equipment, and research and technical staff. Royce now coordinates over 900 academics and over £300 million in facilities, "providing a joined-up framework that can deliver beyond the current capabilities of individual partners or research teams.”
Royce is one of the EPSRC's four major research institutes, the other three being: The Alan Turing Institute in data science; The Faraday Institution in battery science and technology; and the Rosalind Franklin Institute, which focuses on transforming life science through interdisciplinary research and technology development. These institutes represent a total financial investment of around £478 million and reflect the EPSRC’s vision and objectives ("to deliver economic impact and social prosperity; to realise the potential of engineering and physical sciences research; and to enable the UK engineering and physical sciences landscape to deliver").
In 2022, the Secretary of State for Business, Energy and Industrial Strategy, Grant Shapps, announced a further £95 million investment into Royce to deliver Phase 2 of its operations.
Name
The Henry Royce Institute is named after Sir Frederick Henry Royce OBE, a British engineer famous for his designs of car and airplane engines. Henry Royce manufactured his first car in Manchester in 1904, and in 1906 co-founded Rolls-Royce.
Research strategy
Royce strategy currently focuses on research in five areas:
Low carbon power: new modes of energy generation, energy storage, and efficient energy use – from hydrogen to fusion power and energy-efficient devices
Infrastructure and mobility: efficient housing, clean transport, and transforming foundation industries for clean manufacturing
Digital and Communications: low-loss digital processes quantum technologies for computing, sensors, and data storage
Circular economy: rethinking the way we use plastic and engage with waste streams, developing truly degradable materials
Health and wellbeing: reducing carbon emissions and enabling clean water production, delivering personalised medicine, and supporting the ageing population.
In September 2020, Royce published five technology roadmaps to "set out how materials science can be harnessed to deliver net-zero targets". These roadmaps were the product of a collaboration with the Institute of Physics and the Institute for Manufacturing, convening the UK's academic and industrial materials research communities to explore how novel materials and processes can contribute to more sustainable, affordable, and reliable energy production. The roadmaps cover photovoltaics, hydrogen, thermoelectrics, calorics, and low-loss electronics.
Structure
Royce operates as a hub-and-spoke model, with the hub at the University of Manchester and spokes at the other founding Partners, comprising the universities of Sheffield, Leeds, Liverpool, Cambridge, Oxford and Imperial College London, as well as UKAEA and NNL. The hub and spokes collaborate on research in the following areas:
2-Dimensional materials – led by the University of Manchester
Advanced metals processing – led by the University of Sheffield and the University of Manchester
Atoms to devices – led jointly by the University of Leeds, Imperial College London, the University of Cambridge and the University of Manchester
Biomedical materials – led by the University of Manchester
Chemical materials design – led by the University of Liverpool and the University of Manchester
Electrochemical Systems – led by the University of Oxford
Materials systems for demanding environments – led by the University of Manchester and Cranfield University
Nuclear materials – led by the University of Manchester, National Nuclear Laboratory and UKAEA
Location
The Henry Royce Institute’s hub in Manchester is in line with the Northern Powerhouse policy, and the UK government’s aim to support centres of excellence outside of the "Golden Triangle" of research institutions in London, Cambridge and Oxford.
New buildings funded or part-funded by the Royce grant include:
Royce Hub Building, Manchester
The newly constructed £105m Royce Hub Building draws together research facilities and meeting spaces to drive collaboration and industry engagement. Research undertaken here encompasses biomedical, metals processing, digital fabrication, and sustainable materials themes.
The nine-storey Hub building in the heart of the University of Manchester campus is the second-tallest current building on the campus after the Maths and Social Sciences Building. It is located next to the Alan Turing Building, and is close to the National Graphene Institute, the School of Physics and Astronomy, the School of Chemistry, and the Manchester Engineering Campus Development.
The building was due to open in autumn 2020, but the ceremony was delayed due to the COVID-19 pandemic. Planning permission was granted in February 2017 and construction started in December 2017. It was originally going to be constructed on the site of the BBC's New Broadcasting House, but the site was changed to the main campus of the university.
Sir Michael Uren Hub, Imperial
Royce funding was invested in Imperial’s new Sir Michael Uren Hub building, on which Royce occupies the eighth floor. Final works are still ongoing on some floors of the building, which has not yet been officially opened. Royce facilities here focus on the production and characterisation of thin films and devices composed of a broad spectrum of materials.
Bragg Centre for Materials Research, Leeds
Housed within the new Sir William Henry Bragg Building, the Bragg Centre for Materials Research will become operational in 2021, with a formal opening following in 2022. Royce equipment at the Bragg Centre focusses on enabling the discovery, creation, characterisation, and exploitation of materials engineered at the atomic level.
Royce Discovery Centre, Sheffield
Located in the new Harry Brearley Building in the centre of Sheffield, construction on the Royce Discovery Centre finished in 2020 with a formal opening in 2022. Housing equipment worth over £20m, the building features specialist laboratories, workshops and office spaces focussed on early-stage materials discovery and processing.
Materials Innovation Factory, Liverpool
Royce invested £10.9m in Liverpool’s new Materials Innovation Factory – an £81m facility dedicated to the research and development of advanced materials. Officially opened in 2018, the site includes a Royce Open Access Area which houses one of the highest concentrations of materials science robotics in the world, and also a suite of advanced analytical equipment.
Royce Translational Centre, Sheffield
Part of the University of Sheffield’s Advanced Manufacturing Park, the Royce Translational Centre officially opened in October 2018. Its purpose is to evolve novel materials and processing techniques developed by research teams, making them accessible for trial by industry collaborators.
Royce@Cambridge
Royce@Cambridge is based within Cambridge’s contemporary Maxwell Centre, with labs at various sites on the University's West Cambridge site. Royce’s £10m investment in open access facilities at the University of Cambridge addresses energy generation, energy storage, and efficient energy use, with equipment for fabrication of new battery structures, X-ray photoelectron spectroscopy, X-ray tomography, and electrochemical characterisation.
Outreach
Alongside funding research and facilities, Royce has funded outreach and skills programmes aimed at encouraging children and young people to consider careers in Materials Science and Engineering. Working with the Discover Materials group, they facilitated a virtual open week in July/August 2020 aimed at 16-18 year olds considering university degrees. Royce also has a regular stand at the annual Bluedot festival, engaging children and their families in interactive materials science challenges.
Collaborations
Royce partners have collaborated with other institutions to win grants from the Faraday Institution to develop new energy storage technologies. Royce has also collaborated with the Alan Turing Institute on data-centric engineering challenges, and with the Rosalind Franklin Institute on procuring characterisation capabilities.
Leadership
The Royce leadership team comprises:
Chair: Professor Mark Smith
CEO: Professor David Knowles FREng
Chief Scientist: Professor Phil Withers FRS, FREng
Chief Scientific Officer: Professor Ian Kinloch
Connection to UK Government Policy
Advanced materials innovation is a key focus area for the UK for several reasons. UK businesses that depend on the production and processing of materials represent 15% of UK GDP, have a turnover of approximately £200bn, exports of £50bn, and employ over 2.6 million people. Research in advanced materials is an area of national strength, as one of the "Eight Great" technologies and a major contributor to most of the other seven, with over 150,000 published patent applications between 2004 and 2013. Across the UK’s sector strategies, advanced materials is a critical component in securing the full economic benefit for the energy sector, transportation, construction, the growing digital economy, life sciences, and agricultural technology.
References
Buildings at the University of Manchester
Engineering research institutes
Engineering and Physical Sciences Research Council
Materials science institutes
Research institutes in Manchester | Henry Royce Institute | [
"Materials_science",
"Engineering"
] | 2,035 | [
"Materials science organizations",
"Engineering research institutes",
"Materials science institutes"
] |
53,324,696 | https://en.wikipedia.org/wiki/K%E2%80%93Ca%20dating | Potassium–calcium dating, abbreviated K–Ca dating, is a radiometric dating method used in geochronology. It is based upon measuring the ratio of a parent isotope of potassium (⁴⁰K) to a daughter isotope of calcium (⁴⁰Ca). This form of radioactive decay is accomplished through beta decay.
Calcium is common in many minerals, with ⁴⁰Ca being the most abundant naturally occurring isotope of calcium (96.94%), so use of this dating method to determine the ratio of daughter calcium produced from parent potassium is generally not practical. However, recent advancements in mass spectrometric techniques [e.g., thermal ionization mass spectrometry (TIMS) and collision-cell inductively-coupled plasma mass spectrometry (CC-ICP-MS)] are allowing radiogenic Ca isotope variations to be measured at unprecedented precision in an increasing variety of materials, including high-Ca minerals (e.g., plagioclase, garnet, clinopyroxene) and aqueous (e.g., seawater and riverine) samples. In earlier studies, this technique was especially useful in minerals with low calcium contents (under 1/50th of the potassium content) so that radiogenic ingrowth of ⁴⁰Ca could be more easily quantified. Examples of such minerals include lepidolite, potassium feldspar, and late-formed muscovite or biotite from pegmatites (preferably older than ). This method is also useful for zircon-poor, felsic-to-intermediate igneous rocks, various metamorphic rocks, and evaporite minerals (e.g. sylvite).
Method
Potassium has three naturally occurring isotopes: stable ³⁹K and ⁴¹K, and radioactive ⁴⁰K. ⁴⁰K exhibits dual decay: through β-decay (E = 1.33 MeV), 89% of ⁴⁰K decays to ⁴⁰Ca, and the rest decays to ⁴⁰Ar via electron capture (E = 1.46 MeV). While ⁴⁰K comprises only 0.001167% of total potassium mass, ⁴⁰Ca makes up 96.9821% of total calcium mass; thus, ⁴⁰K decay enriches ⁴⁰Ca more than any other calcium isotope. The decay constant for the decay to ⁴⁰Ca is denoted as λβ and equals 4.962 × 10⁻¹⁰ yr⁻¹; the decay constant to ⁴⁰Ar is denoted as λEC and equals 0.581 × 10⁻¹⁰ yr⁻¹.
The general equation for the decay time of a radioactive nucleus that decays to a single product is:

$$ t = \frac{1}{\lambda}\,\ln\frac{N_0}{N} = \frac{t_{1/2}}{\ln 2}\,\ln\frac{N_0}{N} $$

Where λ is the decay constant, t₁/₂ is the half-life, N₀ is the initial concentration of the parent isotope, and N is the final concentration of the parent isotope.
Similarly, the equation for the decay time of a radioactive nucleus that decays to more than one product is:

$$ t = \frac{1}{\lambda_t}\,\ln\!\left(1 + \frac{\lambda_t}{\lambda_a}\,\frac{D_a}{N}\right) $$

Where a is the daughter product of interest, Dₐ is its measured radiogenic abundance, λₐ is the decay constant for daughter product a, and λₜ is the sum of decay constants for daughter products a and b.
This approach is taken in potassium–calcium dating, where argon and calcium are both products of ⁴⁰K decay, and the radiogenic ingrowth of calcium can be expressed as:

$$ {}^{40}\mathrm{Ca}^{*} = \frac{\lambda_\beta}{\lambda_\beta + \lambda_{EC}}\; {}^{40}\mathrm{K}_0 \left(1 - e^{-(\lambda_\beta + \lambda_{EC})t}\right) $$

Where ⁴⁰Ca* is the measured amount of radiogenic ⁴⁰Ca in terms of parent isotope ⁴⁰K, and ⁴⁰K₀ is the initial concentration of ⁴⁰K.
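Because ⁴⁰K₀ relates to the present-day ⁴⁰K content through ⁴⁰K = ⁴⁰K₀ e^(−λₜt), the expression above can be inverted for t from a measured radiogenic ⁴⁰Ca/⁴⁰K ratio. A minimal numerical sketch follows (not part of the source article; the measured ratio is hypothetical, while the decay constants are the standard values quoted above):

import math

# Standard 40K decay constants (yr^-1), as quoted in the Method section
LAMBDA_BETA = 4.962e-10   # branch to 40Ca
LAMBDA_EC = 0.581e-10     # branch to 40Ar
LAMBDA_TOTAL = LAMBDA_BETA + LAMBDA_EC

def k_ca_age(ca40_radiogenic_over_k40):
    """Age in years from the measured radiogenic 40Ca / present-day 40K ratio."""
    return (1.0 / LAMBDA_TOTAL) * math.log(
        1.0 + (LAMBDA_TOTAL / LAMBDA_BETA) * ca40_radiogenic_over_k40
    )

# Hypothetical measurement: 0.5 radiogenic 40Ca atoms per remaining 40K atom
print(f"{k_ca_age(0.5) / 1e9:.2f} Gyr")  # ~0.80 Gyr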
Age equation
Age determination using potassium–calcium dating is best done using the isochron technique. An isochron was constructed for Pikes Peak in Colorado, and a K/Ca age was obtained for the granites in the area. Rb–Sr dating of the same batholith gave consistent results, supporting the practicality of this method of dating. For comparison, the isochron method uses non-radiogenic ⁴²Ca to develop an isochron.
The following equation is used in the construction of the isochron plot:

$$ \frac{{}^{40}\mathrm{Ca}}{{}^{42}\mathrm{Ca}} = \left(\frac{{}^{40}\mathrm{Ca}}{{}^{42}\mathrm{Ca}}\right)_0 + \xi\left(e^{\lambda t} - 1\right)\frac{{}^{40}\mathrm{K}}{{}^{42}\mathrm{Ca}} $$

t is the time elapsed
λ is the total ⁴⁰K decay constant
ξ is the branching ratio (= λβ / λtotal) = 0.8952
(⁴⁰Ca/⁴²Ca)₀ is the initial ⁴⁰Ca/⁴²Ca isotope ratio
⁴⁰Ca/⁴²Ca is the measured ⁴⁰Ca/⁴²Ca isotope ratio
⁴⁰K/⁴²Ca is the ⁴⁰K/⁴²Ca isotope ratio
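On such a plot, cogenetic samples fall on a line of slope ξ(e^(λt) − 1), so fitting the line and inverting for t gives the age. A short Python sketch with invented sample values (the data points and the recovered ~28 Myr age are purely illustrative):

import numpy as np

LAMBDA_TOTAL = 5.543e-10  # total 40K decay constant, yr^-1
XI = 0.8952               # branching ratio to 40Ca

# Hypothetical cogenetic mineral separates: (40K/42Ca, 40Ca/42Ca)
k_ca = np.array([0.0, 50.0, 120.0, 300.0])
ca_ca = np.array([151.0, 151.7, 152.7, 155.2])

slope, intercept = np.polyfit(k_ca, ca_ca, 1)
age = np.log(1.0 + slope / XI) / LAMBDA_TOTAL
print(f"initial ratio = {intercept:.1f}, age = {age/1e6:.0f} Myr")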
Applications
Chronological applications
This technique's primary application is towards determining the crystallization age of minerals or rocks enriched in potassium and depleted in calcium. Due to the long half-life of ⁴⁰K (~1.25 billion years), K–Ca dating is most useful on samples older than 100,000 years. Given that the chosen sample has a relatively high present-day K/Ca ratio, and that the initial concentration of ⁴⁰Ca can be determined, any error in this initial concentration can be considered negligible when determining the sample's age.
K–Ca dating is not a common radioactive dating method for metamorphic rocks. However, this system is considered more stable than both the K-Ar and Rb-Sr dating methods. This fact, combined with advances in precision of mass spectrometry, makes K–Ca dating a viable option for igneous and metamorphic rocks containing little to no zircon.
Potassium–calcium dating is especially useful for diagenetic minerals and marine sediments, which are both assumed to have had the same initial calcium isotopic composition as Earth's seawater at the time of their formation. As such, being able to assume a constant initial calcium isotope ratio, this dating method proves particularly fruitful for these samples.
Non-chronological applications
Aside from radioactive dating, the K–Ca system is the only isotopic system capable of detecting elemental signatures in magmatic processes. Normalizing the ⁴⁰Ca/⁴⁴Ca ratio to non-radiogenic isotopes (⁴²Ca/⁴⁴Ca), it was found that the isotopic composition of calcium was similar across meteorites, lunar samples, and Earth's mantle.
Advantages & disadvantages
Disadvantages
The primary disadvantage of K–Ca dating is the abundance of calcium in most minerals; this dating method cannot be used on minerals with a high pre-existing calcium content, as the radiogenically added calcium will increase calcium abundance in the sample only very slightly. As such, K–Ca dating is effective only in circumstances where K/Ca > 50 (in a potassium-enriched, calcium-depleted sample). Examples of such minerals include lepidolite, potassium feldspar, and late-formed muscovite or biotite from pegmatites (preferably older than ). This method is also useful for zircon-poor, felsic-to-intermediate igneous rocks, various metamorphic rocks, and evaporite minerals (e.g. sylvite).
Another disadvantage of K–Ca dating is that the isotopic composition of calcium is difficult to determine using mass spectrometry. Calcium is not easily ionized using a thermionic source, and tends to fractionate isotopically during ionization. As such, this dating method does not yield satisfactory results unless performed with extremely high precision. Until recently, K–Ca dating was not considered useful for samples younger than the Precambrian unless they had extremely depleted calcium contents relative to potassium.
Advantages
However, if used effectively on the aforementioned minerals, the K–Ca dating method provides high-precision ages comparable to those of other isotopic dating methods. Used with high precision, it is also comparatively effective at constraining major-element abundances for crustal magma sources.
See also
K–Ar dating
Rb-Sr dating
References
Radiometric dating | K–Ca dating | [
"Chemistry"
] | 1,463 | [
"Radiometric dating",
"Radioactivity"
] |
70,345,105 | https://en.wikipedia.org/wiki/Internal%20wave%20breaking | Internal wave breaking is a process during which internal gravity waves attain a large amplitude compared to their length scale, become nonlinearly unstable and finally break. This process is accompanied by turbulent dissipation and mixing. As internal gravity waves carry energy and momentum from the environment of their inception, breaking and subsequent turbulent mixing affects the fluid characteristics in locations of breaking. Consequently, internal wave breaking influences even the large scale flows and composition in both the ocean and the atmosphere. In the atmosphere, momentum deposition by internal wave breaking plays a key role in atmospheric phenomena such as the Quasi-Biennial Oscillation and the Brewer-Dobson Circulation. In the deep ocean, mixing induced by internal wave breaking is an important driver of the meridional overturning circulation. On smaller scales, breaking-induced mixing is important for sediment transport and for nutrient supply to the photic zone. Most breaking of oceanic internal waves occurs over continental shelves, well below the ocean surface, which makes it a difficult phenomenon to observe.
The contribution of breaking internal waves to many atmospheric and ocean processes makes it important to parametrize their effects in weather and climate models.
Breaking mechanisms
Similar to what happens to surface gravity waves near a coastline, when internal waves enter shallow waters and encounter steep topography, they steepen and grow in amplitude in a nonlinear process known as shoaling. As the wave travels over topography with increasing height, bed friction leads to internal waves becoming asymmetrical with an increasing steepness. These nonlinear internal waves on a shallow slope are generally referred to as internal bores. Wave height and energy increase until a critical steepness is reached, whereafter the wave breaks by convective, Kelvin-Helmholtz or parametric subharmonic instability. Due to the relatively small density differences (and thus small restoring forces) over the ocean depth, ocean internal waves may reach amplitudes up to around 100 m. Analogous to surface wave breaking in the region known as the surf zone, internal breaking waves dissipate energy in what is known as the internal surf zone.
Internal tide breaking
Internal tidal waves are internal waves at tidal frequency in the ocean, which are generated by the interaction of the tide with the ocean topography. Alongside internal inertial waves, they constitute the majority of the ocean internal wavefield. The internal tides consist of so-called low modes and high modes with varying vertical wavelengths. As these waves propagate, the high modes tend to dissipate their energy quickly, leaving the low modes to dominate farther from the location of their generation. Low-mode internal waves, with wavelengths exceeding 100 km, generated by either tides or winds acting on the sea surface, can travel thousands of kilometers from their regions of generation, where they will eventually encounter sloping topography and break. When this happens, isopycnals become steeper and steeper, with the wavefront followed by a sharp temperature drop. This leads to an unstable density profile that eventually overturns and breaks. The magnitude of the topographic slope and the slope of the internal wave beam dictate where internal waves break.
The slope of an internal wave beam (s) can be expressed as the ratio between its horizontal (k) and vertical (m) wavenumbers:

$$ s = \frac{k}{m} = \sqrt{\frac{\omega^2 - f^2}{N^2 - \omega^2}} $$

where N is the buoyancy frequency (or Brunt–Väisälä frequency), f is the Coriolis frequency and ω is the wave frequency in the dispersion relation that governs the propagation of internal waves in a continuously stratified and rotating medium:

$$ \omega^2 = \frac{N^2 k^2 + f^2 m^2}{k^2 + m^2}. $$
In the case that the slope of a downgoing incident internal wave beam is larger than the topographic slope (supercritical slope), waves will be reflected downward. In the case that the slope of a downgoing incident internal wave beam is smaller than the topographic slope (subcritical slope), however, waves will be reflected upward with reduced wavelength and lower group velocity. Because the energy flux is conserved during reflection, energy density and therefore wave amplitude in the reflected wave must increase with respect to the incident wave. This increase in amplitude and wave steepness results in the waves being subject to breaking. These effects are increased the closer the slope of the internal wave beam is to the magnitude of the topographic slope. When the slope of the beam of the incoming internal wave is equal to the topographic slope, the slope of the topography is referred to as the critical slope. Critical slopes and near-critical slopes are important locations for both wave breaking and wave generation via tide-topography interactions.
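To make the criticality condition concrete, the sketch below (all numerical values illustrative, not taken from the source) evaluates the beam slope for a semidiurnal internal tide and compares it with a topographic slope:

import math

def beam_slope(omega, f, N):
    """Internal-wave beam slope s = sqrt((omega^2 - f^2) / (N^2 - omega^2))."""
    return math.sqrt((omega**2 - f**2) / (N**2 - omega**2))

# Illustrative values: M2 tidal frequency, mid-latitude Coriolis frequency,
# and a typical open-ocean buoyancy frequency (all in rad/s)
omega = 1.41e-4
f = 1.0e-4
N = 2.0e-3

s_wave = beam_slope(omega, f, N)
s_topo = 0.10  # hypothetical topographic slope
regime = "supercritical" if s_topo > s_wave else "subcritical"
print(f"beam slope = {s_wave:.3f}; topography is {regime}")
# beam slope = 0.050; the 0.10 slope is supercritical, reflecting the beam downward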
Internal solitary wave breaking
Owing to the generally long distances traveled by internal tidal waves, they may steepen and form trains of internal solitary waves, or internal solitons. These internal solitons have much shorter wavelengths, on the order of hundreds of meters, making them much steeper than internal tides. The ratio of the topographic slope to the wave steepness can be characterized by the internal Iribarren number:

$$ \xi = \frac{s}{\sqrt{a/\lambda}} $$

where s is the topographical slope, a the internal wave amplitude and λ the wavelength of the internal wave. The internal Iribarren number can be used to classify internal bores into two categories: canonical bores and non-canonical bores. For a gentle slope, as is typical for the continental shelf and nearshore areas, the internal Iribarren number is low, such that canonical bores occur. In this case, an incoming internal solitary wave can convert to a packet of solitary waves or boluses as it travels up the slope in a process referred to as fission. This is also called a fission breaker. Canonical bores are generally accompanied by an intense drop in temperature as the wavefront passes by, followed by a gradual increase over time.
In rarer cases, non-canonical bores may occur. In these cases, for an increasing internal Iribarren number (that is, steeper waves or a steeper topographic slope), wave breaking can be classified successively as surging, collapsing and plunging breakers (see Breaking wave). Contrary to canonical bores, temperature gradually decreases as the wavefront passes by, followed by a sharp increase in temperature. Due to the steeper topographic slopes associated with non-canonical bores, a larger part of the wave energy is reflected back, meaning there is less turbulent energy available for mixing.
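A schematic implementation of this classification might look as follows; note that the numeric thresholds separating the regimes are placeholders chosen for illustration, since the text does not give them:

def classify_internal_bore(slope, amplitude, wavelength):
    """Rough breaker-type classification by internal Iribarren number.

    The 0.3 and 1.0 thresholds are illustrative placeholders, not values
    taken from the source text.
    """
    xi = slope / (amplitude / wavelength) ** 0.5
    if xi < 0.3:
        return f"xi={xi:.2f}: canonical bore (fission likely)"
    elif xi < 1.0:
        return f"xi={xi:.2f}: surging/collapsing breaker"
    return f"xi={xi:.2f}: plunging breaker"

# Hypothetical shoaling soliton: 15 m amplitude, 300 m wavelength, 2% slope
print(classify_internal_bore(0.02, 15.0, 300.0))  # xi ~ 0.09 -> canonical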
Mixing
Breaking internal waves are regarded as playing an important role in the mixing of the ocean, based on lab experiments and remote sensing. The effect of internal waves on mixing is also studied extensively in direct numerical simulations. Even though research indicates that internal wave breaking is important for local turbulence, there remains uncertainty in global estimates.
Breaking internal tidal waves can result in turbulent water columns several hundred meters high, and the turbulent kinetic energy may reach levels up to 10,000 times higher than in the open ocean.
Quantifying mixing efficiency
The intensity of the turbulence caused by breaking internal waves depends mainly on the ratio between topographical steepness and the wave steepness, known as the internal Iribarren number. A smaller internal Iribarren number correlates with a larger intensity of the resulting turbulence due to internal wave breaking. That means that a small internal Iribarren number predicts that a lot of the wave energy will be transferred to mixing and turbulence, while a large internal Iribarren number predicts that the wave energy will reflect offshore.
Studies express the mixing efficiency as the ratio between the total amount of mixing and the total irreversible energy loss. In other words, the mixing efficiency can generally be defined as the following ratio:

$$ \gamma = \frac{\Delta PE_b}{\Delta E_{tot}}, $$

where γ is the mixing efficiency, ΔPE_b the change in background potential energy due to mixing and ΔE_tot the total energy expended. Because ΔPE_b and ΔE_tot are not directly observable, studies use different definitions to determine the mixing efficiency.
It is notoriously hard to estimate the mixing efficiency in the ocean, due to practical limitations in measuring ocean dynamics. Besides measurements of ocean dynamics, the mixing efficiency can also be obtained from lab experiments and numerical simulations, but they also have their limitations. Therefore, these three different approaches have slightly different definitions of mixing efficiency. In theory these three approaches should give the same estimates for the mixing efficiency, but there remain discrepancies between them. Therefore, there are varying estimates and disagreements on mixing efficiency and comparisons are difficult due to the different definitions.
Studies that quantify the mixing properties of breaking internal solitary waves report estimates of the mixing efficiency between 5% and 25% for laboratory experiments and between 13% and 21% for numerical simulations, depending on the internal Iribarren number.
Mass and sediment transport
Breaking and shoaling of internal waves have been shown to cause the transport of mass and energy in the form of sediment and heat, but also of nutrients, plankton and other forms of marine life.
Sediment transport
Wave breaking causes mass and sediment transport that is important for ocean biology and the shaping of the continental shelves through erosion. The erosion caused by internal wave breaking can result in sediment being suspended and transported off-shore. This off-shore sediment transport may give rise to the emergence of nepheloid layers, which are in turn important for ocean biology. Direct numerical simulations show that breaking internal waves are also responsible for on-shore sediment transport, after which sediment can be deposited or transported elsewhere.
Although many studies show that internal wave breaking leads to sediment transport, their traces in the geologic record remain uncertain. Their sedimentary structures may coexist in turbidites on continental slopes and canyons.
Transport of nutrients
The mixing and transport of nutrients in the ocean is affected largely by internal wave breaking. The arrival of internal tidal bores has been shown to cause a 10- to 40-fold increase of nutrients on Conch Reef. Here it has been shown that the appearance of internal bores provides a predictable and periodic source of transport that can be important for a diversity of marine life. Large-amplitude internal tidal waves can cause sediments to be resuspended for as long as 5 hours per tidal wave, and internal bores have been shown to play a vital role in the onshore transportation of planktonic larvae.
Internal wave breaking may also cause ecological hazards, such as red tides and low dissolved oxygen levels.
References
Water waves | Internal wave breaking | [
"Physics",
"Chemistry"
] | 2,040 | [
"Water waves",
"Waves",
"Physical phenomena",
"Fluid dynamics"
] |
70,346,758 | https://en.wikipedia.org/wiki/Sea%20surface%20skin%20temperature | The sea surface skin temperature (SSTskin), or ocean skin temperature, is the temperature of the sea surface as determined through its infrared spectrum (3.7–12 μm) and represents the temperature of the sublayer of water at a depth of 10–20 μm. High-resolution data of skin temperature gained by satellites in passive infrared measurements is a crucial constituent in determining the sea surface temperature (SST).
Since the skin layer is in radiative equilibrium with the atmosphere and the sun, its temperature undergoes a daily cycle. Even small changes in the skin temperature can lead to large changes in atmospheric circulation. This makes skin temperature a widely used quantity in weather forecasting and climate science.
Remote Sensing
Large-scale sea surface skin temperature measurements started with the use of satellites in remote sensing. The underlying principle of this kind of measurement is to determine the surface temperature via its black body spectrum. Different measurement devices are installed, each measuring a different wavelength. Every wavelength corresponds to a different sublayer in the upper 500 μm of the ocean water column. Since this layer shows a strong temperature gradient, the observed temperature depends on the wavelength used. Therefore, the measurements are often indicated with their wavelength band instead of their depths.
History
First satellite measurements of the sea surface were conducted as early as 1964 by Nimbus-I. Further satellites were deployed in 1966 and the early 1970s. Early measurements suffered from contamination by atmospheric disturbances. The first satellite to carry a sensor operating on multiple infrared bands was launched late in 1978, which enabled atmospheric correction. This class of sensors is called Advanced very-high-resolution radiometers (AVHRR) and provides information that is also relevant for the tracking of clouds. The current, third-generation instrument features six channels at wavelength ranges important for cloud observation, cloud/snow differentiation, surface temperature observation and atmospheric correction. The modern satellite array is able to give global coverage with a resolution of 10 km every ~6 h.
Conversion to SST
Sea surface skin temperature measurements are complemented by SSTsubskin measurements in the microwave regime to estimate the sea surface temperature. These measurements have the advantage of being independent of cloud cover and are subject to less variation. The conversion to SST is done via elaborate retrieval algorithms. These algorithms take additional information like the current wind, cloud cover, precipitation and water vapor content into account and model the heat transfer between the layers. The determined SST is validated by in-situ measurements from ships, buoys and profilers. On average, the skin temperature is estimated to be systematically cooler by 0.15 ± 0.1 K compared to the temperature at 5 m depth.
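Operational retrievals often use split-window regressions of this general shape. The Python sketch below shows the form of a multi-channel (split-window) SST estimate; the regression coefficients are invented for illustration and are not those of any operational product:

import math

def mcsst(t11, t12, zenith_deg):
    """Split-window SST estimate in degrees Celsius (coefficients illustrative).

    t11, t12: brightness temperatures (K) in the 11 and 12 micron channels.
    zenith_deg: satellite viewing zenith angle in degrees.
    """
    a, b, c, d = 1.02, 2.50, 0.75, -276.0  # placeholder regression coefficients
    sec_theta = 1.0 / math.cos(math.radians(zenith_deg))
    return a * t11 + b * (t11 - t12) + c * (t11 - t12) * (sec_theta - 1.0) + d

# Hypothetical scene: T11 = 290.1 K, T12 = 289.2 K, 30 degree viewing angle
print(f"SST estimate ~ {mcsst(290.1, 289.2, 30.0):.1f} degC")  # ~22.3 degC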
Vertical temperature profile of the sea surface
The vertical temperature profile of the surface layer of the ocean is determined by different heat transport processes. At the very interface, the ocean is in thermal equilibrium with the atmosphere, and heat transfer is dominated by conduction and diffusion. Evaporation also takes place at the interface and thus cools the skin layer. Below the skin layer lies the subskin layer, defined as the layer where molecular and viscous heat transfer dominates. At larger scales, in the much bigger foundation layer, turbulent heat transport through eddies contributes most to the vertical heat transfer.
During the day, there is additional heating by the sun. The solar radiation entering the ocean heats the surface layers following the Beer–Lambert law. Approximately five percent of the incoming radiation is absorbed in the upper 1 mm of the ocean. Since the heating from above leads to a stable stratification, other processes dominate the heat transport, depending on the considered scale.
Regarding the skin layer with thickness δ, the turbulent diffusion term is negligible. For the stationary case without external heating, the vertical temperature profile obeys the following energy budget:

$$ \rho\, c_p\, \kappa\, \frac{\partial T}{\partial z} = Q $$

Here, ρ and c_p denote the density and heat capacity of water, κ the molecular thermal diffusivity and ∂T/∂z the vertical partial derivative of the temperature. The heat flux Q consists of latent heat release, sensible heat fluxes and the net longwave thermal radiation. The observed ∂T/∂z in the skin layer is positive, which corresponds to a temperature increasing with depth (note that the z-axis points downward into the ocean). This leads to a cool skin layer. A common empirical description of the vertical temperature profile within the skin layer of depth δ is:

$$ T(z) = T_0 + \left(T_\delta - T_0\right)\frac{z}{\delta} $$

Here, T₀ and T_δ denote the temperature of the surface and the lower boundary. When including the diurnal heating, an additional heating term, depending on the absorbed shortwave radiation, must be included. Integrating over z, the temperature at depth z can be expressed as:

$$ T(z) = T_0 + \frac{1}{\rho\, c_p\, \kappa} \int_0^z \left[\, Q - R_s\, f(z') \,\right] \mathrm{d}z' $$

where R_s is the net shortwave solar radiation at the ocean interface and f(z) is its fraction absorbed down to depth z. The diurnal heating reduces the cool skin effect. The maximum temperature can be found in the subskin layer, where the external heating per depth is lower than in the skin layer, but where the surface cooling has a smaller effect. With further increasing depth, the temperature declines, as the proportional heating is smaller and the layer is mixed via turbulent processes.
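For the night-time case without solar heating, the budget above gives a simple estimate of the temperature drop across the skin layer, ΔT = Qδ/(ρ c_p κ). A minimal sketch, with an assumed heat loss and layer thickness:

RHO = 1025.0        # seawater density, kg m^-3
CP = 4000.0         # specific heat capacity, J kg^-1 K^-1
KAPPA = 1.4e-7      # molecular thermal diffusivity, m^2 s^-1

def cool_skin_offset(q_net, delta=5e-4):
    """Temperature drop across a skin layer of thickness delta (m) for a net
    upward heat flux q_net (W m^-2), ignoring solar heating."""
    return q_net * delta / (RHO * CP * KAPPA)

# Hypothetical night-time heat loss of 170 W m^-2 over a 0.5 mm skin layer
print(f"cool-skin dT ~ {cool_skin_offset(170.0):.2f} K")  # ~0.15 K

The result is consistent with the 0.15 K mean skin-to-bulk difference quoted above.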
Variation of skin temperature
Daily cycle
The ocean skin temperature is defined as the temperature of the water at 20 μm depth. This means that the SSTskin is very dependent on the heat flux from the ocean to the atmosphere. This results in diurnal warming of the sea surface: high temperatures occur during the day and low temperatures during the night (especially with clear skies and low wind speed conditions).
Because the SSTskin can be measured by satellites and is the temperature almost at the interface of the ocean and the atmosphere, it is a very useful measure to find the heat flux from the ocean. The increased heat flux due to diurnal warming can reach as high as 50-60 W/m2 and has a temporal mean of 10 W/m2. These amounts of heat flux cannot be neglected in atmospheric processes.
Wind and interaction with the atmosphere
The sea surface temperature is also highly dependent on wind and waves. Both processes cause mixing and therefore cooling or heating of the SSTskin. For example, when rough seas occur during the day, colder water from lower layers is mixed with the ocean skin. When gravity waves are present at the sea surface, there is a modulation of ocean skin temperature. In this modulation, the wind plays an important role. The magnitude of this modulation depends on wind speed; the phase is determined by the direction of the wind relative to the waves. When the wind and wave direction are similar, maximum temperatures occur on the forward side of the wave, and when the wind blows from the opposite side compared to the waves, maximum temperatures are found at the rear face of the wave.
Interaction with marine lifeforms
On a global scale, skin temperature is an indicator of plankton concentrations. In areas where a relatively cold SSTskin is measured, abundance of phytoplankton is high. This effect is caused by the rise of cold, nutrient-rich water from the sea bottom in these regions. This increase in nutrients causes phytoplankton to thrive. On the other hand, relatively high SSTskin is an indication of higher zooplankton concentrations. These plankton depend on organic matter to thrive and higher temperatures increase production.
On more local scales, surface accumulations of cyanobacteria can cause local increases in SSTskin by up to 1.5 degrees Celsius. Cyanobacteria are bacteria that photosynthesize and therefore chlorophyll is present in these bacteria. This increased chlorophyll concentration causes more absorption of incoming radiation. This increased absorption causes the temperature of the sea surface to rise. This increased temperature is most likely only apparent in the first meter and definitely only in the first five meters, after which no increased temperatures are measured.
See also
Sea surface temperature
Remote sensing
Remote sensing (oceanography)
Thermal radiation
Skin temperature of an atmosphere
Sea surface interface temperature
Sea surface subskin temperature
Group for High Resolution Sea Surface Temperature (GHRSST)
Weather modification
References
Oceans
Temperature | Sea surface skin temperature | [
"Physics",
"Chemistry"
] | 1,637 | [
"Scalar physical quantities",
"Temperature",
"Thermodynamic properties",
"Physical quantities",
"SI base quantities",
"Intensive quantities",
"Thermodynamics",
"Wikipedia categories named after physical quantities"
] |
70,347,258 | https://en.wikipedia.org/wiki/Eddy%20pumping | Eddy pumping is a component of mesoscale eddy-induced vertical motion in the ocean. It is a physical mechanism through which vertical motion is created from variations in an eddy's rotational strength. Cyclonic (Anticyclonic) eddies lead primarily to upwelling (downwelling). It is a key mechanism driving biological and biogeochemical processes in the ocean such as algal blooms and the carbon cycle.
The mechanism
Eddies have a re-stratifying effect, which means they tend to organise the water in layers of different density. These layers are separated by surfaces called isopycnals. The re-stratification of the mixed layer is strongest in regions with large horizontal density gradients, known also as “fronts”, where the geostrophic shear and potential energy provide an energy source from which baroclinic and symmetric instabilities can grow. Below the mixed layer, a region of rapid density change (or pycnocline) separates the upper and lower water, hindering vertical transport.
Eddy pumping is a component of mesoscale eddy-induced vertical motion. Such vertical motion is caused by the deformation of the pycnocline. It can be conceptualised by assuming that ocean water has a density surface with mean depth averaged over time and space. This surface separates the upper ocean, corresponding to the euphotic zone, from the lower, deep ocean. When an eddy transits through, this density surface is deformed. Depending on the phase of the eddy's lifespan, this creates vertical perturbations in different directions. Eddy lifespans are divided into formation, evolution and destruction. Eddy-pumping perturbations are of three types:
Cyclones
Anticyclones
Mode-water eddies
Eddy-centric approach
Mode-water eddies have a complex density structure. Due to their shape, they cannot be distinguished from regular anticyclones in an eddy-centric (focused on the core of the eddy) analysis based on sea level height. Nonetheless, eddy pumping-induced vertical motion in the euphotic zone of mode-water eddies is comparable to cyclones. For this reason, only the cyclonic and anticyclonic mechanisms of eddy-pumping perturbations are explained.
Conceptual explanation based on sea-surface level
An intuitive description of this mechanism is what is defined as eddy-centric analysis based on sea-surface level. In the Northern hemisphere, anticlockwise rotation in cyclonic eddies creates a divergence of horizontal surface currents due to the Coriolis effect, leading to a depressed water surface. To compensate for the inhomogeneity of surface elevation, isopycnal surfaces are uplifted toward the euphotic zone, and incorporation of deep-ocean, nutrient-rich waters can occur.
Physical explanation
Conceptually, eddy pumping associates the vertical motion in the interior of eddies with temporal changes in eddy relative vorticity. The vertical motion created by the change in vorticity is understood from the characteristics of the water contained in the core of the eddy. Cyclonic eddies rotate anticlockwise (clockwise) in the Northern (Southern) hemisphere and have a cold core. Anticyclonic eddies rotate clockwise (anticlockwise) in the Northern (Southern) hemisphere and have a warm core. The temperature and salinity difference between the eddy core and the surrounding waters is the key element driving vertical motion. While propagating in the horizontal direction, cyclones and anticyclones "bend" the pycnocline upwards and downwards, respectively, driven by this temperature and salinity discrepancy. The extent of the vertical perturbation of the density surface inside the eddy (compared to the mean ocean density surface) is determined by the changes in rotational strength (relative vorticity) of the eddy.
Ignoring horizontal advection in the density conservation equation, the density changes due to changes in vorticity can be directly related to vertical transport. This assumption is coherent with the idea of vertical motion occurring at the eddy centre, in correspondence to variations of a perfectly circular flow.
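A minimal way to write this down (a sketch consistent with the text, not an equation taken from it): starting from density conservation Dρ/Dt = 0 and dropping the horizontal advection terms leaves

$$ \frac{\partial \rho}{\partial t} + w\,\frac{\partial \rho}{\partial z} = 0 \qquad\Longrightarrow\qquad w = -\,\frac{\partial \rho / \partial t}{\partial \rho / \partial z}, $$

so the local density change produced by a strengthening or weakening eddy maps directly onto a vertical velocity w at the eddy centre.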
Through this mechanism, eddy pumping generates upwelling of cold, nutrient-rich deep waters in cyclonic eddies and downwelling of warm, nutrient-poor surface water in anticyclonic eddies.
Dependency on the phase of lifespan
Eddies weaken over time due to kinetic energy dissipation. As eddies form and intensify, the mechanisms mentioned above strengthen and, as an increase in relative vorticity generates perturbations of the isopycnal surfaces, the pycnocline deforms. On the other hand, when eddies have aged and carry low kinetic energy, their vorticity diminishes, leading to eddy destruction. This process opposes eddy formation and intensification, as the pycnocline returns to its original position prior to the eddy-induced deformation. This means that the pycnocline will uplift in anticyclones and deepen in cyclones, leading to upwelling and downwelling, respectively.
Eddy pumping characteristics
The direction of vertical motion in cyclonic and anticyclonic eddies is independent of the hemisphere. Observed vertical velocities of eddy pumping are on the order of one meter per day. However, there are regional differences. In regions where kinetic energy is higher, such as in the western boundary currents, eddies are found to generate stronger vertical currents than eddies in the open ocean.
Limitations
When describing vertical motion in eddies it is important to note that eddy pumping is only one component of a complex mechanism. Another important factor to take into account, especially when considering ocean-wind interaction, is the role played by eddy-induced Ekman pumping. Some other limitations of the explanation above are due to the idealised, quasi-circular linear dynamical response to perturbations, which neglects the vertical displacement that a particle can experience moving along a sloping neutral surface. Vertical motion in eddies is a fairly recent research topic that still presents limitations in theory, due both to complexity and to the lack of sufficient observations. Nonetheless, the description presented above is a simplification that helps explain in part the important role that eddies play in biological productivity, as well as their biogeochemical role in the carbon cycle.
Biological impact
Recent findings suggest that mesoscale eddies are likely to play a key role in nutrient transport, such as in the spatial distribution of chlorophyll concentration, in the open ocean. Knowledge of the impact of eddy activity is however still limited, as eddies' contribution has been argued to be insufficient to maintain the observed primary production through nitrogen supply in parts of the subtropical gyre. Although the mechanisms through which eddies shape ecosystems are not yet fully understood, eddies transport nutrients through a combination of horizontal and vertical processes. Stirring and trapping relate to horizontal nutrient transport, whereas eddy pumping, eddy-induced Ekman pumping, and eddy impacts on mixed-layer depth drive vertical variations in nutrient supply. Here, the role played by eddy pumping is discussed.
Cyclonic eddy pumping drives new primary production by lifting nutrient-rich waters into the euphotic zone. Complete utilisation of the upwelled nutrients is guaranteed by two main factors. Firstly, biological uptake takes place in timescales that are much shorter than the average lifetime of eddies. Secondly, because the nutrient enhancement takes place in the eddy's interior, isolated from the surrounding waters, biomass can accumulate until upwelled nutrients are fully consumed.
Main examples
Evidence of the biological impacts of the eddy pumping mechanism is present in various publications based on observations and modelling of multiple locations worldwide. Eddy-centric chlorophyll anomalies have been observed in the Gulf Stream region and off the west coast of British Columbia (Haida eddies), as well as eddy-induced enhanced biological production in the Weddell-Scotia Confluence in the Southern Ocean, in the northern Gulf of Alaska, in the South China Sea, in the Bay of Bengal, in the Arabian Sea and in the north-western Alboran Sea, to name a few. Estimates of eddy pumping in the Sargasso Sea yielded a nitrogen flux of between 0.24 and 0.5 mol N m⁻² yr⁻¹. These quantities have been deemed sufficient to sustain a rate of new primary production consistent with estimates for this region.
On a wider ecological scale, eddy-driven variations in productivity influence the trade-off between phytoplankton larval survival and the abundance of predators. These concepts partially explain mesoscale variations in the distribution of larval bluefin tuna, sailfish, marlin, swordfish, and other species. Distributions of adult fishes have also been associated with the presence of cyclonic eddies. Particularly, higher abundances of bluefin tuna and cetaceans in the Gulf of Mexico and blue marlin in the proximity of Hawaii are linked to cyclonic eddy activities. Such spatial patterns extend to seabirds spotted in the vicinities of eddies, including great frigate birds in the Mozambique Channel and albatross, terns, and shearwaters in the South Indian Ocean.
North Atlantic Algal Bloom
The subpolar North Atlantic is an ideal basin for the formation of algal blooms or spring blooms due to the combination of abundant nutrients and intense Arctic winds that favour the mixing of waters. Blooms are important indicators of the health of a marine ecosystem.
Springtime phytoplankton blooms have been thought to be initiated by the seasonal light increase and near-surface stratification. Recent observations from the sub-polar North Atlantic experiment and biophysical models suggest that the bloom may instead result from eddy-induced stratification, taking place 20 to 30 days earlier than it would by seasonal changes alone. These findings substantially revise the understanding of spring blooms. Moreover, eddy pumping and eddy-induced Ekman pumping have been shown to dominate late-bloom and post-bloom biological fields.
Biogeochemistry
Phytoplankton absorbs carbon dioxide through photosynthesis. When such organisms die and sink to the seafloor, the carbon they absorbed gets stored in the deep ocean through what is known as the biological pump. Recent research has been investigating the role of eddy pumping and, more generally, of vertical motion in mesoscale eddies in the carbon cycle. Evidence has shown that eddy pumping-induced upwelling and downwelling may play a significant role in shaping the way that carbon is stored in the ocean. Although research in this field has only developed recently, first results show that eddies contribute less than 5% of the total annual export of phytoplankton to the ocean interior.
Plastic pollution
Eddies play an important role in the sea surface distribution of microplastics in the ocean. Due to their convergent nature, anticyclonic eddies trap and transport microplastics at the sea surface, along with nutrients, chlorophyll and zooplankton. In the North Atlantic subtropical gyre, the first direct observation of sea surface concentrations of microplastics between a cyclonic and an anticyclonic mesoscale eddy has shown an increased accumulation in the latter. Accumulation of microplastics has environmental impacts through its interaction with the biota. Initially buoyant plastic particles (between 0.01 and 1 mm) are submerged below the climatological mixed layer depth mainly due to biofouling. In regions with very low productivity, particles remain within the upper part of the mixed layer and can only sink below it if a spring bloom occurs.
See also
Algal bloom - a rapid increase or accumulation in the population of algae in freshwater or marine water systems
Baroclinic instability - fluid dynamical instability of fundamental importance in the atmosphere and ocean
Ekman pumping - Ekman Pumping is the component of Ekman transport that results in areas of downwelling due to the convergence of water
Haida Eddies - episodic, clockwise rotating ocean eddies that form during the winter off the west coast of British Columbia
Mesoscale ocean eddies - Swirling in the ocean created by its turbulent nature
Spring bloom – Strong increase in phytoplankton abundance that typically occurs in the early spring
References
Water physics | Eddy pumping | [
"Physics",
"Materials_science"
] | 2,529 | [
"Water physics",
"Condensed matter physics"
] |
70,347,574 | https://en.wikipedia.org/wiki/Exclusion%20zone%20%28physics%29 | The exclusion zone is a large stratum (typically on the order of a few microns to a millimeter) observed in pure liquid water, from which particles of other materials in suspension are repelled. It is observed next to the surface of solid materials, e.g. the walls of the container in which the liquid water is held, or solid specimens immersed in it, and also at the water/air interface. Several independent research groups have reported observations of the exclusion zone next to hydrophilic surfaces. Some research groups have reported the observation of the exclusion zone next to metal surfaces. The Exclusion zone has been observed using different techniques, e.g. birefringence, neutron radiography, nuclear magnetic resonance, and others, and it has potentially high importance in biology, and in engineering applications such as filtration and microfluidics.
Historical background
The first observations of a different behavior of water molecules close to the walls of their container date back to the late 1960s and early 1970s, when Drost-Hansen, upon reviewing many experimental articles, came to the conclusion that interfacial water shows structural differences.
In 1986 Deryagin and his colleagues observed an exclusion zone next to the walls of cells.
In 2006 the group of Gerald Pollack reported their observation of what they called an exclusion zone. They observed that the particles of colloidal and molecular solutes suspended in aqueous solution are profoundly and extensively excluded from the vicinity of various hydrophilic surfaces. The exclusion zone has been observed and characterized by several independent groups since those early observations.
Theoretical models
Since the early observations, several theoretical models have been proposed, to explain the experimental observation of the exclusion zone.
Mechanical model: Change in geometrical structure
Some researchers suggest that the exclusion zone is due to a change in the geometrical structure of water, induced by the surface of the hydrophilic (or metal) solid.
In this model, the water in the exclusion zone has a structure of hexagonal sheets, where the hydrogen atoms are positioned between oxygen atoms. Moreover, hydrogen atoms bond to the oxygen atoms lying in the layers above and below, so that in total each hydrogen forms three bonds. This structure can be considered as an intermediate between ice and water. However, the hexagonal sheet hypothesis does not account for all aspects of the exclusion zone, and it is not supported by the majority of physicists.
Quantum Electrodynamical model: quantum confinement
Another proposed model describes the molecules of the exclusion zone using quantum mechanics and quantum electrodynamics. In this model, the bulk liquid water is in a gas-like state. Then, above a certain density threshold and below a specific critical temperature, the molecules transition to another quantum state, with lower energy.
In this lower-energy, coherent state, the cloud of electrons oscillates between two quantum states: a ground state, and an excited state where one electron per molecule is almost free (the binding energy is about 0.5 eV). In this coherent state the quantum superposition has a component with coefficient 0.9 of the ground state, and a component with 0.1 of the excited state. The electrons in this quantum state oscillate between the ground state and the excited state with a certain frequency, and this oscillation creates an electromagnetic field, which is confined within the super-molecular structure, so that no radiation is observed. The molecules of the structure, together with the confined electromagnetic field, constitute in this model the exclusion zone.
References
Water
Fluid dynamics | Exclusion zone (physics) | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 719 | [
"Hydrology",
"Water",
"Chemical engineering",
"Piping",
"Fluid dynamics"
] |
63,115,295 | https://en.wikipedia.org/wiki/Imageability | Imageability is a measure of how easily a physical object, word or environment will evoke a clear mental image in the mind of any person observing it. It is used in architecture and city planning, in psycholinguistics, and in automated computer vision research. In automated image recognition, training models to connect images with concepts that have low imageability can lead to biased and harmful results.
History and components
Kevin A. Lynch first introduced the term, "imageability" in his 1960 book, The Image of the City. In the book, Lynch argues cities contain a key set of physical elements that people use to understand the environment, orient themselves inside of it, and assign it meaning.
Lynch argues the five key elements that impact the imageability of a city are Paths, Edges, Districts, Nodes, and Landmarks.
Paths: channels in which people travel. Examples: streets, sidewalks, trails, canals, railroads.
Edges: objects that form boundaries around space. Examples: walls, buildings, shoreline, curbstone, streets, and overpasses.
Districts: medium to large areas people can enter into and out of that have a common set of identifiable characteristics.
Nodes: large areas people can enter, that serve as the foci of the city, neighborhood, district, etc.
Landmarks: memorable points of reference people cannot enter into. Examples: signs, mountains and public art.
In 1914, half a century before The Image of the City was published, Paul Stern discussed a concept similar to imageability in the context of art. Stern, in Susan Langer's Reflections on Art, names the attribute describing how vividly and intensely an artistic object can be experienced "apparency".
In computer vision
Automated image recognition was developed by using machine learning to find patterns in large, annotated datasets of photographs, like ImageNet. Images in ImageNet are labelled using concepts in WordNet. Concepts that are easily expressed verbally, like "early", are seen as less "imageable" than nouns referring to physical objects like "leaf". Training AI models to associate concepts with low imageability with specific images can lead to problematic bias in image recognition algorithms. This has particularly been critiqued as it relates to the "person" category of WordNet and therefore also ImageNet. Trevor Paglen and Kate Crawford demonstrated in their essay "Excavating AI" and their art project ImageNet Roulette how this leads to photos of ordinary people being labelled by AI systems as "terrorists" or "sex offenders".
Images in datasets are often labelled as having a certain level of imageability. As described by Kaiyu Yang, Fei-Fei Li and co-authors, this is often done following criteria from Allan Paivio and collaborators' 1968 psycholinguistic study of nouns. Yang et al. write that dataset annotators tasked with labelling imageability "see a list of words and rate each word on a 1-7 scale from 'low imagery' to 'high imagery'".
To avoid biased or harmful image recognition and image generation, Yang et al. recommend not training vision recognition models on concepts with low imageability, especially when the concepts are offensive (such as sexual or racial slurs) or sensitive (their examples for this category include "orphan", "separatist", "Anglo-Saxon" and "crossover voter"). Even "safe" concepts with low imageability, like "great-niece" or "vegetarian", can lead to misleading results and should be avoided.
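As a toy illustration of such a filtering policy (all synset names, flags, ratings, and the threshold below are invented for the example, not taken from the paper):

# Hypothetical annotation records: (WordNet-style synset, sensitivity flag,
# mean imageability rating on a Paivio-style 1-7 scale)
concepts = [
    ("leaf.n.01", "safe", 6.7),
    ("great-niece.n.01", "safe", 2.1),
    ("vegetarian.n.01", "safe", 3.0),
    ("early.s.01", "safe", 1.8),
]

IMAGEABILITY_THRESHOLD = 5.0  # illustrative cutoff

def usable_for_vision(synset, status, imageability):
    """Keep a concept only if it is non-sensitive AND readily imageable."""
    return status == "safe" and imageability >= IMAGEABILITY_THRESHOLD

kept = [c for c in concepts if usable_for_vision(*c)]
print(kept)  # only 'leaf.n.01' survives the filter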
See also
Wayfinding
Mental mapping
Environmental psychology
Speech perception
Experimental psychology
Further reading
Holahan, Charles J.; Sorenson, Paul F. (1985-09-01). "The role of figural organization in city imageability: An information processing analysis". Journal of Environmental Psychology.
Smolík Filip (2019-05-21). "Imageability and Neighborhood Density Facilitate the Age of Word Acquisition in Czech". Journal of Speech, Language, and Hearing Research.
Paivio, Allan; Yuille, John C.; Madigan, Stephen A. (1968). "Concreteness, imagery, and meaningfulness values for 925 nouns". Journal of Experimental Psychology.
Hansen, Pernille; Holm, Elisabeth; Lind, Marianne; Simonsen, Hanne Gram (2012). "Name relatedness and imageability".
Richardson, John T. E. (1975). "Concreteness and Imageability". Quarterly Journal of Experimental Psychology.
Silva, Kapila Dharmasena (2015). "Developing Alternative Methods for Urban Imageability Research".
McCunn, Lindsay J.; Gifford, Robert (2018-04-01). "Spatial navigation and place imageability in sense of place". Cities.
Caplan, Jeremy B.; Madan, Christopher R. (2016-06-17). "Word Imageability Enhances Association-memory by Increasing Hippocampal Engagement". Journal of Cognitive Neuroscience.
Chmielewski S., Bochniak A., Natapov A., Wezyk P. (2020). "Introducing GEOBIA to Landscape Imageability Assessment". Remote Sensing.
References
Psychogeography
Environmental psychology
Knowledge representation | Imageability | [
"Environmental_science"
] | 1,072 | [
"Environmental social science",
"Environmental psychology"
] |
63,116,479 | https://en.wikipedia.org/wiki/Crossness%20Sewage%20Treatment%20Works | The Crossness Sewage Treatment Works is a sewage treatment plant located at Crossness in the London Borough of Bexley. It was opened in 1865 and is Europe's second largest sewage treatment works, after its counterpart Beckton Sewage Treatment Works located north of the river. Crossness treats the waste water from the Southern Outfall Sewer serving South and South East London, and is operated by Thames Water.
The treated effluent from the plant is discharged into the River Thames at the eastern end of the site.
History
As originally conceived the works comprised reservoirs covering 2.6 hectares designed to retain six hours’ flow of sewage. No sewage treatment was provided and the sewage was discharged untreated into the River Thames on the ebb tide. Following the Princess Alice disaster in 1878 a Royal Commission was appointed in 1882 to examine Metropolitan Sewage Disposal. It recommended that a precipitation process should be deployed to separate solids from the liquid and that the solids should be burned, applied to land or dumped at sea. A precipitation works using lime and iron sulphate was installed at Crossness in 1888–91. Sludge was disposed of in the Barrow Deep and later in the Black Deep in the outer Thames estuary. In the year 1912/13 the Crossness works received and treated 49,534 million gallons (225.2 million m3) of sewage, and disposed of 880,000 tons of sludge. The cost of operating the Crossness works was £44,269. In 1919/20 the corresponding figures were 41,209 million gallons (187.3 million m3), of sewage, 767,000 tons of sludge sent to sea, entailing 767 sludge vessel voyages, and the costs were £52,282.
Advanced treatment
Work began in the early 1960s to install a modern treatment plant capable of treating 450,000 cubic metres per day of sewage. The cost of the works was £9 million at 1963 prices. The plant comprised storm tanks, detritus channels, primary sedimentation, mechanical aeration, final sedimentation and sludge digestion.
Following the 1964 upgrade the works at Crossness began to produce a nitrifying effluent whereupon sulphide disappeared from the tideway; an excess of nitrate provided a safeguard against sulphide formation in the river. The practice of dumping sewage sludge at sea was banned in 1998. In that year a sludge incineration plant was commissioned. This provides 6 MW of power for use at the treatment works.
New processes
In 2010–14 the Crossness works were upgraded at a cost of £220 million, increasing capacity by 44% to reduce storm sewage flowing into the Thames during heavy rainfall. The upgrade involved the installation of new renewable energy sources including a 2.3 MW wind turbine, a thermal hydrolysis plant, an advanced digestion plant, and an odour control treatment system. The project enabled the plant to treat 13 cubic metres of sewage per second and incorporated new inlet works, primary settlement tanks, secondary biological treatment implementing the activated sludge process and final settlement tanks. It also included the installation of associated sludge thickening and odour treatment facilities.
The thermal hydrolysis plant heats sludge from the waste water treatment process to 160 °C, producing 50 per cent more biogas than the conventional anaerobic digestion process. The project included the installation of eight new primary settlement tanks where sewage is collected to remove primary sludge, passing through two 1.2 km-long culverts of 2 m diameter.
Sewage passes through a pair of new aeration lanes into twelve final settlement tanks of 40 m diameter. The activated sludge plant includes six aeration lanes of 69 m with total volume of 86,000 cubic metres and a treatment capacity of 564,000 cubic metres per day. It includes anoxic zone mixers, a fine bubble diffused aeration system and five centrifugal blowers giving an air flow of up to 21,000 cubic metres per hour. Additional sludge storage and thickening facilities store the additional sludge. The five raw sludge gravity belt thickeners have a capacity of 6,055 cubic metres per day each.
Crossness Pumping Station
The original sewage pumping station on the site of the treatment plant, constructed between 1859 and 1865 and featuring spectacular Victorian architecture, has been restored and is now open as a museum.
See also
London sewer system
References
Source
Wood, Leslie B, (1982). The Restoration of the Tidal Thames. Bristol: Adam Hilger.
Thames Water
London water infrastructure
Sewage treatment plants in the United Kingdom
Sewerage | Crossness Sewage Treatment Works | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 927 | [
"Sewerage",
"Environmental engineering",
"Water pollution"
] |
63,119,437 | https://en.wikipedia.org/wiki/Transplant%20engineering | Transplant engineering (or allograft engineering) is a variant of genetic organ engineering which comprises allograft, autograft and xenograft engineering. In allograft engineering the graft is substantially modified by altering its genetic composition. The genetic modification can be permanent or transient. The aim of modifying the allograft is usually the mitigation of immunological graft rejection.
History
Transient genetic allograft engineering was pioneered by Shaf Keshavjee and Marcelo Cypel at University Health Network in Toronto using adenoviral transduction for transgenic expression of the IL-10 gene. Permanent genetic allograft engineering was first performed by Rainer Blasczyk and Constanca Figueiredo at Hannover Medical School in Hanover using lentiviral transduction to knock down MHC expression in pigs (lung) and rats (kidney).
References
Genetic engineering
Transplantation medicine | Transplant engineering | [
"Chemistry",
"Engineering",
"Biology"
] | 193 | [
"Biological engineering",
"Bioengineering stubs",
"Biotechnology stubs",
"Genetic engineering",
"Molecular biology"
] |
63,122,178 | https://en.wikipedia.org/wiki/Cytochrome%20P450%20aromatic%20O-demethylase | Cytochrome P450 aromatic O-demethylase is a bacterial enzyme that catalyzes the demethylation of lignin and various lignols. The net reaction has the following stoichiometry, illustrated with a generic methoxy arene:
ArOCH3 + O2 + 2e− + 2H+ → ArOH + CH2O + H2O
The enzyme is notable for its promiscuity, affecting the O-demethylation of a range of substrates, including lignin.
It is a heterodimeric protein derived from the products of two genes. The component proteins are a cytochrome P450 enzyme (encoded by the gcoA gene from the family CYP255A) and a three-domain reductase (encoded by the gcoB gene) complexed with three cofactors (2Fe-2S, FAD, and NADH).
Mechanism
GcoA and GcoB form a dimer complex in solution. GcoA processes the substrate while GcoB provides the electrons to support the mixed-function oxidase. As with other P450s, monooxygenation of the substrate proceeds concomitantly with reduction of half an equivalent of O2 to water. An oxygen rebound mechanism can be assumed. GcoA positions the aromatic ring within the hydrophobic active-site cavity where the heme is located.
Structure
GcoA has a typical P450 structure: a thiolate-ligated heme next to a buried active site. GcoB, however, is unusual. Cytochrome P450s are normally complemented by either a cytochrome P450 reductase or a ferredoxin plus ferredoxin reductase, with the electrons supplied by NADH or NADPH. GcoB, by contrast, is a single polypeptide with an N-terminal ferredoxin domain followed by both an NAD(P)+-binding and an FAD-binding region.
GcoA and GcoB are closely interlinked, acting as a heterodimer in solution. The surface of GcoB has an acidic patch that must interact with the matching basic region on GcoA. The part of GcoB interacting with GcoA is assumed to lie at the intersection between the FAD-binding domain and the ferredoxin domain. To achieve this, GcoB would have to undergo some structural change, which would represent a new class of P450 systems (family N).
Potential applications
Cytochrome P450 aromatic O-demethylase assists in the partial O-demethylation of lignin. The resulting 1,2-diols are well suited for oxidative degradation via intra- and extra-diol dioxygenases. Thus O-demethylated lignins are potentially susceptible to partial depolymerization. With fewer crosslinks, the modified lignin is potentially more useful than the precursor, for example as a feedstock for fuels.
References
Cytochrome P450
EC 1.14.14
EC 1.6.2
Prokaryote genes | Cytochrome P450 aromatic O-demethylase | [
"Biology"
] | 659 | [
"Prokaryotes",
"Prokaryote genes"
] |
63,122,597 | https://en.wikipedia.org/wiki/Pyrazofurin | Pyrazofurin (pyrazomycin) is a natural product found in Streptomyces candidus, which is a nucleoside analogue related to ribavirin. It has antibiotic, antiviral and anti-cancer properties but was not successful in human clinical trials due to severe side effects. Nevertheless, it continues to be the subject of ongoing research as a potential drug of last resort, or a template for improved synthetic derivatives.
See also
Acadesine
EICAR (antiviral)
Sangivamycin
References
Antiviral drugs
Tetrahydrofurans
Pyrazolecarboxamides
Polyols | Pyrazofurin | [
"Biology"
] | 134 | [
"Antiviral drugs",
"Biocides"
] |
63,122,726 | https://en.wikipedia.org/wiki/List%20of%20megaprojects%20in%20Bangladesh | This is a list of megaprojects in Bangladesh, i.e. projects "characterized by: large investment commitment, vast complexity (especially in organizational terms), and long-lasting impact on the economy, the environment, and society". The number of such projects is so large that the list may never be fully complete. The Finance Minister of Bangladesh has recently unveiled an extensive roster of ambitious megaprojects encompassing various sectors. These projects primarily focus on the construction of hospitals, schools, colleges, and other essential infrastructure. Consequently, this development surge is expected to generate substantial demand for cement within the country.
Terms Explanation
Airports
Bridges
Road and highways
Railways
Energy projects
Ports
Defense
Buildings and Housing
Sports
Barrages
Delta Plan
Satellites
Special Economic Zone
References
Megaprojects
Megaprojects | List of megaprojects in Bangladesh | [
"Engineering"
] | 161 | [
"Megaprojects"
] |
60,983,275 | https://en.wikipedia.org/wiki/Amplicon%20sequence%20variant | An amplicon sequence variant (ASV) is any one of the inferred single DNA sequences recovered from a high-throughput analysis of marker genes. Because ASVs are created after the removal of erroneous sequences generated during PCR and sequencing, using them makes it possible to distinguish sequence variation down to a single nucleotide change. The uses of ASVs include classifying groups of species based on DNA sequences, finding biological and environmental variation, and determining ecological patterns.
ASVs were first described in 2013 by Eren and colleagues. Before that, the standard unit for marker-gene analysis had for many years been the operational taxonomic unit (OTU), which is generated by clustering sequences based on a threshold of similarity.
Compared to ASVs, OTUs reflect a coarser notion of similarity. Though there is no single threshold, the most commonly chosen value is 3%, meaning the units share 97% of the DNA sequence. ASV methods, on the other hand, can resolve sequence differences by as little as a single nucleotide change, avoiding similarity-based clustering altogether. ASVs therefore represent a finer distinction between sequences.
ASVs are also referred to as exact sequence variants (ESVs), zero-radius OTUs (ZOTUs), sub-OTUs (sOTUs), haplotypes, or oligotypes.
Uses of ASVs versus OTUs
The introduction of ASV methods was marked by a debate about their utility. Although OTUs do not provide such precise and accurate measurements of sequence variation, they remain an acceptable and valuable approach. In one research study, Glassman and Martiny confirmed the suitability of OTUs for investigating broad-scale ecological diversity: OTUs and ASVs provided similar results, with ASVs enabling a slightly stronger detection of fungal and bacterial diversity. Their work indicated that even though species diversification can be measured more accurately with ASVs, the use of OTUs in well-constructed studies is generally valid for demonstrating diversification at broad scales.
Some have argued that ASVs should replace OTUs in marker-gene analysis. Their arguments focus on the precision, tractability, reproducibility, and comprehensiveness they can bring to marker-gene analysis. For these researchers, the utility of finer sequence resolution (precision) and the advantage of being able to easily compare sequences between different studies (tractability and reproducibility) make ASVs the better option for analyzing sequence differences. By contrast, since OTUs depend on the specifics of the similarity thresholds used to generate them, the units within any OTU can vary across researchers, experiments, and databases. Thus comparison across OTU-based studies and datasets can be very challenging.
ASV methods
Popular methods for resolving ASVs include DADA2, Deblur, MED, and UNOISE. These methods work broadly by generating an error model tailored to an individual sequencing run and employing algorithms that use that model to distinguish between true biological sequences and those generated by error.
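The conceptual difference between ASV-style and OTU-style units can be sketched in a few lines of Python. This is a hypothetical toy illustration, not the algorithm of DADA2 or any other published tool; real methods additionally model run-specific error rates, which this sketch omits.

from collections import Counter

def identity(a: str, b: str) -> float:
    # Fraction of matching positions between two equal-length sequences.
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

def asv_like(reads: list) -> Counter:
    # ASV-style dereplication: every distinct sequence, even one differing
    # by a single nucleotide, is its own unit.
    return Counter(reads)

def otu_like(reads: list, threshold: float = 0.97) -> dict:
    # Greedy OTU-style clustering: seed centroids in order of abundance and
    # merge each sequence into the first centroid matched at >= threshold.
    clusters = {}
    for seq, count in asv_like(reads).most_common():
        for centroid in clusters:
            if identity(seq, centroid) >= threshold:
                clusters[centroid] += count
                break
        else:
            clusters[seq] = count
    return clusters

reads = (["ACGTACGTACGTACGTACGTACGTACGTACGTACGT"] * 5
         + ["ACGTACGTACGTACGTACGAACGTACGTACGTACGT"] * 3)  # one-nucleotide variant
print(len(asv_like(reads)))   # 2 units: the variant is resolved
print(len(otu_like(reads)))   # 1 unit: 35/36 = 97.2% identity merges the two

Published ASV methods replace the naive dereplication step with statistical denoising, so that true one-nucleotide variants are kept while sequencing errors are absorbed into their parent sequence.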
References
DNA | Amplicon sequence variant | [
"Engineering",
"Biology"
] | 637 | [
"Bioinformatics",
"Biological engineering"
] |
60,983,768 | https://en.wikipedia.org/wiki/Power%20Distribution%20Equipment%20Identification | The Power Distribution Equipment Identification (PDEID) is a unique identification label used for identifying equipment and customers of the power distribution network of Iran, in use since 1997. PDEID simplifies identifying equipment and their approximate address, updating the electrical network information, and transferring that information to computers.
Etymology
The first unique identification code for equipment was introduced in Iran's Power Distribution Network Standard in 1969. Only three types of equipment (medium- and low-voltage poles, medium- and low-voltage branching nodes, and distribution substations) were suggested to carry an identification label. In 1996, at the 6th Conference on Electrical Power Distribution Networks, an article entitled "Application of the Integrated Equipment Identification Label for Distribution Networks in the Iran" by Gholamreza Saffarpour and Ali Mamdoohi presented a method for uniquely identifying all equipment and subscribers of distribution networks. This method was selected, with minor modifications, by Tavanir in 1997 to integrate the identification of equipment and subscribers of distribution networks.
Structure of Power Distribution Equipment Identification Label
Every code in the Power Distribution Equipment Identification label consists of 12 numbers and letters. The labels are grouped into two categories: network equipment and customers.
Distribution Equipment Identification for network equipment
The equipment label contains 12 numbers and letters. The first five characters are the 5-digit postcode of the area where the equipment is located; the postcode carries the location information provided by Iran Post for the whole country. The next two characters are letters giving the equipment type-ID. The last five digits are an assigned sequence (or serial) number for the equipment within the postcode area.
The sequence or serial number used in the integrated PDEID is an arbitrary number that is unique within the postcode area. For example, the first distribution substation in the 13457 postcode should have the serial number 00001, the second substation in the same postcode area should have serial number 00002, and so on. Which equipment (substation in the above example) counts as first and which as second is entirely arbitrary.
Distribution Equipment Identification for customers (subscribers)
For customers, the Power Distribution Equipment Identification label is composed of 12 digits; as for equipment, the five leftmost digits are the 5-digit postcode of the area where the customer's electricity meter is located. The seven rightmost digits are the customer-id used in the billing system of the power distribution utilities. In some parts of Iran where the customer-id exceeds 7 digits, the PDEID has 14 digits and the nine rightmost digits contain the customer-id number.
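To make the layout concrete, the following sketch composes and splits labels according to the rules above. It is a minimal illustration; the field names, the "SS" type-ID, and the sample values are hypothetical placeholders, not taken from the official type-ID list.

from typing import NamedTuple

class EquipmentLabel(NamedTuple):
    postcode: str  # 5-digit Iran Post code of the area
    type_id: str   # 2-letter equipment type-ID (e.g. CH, CL, JN)
    serial: int    # arbitrary sequence number, unique within the postcode area

def build_equipment_pdeid(label: EquipmentLabel) -> str:
    # Compose a 12-character equipment PDEID: postcode + type-ID + serial,
    # with the type-ID between postcode and serial as in the final design.
    assert len(label.postcode) == 5 and label.postcode.isdigit()
    assert len(label.type_id) == 2 and label.type_id.isalpha()
    # I, O and Q are excluded from type-IDs to avoid confusion with 1 and 0.
    assert not set(label.type_id.upper()) & {"I", "O", "Q"}
    return f"{label.postcode}{label.type_id.upper()}{label.serial:05d}"

def parse_equipment_pdeid(code: str) -> EquipmentLabel:
    # Split a 12-character equipment PDEID back into its three fields.
    assert len(code) == 12
    return EquipmentLabel(code[:5], code[5:7], int(code[7:]))

def parse_customer_pdeid(code: str) -> tuple:
    # Customer PDEIDs are 12 digits (14 where customer-ids are longer);
    # the five leftmost digits are always the postcode.
    assert len(code) in (12, 14) and code.isdigit()
    return code[:5], code[5:]  # (postcode, customer-id)

# First distribution substation in postcode area 13457, as in the text
# ("SS" is a hypothetical type-ID standing in for a substation):
print(build_equipment_pdeid(EquipmentLabel("13457", "SS", 1)))  # 13457SS00001
print(parse_customer_pdeid("134570000042"))  # ('13457', '0000042')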
Equipment Type ID
The Power Distribution Equipment Identification scheme does not assign labels to every piece of equipment. However, by identifying and labeling 23 types of equipment, all equipment that is important for engineering calculations or information statistics can be uniquely identified. Equipment type-IDs do not include the letters I, O and Q, to avoid confusion with the numerals 1 and 0.
In assigning PDEIDs to distribution equipment, a single-line diagram is used, so in a three-phase system the insulators, cable terminations, and cable joints of all three phases take just one PDEID each.
Changes to the original design
There are three differences between the final implemented PDEID and what was proposed in the article entitled "Application of the Integrated Equipment Identification Label for Distribution Networks in the Iran":
The suggested type-ID for outdoor HV cable terminations was C1 and for indoor HV cable terminations C3; in the final PDEID the type-ID of both outdoor and indoor HV cable terminations is CH.
The suggested type-ID for outdoor LV cable terminations was C2 and for indoor LV cable terminations C4; in the final PDEID the type-ID of both outdoor and indoor LV cable terminations is CL.
The JN type-ID was added for virtual nodes.
Also, to improve the readability of the labels, the type-ID is placed between the postcode (zip code) and the sequence number (or customer-id number for customers).
References
Electrical engineering
Electric power distribution network operators | Power Distribution Equipment Identification | [
"Engineering"
] | 865 | [
"Electrical engineering"
] |
76,142,154 | https://en.wikipedia.org/wiki/Vibrational%20spectroscopic%20map | Vibrational spectroscopic maps are a series of ab initio, semiempirical, or empirical models tailored to specific IR probes to describe vibrational solvatochromic effects on molecular spectra quantitatively.
Coherent multidimensional spectroscopy, a nonlinear spectroscopy utilizing multiple time-delayed pulses, is a technique that enables the measurement of solvation-induced frequency shifts and the time-correlations of the fluctuating frequencies. Researchers employ various organic and biochemical methods to introduce small vibrational probes into a variety of molecular systems, including chemicals, proteins, and nucleic acids. These probes, labeled with infrared (IR) markers, are subjected to spectroscopic investigation to obtain quantitative insights into various features of chemical and biological systems. In general, interpreting the experimental multidimensional spectra to extract information on the underlying molecular processes requires theoretical modeling.
The vibrational frequency shifts induced by the complex intermolecular interactions of small IR probes with their condensed-phase surroundings are minute, often amounting to fractions of thermal energy. The numerical accuracy of advanced quantum mechanical calculations is not sufficient to model these shifts reliably. Consequently, researchers commonly resort to mapping procedures, which correlate certain physical variables calculated for the probe molecule with spectroscopic properties such as vibrational frequencies. These mapping procedures are referred to as vibrational spectroscopic maps within the field.
Typically, the physical variables employed in vibrational frequency maps include electric potentials, electric fields, distributed higher multipole moments, and other relevant factors evaluated at specific points surrounding the molecule.
As an example, the vibrational frequency associated with a localized vibrational mode is correlated with the electrostatic potential and electric field values at a designated set of points known as distributed sites within the infrared (IR) chromophore.
Theoretical foundation
The vibrational frequency shift, denoted as $\Delta\omega_j$, for the jth normal mode of a given probe molecule is defined as the difference between the actual vibrational frequency of the mode in solution and the frequency in the gas phase, $\Delta\omega_j = \omega_j^{\mathrm{sol}} - \omega_j^{\mathrm{gas}}$.
From an effective Hamiltonian for the solute in the presence of the molecular environment, one can derive the effective vibrational force constant (or Hessian) matrix approximately as follows:

$\tilde{F}_{ab} \approx F_{ab} + \left(\frac{\partial^2 U}{\partial Q_a\, \partial Q_b}\right)_0$

where $F_{ab}$ is the gas-phase Hessian, $U$ is the solute–environment interaction potential, $Q_a$ denote the normal coordinates, and the subscript 0 means the quantity is evaluated at the gas-phase geometry.
In the limiting case that the vibrational couplings of the normal mode of interest with other vibrational modes are relatively weak, the vibrational frequency shift under such a weak coupling approximation (WCA) in solution from the gas-phase frequency is given by

$\Delta\omega_j \approx \frac{1}{2\omega_j}\left(\hat{D}_j^{\mathrm{EA}} + \hat{D}_j^{\mathrm{MA}}\right) U$

Here, $\hat{D}_j^{\mathrm{EA}}$ and $\hat{D}_j^{\mathrm{MA}}$ are the electric anharmonicity (EA) and mechanical anharmonicity (MA) operators, respectively. These operators are defined as

$\hat{D}_j^{\mathrm{EA}} = \left(\frac{\partial^2}{\partial Q_j^2}\right)_0$

and

$\hat{D}_j^{\mathrm{MA}} = -\sum_a \frac{g_{jja}}{\omega_a^2}\left(\frac{\partial}{\partial Q_a}\right)_0$

where $g_{jja}$ are the cubic anharmonic coefficients coupling mode j to the other modes a.
By substituting a relevant expression for the intermolecular interaction potential $U$ into the WCA expression for $\Delta\omega_j$, one can derive the vibrational frequency shift based on the specific theoretical potential model under consideration.
Semiempirical approaches
While several rigorous theories of vibrational solvatochromism based on physical approximations have been proposed, these sophisticated models often necessitate extensive quantum chemistry calculations at high levels of theory with large basis sets. Current electronic structure methods fall short of providing vibrational frequencies directly comparable to experimentally measured frequency shifts, especially when the shifts are on the order of a few wavenumbers.
To calculate the coefficients in vibrational solvatochromism expressions accurately, researchers frequently employ multivariate least-squares fitting. This technique involves fitting a sufficiently extensive training set, obtained from quantum chemistry calculations of vibrational frequency shifts for numerous clusters containing a solute and multiple solvent molecules.
An early approach aimed to express the solvation-induced vibrational frequency shift in terms of the solvent electric potentials evaluated at distributed atomic sites on the target solute molecule. This method calculates the solvent electric potentials at these solute sites from the atomic partial charges of the surrounding solvent molecules. The vibrational frequency shift of the solute molecule for the jth vibrational mode can then be represented as

$\Delta\omega_j = \omega_j - \omega_j^{\mathrm{gas}} = \sum_{k=1}^{N} l_{jk}\, \phi_k$

Here, $\omega_j$ represents the vibrational frequency of the jth normal mode in solution, $\omega_j^{\mathrm{gas}}$ signifies the vibrational frequency in the gas phase, N denotes the number of distributed sites on the solute molecule, $\phi_k$ denotes the solvent electric potential at the kth site of the solute molecule, and $l_{jk}$ are the parameters determined through least-squares fitting to a training database comprising clusters containing a solute and multiple solvent molecules. This provides a means of quantifying the impact of solvation on the vibrational frequencies of the solute molecule.
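The fitting step itself reduces to ordinary linear algebra. The sketch below uses entirely synthetic data; the number of sites and the coefficient values are illustrative placeholders, not any published parameterization. It recovers map coefficients $l_{jk}$ from a mock training set of site potentials and frequency shifts standing in for quantum chemistry results.

import numpy as np

rng = np.random.default_rng(0)
n_sites, n_clusters = 4, 200              # distributed sites; training clusters
true_l = np.array([2.1, -0.7, 0.3, 1.5])  # hidden "map" coefficients

# Synthetic stand-in for the training database: phi[i, k] is the solvent
# electric potential at site k of cluster i; shift[i] is the corresponding
# computed frequency shift, with a little numerical noise added.
phi = rng.normal(size=(n_clusters, n_sites))
shift = phi @ true_l + rng.normal(scale=0.05, size=n_clusters)

# Multivariate least-squares fit of the coefficients l_jk:
l_fit, *_ = np.linalg.lstsq(phi, shift, rcond=None)
print(np.round(l_fit, 3))  # approximately [2.1, -0.7, 0.3, 1.5]

# Applying the fitted map: predict the shift for a new solvent configuration.
phi_new = rng.normal(size=n_sites)
print(float(phi_new @ l_fit))

The same least-squares machinery applies whether the descriptors are site potentials or site electric fields; only the design matrix changes.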
Another widely used model for characterizing vibrational solvatochromic frequency shifts involves expressing the frequency shift in terms of solvent electric fields evaluated at distributed sites on the target solute molecule.
Developments
Vibrational spectroscopic maps have been developed for a diverse range of vibrational modes, including various molecular systems and functional groups. Some of the notable vibrational modes include:
Amide I mode of NMA (N-Methylacetamide)
Amide I mode of peptide molecules
Amide I vibration of isotope-labeled proteins
Amide II vibration
Nitrile (CN) stretch
Thiocyanato (SCN) stretch
Selenothiocyanato (SeCN) stretch
Azido (N3) stretch
Carbonmonoxy (CO) stretch
Ester carbonyl (O-C=O) stretch
Carbonate carbonyl (C=O) stretch
Water OH and OD stretch
C-D stretch
S=O stretch
Phosphate (PO2) stretch
Nucleic acid base modes
OH and OD stretch mode in alcohols
Water bending mode
References
External links
Frequency map repository
Spectroscopy | Vibrational spectroscopic map | [
"Physics",
"Chemistry"
] | 1,157 | [
"Instrumental analysis",
"Molecular physics",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
73,237,601 | https://en.wikipedia.org/wiki/Dische%20test | The Dische test, or Dische reaction, is used to distinguish DNA from RNA. It was invented by Zacharias Dische.
Method
Dische's diphenylamine reagent consists of diphenylamine, glacial acetic acid, sulfuric acid, and ethanol.
When heated with DNA, the reagent turns blue; a more intense blue color indicates a greater concentration of DNA.
Mechanism
The acid converts deoxyribose into a molecule that binds with diphenylamine to form a blue substance. The reagent does not react with RNA, so it can be used to distinguish DNA from RNA.
See also
Bial's test
References
Analytical reagents
Genetics techniques | Dische test | [
"Chemistry",
"Engineering",
"Biology"
] | 146 | [
"Genetics techniques",
"Analytical reagents",
"Genetic engineering"
] |