Dataset schema (viewer summary): id: int64 (39 to 79M) · url: string (lengths 32 to 168) · text: string (lengths 7 to 145k) · source: string (lengths 2 to 105) · categories: list (1 to 6 items) · token_count: int64 (3 to 32.2k) · subcategories: list (0 to 27 items)
66,443,279
https://en.wikipedia.org/wiki/H3R26me2
H3R26me2 is an epigenetic modification to the DNA packaging protein histone H3. It is a mark indicating di-methylation at the 26th arginine residue of the histone H3 protein. In epigenetics, arginine methylation of histones H3 and H4 is associated with a more accessible chromatin structure and thus higher levels of transcription. The existence of arginine demethylases that could reverse arginine methylation is controversial. Nomenclature The name of this modification indicates dimethylation of arginine 26 on the histone H3 protein subunit. Arginine Arginine can be methylated once (monomethylated arginine) or twice (dimethylated arginine). Methylation of arginine residues is catalyzed by three different classes of protein arginine methyltransferases (PRMTs). Arginine methylation affects the interactions between proteins and has been implicated in a variety of cellular processes, including protein trafficking, signal transduction, and transcriptional regulation. Arginine methylation plays a major role in gene regulation because of the ability of the PRMTs to deposit key activating (histone H4R3me2, H3R2me2, H3R17me2a, H3R26me2) or repressive (H3R2me2, H3R8me2, H4R3me2) histone marks. Histone modifications The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA around histones are known as chromatin. Mechanism and function of modification Methylation of H3R26 is mediated by CARM1, which is recruited to promoters upon gene activation along with acetyltransferases and activates transcription. When CARM1 is recruited to transcriptional promoters, histone H3 is methylated (H3R17me2 and H3R26me2). H3R26 lies close to H3K27, which is a repressive mark when methylated, so there are several ways that methylation at H3R26 could change gene expression. Epigenetic implications The post-translational modification of histone tails by either histone-modifying complexes or chromatin remodeling complexes is interpreted by the cell and leads to complex, combinatorial transcriptional output. It is thought that a histone code dictates the expression of genes by a complex interaction between the histones in a particular region. The current understanding and interpretation of histones comes from two large-scale projects: ENCODE and the Epigenomic Roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states, which define genomic regions by grouping different proteins and/or histone modifications together. Chromatin states were investigated in Drosophila cells by looking at the binding location of proteins in the genome. Use of ChIP-sequencing revealed regions in the genome characterized by different banding. Different developmental stages were profiled in Drosophila as well, with an emphasis on the relevance of histone modifications. Analysis of the data obtained led to the definition of chromatin states based on histone modifications. Certain modifications were mapped, and their enrichment was seen to localize in certain genomic regions. The human genome is annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence underscores the epigenetic nature of histone modifications. Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers. 
This additional level of annotation allows for a deeper understanding of cell-specific gene regulation. Clinical significance CARM1 knockout mice are smaller and die shortly after birth. CARM1 is required for the epigenetic maintenance of pluripotency and self-renewal, as it methylates H3R17 and H3R26 at core pluripotency genes such as Oct4, Sox2, and Nanog. H3R26me2 levels may also change during the pre-implantation development of bovine embryos. Methods The histone mark H3R26me2 can be detected in a variety of ways: 1. Chromatin immunoprecipitation sequencing (ChIP-sequencing) measures the amount of DNA enrichment once bound to a targeted protein and immunoprecipitated. It can be well optimized and is used in vivo to reveal DNA-protein binding occurring in cells. ChIP-seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region. 2. Micrococcal nuclease sequencing (MNase-seq) is used to investigate regions that are bound by well-positioned nucleosomes. The micrococcal nuclease enzyme is employed to identify nucleosome positioning; well-positioned nucleosomes show enrichment of sequences. 3. Assay for transposase-accessible chromatin sequencing (ATAC-seq) is used to investigate regions that are nucleosome-free (open chromatin). It uses the hyperactive Tn5 transposase to highlight nucleosome localisation. See also Histone methylation Histone methyltransferase References Epigenetics Post-translational modification
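To make the enrichment idea behind these assays concrete, here is a minimal sketch (not from the article) of the fold-enrichment computation that underlies ChIP-seq analyses of marks like H3R26me2. The window, read counts, and library sizes are hypothetical; real pipelines use dedicated peak callers on aligned reads.

```python
# Minimal sketch: fold enrichment of a histone mark from ChIP-seq counts.
# All numbers are hypothetical; real analyses use peak callers (e.g. on
# aligned reads) rather than a single hand-picked window.

def fold_enrichment(ip_reads, input_reads, ip_total, input_total):
    """Library-size-normalized IP/input ratio over one genomic window."""
    ip_density = ip_reads / ip_total            # fraction of IP library in window
    input_density = input_reads / input_total   # fraction of control library
    return ip_density / input_density

# Hypothetical window over a promoter region
print(fold_enrichment(ip_reads=480, input_reads=60,
                      ip_total=2_000_000, input_total=1_500_000))
# -> 6.0: the mark is ~6x enriched over background in this window
```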
H3R26me2
[ "Chemistry" ]
1,158
[ "Post-translational modification", "Gene expression", "Biochemical reactions" ]
61,449,511
https://en.wikipedia.org/wiki/ZTF%20J153932.16%2B502738.8
ZTF J153932.16+502738.8 is a double white dwarf binary with an orbital period of just 6.91 minutes. Its period has been observed to be decreasing due to the emission of gravitational waves. It is both an eclipsing binary and a double-lined spectroscopic binary. One of the white dwarfs is hot, while the other is significantly cooler (<10,000 K). The stars may merge into one in 130,000 years, or, if mass transfers between them, they may separate again. Their distance from Earth is estimated at several thousand light-years. Stars The brighter star has a logarithm of surface gravity of 7.75 and a mass 0.6 times that of the Sun; its radius is 0.0156 that of the Sun. The dimmer star is cooler, with a temperature under 10,000 K, and has a mass 0.21 times that of the Sun. It is physically larger than the brighter star, at 0.0314 the radius of the Sun. Name ZTF stands for Zwicky Transient Facility, a survey of the whole northern sky that records light curves using the Samuel Oschin Telescope at Palomar Observatory. Eclipse The light curve shows eclipses. One dip in the light curve has a depth of about 15%, and the other is close to 100%; this means that one star is much brighter than the other. The light curve is not flat between eclipses, as the bright star lights up the facing side of the dim star. Orbital decay The orbital period is decreasing at a rate of a few times 10^-11 seconds per second, giving a characteristic timescale of 210,000 years. This decay is mostly due to the emission of gravitational waves, though up to 7% of it could be due to tidal losses. The decay is predicted to continue for 130,000 years, by which time the orbital period should reach 5 minutes. Then the dimmer star is predicted to expand and lose mass to the more massive star; the system could then become an AM CVn system or merge to make an R Coronae Borealis star. The orbit is comparable to those of V407 Vulpeculae, with a 9.5-minute orbit, and HM Cancri, with a 5.4-minute orbit. Star composition The hot star is a hydrogen-rich white dwarf of type DA. It has wide, shallow absorption lines of hydrogen. The dim star has narrow hydrogen emission lines. There are also helium absorption and emission lines. The two kinds of lines vary over the orbital period, so they can be identified with the two components. The emission lines are likely due to excess heating of the dim star by the bright one. References Spectroscopic binaries White dwarfs Gravitational-wave astronomy Boötes Eclipsing binaries
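The quoted decay and timescale can be sanity-checked against the leading-order gravitational-wave prediction for a circular binary (Peters' 1964 formula). The sketch below uses the masses and period given above; it is a rough order-of-magnitude check, not the discovery paper's analysis.

```python
import math

# Order-of-magnitude check of the orbital decay using Peters' (1964)
# leading-order gravitational-wave formula for a circular binary.
# Masses and period are taken from the article text above.

G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30
YEAR = 3.156e7                        # seconds

m1, m2 = 0.6 * M_SUN, 0.21 * M_SUN
P = 6.91 * 60.0                       # orbital period, s

Mc = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2   # chirp mass
dPdt = -(96 / 5) * (2 * math.pi) ** (8 / 3) \
       * (G * Mc / c**3) ** (5 / 3) * P ** (-5 / 3)

print(f"dP/dt  ~ {dPdt:.1e} s/s")          # ~ -2e-11 s/s, a few times 1e-11
# P(t) ~ (t_c - t)^(3/8) implies a GW inspiral time of (3/8) P / |dP/dt|
print(f"t_insp ~ {0.375 * P / abs(dPdt) / YEAR:.0f} yr")  # ~ 2e5 yr,
# roughly consistent with the ~210,000-year timescale quoted above
```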
ZTF J153932.16+502738.8
[ "Physics", "Astronomy" ]
553
[ "Boötes", "Astrophysics", "Constellations", "Gravitational-wave astronomy", "Astronomical sub-disciplines" ]
61,453,053
https://en.wikipedia.org/wiki/Ewald%20Prize
In 1986 the International Union of Crystallography (IUCr) established the Ewald Prize for outstanding contributions to the science of crystallography. The Ewald Prize is considered the highest prize available to crystallographers apart from the Nobel Prize, and it has been described as prestigious, acclaimed and coveted. The prize is named after Paul Peter Ewald in recognition of his contributions to the founding and leadership of the IUCr. The prize consists of a medal, a certificate and a financial award (US$20,000 in 1987). It is presented during the triennial International Congresses of Crystallography; the first prize was presented during the XIV Congress at Perth, Australia, in 1987. The prize is open to any scientist who has made contributions of exceptional distinction to the science of crystallography, irrespective of nationality, age or experience, and it may be shared by several contributors to the same scientific achievement. Prize Winners References Crystallography awards
Ewald Prize
[ "Chemistry", "Materials_science" ]
202
[ "Crystallography awards", "Crystallography" ]
73,782,857
https://en.wikipedia.org/wiki/Receptivity%20%28NMR%29
In NMR spectroscopy, receptivity refers to the relative detectability of a particular element. Some elements are easily detected, some less so. The receptivity is a function of the abundance of the element's NMR-responsive isotope and that isotope's gyromagnetic ratio (or equivalently, the nuclear magnetic moment). Some isotopes, tritium for example, have large gyromagnetic ratios but low abundance. Other isotopes, for example 103Rh, are highly abundant but have low gyromagnetic ratios. Widely used NMR spectroscopies often focus on highly receptive elements: 1H, 19F, and 31P. References Nuclear magnetic resonance
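A standard way to make this dependence explicit (a textbook relation, not taken from a specific source here) is to write the receptivity of a nucleus X as proportional to its natural abundance, the cube of its gyromagnetic ratio, and a spin factor:

```latex
% Receptivity of nucleus X relative to a reference nucleus (often 1H or 13C):
% A = natural abundance, gamma = gyromagnetic ratio, I = spin quantum number.
\[
  D_X \;\propto\; A_X\,\gamma_X^{3}\,I_X\!\left(I_X+1\right),
  \qquad
  D_X^{\mathrm{H}} \;=\;
  \frac{A_X\,\gamma_X^{3}\,I_X(I_X+1)}
       {A_{\mathrm{H}}\,\gamma_{\mathrm{H}}^{3}\,I_{\mathrm{H}}(I_{\mathrm{H}}+1)}
\]
% Example: for 13C (A ~ 1.1%, gamma ~ gamma_H/4, I = 1/2) this gives
% D ~ 1.7e-4 relative to 1H, which is why 13C spectra need far more averaging.
```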
Receptivity (NMR)
[ "Physics", "Chemistry" ]
146
[ "Nuclear chemistry stubs", "Nuclear magnetic resonance", "Nuclear magnetic resonance stubs", "Nuclear physics" ]
73,787,498
https://en.wikipedia.org/wiki/Gibbs%20rotational%20ensemble
The Gibbs rotational ensemble represents the possible states of a mechanical system in thermal and rotational equilibrium at temperature $T$ and angular velocity $\vec\omega$. The Jaynes procedure can be used to obtain this ensemble. An ensemble is the set of microstates corresponding to a given macrostate. The Gibbs rotational ensemble assigns a probability $p_i$ to a given microstate characterized by energy $E_i$ and angular momentum $\vec{L}_i$ for a given temperature $T$ and rotational velocity $\vec\omega$: $p_i = e^{-\beta(E_i - \vec\omega\cdot\vec{L}_i)}/Z$, where $\beta = 1/k_B T$ and $Z$ is the partition function $Z = \sum_i e^{-\beta(E_i - \vec\omega\cdot\vec{L}_i)}$. Derivation The Gibbs rotational ensemble can be derived using the same general method as to derive any ensemble, as given by E. T. Jaynes in his 1957 paper Information Theory and Statistical Mechanics. Let $f(x)$ be a function with expectation value $\langle f \rangle = \sum_i p_i f(x_i)$, where $p_i$ is the probability of $x_i$, which is not known a priori. The probabilities obey the normalization $\sum_i p_i = 1$. To find $p_i$, the Shannon entropy is maximized, where the Shannon entropy goes as $S \propto -\sum_i p_i \ln p_i$. The method of Lagrange multipliers is used to maximize $S$ under the expectation-value constraint and the normalization condition, using Lagrange multipliers $\lambda$ and $\mu$, to find $p_i = e^{-\lambda - \mu f(x_i)}$. $\lambda$ is found via normalization, and $p_i$ can be written as $p_i = e^{-\mu f(x_i)}/Z$, where $Z$ is the partition function $Z = \sum_i e^{-\mu f(x_i)} = e^{\lambda}$. This is easily generalized to any number of constraints via the incorporation of more Lagrange multipliers. Now investigating the Gibbs rotational ensemble, the method of Lagrange multipliers is again used to maximize the Shannon entropy, but this time under the constraints of energy expectation value $\langle E \rangle$ and angular momentum expectation value $\langle \vec{L} \rangle$, which gives $p_i$ as $p_i = e^{-\lambda - \mu_1 E_i - \vec\mu_2\cdot\vec{L}_i}$. Via normalization, $\lambda$ is found to be $e^{\lambda} = \sum_i e^{-\mu_1 E_i - \vec\mu_2\cdot\vec{L}_i} = Z$. Like before, $\langle E \rangle$ and $\langle \vec{L} \rangle$ are given by $\langle E \rangle = -\partial \ln Z/\partial \mu_1$ and $\langle \vec{L} \rangle = -\partial \ln Z/\partial \vec\mu_2$. The entropy of the system is given by $S = -k_B \sum_i p_i \ln p_i$, such that $S = k_B(\ln Z + \mu_1 \langle E \rangle + \vec\mu_2\cdot\langle \vec{L} \rangle)$, where $k_B$ is the Boltzmann constant. The system is assumed to be in equilibrium, follow the laws of thermodynamics, and have fixed uniform temperature $T$ and angular velocity $\vec\omega$. The first law of thermodynamics as applied to this system is $d\langle E \rangle = T\,dS + \vec\omega\cdot d\langle \vec{L} \rangle$. Recalling the entropy differential $dS = k_B(\mu_1\, d\langle E \rangle + \vec\mu_2\cdot d\langle \vec{L} \rangle)$, combining the first law of thermodynamics with the entropy differential gives $d\langle E \rangle = \frac{1}{k_B \mu_1} dS - \frac{\vec\mu_2}{\mu_1}\cdot d\langle \vec{L} \rangle$. Comparing this result with the entropy differential given by entropy maximization allows determination of $\mu_1$ and $\vec\mu_2$: $\mu_1 = 1/k_B T = \beta$ and $\vec\mu_2 = -\beta\vec\omega$, which allows the probability of a given state to be written as $p_i = e^{-\beta(E_i - \vec\omega\cdot\vec{L}_i)}/Z$, which is recognized as the probability of some microstate given a prescribed macrostate using the Gibbs rotational ensemble. The term $E_i - \vec\omega\cdot\vec{L}_i$ can be recognized as the effective Hamiltonian $\mathcal{H}' = \mathcal{H} - \vec\omega\cdot\vec{L}$ for the system, which then simplifies the Gibbs rotational partition function to that of a normal canonical system, $Z = \sum_i e^{-\beta \mathcal{H}'_i}$. Applicability The Gibbs rotational ensemble is useful for calculations regarding rotating systems. It is commonly used for describing particle distribution in centrifuges. For example, take a rotating cylinder (height $H$, radius $R$) with fixed particle number $N$, fixed volume $V$, fixed average energy $\langle E \rangle$, and average angular momentum $\langle \vec{L} \rangle$. The expectation value of the number density of particles at radius $r$, $\langle n(r) \rangle$, can be calculated using the Gibbs rotational partition function; the density of a particle at a given point can be thought of as unity divided by an infinitesimal volume, which can be represented as a delta function. This finally gives $\langle n(r) \rangle$ as $\langle n(r) \rangle = \frac{N \beta m \omega^2\, e^{\beta m \omega^2 r^2/2}}{2\pi H\left(e^{\beta m \omega^2 R^2/2} - 1\right)}$, which is the expected result: the density grows exponentially toward the rim of the cylinder. 
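As a concrete illustration of the centrifuge result, the following sketch evaluates the relative density profile n(r)/n(0) = exp(β m ω² r²/2) for an ideal monatomic gas. The gas, rotation rate, and geometry are illustrative assumptions, not values from any particular experiment.

```python
import math

# Radial density profile of an ideal gas in a rotating cylinder,
#   n(r) = n(0) * exp(m * w**2 * r**2 / (2 * kB * T)),
# which is the Gibbs rotational ensemble result derived above.
# Gas species, spin rate, and radius below are illustrative only.

kB = 1.380649e-23           # Boltzmann constant, J/K
m = 39.95 * 1.6605e-27      # mass of an argon atom, kg
T = 300.0                   # temperature, K
w = 2 * math.pi * 1000.0    # 1 kHz rotation, rad/s

for r in (0.00, 0.05, 0.10):            # radii in meters
    rel = math.exp(m * w**2 * r**2 / (2 * kB * T))
    print(f"n({r:.2f} m) / n(0) = {rel:.2f}")
# Output rises from 1.0 at the axis to ~24 at r = 0.10 m:
# the heavier the particle or the faster the spin, the steeper the profile.
```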
Difference between Grand canonical ensemble and Gibbs canonical ensemble The grand canonical ensemble and the Gibbs canonical ensemble are two different statistical ensembles used in statistical mechanics to describe systems with different constraints. The grand canonical ensemble describes a system that can exchange both energy and particles with a reservoir. It is characterized by three variables: the temperature (T), chemical potential (μ), and volume (V) of the system. The chemical potential determines the average particle number in this ensemble, which allows for some variation in the number of particles. The grand canonical ensemble is commonly used to study systems with a fixed temperature and chemical potential but a variable particle number, such as gases in contact with a particle reservoir. On the other hand, the Gibbs canonical ensemble describes a system that can exchange energy but has a fixed number of particles. It is characterized by two variables: the temperature (T) and volume (V) of the system. In this ensemble the energy of the system can fluctuate, but the number of particles remains fixed. The Gibbs canonical ensemble is commonly used to study systems with a fixed temperature and particle number but variable energy, such as systems in thermal equilibrium. References Statistical mechanics
Gibbs rotational ensemble
[ "Physics" ]
818
[ "Statistical mechanics" ]
73,788,473
https://en.wikipedia.org/wiki/SCHEMBL19952957
SCHEMBL19952957 is an oxadiazole-based antibiotic, originally developed in 2014 as a potential treatment for infections with methicillin-resistant Staphylococcus aureus (MRSA) and other antibiotic-resistant bacteria. Subsequently, it has been found to be useful against Clostridioides difficile, as it not only kills active bacteria but also inhibits the germination of the dormant spores which can otherwise lead to persistent infections that repeatedly recur upon cessation of antibiotic treatment. While it has still only been studied in animals at this stage, this dual action is a significant advance over existing antibiotics, and drugs from this class may be developed as new medications for the treatment of antibiotic-resistant infections in humans. See also Cadazolid Fidaxomicin Ridinilazole Surotomycin References Oxadiazoles 4-Hydroxyphenyl compounds Cyclopentanols Aromatic ethers
SCHEMBL19952957
[ "Chemistry" ]
207
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
73,788,761
https://en.wikipedia.org/wiki/Faber%E2%80%93Evans%20model
The Faber–Evans model for crack deflection is a fracture mechanics-based approach to predicting the increase in toughness of two-phase ceramic materials due to crack deflection. The effect is named after Katherine Faber and her mentor, Anthony G. Evans, who introduced the model in 1983. Crack deflection of this kind is a principal strategy for tempering brittleness and creating effective ductility. Fracture toughness is a critical property of ceramic materials, determining their ability to resist crack propagation and failure. The Faber model considers the effects of different particle morphologies, including spherical, rod-shaped, and disc-shaped particles, and their influence on the driving force at the tip of a tilted and/or twisted crack. The model first suggested that rod-shaped particles with high aspect ratios are the most effective morphology for deflecting propagating cracks and increasing fracture toughness, primarily due to the twist of the crack front between particles. The findings provide a basis for designing high-toughness two-phase ceramic materials, with a focus on optimizing particle shape and volume fraction. Fracture mechanics and crack deflection Fracture mechanics is a fundamental discipline for understanding the mechanical behavior of materials, particularly in the presence of cracks. The critical parameter in fracture mechanics is the stress intensity factor (K), which is related to the strain energy release rate (G) and the fracture toughness (Gc). When the crack driving force reaches the material's fracture toughness, crack propagation becomes unstable, leading to failure. In two-phase ceramic materials, the presence of a secondary phase can lead to crack deflection, a phenomenon in which the crack path deviates from its original direction due to interactions with the second-phase particles. Crack deflection can reduce the driving force at the crack tip, increasing the material's fracture toughness. The effectiveness of crack deflection in enhancing fracture toughness depends on several factors, including particle shape, size, volume fraction, and spatial distribution. The model introduces weighting functions, F(θ), for the three particle morphologies, which describe the distribution of tilt angles (θ) along the crack front. The weighting functions are used to determine the net driving force on the tilted crack for each morphology; for spherical particles, the relative driving force prescribes the strain energy release rate only for that portion of the crack front which tilts. To characterize the entire crack front at initial tilt, this driving force must be weighted by the fraction of the crack length intercepted and superposed on the driving force that derives from the remaining undeflected portion of the crack. The resultant toughening increment, derived directly from the driving forces, depends on the fracture toughness of the matrix material without any reinforcing particles, the volume fraction of the particles, and the particle aspect ratios: the ratio of rod length to rod radius for rods, and the ratio of disc radius to disc thickness for discs. Spatial location and orientation of particles The spatial location and orientation of adjacent particles play a crucial role in determining whether the inter-particle crack front will tilt or twist. If adjacent particles produce tilt angles of opposite sign, twist of the crack front will result. Conversely, tilt angles of like sign at adjacent particles cause the entire crack front to tilt. 
Therefore, to evaluate the toughening increment, all possible particle configurations must be considered. For spherical particles, the average twist angle is determined by the mean center-to-center nearest-neighbor distance between spheres of radius r. The maximum twist angle occurs when the particles are nearly co-planar with the crack, and it depends exclusively on the volume fraction. For rod-shaped particles, the analysis of crack-front twist is more complex, owing to difficulties in describing the rod orientation with respect to the crack front and to adjacent rods. The twist angle is determined by the effective tilt angle and by the inter-particle spacing between randomly arranged rod-shaped particles; the twist of the crack front is influenced not only by the volume fraction of rods but also by the dimensionless effective spacing between adjacent rods and the ratio of rod length to radius. Morphology and volume effects on fracture toughness The analysis reveals that rod-shaped particles with high aspect ratios are the most effective morphology for deflecting propagating cracks, with the potential to increase fracture toughness by up to four times. This toughening arises primarily from the twist of the crack front between particles. Disc-shaped particles and spheres are less effective in increasing fracture toughness. For disc-shaped particles with high aspect ratios, initial crack-front tilt can provide significant toughening, although the twist component still dominates; in contrast, neither spherical nor rod-shaped particles derive substantial toughening from the initial tilting process. As the volume fraction of particles increases, an asymptotic toughening effect is observed for all three morphologies at volume fractions above 0.2. For spherical particles, the interparticle spacing distribution has a significant impact on toughening, with greater enhancements when spheres are nearly contacting and twist angles approach π/2. The Faber–Evans model thus indicates that rod-shaped particles with high aspect ratios are the most effective morphology for deflecting propagating cracks and increasing fracture toughness, primarily due to the twist of the crack front between particles, while disc-shaped particles and spheres are less effective. In designing high-toughness two-phase ceramic materials, the focus should be on optimizing particle shape and volume fraction. The model showed that the ideal second phase should be chemically compatible with the matrix and present in amounts of 10 to 20 volume percent, with high-aspect-ratio particles, particularly rod-shaped morphologies, providing the maximum toughening effect. The model is often used in the development of advanced ceramic materials when an increase in fracture toughness is a design consideration. See also Fracture toughness Toughening Ceramic Engineering Fracture References Fracture mechanics Ceramic engineering Materials science
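Since the model expresses its driving-force ratios as strain energy release rates, it can help to see how a G-level toughening ratio translates into the stress intensity factor K through the plane-strain identity G = K²(1 − ν²)/E. The sketch below uses illustrative, roughly alumina-like matrix properties; none of these numbers come from the Faber–Evans paper.

```python
import math

# Relating strain energy release rate G and stress intensity factor K
# via the plane-strain identity  G = K**2 * (1 - nu**2) / E.
# Matrix properties are illustrative (roughly alumina-like), and the
# G-level "boost" factors stand in for deflection-induced toughening.

E = 380e9         # Young's modulus, Pa
nu = 0.25         # Poisson's ratio
K_matrix = 3.5e6  # matrix toughness, Pa*sqrt(m)

G_matrix = K_matrix**2 * (1 - nu**2) / E
for boost in (1.0, 2.0, 4.0):       # G_c / G_matrix toughening ratios
    K_c = math.sqrt(boost * G_matrix * E / (1 - nu**2))
    print(f"G_c/G_m = {boost:.0f}  ->  K_c = {K_c/1e6:.1f} MPa*sqrt(m)")
# Note that a fourfold rise in G corresponds to only a twofold rise in K.
```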
Faber–Evans model
[ "Physics", "Materials_science", "Engineering" ]
1,277
[ "Structural engineering", "Applied and interdisciplinary physics", "Fracture mechanics", "Materials science", "nan", "Ceramic engineering", "Materials degradation" ]
73,789,292
https://en.wikipedia.org/wiki/AT%202021lwx
AT 2021lwx (also known as ZTF20abrbeie or "Scary Barbie") is the most energetic non-quasar optical transient astronomical event ever observed, with a peak luminosity of 7 × 10^45 erg per second (erg s^-1) and a total radiated energy between 9.7 × 10^52 erg and 1.5 × 10^53 erg over three years. Although AT 2021lwx has been lauded as the largest explosion ever observed, GRB 221009A was both more energetic and brighter. It was first identified in imagery obtained on 13 April 2021 by the Zwicky Transient Facility (ZTF) astronomical survey and is believed to be due to the accretion of matter onto a supermassive black hole (SMBH) heavier than one hundred million solar masses. It has a redshift of z = 0.9945, which would place it at a distance of about eight billion light-years from Earth, and is located in the constellation Vulpecula. No host galaxy has been detected. Forced photometry of earlier ZTF imagery showed AT 2021lwx had already begun brightening by 16 June 2020, as ZTF20abrbeie. It was also detected independently in data from the Asteroid Terrestrial-impact Last Alert System (ATLAS) as ATLAS20bkdj, and the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) as PS22iin. At the Neil Gehrels Swift Observatory, X-ray observations were made with the X-ray Telescope and ultraviolet observations with the Ultraviolet-Optical Telescope (UVOT). Subrayan et al. originally interpreted it as a tidal disruption event between an SMBH (~10^8 solar masses) and a massive star (~14 solar masses). Wiseman et al. disfavor this interpretation, and instead believe the most likely scenario is "the sudden accretion of a large amount of gas, potentially a giant molecular cloud" (~1,000 solar masses), onto an SMBH (>10^8 solar masses). The inferred mass of the SMBH, based on the observed brightness, is about one hundred million to one billion solar masses. However, the theoretical limit for an accreting supermassive black hole is around one hundred million solar masses, so given the best-understood models of accreting SMBHs, this event may involve the most massive SMBH yet observed to accrete matter. See also Ophiuchus Supercluster eruption, a 5 × 10^61-erg event that may have occurred up to 240 million years ago, revealed by a giant radio fossil MS 0735.6+7421, a 10^61-erg eruption that has been occurring for the last 100 million years GRB 080916C, an 8.8 × 10^54-erg gamma-ray burst seen in 2008 GRB 221009A, a 1.2 × 10^55-erg gamma-ray burst seen in 2022 References Astronomical objects discovered in 2023 Astronomical events Supermassive black holes Vulpecula
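The mass argument above amounts to an Eddington-limit estimate: the peak luminosity sets a lower bound on the mass of a steadily accreting black hole. A minimal sketch of that arithmetic, using the standard electron-scattering Eddington coefficient:

```python
# Eddington-limit check for AT 2021lwx: for steady accretion the
# luminosity cannot exceed  L_Edd ~ 1.26e38 * (M / M_sun) erg/s
# (electron-scattering opacity, solar-composition gas), so the peak
# luminosity implies a minimum black hole mass.

L_peak = 7e45                       # erg/s, from the article
L_EDD_PER_MSUN = 1.26e38            # erg/s per solar mass

M_min = L_peak / L_EDD_PER_MSUN     # minimum mass for sub-Eddington accretion
print(f"M >~ {M_min:.1e} solar masses")   # ~ 5.6e7, of order 1e8 Msun
```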
AT 2021lwx
[ "Physics", "Astronomy" ]
638
[ "Black holes", "Vulpecula", "Astronomical events", "Unsolved problems in physics", "Supermassive black holes", "Constellations" ]
73,793,820
https://en.wikipedia.org/wiki/RNA%20velocity
RNA velocity is a method used to predict the future gene expression of a cell from measurements of both spliced and unspliced mRNA transcripts. It bridges these measurements to an underlying mechanism, mRNA splicing, with the two modes indicating the current and the future state. RNA velocity can be used to infer the direction of gene expression changes in single-cell RNA sequencing (scRNA-seq) data. It provides insights into the future state of individual cells by using the ratio of unspliced to spliced RNA transcripts. This ratio can indicate the transcriptional dynamics and potential fate of a cell, such as whether it is transitioning from one cell type to another or undergoing differentiation. Software usage There are several software tools available for RNA velocity analysis. Each of these tools has its own strengths and applications, so the choice of tool depends on the specific requirements of the analysis: velocyto Velocyto is a package for the analysis of expression dynamics in single-cell RNA-seq data. In particular, it enables estimation of the RNA velocities of single cells by distinguishing unspliced and spliced mRNAs in standard single-cell RNA sequencing protocols. Its paper was the first to propose the concept of RNA velocity; velocyto predicts RNA velocity by solving the proposed differential equations for each gene. The authors envision future manifold learning algorithms that simultaneously fit a manifold and the kinetics on that manifold, on the basis of RNA velocity. scVelo scVelo is a method that solves the full transcriptional dynamics of splicing kinetics using a likelihood-based dynamical model. This generalizes RNA velocity to systems with transient cell states, which are common in development and in response to perturbations. scVelo was applied to disentangling subpopulation kinetics in neurogenesis and pancreatic endocrinogenesis, and it demonstrates the capabilities of the dynamical model on various cell lineages in hippocampal dentate gyrus neurogenesis and pancreatic endocrinogenesis. cellDancer cellDancer is a scalable deep neural network that locally infers velocity for each cell from its neighbors and then relays a series of local velocities to provide single-cell-resolution inference of velocity kinetics. cellDancer relaxes the kinetic assumptions of velocyto and scVelo, in which the transcription rate was either a constant (velocyto) or binary (scVelo) and the splicing and degradation rates were shared by all genes and cells, which may give unpredictable performance; cellDancer instead predicts gene- and cell-specific transcription, splicing and degradation rates through deep learning. MultiVelo MultiVelo is a differential equation model of gene expression that extends the RNA velocity framework to incorporate epigenomic data. MultiVelo uses a probabilistic latent variable model to estimate the switch time and rate parameters of chromatin accessibility and gene expression. DeepVelo DeepVelo is a neural network-based ordinary differential equation that can model complex transcriptome dynamics by describing continuous-time gene expression changes within individual cells. DeepVelo has been applied to public datasets from different sequencing platforms to (i) formulate transcriptome dynamics on different time scales, (ii) measure the instability of cell states, and (iii) identify developmental driver genes via perturbation analysis. 
UnitVelo UnitVelo is a statistical framework for RNA velocity that models the dynamics of spliced and unspliced RNAs via flexible transcription activities. UnitVelo supports the inference of a unified latent time across the transcriptome. References Molecular biology RNA RNA sequencing
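As an illustration of how such tools are typically driven, the snippet below sketches a standard scVelo workflow on its bundled pancreas demo dataset. The function names follow scVelo's documented API, but defaults change between versions, so treat this as a sketch rather than a recipe.

```python
import scvelo as scv

# Sketch of a standard scVelo workflow on its bundled demo dataset.
adata = scv.datasets.pancreas()

# keep genes detected in both spliced and unspliced layers, then normalize
scv.pp.filter_and_normalize(adata, min_shared_counts=20, n_top_genes=2000)
# first- and second-order moments over a k-nearest-neighbor cell graph
scv.pp.moments(adata, n_pcs=30, n_neighbors=30)

scv.tl.velocity(adata)          # stochastic model by default
scv.tl.velocity_graph(adata)    # cell-to-cell transition graph
scv.pl.velocity_embedding_stream(adata, basis="umap")  # streamline plot
```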
RNA velocity
[ "Chemistry", "Biology" ]
774
[ "Genetics techniques", "RNA sequencing", "Molecular biology techniques", "Molecular biology", "Biochemistry" ]
73,795,388
https://en.wikipedia.org/wiki/Nanophysiology
Nanophysiology is a field that concerns the function of nanodomains, such as the regulation of molecular or ionic flows in cell subcompartments: glial protrusions, dendritic spines, dendrites, mitochondria and many more. Background Molecular organization in nanocompartments provides the structural basis required to achieve the elementary functions that sustain the higher physiological functions of a cell. These include calcium homeostasis, protein turnover, and the plastic changes underlying cell communication. The goal of this field is to determine the function of these nanocompartments based on molecular organization, ionic flow or voltage distribution. Voltage dynamics How the voltage is regulated in nanodomains remains an open question. While the classical Goldman-Hodgkin-Katz and Hodgkin-Huxley models in biophysics provide a foundation for electrophysiology and have been responsible for many advances in neuroscience, these theories remain insufficient to describe the voltage dynamics in small nanocompartments, such as synaptic terminals or the cytoplasm around voltage-gated channels, because they assume spatial and ionic homogeneity. Instead, electrodiffusion theory should be used to describe electrical current flow in these nanostructures and reveal their structure-function relationships. References Biophysics Cell biology
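The electrodiffusion framework referred to above is usually written as the Poisson-Nernst-Planck (PNP) system, which couples each ion concentration to the electric potential instead of assuming spatial and ionic homogeneity. A standard statement (not specific to any particular nanodomain) is:

```latex
% Poisson-Nernst-Planck (PNP) electrodiffusion: concentration c_k and
% valence z_k of ion species k, potential phi, permittivity eps.
\begin{align}
  \frac{\partial c_k}{\partial t}
    &= \nabla \cdot \left[ D_k \left( \nabla c_k
       + \frac{z_k e}{k_B T}\, c_k \nabla \phi \right) \right], \\
  -\nabla \cdot \left( \varepsilon \nabla \phi \right)
    &= \sum_k z_k e\, c_k .
\end{align}
% The constant-field GHK picture is recovered only under homogeneity
% assumptions that break down at the nanodomain scale.
```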
Nanophysiology
[ "Physics", "Biology" ]
265
[ "Cell biology", "Applied and interdisciplinary physics", "Biophysics" ]
67,954,458
https://en.wikipedia.org/wiki/Beryllium%20oxalate
Beryllium oxalate is an inorganic compound, a beryllium salt of oxalic acid with the chemical formula BeC2O4. It forms colorless crystals, dissolves in water, and also forms crystalline hydrates. The compound is used to prepare ultra-pure beryllium oxide by thermal decomposition. Synthesis Beryllium oxalate is obtained by the action of oxalic acid on beryllium hydroxide: Be(OH)2 + H2C2O4 → BeC2O4 + 2 H2O Chemical properties The crystalline hydrates lose water when heated: BeC2O4·xH2O → BeC2O4 + x H2O References Inorganic compounds Beryllium compounds Oxalates
Beryllium oxalate
[ "Chemistry" ]
98
[ "Inorganic compounds" ]
67,958,010
https://en.wikipedia.org/wiki/Antimicrobial%20nanotechnology
Antimicrobial nanotechnology is the study of using biofilms to disrupt a microbe's cell membrane, deliver an electric charge to the microbe, and cause immediate cellular death via a "mechanical kill" process, preventing the original microbe from mutating into a superbug. The biofilms are made up of long atomic chains that can breach the cell wall. These spikes are far thinner than a human hair and far too small to injure the larger cells of mammals. The atomic chains carry a significant positive charge that attracts bacteria, which are negatively charged. A new class of antimicrobial has been created by applying nanotechnology to the challenge of superbugs and multiple-drug-resistant organisms. Problem statement According to a report published in the Archives of Internal Medicine on 22 February 2010, health care-associated infections affect 1.7 million hospitalizations per year. The most prevalent nosocomial infections can live or stay on surfaces for months, posing a continuing transmission risk. On dry surfaces, most gram-positive bacteria, including Enterococcus spp. (including VRE), Staphylococcus aureus (including MRSA), and Streptococcus pyogenes, can persist for months. VRE has been cultured from frequently touched objects and has been found to survive on surfaces for more than three days. Dried cotton fabrics have been shown to support vancomycin-resistant Enterococci for up to 18 hours and fungi for more than five days. Nanotechnology antimicrobials are promising because they limit the spread of bacteria by lowering the number of infectious agents at frequent contact points (doorknobs, rails, tables, etc.). These new treatments have been certified by the Environmental Protection Agency and are being considered for use in hospitals and other settings where community-acquired illnesses spread quickly, such as cruise ships and jails. Environmental measures and adequate antibiotic use are the first steps in preventing the emergence of superbugs. According to studies, even if a patient does not need an antibiotic, a doctor is considerably more likely to prescribe one if they believe the patient expects it. Safety and effects on the environment Antimicrobial nanotechnology is an environmentally friendly solution because it is water-based and contains no heavy metals, arsenic, tin, or polychlorinated phenols. According to tests, a garment treated with antimicrobial nanotechnology will degrade in a landfill within 5 years to carbon dioxide, nitrous oxide, and silicon dioxide. Using a nanotechnology antimicrobial Biofilms are being developed as consumer products that can be sprayed or wiped over porous and nonporous surfaces. The microbe resistance of a surface treated with the appropriate antimicrobial nanotechnology can last up to 90 days, or the product's usable life, if protected during the production process. On the preventative front, European researchers are developing MRSA-resistant nanotechnology-enhanced textiles that might be utilised in hospital gowns, curtains, bedding, and pillow coverings. References External links Nanotechnology Cell biology Medical technology
Antimicrobial nanotechnology
[ "Materials_science", "Engineering", "Biology" ]
647
[ "Antimicrobials", "Cell biology", "Materials science", "Nanomedicine", "Nanotechnology", "Biocides", "Medical technology" ]
67,960,783
https://en.wikipedia.org/wiki/Sparse%20polynomial
In mathematics, a sparse polynomial (also lacunary polynomial or fewnomial) is a polynomial that has far fewer terms than its degree and number of variables would suggest. For example, x^10 + 3x^3 - 1 is a sparse polynomial: it is a trinomial, while its degree is 10. The motivation for studying sparse polynomials is to concentrate on the structure of a polynomial's monomials instead of its degree, as one can see, for instance, by comparing the Bernstein-Kushnirenko theorem with Bezout's theorem. Research on sparse polynomials has also included work on algorithms whose running time grows as a function of the number of terms rather than of the degree, for problems including polynomial multiplication, division, root-finding algorithms, and polynomial greatest common divisors. Sparse polynomials have also been used in pure mathematics, especially in the study of Galois groups, because it has been easier to determine the Galois groups of certain families of sparse polynomials than it is for other polynomials. The algebraic varieties determined by sparse polynomials have a simple structure, which is also reflected in the structure of the solutions of certain related differential equations. Additionally, a sparse positivstellensatz exists for univariate sparse polynomials. It states that the non-negativity of a polynomial can be certified by sos polynomials whose degree depends only on the number of monomials of the polynomial. Sparse polynomials often come up in sum or difference of powers identities. The sum of two cubes states that a^3 + b^3 = (a + b)(a^2 - ab + b^2). Here a^3 + b^3 is a sparse polynomial, since out of the four possible terms of degree 3 in two variables, only two appear. Other examples include identities such as (x - 1)(x^(n-1) + x^(n-2) + ... + x + 1) = x^n - 1, where the product of two polynomials gives a sparse polynomial. The Bring-Jerrard normal form of a quintic, x^5 + px + q, is also a sparse polynomial. See also Askold Khovanskii, one of the main contributors to the theory of fewnomials. References Polynomials
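To make the term-count scaling concrete, here is a minimal dictionary-based sketch of sparse univariate polynomial multiplication, whose cost depends on the number of nonzero terms rather than on the degree:

```python
# Sparse univariate polynomials represented as {exponent: coefficient}.
# Multiplication costs O(s * t) coefficient operations for s and t
# nonzero terms, independent of how large the degrees are.

def sparse_mul(p, q):
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = e1 + e2
            out[e] = out.get(e, 0) + c1 * c2
    return {e: c for e, c in out.items() if c != 0}  # drop cancelled terms

# (x**1000 + 1) * (x**1000 - 1) = x**2000 - 1, computed in 4 term products
print(sparse_mul({1000: 1, 0: 1}, {1000: 1, 0: -1}))  # {2000: 1, 0: -1}
```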
Sparse polynomial
[ "Mathematics" ]
384
[ "Polynomials", "Algebra" ]
67,967,371
https://en.wikipedia.org/wiki/Kovacs%20effect
In statistical mechanics and condensed matter physics, the Kovacs effect is a kind of memory effect in glassy systems below the glass-transition temperature. A. J. Kovacs observed that a system's state out of equilibrium is defined not only by its macroscopic thermodynamical variables, but also by the inner parameters of the system. In the original effect, in response to a temperature change, under constant pressure, the isobaric volume and free energy of the system experienced a recovery characterized by a non-monotonic departure from equilibrium, whereas all other thermodynamical variables were at their equilibrium values. It is considered a memory effect since the relaxation dynamics of the system depend on its thermal and mechanical history. The effect was discovered by Kovacs in the 1960s in polyvinyl acetate. Since then, the Kovacs effect has been established as a very general phenomenon that comes about in a large variety of systems: model glasses, tapped dense granular matter, spin glasses, molecular liquids, granular gases, active matter, disordered mechanical systems, protein molecules, and more. The effect in Kovacs' experiments Kovacs' experimental procedure on polyvinyl acetate consisted of two main stages. In the first step, the sample is instantaneously quenched from a high initial temperature T_i to a low reference temperature T_1, under constant pressure. The time-dependent volume of the sample at T_1, V(t), is recorded until the time t_eq at which the system is considered to be at equilibrium. The volume at t_eq is defined as the equilibrium volume of the system at temperature T_1: V_eq(T_1) = V(t_eq). In the second step, the sample is quenched again from T_i, but to a temperature T_2 that is lower than T_1, so that T_2 < T_1. Now, however, the system is held at temperature T_2 only until the time t* when its volume reaches the equilibrium value for T_1, meaning V(t*) = V_eq(T_1). Then, the temperature is raised instantaneously to T_1, so both the temperature and the volume agree with the same equilibrium state. Naively, one expects that nothing should happen when the system is at T_1 with volume V_eq(T_1). But instead, the volume of the system first increases and then relaxes back to V_eq(T_1), while the temperature is held constant at T_1. This non-monotonic behavior of the volume in time after the jump in temperature can be captured by the dimensionless departure Δ(t) = [V(t) - V_eq(T_1)] / V_eq(T_1), where Δ(0) = 0 and Δ(t → ∞) = 0, but Δ(t) > 0 at intermediate times; Δ(t) is also referred to as the "Kovacs hump". Kovacs also found that the hump displayed some general features: it has only one maximum, of height Δ_max, at a certain time t_max; as the temperature T_2 is lowered, the hump becomes larger (Δ_max increases) and moves to shorter times (t_max decreases). In the subsequent studies of the Kovacs hump in different systems, a similar protocol with two jumps in the temperature has been employed. The associated time evolution of a relevant physical quantity, often the energy, is monitored and displays the Kovacs hump. The physical relevance of this behavior is the same as in the Kovacs experiment: it shows that the macroscopic variable alone does not completely characterize the dynamical state of the system, and that additional variables must be incorporated to have the whole picture. The Kovacs hump described above has been rationalized by employing linear response theory for molecular systems, in which the initial and final states are equilibrium ones. Therein, the "direct" relaxation function (with only one temperature jump, instead of two) is a superposition of positive, exponentially decaying modes, as a consequence of the fluctuation-dissipation theorem. Linear response makes it possible to write the Kovacs hump in terms of the direct relaxation function. 
Specifically, the positivity of all the modes in the direct relaxation function ensures the "normal" character of the hump, i.e. the fact that Δ(t) ≥ 0. Recently, analogous experiments have been proposed for "athermal" systems, like granular systems or active matter, with the proper reinterpretation of the variables. For instance, in granular gases the relevant physical property is still the energy (although one usually employs the terminology "granular temperature" for the kinetic energy in this context), but it is the intensity of the external driving that plays the role of the temperature. The emergence of Kovacs-like humps highlights the relevance of non-Gaussianities for describing the physical state of granular gases. "Anomalous" Kovacs humps, with Δ(t) ≤ 0, have been reported in athermal systems, i.e. a minimum is observed instead of a maximum. Although the linear-response connection between the Kovacs hump and the direct relaxation function can be extended to athermal systems, not all the modes are positive definite, since the standard version of the fluctuation-dissipation theorem does not apply. This is the key that facilitates the emergence of anomalous behavior. References Statistical mechanics Amorphous solids
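The linear-response picture can be illustrated with a toy model: a superposition of two exponentially decaying modes that cancels at t = 0 and as t → ∞, but not in between, produces exactly one positive hump. The amplitudes and relaxation times below are illustrative only, not fitted to any experiment.

```python
import math

# Toy linear-response Kovacs hump: two relaxation modes that cancel at
# t = 0 and as t -> infinity, but not at intermediate times:
#   Delta(t) = A * (exp(-t/tau_slow) - exp(-t/tau_fast)),  A > 0.
# Parameters are illustrative, not fitted to any experiment.

A, tau_fast, tau_slow = 1.0, 1.0, 5.0

def hump(t):
    return A * (math.exp(-t / tau_slow) - math.exp(-t / tau_fast))

# the single maximum sits where the two mode derivatives balance:
t_max = math.log(tau_slow / tau_fast) / (1 / tau_fast - 1 / tau_slow)
print(f"t_max = {t_max:.2f}, Delta(t_max) = {hump(t_max):.3f}")  # ~ (2.01, 0.535)
for t in (0.0, 1.0, 2.0, 5.0, 20.0):
    print(f"Delta({t:4.1f}) = {hump(t):+.4f}")   # 0 -> positive hump -> 0
```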
Kovacs effect
[ "Physics" ]
985
[ "Amorphous solids", "Statistical mechanics", "Unsolved problems in physics" ]
65,194,617
https://en.wikipedia.org/wiki/Split%20and%20pool%20synthesis
The split and pool (split-mix) synthesis is a method in combinatorial chemistry that can be used to prepare combinatorial compound libraries. It is a stepwise, highly efficient process realized in repeated cycles. The procedure makes it possible to prepare millions or even trillions of compounds, as mixtures, that can be used in drug research. History According to traditional methods, most organic compounds are synthesized one by one from building blocks, coupling them together one after the other in a stepwise manner. Before 1982, nobody was even dreaming about making hundreds or thousands of compounds in a single process, not to speak of millions or even trillions. So the productivity of the split and pool method invented by Prof. Á. Furka (Eötvös Loránd University, Budapest, Hungary) in 1982 seemed incredible at first sight. The method had been described in a document notarized in the same year; the document is written in Hungarian and has been translated into English. Motivations that led to the invention are found in a 2002 paper, and the method was first published at international congresses in 1988 and then in print in 1991. The split and pool synthesis and its features The split and pool synthesis (S&P synthesis) differs from traditional synthetic methods. The important novelty is the use of compound mixtures in the process; this is the reason for its unprecedentedly high productivity. Using the method, one single chemist can make more compounds in a week than all chemists have produced in the whole history of chemistry. The S&P synthesis is applied in a stepwise manner by repeating three operations in each step of the process: dividing a compound mixture into equal portions; coupling one different building block (BB) to each portion; pooling and thoroughly mixing the portions. The original method is based on the solid-phase synthesis of Merrifield. The procedure is illustrated in the figure by the flow diagram of a two-cycle synthesis using the same three BBs in both cycles. Choosing the solid-phase method for S&P synthesis is reasonable, since otherwise removal of the by-products from the mixture of compounds would be very difficult. Efficiency The high efficiency is the most important feature of the method. In a multi-step synthesis of n steps using an equal number of BBs (k) in every step, the number of components in the forming combinatorial library (N) is N = k^n. This means that the number of components increases exponentially with the number of steps (cycles), while the number of required couplings increases only linearly. If a different number of BBs is used in each cycle (k1, k2, k3, ..., kn), the number of components formed is N = k1·k2·k3·...·kn. This feature of the procedure offers the possibility to synthesize a practically unlimited number of compounds. For example, if 1,000 BBs are used in four cycles, 1 trillion compounds are expected to form; the number of needed couplings is only 4,000. The reason for the high efficiency The explanation of the extraordinary efficiency is the use of mixtures in the synthetic steps. In a traditional reaction, one compound is coupled with one reactant and one new compound is formed. If a mixture of compounds containing n components is coupled with a single reactant, the number of new compounds formed in the single coupling is n. The difference between the traditional and the split and pool synthesis is convincingly shown by the number of coupling steps required to make 3.2 million pentapeptides, as computed below. Conventional synthesis: 3,200,000 × 5 = 16,000,000 coupling steps, ca. 40,000 years. S&P synthesis: 20 × 5 = 100 coupling steps, ca. 5 days. It is possible to conduct the conventional synthesis in a rational way, as shown in the figure; in this case, the number of coupling steps is 20 + 400 + 8,000 + 160,000 + 3,200,000 = 3,368,420, ca. 9,200 years. 
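The scaling claims above are easy to verify; this short sketch reproduces the pentapeptide arithmetic (20 building blocks, five cycles) under the stated assumptions.

```python
# Split-and-pool scaling: k building blocks per cycle, n cycles.
# Library size grows as k**n, while the number of couplings grows as k*n.
# Reproduces the 3.2-million-pentapeptide comparison above.

def library_size(k, n):
    return k ** n

def split_pool_couplings(k, n):
    return k * n                  # one coupling per building block per cycle

def one_by_one_couplings(k, n):
    return k ** n * n             # each compound made individually

k, n = 20, 5                      # 20 amino acids, pentapeptides
print(library_size(k, n))             # 3_200_000 distinct peptides
print(split_pool_couplings(k, n))     # 100 couplings in split-and-pool
print(one_by_one_couplings(k, n))     # 16_000_000 couplings one by one
```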
The theoretical upper limit of the number of components As often mentioned, the split and pool method makes it possible to synthesize an unlimited number of compounds. In fact, the theoretical maximum number of components depends on the quantity of the library expressed in moles. If, for example, a 1 mol library is synthesized, the maximum number of components is equal to the Avogadro number, 6.02214076 × 10^23; in such a library, each component would be represented by a single molecule. Components of the library form in equal molar quantities As far as the chemistry of the couplings makes it possible, the components of the libraries form in nearly equal molar quantities. This is made possible by dividing the mixtures into equal samples and by homogenizing the pooled samples by thorough mixing. The equal molar quantity of the components of the library is very important for their applicability: the presence of compounds in unequal quantities may lead to difficulties in evaluating the results of screening. The solid-phase method makes it possible to use the reagents in excess to drive the reactions close to completion, since the surplus can easily be removed by filtration. The possibility of using two mixtures in the synthesis In principle, the use of two mixtures in the S&P synthesis can lead to the same combinatorial library that forms in the usual S&P method. The differences in the reactivity of BBs, however, bring about large differences in the concentrations of components, and the differences are expected to increase after each step. Although a considerable amount of labor could be saved by the two-mixture approach when a high number of BBs are coupled in each position, it is advisable to stick to the normally used S&P procedure. The presence of all structural varieties in the library Formation of all structural variants that can be deduced from the BBs is an important feature of the S&P synthesis; only the S&P method can achieve this in a single process. On the other hand, the presence of all possible structural varieties in a library assures that the library is a combinatorial one and was prepared by combinatorial synthesis. Forming of one compound in each bead The consequence of using a single BB in couplings is the formation of a single compound in each bead; the formation of one-bead-one-compound (OBOC) libraries is an inherent property of the S&P synthesis. The reason is explained in the figure: the structure of the compound formed in a bead depends on the reaction vessels in which the bead happens to occur along the synthetic route. It is the chemist's decision whether to use the library in the tethered (OBOC) form or to cleave the compounds from the beads and use them in solution. Realization of the split and pool synthesis The split and pool synthesis was first applied to prepare peptide libraries on solid support. The synthesis was realized in a home-made manual device shown in the figure. The device has a tube with 20 holes to which reaction vessels can be attached; one end of the tube is linked to a waste container and a water pump. The left side of the figure shows the loading and filtering position, the right the coupling-shaking position. 
In the early years of combinatorial chemistry, an automatic machine was constructed and commercialized at Advanced ChemTech (Louisville, KY, USA). All operations of the S&P synthesis are carried out automatically under computer control. At present, the Titan 357 automatic synthesizer is available from aapptec (Louisville, KY, USA). Encoded split and pool synthesis Although in the S&P synthesis a single compound forms on each bead, its structure is not known. For this reason, encoding methods have been introduced to help determine the identity of the compound contained in a selected bead. Encoding molecules are coupled to the beads in parallel with the coupling of the BBs. The structure of the encoding molecule has to be easier to determine than that of the library member on the bead. Ohlmeyer et al. published a binary encoding method: they used mixtures of 18 tagging molecules that, after being cleaved from the beads, could be identified by electron-capture gas chromatography. Nikolajev et al. applied peptide sequences for encoding. Sarkar et al. described chiral oligomers of pentenoic amides (COPAs) that can be used to construct mass-encoded OBOC libraries. Kerr et al. introduced an innovative kind of encoding: an orthogonally protected removable bifunctional linker was attached to the beads; one end of the linker was used to attach the non-natural BBs of the library, while to the other end the encoding amino acid triplets were linked. One of the earliest and most successful encoding methods was introduced by Brenner and Lerner in 1992, who proposed attaching DNA oligomers to the beads to encode their content. The method was implemented by Nielsen, Brenner, and Janda using the bifunctional linker of Kerr et al. to attach the encoding DNA oligomers. This made it possible to cleave the compound from the bead with its encoding DNA oligomer still attached. Split and pool synthesis in solution Han et al. described a method that makes it possible to keep both the high efficiency of S&P synthesis and the advantages of a homogeneous medium in the chemical reactions. In their method, polyethylene glycol (PEG), MeO-CH2-CH2-O-(CH2-CH2-O)n-CH2-CH2-OH, was used as a soluble support in the S&P synthesis of peptide libraries. PEG proved suitable for this purpose since it is soluble in a wide variety of aqueous and organic solvents, and its solubility provides homogeneous reaction conditions even when the attached molecule itself is insoluble in the reaction medium. Separation of the polymer, and of the synthesized compounds bound to it, from the solution can be achieved by precipitation and filtration. The precipitation requires concentrating the reaction solutions and then diluting with diethyl ether or tert-butyl methyl ether. Under carefully controlled precipitation conditions, the polymer with the bound products precipitates in crystalline form and the unwanted reagents remain in solution. In the solid-phase S&P synthesis, a single compound forms on each bead, and as a consequence the number of compounds cannot exceed the number of beads. So the theoretical maximum number of compounds depends on the quantity of the solid support and the size of the beads. On 1 g of polystyrene resin, for example, a maximum of 2 million compounds can be synthesized if the diameter of the resin beads is 90 μm, and 2 billion if the bead size is 10 μm. In practice, the solid support is used in excess (often tenfold) to ensure that all expected components are formed. 
The above limitation is completely removed if the solid support is omitted and the synthesis is carried out in solution. In this case, there is no upper limit on the number of components of the library; both the number of components and the quantity of the library can be freely decided, based only on practical considerations. An important modification was introduced in the synthesis of DNA-encoded combinatorial libraries by Harbury and Halpin: the solid support in their case is replaced by the encoding DNA oligomers. This makes it possible to synthesize libraries containing even trillions of components and to screen them using affinity-binding methods. A different way of carrying out solution-phase S&P synthesis is to apply scavenger resins to remove the by-products. Scavenger resins are polymers bearing functional groups that react with and bind the excess of reagents, which can then be filtered out of the reaction mixture. Two examples: a resin containing primary amino groups can remove the excess of acyl chlorides from reaction mixtures, while an acyl chloride resin removes amines. A fluorous technology was described by Curran. The fluorous synthesis employs functionalized perfluoroalkyl (Rf) groups, like the 4,4,5,5,6,6,7,7,8,8,9,9,9-tridecafluorononyl {CF3(CF2)4CF2CH2CH2-} group, attached to substrates or reagents. The Rf groups make it possible to remove either the product or the reagents from the reaction mixture. At the end of the procedure, the Rf groups attached to the substrate can be removed from the product. By attaching Rf groups to the substrate, the synthesis can be carried out in solution and the product can be separated from the reaction mixture by liquid extraction using a fluorous solvent like perfluoromethylcyclohexane or perfluorohexane. It can be seen that the function of the Rf groups in the synthesis is similar to that of the solid or soluble support. If the Rf tag is attached to a reagent, its excess can be removed from the reaction mixture by extraction. Polymer-supported reagents are also used in S&P synthesis. Special features in the synthesis of DNA-encoded combinatorial libraries Self-assembling DNA-encoded libraries One of the best examples of the special features brought about by DNA encoding is the synthesis of the self-assembling library introduced by Melkko et al. First, two sublibraries are synthesized. In one of the sublibraries, BBs are attached to the 5' end of an oligonucleotide containing a dimerization domain followed by the codes of the BBs; in the other sublibrary, the BBs are attached to the 3' end of oligonucleotides that also contain a dimerization domain and the codes of another set of BBs. The two sublibraries are mixed in equimolar quantities, heated to 70 °C, then allowed to cool to room temperature, heterodimerize, and form the self-assembling combinatorial library. One member of such a two-pharmacophore library is shown in the figure. In affinity screening, the two BBs of the pharmacophore may interact with two adjacent binding sites of the target protein. DNA-templated libraries In the synthesis of DNA-templated combinatorial libraries, the ability of the DNA double helix to direct region-specific chemical reactions is harnessed, as shown by Gartner et al. The DNA-linked reagents are kept in close proximity; this is equivalent to a virtual increase of local concentration that is nearly constant within a distance of 30 nucleotides. The proximity effect helps the reactions to proceed. Two libraries are synthesized. 
A template library contains at one end one of the BBs and its code, followed by two annealing regions for the codes of the BBs of the two reagent libraries. Each of the two reagent libraries contains a coding oligonucleotide linked by cleavable bonds to the reagent (BB), which is capable of forming a bond with the already linked BB, taking advantage of the proximity effect. The synthesis is realized in two steps, as shown in the figure; each step has three operations: mixing, annealing, and coupling-cleaving. Synthesis in the yoctoreactor The yoctoreactor method introduced by Hansen et al. is based on the geometry and stability of a three-dimensional DNA structure that creates a chemical reactor of yoctoliter (10^-24 L) size, in which the proximity of BBs brings about reactions among them. The DNA oligomers comprise the DNA barcodes for the attached BBs and form the structural elements of the reactor. One kind of yoctoreactor format is shown in the figure. Sequence-encoded routing Harbury and Halpin developed DNA template libraries that, like genes, direct the synthesis of DNA-encoded organic libraries. The members of the template combinatorial library contain the codes of all BBs and their order of coupling. The figure shows one member of a simple ssDNA template library (A) containing the codes of three BBs (2, 4, 6) that are planned to be successively attached; the coding regions are separated by non-coding regions (1, 3, 5, 7) that are the same in all members. The sequence-directed procedure uses a series of columns of resin beads, each coated with the anticodon of one of the BBs (B). When the template library is transferred to an anticodon column, the proper template members are captured by hybridization and then coupled with the appropriate BB. After all the anticodon columns of a coupling position (CP) have been processed, the libraries are eluted from the beads of the anticodon columns, mixed, and the above operations are repeated with the series of anticodon columns of the next CP. In the figure, C shows one member of the template library captured by the "yellow" second-CP anticodon library; the template contains the "red" BB already coupled in CP1 and the "yellow" BB attached after its capture. The final library contains all of the synthesized organic compounds attached to their encoding DNA oligomers. Stepwise coupling and coding One of the most forward-looking methods commonly used for DNA encoding is applied in the synthesis of single-pharmacophore libraries. As the figure shows, the library is built by repeating the usual cycles of S&P synthesis; the second operation of the cycle is modified so that, in addition to the coupling of the BB, the encoding DNA oligomer is elongated by attaching the code of the BB by ligation. Synthesis using macroscopic units of solid support Modifications have been developed that enable the split and pool synthesis to produce known compounds in larger quantities than the content of a single bead of solid support, while retaining the high efficiency of the original method. As published by Moran et al. and Nicolaou et al., the resin normally used in solid-phase synthesis was enclosed in permeable capsules, each including a radiofrequency label recording the BBs in the order of their coupling. Both a manual and an automatic machine were constructed to sort the capsules into the appropriate reaction vessels. A different kind of labeled macroscopic solid-support unit was introduced by Xiao et al.: the supports are 1 × 1 cm polystyrene-grafted square plates. 
The medium carrying the code is a 3 × 3 mm ceramic plate in the center of the synthesis support. The code is etched into the ceramic support by a CO2 laser in the form of a two-dimensional bar code that can be read by a special scanner. String synthesis String synthesis, introduced by Furka et al., uses stringed macroscopic solid support units (crowns), and the units are identified by the positions they occupy on the string. One string is assigned to every building block in the synthesis. In the coupling stage, the string is placed in the proper reaction vessel. The content of the strings coming out of a synthetic step must be redistributed onto the strings of the next step. The units are not pooled. The redistribution demonstrated in the figure follows the combinatorial distribution rule: all products formed in a synthetic step are equally divided among all reaction vessels of the next synthetic step. Different distribution formats can be followed, allowing the content of each crown to be identified from its position on the new string and the destination reaction vessel of the string. The stringed crowns and the trays used in manual sorting are shown in the figure. The destination tray is moved step by step in the direction of the arrow. The crowns are transferred in groups from the slots of the source tray into the opposite slots of the destination tray. The transfers are directed by computer, and the products are identified by the positions the crowns occupy on the final strings. A fast automatic sorter machine has also been described. The sorter is outlined in the figure. It has two sets of aligned tubes. The lower ones move step by step in the direction shown by the arrow, and the coin-like units are dropped from the upper source tubes into the lower destination ones. The tubes may serve as reaction vessels too. Software has also been developed that can direct the sorting when, instead of a full combinatorial library, only a selected set of its components is prepared. References Combinatorial chemistry Chemical synthesis
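The combinatorial distribution rule described above lends itself to a short simulation. The Python sketch below is purely illustrative (the function name, the round-robin dealing scheme, and the 2 × 3 × 2 example are assumptions for demonstration, not taken from the cited work); it tracks the coupling history of each crown and deals the crowns of every string evenly into the strings of the next step, so that each member of the full library is produced exactly once and can be identified from its final position.

```python
def string_synthesis(bb_per_step):
    """Simulate string synthesis under the combinatorial distribution rule:
    all products formed in a synthetic step are divided equally among all
    reaction vessels (strings) of the next step, without pooling."""
    size = 1
    for bbs in bb_per_step:
        size *= len(bbs)

    # Distribute blank crowns equally among the strings of the first step.
    n_first = len(bb_per_step[0])
    strings = [[() for _ in range(size // n_first)] for _ in range(n_first)]

    for step, bbs in enumerate(bb_per_step):
        # Coupling: every crown on string j receives building block bbs[j].
        strings = [[crown + (bbs[j],) for crown in string]
                   for j, string in enumerate(strings)]
        if step + 1 < len(bb_per_step):
            # Redistribution: deal each string's crowns round-robin into the
            # strings of the next step (for the full-library size used here,
            # identical crowns land on different strings, so no BB sequence
            # is ever made twice).
            m = len(bb_per_step[step + 1])
            nxt = [[] for _ in range(m)]
            for string in strings:
                for k, crown in enumerate(string):
                    nxt[k % m].append(crown)
            strings = nxt
    return strings

# A 2 x 3 x 2 library: 12 crowns, each carrying a unique BB sequence.
final = string_synthesis([["A", "B"], ["x", "y", "z"], ["1", "2"]])
for i, string in enumerate(final):
    print(f"string {i}: {string}")
assert len({crown for s in final for crown in s}) == 12
```

Running it prints two final strings of six crowns each, together covering all twelve BB sequences, which mirrors how positional encoding replaces pooling in string synthesis.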
Split and pool synthesis
[ "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
4,121
[ "Combinatorial chemistry", "Materials science", "Combinatorics", "nan", "Chemical synthesis" ]
65,200,662
https://en.wikipedia.org/wiki/Pyroxasulfone
Pyroxasulfone is a pre-emergence herbicide that inhibits the production of very long chain fatty acids in plants. The structure of the existing herbicide thiobencarb served as the basis for development, but pyroxasulfone requires a lower dose (100–250 g/ha) and is more stable, resulting in longer efficacy. It has been registered for use in Japan, Australia, the USA, Canada, Saudi Arabia and South Africa and is used on crops including maize, soybean, wheat and cotton. In 2015 it was applied to over 6 million hectares of land. Pyroxasulfone is from a novel chemical class but has a similar mode of action to acetamide herbicides such as metolachlor, acetochlor and dimethenamid. It is mainly used to control annual grasses but is also effective against broadleaf weeds including lambsquarters (Chenopodium berlandieri), pigweed and waterhemp (both Amaranthus species) and black nightshade (Solanum nigrum). References Further reading Herbicides Pyrazoles Oxazoles Sulfones Trifluoromethyl compounds
Pyroxasulfone
[ "Chemistry", "Biology" ]
245
[ "Sulfones", "Biocides", "Functional groups", "Herbicides" ]
65,201,794
https://en.wikipedia.org/wiki/A-230
A-230 is an organophosphate nerve agent. It was developed in the Soviet Union under the FOLIANT program and is one of the group of compounds referred to as Novichok agents that were revealed by Vil Mirzayanov. A-230 is possibly the most potent nerve agent for which specific toxicity figures have been published, with a human lethal dose estimated to be less than 0.1 mg. However, it was felt to be less suitable for weaponisation than other agents such as A-232 and A-234, due to issues with the liquid agent exhibiting low volatility and solidifying at low temperatures, as well as poor stability in the presence of water. Legal status A-230 has been added to Schedule 1 of the Annex on Chemicals of the Chemical Weapons Convention as of June 2020, and it has been explicitly named as an example compound for schedule 1.A.13. For chemicals listed in Schedule 1, the most stringent declaration and verification measures are in place, combined with far-reaching limits and bans on production and use. Notably, Annex 1 does not explicitly relate this structure to the name A-230; it merely adds this particular structure to the prohibited compounds section. See also C01-A035 C01-A039 A-242 EA-3148 EA-3990 Methylfluorophosphonylcholine VR VP References Acetylcholinesterase inhibitors Amidines Phosphonamidofluoridates Novichok agents
A-230
[ "Chemistry" ]
307
[ "Bases (chemistry)", "Amidines", "Functional groups" ]
65,208,303
https://en.wikipedia.org/wiki/A-242
A-242 is an organophosphate nerve agent. It was developed in the Soviet Union under the FOLIANT program and is one of the group of compounds referred to as Novichok agents that were revealed by Vil Mirzayanov. Mirzayanov gives little specific information about A-242, stating that it is highly toxic, but he provides no figures comparing it to other related agents. It is reportedly a solid rather than a volatile liquid like most nerve agents, and in order to weaponise it successfully, it had to be milled into a fine powder that could be dispersed as a dust. Legal status A-242 has been added to Schedule 1 of the Annex on Chemicals of the Chemical Weapons Convention as of June 2020, and it has been explicitly named as an example compound for schedule 1.A.15. For chemicals listed in Schedule 1, the most stringent declaration and verification measures are in place, combined with far-reaching limits and bans on production and use. See also C01-A035 C01-A039 A-230 A-232 A-234 A-262 References Acetylcholinesterase inhibitors Guanidines Phosphonamidofluoridates Novichok agents Diethylamino compounds
A-242
[ "Chemistry" ]
259
[ "Guanidines", "Functional groups" ]
69,361,192
https://en.wikipedia.org/wiki/Diamagnetic%20inequality
In mathematics and physics, the diamagnetic inequality relates the Sobolev norm of the absolute value of a section of a line bundle to its covariant derivative. The diamagnetic inequality has an important physical interpretation, that a charged particle in a magnetic field has more energy in its ground state than it would in a vacuum. To precisely state the inequality, let $L^2(\mathbb{R}^n)$ denote the usual Hilbert space of square-integrable functions, and $H^1(\mathbb{R}^n)$ the Sobolev space of square-integrable functions with square-integrable derivatives. Let $\mathbf{A}$ and $f$ be measurable functions on $\mathbb{R}^n$ and suppose that $\mathbf{A} \in L^2_{\mathrm{loc}}(\mathbb{R}^n, \mathbb{R}^n)$ is real-valued, $f$ is complex-valued, and $f, (\nabla + i\mathbf{A})f \in L^2(\mathbb{R}^n)$. Then for almost every $x \in \mathbb{R}^n$, $$\big|\nabla |f|(x)\big| \le \big|(\nabla + i\mathbf{A})f(x)\big|.$$ In particular, $|f| \in H^1(\mathbb{R}^n)$. Proof For this proof we follow Elliott H. Lieb and Michael Loss. From the assumptions, $|f| \in L^1_{\mathrm{loc}}(\mathbb{R}^n)$ and, when viewed in the sense of distributions, $$\partial_j |f|(x) = \operatorname{Re}\left(\frac{\overline{f(x)}}{|f(x)|}\,\partial_j f(x)\right)$$ for almost every $x$ such that $f(x) \neq 0$ (and $\partial_j |f|(x) = 0$ if $f(x) = 0$). Moreover, since $\overline{f}f = |f|^2$ is real, $\operatorname{Re}\big(\frac{\overline{f}}{|f|}\, iA_j f\big) = 0$. So $$\big|\nabla |f|\big| = \left|\operatorname{Re}\left(\frac{\overline{f}}{|f|}\,(\nabla + i\mathbf{A})f\right)\right| \le \big|(\nabla + i\mathbf{A})f\big|$$ for almost every $x$ such that $f(x) \neq 0$. The case that $f(x) = 0$ is similar. Application to line bundles Let $p: L \to \mathbb{R}^n$ be a U(1) line bundle, and let $\mathbf{A}$ be a connection 1-form for $L$. In this situation, $\mathbf{A}$ is real-valued, and the covariant derivative $D$ satisfies $D_j f = (\partial_j + iA_j)f$ for every section $f$. Here $\partial_j$ are the components of the trivial connection for $L$. If $\mathbf{A} \in L^2_{\mathrm{loc}}(\mathbb{R}^n, \mathbb{R}^n)$ and $f, Df \in L^2(\mathbb{R}^n)$, then for almost every $x \in \mathbb{R}^n$, it follows from the diamagnetic inequality that $$\big|\nabla |f|(x)\big| \le \big|Df(x)\big|.$$ The above case $n = 4$ is of the most physical interest. We view $\mathbb{R}^4$ as Minkowski spacetime. Since the gauge group of electromagnetism is $U(1)$, connection 1-forms for $L$ are nothing more than the valid electromagnetic four-potentials on $\mathbb{R}^4$. If $F = d\mathbf{A}$ is the electromagnetic tensor, then the massless Maxwell–Klein–Gordon system for a section $\phi$ of $L$ is $$\partial^\mu F_{\mu\nu} = \operatorname{Im}(\overline{\phi}\, D_\nu \phi), \qquad D^\mu D_\mu \phi = 0,$$ and the energy of this physical system is $$\frac{\|F(t)\|_{L^2_x}^2}{2} + \frac{\|D\phi(t)\|_{L^2_x}^2}{2}.$$ The diamagnetic inequality guarantees that the energy is minimized in the absence of electromagnetism, thus $\mathbf{A} = 0$. See also Citations Inequalities Electromagnetism
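The pointwise inequality is also easy to probe numerically. The Python sketch below is an illustrative finite-difference check on a one-dimensional example, not part of the proof above; the particular choices of f and A are arbitrary test functions invented for the demonstration.

```python
import numpy as np

# Illustrative check of | d|f|/dx | <= | (d/dx + iA) f | on a 1-D grid.
x = np.linspace(-5.0, 5.0, 4001)
f = np.exp(-x**2) * np.exp(1j * np.sin(3.0 * x))  # arbitrary complex-valued f
A = np.cos(x)                                      # arbitrary real-valued potential

d_abs_f = np.gradient(np.abs(f), x)          # finite-difference derivative of |f|
covariant = np.gradient(f, x) + 1j * A * f   # finite-difference (d/dx + iA) f

violation = np.abs(d_abs_f) - np.abs(covariant)
print("max violation:", violation.max())  # tiny; attributable to discretization
assert np.all(violation <= 1e-3)
```

Any positive "violation" observed is on the order of the finite-difference truncation error, consistent with the inequality holding almost everywhere for the exact derivatives.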
Diamagnetic inequality
[ "Physics", "Mathematics" ]
381
[ "Electromagnetism", "Physical phenomena", "Mathematical theorems", "Binary relations", "Mathematical relations", "Fundamental interactions", "Inequalities (mathematics)", "Mathematical problems" ]
69,368,785
https://en.wikipedia.org/wiki/Ericson-Ericson%20Lorentz-Lorenz%20correction
Ericson-Ericson Lorentz-Lorenz correction, also called the Ericson-Ericson Lorentz-Lorenz effect (EELL), refers to an analogy at the interface between nuclear, atomic and particle physics, which in its simplest form corresponds to the well-known Lorentz-Lorenz equation (also referred to as the Clausius-Mossotti relation) for electromagnetic waves and light in a refractive medium. These relations link macroscopic quantities such as the refractive index to the dipole polarization of the individual atoms or molecules. When the latter are kept apart, the polarizing field is no longer the average electric field in the medium. Similarly for the pion, the lightest meson and the carrier of the long-range part of the nuclear force: its typical non-relativistic scattering from individual nucleons has a dominant dipole structure with a known average dipole polarizability strength ("the average scattering volume"). The physics becomes closely similar, although the nuclear density is about 15 orders of magnitude larger than that of ordinary matter and the nature of the dipole interaction is totally different. The correction was predicted in 1963 by Magda Ericson and was derived in 1966 together with Torleif Ericson. The effect has since been re-derived in various ways, but it is now understood as a general effect as long as the nucleons keep their individuality, independent of the detailed cause. This is the reason why, in the molecular case of the classical Lorentz-Lorenz effect, so many incompatible derivations give the same result. The EELL correction was first applied to the line shifts of hydrogen-like atoms, where the electron in the Coulomb field is replaced by a negatively charged pion. Its interaction with the central nucleus causes deviations in the line positions in such Bohr-like atoms. The effect has greatly influenced the understanding of the pion-nucleus many-body problem through the realization that the scattering of a pion from a nucleon in nuclear matter is determined by the local pion field at the nucleon. The effect has also found applications in other contexts of pionic phenomena in nuclei, such as the modification of the axial coupling constant in beta-decay, pion scattering, axial locality, pion condensation in nuclear matter, the structure of the nuclear pion field, etc. Further reading Spin excitations in nuclei, Fred Petrovich (ed.) et al., Springer (1984); see in particular G. E. Brown's contribution. Mesons in nuclei, Denys H. Wilkinson and Mannque Rho, North-Holland (1979). Ericson, M. (1978). "Pion field and weak interactions in nuclei". Progress in Particle and Nuclear Physics. 1: 67–104. References Nuclear physics Scattering theory
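For orientation, the classical relation underlying the analogy can be written out. The LaTeX snippet below states the Lorentz-Lorenz (Clausius-Mossotti) equation in Gaussian units, where n is the refractive index, N the number density of molecules, and α the molecular dipole polarizability; the specific EELL parameters for the pion-nucleus case are not reproduced here, since the article does not quote them.

```latex
\frac{n^{2}-1}{n^{2}+2} \;=\; \frac{4\pi}{3}\, N \alpha
```

Roughly speaking, the EELL correction transfers this structure to pion-nucleus scattering, with the pion-nucleon dipole scattering volume playing the role of α and the nucleon density the role of N.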
Ericson-Ericson Lorentz-Lorenz correction
[ "Physics", "Chemistry" ]
580
[ "Scattering", "Scattering theory", "Nuclear physics" ]
72,360,809
https://en.wikipedia.org/wiki/AI%20safety
AI safety is an interdisciplinary field focused on preventing accidents, misuse, or other harmful consequences arising from artificial intelligence (AI) systems. It encompasses machine ethics and AI alignment, which aim to ensure AI systems are moral and beneficial, as well as monitoring AI systems for risks and enhancing their reliability. The field is particularly concerned with existential risks posed by advanced AI models. Beyond technical research, AI safety involves developing norms and policies that promote safety. The field gained significant attention in 2023, with rapid progress in generative AI and public concerns voiced by researchers and CEOs about potential dangers. During the 2023 AI Safety Summit, the United States and the United Kingdom both established their own AI Safety Institute. However, researchers have expressed concern that AI safety measures are not keeping pace with the rapid development of AI capabilities. Motivations Scholars discuss current risks from critical systems failures, bias, and AI-enabled surveillance, as well as emerging risks like technological unemployment, digital manipulation, weaponization, AI-enabled cyberattacks and bioterrorism. They also discuss speculative risks from losing control of future artificial general intelligence (AGI) agents, or from AI enabling perpetually stable dictatorships. Existential safety Some have criticized concerns about AGI, such as Andrew Ng, who compared them in 2015 to "worrying about overpopulation on Mars when we have not even set foot on the planet yet". Stuart J. Russell, on the other hand, urges caution, arguing that "it is better to anticipate human ingenuity than to underestimate it". AI researchers have widely differing opinions about the severity and primary sources of risk posed by AI technology – though surveys suggest that experts take high-consequence risks seriously. In two surveys of AI researchers, the median respondent was optimistic about AI overall, but placed a 5% probability on an "extremely bad (e.g. human extinction)" outcome of advanced AI. In a 2022 survey of the natural language processing community, 37% agreed or weakly agreed that it is plausible that AI decisions could lead to a catastrophe that is "at least as bad as an all-out nuclear war". History Risks from AI began to be seriously discussed at the start of the computer age: In 1988 Blay Whitby published a book outlining the need for AI to be developed along ethical and socially responsible lines. From 2008 to 2009, the Association for the Advancement of Artificial Intelligence (AAAI) commissioned a study to explore and address potential long-term societal influences of AI research and development. The panel was generally skeptical of the radical views expressed by science-fiction authors but agreed that "additional research would be valuable on methods for understanding and verifying the range of behaviors of complex computational systems to minimize unexpected outcomes". In 2011, Roman Yampolskiy introduced the term "AI safety engineering" at the Philosophy and Theory of Artificial Intelligence conference, listing prior failures of AI systems and arguing that "the frequency and seriousness of such events will steadily increase as AIs become more capable". In 2014, philosopher Nick Bostrom published the book Superintelligence: Paths, Dangers, Strategies. 
He argues that the rise of AGI has the potential to create various societal issues, ranging from the displacement of the workforce by AI and the manipulation of political and military structures to the possibility of human extinction. His argument that future advanced systems may pose a threat to human existence prompted Elon Musk, Bill Gates, and Stephen Hawking to voice similar concerns. In 2015, dozens of artificial intelligence experts signed an open letter on artificial intelligence calling for research on the societal impacts of AI and outlining concrete directions. To date, the letter has been signed by over 8000 people including Yann LeCun, Shane Legg, Yoshua Bengio, and Stuart Russell. In the same year, a group of academics led by professor Stuart Russell founded the Center for Human-Compatible AI at the University of California Berkeley and the Future of Life Institute awarded $6.5 million in grants for research aimed at "ensuring artificial intelligence (AI) remains safe, ethical and beneficial". In 2016, the White House Office of Science and Technology Policy and Carnegie Mellon University announced The Public Workshop on Safety and Control for Artificial Intelligence, which was one of a sequence of four White House workshops aimed at investigating "the advantages and drawbacks" of AI. In the same year, Concrete Problems in AI Safety – one of the first and most influential technical AI Safety agendas – was published. In 2017, the Future of Life Institute sponsored the Asilomar Conference on Beneficial AI, where more than 100 thought leaders formulated principles for beneficial AI including "Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards". In 2018, the DeepMind Safety team outlined AI safety problems in specification, robustness, and assurance. The following year, researchers organized a workshop at ICLR that focused on these problem areas. In 2021, Unsolved Problems in ML Safety was published, outlining research directions in robustness, monitoring, alignment, and systemic safety. In 2023, Rishi Sunak said he wanted the United Kingdom to be the "geographical home of global AI safety regulation" and to host the first global summit on AI safety. The AI safety summit took place in November 2023, and focused on the risks of misuse and loss of control associated with frontier AI models. During the summit, the intention to create the International Scientific Report on the Safety of Advanced AI was announced. In 2024, the US and UK forged a new partnership on the science of AI safety. The MoU was signed on 1 April 2024 by US commerce secretary Gina Raimondo and UK technology secretary Michelle Donelan to jointly develop advanced AI model testing, following commitments announced at an AI Safety Summit in Bletchley Park in November. Research focus AI safety research areas include robustness, monitoring, and alignment. Robustness Adversarial robustness AI systems are often vulnerable to adversarial examples or "inputs to machine learning (ML) models that an attacker has intentionally designed to cause the model to make a mistake". For example, in 2013, Szegedy et al. discovered that adding specific imperceptible perturbations to an image could cause it to be misclassified with high confidence. This continues to be an issue with neural networks, though in recent work the perturbations are generally large enough to be perceptible. 
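To make the adversarial-example mechanism concrete, the following Python sketch applies the fast gradient sign method (FGSM) to a toy logistic "classifier". It is a minimal illustration under invented parameters (the dimensions, weights, and input are all made up, and the attacks described above target deep networks rather than linear models), but the mechanics are the same: each input coordinate is perturbed by at most eps in the direction that increases the loss.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 100
w = rng.normal(size=d)      # fixed model weights
x = np.full(d, 0.5)         # a "clean" input with pixel values in [0, 1]
b = 2.0 - w @ x             # bias chosen so the clean logit is exactly 2.0

def predict_proba(x):
    """P(class 1) under the logistic model sigma(w.x + b)."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

y = 1.0      # true label of x
eps = 0.05   # max-norm perturbation budget per pixel

# For cross-entropy loss L and logistic output p, dL/dx = (p - y) * w,
# so the FGSM perturbation is eps * sign((p - y) * w).
grad = (predict_proba(x) - y) * w
x_adv = np.clip(x + eps * np.sign(grad), 0.0, 1.0)

print(f"clean prediction       p = {predict_proba(x):.3f}")    # about 0.88
print(f"adversarial prediction p = {predict_proba(x_adv):.3f}")  # drops near 0.12
print(f"max pixel change         = {np.max(np.abs(x_adv - x)):.3f}")  # <= eps
```

A bounded change of at most 0.05 per coordinate flips a fairly confident prediction, which is a simplified version of the phenomenon Szegedy et al. reported for image classifiers.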
[Figure: correctly predicted samples (left), the applied perturbations magnified 10× (center), and the resulting adversarial examples (right), all of which are predicted to be an ostrich.] Adversarial robustness is often associated with security. Researchers demonstrated that an audio signal could be imperceptibly modified so that speech-to-text systems transcribe it to any message the attacker chooses. Network intrusion and malware detection systems also must be adversarially robust since attackers may design their attacks to fool detectors. Models that represent objectives (reward models) must also be adversarially robust. For example, a reward model might estimate how helpful a text response is and a language model might be trained to maximize this score. Researchers have shown that if a language model is trained for long enough, it will leverage the vulnerabilities of the reward model to achieve a better score and perform worse on the intended task. This issue can be addressed by improving the adversarial robustness of the reward model. More generally, any AI system used to evaluate another AI system must be adversarially robust. This could include monitoring tools, since they could also potentially be tampered with to produce a higher reward. Monitoring Estimating uncertainty It is often important for human operators to gauge how much they should trust an AI system, especially in high-stakes settings such as medical diagnosis. ML models generally express confidence by outputting probabilities; however, they are often overconfident, especially in situations that differ from those that they were trained to handle. Calibration research aims to make model probabilities correspond as closely as possible to the true proportion that the model is correct. Similarly, anomaly detection or out-of-distribution (OOD) detection aims to identify when an AI system is in an unusual situation. For example, if a sensor on an autonomous vehicle is malfunctioning, or it encounters challenging terrain, it should alert the driver to take control or pull over. Anomaly detection has been implemented by simply training a classifier to distinguish anomalous and non-anomalous inputs, though a range of additional techniques are in use. Detecting malicious use Scholars and government agencies have expressed concerns that AI systems could be used to help malicious actors to build weapons, manipulate public opinion, or automate cyber attacks. These worries are a practical concern for companies like OpenAI which host powerful AI tools online. In order to prevent misuse, OpenAI has built detection systems that flag or restrict users based on their activity. Transparency Neural networks have often been described as black boxes, meaning that it is difficult to understand why they make the decisions they do as a result of the massive number of computations they perform. This makes it challenging to anticipate failures. In 2018, a self-driving car killed a pedestrian after failing to identify them. Due to the black box nature of the AI software, the reason for the failure remains unclear. It also raises debates in healthcare over whether statistically efficient but opaque models should be used. One critical benefit of transparency is explainability. It is sometimes a legal requirement to provide an explanation for why a decision was made in order to ensure fairness, for example for automatically filtering job applications or credit score assignment. 
Another benefit is to reveal the cause of failures. At the beginning of the COVID-19 pandemic in 2020, researchers used transparency tools to show that medical image classifiers were 'paying attention' to irrelevant hospital labels. Transparency techniques can also be used to correct errors. For example, in the paper "Locating and Editing Factual Associations in GPT", the authors were able to identify model parameters that influenced how it answered questions about the location of the Eiffel Tower. They were then able to 'edit' this knowledge to make the model respond to questions as if it believed the tower was in Rome instead of France. Though in this case the authors induced an error, these methods could potentially be used to efficiently fix errors. Model editing techniques also exist in computer vision. Finally, some have argued that the opaqueness of AI systems is a significant source of risk and better understanding of how they function could prevent high-consequence failures in the future. "Inner" interpretability research aims to make ML models less opaque. One goal of this research is to identify what the internal neuron activations represent. For example, researchers identified a neuron in the CLIP artificial intelligence system that responds to images of people in Spider-Man costumes, sketches of Spider-Man, and the word 'spider'. It also involves explaining connections between these neurons or 'circuits'. For example, researchers have identified pattern-matching mechanisms in transformer attention that may play a role in how language models learn from their context. "Inner interpretability" has been compared to neuroscience. In both cases, the goal is to understand what is going on in an intricate system, though ML researchers have the benefit of being able to take perfect measurements and perform arbitrary ablations. Detecting trojans Machine learning models can potentially contain "trojans" or "backdoors": vulnerabilities that malicious actors build into an AI system. For example, a trojaned facial recognition system could grant access when a specific piece of jewelry is in view; or a trojaned autonomous vehicle may function normally until a specific trigger is visible. Note that an adversary must have access to the system's training data in order to plant a trojan. This might not be difficult to do with some large models like CLIP or GPT-3 as they are trained on publicly available internet data. Researchers were able to plant a trojan in an image classifier by changing just 300 out of 3 million of the training images. In addition to posing a security risk, researchers have argued that trojans provide a concrete setting for testing and developing better monitoring tools. A 2024 research paper by Anthropic showed that large language models could be trained with persistent backdoors. These "sleeper agent" models could be programmed to generate malicious outputs (such as vulnerable code) after a specific date, while behaving normally beforehand. Standard AI safety measures, such as supervised fine-tuning, reinforcement learning and adversarial training, failed to remove these backdoors. Alignment Systemic safety and sociotechnical factors It is common for AI risks (and technological risks more generally) to be categorized as misuse or accidents. Some scholars have suggested that this framework falls short. For example, the Cuban Missile Crisis was not clearly an accident or a misuse of technology. 
Policy analysts Zwetsloot and Dafoe wrote, "The misuse and accident perspectives tend to focus only on the last step in a causal chain leading up to a harm: that is, the person who misused the technology, or the system that behaved in unintended ways… Often, though, the relevant causal chain is much longer." Risks often arise from 'structural' or 'systemic' factors such as competitive pressures, diffusion of harms, fast-paced development, high levels of uncertainty, and inadequate safety culture. In the broader context of safety engineering, structural factors like 'organizational safety culture' play a central role in the popular STAMP risk analysis framework. Inspired by the structural perspective, some researchers have emphasized the importance of using machine learning to improve sociotechnical safety factors, for example, using ML for cyber defense, improving institutional decision-making, and facilitating cooperation. Others have emphasized the importance of involving both AI practitioners and domain experts in the design process to address structural vulnerabilities. Cyber defense Some scholars are concerned that AI will exacerbate the already imbalanced game between cyber attackers and cyber defenders. This would increase 'first strike' incentives and could lead to more aggressive and destabilizing attacks. In order to mitigate this risk, some have advocated for an increased emphasis on cyber defense. In addition, software security is essential for preventing powerful AI models from being stolen and misused. Recent studies have shown that AI can significantly enhance both technical and managerial cybersecurity tasks by automating routine tasks and improving overall efficiency. Improving institutional decision-making The advancement of AI in economic and military domains could precipitate unprecedented political challenges. Some scholars have compared AI race dynamics to the cold war, where the careful judgment of a small number of decision-makers often spelled the difference between stability and catastrophe. AI researchers have argued that AI technologies could also be used to assist decision-making. For example, researchers are beginning to develop AI forecasting and advisory systems. Facilitating cooperation Many of the largest global threats (nuclear war, climate change, etc.) have been framed as cooperation challenges. As in the well-known prisoner's dilemma scenario, some dynamics may lead to poor results for all players, even when they are optimally acting in their self-interest. For example, no single actor has strong incentives to address climate change even though the consequences may be significant if no one intervenes. A salient AI cooperation challenge is avoiding a 'race to the bottom'. In this scenario, countries or companies race to build more capable AI systems and neglect safety, leading to a catastrophic accident that harms everyone involved. Concerns about scenarios like these have inspired both political and technical efforts to facilitate cooperation between humans, and potentially also between AI systems. Most AI research focuses on designing individual agents to serve isolated functions (often in 'single-player' games). Scholars have suggested that as AI systems become more autonomous, it may become essential to study and shape the way they interact. Challenges of large language models In recent years, the development of large language models (LLMs) has raised unique concerns within the field of AI safety. Researchers Bender and Gebru et al. 
have highlighted the environmental and financial costs associated with training these models, emphasizing that the energy consumption and carbon footprint of training procedures like those for Transformer models can be substantial. Moreover, these models often rely on massive, uncurated Internet-based datasets, which can encode hegemonic and biased viewpoints, further marginalizing underrepresented groups. The large-scale training data, while vast, does not guarantee diversity and often reflects the worldviews of privileged demographics, leading to models that perpetuate existing biases and stereotypes. This situation is exacerbated by the tendency of these models to produce seemingly coherent and fluent text, which can mislead users into attributing meaning and intent where none exists, a phenomenon described as 'stochastic parrots'. These models, therefore, pose risks of amplifying societal biases, spreading misinformation, and being used for malicious purposes, such as generating extremist propaganda or deepfakes. To address these challenges, researchers advocate for more careful planning in dataset creation and system development, emphasizing the need for research projects that contribute positively towards an equitable technological ecosystem. In governance AI governance is broadly concerned with creating norms, standards, and regulations to guide the use and development of AI systems. Research AI safety governance research ranges from foundational investigations into the potential impacts of AI to specific applications. On the foundational side, researchers have argued that AI could transform many aspects of society due to its broad applicability, comparing it to electricity and the steam engine. Some work has focused on anticipating specific risks that may arise from these impacts – for example, risks from mass unemployment, weaponization, disinformation, surveillance, and the concentration of power. Other work explores underlying risk factors such as the difficulty of monitoring the rapidly evolving AI industry, the availability of AI models, and 'race to the bottom' dynamics. Allan Dafoe, the head of long-term governance and strategy at DeepMind, has emphasized the dangers of racing and the potential need for cooperation: "it may be close to a necessary and sufficient condition for AI safety and alignment that there be a high degree of caution prior to deploying advanced powerful systems; however, if actors are competing in a domain with large returns to first-movers or relative advantage, then they will be pressured to choose a sub-optimal level of caution". A research stream focuses on developing approaches, frameworks, and methods to assess AI accountability, guiding and promoting audits of AI-based systems. Efforts to enhance AI safety include frameworks designed to align AI outputs with ethical guidelines and reduce risks like misuse and data leakage. Tools such as Nvidia's NeMo Guardrails, Llama Guard, and Preamble's customizable guardrails mitigate vulnerabilities like prompt injection and ensure outputs adhere to predefined principles. These frameworks are often integrated into AI systems to improve safety and reliability. Philosophical perspectives The field of AI safety is deeply intertwined with philosophical considerations, particularly in the realm of ethics. Deontological ethics, which emphasizes adherence to moral rules, has been proposed as a framework for aligning AI systems with human values. 
By embedding deontological principles, AI systems can be guided to avoid actions that cause harm, ensuring their operations remain within ethical boundaries. Scaling local measures to global solutions In addressing the AI safety problem, it is important to stress the distinction between local and global solutions. Local solutions focus on individual AI systems, ensuring they are safe and beneficial, while global solutions seek to implement safety measures for all AI systems across various jurisdictions. Some researchers argue for the necessity of scaling local safety measures to a global level, proposing a classification for these global solutions. This approach underscores the importance of collaborative efforts in the international governance of AI safety, emphasizing that no single entity can effectively manage the risks associated with AI technologies. This perspective aligns with ongoing efforts in international policy-making and regulatory frameworks, which aim to address the complex challenges posed by advanced AI systems worldwide. Government action Some experts have argued that it is too early to regulate AI, expressing concerns that regulations will hamper innovation and that it would be foolish to "rush to regulate in ignorance". Others, such as business magnate Elon Musk, call for pre-emptive action to mitigate catastrophic risks. Outside of formal legislation, government agencies have put forward ethical and safety recommendations. In March 2021, the US National Security Commission on Artificial Intelligence reported that advances in AI may make it increasingly important to "assure that systems are aligned with goals and values, including safety, robustness and trustworthiness". Subsequently, the National Institute of Standards and Technology drafted a framework for managing AI risk, which advises that when "catastrophic risks are present – development and deployment should cease in a safe manner until risks can be sufficiently managed". In September 2021, the People's Republic of China published ethical guidelines for the use of AI in China, emphasizing that AI decisions should remain under human control and calling for accountability mechanisms. In the same month, the United Kingdom published its 10-year National AI Strategy, which states the British government "takes the long-term risk of non-aligned Artificial General Intelligence, and the unforeseeable changes that it would mean for ... the world, seriously". The strategy describes actions to assess long-term AI risks, including catastrophic risks. The British government held the first major global summit on AI safety. This took place on 1 and 2 November 2023 and was described as "an opportunity for policymakers and world leaders to consider the immediate and future risks of AI and how these risks can be mitigated via a globally coordinated approach". Government organizations, particularly in the United States, have also encouraged the development of technical AI safety research. The Intelligence Advanced Research Projects Activity initiated the TrojAI project to identify and protect against Trojan attacks on AI systems. DARPA engages in research on explainable artificial intelligence and improving robustness against adversarial attacks. The National Science Foundation supports the Center for Trustworthy Machine Learning and is providing millions of dollars in funding for empirical AI safety research. 
In 2024, the United Nations General Assembly adopted the first global resolution on the promotion of "safe, secure and trustworthy" AI systems, which emphasized the respect, protection and promotion of human rights in the design, development, deployment and use of AI. In May 2024, the Department for Science, Innovation and Technology (DSIT) announced £8.5 million in funding for AI safety research under the Systemic AI Safety Fast Grants Programme, led by Christopher Summerfield and Shahar Avin at the AI Safety Institute, in partnership with UK Research and Innovation. Technology Secretary Michelle Donelan announced the plan at the AI Seoul Summit, stating the goal was to make AI safe across society and that promising proposals could receive further funding. The UK also signed an agreement with 10 other countries and the EU to form an international network of AI safety institutes to promote collaboration and share information and resources. Additionally, the UK AI Safety Institute planned to open an office in San Francisco. Corporate self-regulation AI labs and companies generally abide by safety practices and norms that fall outside of formal legislation. One aim of governance researchers is to shape these norms. Examples of safety recommendations found in the literature include performing third-party auditing, offering bounties for finding failures, sharing AI incidents (an AI incident database was created for this purpose), following guidelines to determine whether to publish research or models, and improving information and cyber security in AI labs. Companies have also made commitments. Cohere, OpenAI, and AI21 proposed and agreed on "best practices for deploying language models", focusing on mitigating misuse. To avoid contributing to racing dynamics, OpenAI has also stated in its charter that "if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project". Also, industry leaders such as DeepMind CEO Demis Hassabis and Facebook AI director Yann LeCun have signed open letters such as the Asilomar Principles and the Autonomous Weapons Open Letter. See also AI alignment Artificial intelligence and elections Artificial intelligence detection software References External links Unsolved Problems in ML Safety On the Opportunities and Risks of Foundation Models An Overview of Catastrophic AI Risks AI Accidents: An Emerging Threat Engineering a Safer World Artificial intelligence Existential risk from artificial general intelligence Cybernetics
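As a concrete companion to the calibration goal described under Monitoring above, a standard summary statistic is the expected calibration error (ECE): predictions are binned by confidence, and the gap between each bin's average confidence and its actual accuracy is averaged, weighted by bin occupancy. The Python sketch below is a simplified, hypothetical implementation (binary labels, equal-width bins; details vary across papers), with an invented example of an overconfident model.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Simplified ECE for binary classification.

    probs: predicted P(class 1); labels: 0/1 ground truth.
    Confidence is max(p, 1 - p); ECE is the bin-occupancy-weighted
    average of |bin accuracy - bin mean confidence|."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels)
    preds = (probs >= 0.5).astype(int)
    conf = np.where(preds == 1, probs, 1.0 - probs)
    edges = np.linspace(0.5, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Last bin is closed on the right so conf == 1.0 is counted.
        mask = (conf >= lo) & (conf <= hi) if hi == 1.0 else (conf >= lo) & (conf < hi)
        if mask.any():
            acc = np.mean(preds[mask] == labels[mask])
            ece += mask.mean() * abs(acc - conf[mask].mean())
    return ece

# An overconfident model: 99% confidence, but only 80% of its answers are right.
rng = np.random.default_rng(1)
labels = (rng.random(10_000) < 0.8).astype(int)
probs = np.full(10_000, 0.99)
print(f"ECE = {expected_calibration_error(probs, labels):.3f}")  # about 0.19
```

A well-calibrated model would drive this number toward zero, which is exactly the correspondence between stated probability and empirical accuracy that calibration research pursues.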
AI safety
[ "Technology", "Engineering" ]
5,050
[ "Safety engineering", "AI safety", "Existential risk from artificial general intelligence" ]
72,371,977
https://en.wikipedia.org/wiki/Bismuthinidene
Bismuthinidenes are a class of organobismuth compounds, analogous to carbenes. These compounds have the general form R-Bi, with two lone pairs of electrons on the central bismuth(I) atom. Due to the unusually low valency and oxidation state of +1, most bismuthinidenes are reactive and unstable, though in recent decades, both transition metals and polydentate chelating Lewis base ligands have been employed to stabilize the low-valent bismuth(I) center through steric protection and π donation, either in solution or in crystal structures. Lewis base-stabilized bismuthinidenes adopt a singlet ground state with an inert lone pair of electrons in the 6s orbital. A second lone pair in a 6p orbital and a single empty 6p orbital make Lewis base-stabilized bismuthinidenes ambiphilic. Recently, a triplet bismuthinidene was reported by Cornella et al. Synthesis Transition metal-stabilized bismuthinidene The earliest examples of bismuthinidene complexes used transition metal chemistry to stabilize the Bi(I) center. These methods generally leveraged the ability of simple bismuth(I) halides or methylbismuth to ligate to tungsten, manganese, and chromium carbonyl complexes. These complexes were occasionally found to oligomerize, forming Bi-Bi single or double bonds to give bismuthane or bismuthene moieties. One of the first examples of a monomeric bismuthinidene was discovered by Balázs et al., who used R = 2-(dimethylaminomethyl)phenyl to chelate a Bi(I) center through a combination of strong C-Bi and weak N-Bi interactions. Although the molecule quickly formed a cyclic oligomer, upon reaction with two equivalents of tungsten pentacarbonyl, monomeric crystalline RBi[W(CO)5]2 was isolated. N,C,N-coordinated bismuthinidene Monomeric bismuthinidenes were not stabilized without the use of transition metal complexes until 2010, when Libor Dostál's research group reported the isolation of a bismuth(I) center stabilized only by the N,C,N-pincer ligand L = 2,6-bis[N-(2’,6’-dimethylphenyl)ketimino]phenyl. This complex was first synthesized by reacting the precursor molecule LBi(III)Cl2 with two equivalents of the reducing agent K[B(iBu)3H] to yield isolable crystals of stable [C6H3-2,6-(C(Me)=N-2′,6′-Me2C6H3)2]Bi. A slightly simpler N,C,N-coordinating ligand was soon used to create the bismuthinidene ArBi (Ar = C6H3-2,6-(CH=NtBu)2), which became widely used in later bismuthinidene studies and is occasionally referred to as "Dostál's bismuthinidene". In fact, many analogs of this compound have been synthesized, often replacing the imine tert-butyl groups with other bulky organic groups or replacing the two imine arms with disubstituted amine arms. N,C-coordinated bismuthinidene Dostál's group later synthesized a monomeric bismuthinidene coordinated only by a bidentate N,C-coordinating ligand. When the bismuth dichloride [C6H2-2-(CH=N-2’,6’-iPr2C6H3)-4,6-(tBu)2]BiCl2 is reduced by two equivalents of K[B(iBu)3H], isolable dark violet crystals of [C6H2-2-(CH=N-2’,6’-iPr2C6H3)-4,6-(tBu)2]Bi appear. In contrast to the earlier transition metal-stabilized [2-(Me2NCH2)C6H4]Bi[W(CO)5]2, the tert-butyl group ortho to the bismuth atom in this N,C-coordinated bismuthinidene sterically blocks the partially empty p-type orbital on the bismuth atom, kinetically stabilizing it without the use of transition metals. 
In addition, calculated nucleus-independent chemical shift indices (NICS) and anisotropy of current-induced density (ACID) analysis show that the BiC3N ring of the molecule is stabilized by a certain degree of aromatic character due to the delocalization of six π electrons, despite the nominally dative Bi-N bond, and may be classified as a benzazabismole. Unlike N,C,N-coordinated bismuthinidenes, this N,C-coordinated species requires the pendant nitrogen atom to be in an imine group, as replacement of the Dipp-substituted imine arm with a diethyl-substituted amine arm resulted in rapid dimerization to a dibismuthene species. Carbene-stabilized bismuthinidene In 2019, Wang et al. isolated a novel carbene-stabilized bismuthinidene with an exocyclic bismuth(I) center. Phenylbismuth dichloride, stabilized by a diethyl/diisopropylphenyl-substituted cyclic alkyl amino carbene (Et2CAAC), reacts with one equivalent of the beryllium(0) complex Be(Et2CAAC)2 in toluene to give stable, isolable red crystals of the carbene-stabilized bismuthinidene (Et2CAAC)Bi-Ph. Despite the exposed, exocyclic bismuth(I) center, the compound can exist without dimerization for up to two weeks in the solid state. Density functional theory (DFT) calculations showed that this is a result of partial double bond character between the carbene carbon and the bismuth(I) center, wherein the p-type lone pair of electrons on the bismuth atom interacts with the partially-filled p orbitals on the carbene carbon. However, the charge on the bismuth atom as determined by natural population analysis (NPA) was much lower than in bismuth(III)-carbon bonds, supporting the compound's classification as a bismuthinidene. Structural properties As with other heavier carbene analogs, the structural and electronic properties of bismuthinidenes are in large part driven by the inert-pair effect, wherein the large energy gap between the bismuth atom's 6s and 6p orbitals disfavors the formation of sp hybrid orbitals. In stark contrast to their lighter congeners phosphinidenes, whose smaller phosphorus 3s-3p energy gap favors a triplet ground state, bismuthinidenes generally have a singlet ground state on account of the larger bismuth 6s-6p energy gap. The structural and electronic properties of bismuthinidenes in general are clearly exemplified by Dostál's N,C,N-stabilized bismuthinidene, which is the most commonly used bismuthinidene in the literature to date. Optimization of the tert-butyl imino version of this compound at the M06/cc-pVTZ level of theory reveals that, as in other nontrigonal pnictogen compounds, the central bismuth atom is coplanar with the N,C,N-coordinating ligand, adopting a T-shaped C2v coordination mode. The Wiberg bond index (WBI) between the bismuth and carbon atoms is 1.09, while the Bi-C bond distance is 2.2156 Å, slightly shorter than the sum of these atoms’ single-bonded covalent radii (Σcov(Bi,C) = 2.26 Å). On the other hand, the WBI of the Bi-N bonds is only 0.34, and the Bi-N bond distance is 2.500 Å, significantly longer than the sum of these atoms’ covalent radii (Σcov(Bi,N) = 2.22 Å). This agrees with calculations based on the quantum theory of atoms in molecules (QTAIM), which show that the electron density at the bond critical point between Bi and N is only 0.049, significantly lower than the electron density of 0.114 at the Bi-C bond critical point. 
Natural bond orbital (NBO) calculations show that these weaker dative bonds arise from weak σ donation of the nitrogen atoms’ lone pairs into an empty 6p orbital on the central bismuth atom. These two nN → p*Bi interactions stabilize the bismuthinidene by as much as 382 kJ/mol. Additionally, the amount of sigma donation from the pendant nitrogen atoms may be increased or decreased by replacing the tert-butyl groups on the pendant nitrogen atoms with aryl groups containing electron-donating groups or electron-withdrawing groups, respectively. One lone pair resides in the bismuth atom's 6s orbital and generally remains inert, while the other resides in the 6p orbital oriented perpendicular to the plane of the central ring, which also comprises the highest occupied molecular orbital (HOMO). Reactivity Theoretically, bismuthinidenes are both Lewis acidic and Lewis basic due to their empty and filled p-type orbitals, respectively. In practice, both N,C,N- and N,C-chelated bismuthinidenes lose much of their Lewis acidic character due to nN → p*Bi donor-acceptor interactions. However, the Lewis basicity of bismuthinidenes, particularly Dostál's N,C,N-coordinated bismuthinidene, allows them to cycle predictably between stable Bi(I) and Bi(III) oxidation states depending on the reaction conditions, allowing them to act as catalysts for a variety of different reactions, including transfer hydrogenations, deoxygenations, hydrodefluorinations, and dihydrogen reduction. In addition, bismuthinidenes react intrinsically with certain alkyl halides, dichalcogenides, and alkynes to form Bi(III) species. Catalytic reactivity The comparatively large covalent radius of bismuth results in weaker bonds between bismuth and other elements. This is especially true of N,C,N-coordinated bismuthinidenes after oxidative addition of additional ligands to form trivalent bismuth(III) reactive intermediates, given the nN → p*Bi donation present in these species. This comparative weakness of the bismuth(III)-ligand bonds facilitates ligand exchange and ultimately reductive elimination to release the product, reform the bismuthinidene, and create a controllable Bi(I)/Bi(III) redox cycle that gives bismuthinidenes their own unique catalytic reactivity. Transfer hydrogenation of azoarenes In 2019, Wang et al. leveraged the catalytic activity of Dostál's bismuthinidene to catalyze a transfer hydrogenation reaction between ammonia-borane and azoarenes to form the corresponding arylhydrazines with good functional group tolerance. The reaction's catalytic cycle proceeds through the oxidative addition of two hydrogen atoms from ammonia-borane to the bismuth(I) center, forming a highly unstable bismuthine intermediate. Subsequent reductive elimination transfers the two hydrogen atoms across the pi bond of an azoarene molecule, restoring the bismuthinidene and forming arylhydrazine. A similar bismuthinidene-catalyzed transfer hydrogenation reaction reduces nitroarenes to the corresponding aryl hydroxyl amines. Deoxygenation of nitrous oxide The Bi(I)/Bi(III) redox couple has also been applied to catalyze the deoxygenation of nitrous oxide. When Dostál's bismuthinidene is exposed to gaseous N2O, the reaction mixture changes color from green to yellow and evolves dinitrogen gas. The color change is due to the formation of an arylbismuth oxide dimer with two μ-oxo bridging moieties forming a Bi2O2 center, consistent with the propensity of bismuth(III) oxides to spontaneously dimerize or polymerize. 
However, a modified version of Dostál's bismuthinidene with ketimine arms and m-terphenyl substituents on the ketimine nitrogen atoms disfavors dimerization, instead forming a rare monomeric organobismuth(III) hydroxide upon reaction with N2O. In either case, reduction of the product with pinacolborane (HBpin) returns the bismuth(III) centers to the bismuth(I) state and yields a mixture of HOBpin and (pinB)2O, completing the catalytic cycle. Hydrodefluorination of polyfluoroarenes The electronic properties of the N,C,N-pincer ligand may be tuned with electron-withdrawing groups to promote the reactivity of bismuthinidenes toward aryl C-F bonds. One example is Phebox-Bi(I), an N,C,N-coordinated bismuthinidene stabilized by a 2,6-bis(oxazolinyl)phenyl (Phebox) pincer ligand. Unlike Dostál's bismuthinidene, which has only shown reactivity towards pentafluoropyridine, Phebox-Bi(I) has demonstrated a propensity to add to C-F bonds in a variety of perfluorinated arenes, including pentafluoropyridine, substituted pentafluorobenzenes, highly fluorinated phosphine compounds, octafluoronaphthalene, and decafluorobiphenyl. After oxidative addition to the aryl C-F bond, the resulting Phebox-Bi(III)(fluoroaryl) fluoride intermediate may undergo ligand metathesis with diethylsilane, replacing the Bi(III)-F bond with a Bi(III)-H bond. The unstable Bi(III) hydride then undergoes aryl C-H reductive elimination, regenerating Phebox-Bi(I) and the hydrodefluorinated product. The catalyst usually targets C-F bonds para to any electron-withdrawing substituents or heteroatoms on the fluorinated substrate. Reaction rates decrease significantly when the fluorinated substrate contains an electron-donating group. Reduction of dihydrogen from acetic acid Electrochemical studies indicate that Dostál's bismuthinidene may serve as an electrocatalyst for the formation of hydrogen gas. Cyclic voltammetry and DFT calculations indicate that, under reducing conditions, acetic acid binds to the central bismuth atom, transferring a hydrogen atom to bismuth and generating an N,C,N-coordinated bismuth(III) acetate hydride intermediate. Rearrangement of the acetate ligand from the equatorial to the axial position allows a second equivalent of acetic acid to bind to the bismuth center, eliminating H2 in the process. Two-electron reduction releases acetate ligands, regenerating the bismuthinidene catalyst. Intrinsic reactivity Intrinsic bismuthinidene transformation reactions generally involve a two-electron oxidation from the bismuth(I) to the bismuth(III) redox state, generating unsymmetrically substituted trivalent bismuth(III) compounds that would have been difficult to synthesize through organometallic reagents. Oxidative addition toward alkyl halides and diphenyldichalcogenides The low valency of bismuthinidenes renders them reactive toward bonds between carbon and polar groups (C-X bonds). Oxidative addition reactions between Dostál's bismuthinidene and primary C(sp3)-X bonds are particularly favorable for X = I or OTf, converting the bismuth(I) center to a bismuth(III) alkyl halide or alkyl triflate. This is true even for longer fluorinated alkyl halides up to six carbon atoms in length. 
Steric hindrance prevents the activation of tert-butyl iodide by Dostál's bismuthinidene, although a metastable analog with amine pincer arms rather than imine pincer arms (Ar'Bi, where Ar' = 2,6-C6H3(CH2NMe2)2) does participate in oxidative addition even with bulky tertiary C-X bonds, likely because the increased rotational mobility of the amine arms allows them to rotate away from the incoming bulky alkyl group in the transition state. This metastable Ar'Bi, as well as the N,C-stabilized bismuthinidene, is also reactive toward diphenyldichalcogenides. While the former yields stable crystals of Ar'Bi(III)(EPh)2 (E = S, Se, Te) upon reaction with PhEEPh, the latter yields [2-C6H4(CH=NC6H3(i-Pr)2-2,6)]2Bi(III)(EPh) with two N,C ligands and only one phenyl chalcogenolate. The N,C,N- and doubly N,C-coordinated bismuth(III) phenyl tellurolates are particularly unstable and decompose to form a mixture of products. Oxidative additions of N,C,N-coordinated bismuthinidene to diaryldisulfides are tolerant to a variety of aryl functional groups, including pyridyl, thiazolyl, thienyl, and aminophenyl groups. Hetero Diels-Alder reaction with alkynes In 2019, Kořenková et al. discovered that Dostál's bismuthinidene behaves as a masked heterocyclic diene in the presence of the electron-deficient alkyne dimethyl acetylenedicarboxylate (DMAD), performing a hetero Diels-Alder [4+2] cycloaddition reaction to yield CO2Me-disubstituted 1-bisma-1,4-dihydro-iminonaphthalene, effectively converting one of the pendant imine arms of the bismuthinidene into a nitrogen-bridged bismacyclohexadiene, with the bismuth(III) atom serving as a bridgehead and the second imine arm largely losing coordination with the bismuth(III) center. A similar cycloaddition reaction between Dostál's bismuthinidene and methyl propiolate yields an iminonaphthalene only as an intermediate, as the bridgehead bismuth atom is quickly attacked by a deprotonated second equivalent of methyl propiolate, breaking the Bi-N bond and yielding a boat-shaped bismacyclohexadiene moiety. Though technically unbridged, the axial amine group on the bismacyclohexadiene ring remains datively coordinated to the bismuth heteroatom. Transition metal chemistry Although N,C,N-coordinated bismuthinidenes are stable without transition metal coordination, they are reactive toward certain electron-deficient transition metal complexes and act as L-type donor ligands. Upon addition of Dostál's bismuthinidene (ArBi; Ar = C6H3-2,6-(CH=NtBu)2) to solutions of dicobalt octacarbonyl or dimanganese decacarbonyl in toluene, isolable ionic crystals of [(ArBi)2Co(CO)3]+[Co(CO)4]− or [(ArBi)2Mn(CO)4]+[Mn(CO)5]− begin to form. These complexes show significant covalent interaction between the bismuth(I) atoms and the cobalt or manganese centers, though these bismuth-metal bonds are dative in character. Because the bismuth-metal bonding consists almost entirely of σ-donation of the electrons in the p-type lone pair on bismuth into the dz2 orbital of the transition metal center, the ArBi units bond to the metal center in a side-on manner with C-Bi-Co or C-Bi-Mn bond angles close to 90°, such that the planes of the N,C,N ligands and those of the trigonal planar Co(CO)3 or square planar Mn(CO)4 moieties are all nearly parallel with each other. In this binding mode, the aromatic rings on the N,C,N ligands adopt a syn configuration stabilized by weak CH···CH and CH···O interactions. 
Dostál's bismuthinidene also binds to gold(I) centers stabilized by the N-heterocyclic carbene ligand IPr (1,3-bis(2,6-diisopropylphenyl)imidazolin-2-ylidene). Ligand exchange with [Au(IPr)(ACN)]+[BF4]− yields the complex [Au(IPr)(ArBi)]+[BF4]− as a white powder. Though this complex is stabilized by the dative donation of electrons from Bi(I) into the empty 6s orbital of Au(I), this Bi(I) → Au(I) interaction is nevertheless stronger than any previously discovered Bi(III) → Au(I) interactions in which the bismuth atom acts as a donor. Furthermore, it represents the first stable, isolable complex containing Bi(I) → Au(I) interaction, which is thought to be enabled by the N,C,N-pincer ligand backbone. Similar reactivities have also been observed for the metastable version of Dostál's bismuthinidene containing amine pincer arms rather than imine pincer arms (Ar’Bi; Ar’ = C6H3-2,6-(CH2NMe2)2). Addition of this metastable bismuthinidene to THF solutions of M(CO)5 (where M = Cr, Mo, W) yields isolable crystals of [Ar’BiM(CO)5]. As before, the Ar’-Bi unit binds to the M(CO)5 moiety in a side-on fashion, with σ-donation from Bi(I) into the metal dz2 orbital. The reaction between Ar’-Bi and diiron nonacarbonyl likewise yields mostly [Ar’BiFe(CO)5], along with a small amount of [Ar’Bi(Fe(CO)4)2] as a minor product. In fact, the reactivities of Dostál's bismuthinidene ArBi and its metastable analog Ar’-Bi toward transition metals are so similar that, upon reaction with Co2(CO)8, they form the analogous complexes [(ArBi)2Co(CO)3]+[Co(CO)4]− and [(Ar’Bi)2Co(CO)3]+[Co(CO)4]−, respectively, with similar binding modes, bond lengths, and bond angles. References Organobismuth compounds
Bismuthinidene
[ "Chemistry" ]
5,075
[ "Functional groups", "Octet-deficient functional groups" ]
72,373,056
https://en.wikipedia.org/wiki/Negative%20hyperconjugation%20in%20silicon
Negative hyperconjugation is a theorized phenomenon in organosilicon compounds, in which hyperconjugation stabilizes or destabilizes certain accumulations of positive charge. The phenomenon explains corresponding peculiarities in the stereochemistry and rate of hydrolysis. Second-row elements generally stabilize adjacent carbanions more effectively than their first-row congeners; conversely, they destabilize adjacent carbocations, and these effects reverse one atom over. For phosphorus and later elements, these phenomena are easily ascribed to the element's greater electronegativity than carbon. However, Si has lower electronegativity than carbon, polarizing the electron density onto carbon. The continued presence of second-row-type stability in certain organosilicon compounds is known as the silicon α and β effects, after the corresponding locants. These stabilities occur because of a partial overlap between the C–Si σ orbital and the σ* antibonding orbital at the β position, lowering the energy of the SN reaction transition state. This hyperconjugation requires an antiperiplanar relationship between the Si group and the leaving group to maximize orbital overlap. There is also another kind of silicon α effect, which concerns hydrolysis at the silicon atom itself. Experimental evidence In 1946, Leo Sommer and Frank C. Whitmore reported that radically chlorinating liquid ethyltrichlorosilane gave an isomeric mixture that exhibited unexpected reactivity in aqueous base. All chlorides pendant to silicon hydrolyzed, but the geminal chlorine on carbon failed to hydrolyze, and the vicinal chlorine eliminated to give ethene. The same behavior appeared with n-propyltrichlorosilane: the α and γ isomers resisted hydrolysis, but a hydroxyl group replaced the β chlorine. They concluded that silicon inhibits electrofugal activity at the α carbon. The silicon effect also manifests in certain compound properties. Trimethylsilylmethylamine (Me3SiCH2NH2) is a stronger base (conjugate pKa 10.96) than neopentylamine (conjugate pKa 10.21); trimethylsilylacetic acid (pKa 5.22) is a poorer acid than trimethylacetic acid (pKa 5.00). In 1994, Yong and coworkers compared the free-energy effects of α- and β-Si(CH3)3 moieties on C–H homo- and heterolysis. They, too, concluded that a β silicon atom stabilizes carbocations while an α silicon destabilizes them. Orbital structure The silicon α and β effects arise because third-period heteroatoms can stabilize adjacent carbanion charge via (negative) hyperconjugation. In the α effect, reactions that develop negative charge adjacent to the silicon, such as metalations, exhibit accelerated rates. The C–M σ orbital partially overlaps the C–Si σ* anti-bonding orbital, which stabilizes the C–M bond. More generally (i.e., even for "naked" carbanions), the Si σ* orbitals help stabilize the electrons on the α carbon. In the β effect, reactions that develop positive charge on carbon atoms β to the silicon accelerate. The C–Si σ orbital partially overlaps with the C–X (leaving group) σ* orbital. This electron-density donation into the anti-bonding orbital weakens the C–X bond, decreasing the barrier to its cleavage and favoring formation of the carbenium ion. In silyl ethers The silicon α‑effect described above mainly concerns carbon. In fact, the most industrially important silicon α‑effect instead occurs with silyl ethers. 
Under hydrolysis conditions, certain α-silane-terminated prepolymers crosslink 10–1000 times faster than the corresponding prepolymers produced from conventional Cγ-functionalized trialkoxypropylsilanes and dialkoxymethylpropylsilanes. History This silicon α-effect was first observed in the late 1960s by researchers at Bayer AG as an increase in reactivity at the silicon atom toward hydrolysis, and it was used for the cross-linking of α-silane-terminated prepolymers. For a long time afterward, this reactivity was simply attributed to the silicon α-effect, but the real mechanism behind it remained debated for many years after the discovery. Generally, the effect was rationalized as an intramolecular donor-acceptor interaction between the lone pair of the organofunctional group (such as NR2, OC(O)R, or N(H)COOMe) and the silicon atom. However, this hypothesis has been shown to be incorrect by Mitzel and coworkers, and more experiments are needed to interpret the effect. Mechanism study Reinhold and coworkers performed a systematic study of the kinetics and mechanisms of hydrolysis of such compounds. They prepared a series of α-silanes and γ-silanes and tested their reactivity as a function of pH (acidic and basic regimes), the functional group X, and the spacer between the silicon atom and the functional group X. In general, they found that under basic conditions the rate of hydrolysis is mainly controlled by the electrophilicity of the silicon center, and the hydrolysis rate of the γ-silanes is less influenced by generally electronegative functional groups than that of the α-silanes. The more electronegative the functional groups, the higher the rate of hydrolysis. Under acidic conditions, however, the rate of hydrolysis depends on both the electrophilicity of the silicon center (determining the molecular reactivity) and the concentration of the (protonated) reactive species. Under acidic conditions the nucleophile changes from OH− to H2O, so the process involves protonation, and the atom protonated could be either the silicon or the functional group X. As a result, the general trend in acidic solution is more complicated. References Silicon Physical organic chemistry
Negative hyperconjugation in silicon
[ "Chemistry" ]
1,285
[ "Physical organic chemistry" ]
72,374,204
https://en.wikipedia.org/wiki/Nano%20Biomedicine%20and%20Engineering
Nano Biomedicine and Engineering is a quarterly peer-reviewed open-access scientific journal that was established in 2009 covering nanotechnology applied to biology, medicine, and engineering. It is published by the Tsinghua University Press and sponsored by Shanghai Jiao Tong University. The editor-in-chief is Daxiang Cui (Shanghai Jiao Tong University). The journal publishes basic, clinical, and engineering research articles, reviews, conference proceedings, editorials, and communications. Abstracting and indexing The journal is abstracted and indexed in the Directory of Open Access Journals, EBSCO databases, Embase, and Scopus. References External links Nanotechnology journals Quarterly journals Academic journals established in 2009 English-language journals Creative Commons Attribution-licensed journals Shanghai Jiao Tong University Nanomedicine journals
Nano Biomedicine and Engineering
[ "Materials_science" ]
168
[ "Nanotechnology journals", "Materials science journals" ]
72,375,503
https://en.wikipedia.org/wiki/Trusted%20Information%20Security%20Assessment%20Exchange
Trusted Information Security Assessment Exchange (TISAX) is an assessment and exchange mechanism for the information security of enterprises, developed by the ENX Association and published by the Verband der Automobilindustrie (German Association of the Automotive Industry, or VDA). TISAX concerns the secure processing of information from business partners, the protection of prototypes, and data protection in accordance with the General Data Protection Regulation (GDPR) for potential business transactions between automobile manufacturers and their service providers or suppliers. The VDA established TISAX in 2017 together with the ENX Association. Tests according to TISAX, especially of service providers and suppliers, are carried out by "TISAX test service providers". The ENX Association acts as the governance organization in the system: it approves the testing service providers and monitors the quality of execution and of the assessment results. This ensures both that the results meet a desired standard of quality and objectivity and that the rights and obligations of the participants are safeguarded. A buyer can thus decide whether the resulting maturity level of a supplier or service provider meets its requirements. The testing requirements have been revised several times; version 5.0 was published in October 2020. Backgrounds, areas of application, execution processes and testing requirements are summarized in a manual. GitHub is a participant in TISAX with an Assessment Level 2 (AL2) label in the ENX Portal. References Information sensitivity Automotive industry in Europe Data security
Trusted Information Security Assessment Exchange
[ "Engineering" ]
304
[ "Cybersecurity engineering", "Data security" ]
72,377,364
https://en.wikipedia.org/wiki/Chlorine-free%20germanium%20processing
Chlorine-free germanium processing refers to methods of germanium activation that form useful germanium precursors in a more energy-efficient and environmentally friendly way than traditional synthetic routes. Germanium tetrachloride is a valuable intermediate for the synthesis of many germanium complexes. Its conventional synthesis involves an energy-intensive dehydration of germanium oxide, GeO2, with hydrogen chloride, HCl. Due to the environmental and safety impact of non-recyclable, high-energy reactions with HCl, an alternative synthesis of a shelf-stable germanium intermediate precursor without chlorine is of interest. In 2017, a synthesis of organogermanes, GeR4, without using chloride species was reported, allowing for a much more environmentally friendly and low-energy synthesis from GeO2 or Ge(0), and even selective activation of germanium in the presence of zinc oxide (ZnO), resulting in products that are bench-stable solids. Synthesis of organogermanes Oxidation of germanium metal Glavinović et al. have synthesized organogermanes using ortho-quinone, which is redox "non-innocent" and acts as a pseudo-halide, resulting in an air- and moisture-stable beige solid. When Ge(0), ortho-quinone, and pyridine (acting as an auxiliary ligand) were milled via liquid-assisted grinding in a 1:1 mixture of toluene and water, the resulting organogermane was recrystallized from toluene in 88% yield. In this reaction, the quinone ligands each undergo a two-electron reduction, oxidizing the Ge(0) to Ge(IV). This reaction was shown to work at both the milligram and the gram scale, demonstrating its efficiency at bulk scale. Dehydration of GeO2 Following a nearly identical reaction scheme to the oxidation of germanium metal with ortho-quinone, dehydration of GeO2 with catechol ligands gives the same product as the oxidation route, in similar yield (74% on the milligram scale and 84% on the gram scale). This scheme is particularly notable because the sole byproduct of the reaction is water. Such reactions could provide an alternative to conventional oxide separations for other metals, which are energy-intensive and otherwise wasteful. Extraction from ZnO Industrially, germanium can be extracted from ZnO, which contains small amounts of GeO2. Using HCl, the key product GeCl4 is produced along with a ZnCl2 byproduct; the zinc byproduct can be removed by distillation at high temperatures, leaving only germanium tetrachloride. A new method of chlorine-free germanium processing has proven effective in extracting germanium from zinc oxide, offering hope of replacing the HCl leaching and distillation process currently employed by industry. In both 1:1 and 1:5 mass ratios of GeO2 to ZnO, germanium oxide was selectively activated by simple addition of catechol, letting the reaction proceed under the same conditions as the dehydration reaction. The unreacted zinc oxide can be washed away with dichloromethane and the bis(catecholate) germanium product recrystallized from cyclohexane. Despite zinc oxide being present in the reaction vessel, the yields of the intermediate germanium product remain high, at 64% and 66%, respectively. This method, as well as other halogen-free germanium extraction methods, makes halogen-free germanium processing a realistic future prospect. Other auxiliary ligands The mechanochemical activation of germanium described above can be used with a variety of auxiliary amine-based ligands, not just the pyridine used in the syntheses above. 
Unidentate ligands such as N-methylimidazole can be used to create a trans-disposed octahedral germanium product, isostructural to the pyridine-containing complexes of both the catechol and the ortho-quinone routes. However, chelating ligands can be used to form products with the nitrogens cis to each other. For example, a reaction using tetramethylethylenediamine as a chelating bidentate diamine affords the cis product, with catechol ligands at the other octahedral binding sites. Further research has shown that the nitrogen-containing ligands can be biologically active ones that operate at very low reduction potentials. This makes the germanium complexes with those ligands easily reducible and highly nucleophilic, making substitution and activation even easier. Substitution reactions Substitutions to form tetraorganogermanes Reagents and products The intermediates prepared by the above method readily undergo substitution reactions with nucleophiles to form tetraorganogermanes, GeR4, as well as germane, GeH4, a key material in optical and electronic device fabrication. These substitution reactions return the original catechol ligand, making this germanium activation process easily recyclable. A solution of 20 equivalents of an alkyl or aryl Grignard reagent in tetrahydrofuran, combined with the bis(catecholate) complex, leads to a homogeneous solution of reagents in THF. Refluxing this solution for 24 hours yields the organogermane product in relatively high yield across the multiple reagents examined by Glavinović et al., demonstrating the efficacy of the substitution reaction. Proposed mechanism The substitution reaction described above is thought to proceed via a mechanism in which steric strain in the complex is slowly alleviated over the course of the reaction. The first Grignard equivalent substitutes at the most sterically hindered oxygen position, where the t-butyl group of the catechol ligand is alpha to the oxygen. The second Grignard equivalent substitutes the now-unidentate catechol-Grignard adduct, removing the ligand and completing two substitutions. Treating intermediate 2 with an additional equivalent of Grignard reagent yields 3 at a faster rate than the rate at which 2 is formed, and treatment of 3 with two further equivalents yields 4 more quickly still. This is starkly different from the substitution chemistry of GeCl4, in which the germanium center becomes more sterically hindered over the course of the reaction as ligand exchange of carbons for chlorides progresses, making each successive substitution more difficult. The stereochemical selectivity of the substitution reaction is further enforced by the identity of the auxiliary amine ligand. Using a more sterically encumbered amine ligand such as triethylamine, a 1.67:1 mixture of dibutyl-germane-η2-catecholate and tributylgermyl-η1-catecholate is produced after substitution with two equivalents of BuMgCl. This demonstrates the effect of steric encumbrance on the product of the substitution reaction, as the resulting tri-substituted product retains the least sterically encumbered oxygen bonded to the catecholate. This reaction pathway could open new synthetic routes to more stereochemically complex and functionalized germanium compounds. 
Substitution to form germane Despite being highly volatile and toxic, germane, GeH4, is extremely important in the field of optoelectronics and is a good candidate for vapor deposition to form thin films of germanium. However, germane must be extremely pure for such use, and much research has gone into developing methodologies to prepare and purify it. Using bis(catecholate) germanium and lithium aluminum hydride (LiAlH4) in dibutyl ether with argon as a carrier gas, the substitution reaction yields high-purity germane in the Ar carrier gas with no evolution of volatile Ge byproducts. This reaction pathway for the production of germane requires no postsynthetic processing or purification, an advantage over current methods. References Chemical processes
Chlorine-free germanium processing
[ "Chemistry" ]
1,739
[ "Chemical process engineering", "Chemical processes", "nan" ]
78,128,775
https://en.wikipedia.org/wiki/Common%20fixed%20point%20problem
In mathematics, the common fixed point problem is the conjecture that, for any two continuous functions that map the unit interval into itself and commute under functional composition, there must be a point that is a fixed point of both functions. In other words, if the functions f and g are continuous and f(g(x)) = g(f(x)) for all x in the unit interval, then there must be some x in the unit interval for which f(x) = g(x). First posed in 1954, the problem remained unsolved for more than a decade, during which several mathematicians made incremental progress toward an affirmative answer. In 1967, William M. Boyce and John P. Huneke independently proved the conjecture to be false by providing examples of commuting functions on a closed interval that do not have a common fixed point. History A 1951 paper by H. D. Block and H. P. Thielman sparked interest in the subject of fixed points of commuting functions. Building on earlier work by J. F. Ritt and A. G. Walker, Block and Thielman identified sets of pairwise commuting polynomials and studied their properties. They proved, for each of these sets, that any two polynomials would share a common fixed point. Block and Thielman's paper led other mathematicians to wonder if having a common fixed point was a universal property of commuting functions. In 1954, Eldon Dyer asked whether, if f and g are two continuous functions that map a closed interval on the real line into itself and commute, they must have a common fixed point. The same question was raised independently by Allen Shields in 1955 and again by Lester Dubins in 1956. John R. Isbell also raised the question in a more general form in 1957. During the 1960s, mathematicians were able to prove that the commuting function conjecture held when certain assumptions were made about f and g. In 1963, Ralph DeMarr showed that if f and g are both Lipschitz continuous with Lipschitz constant at most 1, then f and g will have a common fixed point. Gerald Jungck refined DeMarr's conditions, showing that the functions need not be Lipschitz continuous, but instead need only satisfy similar but less restrictive criteria. Taking a different approach, Haskell Cohen showed in 1964 that f and g will have a common fixed point if both are continuous and open. Later, both Jon H. Folkman and James T. Joichi, working independently, extended Cohen's work, showing that it is only necessary for one of the two functions to be open. John Maxfield and W. J. Mourant, in 1965, proved that commuting functions on the unit interval have a common fixed point if one of the functions has no points of period 2 (i.e., f(f(x)) = x implies f(x) = x). The following year, Sherwood Chu and R. D. Moyer found that the conjecture holds when there is a subinterval in which one of the functions has a fixed point and the other has no points of period 2. Boyce's counterexample William M. Boyce earned his Ph.D. from Tulane University in 1967. In his thesis, Boyce identified a pair of functions that commute under composition but do not have a common fixed point, proving the fixed point conjecture to be false. In 1963, Glenn Baxter and Joichi published a paper about the fixed points of the composite function f∘g. It was known that the functions f and g permute the fixed points of f∘g. Baxter and Joichi noted that at each fixed point, the graph of f∘g must either cross the diagonal going up (an "up-crossing"), or going down (a "down-crossing"), or touch the diagonal and then move away in the opposite direction. 
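The conjecture's hypotheses are easy to test numerically for concrete function pairs, in the spirit of the computer-assisted search described below. The following is a minimal Python sketch, not Boyce's program (which was written in FORTRAN and generated Baxter permutations); the sample function pair, grid density, and tolerance are illustrative assumptions.

```python
import numpy as np

def commute_defect(f, g, xs):
    """Largest value of |f(g(x)) - g(f(x))| over the sample grid xs;
    (numerically) zero everywhere means f and g commute on the grid."""
    return np.max(np.abs(f(g(xs)) - g(f(xs))))

def common_fixed_points(f, g, xs, tol=1e-9):
    """Grid points that are (numerically) fixed by both f and g."""
    mask = (np.abs(f(xs) - xs) < tol) & (np.abs(g(xs) - xs) < tol)
    return xs[mask]

# Example: f(x) = x**2 and g(x) = x**4 commute on [0, 1],
# since f(g(x)) = g(f(x)) = x**8, and they share fixed points 0 and 1.
f = lambda x: x**2
g = lambda x: x**4
xs = np.linspace(0.0, 1.0, 10_001)
print(commute_defect(f, g, xs))       # ~0.0
print(common_fixed_points(f, g, xs))  # [0. 1.]
```

Boyce's and Huneke's counterexamples show that no common fixed point is guaranteed in general, even though simple algebraic pairs such as the one above always exhibit one.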
In an independent paper, Baxter proved that the permutations must preserve the type of each fixed point (up-crossing, down-crossing, touching) and that only certain orderings are allowed. Boyce wrote a computer program to generate permutations that followed Baxter's rules, which he named "Baxter permutations". His program carefully screened out those that could be trivially shown to have fixed points or were analytically equivalent to other cases. After eliminating more than 97% of the possible permutations through this process, Boyce constructed pairs of commuting functions from the remaining candidates and was able to prove that one such pair, based on a Baxter permutation with 13 points of crossing on the diagonal, had no common fixed point. Boyce's paper is one of the earliest examples of a computer-assisted proof. It was uncommon in the 1960s for mathematicians to rely on computers for research, but Boyce, then serving in the Army, had access to computers at MIT Lincoln Laboratory. Boyce published a separate paper describing his process for generating Baxter permutations, including the FORTRAN source code of his program. Huneke's counterexample John P. Huneke also investigated the common fixed point problem for his Ph.D. at Wesleyan University, which he also received in 1967. In his thesis, Huneke provides two examples of function pairs that commute but have no common fixed points, using two different strategies. The first of Huneke's examples is essentially identical to Boyce's, though Huneke arrived at it through a different process. Huneke's solution is based on the mountain climbing problem, which states that two climbers, climbing separate mountains of equal height, will be able to climb in such a way that they are always at the same elevation at each point in time. Huneke used this principle to construct sequences of functions that converge to the counterexample to the common fixed point problem. Later research Although the discovery of counterexamples by Boyce and Huneke meant that the decade-long pursuit of a proof of the commuting function conjecture had been in vain, it did enable researchers to focus their efforts on investigating under what conditions, in addition to the ones already discovered, the conjecture still might hold true. Boyce extended the work of Maxfield/Mourant and Chu/Moyer in 1971, showing weaker conditions that allow both of the commuting functions to have points of period 2 but still imply that they must have a common fixed point. His work was later extended by Theodore Mitchell, Julio Cano, and Jacek R. Jachymski. Over 25 years after the publication of his first paper, Jungck defined additional conditions under which f and g will have a common fixed point, based on the notions of periodic points and the coincidence set of the functions, that is, the set of values x for which f(x) = g(x). Baxter permutations have become a subject of research in their own right and have been applied to other problems beyond the common fixed point problem. References Fixed points (mathematics) Mathematical problems Disproved conjectures
Common fixed point problem
[ "Mathematics" ]
1,384
[ "Mathematical analysis", "Fixed points (mathematics)", "Topology", "Mathematical problems", "Dynamical systems" ]
78,134,079
https://en.wikipedia.org/wiki/Artificial%20intelligence%20engineering
Artificial intelligence engineering (AI engineering) is a technical discipline that focuses on the design, development, and deployment of AI systems. AI engineering involves applying engineering principles and methodologies to create scalable, efficient, and reliable AI-based solutions. It merges aspects of data engineering and software engineering to create real-world applications in diverse domains such as healthcare, finance, autonomous systems, and industrial automation. Key components AI engineering integrates a variety of technical domains and practices, all of which are essential to building scalable, reliable, and ethical AI systems. Data engineering and infrastructure Data serves as the cornerstone of AI systems, necessitating careful engineering to ensure quality, availability, and usability. AI engineers gather large, diverse datasets from multiple sources such as databases, APIs, and real-time streams. This data undergoes cleaning, normalization, and preprocessing, often facilitated by automated data pipelines that manage extraction, transformation, and loading (ETL) processes. Efficient storage solutions, such as SQL (or NoSQL) databases and data lakes, must be selected based on data characteristics and use cases. Security measures, including encryption and access controls, are critical for protecting sensitive information and ensuring compliance with regulations like GDPR. Scalability is essential, frequently involving cloud services and distributed computing frameworks to handle growing data volumes effectively. Algorithm selection and optimization Selecting the appropriate algorithm is crucial for the success of any AI system. Engineers evaluate the problem (which could be classification or regression, for example) to determine the most suitable machine learning algorithm, including deep learning paradigms. Once an algorithm is chosen, optimizing it through hyperparameter tuning is essential to enhance efficiency and accuracy. Techniques such as grid search or Bayesian optimization are employed, and engineers often utilize parallelization to expedite training processes, particularly for large models and datasets. For existing models, techniques like transfer learning can be applied to adapt pre-trained models for specific tasks, reducing the time and resources needed for training. Deep learning engineering Deep learning is particularly important for tasks involving large and complex datasets. Engineers design neural network architectures tailored to specific applications, such as convolutional neural networks for visual tasks or recurrent neural networks for sequence-based tasks. Transfer learning, where pre-trained models are fine-tuned for specific use cases, helps streamline development and often enhances performance. Optimization for deployment in resource-constrained environments, such as mobile devices, involves techniques like pruning and quantization to minimize model size while maintaining performance. Engineers also mitigate data imbalance through augmentation and synthetic data generation, ensuring robust model performance across various classes. Natural language processing Natural language processing (NLP) is a crucial component of AI engineering, focused on enabling machines to understand and generate human language. The process begins with text preprocessing to prepare data for machine learning models. Recent advancements, particularly transformer-based models like BERT and GPT, have greatly improved the ability to understand context in language. 
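As a concrete illustration of the hyperparameter tuning described under algorithm selection above, the following is a minimal Python sketch using scikit-learn's GridSearchCV. The synthetic dataset, the choice of a random forest, and the parameter grid are illustrative stand-ins, not a prescribed workflow.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for a real, preprocessed feature matrix.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Exhaustive grid search with 5-fold cross-validation;
# n_jobs=-1 parallelizes the search across CPU cores.
param_grid = {"n_estimators": [50, 100, 200], "max_depth": [None, 5, 10]}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, n_jobs=-1)
search.fit(X_train, y_train)

print(search.best_params_)           # best hyperparameter combination
print(search.score(X_test, y_test))  # held-out accuracy of the best model
```

Bayesian optimization follows the same pattern but replaces the exhaustive grid with a model-guided search over the parameter space.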
AI engineers work on various NLP tasks, including sentiment analysis, machine translation, and information extraction. These tasks require sophisticated models that utilize attention mechanisms to enhance accuracy. Applications range from virtual assistants and chatbots to more specialized tasks like named-entity recognition (NER) and part-of-speech (POS) tagging. Reasoning and decision-making systems Developing systems capable of reasoning and decision-making is a significant aspect of AI engineering. Whether starting from scratch or building on existing frameworks, engineers create solutions that operate on data or logical rules. Symbolic AI employs formal logic and predefined rules for inference, while probabilistic reasoning techniques like Bayesian networks help address uncertainty. These models are essential for applications in dynamic environments, such as autonomous vehicles, where real-time decision-making is critical. Security Security is a critical consideration in AI engineering, particularly as AI systems become increasingly integrated into sensitive and mission-critical applications. AI engineers implement robust security measures to protect models from adversarial attacks, such as evasion and poisoning, which can compromise system integrity and performance. Techniques such as adversarial training, where models are exposed to malicious inputs during development, help harden systems against these attacks. Additionally, securing the data used to train AI models is of paramount importance. Encryption, secure data storage, and access control mechanisms are employed to safeguard sensitive information from unauthorized access and breaches. AI systems also require constant monitoring to detect and mitigate vulnerabilities that may arise post-deployment. In high-stakes environments like autonomous systems and healthcare, engineers incorporate redundancy and fail-safe mechanisms to ensure that AI models continue to function correctly in the presence of security threats. Ethics and compliance As AI systems increasingly influence societal aspects, ethics and compliance are vital components of AI engineering. Engineers design models to mitigate risks such as data poisoning and ensure that AI systems adhere to legal frameworks, such as data protection regulations like GDPR. Privacy-preserving techniques, including data anonymization and differential privacy, are employed to safeguard personal information and ensure compliance with international standards. Ethical considerations focus on reducing bias in AI systems, preventing discrimination based on race, gender, or other protected characteristics. By developing fair and accountable AI solutions, engineers contribute to the creation of technologies that are both technically sound and socially responsible. Workload An AI engineer's workload revolves around the AI system's life cycle, which is a complex, multi-stage process. This process may involve building models from scratch or using pre-existing models through transfer learning, depending on the project's requirements. Each approach presents unique challenges and influences the time, resources, and technical decisions involved. Problem definition and requirements analysis Regardless of whether a model is built from scratch or based on a pre-existing model, the work begins with a clear understanding of the problem. The engineer must define the scope, understand the business context, and identify specific AI objectives that align with strategic goals. 
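Returning to the security practices above, the sketch below illustrates single-step FGSM-style adversarial training in PyTorch. It is a minimal, hypothetical example: the toy classifier, the random stand-in batch, the ε value, and the training schedule are all assumptions rather than a hardened production recipe (for image inputs one would additionally clamp the perturbed values to the valid pixel range).

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, loss_fn, x, y, eps=0.1):
    # One-step FGSM: move each input in the sign of the loss gradient,
    # i.e. the direction that locally increases the loss the most.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 20)         # stand-in batch of feature vectors
y = torch.randint(0, 2, (256,))  # stand-in binary labels

for _ in range(10):              # adversarial training loop
    x_adv = fgsm_perturb(model, loss_fn, x, y)
    optimizer.zero_grad()        # discard gradients from the attack step
    # Train on a mix of clean and adversarial examples.
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
```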
This stage includes consulting with stakeholders to establish key performance indicators (KPIs) and operational requirements. When developing a model from scratch, the engineer must also decide which algorithms are most suitable for the task. Conversely, when using a pre-trained model, the workload shifts toward evaluating existing models and selecting the one most aligned with the task. The use of pre-trained models often allows for a more targeted focus on fine-tuning, as opposed to designing an entirely new model architecture. Data acquisition and preparation Data acquisition and preparation are critical stages regardless of the development method chosen, as the performance of any AI system relies heavily on high-quality, representative data. For systems built from scratch, engineers must gather comprehensive datasets that cover all aspects of the problem domain, ensuring enough diversity and representativeness in the data to train the model effectively. This involves cleansing, normalizing, and augmenting the data as needed. Creating data pipelines and addressing issues like imbalanced datasets or missing values are also essential to maintain model integrity during training. In the case of using pre-existing models, the dataset requirements often differ. Here, engineers focus on obtaining task-specific data that will be used to fine-tune a general model. While the overall data volume may be smaller, it needs to be highly relevant to the specific problem. Pre-existing models, especially those based on transfer learning, typically require fewer data, which accelerates the preparation phase, although data quality remains equally important. Model design and training The workload during the model design and training phase depends significantly on whether the engineer is building the model from scratch or fine-tuning an existing one. When creating a model from scratch, AI engineers must design the entire architecture, selecting or developing algorithms and structures that are suited to the problem. For deep learning models, this might involve designing a neural network with the right number of layers, activation functions, and optimizers. Engineers go through several iterations of testing, adjusting hyperparameters, and refining the architecture. This process can be resource-intensive, requiring substantial computational power and significant time to train the model on large datasets. For AI systems based on pre-existing models, the focus is more on fine-tuning. Transfer learning allows engineers to take a model that has already been trained on a broad dataset and adapt it for a specific task using a smaller, task-specific dataset. This method dramatically reduces the complexity of the design and training phase. Instead of building the architecture, engineers adjust the final layers and perform hyperparameter tuning. The time and computational resources required are typically lower than training from scratch, as pre-trained models have already learned general features that only need refinement for the new task. Whether building from scratch or fine-tuning, engineers employ optimization techniques like cross-validation and early stopping to prevent overfitting. In both cases, model training involves running numerous tests to benchmark performance and improve accuracy. System integration Once the model is trained, it must be integrated into the broader system, a phase that largely remains the same regardless of how the model was developed. 
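As a minimal sketch of the fine-tuning workflow described under model design and training above (assuming PyTorch and torchvision with the weights API of torchvision 0.13+; the five-class head and learning rate are illustrative), a pre-trained network's general-purpose layers are frozen and only a new task-specific head is trained:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a network pre-trained on ImageNet and freeze its learned features.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for a hypothetical 5-class task;
# only this new layer receives gradient updates during fine-tuning.
model.fc = nn.Linear(model.fc.in_features, 5)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
```

This is why transfer learning needs less task-specific data and compute: most parameters are reused as-is, and only the small replacement head is optimized.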
System integration involves connecting the AI model to various software components and ensuring that it can interact with external systems, databases, and user interfaces. For models developed from scratch, integration may require additional work to ensure that the custom-built architecture aligns with the operational environment, especially if the AI system is designed for specific hardware or edge computing environments. Pre-trained models, by contrast, are often more flexible in terms of deployment since they are built using widely adopted frameworks, which are compatible with most modern infrastructure. Engineers use containerization tools to package the model and create consistent environments for deployment, ensuring seamless integration across cloud-based or on-premise systems. Whether starting from scratch or using pre-trained models, the integration phase requires ensuring that the model is ready to scale and perform efficiently within the existing infrastructure. Testing and validation Testing and validation play a crucial role in both approaches, though the depth and nature of testing might differ slightly. For models built from scratch, more exhaustive functional testing is needed to ensure that the custom-built components of the model function as intended. Stress tests are conducted to evaluate the system under various operational loads, and engineers must validate that the model can handle the specific data types and edge cases of the domain. For pre-trained models, the focus of testing is on ensuring that fine-tuning has adequately adapted the model to the task. Functional tests validate that the pre-trained model's outputs are accurate for the new context. In both cases, bias assessments, fairness evaluations, and security reviews are critical to ensure ethical AI practices and prevent vulnerabilities, particularly in sensitive applications like finance, healthcare, or autonomous systems. Explainability is also essential in both workflows, especially when working in regulated industries or with stakeholders who need transparency in AI decision-making processes. Engineers must ensure that the model's predictions can be understood by non-technical users and align with ethical and regulatory standards. Deployment and monitoring The deployment stage typically involves the same overarching strategies—whether the model is built from scratch or based on an existing model. However, models built from scratch may require more extensive fine-tuning during deployment to ensure they meet performance requirements in a production environment. For example, engineers might need to optimize memory usage, reduce latency, or adapt the model for edge computing. When deploying pre-trained models, the workload is generally lighter. Since these models are often already optimized for production environments, engineers can focus on ensuring compatibility with the task-specific data and infrastructure. In both cases, deployment techniques such as phased rollouts, A/B testing, or canary deployments are used to minimize risks and ensure smooth transition into the live environment. Monitoring, however, is critical in both approaches. Once the AI system is deployed, engineers set up performance monitoring to detect issues like model drift, where the model's accuracy decreases over time as data patterns change. Continuous monitoring helps identify when the model needs retraining or recalibration. 
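A minimal sketch of one common drift check on a single input feature, using a two-sample Kolmogorov–Smirnov test (assuming SciPy; the significance threshold and synthetic data are illustrative):

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(reference, live, alpha=0.01):
    # Two-sample Kolmogorov-Smirnov test on one input feature:
    # a small p-value suggests the live distribution has shifted
    # away from what the model saw during training.
    statistic, p_value = ks_2samp(reference, live)
    return p_value < alpha

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5000)  # feature values at training time
live = rng.normal(0.4, 1.0, 5000)       # shifted production data
print(drift_detected(reference, live))  # True -> consider retraining
```

In practice such checks run per feature (and on model outputs) on a schedule, feeding the retraining decisions discussed here.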
For pre-trained models, periodic fine-tuning may suffice to keep the model performing optimally, while models built from scratch may require more extensive updates depending on how the system was designed. Regular maintenance includes updates to the model, re-validation of fairness and bias checks, and security patches to protect against adversarial attacks. Machine learning operations (MLOps) MLOps, or Artificial Intelligence Operations (AIOps), is a critical component in modern AI engineering, integrating machine learning model development with reliable and efficient operations practices. Similar to the DevOps practices in software development, MLOps provides a framework for continuous integration, continuous delivery (CI/CD), and automated monitoring of machine learning models throughout their lifecycle. This practice bridges the gap between data scientists, AI engineers, and IT operations, ensuring that AI models are deployed, monitored, and maintained effectively in production environments. MLOps is particularly important as AI systems scale to handle more complex tasks and larger datasets. Without robust MLOps practices, models risk underperforming or failing once deployed into production, leading to issues such as downtime, ethical concerns, or loss of stakeholder trust. By establishing automated, scalable workflows, MLOps allows AI engineers to manage the entire lifecycle of machine learning models more efficiently, from development through to deployment and ongoing monitoring. Additionally, as regulatory frameworks around AI systems continue to evolve, MLOps practices are critical for ensuring compliance with legal requirements, including data privacy regulations and ethical AI guidelines. By incorporating best practices from MLOps, organizations can mitigate risks, maintain high performance, and scale AI solutions responsibly. Challenges AI engineering faces a distinctive set of challenges that differentiate it from traditional software development. One of the primary issues is model drift, where AI models degrade in performance over time due to changes in data patterns, necessitating continuous retraining and adaptation. Additionally, data privacy and security are critical concerns, particularly when sensitive data is used in cloud-based models. Ensuring model explainability is another challenge, as complex AI systems must be made interpretable for non-technical stakeholders. Bias and fairness also require careful handling to prevent discrimination and promote equitable outcomes, as biases present in training data can propagate through AI algorithms, leading to unintended results. Addressing these challenges requires a multidisciplinary approach, combining technical acumen with ethical and regulatory considerations. Sustainability Training large-scale AI models involves processing immense datasets over prolonged periods, consuming considerable amounts of energy. This has raised concerns about the environmental impact of AI technologies, given the expansion of data centers required to support AI training and inference. The increasing demand for computational power has led to significant electricity consumption, with AI-driven applications often leaving a substantial carbon footprint. In response, AI engineers and researchers are exploring ways to mitigate these effects by developing more energy-efficient algorithms, employing green data centers, and leveraging renewable energy sources. 
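As one concrete illustration of the automated checks in the MLOps pipelines described above, a CI/CD step might gate model promotion on evaluation metrics. This is a hypothetical sketch; the metric names and thresholds are assumptions, not a standard API:

```python
def validation_gate(candidate_metrics, production_metrics, min_accuracy=0.90):
    """Block promotion of a retrained model unless it clears an absolute
    threshold and does not regress relative to the current model."""
    ok_absolute = candidate_metrics["accuracy"] >= min_accuracy
    ok_relative = candidate_metrics["accuracy"] >= production_metrics["accuracy"]
    return ok_absolute and ok_relative

# A pipeline step would call this after automated evaluation completes.
print(validation_gate({"accuracy": 0.93}, {"accuracy": 0.91}))  # True
```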
Addressing the sustainability of AI systems is becoming a critical aspect of responsible AI development as the industry continues to scale globally. Educational pathways Education in AI engineering typically involves advanced courses in software and data engineering. Key topics include machine learning, deep learning, natural language processing and computer vision. Many universities now offer specialized programs in AI engineering at both the undergraduate and postgraduate levels, including hands-on labs, project-based learning, and interdisciplinary courses that bridge AI theory with engineering practices. Professional certifications can also supplement formal education. Additionally, hands-on experience with real-world projects, internships, and contributions to open-source AI initiatives are highly recommended to build practical expertise. References Artificial intelligence Engineering disciplines Artificial intelligence engineering
Artificial intelligence engineering
[ "Engineering" ]
3,202
[ "Artificial intelligence engineering", "Software engineering", "nan" ]
78,139,837
https://en.wikipedia.org/wiki/GRAVITY%20%28Very%20Large%20Telescope%29
GRAVITY is an instrument on the interferometer of the Very Large Telescope (VLTI). It combines the light of either the four Unit Telescopes (UTs) or the four smaller Auxiliary Telescopes. The instrument works with adaptive optics, provides a resolution of 4 milliarcseconds (mas), and can measure the position of astronomical objects to a precision of a few tens of microarcseconds (μas). Through the VLTI, GRAVITY has a collecting area of 200 m2 and the angular resolution of a 130 m telescope. Instrument details GRAVITY was built by a consortium led by the Max Planck Institute for Extraterrestrial Physics. Other partner institutes are from France, Germany, Portugal and the European Southern Observatory. The first-light images included the discovery that Theta1 Orionis F in the Trapezium Cluster is a binary. GRAVITY can operate in single-field mode or in dual-field mode. In the dual-field mode it can interfere two astronomical objects at the same time and thereby acquire very accurate astrometry. The instrument data can also be used for K-band spectroscopy at three spectral resolutions. GRAVITY has the following sub-components: IR wavefront sensing system CIAO (located at the Unit Telescopes) that works with the MACAO deformable mirror A polarisation control system to counteract polarisation effects in the VLTI An active pupil-guide system including LED sources mounted on each of the telescope secondary mirror supports (spiders) A field-guide system to track the position of the source The Beam Combining Instrument (BCI) The Beam Combining Instrument is the primary unit of GRAVITY. It performs acquisition and provides interferometric fringes. The BCI is cryogenically cooled and located in the VLTI laboratory. Science GRAVITY is mainly used to observe the stars orbiting the supermassive black hole Sagittarius A* and to measure the positions of exoplanets and brown dwarfs around their host stars. It is also used for other studies that require high resolution, such as studies of circumstellar disks and active galactic nuclei (AGNs). GRAVITY+ GRAVITY+ is an upgrade of GRAVITY that will increase its sensitivity and sky coverage. The upgrade is being performed incrementally to reduce the disruption of astronomical observations. References Astronomical instruments Telescope instruments Spectrographs Interferometric telescopes Exoplanet search projects
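As a quick consistency check on the figures quoted above: an interferometer's angular resolution scales as λ/B. Assuming a K-band wavelength of 2.2 μm (the band in which GRAVITY observes) and the roughly 130 m maximum VLTI baseline,

\[ \theta \approx \frac{\lambda}{B} = \frac{2.2\times10^{-6}\ \text{m}}{130\ \text{m}} \approx 1.7\times10^{-8}\ \text{rad} \approx 3.5\ \text{mas}, \]

in line with the instrument's quoted 4 mas resolution.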
GRAVITY (Very Large Telescope)
[ "Physics", "Chemistry", "Astronomy" ]
477
[ "Exoplanet search projects", "Spectrum (physical sciences)", "Telescope instruments", "Spectrographs", "Astronomical instruments", "Astronomy projects", "Spectroscopy" ]
78,140,446
https://en.wikipedia.org/wiki/MTEX
MTEX is an open-source MATLAB package specifically designed for the analysis of Electron Backscatter Diffraction (EBSD) data, which are widely used to analyse the crystallographic orientation of materials at the microscale. History The development of MTEX began in 2008, spearheaded by Ralf Hielscher, who aimed to create a user-friendly platform that could facilitate the analysis of large datasets generated by EBSD. The toolbox has since evolved, incorporating various features that allow for the manipulation and visualisation of crystallographic data. EBSD allows for the mapping of crystallographic orientations in materials, providing insights into their microstructural properties. The integration of EBSD with MATLAB through MTEX has enabled researchers to perform advanced analyses, such as orientation distribution function (ODF) calculations, pole figure plotting, calculation of anisotropic physical properties from texture data, and grain boundary and grain reconstruction, which are crucial for understanding the mechanical properties of materials, as the crystallographic texture can significantly influence their behaviour under stress. Moreover, the open-source nature of MTEX has fostered a collaborative environment among researchers, allowing for continuous improvements and updates to the toolbox. This community-driven approach has led to the incorporation of new features and functionalities. MTEX's versatility is further demonstrated by its application across various fields, including geology, metallurgy, and materials science. In geological studies, for instance, MTEX has been used to analyse the crystallographic orientation of minerals, providing insights into their formation processes and the conditions under which they evolved. Similarly, in metallurgy, researchers have employed MTEX to investigate the effects of processing methods on the texture and grain boundary characteristics of alloys, which are critical for optimising their mechanical properties. The toolbox has also been instrumental in advancing the understanding of deformation mechanisms in materials. By analysing EBSD data with MTEX, researchers can elucidate the relationship between microstructural features and mechanical behaviour, such as strain localisation and phase transformations during deformation. References 2008 software Crystallography software Free science software MATLAB software Software using the GNU General Public License
MTEX
[ "Chemistry", "Materials_science" ]
455
[ "Crystallography", "Crystallography software" ]
76,768,028
https://en.wikipedia.org/wiki/Metal-organic%20nanotube
Metal–organic nanotubes (MONTs) are a class of crystalline coordination polymers consisting of organic ligands bonded to a metal or metal cluster that form single-walled one-dimensional porous structures. The usage of organic ligands allows the properties of the resulting material to be tuned, as in the parent class of metal-organic frameworks (MOFs), but like carbon nanotubes, MONTs are anisotropic structures. Structure MONTs have three main components: an organic bridging ligand, an inorganic metal or metal cluster, and a capping ligand that limits the dimensionality of the resulting structure. The bridging ligand is typically di-, tri- or tetravalent, while the capping ligand and metal form structures analogous to secondary building units (SBUs) in MOFs. MONTs have topologies that can be classified as helical coils, stacked macrocyclic rings, pillars of metal-ligand chains, or (m,n) scaffold nets. Helical coil MONTs can be thought of as a linear coordination polymer that is warped into a spiral conformation, resulting in a tube-shaped structure. Macrocyclic ring MONTs are macrocycles fused via coordination bonds to construct an infinite tube. Pillar-chain MONTs are two, three, or four metal-anion linear chains connected via organic linkers to form a nanotube. (m,n) scaffold nets are constructed from a single organic linker functioning as nodes in a topological net, where “m” represents the number of metal linkers while “n” represents the number of organic nodes. Synthesis and properties MONTs are synthesized primarily via a bottom-up solvothermal synthesis approach from a mixture of organic ligands and metal. In bottom-up syntheses, ligands coordinate to metals and rapidly form pre-MONT crystallites that ripen into well-developed crystals through equilibrium processes. This process can expel defects as discrete molecules add to existing crystal structures reversibly over the course of hours to days. Guest molecules such as dimethylformamide or N-methyl-2-pyrrolidone often play a vital role in the formation of MONTs. Another route of MONT synthesis is performed via curling a 2-D sheet into a nanotube. This method relies on exfoliation of the sheet, enabled by weak interlayer interactions. Once the sheets have been separated, chemical stresses induced by a host material force the sheet to curl upon itself and form a MONT. Careful selection of ligands and metals in MONTs allow tunable pore sizes and dimensions, resulting in applications such as fluid separations, hydrogen storage, as an ion exchange material, and chemical sensing. See also Coordination chemistry Coordination polymers Covalent organic framework Macromolecular assembly Metal–inorganic framework Metal-organic framework Omar M. Yaghi Organometallic chemistry Porous polymer Reticular chemistry Susumu Kitagawa X-ray Crystallography Zeolitic imidazolate frameworks References Nanotubes by composition Metal-organic frameworks
Metal-organic nanotube
[ "Chemistry", "Materials_science" ]
636
[ "Porous polymers", "Metal-organic frameworks" ]
76,768,912
https://en.wikipedia.org/wiki/Yttrium%20stannides
Yttrium and tin form several yttrium stannide intermetallic compounds. The most tin-rich is YSn3, followed by YSn2, Y11Sn10, Y5Sn4, and Y5Sn3. None survives above , at which point Y5Sn3 melts congruently. The enthalpy of dissolution is similar to the stannides of other late lanthanoids, and the intermetallics' overall enthalpies of formation resemble silicides, not germanides or plumbides. YSn3 is an electrical superconductor below 7 K. It was originally thought to be a Type I superconductor, but may actually be in the strong-coupling regime, despite the low temperature. The density of electronic states has a local maximum at the Fermi level, composed of tin p and d orbitals. The intermetallic is difficult to form, slowly crystallizing from a mixture of Sn and YSn2 above . This may arise from competing allotropes near room temperature: although its crystal structure is certainly cubic, simulation indicates that both the tricopper auride (Pm3m) and aluminum-titanium alloy (I4/mmm) structures are stable under standard conditions. YSn2 has a unit cell measuring 4.39×16.34×4.30 Å. Like DySn2, it exhibits the zirconium disilicide crystal structure: layers of yttrium rhombohedra encapsulating tin atoms alternate with flat planes of tin. Doping with nickel puckers the planes, and Mössbauer spectroscopy suggests that it removes electron density from the tin s orbitals. Y5Sn3 has the hexagonal manganese silicide crystal structure, with unit cell 8.88×6.52×0.73 Å. References Tin compounds Yttrium compounds Intermetallics Superconductors
Yttrium stannides
[ "Physics", "Chemistry", "Materials_science" ]
408
[ "Inorganic compounds", "Metallurgy", "Superconductivity", "Alloys", "Intermetallics", "Condensed matter physics", "Superconductors" ]
70,854,090
https://en.wikipedia.org/wiki/List%20of%20rail%20accidents%20%281930%E2%80%931939%29
This is a list of rail accidents from 1930 to 1939. 1930 January 6 – United Kingdom – The rear carriages of a Southern Railway passenger train from to London are partially buried by a landslip near Wadhurst tunnel. The train is divided and the front part continues on to , where it arrives 100 minutes late. March 6 – United Kingdom – A London, Midland and Scottish Railway passenger train departs from station, Cumberland against signals. It is in collision with a ballast train at station, Cumberland. Two people die and four are seriously injured. March 17 – United States – At Adams, Tennessee, an L&N steam engine pulling a freight train explodes, killing four trainmen and two hobos. March 22 – United Kingdom – A London, Midland and Scottish Railway Royal Scot express passenger train derails at , Bedfordshire when a crossover is taken at excessive speed. April 7 – Japan – Ōita: Perhaps due to a blasting accident at the colliery, some dynamite ends up in a train's coal supply. When it explodes, the locomotive and several cars are wrecked, 17 people die and two are seriously injured, and a forest fire is ignited. April 16 – USSR – At Domodedovo, now in Russia, some denatured alcohol spilled in a train is accidentally ignited. The fire results in the deaths of 45 people and seriously injures 23. May 20 – USSR – At Chernaya on the Moscow-Kazan line (all these places are now in Russia), the collision of a passenger and a freight train results in the deaths of 28 and severely injures 29. June 29 – USSR – A train from Irkutsk (now in Russia) to Leningrad (now St. Petersburg, Russia) is derailed near its destination due to a signalman's error; 22 die and 28 are seriously injured. July 16 – Romania – The collision of a passenger and a freight train between Petrova and Vișeu Bistra results in the deaths of 22 people. December 27 – China – A passenger train on a branch line of the Peking-Mukden Railway (those cities are now Beijing and Shenyang) is deliberately wrecked by bandits; the locomotive boiler explodes and 20 passengers are kidnapped for ransom. Altogether 80 people are killed. 1931 January 3 – United Kingdom – A London and North Eastern Railway passenger train derails at Carlisle, Cumberland due to excessive speed through a curve. Three people are killed. January 17 – United Kingdom – A London and North Eastern Railway newspaper train departs from station, Essex against signals and collides head-on with a light engine at Great Holland. Two people are killed. March 22 – United Kingdom – A London, Midland and Scottish Railway express passenger train derails at , Bedfordshire due to excessive speed through a crossover. Six people are killed. April 20 – China – Following heavy rain, an embankment collapses under a passenger train from Humchun to Kowloon, Hong Kong. At least 30 are killed and 20 to 30 seriously injured. April 29 – Egypt – The rear cars of a passenger train from Alexandria to Cairo, crowded with passengers due to the Eid al-Adha holiday, catch fire on the approach to Benha station. Many passengers jump from the moving train rather than wait for the station. Altogether 48 people are killed. May 27 – United Kingdom – A London and North Eastern Railway passenger train overruns signals and collides head-on with another passenger train at station, Norfolk. One person is killed and fifteen are injured. 
May 27 – United States – The Empire Builder, en route from Seattle to Chicago with 117 passengers, is struck by an F3 tornado in Clay County, Minnesota; all the cars (except the locomotive and coal tender) are blown off the track with one car thrown some 80 feet. One passenger is killed and 57 more are injured. June 17 – United Kingdom – A London Midland and Scottish Railway mail train overruns signals and rear-ends an express freight train at Crich, Derbyshire. Two people are killed and seventeen injured. September 13 – Hungary – Soon after leaving Budapest, an international express en route to Paris and Ostend is destroyed by a dynamite bomb on a bridge at Biatorbágy. Most of the train falls ; 22 passengers are killed. The bomber, Szilveszter Matuska, pretends to be a victim and sues the railway, but police checking his story become suspicious. Eventually he is given a death sentence, which is then commuted. September 18 – China – Mukden, Liaoning: Mukden Incident. Late September – USSR – A troop train southwest of Leningrad (now Saint Petersburg, Russia) explodes with heavy loss of life. December – United Kingdom – At , Essex, a London and North Eastern Railway passenger train runs into wagons from the preceding freight train, which had been left on the line after a coupling broke. Two people are killed. December 25 – United States – A Southern Pacific class GS-1 4-8-4, #4402, suffers a boiler explosion in Richvale, California. The locomotive was later rebuilt in February 1932 and saw many years of service until it was scrapped on April 24, 1959. 1932 January 2 – USSR – At Kosino, just outside Moscow, a train moving at hits the rear of a stopped suburban train. Although there is time, nobody acts to protect the wreckage and a train of empty freight wagons crashes into it. Altogether 68 people are killed and 130 injured, and 11 railwaymen are arrested for criminal negligence. July 17 – South Africa – At Leeudoorn Stad (now Leeuwdoringstad), southwest of Orkney, a freight train is destroyed by the explosion of over of dynamite in 52 wagons. Two craters deep are left, 5 people killed, 7 injured, and of track destroyed. September 14 – French Algeria – Turenne rail accident: A 14-car troop train of the French Foreign Legion derails in the Atlas Mountains and plunges into a gorge. 57 legionnaires and most of the train's crew die; 223 are injured. October 16 – France – A passenger train rams a freight train near Cérences station, Manche, Normandy, and goes down a steep grade, splintering the lead coaches. Five men and two women are killed and fifteen others injured, all being residents of the local area where the accident occurred. October 18 – USSR – Heavy loss of life occurs when the Black Sea express train, coming from Sochi, strikes a freight car that had been mistakenly switched to the express tracks at Lublinov station, eleven kilometers from Moscow, telescoping five cars, three of them passenger coaches. Casualties include 36 killed and 51 injured. On October 31, the Soviet government sentences to death the station master whose negligence caused the accident. Three others also sharing responsibility receive prison terms. December 14 – Switzerland – A collision in the Gutsch Tunnel on the Zug–Lucerne railway kills at least six people. 1933 March 4 – United Kingdom – A Great Western Railway freight train is struck by a landslide at Vriog, Merionethshire. The locomotive is pushed into the sea; both engine crew are killed. 
March 17 – Manchukuo (now part of China) – A passenger train is stopped between Chengchitun and Ssupingkai (now Siping) due to a "dislocation of the rails" and a freight train collides with its rear, killing 50 people and injuring 70. May 25 – United Kingdom – A Southern Railway passenger train derails at , London and comes to rest foul of the adjacent line. A passing express train collides with it, killing five people and injuring 35. The cause was a failure to implement a speed restriction during permanent way works. July 10 – United Kingdom – A London Midland and Scottish Railway express passenger train collides with a freight train at , Cumberland due to a signalman's error. One person is killed. September 5 – United States – A milk train goes through a stop signal and collides with a stopped Erie Railroad passenger train in Binghamton, New York. 14 people are killed, and 30 are injured. September 8 – United Kingdom – A passenger train runs into four wagons which had been left on the line at Bowling Basin, Dunbartonshire during shunting operations. Five people are injured. October 24 – France – A Chemins de fer de l'État express from Cherbourg to Paris derails at between Saint-Élier and Conches-en-Ouche, and part of the train falls into the river Iton; 36 people are killed and 68 injured. December 23 – France – Lagny-Pomponne rail accident: Rear-end collision of Paris-Nancy express and Paris-Strasbourg fast train between Lagny-sur-Marne and Pomponne (Seine-et-Marne), 17 mi (23 km) out of Paris. 204 are killed and 300 injured aboard the Nancy express as its 7 wooden coaches are smashed. The driver of the Strasbourg train had passed a signal at danger in darkness and fog, but the "Crocodile" acoustic warning system was found to have failed because the contacts had iced over. The Compagnie de Chemin de Fer de l'Est was ordered to pay FFr44,000,000 in compensation to victims' families. 1934 February 18 – Italy – Near Populonia, on the single-track line from Campiglia Marittima to Piombino, a gasoline-powered railcar going collides with a steam special and catches fire. Of 48 passengers in the railcar, 34 are killed. February 26 – United States – Seven passengers and two enginemen are killed and some 40 others injured when a Fort Wayne Division Akron-to-Pittsburgh train of the Pennsylvania Railroad derails one mile (1600 m) short of Penn Station, its destination, just before 2200 hrs. Hitting a frozen switch, the pony truck on locomotive 1638 derails, turning the engine and tender over an embankment into Merchant Street and smashing a signal tower "to splinters" in the process. Two Pullman cars behind the motive power derail but stay upright, but a following coach and diner drop 20 feet (6.1 m) to the street when their couplings break. It is in the coach that the fatalities occur. February 26 – United States – The Pennsylvania Railroad express, the Fort Dearborn, struck a truck at a grade crossing in a snowstorm at Delphos, Ohio. The engine overturned and seven cars derailed, killing the engineer, the fireman, and the truck driver, and injuring four more. March 4 – USSR – At a station from Moscow (now in Russia), a stationary train is struck by another one, killing 19 and injuring 52. The enginemen of the second train are sentenced to death and three other railwaymen to prison. March 12 – USSR – At Tavatuy, which is northwest of Sverdlovsk (now Yekaterinburg, both places now in Russia), a passenger train runs past signals and crashes into a freight; 33 are killed and 68 injured. 
March 14 – El Salvador – The explosion of 7 tons of dynamite on a train at La Libertad, a port southwest of San Salvador, kills at least 250 people and injures about 1,000, and destroys many homes. Also involved in the fire are 4,000 cases of gasoline and 15,000 sacks of coffee. September 6 – United Kingdom – Two London Midland and Scottish Railway passenger trains collide at Port Eglinton Junction, Glasgow, Renfrewshire because the driver of one of them misreads signals. Nine people are killed and 58 are injured. September 28 – United Kingdom – Winwick rail crash, near Warrington: Overworked signal box crew forget a train halted at a signal and allow another train into the section; 12 people killed. November – United Kingdom – A London and North Eastern Railway passenger train collides with a lorry on a level crossing at Wormley, Hertfordshire and is derailed. Both locomotive crew are killed. December 27 – United States – Powellton, West Virginia: A train carrying miners and their families suffers a boiler explosion. The boiler shoots up into the air and lands on the first coach, crushing seventeen people to death. 1935 January 6 – USSR – At Porbelo on the railway from Leningrad (now St. Petersburg) to Moscow, all now in Russia, an express from Leningrad to Tiflis (now Tbilisi, Georgia) is stopped by a broken rail. The following train, an express to Moscow, runs past signals and crashes into it, killing 23 and badly injuring 56. Seven railwaymen are convicted of criminal negligence. February 25 – United Kingdom – A London Midland and Scottish Railway passenger train is derailed at , Worcestershire due to a combination of defective track and locomotive design. One person is killed. March 13 – United Kingdom – A London Midland and Scottish Railway express freight train is halted at , Hertfordshire due to a defective vacuum brake. A milk train runs into its rear and a coal train runs into the wreckage of the two trains. The line is reopened the next day. April 11 – United States – Rockville, Maryland: A school bus driver, returning students to Williamsport, Maryland from a field trip at 11:30pm, does not notice the reflective signs at a grade crossing and drives his bus into the path of an oncoming Baltimore & Ohio train. 14 students are killed, 15 others injured. In violation of a Maryland law requiring watchmen at crossings until midnight, the B&O had kept a watchman on duty only until 10pm. June 15 – United Kingdom – 1935 Welwyn Garden City rail crash: A signalman's error on the London and North Eastern Railway leads to one express train crashing into the rear of another, killing 13 passengers and injuring 81. June 25 – Ireland – Dún Laoghaire: The Drumm Train, a Battery Electric Multiple Unit, runs into a landslide between and . The train is derailed and is consequently damaged by fire. September 2 – United States – Islamorada, Florida: The upper Florida Keys are hit by the 1935 Labor Day hurricane. A 10-car rescue train is sent by the Florida East Coast Railway to evacuate hundreds of World War I veterans from government work camps, but is washed from the tracks when the Overseas Railroad is engulfed by a storm surge at Islamorada. Total train fatalities not known (at least 408 estimated storm deaths). The railway link to the Florida Keys is destroyed. The accident is mentioned in the film Key Largo. October 16 – Brazil – In the suburbs of Rio de Janeiro, an express hits a stationary passenger train, killing 20 and injuring over 100. 
December 13 – United States – Dearing, Georgia: Three trainmen were killed and ten others injured in the head-on collision of two Georgia Railroad trains at the station in this town near Augusta. The train bound from Augusta to Atlanta overran a switch and struck a train bound to Augusta from Atlanta which was standing at the depot. December 24 – Germany – Großheringen: A double-headed express from Berlin to Basel runs past signals and crashes into an Erfurt-Leipzig local on the junction next to a bridge. Some wreckage and bodies end up in the river Saale; 33 people are killed, 7 missing, and 27 seriously injured. 1936 January 15 – United Kingdom – A Great Western Railway freight train divides at , Oxfordshire, leaving six wagons on the main line. A following sleeping car express hauled by King Class locomotive 6007 King William III runs into the wagons at almost . Two people are killed. April 16 – Japan – At the Sumitomo Mine in Tadakuma, Iizuka, the cable snaps on the cable railway used by workers, and the emergency brakes do not hold. The 9-car train runs away and 52 people are killed, 2 missing, and 28 injured. June 22 – USSR – At Karymskoye, now in Russia, a train is allowed to set out while the track ahead is occupied. The rear-end collision kills 51 people and injures 52; the stationmaster is sentenced to death and eight other people to prison. July 25 – United States – Denver & Rio Grande Western 346, at the time on loan to the Colorado & Southern, was wrecked on Kenosha Pass after the engineer failed to slow down for a corner. The engine was running light and only the engineer was killed; the fireman, seeing the impending wreck, jumped clear of the engine. August 30 – New Zealand – 1936 Paraparaumu train wreck: a passenger train travelling from Auckland to Wellington derailed at Paraparaumu after striking a landslide brought down by heavy rain. One person was killed and five people were injured. October 1 – Poland – A German passenger train from Berlin to Pillau (now Baltiysk, Russia) collides with a freight at Lamberg in the Polish Corridor, killing 20 people and injuring 150. October 10 – Colombia – A trainload of troops is sent to combat bandits; the two rear cars break away, perhaps because of sabotage, and overturn, killing 30 people and injuring 40. United Kingdom – A Southern Railway boat train catches fire at station, Hampshire due to an electrical fault. 1937 January 16 – China – Sheklung: Aboard an express from Hong Kong to Canton (now Guangzhou), fire broke out in the third-class section. One source refers to a passenger setting fire to a toy made of celluloid, another to a sulfuric acid explosion. The train had neither continuous brakes nor any way to notify the driver. The three rear cars of the train were completely burned and bodies of passengers who jumped were scattered along the tracks. Altogether 112 people were killed and at least 40 injured. February 15 – United Kingdom – A London and North Eastern Railway express passenger train derailed at Sleaford North Junction, Lincolnshire due to excessive speed on a curve, killing four and injuring 15. March 1 – United Kingdom – A Great Western Railway passenger train collided with a freight train at , Buckinghamshire and derailed, killing one and injuring six. March 8 – United Kingdom – A London and North Eastern Railway passenger train derailed at , Lincolnshire due to defective track. 
March – USSR – An official announcement states that 72 employees of the Soviet Railways have been found responsible for an accident in Siberia (now in Russia) and executed, and another 3,000 railway officials are under arrest; presumably this is actually part of the Great Purge. April 2 – United Kingdom – Battersea Park rail crash: Two passenger trains collide, killing 10 and injuring 17. The signalman believed there was a fault with his equipment and overrode the interlocking. April 26 – United States – Dominguez Canyon rail crash: A Denver and Rio Grande Western Railroad passenger train crashed into Wells Gulch at around 20:00 due to a burned-out trestle, killing 2 and injuring 7. The engineer, CD Freeman, and the fireman, FS Perkins, were killed when the train fell through the unsupported rails, the trestle having burned out earlier in the day. Due to the darkness the crew did not realize that the trestle had burned out; the crash was ruled accidental. June 13 – United Kingdom – A London and North Eastern Railway passenger train derails south of Durham because the driver misreads signals. Nine people are injured. June 28 – United Kingdom – A Southern Railway passenger train overruns signals and crashes into an electricity substation at , Kent. The train had been ordered to make an unscheduled stop at Swanley but the driver was not told of this. Four people are killed. July 17 – British India – An express from Punjab to Howrah derails on damaged track at Bihta, and four cars telescope together; 107 people are killed and about 65 injured. The Bihta deputy traffic controller is convicted of allowing trains to run at full speed after the track damage was reported. Testing reveals that the damage was caused by the Class XB locomotives in use on the line, which were prone to dangerous oscillations when running at speed. July 29 – France – At Villeneuve-Saint-Georges station just outside Paris, railway staff became confused as to whether PLM railway train 107 was going toward Melun or its actual destination of Nîmes. The power-operated switch was moved while the train was crossing it at , and the derailment killed 29 people. November 16 – United Kingdom – A Great Western Railway steam railcar overruns a signal and diverts into a short siding. It overruns the buffers and collides with a signal box at Ealing, London. November 17 – United Kingdom – A London, Midland and Scottish Railway passenger train overruns signals and rear-ends an express passenger train at Coppenhall Junction, Crewe, Cheshire. November 18 – Canada – Thirteen cars of a Canadian Pacific Railway train derail near Red Rock, Ontario with some of the cars falling onto the adjacent Canadian National Railways track, effectively blocking traffic on both railways. December 4 – Spain – A 10-car steam train and a 2-car electric one collide at Valencia, killing 20 people. December 10 – United Kingdom – Castlecary rail accident: A London and North Eastern Railway Edinburgh-Glasgow commuter express, travelling in white-out conditions, passes a danger signal and rear-ends a local train standing in the station; 35 are killed and 179 injured, many of them seriously. The local had been running late. 1938 January 3 – China – A train from Canton (now Guangzhou) to Hankou (now Wuhan) derails owing to subsidence at Shinchow. At least 100 people are killed or injured. January 3 – China – A train from Canton (now Guangzhou) to Wachung hits debris in a tunnel damaged by Japanese bombs; 42 are killed. 
January 16 – China – Fire breaks out, possibly due to arson, aboard an express on the Kowloon-Canton Railway, Kowloon being in Hong Kong, and Canton now being Guangzhou. There are 87 people killed and 30 injured, all in one car of the train. January 21 – United Kingdom – An express passenger train collides with an empty coaching stock train at Oakley Junction, Bedfordshire due to a signalman's error. Three people are killed and 46 injured. March 10 – United Kingdom – Charing Cross (Northern Line) tube crash: a London Underground train collides with equipment at Charing Cross and derails; no one is killed but 12 people sustain injuries. March 29 – Spain – At a level crossing near Valencia, a train crashes into a gasoline truck and catches fire; 39 people are killed. April 4 – Southern Rhodesia (now Zimbabwe) – While running through a narrow cutting between Plumtree and Tsessebe (near the border with Bechuanaland, now Botswana), an international express from Bulawayo to Cape Town collides head-on with a freight train whose crew has been given erroneous train orders. Altogether 26 people are killed and 22 injured; rescue is impaired by the inaccessible location, but some uninjured passengers give first aid to the victims. May 17 – United Kingdom – Charing Cross (District Line) tube crash: a London Underground District Line train collides with another train during maintenance work at Charing Cross; 6 people are killed and 46 injured. June 19 – United States – Custer Creek train wreck: Milwaukee Road's Olympian plunges into Custer Creek when a 25-year-old bridge, weakened by heavy rain, collapses; 47 people killed, many victims in a tourist sleeper that is submerged in 20 feet (6 m) of water for almost 36 hours. Some bodies recovered as far as 50 miles (80 km) downstream. July 30 – Jamaica – Near Balaclava Station, five overcrowded cars derail; 32 killed, 70 injured. August 19 – United Kingdom – A Great Western Railway express passenger train is diverted into a siding at , Monmouthshire due to a signalman's error. The train crashes through the buffers but comes to rest short of the River Usk. August 21 – British India – At Vadamadura on the South Indian Railway, flood damage to a bridge derails a crowded train, killing 33 people and injuring 93. September 25 – Spain – On a single-track section at Martorell, northwest of Barcelona, a special train from Vilafranca del Penedès collides with a regular train from the coast. December 1 – United States – A school bus carrying 39 students in Sandy, Utah pulls onto a railroad track crossing during a snowstorm. A Denver & Rio Grande Western freight train comprising more than 80 cars emerges from the storm, killing the bus driver and 23 students. December 19 – Brazil – On the Central Railway of Brazil, a passenger train crew picks up the wrong order and collides with a freight train between João Ayres and Sitio; 42 people are killed and at least 70 injured. December 24 – Romania – At Etulia (now in Moldova), two passenger trains collide head-on on single track due to a misunderstanding between stationmasters. One is a local; the other is carrying soldiers going on leave. Altogether 93 people are killed, including a general and two colonels, and 147 are injured. 1939 January 4 – Canada – The derailment of a westbound Canadian Pacific Railway freight train near Nelson, British Columbia, kills the engineer and injures other crewmen. 
January 12 – British India – At Hazaribagh on the East Indian Railway, saboteurs remove a length of rail. The locomotive of the Dehra Dun Express from Howrah actually makes it across the gap and regains the rails, but the track is sufficiently damaged that the rest of the train is derailed, with 21 deaths and 71 injuries. A reward offer of 25,000 rupees fails to lead to a prosecution. This is one of 131 sabotage attempts against the railway in a 10-year period. January 26 – United Kingdom – An empty fish train runs into the back of a passenger train near , Hertfordshire. January 26 – United Kingdom – A passenger train runs into the back of another near Hatfield. Two people are killed and seven are injured. February 11 – Spain – Sarrià-Sant Gervasi: A workman's train runs away downhill, crashing into a stationary wagon and then the rear of another train; 53 are killed and at least 100 injured. April 13 – Mexico – During the period of disruption following the nationalization of Mexico's railways, trains from Guadalajara and Laredo, Texas, collide and 12 passenger cars are destroyed; at least 26 people are killed. April 16 – United States – At Windber, Massachusetts, a flood washes away many trolley cars; the damaged cars are later preserved as a museum display. No casualties are reported. April 17 – British India – At Majhdia, from Calcutta (now Kolkata) on the Eastern Bengal Railway, the North Bengal Express collides with the Dacca Mail (Dacca is now Dhaka, Bangladesh), killing 35 people including two Bengal legislators, and injuring 31. April 27 – United States – A log truck across the track sends a Union Pacific passenger train flying off the tracks in Bucoda, Washington; the train engineer and fireman are killed, along with the truck driver, while six passengers are injured. June 1 – United Kingdom – A London and North Eastern Railway express passenger train collides with a lorry on a level crossing at Hilgay Fen, Norfolk and is derailed. Four people are killed and twelve injured. June 8 – United Kingdom – Two passenger trains collide at station, Lancashire because one of them was departing against a danger signal. August 5 – United Kingdom – A London, Midland and Scottish Railway express passenger train derails at Saltcoats, Ayrshire when vandals place rocks on the line. Four people are killed. August 5 – United Kingdom – Workmen building a new military camp crossing the Southern Railway line at Bramshot Halt are struck by an express train. Three are killed and others seriously injured. August 12 – United States – 1939 City of San Francisco derailment – An act of sabotage sends the City of San Francisco flying off a bridge in the Nevada desert; 24 passengers and crew members are killed, and five cars are destroyed. This case remains unsolved. September 2 – France – A collision at Les Aubrais kills 35 and injures 77. October 8 – Germany – At Gesundbrunnen in Berlin, an express from Sassnitz collides with another passenger train, killing 20 people. October 14 – United Kingdom – , Buckinghamshire: The London Midland and Scottish Railway Night Scot express passenger train collides with an LNWR Class G1 locomotive that was adding a van at the rear of a Euston-Inverness passenger train at Bletchley railway station, demolishing part of the station. Five people are killed and over 30 are injured. Both drivers of the Night Scot had failed to observe several signals properly. 
October 16 – United Kingdom – A London Midland and Scottish Railway train is involved in an accident at Winwick Junction, Cheshire. The report into the accident is declared secret due to World War II. October 21 – Mexico – A freight train from Veracruz to the Pacific coast, with workers and their families on board, derails and catches fire between Santa Lucrecia and Matías Romero; 40 are killed. October 26 – Germany – An accident at St. Valentin (now in Austria) kills at least 20 people and seriously injures 30. October 30 – Italy – An electric train from Milan to Rome gets only as far as Lambrate before colliding with an express from Venice; about 20 are killed. November 12 – Germany – On the single-track branch line, now in Poland, between Cosel and Bauerwitz (now Koźle and Baborów), a signalman's error at Rosengrund (now Zakrzów) causes a collision of two crowded local trains between there and Langlieben (now Długomiłowice). There are 43 dead and 60 injured. November 20 – Germany – At Spandau in Berlin, nine people die in a collision. November 26 – Germany – Nieder Wöllstadt: 15 people die in a collision. December 1 – Romania – A construction special, carrying workers and materials for a new branch from Avrig to nearby Mârșa, runs away downhill and crashes near Sibiu; 20 people die and 16 are seriously injured. December 12 – Germany – Hagen: A head-on collision results in the deaths of 15 people. December 22 – Germany – Genthin rail disaster: A collision occurs when train D180 drives into the previously delayed and overcrowded train D10 from Berlin to Cologne. 278 dead, 453 injured. This is the highest number of fatalities ever in a rail accident in Germany. December 22 – Germany – Markdorf, near Lake Constance: A traffic controller's mistake leads to a head-on collision of a passenger train and a freight train. 101 dead, 47 injured. December 30 – Italy – A troop train stops at Torre Annunziata to be overtaken by the Calabria express, but cannot be sidetracked because the points are frozen. The troop train is ordered to proceed, but the express runs past signals and crashes into it, resulting in the deaths of 29 people. See also London Underground accidents References Sources External links Railroad train wrecks 1907–2007 Rail accidents 1930-1939 20th-century railway accidents
List of rail accidents (1930–1939)
[ "Technology" ]
6,468
[ "Railway accidents and incidents", "Lists of railway accidents and incidents" ]
70,855,353
https://en.wikipedia.org/wiki/Aureoverticillactam
Aureoverticillactam is an antifungal macrocyclic lactam with the molecular formula C28H39NO4 which is produced by the marine bacterium Streptomyces aureoverticillatus. Aureoverticillactam also has cytotoxic activity. References Further reading Aureoverticillactam Macrocycles Lactams Polyenes Triols
Aureoverticillactam
[ "Chemistry" ]
86
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs", "Macrocycles" ]
70,856,850
https://en.wikipedia.org/wiki/Aubry%E2%80%93Andr%C3%A9%20model
The Aubry–André model is a toy model of a one-dimensional crystal with periodically varying onsite energies. The model is employed to study both quasicrystals and the Anderson localization metal-insulator transition in disordered systems. It was first developed by Serge Aubry and Gilles André in 1980. Hamiltonian of the model The Aubry–André model describes a one-dimensional lattice with hopping between nearest-neighbor sites and periodically varying onsite energies. It is a tight-binding (single-band) model with no interactions. The full Hamiltonian can be written as H = t Σ_n (|n⟩⟨n+1| + |n+1⟩⟨n|) + Σ_n ε_n |n⟩⟨n|, where the sum goes over all lattice sites n, |n⟩ is a Wannier state on site n, t is the hopping energy, and the on-site energies are given by ε_n = λ cos(2πβn + φ). Here λ is the amplitude of the variation of the onsite energies, φ is a relative phase, and β sets the period of the onsite potential modulation in units of the lattice constant. This Hamiltonian is self-dual as it retains the same form after a Fourier transformation interchanging the roles of position and momentum. Metal-insulator phase transition For irrational values of β, corresponding to a modulation of the onsite energy incommensurate with the underlying lattice, the model exhibits a quantum phase transition between a metallic phase and an insulating phase as λ is varied. For example, for β = (1+√5)/2 (the golden ratio) and almost any φ, if λ > 2t the eigenmodes are exponentially localized, while if λ < 2t the eigenmodes are extended plane waves. The Aubry–André metal-insulator transition happens at the critical value of λ which separates these two behaviors, λ = 2t. While this quantum phase transition between a metallic delocalized state and an insulating localized state resembles the disorder-driven Anderson localization transition, there are some key differences between the two phenomena. In particular the Aubry–André model has no actual disorder, only incommensurate modulation of onsite energies. This is why the Aubry–André transition happens at a finite value of the pseudo-disorder strength λ, whereas in one dimension the Anderson transition happens at zero disorder strength. Energy spectrum The energy spectrum is a function of β and is given by the almost Mathieu equation E ψ_n = t(ψ_{n+1} + ψ_{n-1}) + λ cos(2πβn + φ) ψ_n. At λ = 2t this is equivalent to the famous fractal energy spectrum known as Hofstadter's butterfly, which describes the motion of an electron in a two-dimensional lattice under a magnetic field. In the Aubry–André model the magnetic field strength maps onto the parameter β. Realization In 2008, G. Roati et al. experimentally realized the Aubry–André localization phase transition using a gas of ultracold atoms in an incommensurate optical lattice. In 2009, Y. Lahini et al. realized the Aubry–André model in photonic lattices. See also Arnold tongue Bose–Hubbard model References Condensed matter physics
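A minimal numerical sketch of the localization transition (not from the article; the open boundary conditions, the Fibonacci-number system size, and the inverse participation ratio diagnostic are standard but assumed choices): exact diagonalization of the tight-binding matrix shows the mean IPR jump from roughly 1/L for extended eigenmodes to order one for localized ones on either side of the self-dual point λ = 2t.

```python
# Sketch of the Aubry-Andre metal-insulator transition via exact diagonalization.
# Assumptions: t = 1, open boundaries, mean IPR over all eigenmodes as the diagnostic.
import numpy as np

def aubry_andre_hamiltonian(L, t, lam, beta, phi):
    """H[n][n] = lam*cos(2*pi*beta*n + phi); H[n][n+1] = H[n+1][n] = t."""
    n = np.arange(L)
    H = np.diag(lam * np.cos(2 * np.pi * beta * n + phi))
    H += np.diag(np.full(L - 1, t), 1) + np.diag(np.full(L - 1, t), -1)
    return H

L, t, beta, phi = 610, 1.0, (np.sqrt(5) - 1) / 2, 0.1   # beta: inverse golden ratio
for lam in (1.0, 2.0, 3.0):                             # below, at, and above lambda = 2t
    _, vecs = np.linalg.eigh(aubry_andre_hamiltonian(L, t, lam, beta, phi))
    ipr = np.mean(np.sum(np.abs(vecs) ** 4, axis=0))    # ~1/L extended, O(1) localized
    print(f"lambda = {lam}: mean IPR = {ipr:.4f}")
```

Choosing L as a Fibonacci number makes the rational approximant to the golden ratio fit the chain well, which keeps finite-size effects on the transition small.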
Aubry–André model
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
579
[ "Phases of matter", "Condensed matter physics", "Matter", "Materials science" ]
70,860,687
https://en.wikipedia.org/wiki/International%20Journal%20of%20Extreme%20Manufacturing
The International Journal of Extreme Manufacturing is a bimonthly peer-reviewed open-access scientific journal covering extreme manufacturing, ranging from fundamentals to process, measurement and systems, as well as materials, structures, and devices with extreme functionalities. The journal was established in 2019. Abstracting and indexing The journal is abstracted and indexed in: Astrophysics Data System Chemical Abstracts Service Ei Compendex Inspec ProQuest databases Science Citation Index Expanded Scopus According to the Journal Citation Reports, the journal has a 2023 impact factor of 16.1. References External links Mechanical engineering journals English-language journals Academic journals established in 2019 IOP Publishing academic journals Bimonthly journals
International Journal of Extreme Manufacturing
[ "Engineering" ]
140
[ "Mechanical engineering journals", "Mechanical engineering" ]
70,864,908
https://en.wikipedia.org/wiki/Phosphate%20sulfate
The phosphate sulfates are mixed anion compounds containing both phosphate and sulfate ions. Related compounds include the arsenate sulfates, phosphate selenates, and arsenate selenates. Some hydrogen phosphate sulfates are superprotonic conductors. List Artificial Organic derivatives A catenated sulfophosphate has the sulfur and phosphorus joined by an oxygen atom. In biochemistry, metabolism of sulfate may use such a group, for example with adenosine-5'-phosphosulfate. References Phosphates Mixed anion compounds Sulfates
Phosphate sulfate
[ "Physics", "Chemistry" ]
112
[ "Matter", "Mixed anion compounds", "Sulfates", "Salts", "Phosphates", "Ions" ]
70,865,048
https://en.wikipedia.org/wiki/Abequose
Abequose is a hexose and a 3,6-dideoxysugar. It is a constituent of the O-specific chains in lipopolysaccharides that occur in certain serotypes of Salmonella and Citrobacter bacteria. It is the enantiomer of colitose. References External links Hexoses Deoxy sugars
Abequose
[ "Chemistry" ]
81
[ "Carbohydrates", "Deoxy sugars", "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
70,865,508
https://en.wikipedia.org/wiki/Dibrospidium%20chloride
Dibrospidium chloride, also known as spirobromin, is a drug being investigated to treat bone cancer. It has potential anti-inflammatory and anti-neoplastic properties. It is an alkylating antineoplastic agent. Dibrospidium chloride and related compounds were developed in Russia in the 1980s. It is currently used in Russia as a cytostatic antitumor chemotherapeutic drug. References Experimental cancer drugs Alkylating antineoplastic agents Quaternary ammonium compounds Spiro compounds Nitrogen heterocycles Organobromides Chlorides Amides
Dibrospidium chloride
[ "Chemistry" ]
125
[ "Chlorides", "Inorganic compounds", "Functional groups", "Salts", "Organic compounds", "Amides", "Spiro compounds" ]
58,464,262
https://en.wikipedia.org/wiki/Bette%20Korber
Bette Korber is an American computational biologist focusing on the molecular biology and population genetics of the HIV virus that causes infection and eventually AIDS. She has contributed heavily to efforts to obtain an effective HIV vaccine. She created a database at Los Alamos National Laboratory that has enabled her to design novel mosaic HIV vaccines, one of which is currently in human testing in Africa. The database contains thousands of HIV genome sequences and related data. Korber is a scientist in theoretical biology and biophysics at Los Alamos National Laboratory. She has received the Ernest Orlando Lawrence Award, the Department of Energy's highest award for scientific achievement. She has also received several other awards including the Elizabeth Glaser Award for pediatric AIDS research and the Richard Feynman Award for Innovation. Early life and education Bette Korber grew up in Southern California. She earned her B.S. in chemistry in 1981 from California State University, Long Beach, where her father was a sociology professor, her mother graduated in nursing, and her sister graduated in journalism. From 1981 to 1988, she was in the graduate program at the California Institute of Technology (Caltech), where she worked with Iwona Stroynowski in Leroy Hood's laboratory, receiving her PhD in chemistry in 1988. Her work focused on how interferon, induced by viral infections, regulates the expression of major histocompatibility complex type 1 genes, which produce cell surface proteins that participate in the rejection of tissue transplants. She then became a postdoctoral fellow with Myron Essex, working on the molecular epidemiology of the AIDS/HIV virus and HTLV-1, the human leukemia virus, at the Harvard School of Public Health until 1990. There, Korber used polymerase chain reaction (PCR) to show both complete and deleted versions of viral genomes in leukemic cells. Her work on these viral partial and complete genomes was influential and widely cited. She became a visiting faculty member at the Santa Fe Institute in 1991, continuing in that position until 2011. Research Korber conducts her research at Los Alamos National Laboratory, where she began in 1990. Her approach involves applying computational biology to the design of a vaccine against the HIV/AIDS virus. She first became interested in HIV when a close friend of hers and her fiancé's at Caltech contracted one of the first cases of AIDS in Pasadena, California. She said, "We learned a lot about HIV while he was sick. But there was no treatment for him and he died in 1991. I decided when I graduated from my PhD program that I wanted to work on HIV." Several years later, looking back on this event, she described its effects: "I hate HIV ... I lost a couple friends to it. HIV kills in horrible ways. I think of what the epidemic has done to Africa and it motivates me." HIV database Korber oversees the HIV Database and Analysis Project at Los Alamos. She and her team have built a global HIV database of more than 840,000 published sequences of the viral genome. In addition, the database focuses on the small regions (called epitopes) within the virus that can be recognized by antibodies, and evaluates the evidence for the strength of each epitope in eliciting immune responses. There is also data on the immunological profiles of individuals resistant to HIV. Korber and many other researchers have applied the data to devise possible treatments and vaccines against HIV. 
Her work has resulted in the design of vaccines now being tested in clinical trials. HIV vaccine design Creating a vaccine against HIV has been challenging because the virus mutates rapidly, creating multiple variants that may not be recognized by immune system components specific to the original infecting virus. The most variable region is the surface of the virus, but there is also some variation of the internal proteins involved in virus replication, which may be attacked by the cellular immunity system or T cell responses. A recent approach that Korber and collaborators have taken is to design mosaic antigens. Korber developed a novel mosaic HIV vaccine that may slow or prevent HIV infection; this is currently in human testing in Africa. The goal of the mosaic antigen vaccine is to protect the vaccinated person against the great variety of HIV variants encountered. Since the proteins of HIV vary so greatly, mosaic test proteins are designed to represent the most common forms of HIV-1 virus that can be recognized by antibodies or cellular immune responses (epitopes). In 2009, Korber described the process: "I create sort of little Frankenstein proteins that look and feel like HIV proteins but they don't exist in nature." Several of the major variations are included in each molecule of protein, thus producing a variant protein antigen that probably does not exist in the wild virus population but should cross-react with variants that do exist. Korber has taken two different approaches to designing such antigens. Her group has developed a computer algorithm to choose epitopes to combine into a mosaic molecule for the mosaic antigens. In 2009, she described a designed mosaic protein this way: "People didn't know if it would fold properly, if it would be antigenic, or if it would have the same sites that [are] recognized by killer T cells". They found that the newly designed antigens did fold properly, acted as strong antigens, and were recognized by the cytotoxic T cells (killer cells). Also, Korber and her collaborators have developed a graphical analysis called Epigraph that can generate promising antigens with a mixture of epitopes. Korber explains that the approach of designing a protein via computer, combining bits of known proteins that provoke immune responses, had never been tried. She says, "Even after it worked, it was hard to convince people that this novel thing could be a vaccine because it hadn't been done before". In collaboration with Dan Barouch, a professor at Harvard Medical School, some of these antigens have been tested in monkeys as possible vaccines. With one series of tests, Barouch checked a number of possible ways to deliver the virus genes and chose to use the common cold virus as a vehicle. The tested mosaic vaccine routinely slowed monkey infection with the closely related Simian Immunodeficiency Virus (SIV), and for 66 percent of monkeys exposed multiple times, no infection resulted. Next, in collaboration with the National Institutes of Health, Janssen Pharmaceutical Companies (a division of Johnson & Johnson), and the Bill and Melinda Gates Foundation, the researchers tested a mosaic vaccine for safety in human subjects; it passed that test too. In 2017, the group of collaborators announced a human efficacy trial with that same mosaic protein preparation, vaccinating 2,600 women in Sub Saharan Africa, who will be examined for several years to show how effectively, if at all, the vaccine prevents infection. 
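The coverage idea behind mosaic antigen design can be illustrated with a toy greedy sketch. This is an illustration only, not the Los Alamos algorithm (which uses a genetic algorithm over recombined natural sequences), and all sequences below are invented: each candidate protein is scored by how many of the population's 9-mers, the length of a typical T-cell epitope, it adds to the set already covered.

```python
# Toy greedy sketch of epitope (k-mer) coverage, the objective behind mosaic antigens.
# Assumptions: k = 9 (typical T-cell epitope length); the sequences are made up.
def kmers(seq, k=9):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def greedy_mosaic(population, candidates, n_picks=2, k=9):
    """Greedily pick candidate sequences maximizing k-mer coverage of the population."""
    pool = set().union(*(kmers(s, k) for s in population))  # all epitope-length k-mers
    chosen, covered = [], set()
    for _ in range(n_picks):
        remaining = [c for c in candidates if c not in chosen]
        best = max(remaining, key=lambda c: len((kmers(c, k) | covered) & pool))
        chosen.append(best)
        covered = (covered | kmers(best, k)) & pool
    return chosen, len(covered) / len(pool)

population = [  # invented stand-ins for natural variant sequences
    "MGARASVLSGGELDRWEKIRLRPGGKKKYKLKHIV",
    "MGARASVLSGGKLDRWEKIRLRPGGKKHYMLKHIV",
    "MGARASILSGGELDKWEKIRLRPGGKKQYRLKHIV",
]
mosaics, coverage = greedy_mosaic(population, population, n_picks=2)
print(f"covered fraction of population 9-mers: {coverage:.2f}")
```

The real mosaic method goes further by allowing in silico recombination, so a single designed protein can stitch together high-frequency k-mers from many strains rather than being restricted to whole natural sequences.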
Korber cautioned that the effectiveness of this strategy in monkeys is not a guarantee that a human vaccine will work. In recognition of her research, Korber received the 2018 Feynman Award for Innovation, becoming the first woman at Los Alamos National Laboratory to receive one. She recalled that at Caltech, at a time when few women were there, she took a class with physicist Richard Feynman and became friends with him. She said, "At a time when kindness seemed rare, I really appreciated his generous spirit and encouragement. I think he would have been pleased about this award". Dating the HIV-1 virus In the history of the HIV/AIDS virus, with regard to when and where HIV originated, Edward Hooper had postulated in a best-selling book called The River: A Journey to the Source of HIV and AIDS in 1999 that HIV could have jumped from chimpanzees to humans because of an accidental contamination by chimpanzee SIV of the oral polio vaccine (CHAT) used in Africa in the 1950s. Korber and her colleagues employed the Los Alamos National Laboratory database's genomic data to calculate when the HIV sequence evolution began, using a model of evolution based on the mutation rate of HIV strains and assuming that rate was the same on all branches of the evolutionary tree. In 2000 they published an estimate of approximately 1930 for the origin of the human immunodeficiency virus. Their research was covered widely as establishing a new date for the origin of the human virus, discrediting the oral polio virus theory, and therefore refuting concerns about using oral polio vaccine (OPV). These two concepts of the origin of this virus plus other related theories continued to compete for scientific credibility. In 2008, Worobey and collaborators used a computer modeling approach similar to Korber's but with a relaxed evolutionary model and two older samples, collected earlier than any genomes included in Korber's study, and found an origin date for HIV of approximately 1900. COVID-19 As the COVID-19 pandemic unfolded, Korber and her Los Alamos colleagues devised computational strategies that look for evolutionary changes in genes that encode the Spike proteins that stud the SARS-CoV-2 coronavirus and give it its crown-like appearance. Her strategies can examine millions of global genomes stored by GISAID, and flag mutations that vary from the original Wuhan sequence by at least a minimum specified threshold amount. Using this strategy, she and colleagues identified a particular Spike mutation, Aspartic acid (Asp) to Glycine (Gly) at position 614 (D614G), that had been gaining prevalence across the globe since February 2020. This finding, which was controversial at first, was validated by multiple other groups, who showed that the D614G mutation improves the efficiency of replication and transmission of SARS-CoV-2; as of June 2020, this mutation has become part of all globally prevalent SARS-CoV-2 strains. As of September 28, 2021, she and her group continue to analyze GISAID data for novel variants, and she continues to be an active member of the NIH TRACE Working Group, whose objective is to "provide actionable intelligence on SARS-CoV-2 variants through genomic surveillance, data sharing and curation, and standardized in vitro assessments of therapeutics against novel strains." Personal life Korber married James Theiler in 1988. They have two sons. 
Out of her concern for the impact of AIDS on those with few financial resources, Korber contributed $50,000 from her EO Lawrence Award to help establish, along with family and friends, an AIDS orphanage in South Africa, working through Nurturing Orphans of AIDS for Humanity (NOAH). She has joined the Board of NOAH. She also contributed to the distribution of Earth Boxes, maintenance-free portable gardens, to orphanages, clinics, and schools in Africa. Awards and honors 2021: Los Alamos Medal, for changing the course of science 2019: Inventor of the Year, Battelle, awarded in Columbus, Ohio 2018: R&D Magazine Scientist of the Year 2018: Richard Feynman Award for Innovation 2014: Selected to Thomson Reuters Corporation's 100 Most Influential Minds of the Decade 2004: Ernest Orlando Lawrence Award 2002: Los Alamos National Laboratory Fellow 2001: Distinguished Alumna of CSULB 1997: Elizabeth Glaser Scientist, for work on pediatric AIDS, presented by Hillary Clinton Other work In 2019, Korber led a series of lectures called Frontiers in Science that focused on her work designing a vaccine against HIV. Selected publications Keele, Brandon F.; Giorgi, Elena E.; Salazar-Gonzalez, Jesus F.; Decker, Julie M.; Pham, Kimmy T.; Salazar, Maria G.; Sun, Chuanxi; Grayson, Truman; Wang, Shuyi; Li, Hui; Wei, Xiping (2008-05-27). "Identification and characterization of transmitted and early founder virus envelopes in primary HIV-1 infection". Proceedings of the National Academy of Sciences. 105 (21): 7552–7557. doi:10.1073/pnas.0802203105. ISSN 0027-8424. PMID 18490657. References Living people Computational chemistry Computational biology American women molecular biologists American molecular biologists American virologists California Institute of Technology alumni Los Alamos National Laboratory personnel Year of birth missing (living people)
Bette Korber
[ "Chemistry", "Biology" ]
2,528
[ "Theoretical chemistry", "Computational chemistry", "Computational biology" ]
58,467,542
https://en.wikipedia.org/wiki/Capped%20trigonal%20prismatic%20molecular%20geometry
In chemistry, the capped trigonal prismatic molecular geometry describes the shape of compounds where seven atoms or groups of atoms or ligands are arranged around a central atom defining the vertices of an augmented triangular prism. This shape has C2v symmetry and is one of the three common shapes for heptacoordinate transition metal complexes, along with the pentagonal bipyramid and the capped octahedron. Examples of the capped trigonal prismatic molecular geometry are the heptafluorotantalate () and the heptafluoroniobate () ions. References Stereochemistry Molecular geometry
Capped trigonal prismatic molecular geometry
[ "Physics", "Chemistry" ]
124
[ "Molecules", "Molecular geometry", "Stereochemistry", "Space", "Stereochemistry stubs", "nan", "Spacetime", "Matter" ]
58,467,706
https://en.wikipedia.org/wiki/Dodecahedral%20molecular%20geometry
In chemistry, the dodecahedral molecular geometry describes the shape of compounds where eight atoms or groups of atoms or ligands are arranged around a central atom defining the vertices of a snub disphenoid (also known as a trigonal dodecahedron). This shape has D2d symmetry and is one of the three common shapes for octacoordinate transition metal complexes, along with the square antiprism and the bicapped trigonal prism. One example of the dodecahedral molecular geometry is the ion. References Stereochemistry Molecular geometry
Dodecahedral molecular geometry
[ "Physics", "Chemistry" ]
115
[ "Molecular geometry", "Molecules", "Stereochemistry", "Space", "Stereochemistry stubs", "nan", "Spacetime", "Matter" ]
58,470,726
https://en.wikipedia.org/wiki/Pharmacy%20management%20system
The pharmacy management system, also known as the pharmacy information system, is a system that stores data and enables functionality that organizes and maintains the medication use process within pharmacies. These systems may be an independent technology for the pharmacy's use only, or in a hospital setting, pharmacies may be integrated within an inpatient hospital computer physician order entry (CPOE) system. Necessary actions for a basic, functioning pharmacy management system include a user interface, data entry and retention, and security limits to protect patient health information. Pharmacy computer software is usually purchased ready-made or provided by a drug wholesaler as part of their service. Various pharmacy software operating systems are commonplace throughout the many practice settings. Purpose The pharmacy management system serves many purposes, including the safe and effective dispensing of pharmaceutical drugs. During the dispensing process, the system will prompt the pharmacist to verify the medication they have is for the correct patient and has the correct quantity, dosage, and information on the prescription label. Advanced pharmacy management systems offer clinical decision support and may be configured to alert the pharmacist to perform clinical interventions, such as an opportunity to offer verbal counseling if the patient's prescription requires additional education in the pharmacy. Pharmacy management systems should also serve the pharmacist throughout the Pharmacists' Patient Care Process, a cycle developed by the Joint Commission of Pharmacy Practitioners (JCPP). The process details the steps pharmacists take to provide tangible, proven care to their patients. Pharmacist patient care process The JCPP's pharmacist patient care process consists of five steps: collect, assess, plan, implement, and follow-up. Ideally, the pharmacy management system assists with each of these practices. The pharmacy system should Collect data at intake and continue to store and organize information as the pharmacist learns more about the patient's medications, their history, goals, and other factors that may affect their health. The technology within the pharmacy information system should allow the pharmacists to Assess the collected information to form a Plan and Implement creative strategies that address the patient's issues. After implementing a plan, the pharmacist should routinely Follow-Up with the patient and make adjustments as needed to further progress. Vendors Outpatient software vendors Outpatient pharmacies typically are retail pharmacies that offer patient care services outside of hospitals and treatment facilities. Outpatient pharmacies, also known as community pharmacies or independent pharmacies, offer care in the form of medication therapy management (MTM), patient education, and clinical services. Rx30 Developed in Florida in 1980, Rx30 is multi-platform software that offers automated pharmacy processes, vendor integrations, and compounding functionality. The Core Services include Accounts Receivable, Point of Sale, and Virtual Pharmacist, a feature that automates the refill process. On October 6, 2016, Rx30 announced its merger with Computer-Rx. Inpatient software vendors Inpatient pharmacies operate within hospitals and dispense medications to admitted patients receiving treatment. 
Inpatient pharmacists manage patient health alongside doctors and nurses, and the pharmacy management system must integrate with the various systems operating throughout the hospital to maintain accurate Electronic Medical or Health Records (EMR, EHR). Epic Willow Epic, named for the long-form poems chronicling heroes' lives, was founded in 1979 by Judith R. Faulkner. Epic software currently manages over 200 million patient electronic records. The Willow Inpatient Pharmacy System, when combined with other Epic systems, allows pharmacies access to medication administration records (MAR) and links all aspects of the ordering and dispensing process to simplify collaboration amongst all parties involved in patient care management. Cerner PharmNet: Medication Manager Cerner Corporation has provided health information technology (HIT) to hospitals and healthcare systems since 1979. Cerner PharmNet enables pharmacists to automate their workflow processes and center care around the patient, not the encounter. This software allows pharmacists and doctors to manage prescriptions and verification from the same order, streamlining medication management. Datascan: Winpharm Datascan was started in 1981 by Alex Minassian, focused on providing pharmacy management software to independently owned community pharmacies. Initially, Datascan modified the code it had purchased and began selling its DOS-based version of the software. In the early 2000s, Winpharm was written and released as an updated Windows version of the software, which retained the ability to fill prescriptions quickly using only the keyboard on the fill screen. In 2009, Kevin Minassian purchased Datascan. Today, over 40 years later, Datascan continues to serve the needs of independent pharmacies nationwide with a focus on technology and support. See also Health information technology Pharmacoinformatics References Pharmacy Health informatics
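A minimal sketch of the dispensing-time verification described in the Purpose section above (illustrative only; the record fields and checks are assumptions, not any vendor's actual data model): the system compares the scanned fill against the prescription on file and reports discrepancies for the pharmacist to resolve before dispensing.

```python
# Hypothetical fill-verification check; field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Prescription:
    patient_id: str
    drug: str
    dose_mg: float
    quantity: int

def verify_fill(rx: Prescription, scanned_patient: str, scanned_drug: str,
                scanned_dose_mg: float, counted_quantity: int) -> list[str]:
    """Return the discrepancies the pharmacist must resolve before dispensing."""
    issues = []
    if scanned_patient != rx.patient_id:
        issues.append("patient mismatch")
    if scanned_drug != rx.drug:
        issues.append("drug mismatch")
    if scanned_dose_mg != rx.dose_mg:
        issues.append("dose mismatch")
    if counted_quantity != rx.quantity:
        issues.append("quantity mismatch")
    return issues

rx = Prescription("P001", "amoxicillin", 500.0, 30)
print(verify_fill(rx, "P001", "amoxicillin", 500.0, 28) or "OK to dispense")
# -> ['quantity mismatch']
```

A production system would of course layer clinical decision support (interaction and allergy checks) and audit logging on top of this basic right-patient, right-drug, right-dose, right-quantity gate.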
Pharmacy management system
[ "Chemistry", "Biology" ]
1,010
[ "Pharmacology", "Medical technology", "Health informatics", "Pharmacy" ]
58,470,933
https://en.wikipedia.org/wiki/Water%20supply%20in%20Sudan
Sudan is a country that is half desert and much of the population suffers from a shortage of clean drinking water as well as a reliable source of water for agriculture. With the Nile river in the east of the country, parts of Sudan have substantial water resources, but those in the west have to rely on wadis, seasonal wells which often dry up. These imbalances in water availability are a source of hardship, as well as a source of conflict. While storage facilities are limited, many local communities have constructed makeshift dams and reservoirs, weirs, which help in stabilizing farming communities. Farmers also utilize hafirs to store rain water which falls in the rainy season, but groundwater remains a vital source of water for over 80% of Sudanese people. For decades, political instability has led to terrible conditions and thwarted many projects and relief efforts, but aid is making its way through. Several water infrastructure projects have been enacted in recent years, with both domestic and international sources of funding. Funding from the UN has provided 9,550 local farmers with better access to water and fertile soils. The project also plans to replant forest cover in the wadi to reverse desertification. Darfur is located in an arid region, in the western part of Sudan, where water scarcity is common. Due to recent population growth, there is increased pressure on urban water supply sources and infrastructure. Water is now harder to access, especially for cattle farmers. Many women and children, mostly girls, spend countless hours each year walking to clean water supplies, leaving less time for childcare and schooling. They collect water from ponds, marshes, ditches, or hand-dug wells that are often contaminated with disease-causing parasites and bacteria. The region experiences dry and wet seasons: wet seasons bring plentiful rainfall and crops, but dry seasons force families to trek miles for water, and some relocate until the rains return. Water resources The Nile River flows through the eastern part of the country and provides a large portion of those living nearby with ample water for drinking and agriculture. Wetlands flanking the Nile cover almost 10% of the country, and support diverse riparian ecosystems. Others living in the more arid western region rely on wells or seasonal wadis to obtain their daily water. These wadis are dry stream beds for large portions of the year, but people are able to access the groundwater that accumulates underneath by digging well holes. Water storage infrastructure is limited throughout the western part of the country, but many local communities have constructed makeshift dams and reservoirs called weirs which can store water for future use and play a big part in stabilizing farming communities. Farmers also utilize hafirs to store rainwater during the rainy season, but groundwater remains a vital source of water for over 80% of Sudanese people. The Nubian Sandstone Aquifer System, the largest aquifer in the country, provides most of Sudan's drinking water. Quality Another concern within Sudan is the quality of the water people have access to. In eastern Sudan, a study was conducted in the cities of Wad Madani and Al Khartoum that revealed 86% of water in public taps met both Sudanese and international quality levels. In Darfur, water scarcity is more prevalent, with many people regularly being exposed to drought and famine conditions. 
Most of the western part of Sudan lacks year round access to quality water, as the wadis are dry for much of the year unless heavy rains fall. Due to instability in much of this part of the country, water quality dramatically decreases when compared to the more water-secure east. The capital of Sudan, Khartoum, will benefit greatly from the Grand Ethiopian Renaissance Dam and looks to be in a much better position with regards to accessing quality sanitary water in the near future. Many of the communities living near the border with Chad are exposed to chronic water shortages with no solution in sight until the conflicts are fully resolved. Water Treatment United Nations Office for Project Services (UNOPS) rehabilitated an unused water treatment plant in El Fasher, Darfur's state capital, installing a chlorination unit to ensure water quality. This unit now produces enough water for 37,500 people a day. Fecal contamination of drinking water supplies is the main cause of diseases found in water. Proper disposal and removal of fecal waste is rare and often difficult without proper plumbing infrastructure. Consequently, the child stunting rate increases with high levels of open defecation and limited access to improved water sources. Poor sanitation conditions cause about 700,000 child deaths a year and prevent full mental and physical development. Plans for improvement An organization called United Nations Office for Project Services (UNOPS) has carried out chlorination plant projects in Darfur's capital city of El Fasher with major funding from Japan. The organization rehabilitated a water treatment plant in 2010, then installed a chlorination plant to improve the quality for over 37,500 people. In the city of El Daein, UNOPS has rehabilitated water treatment facilities, helping over 50,000 people gain access to clean water. Japan has played a huge part by providing funding and expertise in projects to develop rural Sudan and its access to clean water. UNOPS has completed projects that now help over 250,000 people with access to a potable water source. The Grand Ethiopian Renaissance Dam is already under construction on the Nile river just nine miles upstream of the border of Sudan and has the potential for a multitude of positive effects for the country. More reliable river levels would allow large-scale irrigation and agricultural production that was not possible previously because the annual variation of the Nile within Sudan exceeds 8 m. Hydro-electric power produced from the dam would exceed the amount needed by Ethiopia, so Sudan will stand to benefit greatly by being able to purchase this extra power. Another benefit from this dam will be the creation of jobs, as the infrastructure to distribute the electricity efficiently does not exist yet and will have to be built. Grassroots organizations have seen success in the Darfur region, an example being the Wadi El Ku Catchment Forum, which was founded to help the 81,000 residents in the Wadi El Ku area provide more water for their crops. The local group consists of 50 representatives from 34 villages in the area, and these men and women decided the construction of weirs would be both cost-efficient and help the most people. Their efforts to build three weirs with funding from the UN provided 9,550 local farmers with better access to water and fertile soils. This project also plans to replant forest cover in the wadi to accommodate pastoral farmers and reverse the effects of previous desertification. 
See also History of irrigation in Sudan References Economy of Sudan Water supply
Water supply in Sudan
[ "Chemistry", "Engineering", "Environmental_science" ]
1,351
[ "Hydrology", "Water supply", "Environmental engineering" ]
58,472,849
https://en.wikipedia.org/wiki/Curve%20of%20growth
In astronomy, the curve of growth describes the equivalent width of a spectral line as a function of the column density of the material from which the spectral line is observed. Shape The curve of growth describes the dependence of the equivalent width W, which is an effective measure of the strength of a feature in an emission or absorption spectrum, on the column density N. Because a single spectral line has a characteristic shape, broadened from a pure line by various processes, the strength of the feature develops non-trivially as the optical depth of the medium that either absorbs or emits light increases. In the case of the combined natural line width, collisional broadening and thermal Doppler broadening, the spectrum can be described by a Voigt profile and the curve of growth exhibits three approximate regimes. For low optical depth, corresponding to low N, increasing the thickness of the medium leads to a linear increase of absorption and the equivalent line width grows linearly, W ∝ N. Once the central Gaussian part of the profile saturates, the Gaussian tails lead to a less effective growth, W ∝ √(ln N). Eventually, the growth is dominated by the Lorentzian tails of the profile, which decay as the inverse square of the distance from the line center, producing a dependence W ∝ √N. References Spectroscopy
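A short numerical sketch of these regimes (the units and the normalization of the optical depth are illustrative assumptions, not tied to any particular transition; it uses scipy.special.voigt_profile, available in SciPy 1.4 and later):

```python
# Trace the curve of growth W(N) for a single line with a Voigt absorption profile.
# tau(nu) = N * phi(nu) with a unit-normalized profile, so N is in arbitrary units.
import numpy as np
from scipy.special import voigt_profile

nu = np.linspace(-400.0, 400.0, 20001)         # frequency offset from line center
dnu = nu[1] - nu[0]
phi = voigt_profile(nu, 1.0, 0.01)             # Gaussian sigma = 1, Lorentzian gamma = 0.01

for N in (0.1, 1.0, 1e2, 1e5):                 # column densities spanning the regimes
    W = np.sum(1.0 - np.exp(-N * phi)) * dnu   # equivalent width: integral of 1 - e^(-tau)
    print(f"N = {N:8g}  ->  W = {W:7.3f}")
# W first grows linearly in N, then roughly as sqrt(ln N) while the Gaussian core
# is saturated, and finally as sqrt(N) once the Lorentzian damping wings dominate.
```

Plotting W against N on log-log axes reproduces the classic three-part shape: a unit slope, a nearly flat saturated section, and a slope of one half on the damping part.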
Curve of growth
[ "Physics", "Chemistry", "Astronomy" ]
259
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Astronomy stubs", "Spectroscopy" ]
58,472,865
https://en.wikipedia.org/wiki/Phosphaethynolate
The phosphaethynolate anion, also referred to as PCO, is the phosphorus-containing analogue of the cyanate anion with the chemical formula PCO− (equivalently written OCP−). The anion has a linear geometry and is commonly isolated as a salt. When used as a ligand, the phosphaethynolate anion is ambidentate in nature, meaning it forms complexes by coordinating via either the phosphorus or oxygen atoms. This versatile character of the anion has allowed it to be incorporated into many transition metal and actinide complexes, but the focus of the research around phosphaethynolate has now turned to utilising the anion as a synthetic building block to organophosphanes. Synthesis The first reported synthesis and characterisation of phosphaethynolate came from Becker et al. in 1992. They were able to isolate the anion as a lithium salt (in 87% yield) by reacting lithium bis(trimethylsilyl)phosphide, LiP(SiMe3)2, with dimethyl carbonate. The X-ray crystallographic analysis of the anion determined a short P–C bond length, indicative of a phosphorus–carbon triple bond, together with the C–O bond length. Similar studies were performed on derivatives of this structure, and the results indicated that dimerisation to form a four-membered ring containing lithium is favoured by this molecule. Ten years later, in 2002, Westerhausen et al. published the use of Becker's method to make a family of alkaline earth metal salts of PCO; this work involved the synthesis of the magnesium, calcium, strontium and barium bis-phosphaethynolates. Like the salts previously reported by Becker, the alkaline earth metal analogues were unstable to moisture and air and thus had to be stored at low temperatures in dimethoxyethane solutions. It was not until 2011 that the first stable salt of the phosphaethynolate anion was reported by Grutzmacher and co-workers. They managed to isolate the compound as a brown solid in 28% yield. The structure of the stable sodium salt, formed by carbonylation of sodium phosphide, contains bridging PCO units in contrast to the terminal anions found in the previously reported structures. The authors noted that this sodium salt could be handled in air as well as water without major decomposition; this emphasises the significance of the accompanying counter cation in the stabilisation of PCO. Direct carbonylation was a method also employed by Goicoechea in 2013 in order to synthesise a phosphaethynolate anion stabilised by a potassium cation sequestered in 18-crown-6. This method required the carbonylation of the phosphide precursor solutions and produced by-products that were readily separated during aqueous work-ups. The use of aqueous work-ups reflects the high stability of the salt in water. This method afforded the PCO anion in a reasonable yield of around 43%. Characterisation of the compound involved infra-red spectroscopy, in which the band indicative of the triple-bond stretch was observed. Ambidentate nature of the anion The phosphaethynolate anion is the heavier isoelectronic congener of the cyanate anion. It has been shown to behave in a similar way to its lighter analogue, as an ambidentate nucleophile. This ambidentate character of the anion means that it is able to bind via both the phosphorus and oxygen atoms depending on the nature of the centre being coordinated. Computational studies carried out on the anion, such as Natural Bond Orbital (NBO) and Natural Resonance Theory (NRT) analyses, can go part way to explaining why PCO can react in such a manner. 
The two dominant resonance forms of the phosphaethynolate anion localise negative charge on either the phosphorus or oxygen atom, meaning both are sites of nucleophilicity. The same applies to the cyanate anion, which is why PCO is observed to have similar pseudo-halogenic behaviour. Attack by oxygen Coordination via the oxygen atom is favoured by hard, highly electropositive centres. This is because oxygen is the more electronegative atom and thus prefers to bind via more ionic interactions. Examples of this type of coordination were presented in the work of Arnold et al. from 2015. The group found that actinide complexes of PCO involving uranium and thorium both coordinated through the oxygen. This is the result of the contracted nature of the actinide orbitals, which makes the metal centres more 'core-like' and thus favours ionic interactions. Attack by phosphorus On the other hand, softer, more polarisable centres prefer to coordinate in a more covalent manner through the phosphorus atom. Examples of this include complexes accommodating a neutral or sparsely charged transition metal centre. The first example of this mode of PCO binding was published by Grutzmacher and co-workers in 2012. The group's studies used a Re(I) complex, and the analysis of its bonding parameters and electronic structure showed that the phosphaethynolate anion coordinated in a bent fashion. This suggested the Re(I)–P bond possessed a highly covalent character, and thus the complex would be best described as a metallaphosphaketene. It was not until four years later that a second example of this coordination mode of PCO was identified, this time in the form of a W(0) pentacarbonyl complex produced by the Goicoechea group. Rearrangement of coordination character There is one particular reaction studied by Grutzmacher et al. that exhibits the rearrangement of the coordination character of PCO. Initially, when reacting the anion with triorganyl silicon compounds, it binds via the oxygen, forming the kinetic oxyphosphaalkyne product. The thermodynamic silyl phosphaketene product is generated when the kinetic product rearranges to allow PCO to coordinate through phosphorus. The formation of the kinetic product is charge controlled, which explains why it is formed by oxygen coordination: the oxygen atom favours a larger degree of ionic interaction as a result of its greater electronegativity. Contrastingly, the thermodynamic product of the reaction is generated under orbital control. This comes in the form of phosphorus coordination, as the largest contribution to the HOMO of the anion resides on the phosphorus atom; this is clearly visible in Figure 3. Reactivity of the anion Extensive studies involving the phosphaethynolate anion have shown that it can react in a variety of ways. It has documented use in cycloadditions, as a phosphorus transfer agent, as a synthetic building block, and as a pseudo-halide ligand (as described above). Phosphorus transfer agents In these types of reactions, CO is released as the phosphaethynolate anion acts as either a mild nucleophilic source of phosphorus or a Brønsted base. Examples of these types of reactions involving PCO include work conducted by Grutzmacher and Goicoechea. In 2014, Grutzmacher et al. reported that an imidazolium salt would react with the phosphaethynolate anion to produce a phosphinidene carbene adduct. Computational mechanistic studies were conducted on this reaction using density functional theory at the B3LYP/6-31+G* level. 
The results of these investigations suggested that the lowest-energy and therefore most likely pathway involves PCO acting as a Brønsted base, initially deprotonating the acidic imidazolium cation to generate the intermediate phosphaketene, HPCO. The highly unstable protonated PCO remains hydrogen-bonded to the newly produced N-heterocyclic carbene prior to rearrangement and formation of the observed product. In this case, PCO does not act as a mild nucleophile due to the augmented stability of the starting imidazolium cation. On the other hand, in the work published by Goicoechea and co-workers in 2015, the phosphaethynolate anion can be seen to act as a source of nucleophilic phosphide. The anion was seen to add across the double bond of a cyclotrisilene, thus introducing a phosphorus vertex into its scaffold (after undergoing decarbonylation). Cycloaddition reagents After synthesising the potassium salt of the phosphaethynolate anion in 2013, Goicoechea et al. began to look into the potential of PCO towards cycloadditions. They found that the anion could react in a [2+2] fashion with a diphenyl ketene to produce the first isolable example of a four-membered monoanionic phosphorus-containing heterocycle. They employed the same method to test other unsaturated substrates such as carbodiimides and found that the likelihood of cyclisation relies heavily on the nature of the substituents on the unsaturated substrate. Cycloaddition reactions involving the phosphaethynolate anion have also been shown by Grutzmacher and co-workers to be a viable synthetic route to other heterocycles. One simple example is the reaction between NaPCO and an α-pyrone. This reaction yields the sodium phosphinin-2-olate salt, which is stable to both air and moisture. Synthetic building blocks A large part of the research involving PCO now looks into utilising the anion as a synthetic building block from which to derive phosphorus-containing analogues of small molecules. The first major breakthrough in this area came from Goicoechea et al. in 2013; they published the reaction between the PCO anion and ammonium salts, which yielded the phosphorus-containing analogue of urea in which phosphorus replaces a nitrogen atom. The group predicts that this heavier congener could have applications in new materials, anion sensing and coordination chemistry. Goicoechea and co-workers were also able to isolate the heavily sought-after phosphorus-containing analogue of isocyanic acid, HPCO, in 2017. This molecule is thought to be a crucial intermediate in many reactions involving PCO (including P-transfer to an imidazolium cation). Moreover, the most recent addition to this class of small molecules is the phosphorus-containing analogue of N,N-dimethylformamide. This work, in which the phosphorus again replaces a nitrogen atom, was published in 2018 by Stephan and co-workers. Generating acylphosphines in this manner is considered a much milder route than other current strategies that require multi-step syntheses involving toxic, volatile and pyrophoric reagents. Other analogues The other analogues of the phosphaethynolate anion all obey the general formula E−C−X and are made by varying E and X. When changing either atom, unique trends amongst the different analogues become apparent. Varying E As 'E' is varied by descending group 15, there is a clear shift in the weights of the resonance structures towards the ketene-like analogue. 
This reflects the decrease in effective orbital overlap between E and C, which in turn disfavours multiple bond formation. This increasing tendency to form double rather than triple E−C bonds is also reflected in calculated E−C bond lengths. The data from Table 1 are evidence of E−C bond elongation, which correlates with the change from a triple to a double bond. In addition, NBO analysis highlights that the greatest electron delocalisation within the anions stems from the donation of an oxygen lone pair into the E−C π antibonding orbital. The energy value associated with this donation is seen to increase down the group. This explains the increasing resonance weight towards the ketene-like isomer, as populating antibonding orbitals usually suggests the breaking of a bond. The shift towards the ketene isomer also causes an increase in charge density on the elemental 'E' atom; this makes the elemental atom an increasing source of nucleophilicity. Varying X The simplest analogue that can be formed as 'X' is varied is the sulfur analogue, PCS−. This anion was first isolated by Becker et al. by reacting the phosphaethynolate anion with carbon disulphide. Unlike PCO, PCS shows ambidentate nucleophilic tendencies towards the W(0) complex mentioned above. This is the result of a reduced difference in electronegativity between E and X, so neither atom offers a substantial advantage over the other in terms of providing ionic contributions to bonding. As a result, the average electron density in PCS is spread over the entire anion, whereas in PCO most electron density is localised on the phosphorus atom, as this is the atom which bonds to form the thermodynamically favourable product. References Anions Organophosphorus compounds Physical organic chemistry Substances discovered in the 1990s
Phosphaethynolate
[ "Physics", "Chemistry" ]
2,720
[ "Matter", "Anions", "Functional groups", "Organic compounds", "Organophosphorus compounds", "Physical organic chemistry", "Ions" ]
61,464,897
https://en.wikipedia.org/wiki/PK-4%20%28ISS%20experiment%29
The PK-4 (Plasmakristall-4) laboratory is a joint Russian-European laboratory for the investigation of dusty/complex plasmas on board the International Space Station (ISS), with the principal investigators at the Institute of Materials Science at the German Aerospace Center (DLR) and the Russian Institute for High Energy Densities of the Russian Academy of Sciences. It is the third laboratory on board the ISS to study complex plasmas, after the PKE Nefedov and PK-3 Plus experiments. In contrast to the previous setups, the geometry was changed significantly and is better suited to studying flowing complex plasmas. Technical description The heart of the PK-4 laboratory is a direct current (DC) discharge tube. A plasma is generated by applying an electric field between an anode and a cathode. Microparticles are then injected into the plasma and move through the tube into the working area, where their motion is recorded with two cameras whose images are joined for analysis. The movement of the microparticles inside the fields of view of the cameras is followed by experimenters. The polarity of this electric field can be switched at a high frequency, so that the microparticles can be trapped in the working area. A variety of manipulation techniques are available, for instance a manipulation laser that can produce shear flow, and a thermal manipulator that can trap microparticles with a thermal gradient. The optical observation of the microparticles is complemented by other diagnostic methods: a spectrometer and a glow camera that records the plasma glow in several spectral lines. Scientific goals Like its predecessors, PK-4 studies complex plasmas, which are low-temperature plasmas that contain highly charged microparticles. The microparticles interact with each other and with the plasma and can be used to study a variety of topics, for instance waves, the influence of microparticles on the plasma, string formation, and shear flow. External links Forschungsgruppe komplexe Plasmen - DLR Oberpfaffenhofen References Plasma physics facilities Science facilities on the International Space Station International Space Station experiments
PK-4 (ISS experiment)
[ "Physics" ]
452
[ "Plasma physics facilities", "Plasma physics" ]
61,467,609
https://en.wikipedia.org/wiki/Warshipping
In computer network security, warshipping is the use of a physical package delivery service to deliver an attack vector to a target. This concept was first described in 2008 at the DEF CON hacking convention by Robert Graham and David Maynor as part of a talk entitled “Bringing Sexy Back: Breaking in with Style”, which covered various penetration testing methods. In their implementation, an iPhone box was modified to include a larger battery, which powered a jailbroken iPhone. A first-generation iPhone was chosen for this attack based on its reported run-time of 5 days when coupled with an external battery, whereas newer 3G iPhones of the era would reportedly run for 1½ days. A social engineering pretext was described that would trick the recipient into believing they had won an iPhone, in order to explain the shipment. The advancement of low-power electronics, thanks in part to maker culture, has greatly increased the effectiveness of this methodology as a credible method of attacking networks. In 2019, IBM X-Force Red coined the name “warshipping” and described an attack platform that included several low-cost components that could be combined, shipped to targets, and controlled remotely for 2–3 weeks. A solar component was also described that allows the devices to run indefinitely. Aspects of a modern warshipping attack include the following: Devices that are hidden from the recipient, potentially inside objects or inside the packaging material or box structure itself. Command and control (C2) capability via a dependable communication medium, most commonly provided by cellular modems. A power management strategy that allows the device to operate for weeks; solar panels may be utilized to lengthen the run-time of the device. One or more devices used for the operational attack. These can include radios built for protocols such as Bluetooth, Wireless LAN, and Near Field Communication (NFC), and software-defined radio (SDR) devices for capturing multiple types of protocols. Microphones, cameras, and other capture devices could be included as well. Satellite navigation (GNSS) technology for reporting the location of the device, allowing the activation of certain capabilities upon delivery to the target. Passive triangulation to work around GPS signal issues. The increasing use of large online retailers contributes to the relevancy of this attack: in 2019, the United States Postal Service reported that it delivered 484.8 million mailpieces per day. The name is by analogy with wardriving and wardialling. References Computer security exploits Wireless networking
Warshipping
[ "Technology", "Engineering" ]
503
[ "Wireless networking", "Computer networks engineering", "Computer security exploits" ]
61,471,625
https://en.wikipedia.org/wiki/7-Methyl-1%2C5%2C7-triazabicyclo%284.4.0%29dec-5-ene
7-Methyl-1,5,7-triazabicyclo[4.4.0]dec-5-ene (mTBD) is a bicyclic strong guanidine base (pKa = 25.43 in CH3CN and pKa = 17.9 in THF). mTBD, like 1,5,7-triazabicyclo[4.4.0]dec-5-ene and other guanidine super bases, can be used as a catalyst in a variety of chemical reactions. It also reacts with CO2, which could make it useful for carbon capture and storage. When brought into contact with some acids, mTBD reacts to form an ionic liquid. Some of these ionic liquids can dissolve cellulose. References Catalysts Guanidines Pyrimidopyrimidines Amines Reagents for organic chemistry Superbases
7-Methyl-1,5,7-triazabicyclo(4.4.0)dec-5-ene
[ "Chemistry" ]
195
[ "Catalysis", "Catalysts", "Superbases", "Guanidines", "Functional groups", "Reagents for organic chemistry", "Amines", "Bases (chemistry)", "Chemical kinetics" ]
73,800,460
https://en.wikipedia.org/wiki/Definitions%20of%20intersex
Various criteria have been offered for the definition of intersex, including ambiguous genitalia, atypical genitalia, and differential sexual development. Ambiguous genitalia occur in roughly 0.05% of all births, usually caused by masculinization or feminization during pregnancy; these conditions range from complete androgen insensitivity syndrome to ovotesticular syndrome. 1.7% of people are born with a disorder of sexual development (DSD) as defined by the DSD consortium, such as those with Klinefelter's syndrome. The DSD category was specifically designed to be inclusive of all atypical sexual development; not all conditions within the DSD affect individuals to the same extent. Most intersex activism is based around ending unnecessary medical interventions on intersex youth which attempt to assign an arbitrary sex and gender binary, often causing physical harm with no input from the child. Intersex conditions are usually expanded to include the DSD more generally. 0.05% of births are medically treated or considered to have ambiguous genitalia. There can also be a stricter definition, specifically for ambiguous DSD. This definition is restricted to those conditions in which typical chromosomal patterns are inconsistent with phenotypic sex, or in which the phenotype is not easily classifiable as either male or female, with a prevalence of about 0.018%. The exact cut-off point between male and female in an intersex context is largely arbitrary. Likewise, the definition of biological sex is also sometimes considered to be arbitrary; as an example, some XY females (with SRY inactivation) may have a uterus, ovaries, and normal menstruation, and be able to achieve pregnancy. These individuals would be declared biologically female but karyotypically male. Likewise, many intersex individuals are born completely sterile, although medical interventions have been known to remove potentially fertile gonads, which makes sex determination often arbitrary. Individuals with XX male syndrome develop male genitalia but are entirely infertile due to a lack of SRY gene expression and develop a generally feminine body. This range of possibilities is further expanded by conditions which affect genital development but not hormonal or sex gene expression. Generally, most intersex advocates, as well as parts of the medical community, advocate for broadening the definitions of sexual development and the definition of intersex. Causes of intersex development The overall causes of intersex conditions are complex and arise primarily from sexual development during pregnancy. Certain individuals may have a masculinized clitoris or a feminized penis; however, this might change after pregnancy. The exact differentiation of the ovotestes of intersex people is often ambiguous. Other cases of intersex conditions can occur when hormones such as estrogens or androgens are taken during pregnancy, which can lead to atypical sexual development. Commonly, intersex people are defined as those who are born with ambiguous genitalia, usually within the context of the OGR, or individuals with substantially atypical sexual development such as XX males. Most conditions under the DSD are not apparent at birth, and most are not medicalized. Certain definitions require atypical masculinization or feminization during fetal development for a condition to be considered intersex. 
Under this definition, cloacal exstrophy, a rare condition in which the abdominal internal organs develop incorrectly, would not be intersex. Individuals with cloacal exstrophy who are born with XY chromosomes do not develop a penis and are usually castrated and assigned female at birth. These people are medicalized like other people with intersex conditions under the OGR model; because of this, individuals with cloacal exstrophy are often considered intersex. InterACT, the leading intersex rights organization in the US, states that 1.5% of children are born with an intersex condition (DSD), and 0.05% are born with fully ambiguous genitalia. Assigned sex There is a strong bias toward assigning intersex people with ambiguous genitalia as female at birth, as it was generally thought that it was easier to create a girl than a boy. Likewise, as puberty would result in general feminization and low libido for most intersex children, it was thought that they should be assigned female. This was also motivated by the fact that vaginoplasty was far more developed than phalloplasty. This system was known as the optimum gender of rearing model (OGR model), which attempted to impose a binary on intersex children. Some individuals who did not have any intersex conditions were raised under the OGR model, such as David Reimer, who suffered a botched circumcision and was assigned female at seven months. The primary goal of the OGR was to stop gender incongruence and to assign a gender binary for "proper" sex socialization. The model often specifically involved the falsification of medical history, such as falsifying the karyotype or claiming that internal testes were ovaries and needed to be removed for "cancer," despite no physical complications existing from their presence. Intersex advocates used a feminist perspective to criticize the OGR as inherently sexist and cruel. The OGR modeled girls as passive receivers of penetration, and boys as givers of penetration. As most intersex conditions cause vagina-like development and no phallus, medical staff were biased towards assigning female at birth. Transfeminists and queer liberationists particularly criticized the OGR model for not allowing children to deviate from the gender binary in identity or expression. Likewise, feminists view bodily autonomy as a fundamental human right, which led to criticisms that the OGR took away the bodily autonomy of intersex people. Intersex and medical definitions The OGR held that gender non-conformity was a psychological threat which affected an individual's ability to function in normal society. Most research has found this to be false, and that the medical procedures practiced on intersex individuals generally lead to isolation, psychological stress, and physical complications throughout life. The definition of intersex is closely linked to the specific medical interventions performed on intersex people. According to the ISNA, 1.92% of the population will have some variation in sexual development throughout their lives (0.42% excluding LOCAH). 1% of people have bodies that "differ from standard male or female," and 0.1–0.2% of births are considered for intersex genital surgeries. The DSD as a model was advocated for by intersex activists to include all variation of atypical sexual development. Specifically, the DSD exists as a replacement for the OGR, which was the standard model for individuals with atypical sexual development. 
The model's stated goal was to assign a gender binary, usually female, via non-consensual medicalization, often including the falsification of medical records. After accounts were published of individuals who had undergone the OGR model and experienced serious psychological distress, the model was discredited. The term "disorders of sexual development" was chosen to frame the variation of sexual development as disorders rather than differences, which all individuals have; this, however, has been controversial, with many instead opting for "differentiation" or "variation." The DSD has generally superseded the OGR in the US, although no official medical protections against intersex genital mutilation exist in the US. Another point of contention is the relation between intersex conditions and karyotype: while many intersex individuals have atypical gene expression, many others are born intersex due to hormonal changes in pregnancy, whether natural or induced. As an example, a woman whose virilized clitoris was surgically altered at her birth raised the point that she was intersex, which a doctor stated to be "false" because her mother had taken progesterone, which induced the biological change, rather than the virilization occurring naturally. Generally, those who have undergone the OGR model, or who have ambiguous genitalia, are considered intersex. The DSD consortium was specifically created to remedy this and, as advocated by intersex activists, includes all differentiation in sexual development. Definitions of intersex genitalia are difficult because different regions have different medical standards for what a "normal" penis or vagina should look like. Spectrum approach Many intersex activists have advocated for a spectrum-based approach to intersex conditions, which would differentiate various intersex conditions, including hormonal differences. Intersex conditions, even the same condition such as ovotesticular syndrome, can vary widely in terms of organs, genetic expression, phenotype, genotype, and karyotype. Under this model, intersex conditions would be described via their own individualized effects, understood as affecting individuals on a spectrum. The DSD framework generally reflects this through its description of individualized care for people with atypical sexual development, making the distinction between different conditions. See also Yogyakarta Principles Genetic diagnosis of intersex History of intersex surgery Intersex human rights Disorders of sex development Androgen insensitivity syndrome References Intersex topics Sex differences in humans Intersex healthcare Intersex rights External links Intersex people OHCHR and the human rights of LGBTI people Intersex Definitions, InterACT
Definitions of intersex
[ "Biology" ]
1,984
[ "Intersex topics", "Sex" ]
73,813,366
https://en.wikipedia.org/wiki/Ecohydraulics
Ecohydraulics is an interdisciplinary science studying the hydrodynamic factors that affect the survival and reproduction of aquatic organisms and the activities of aquatic organisms that affect hydraulics and water quality. Considerations include habitat maintenance or development, habitat-flow interactions, and organism responses. Ecohydraulics assesses the magnitude and timing of flows necessary to maintain a river ecosystem and provides tools to characterize the relation between flow discharge, flow field, and the availability of habitat within a river ecosystem. Based on this relation and insights into the hydraulic conditions optimal for different species or communities, ecohydraulics modeling predicts how hydraulic conditions in a river change, under different development scenarios, the aquatic habitat of species or ecological communities. Similar considerations also apply to coastal, lake, and marine ecosystems. In the past century, hydraulic engineers have been challenged by habitat modeling, complicated by a lack of knowledge regarding ecohydraulics. Since the 1990s, especially after the first International Symposium on Ecohydraulics in 1994, ecohydraulics has developed rapidly, mainly to assess the impacts of human-induced changes of water flow and sediment conditions in river ecosystems. Ecohydraulics analyzes, models, and seeks to mitigate the adverse impacts of changes in hydraulic characteristics caused by dam construction and other human activities on the suitability of habitat for organisms, such as fish and invertebrates, and to predict changes in biological communities and biodiversity. Many articles report research findings about fluvial ecohydraulics. For example, the International Association for Hydro-Environment Engineering and Research (IAHR) and Taylor & Francis have been publishing the Journal of Ecohydraulics since 2016. The journal spans all topics in natural and applied ecohydraulics in all environmental settings. Key Concepts An aquatic ecosystem is defined as a community of aquatic organisms, with the species dependent on each other and on their physical-chemical environment and linked through flows of energy and materials. The distribution patterns of species are affected by the spatial and temporal characteristics of water flow. Flow velocity affects the delivery of food and nutrients to organisms. It can also dislodge organisms and prevent them from remaining at a site. Some vertebrates and invertebrates, such as the shellfish Corbicula fluminea, filter their food from flowing water. Flow velocity and turbulence are critical to the life activities of many species. For example, some fish migrate and some fish spawn when they detect high flows. However, extremely high flow velocity, or high intensities of turbulence, created by hydraulic engineering infrastructure can stress most fish and invertebrates and even kill them. When the flow velocity is below 0.1 m/s, the biological community in a river is similar to that in a lake. Usually, in rivers, flow velocities between 0.1–1 m/s are most suitable for major-stream fish species. High flow velocity and turbulence are cues for the timing of migration and spawning of some fish. Asian carp lay floating eggs when they sense increasing discharge resulting from a spring flood flow. The settling velocity of the eggs varies in the range of 0.7–1.5 cm/s. Once a carp egg settles on the riverbed, the egg cannot hatch. 
Only if the flow velocity exceeds the settling velocity can an egg remain in suspension and complete incubation within 24–40 hours. Golden mussels (Limnoperna fortunei) are an invasive filter-feeding macro-invertebrate species. Dense attachment of the species to the boundaries of water-transfer tunnels and pipelines results in biofouling, causing high resistance to water flow and damage to pipeline walls. This consequence, along with the decay of dead mussels, harms water quality. Golden mussel larvae can be killed by high-frequency turbulence and increased flow velocity. Experiments show that the larvae can be killed in a flow field with velocities in excess of 0.08–0.15 m/s and a turbulence frequency higher than 30 Hz. Preliminary results have shown that the higher the turbulence intensity, the higher the mortality of golden mussel larvae. On the other hand, low vertical mixing or turbulence is a key factor favoring the development of harmful algal blooms. Reservoirs are operated according to the requirements of power generation, water supply, navigation, and, in recent decades, environmental flows. Thus, the time and magnitude of peak flood discharge may change, thereby affecting the life cycle and habitat of aquatic bio-communities. Most faunal species in a river cannot adapt to the non-natural change of flow and disappear from the reach downstream of the dam. Fish stranding caused by reservoir operation has occurred downstream of hydropower stations in many countries. A hydropower dam, such as Fengshuba Dam on the East River, China, releases water suddenly during the daytime and shuts off at night to meet an unsteady power demand. The instantaneous fluctuation in flow discharge and velocity kills most species except for those (e.g., the small shrimp, Palaemonidae) that can hide in crevices in riverbed sediment. Water depth is crucial for large fauna. The habitats created by shallow rapids of small rivers in mountainous areas typically suit invertebrates and small vertebrates. Only mountain streams with many deep pools can support medium-sized creatures such as rainbow trout. The white-flag dolphin, Chinese sturgeon, and finless porpoise require the water depths associated with the middle and lower reaches of the Yangtze River, where there is sufficient water depth for them to grow and hide. On the other hand, few animals can live in the lower layers of deep lakes and reservoirs because of low dissolved-oxygen (DO) concentration. Temperature is an important factor for many species. Salmon can only survive in cold-water rivers. The Mississippi and Yangtze rivers are not suitable for salmon due to high temperatures. However, aquatic insects grow and develop more rapidly in tropical and subtropical rivers than in temperate rivers. Some species may complete two or more generations per year at warmer sites yet only one or fewer at cooler sites. Some dragonfly species on the Tibetan Plateau live for more than ten years in cold water before attaining sexual maturity and eclosion. Variability of hydraulic characteristics is essential for biodiversity. A wide variety of flow velocities, water depths, and temperatures, both spatial and temporal, is needed to maintain high levels of biodiversity in aquatic ecosystems. Eutrophication refers to the enrichment of a water body by nutrients to a level that results in algal blooms, deterioration in water quality, and undesirable disruption of the balance of an aquatic ecosystem. 
Eutrophication and algal blooms occur in rivers, lakes, estuaries, and coastal and marine waters. Algal blooms in lakes and coastal waters may lead to massive fish kills. The onset and the risk of algal blooms are closely related to the hydraulic flow and vertical turbulent mixing processes. This relationship has been shown by a real-time forecasting and warning system established to monitor algal and DO dynamics. Monitoring shows that diurnal DO fluctuations mirror the algal biomass. Algae at high density can increase fluid viscosity by more than 100%. Real-time monitoring and early warning systems can help with adaptive management to mitigate the harmful effects of massive algal blooms. Emergent vegetation (e.g., reeds and bulrushes) on floodplains and riparian wetlands imposes significant resistance to overbank flow. The resistance of emergent vegetation is so great that the resistance coefficient in the equations of hydraulics requires adjustment. For instance, Manning's n increases tenfold as flow depth increases from 0.03 to 0.5 m, mainly due to emergent vegetation. Emergent and submerged vegetation change the turbulence structure and sediment transport, and may cause these quantities to vary with flow velocity over a floodplain. Aquatic animals may change flow and sediment transport. The initiation of sediment motion and sediment transport are affected by salmonid spawning. Clustering of bed gravel is important to embryo survival of the species. The spawning fish move the riverbed pebbles and bury their eggs underneath, and the egg burial depth tends to be just deeper than the observed scour depth. The species has adapted its egg placement strategy to the process of flood scouring. Beavers may construct wooden dams across small streams, and these beaver dams alter the hydrological processes and hydraulic characteristics of a stream. Invasion of zebra mussels and golden mussels into pipelines, such as the cooling water pipeline of a hydropower plant, can block pipelines and hamper power generation. Habitat is an area where plants or animals normally live, grow, feed, reproduce, and otherwise exist for any portion of their life cycle. Because each species responds differently to environmental and biotic conditions, the term habitat is specific to a species and, in more general terms, specific to guilds of species; for example, 'fish habitat' is specific to fish. Hydraulic attributes are considered the most important features of habitat for almost all organisms in rivers. The biological diversity and species abundance in streams depend on the diversity of available habitat. The slope, planform, confinement, and cross-sectional shape and dimensions of a stream, and the grain-size distribution of bed sediment, affect aquatic habitat. Under less disturbed conditions, a narrow, steep-walled cross section provides less physical area for habitat than does a wide cross section. A steep, confined stream is a high-energy environment that may limit the occurrence, diversity, and stability of habitat. Substrate is a general term that refers to all material that constitutes a riverbed or stream bed, which in most cases mainly comprises sediment. Stream-bed and bank erosion, sediment transport, and deposition are among the most important factors that affect aquatic habitat. Stable streams are streams with a stable channel bed, which normally feature energy-dissipation structures and little bed-load motion (transport of particles along the bed). 
Such streams have the best habitat for fish and benthic invertebrates. Incised streams are streams experiencing channel-bed erosion, which provide the second-best habitat. Streams with intensive bed-load motion and sedimentation provide poor habitat for organisms. The taxa richness or biodiversity of these different types of rivers varies greatly because of the different magnitudes of erosion, sedimentation, and sediment transport. A uniform sand bed in a stream provides less potential habitat diversity than a bed with a step-pool system, boulder cascades, rapids, pool-riffle sequences, or other types of "bed structures" because of the resting places such features provide. The hyporheic zone is a layer of substrate in the riverbed in which benthic animals normally live or exist for any portion of their life cycle. Animals in the hyporheic zone usually are protected from severe washouts and temperature extremes. Other species prefer the stream-bed surface for its higher DO concentration, direct contact with flowing water, and high food availability. Macro-invertebrates inhabit a sediment bed layer with a thickness of about 40–55 cm in gravel beds, 60 cm in cobble beds, 10–30 cm in coarse sand beds, and 5–10 cm in fine sand beds. The thickness of the zone in clay and silt beds is about 30 cm because the bed is relatively soft; some macro-invertebrates can move within the fluid mud layer. Environmental flows are defined as the quantity, timing, and quality of freshwater flows and levels necessary to sustain aquatic ecosystems which, in turn, support human cultures, economies, sustainable livelihoods, and well-being. The natural flow regime plays a critical role in sustaining native biodiversity and ecosystem integrity in rivers. The concepts and terminology vary across countries, including minimum flow, environmental flow regime, environmental water, and ecological flows. In the 2010s, assessments of environmental flows at the basin scale evolved greatly with the application of habitat-based or holistic methods to balance environmental flows and water uses, e.g., agriculture and hydropower, in water planning at the watershed or river-basin scale. In addition, some methodologies of water planning evaluate performance in river systems using stress tests, which consider the uncertainty associated with climate and global change, and evaluate the feasibility of balancing environmental flows and other water uses. For instance, when several irrigation schemes were being considered for development of the Kilombero River Basin, Tanzania, it was determined what quantity of water could be abstracted from the river without degrading its ecological condition. Basic Principles and Models High habitat diversity supports high biodiversity. Stated alternatively, biodiversity depends on habitat diversity, which is defined as the diversity of habitat types suitable for different bio-communities. The physical conditions of stream habitats depend mainly on the following factors: 1) substrate; 2) water depth; and 3) flow velocity. Different physical conditions support different bio-communities, so diversified physical conditions may support diversified bio-communities. Habitats with flow velocities less than 0.3 m/s are suitable for species that swim slowly; habitats with flow velocities higher than 1 m/s are suitable for species that favor high flow velocities, as the sketch below illustrates. 
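As a toy illustration of how substrate, depth, and velocity combine into habitat types, the following sketch counts distinct habitat classes across a set of survey points. The velocity thresholds follow the 0.3 m/s and 1 m/s figures above; the 0.5 m depth split and the survey points themselves are illustrative assumptions, not values from any study.

```python
# Toy habitat-diversity count: distinct (substrate, depth class, velocity class)
# combinations across survey points. Velocity classes follow the 0.3 and 1.0 m/s
# figures in the text; the 0.5 m depth split and sample points are assumptions.
def velocity_class(v_mps: float) -> str:
    if v_mps < 0.3:
        return "slow"       # suits slow-swimming species
    if v_mps > 1.0:
        return "fast"       # suits species that favor high velocities
    return "moderate"

def depth_class(d_m: float) -> str:
    return "shallow" if d_m < 0.5 else "deep"   # assumed split

# Hypothetical survey points: (substrate, depth in m, velocity in m/s)
points = [("gravel", 0.3, 0.2), ("gravel", 1.2, 0.9),
          ("sand",   0.4, 0.2), ("boulder", 0.8, 1.5)]

habitat_types = {(s, depth_class(d), velocity_class(v)) for s, d, v in points}
print(f"{len(habitat_types)} distinct habitat types: {sorted(habitat_types)}")
```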
Fish species diversity and richness are strongly related to the combined effects of substrate, velocity, and depth, which can be represented by the Habitat Diversity Index. Field investigations have shown that a stream with varied substrates is suitable for a large variety of invertebrate species and has a high biodiversity. The species richness, or number of species, S, is proportional to the habitat diversity index. Cutting off connections between habitats impairs ecology. Connections between habitats are essential for complex bio-communities and high biodiversity. Cutting off the connections with artificial dams or locks reduces biodiversity and undermines the bio-communities. Some projects are intended to restore the connections between habitats. The Yangtze River once connected thousands of riparian lakes in its middle and lower reaches, thereby forming a complex habitat system. Water flowed from the river to the lakes during the rising stage of floods and vice versa during the recession stage of floods. The river had high biodiversity and was home to 400 species of fish, 3 species of whales, and numerous species of amphibians, reptiles, birds, and invertebrates. The connection between the upper reaches and the middle and lower reaches, and the connections between the river and the riparian lakes, have been cut off to reduce the cost of levee construction and to promote fish farming, resulting in the fragmentation of the complex habitats. Investigations have shown that cutting the connections has reduced the number of macroinvertebrate species by 60% and fish species by 40–50% in the lakes. There are 101 fish species in Poyang Lake, which remains connected to the Yangtze River, but only 57 and 47 fish species in Honghu Lake and Zhangdu Lake, respectively, which have been cut off from the river. Experiments have shown that a substantial reduction in the number of species and the abundance of macro-invertebrates occurs within 4 months after a riparian wetland is isolated from the river. Resilience refers to an ecosystem's stability and capability to tolerate disturbances and restore itself. The resilience of an ecosystem involves both the process and the outcome of successfully adapting to ecological stresses, and the ability to maintain normal patterns of biomass production after being subjected to damage. If a disturbance is of sufficient magnitude or duration, a threshold may be reached at which the ecosystem undergoes a regime shift, possibly permanently. Ecological projects, in some sense, are designed to enhance the resilience of ecosystems, reduce the time required for the ecosystem to return to equilibrium, and increase the ecosystem's capacity to absorb disturbances and reorganize. A new paradigm in river and coastal management is evolving that embraces ecological enhancement, recreation, and aesthetics, as well as compliance with strict environmental protection legislation. These complex projects require extensive data and simulation tools to assist decision makers and communities in selecting management strategies which offer the maximum benefits while preserving and enhancing the ecological integrity of the river system. Models, especially numerical models, are often needed. A common approach to habitat studies is to apply numerical hydraulic modeling with the models included in PHABSIM. This approach is based on a one-dimensional hydraulic characterization of a limited river reach under steady flow conditions. 
The model was tested to assess its capability to evaluate suitable habitat for Pacific and Atlantic salmon spawning, and the results showed that the model works well for this life stage, as spawning involves adult fish and is tightly coupled with hydrogeomorphology. Vegetation affects the turbulence intensity and turbulence structure. Modeling of the dynamic process of vegetative succession describes the relation between the hydraulic characteristics of flood disturbances and the colonization and succession processes of vegetation on sediment bars and floodplains. The model is composed of modules for hydraulics, woody and herbaceous plants, and soil nutrients. The model's hydraulic module simulates the processes of flood inundation, flushing, and sedimentation. The timing and locations of plant recruitment are derived from the characteristics of a flood. The mortality of plants at each location during a flood is estimated from surface erosion rates obtained from a hydrodynamic model. The gap between existing model technology and the requirements of modeling the whole aquatic ecosystem over a wide range of spatial and temporal scales requires investigation. Physical habitat models are particularly useful for assessing the impact of hydropower projects, analyzing the effects of water abstraction on river ecology, and determining the minimum flow requirements of aquatic populations. As mentioned above, hydraulic variables profoundly affect habitat utilization by biota. Fluvial habitat suitability curves have been developed over the past forty years. Also, habitat suitability models are applied to evaluate the ability of a habitat to support a particular species. Fish behavior has been analyzed for microhabitat design in meter-resolution two-dimensional (2D) microhabitat modeling. Suitability indices are the core of habitat modeling, which may be illustrated for the Chinese sturgeon. The life cycle of the Chinese sturgeon in the Yangtze River mainly comprises spawning, hatching, and maturation. Brood fish seek suitable spawning sites and adhere fertilized eggs to stones; the eggs hatch after about 120 to 150 h. Juvenile sturgeon swim to the East China Sea and stay there until they reach maturity. Ten aquatic eco-factors influence the habitat suitability of the Chinese sturgeon: 1) water temperature for adults and juveniles (V1, °C); 2) water depth for adults (V2, m); 3) substrate for adults (V3); 4) water temperature for spawning (V4, °C); 5) water depth for spawning (V5, m); 6) substrate for spawning and hatching (V6); 7) water temperature during hatching (V7, °C); 8) flow velocity during spawning (V8, m/s); 9) suspended sediment concentration during spawning (V9, mg/L); and 10) the ratio of estimated brood sturgeon to egg-predatory fish (V10). The ratio, V10, is important because 90% of eggs suffer predation. The Habitat Suitability Index (HSI) is computed by combining the suitability indices of these ten eco-factors; Figure 1 shows suitability curves for the ten eco-factors. Using these curves, the HSI was calculated, in which the velocity, depth, temperature, and substrate were estimated using a two-dimensional model of hydraulics and sediment movement. The HSI ranges from 0 (unsuitable) to 1 (optimal). Yi et al. indicated that the space and time suitable for spawning were reduced after the completion of the Three Gorges Dam in 2003. The model showed that reservoir operation revised to mimic the natural flow regime would enhance habitat suitability. 
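The mechanics of such a calculation can be sketched in a few lines. This is a minimal sketch, not the formula used in the Chinese sturgeon study: the geometric-mean aggregation and the piecewise-linear suitability curve below are common modeling choices assumed here for illustration, and the curve values are hypothetical.

```python
# Minimal sketch of a Habitat Suitability Index (HSI) calculation.
# Aggregation by geometric mean and the example curve are assumptions;
# the cited study may weight or combine its ten factors differently.
import numpy as np

def suitability(value: float, curve_x, curve_y) -> float:
    """Read a suitability index (0..1) off a piecewise-linear curve."""
    return float(np.interp(value, curve_x, curve_y))

def hsi(indices) -> float:
    """Combine per-factor indices into one HSI via the geometric mean.
    Any factor scoring 0 makes the habitat unsuitable overall."""
    v = np.asarray(indices, dtype=float)
    return 0.0 if (v <= 0).any() else float(v.prod() ** (1.0 / v.size))

# Hypothetical suitability curve for spawning flow velocity (m/s):
# poor below 0.5 and above 2.5, optimal around 1.0-1.5 m/s.
vel_x = [0.0, 0.5, 1.0, 1.5, 2.5, 4.0]
vel_y = [0.0, 0.1, 1.0, 1.0, 0.2, 0.0]

v8 = suitability(1.2, vel_x, vel_y)        # flow-velocity factor at 1.2 m/s
print(hsi([0.9, 0.8, 1.0, v8]))            # combined with other factor scores
```

In a full model, the velocity, depth, temperature, and substrate fed into such curves would come from the hydrodynamic simulation at each grid cell, giving an HSI map between 0 and 1 over the reach.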
Applications Construction of dams has created insurmountable obstacles for migratory fish. At least 1/5 of the world's 9,000 species of freshwater fish have disappeared due to dams. This proportion is even higher in countries with more dams: 2/5 in the United States and 3/4 in Germany. More than 130 dams have been built on the Columbia River and its tributaries, blocking salmon from spawning upstream and resulting in a fishery loss of $6.5 billion between 1960 and 1980. A fish ladder is designed to help migrating brood fish cross a dam to the upstream spawning ground, and a fish pass helps juvenile fish cross the dam to the downstream reach and the sea. Fish ladders and passes can be designed separately or can be combined into one channel. The main concept of fish ladder design is to create extremely high resistance, letting the water flow from upstream to downstream of the dam at a low velocity while maintaining a large depth. The design of the inlet and outlet of the fish ladder is critical. If the downstream outlet velocity is too high, fish cannot swim into the fish ladder; if the outlet flow velocity is too low, the fish cannot determine whether it leads to the upstream spawning ground. Also important is the turbulence along a ladder or pass. The earliest fish ladder was constructed by Denil in 1909. The ladder consists of a series of baffles positioned on the walls and floor of a channel, all of which enable upstream-moving brood fish, specifically Atlantic salmon, to bypass weirs and small dams. Generally, a fish ladder maximizes energy dissipation and reduces flow velocity; the shape and position of the baffles create a secondary outward circulation of flow, producing a momentum transfer from the central portion of the channel towards the walls. Research has been done on Denil's fish ladder, focusing mainly on refining the baffles. Additionally, research concentrates on understanding organism responses to the hydrodynamics (flow velocity and turbulence) in experimental settings. Attention has been paid to the turbulence intensity, eddy size, and hydrodynamic drag in fishways. On the other hand, fish biologists have worked closely with hydraulic engineers to understand how fish respond to complex fluid dynamics. Humans have created a variety of fish ladders, such as the submerged-jet vertical-gap type, which is suitable for large fish; the step-pool type and submerged-window type, which are suitable for medium fish; and the overflow-weir type, which is suitable for small fish (Figure 2). Figure 2 Various types of fish ladder In 2004, a fish ladder was built for brood fish to bypass the Itaipu Dam on the Parana River, and for juvenile fish to pass down the river. The maximum flow velocity was less than 3 m/s. At the initial stage, the flow discharge used for trapping fish was 20 m3/s, and when the fish were swimming into the passage channel, the flow discharge dropped to 11.4 m3/s. The most successful fish passages in the world are the fish ladders and fish passes bypassing the eight dams on the Columbia River. The U.S. government legislates that dams on the Columbia River must be built with a fishway. Bonneville Dam is the most downstream dam on the river, with a height of 60 m. Its fish ladder was designed as a series of "cabins" using vertical-gap jet diffusion and energy dissipation. Since the 1930s, a yearly average of 721,000 brood fish have crossed the dam and entered the upstream spawning ground. 
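A rough sizing sketch shows why pool-type ladders need many steps: if the head drop between successive pools converts to a plunge velocity of roughly V = sqrt(2·g·Δh), then capping V caps the allowable drop per pool. The 3 m/s cap below echoes the Itaipu figure quoted above; the total head is an illustrative input, not a value from the text.

```python
# Rough sizing of a pool-type fish ladder, assuming the drop between
# pools converts to plunge velocity as V = sqrt(2*g*dh). The total head
# of 12 m is an assumed example value.
import math

g = 9.81                       # gravitational acceleration, m/s^2
v_max = 3.0                    # maximum tolerable plunge velocity, m/s
total_head = 12.0              # head to overcome across the dam, m (assumed)

dh_max = v_max**2 / (2 * g)    # largest allowable drop per pool (~0.46 m)
n_pools = math.ceil(total_head / dh_max)
print(f"max drop per pool: {dh_max:.2f} m -> at least {n_pools} pools")
```

This ignores energy dissipated within each pool and attraction-flow requirements at the entrance, both of which govern real designs, but it captures why even a modest dam needs a long ladder of many small drops.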
Reservoir operation: Since 2010, the Three Gorges Reservoir has been operated to promote spawning of the Asian carp. In June 2011, the discharge from the reservoir was increased by 2,000 m3/s every day, and the flow velocity and turbulence intensity increased continuously for 5 days. Stimulated by the flood flow, brood fish gathered downstream and spawned. In 2022, the reservoir increased the discharge from 12,800 m3/s on June 3 to 22,400 m3/s on June 8, and the number of drifting eggs spawned by the carp in the Yichang-Yidu section increased by more than 400 million. Artificial step-pools: In the past decades, artificial step-pools have been applied in mountain rivers to increase habitat diversity, and thereby improve river ecology, in Germany, Italy, the United States, Canada, Switzerland, Austria, and other countries. An experiment in the Diaoga River in Yunnan, China, showed that artificial step-pools can create stable and diverse habitat with low-velocity, deep-water pools and high-velocity waterfalls. Thus, different species can find suitable habitat for survival and reproduction. Myriophyllum (an aquatic plant) and periphyton (attached algae and microbes) grew on the riverbed, and the original white gravel bed became covered with green aquatic plants. The number of invertebrate species doubled, and the number of individuals per unit area increased by 10 to 85 times. The artificial step-pool system created great resistance to flow and reduced debris-flow problems. Elsewhere, step-pool systems are also used. For example, Germany invested 400,000 euros to build an artificial step-pool system on the Mangfall River, a tributary of the Inn River. Italy imitated a step-pool system and constructed a group of small boulder dams, achieving significant results in stabilizing streams and restoring river ecology in northern mountain rivers. Artificial step-pool systems constructed on the Kleinschmidt River in Montana and on the Little Snake River in Wyoming restored salmon and rainbow trout habitats. Wetland restoration: Channelization of the Kissimmee River in central Florida destroyed or degraded most of the fish and wildlife habitat once provided by the river and its floodplain wetlands. A subsequent project restored the river's biological resources from 1984 to 1989. The straight channel was re-meandered, flow velocity was reduced, and the water stage increased. Reintroduction of flow through remnant river channels increased habitat diversity and led to favorable responses by fish and invertebrate communities. The restored habitat formed a shallow and wide "River of Grass" which flowed slowly across Everglades sawgrass toward mangrove estuaries in the Gulf of Mexico. Restoration of habitat connectivity: Restoration of connectivity between habitats that have become fragmented mainly involves dredging and excavating channels to connect lakes, wetlands, and rivers, creating ecological corridors for aquatic animals. In 2012, the city of Wuhan, China, connected 20 lakes on the left bank of the Yangtze River. A channel with a maximum width of 60 m and a depth of 1.5 m was dug between the lakes. The city built the ecological network of the Great East Lake on the right bank of the Yangtze River, reconnecting six lakes to the river. During the project, pump stations were used to exchange water between the lakes and the river to improve water quality. The reconnection of the fragmented habitat repaired the damaged ecosystem. Case Studies Many examples can be given; a few follow. 
Ecohydraulics for land development: Sand Motor (Netherlands). Ecohydraulics is increasingly used in the quest for nature-based solutions for sustainable development. A landmark example of building with nature, the "Sand Motor", was first implemented in the Netherlands in 2011 as a pilot project to provide an alternative way of depositing a large amount of sand along the shore to nourish the coast and safeguard the hinterland from erosion. A hook-shaped peninsula of about 21.5 million m3 of sand was constructed to protrude 1 km into the sea and cover about 2 km alongshore (Figure 3). By making use of natural processes such as waves, wind, and tide to redistribute the sand, this innovative approach succeeded in limiting the disturbance of local ecosystems, while also providing new areas for nature and more types of recreation. Figure 3 Creating land by natural processes to minimize ecosystem disruption: the Sand Motor after completion in July 2011 (left) and 5 years later in January 2016 (right) Ecohydraulics for restoring habitat of migratory birds (South Korea). The Nakdong River estuary is regulated by a 2,400-meter-long dam built in 1987 to control the inflow of seawater into farmland and secure drinking and agricultural water for nearby regions, including Busan, Ulsan, and South Gyeongsang Province (Figure 4). However, the biodiversity of the river had diminished since the establishment of the barrage; the stoppage of upstream seawater intrusion limited the supply of brackish water to the rice paddy fields which provided a natural habitat for migratory birds. A controlled partial gate-opening project was started in 2019 to restore and protect the biodiversity of the estuary, and by its third opening in July 2020, improvement was confirmed as the estuary's eco-species, including eels and anchovies, were found again in the waters upstream of the gates. A tidal flat formed toward the seaside where the sand and mud carried along the river accumulate, providing fertile soil, rendering the area agriculturally rich, and restoring the habitat of migratory birds. Other examples of ecohydraulics can also be found in the IAHR Media Library. Figure 4 The Nakdong Estuary Dam traps freshwater and prevents salt water from flowing upstream, affecting the rice paddy habitat of migratory birds. Ecological restoration involves opening a small part of the dam using modern ecohydraulics (Courtesy of K-water) References External links International Association for Hydro-Environment Engineering and Research (IAHR) IAHR Media Library Hydrology Ecosystems
Ecohydraulics
[ "Chemistry", "Engineering", "Biology", "Environmental_science" ]
5,950
[ "Hydrology", "Symbiosis", "Ecosystems", "Environmental engineering" ]
66,456,099
https://en.wikipedia.org/wiki/Bassam%20Izzuddin
Bassam Izzuddin is an author, developer and professor of computational structural mechanics in the Department of Civil and Environmental Engineering at Imperial College London, where he coordinates three courses at advanced undergraduate and master's level. He is the founder and developer of ADAPTIC, an adaptive static and dynamic structural analysis program developed to provide an efficient tool for the nonlinear analysis of steel and composite frames, slabs, shells and integrated structures. In 2016, he won the IStructE award for the best research paper in structural mechanics. References Year of birth missing (living people) Living people English mechanical engineers Computational physicists Alumni of Imperial College London American University of Beirut alumni Academics of Imperial College London
Bassam Izzuddin
[ "Physics" ]
141
[ "Computational physicists", "Computational physics" ]
66,457,520
https://en.wikipedia.org/wiki/Nitrospira%20inopinata
Nitrospira inopinata is a bacterium from the phylum Nitrospirota. This phylum contains nitrite-oxidizing bacteria that play a role in nitrification. However, N. inopinata was shown to perform complete oxidation of ammonia to nitrate, making it the first comammox bacterium to be discovered. N. inopinata was cultivated in enrichment culture. The initial inoculum was obtained in 2011 from a microbial biofilm growing on the metal surface of a pipe carrying hot water (56 °C, pH 7.5) raised from a 1,200 m deep oil exploration well. The well was located in Aushiger, North Caucasus, Russia. Growth in pure culture was achieved in 2017. The genome of N. inopinata, released in 2015, comprises 3.3 Mbp with 3,116 genes and a GC content of 59.2%; the NCBI accession number is LN885086. References Nitrospirota Nitrogen cycle Bacteria described in 2015
Nitrospira inopinata
[ "Chemistry" ]
211
[ "Nitrogen cycle", "Metabolism" ]
66,458,263
https://en.wikipedia.org/wiki/Nitrospinota
Nitrospinota is a bacterial phylum. Despite having only a few described species, members of this phylum are major nitrite-oxidizing bacteria in ocean surface waters. By oxidizing nitrite to nitrate, they are important in the process of nitrification in marine environments. Although the genus Nitrospina is aerobic, it has been shown to oxidize nitrite also in oxygen minimum zones of the ocean. Depletion of oxygen in such zones favours anaerobic processes such as denitrification and nitrogen loss through anammox. Nitrospina thus counterbalances nitrogen loss by carrying out nitrification even in these oxygen-depleted zones. Among the cultivated isolates within the genus Nitrospina are Nitrospina gracilis and Nitrospina watsonii. Further genomes have been resolved by culture-independent metagenome binning. The two Nitrospina species are, however, distantly related to the environmentally abundant uncultured Nitrospinota. Two other strains were cultivated in 2020, each in binary culture with an alphaproteobacterial heterotroph. They are called "Candidatus Nitrohelix vancouverensis" and "Candidatus Nitronauta litoralis". "Nitrohelix vancouverensis" is closely related to the uncultivated, environmentally abundant Nitrospinota clades 1 and 2. Taxonomy The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and the National Center for Biotechnology Information (NCBI). Phylum Nitrospinota Lücker et al. 2021 Class "Nitrospinia" Lücker et al. 2013 Order "Nitrospinales" Lücker et al. 2013 Family Nitrospinaceae Garrity, Bell & Lilburn 2006 Genus "Candidatus Nitrohelix" Mueller et al. 2021 "Ca. N. vancouverensis" Mueller et al. 2021 Genus "Candidatus Nitromaritima" Ngugi et al. 2016 Genus "Candidatus Nitronauta" Mueller et al. 2021 "Ca. N. litoralis" Mueller et al. 2021 Genus Nitrospina Watson & Waterbury 1971 N. gracilis Watson & Waterbury 1971 "N. watsonii" Spieck et al. 2014 References Nitrogen cycle Bacteria phyla Bacteria
Nitrospinota
[ "Chemistry", "Biology" ]
505
[ "Prokaryotes", "Nitrogen cycle", "Bacteria", "Microorganisms", "Metabolism" ]
67,973,010
https://en.wikipedia.org/wiki/Praseodymium%28III%29%20oxalate
Praseodymium(III) oxalate is an inorganic compound, a praseodymium salt of oxalic acid with the chemical formula C6O12Pr2, i.e. Pr2(C2O4)3. The compound forms light green crystals that are insoluble in water. It also forms crystalline hydrates. Preparation Praseodymium(III) oxalate can be prepared by the reaction of soluble praseodymium(III) salts with oxalic acid, which precipitates the sparingly soluble oxalate: 2 Pr3+ + 3 H2C2O4 → Pr2(C2O4)3↓ + 6 H+. Properties Praseodymium(III) oxalate forms crystalline hydrates (light green crystals) such as the decahydrate Pr2(C2O4)3•10H2O. The crystalline hydrate decomposes stepwise when heated, first losing its water of crystallization and ultimately giving praseodymium oxide. Uses Praseodymium(III) oxalate is used as an intermediate in the production of praseodymium metal. It is also used to colour some glasses and enamels; mixed with certain other materials, it gives glass an intense yellow colour. References Inorganic compounds Praseodymium(III) compounds Oxalates
Praseodymium(III) oxalate
[ "Chemistry" ]
213
[ "Inorganic compounds" ]
67,974,589
https://en.wikipedia.org/wiki/Indian%20burn
An Indian burn, also known as a snake bite, or as a Chinese burn in the UK and Australia, is a pain-inducing prank in which the prankster grabs the victim's forearm or wrist with both hands and twists the skin in opposite directions, causing an unpleasant burning sensation. The prank is popular in school settings. Terminology The prank is known by various names in the United States, such as Indian sunburn or Indian rug burn, and also as Chinese wrist burn and snake bite. In countries such as the United Kingdom and Australia, it is known as a Chinese burn. In Mexico, it is known by a local name. In Sweden, its name translates as "a thousand needles"; in the Netherlands, as "barbed wire"; and in Germany, as "stinging nettle". In Afrikaans it is called a "donkie byt", which translates to "donkey bite". Variations A variation of the prank can be performed with yarn rubbed rapidly against the skin, in a motion similar to that used when starting a fire with dry tinder. Criticism Some Native Americans disapprove of the use of the term Indian burn, as with other vocabulary starting with the prefix "Indian", such as Indian corn, Indian summer and Indian giver. Statistics According to a poll carried out in the United Kingdom with a sample size of 1,844 adults, 27% recalled receiving Indian burns in secondary school. See also List of practical joke topics Wedgie References Abuse Harassment and bullying Pain Practical jokes Suffering Native American topics
Indian burn
[ "Biology" ]
335
[ "Behavior", "Abuse", "Harassment and bullying", "Aggression", "Human behavior" ]
67,977,408
https://en.wikipedia.org/wiki/Transition%20metal%20oxalate%20complex
Transition metal oxalate complexes are coordination complexes with oxalate (C2O42−) ligands. Some are useful commercially, and the topic has attracted sustained scholarly attention. Oxalate is a kind of dicarboxylate ligand; as a small, symmetrical dinegative ion, it commonly forms five-membered MO2C2 chelate rings. Mixed-ligand complexes are known, e.g., [Co(C2O4)(NH3)4]+. Examples Homoleptic complexes Homoleptic oxalato complexes are common, e.g., those with the formula [M(κ2-C2O4)3]n−: M = V(III), Mn(III), Cr(III), Tc(IV), Fe(III), Ru(III), Co(III), Rh(III), Ir(III). These anions are chiral (D3 symmetry), and some have been resolved into their component enantiomers. Some early metals form tetrakis complexes of the type [M(κ2-C2O4)4]n−: M = Nb(V), Zr(IV), Hf(IV), Ta(V). The Δ and Λ enantiomorphs of one such complex have been separated. Oxalate is often a bridging ligand, forming bi- and polynuclear complexes with (κ2,κ′2-C2O4)M2 cores. Illustrative binuclear complexes are [M2(C2O4)5]2−: M = Fe(II) and Cr(III). Mixed ligand complexes Whereas homoleptic complexes are easier to describe, far more abundant are complexes with oxalate and other ligands. Many metals form polynuclear complexes with oxalate and water. In ferric oxalate, one oxalate is bonded through all four oxygen atoms and another oxalate binds through only two oxygen atoms, in both cases bridging. "Durrant's salt" contains an anionic oxalato complex. Reactions and applications Metal oxalate complexes are photoactive, degrading with loss of carbon dioxide. This reaction is the basis of the technique called actinometry. Ferrioxalate undergoes photoreduction: the iron centre is reduced (gains an electron) from the +3 to the +2 oxidation state, while an oxalate ion is oxidised to carbon dioxide: 2 [Fe(C2O4)3]3− + hν → 2 [Fe(C2O4)2]2− + C2O42− + 2 CO2. The redox reaction has been used to access unusual complexes. UV-irradiation of Pt(C2O4)(PPh3)2 gives derivatives of Pt0(PPh3)2. Metal oxalates with 1:1 stoichiometry are often insoluble; this provides a way to separate metal ions from solutions, including extracts of ores. Combustion of metal oxalates gives metal oxides. Natural occurrence The minerals moolooite and antipinite are examples of naturally occurring copper oxalates. They arise from the weathering of other copper ores. A few other oxalate-containing minerals are known. See also Transition metal carboxylate complex Oxalatonickelate References Ligands Oxalato complexes
Transition metal oxalate complex
[ "Chemistry" ]
706
[ "Ligands", "Coordination chemistry" ]
67,977,435
https://en.wikipedia.org/wiki/4-PrO-DMT
4-Propionoxy-N,N-dimethyltryptamine (4-PrO-DMT, or O-propionylpsilocin) is a synthetic drug from the tryptamine family with psychedelic effects, and is believed to act as a prodrug for psilocin. It produces a head-twitch response in mice. It has been sold online as a designer drug since May 2019, and was first identified as a new psychoactive substance in Sweden in July 2019. A number of related derivatives have been synthesized as prodrugs of psilocin for medical applications. Recreational use Dosage 4-PrO-DMT is reported to be orally active; while its dose-dependent effects have been studied in mice, its effects and duration in humans have not been formally studied. The effects of 4-PrO-DMT are reported to be similar to those of psilocin (4-HO-DMT), as it acts as a prodrug for it. Pharmacology Pharmacodynamics 4-PrO-DMT is theorized to be a serotonergic psychedelic and is a partial agonist of the 5-HT1D, 5-HT1B and 5-HT1A serotonin receptors. Toxicity Very little is known about the toxicity or pharmacology of 4-PrO-DMT. Its chemical structure and pharmacological activity are similar to those of psilacetin, a compound which is not associated with compulsive use or physical dependence. However, owing to the lack of research and data, it cannot be definitively concluded that its pharmacological actions in the human body do not differ from those of psilacetin. To date, there have been no reported deaths from 4-PrO-DMT. See also 4-HO-DMT 4-AcO-DMT 5-MeO-DMT FT-104 Psilocybin References Psychedelic tryptamines Dimethylamino compounds Entheogens Serotonin receptor agonists Prodrugs Designer drugs Esters
4-PrO-DMT
[ "Chemistry" ]
433
[ "Esters", "Functional groups", "Prodrugs", "Organic compounds", "Chemicals in medicine" ]
65,214,425
https://en.wikipedia.org/wiki/Hexaphosphabenzene
Hexaphosphabenzene is a valence isoelectronic analogue of benzene and is expected to have a similar planar structure owing to resonance stabilization and its sp2 nature. Although several other allotropes of phosphorus are stable, no evidence for the existence of free P6 has been reported. Preliminary ab initio calculations on the trimerisation of P2 leading to the formation of cyclic P6 have been performed, and it was predicted that hexaphosphabenzene would decompose to free P2 with an energy barrier of 13−15.4 kcal mol−1, and would therefore not be observed in the uncomplexed state under normal experimental conditions. The presence of an added solvent, such as ethanol, might lead to the formation of intermolecular hydrogen bonds which may block the destabilizing interaction between phosphorus lone pairs and consequently stabilize P6. The moderate barrier suggests that hexaphosphabenzene could be synthesized by a [2+2+2] cycloaddition of three P2 molecules; at present, this synthesis remains an unmet challenge. Synthesis Isolation of hexaphosphabenzene was first achieved within a triple-decker sandwich complex in 1985 by Scherer et al. Amber coloured, air-stable crystals of [{(η5-Me5C5)Mo}2(μ,η6-P6)] are formed by reaction of a pentamethylcyclopentadienyl molybdenum carbonyl precursor with an excess of white phosphorus in dimethylbenzene, albeit with a yield of approximately 1%. The crystal structure of this complex is a centrosymmetric molecule, and both five-membered rings as well as the central bridging P6 ring are planar and parallel. The average P–P distance for the hexaphosphabenzene ring within this complex is 2.170 Å. Thirty years later, Fleischmann et al. improved the synthetic yield of [{(η5-Me5C5)Mo}2(μ,η6-P6)] to up to 64%. This was achieved by increasing the reaction temperature of the thermolysis to approximately 205 °C in boiling diisopropylbenzene, thus favouring the formation of [{(η5-Me5C5)Mo}2(μ,η6-P6)] as the thermodynamic product. Several analogues of this triple‐decker complex, in which the coordinating metal and the η5-ligand have been varied, have also been reported. These include triple‐decker complexes for Ti, V, Nb, and W, with the synthetic method still based on the originally reported thermolysis. Electron count If one regards the planar P6 ring as a 6π electron donor ligand, then [{(η5-Me5C5)Mo}2(μ,η6-P6)] is a triple-decker sandwich complex with 28 valence electrons. If P6, like C6H6, is taken as a 10π electron donor, a 32 valence electron count is obtained. In most triple-decker complexes with an electron count ranging from 26 to 34, the structure of the middle ring is planar ([{(η5-Cp)M}2(μ,η6-P6)] with M = Mo, Sc, Y, Zr, Hf, V, Nb, Ta, Cr, and W). In the 24 valence electron [{(η5-Cp)Ti}2(μ,η6-P6)] complex, however, a distortion is observed, and the ring is puckered. Calculations have concluded that completely filled 2a* and 2b* orbitals in 28 valence electron complexes lead to a planar symmetrical middle ring. In 26 valence electron complexes, the occupancy of either 2a* or 2b* results in in-plane or bisallylic distortions and an asymmetric planar middle ring. The puckering of the middle ring in 24 valence electron complexes is due to the stabilization of the 5a orbital, as well as that conferred by the tetravalent oxidation state of Ti in [{(η5-Cp)Ti}2(μ,η6-P6)]. Reactivity One-electron oxidation The reactivity of [{(η5-Me5C5)Mo}2(μ,η6-P6)] toward silver and copper monocationic salts of the weakly coordinating anion [Al{OC(CF3)3}4]− ([TEF]) was studied by Fleischmann et al. in 2015. 
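Before turning to the results of that study, the valence-electron bookkeeping used above can be made explicit. The sketch below is illustrative only: it assumes the standard neutral-counting conventions (a group-6 Mo centre contributing six electrons, η5-Cp* acting as a five-electron donor), and the function name is invented for the example.

```python
# Valence-electron count for the [{Cp*Mo}2(mu,eta6-P6)] triple-decker,
# using neutral (covalent) electron-counting conventions.
def triple_decker_valence_electrons(p6_donation):
    """Return the total valence electron count.

    p6_donation: electrons attributed to the bridging P6 ring
                 (6 if treated as a 6-pi donor, 10 if as a 10-pi donor).
    """
    mo_electrons = 6       # Mo, a group-6 metal, in the neutral convention
    cp_star_electrons = 5  # eta5-C5Me5 as a five-electron donor
    return 2 * mo_electrons + 2 * cp_star_electrons + p6_donation

print(triple_decker_valence_electrons(6))   # 28, the 6-pi donor count
print(triple_decker_valence_electrons(10))  # 32, the 10-pi donor count
```

Either donor convention is self-consistent; the counts of 28 and 32 valence electrons quoted above follow directly from this bookkeeping.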
Addition of a solution of Ag[TEF] or Cu[TEF] to a solution of [{(η5-Me5C5)Mo}2(μ,η6-P6)] in chloroform results in oxidation of the complex, which can be observed by an immediate colour change from amber to dark teal. The magnetic moment of the dark teal crystals determined by the Evans NMR method is 1.67 μB, consistent with one unpaired electron. Accordingly, [{(η5-Me5C5)Mo}2(μ,η6-P6)]+ is detected by ESI mass spectrometry. The crystal structure of the teal product shows that the triple‐decker geometry is retained during the one‐electron oxidation of [{(η5-Me5C5)Mo}2(μ,η6-P6)]. The Mo—Mo bond length of the [{(η5-Me5C5)Mo}2(μ,η6-P6)]+ cation is 2.6617(4) Å, almost identical to the bond length determined for the unoxidized species, 2.6463(3) Å. However, the P—P bond lengths are strongly affected by the oxidation. While the P1—P1′ and P3—P3′ bonds are elongated, the remaining P—P bonds are shortened compared with the average P—P bond length of about 2.183 Å in the unoxidized species. Therefore, the middle deck of the 27 valence electron [{(η5-Me5C5)Mo}2(μ,η6-P6)]+ complex can best be described as a bisallylically distorted P6 ligand, intermediate between the 28 valence electron complexes with a perfectly planar symmetrical ring and those with 26 valence electrons displaying a more pronounced in-plane distortion. Density functional theory (DFT) calculations confirm that this distortion is due to depopulation of the P—P bonding orbitals upon oxidation of the triple-decker sandwich complex. Cu[TEF] & Ag[TEF] To avoid oxidation of [{(η5-Me5C5)Mo}2(μ,η6-P6)], further reactions were performed in toluene to decrease the redox potential of the cations. This resulted in a bright orange coordination product upon reaction with copper, although a mixture also containing the dark teal oxidation product was obtained upon reaction with silver. Single‐crystal X‐ray analysis reveals that this product displays a distorted square‐planar coordination environment around the central cation through two side‐on coordinating P—P bonds. The Ag—P distances are approximately 2.6 Å, whereas the Cu—P distances are determined to be approximately 2.4 Å. The coordinating P—P bonds are therefore elongated to 2.2694(16) Å and 2.2915(14) Å upon coordination to copper and silver, respectively, whilst the remaining P—P bonds are unaffected. In another experiment, Cu[TEF] was treated with [{(η5-Me5C5)Mo}2(μ,η6-P6)] in pure toluene, and the solution showed the bright orange colour of the complex cation [Cu([{(η5-Me5C5)Mo}2(μ,η6-P6)])2]+. However, analysis of crystals from this solution reveals a distorted tetrahedral coordination environment around Cu. The resulting Cu—P distances are somewhat shorter than their counterparts discussed above, and the coordinating P—P bonds are slightly longer, which is attributed to the lower steric crowding in the tetrahedral coordination geometry around the Cu centre. The successful isolation of [Cu([{(η5-Me5C5)Mo}2(μ,η6-P6)])2]+ as either its tetrahedral or its square‐planar isomer is therefore achievable. DFT calculations show that the enthalpy for the tetrahedral to square‐planar isomerization is positive for both metals, the tetrahedral coordination being favoured. When entropy is taken into account, small positive values are obtained for Cu+ and larger, but negative, values for Ag+. This means that the tetrahedral geometry is predominant for Cu+, but a significant fraction of the complexes adopt a square‐planar geometry in solution. 
For Ag+, the equilibrium is shifted significantly toward the square‐planar form, which is presumably why a tetrahedral coordination of [{(η5-Me5C5)Mo}2(μ,η6-P6)] and Ag+ has not yet been observed. Examination of the crystal packing reveals that these products are layered compounds that crystallize in the monoclinic C2/c space group, with alternating negatively charged layers of the [TEF] anions and positively charged layers of isolated [M([{(η5-Me5C5)Mo}2(μ,η6-P6)])2]+ complexes. The layers lie inside the bc plane, alternate along the a axis, and do not form a two‐dimensional network. Tl[TEF] The treatment of [{(η5-Me5C5)Mo}2(μ,η6-P6)] with Tl[TEF] in chloroform gives an immediate colour change from amber to deep red. The crystal structure reveals a trigonal-pyramidal coordination of the thallium cation, Tl+, by three side‐on coordinating P—P bonds of the P6 ligands. Two of these P6 ligands show shorter and uniform Tl—P distances of 3.2–3.3 Å with P—P bonds elongated to about 2.22 Å, whilst the third unit shows an unsymmetrical coordination with long Tl—P distances of approximately 3.42 and 3.69 Å and no P—P bond elongation. Although the environment of Tl+ is distinctly different from that of Cu+ and Ag+, their structures are related by the two‐dimensional coordination network that propagates inside the bc plane. Crucially, whilst Cu+ and Ag+ form layered structures with isolated [M([{(η5-Me5C5)Mo}2(μ,η6-P6)])2]+ complex cations, there is a statistical distribution of the Tl+ cations inside the two‐dimensional coordination network, which shows further interconnection of the P6 ligands to form an extended 2D network that could be regarded as a supramolecular analogue of graphene. Jahn–Teller distortion Despite the triple-decker sandwich complex [{(η5-Me5C5)Mo}2(μ,η6-P6)] containing a demonstrably planar P6 ring with equal P—P bond lengths, theoretical calculations reveal that there are at least seven non-planar P6 isomers lower in energy than the planar benzene-like D6h structure. In increasing order of energy these are: benzvalene, prismane, chair, Dewar benzene, bicyclopropenyl, distorted benzene, and benzene. A pseudo Jahn–Teller effect (PJT) is responsible for distortion of the D6h benzene-like structure into the D2 structure, which occurs along the doubly degenerate e2u mode as a result of vibronic coupling of the HOMO − 1 (e2g) and LUMO (e2u): e2g ⊗ e2u = a1u ⊕ a2u ⊕ e2u. The distorted structure is calculated to lie just 2.7 kcal mol−1 lower in energy than the D6h structure. If the uncomplexed structure were to be successfully synthesized, the aromaticity of the benzene-like P6 structure would not be sufficient to stabilize the planar geometry, and the PJT effect would result in distortion of the ring. Isomers Adaptive Natural Density Partitioning (AdNDP) is a theoretical tool developed by Alexander Boldyrev that is based on the concept of the electron pair as the main element of chemical bonding models. It can therefore recover Lewis bonding elements such as 1c–2e core electrons and lone pairs, 2c–2e objects which are two-center two-electron bonds, as well as delocalized many-center bonding elements with respect to aromaticity. The AdNDP analysis of the seven representative low-lying P6 structures reveals that these are well described by the classical Lewis model. 
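As an aside, the vibronic-coupling decomposition quoted above, e2g ⊗ e2u = a1u ⊕ a2u ⊕ e2u, can be checked with a small character-table reduction. A minimal sketch, assuming the standard character table of the rotation subgroup D6 (the g × u = u parity is attached by hand afterwards):

```python
# Reduce the direct product E2 x E2 in the rotation group D6.
# Classes: E, 2C6, 2C3, C2, 3C2', 3C2''; class sizes in `sizes`.
sizes = [1, 2, 2, 1, 3, 3]
order = sum(sizes)  # |D6| = 12

irreps = {
    "A1": [1,  1,  1,  1,  1,  1],
    "A2": [1,  1,  1,  1, -1, -1],
    "B1": [1, -1,  1, -1,  1, -1],
    "B2": [1, -1,  1, -1, -1,  1],
    "E1": [2,  1, -1, -2,  0,  0],
    "E2": [2, -1, -1,  2,  0,  0],
}

# Character of the direct product E2 x E2, class by class.
product = [x * x for x in irreps["E2"]]

# Multiplicity of each irrep: (1/|G|) * sum over classes of
# (class size) * chi_irrep * chi_product.
for name, chi in irreps.items():
    n = sum(s * a * b for s, a, b in zip(sizes, chi, product)) // order
    if n:
        print(name, n)  # prints: A1 1, A2 1, E2 1  ->  A1 + A2 + E2
```

Multiplying in the u parity converts A1 + A2 + E2 into a1u ⊕ a2u ⊕ e2u, as quoted. Returning to the AdNDP analysis of the low-lying isomers: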
A lone pair on each phosphorus atom, a two-center-two-electron (2c–2e) σ-bond in every pair of adjacent P atoms, and an additional 2c–2e π-bond between adjacent 2-coordinated P atoms are found, with occupation numbers (ON) of all these bonding elements above 1.92 |e|. The chemical bonding in the chair structure is unusual. Based on fragment orbital analysis, it was concluded that two linkages between the two P3 fragments are of the one-electron hemibond type. The AdNDP analysis reveals a lone pair on each P atom and six 2c–2e P—P σ-bonds. One 3c–2e π-bond in every P3 triangle was revealed with the user-directed form of the AdNDP analysis, as well as a 4c–2e bond responsible for bonding between the two P3 triangles, confirming that this isomer cannot be represented by a single Lewis structure and requires a resonance of two Lewis structures, or can be described by a single formula with delocalized bonding elements. Both the D6h benzene-like structure and the D2 isomer of P6 show a bonding pattern similar to the reported AdNDP bonding pattern of the C6H6 benzene molecule: 2c–2e σ-bonds and lone pairs, as well as delocalized 6c–2e π-bonds. The distortion due to the PJT effect therefore does not significantly disturb the bonding picture. Suppression The planar P6 hexagonal structure (D6h) is a second-order saddle point due to the pseudo Jahn–Teller effect (PJT), which leads to the distorted D2 structure. Upon sandwich-complex formation the PJT effect is suppressed, owing to the filling of the unoccupied molecular orbitals involved in vibronic coupling in P6 with electron pairs of the Mo atoms. Specifically, molecular orbital analysis shows that, upon complex formation, the LUMO of the isolated P6 structure becomes occupied in the triple-decker complex as a result of an appreciable δ-type M → L back-donation from the occupied dx2−y2 and dxy atomic orbitals of the Mo atoms into the partially antibonding π molecular orbitals of P6, thus restoring the high symmetry and planarity of P6. References Phosphorus Sandwich compounds Inorganic chemistry Solid-state chemistry Hypothetical chemical compounds Aromatic compounds Six-membered rings
Hexaphosphabenzene
[ "Physics", "Chemistry", "Materials_science" ]
3,308
[ "Aromatic compounds", "Hypotheses in chemistry", "Sandwich compounds", "Organic compounds", "Theoretical chemistry", "Condensed matter physics", "Hypothetical chemical compounds", "nan", "Organometallic chemistry", "Solid-state chemistry" ]
65,220,528
https://en.wikipedia.org/wiki/Transcription-translation%20coupling
Transcription-translation coupling is a mechanism of gene expression regulation in which synthesis of an mRNA (transcription) is affected by its concurrent decoding (translation). In prokaryotes, mRNAs are translated while they are transcribed. This allows communication between RNA polymerase, the multisubunit enzyme that catalyzes transcription, and the ribosome, which catalyzes translation. Coupling involves both direct physical interactions between RNA polymerase and the ribosome ("expressome" complexes) and ribosome-induced changes to the structure and accessibility of the intervening mRNA that affect transcription ("attenuation" and "polarity"). Significance Bacteria depend on transcription-translation coupling for genome integrity, termination of transcription and control of mRNA stability. Consequently, artificial disruption of transcription-translation coupling impairs the fitness of bacteria. Without coupling, genome integrity is compromised as stalled transcription complexes interfere with DNA replication and induce DNA breaks. Lack of coupling produces premature transcription termination, likely due to increased binding of the termination factor Rho. Degradation of prokaryotic mRNAs is accelerated by loss of coupled translation, due to increased availability of the target sites of RNase E. It has also been suggested that coupling of transcription with translation is an important mechanism for preventing the formation of deleterious R-loops. While transcription-translation coupling is likely prevalent across prokaryotic organisms, not all species depend on it. Unlike in Escherichia coli, in Bacillus subtilis transcription significantly outpaces translation, and coupling consequently does not occur. Mechanisms Translation promotes transcription elongation and regulates transcription termination. Functional coupling between transcription and translation is caused by direct physical interactions between the ribosome and RNA polymerase (the "expressome complex"), ribosome-dependent changes to nascent mRNA secondary structure which affect RNA polymerase activity (e.g. "attenuation"), and ribosome-dependent changes to nascent mRNA availability to the transcription termination factor Rho ("polarity"). Expressome complex The expressome is a supramolecular complex consisting of RNA polymerase and a trailing ribosome linked by a shared mRNA transcript. It is supported by the transcription factors NusG and NusA, which interact with both RNA polymerase and the ribosome to couple the complexes together. When coupled by the transcription factor NusG, the ribosome binds newly synthesized mRNA and prevents the formation of secondary structures that inhibit transcription. Formation of an expressome complex also aids transcription elongation, with the trailing ribosome opposing back-tracking of RNA polymerase. Three-dimensional models of ribosome-RNA polymerase expressome complexes have been determined by cryo-electron microscopy. Ribosome-mediated attenuation Ribosome-mediated attenuation is a gene expression mechanism in which a transcriptional termination signal is regulated by translation. Attenuation occurs at the start of some prokaryotic operons at sequences called "attenuators", which have been identified in operons encoding amino acid biosynthesis enzymes, pyrimidine biosynthesis enzymes and antibiotic resistance factors. 
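Before walking through the attenuator's sequence elements, the net decision logic can be caricatured in a few lines of code. This toy model is purely illustrative: the function, its name and the 0.5 threshold are hypothetical stand-ins for the real kinetics, not values from the literature.

```python
# Toy model of ribosome-mediated attenuation at an amino acid
# biosynthesis operon (e.g. trp or his).
def attenuator_outcome(amino_acid_availability):
    """Return the fate of transcription given relative amino acid supply (0-1)."""
    ribosome_stalls = amino_acid_availability < 0.5  # stalls at control codons
    if ribosome_stalls:
        # A stalled ribosome leaves the antiterminator free to fold,
        # so RNA polymerase reads through into the biosynthesis genes.
        return "antiterminator forms -> operon transcribed"
    # Efficient translation lets the terminator hairpin fold instead.
    return "terminator forms -> transcription stops"

print(attenuator_outcome(0.1))  # scarce amino acid: enzymes are needed
print(attenuator_outcome(0.9))  # abundant amino acid: expression shut off
```

The real mechanism implements this decision through the mRNA sequence elements described next.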
The attenuator functions via a set of mRNA sequence elements that coordinate the status of translation to a transcription termination signal: a short open reading frame encoding a "leader peptide"; a transcription pause sequence; a "control region"; and a transcription termination signal. Once the start of the leader open reading frame has been transcribed, RNA polymerase pauses due to folding of the nascent mRNA. This programmed arrest of transcription gives time for translation of the leader peptide to commence, and transcription resumes once coupled to translation. The downstream "control region" then modulates the elongation rate of either the ribosome or RNA polymerase. Which of the two is rate-limiting depends on the function of the downstream genes (e.g., in the operon encoding enzymes involved in the synthesis of histidine, the control region contains a series of histidine codons). The role of the control region is to modulate whether transcription remains coupled to translation depending on the cellular state (e.g., low availability of histidine slows translation, leading to uncoupling, while high availability of histidine permits efficient translation and maintains coupling). Finally, the transcription terminator sequence is transcribed. Whether transcription is coupled to translation determines whether this stops transcription. The terminator requires folding of the mRNA, and by unwinding mRNA structures the ribosome selects between two alternative structures: the terminator, or a competing fold termed the "antiterminator". For amino acid biosynthesis operons, these allow the gene expression machinery to sense the abundance of the amino acid produced by the encoded enzymes, and adjust the level of downstream gene expression accordingly: transcription occurs only if the amino acid abundance is low and the demand for the enzymes is therefore high. Examples include the histidine (his) and tryptophan (trp) biosynthetic operons. The term "attenuation" was introduced to describe the his operon. While it is typically used to describe biosynthesis operons of amino acids and other metabolites, programmed transcription termination that does not occur at the end of a gene was first identified in λ phage. The discovery of attenuation was significant as it represented a regulatory mechanism distinct from repression. The trp operon is regulated by both attenuation and repression, and provided the first evidence that gene expression regulation mechanisms can be overlapping or redundant. Polarity "Polarity" is a gene expression mechanism in which transcription terminates prematurely due to a loss of coupling between transcription and translation. Transcription outpaces translation when the ribosome pauses or encounters a premature stop codon. This allows the transcription termination factor Rho to bind the mRNA and terminate mRNA synthesis. Consequently, genes that are downstream in the operon are not transcribed, and therefore not expressed. Polarity serves as mRNA quality control, allowing unused transcripts to be terminated prematurely rather than synthesized and degraded. The term "polarity" was introduced to describe the observation that the order of genes within an operon is important: a nonsense mutation within an upstream gene affects the transcription of downstream genes. 
Furthermore, the position of the nonsense mutation within the upstream gene modulates the "degree of polarity", with nonsense mutations at the start of the upstream gene exerting stronger polarity (more reduced transcription) on downstream genes. Unlike the mechanism of attenuation, which involves intrinsic termination of transcription at well-defined programmed sites, polarity is Rho-dependent and termination occurs at variable positions. Discovery The potential for transcription and translation to regulate each other was recognized by the team of Marshall Nirenberg, who discovered that the processes are physically connected through the formation of a DNA-ribosome complex. As part of the efforts of Nirenberg's group to determine the genetic code that underlies protein synthesis, they pioneered the use of cell-free in vitro protein synthesis reactions. Analysis of these reactions revealed that protein synthesis is mRNA-dependent, and that the sequence of the mRNA strictly defines the sequence of the protein product. For this work in breaking the genetic code, Nirenberg was jointly awarded the Nobel Prize in Physiology or Medicine in 1968. Having established that transcription and translation are linked biochemically (translation depends on the product of transcription), an outstanding question remained whether they were linked physically: is the newly synthesized mRNA released from the DNA before it is translated, or can translation occur concurrently with transcription? Electron micrographs of stained cell-free protein synthesis reactions revealed branched assemblies in which strings of ribosomes are linked to a central DNA fibre. DNA isolated from bacterial cells co-sediments with ribosomes, further supporting the conclusion that transcription and translation occur together. Direct contact between ribosomes and RNA polymerase is observable within these early micrographs. The potential for simultaneous regulation of transcription and translation at this junction was noted in Nirenberg's work as early as 1964. References Gene expression RNA
Transcription-translation coupling
[ "Chemistry", "Biology" ]
1,656
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
65,220,906
https://en.wikipedia.org/wiki/Alpha%20beta%20barrel
An alpha/beta barrel is a protein fold formed by units composed of a short α-helix followed by two anti-parallel β-strands, followed by an α-helix and a β-strand; the three β-strands form a β-sheet that runs parallel to the barrel axis, and the α-helices lie on the outside of the barrel but do not contact the α-helices of the other repeats, unlike in TIM barrels. The known protein structures with this fold come from proteins of the eukaryotic and archaeal initiation factor 6 family, namely Methanococcus jannaschii aIF6, Saccharomyces cerevisiae eIF6, and the eIF6 from Dictyostelium discoideum. These alpha/beta barrels are commonly occurring motifs constructed from repetitions of the beta-alpha-beta loop motif. An alpha/beta barrel also forms a domain of the pyruvate kinase enzyme. References Protein structure Protein folds Protein tandem repeats Protein domains
Alpha beta barrel
[ "Chemistry", "Biology" ]
205
[ "Protein tandem repeats", "Protein classification", "Protein domains", "Structural biology", "Protein structure" ]
75,161,417
https://en.wikipedia.org/wiki/Finite%20subgroups%20of%20SU%282%29
In applied mathematics, finite subgroups of SU(2) are groups composed of rotations and related transformations, employed particularly in the field of physical chemistry. The symmetry group of a physical body generally contains a subgroup (typically finite) of the 3D rotation group. It may occur that the group with two elements acts also on the body; this is typically the case in magnetism for the exchange of north and south poles, or in quantum mechanics for the change of spin sign. In this case, the symmetry group of a body may be a central extension of the group of spatial symmetries by the group with two elements. Hans Bethe introduced the term "double group" (Doppelgruppe) for such a group, in which two different elements induce the spatial identity, and a rotation of 2π may correspond to an element of the double group that is not the identity. The classification of the finite double groups and their character tables is therefore physically meaningful and is thus the main part of the theory of double groups. Finite double groups include the binary polyhedral groups. In physical chemistry, double groups are used in the treatment of the magnetochemistry of complexes of metal ions that have a single unpaired electron in the d-shell or f-shell. Instances when a double group is commonly used include 6-coordinate complexes of copper(II), titanium(III) and cerium(III). In these double groups rotation by 360° is treated as a symmetry operation separate from the identity operation; the double group is formed by combining these two symmetry operations with a point group such as a dihedral group or the full octahedral group. Definition and theory Let Γ be a finite subgroup of SO(3), the three-dimensional rotation group. There is a natural homomorphism π of SU(2) onto SO(3) which has kernel {±I}. This double cover can be realised using the adjoint action of SU(2) on the Lie algebra of traceless 2-by-2 skew-adjoint matrices or using the action by conjugation of unit quaternions. The double group is defined as Γ̃ = π−1(Γ). By construction, {±I} is a central subgroup of Γ̃ and the quotient Γ̃/{±I} is isomorphic to Γ. Thus Γ̃ is a central extension of the group Γ by Z2, the cyclic group of order 2. Ordinary representations of Γ̃ are just mappings of Γ into the general linear group that are homomorphisms up to a sign; equivalently, they are projective representations of Γ with a factor system or Schur multiplier in {±1}. Projective representations of Γ are closed under the tensor product operation, with their corresponding factor systems in {±1} multiplying. The central extensions of Γ by Z2 also have a natural product. The finite subgroups of SU(2) and SO(3) were determined in 1876 by Felix Klein in an article in Mathematische Annalen, later incorporated in his celebrated 1884 "Lectures on the Icosahedron": for SU(2), the subgroups correspond to the cyclic groups, the binary dihedral groups, the binary tetrahedral group, the binary octahedral group, and the binary icosahedral group; and for SO(3), they correspond to the cyclic groups, the dihedral groups, the tetrahedral group, the octahedral group and the icosahedral group. The correspondence can be found in numerous text books, and goes back to the classification of platonic solids. From Klein's classification of binary subgroups, it follows that, if Γ is a finite subgroup of SO(3), then, up to equivalence, there are exactly two central extensions of Γ by Z2: the one obtained by lifting Γ through the double cover π, and the trivial extension Γ × Z2. 
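The two-to-one covering underlying this definition can be made concrete numerically. A minimal sketch, assuming NumPy: a unit quaternion q is sent to the rotation it induces by conjugation, and q and −q visibly produce the same rotation matrix, exhibiting the kernel {±I}:

```python
import numpy as np

def quaternion_to_rotation(q):
    """Map a unit quaternion q = (w, x, y, z) to the SO(3) matrix it induces."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

theta = 2 * np.pi / 5  # a generator of a cyclic subgroup of rotations
q = np.array([np.cos(theta / 2), 0, 0, np.sin(theta / 2)])  # rotation about z

R_plus = quaternion_to_rotation(q)
R_minus = quaternion_to_rotation(-q)
print(np.allclose(R_plus, R_minus))  # True: q and -q project to the same rotation
```

Lifting a finite subgroup Γ of SO(3) through this map therefore collects both preimages ±q of each rotation, giving a double group Γ̃ of twice the order.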
The character tables of the finite subgroups of SU(2) and SO(3) were determined and tabulated by F. G. Frobenius in 1898, with alternative derivations by I. Schur and H. E. Jordan in 1907 independently. Branching rules and tensor product formulas were also determined. For each binary subgroup, i.e. finite subgroup of SU(2), the irreducible representations are labelled by extended Dynkin diagrams of type A, D and E; the rules for tensoring with the two-dimensional vector representation are given graphically by an undirected graph. By Schur's lemma, irreducible representations of the trivial extension Γ × Z2 are just irreducible representations of Γ multiplied by either the trivial or the sign character of Z2. Likewise, irreducible representations of the double group Γ̃ that send −I to the identity are just ordinary representations of Γ, while those which send −I to minus the identity are genuinely double-valued or spinor representations. Example. For the double icosahedral group, if φ = (1 + √5)/2 is the golden ratio, with inverse φ−1 = φ − 1, the character table is given below (rows are labelled by the dimensions of the irreducible representations); spinor characters are denoted by asterisks. The character table of the icosahedral group is also given.
{| class="wikitable"
|+ Character table: double icosahedral group
|-
!  !! 1 !! 12C2[5] !! 12C3[5] !! 1C4[2] !! 12C5[10] !! 12C6[10] !! 20C7[3] !! 20C8[6] !! 30C9[4]
|-
! 1
| 1 || 1 || 1 || 1 || 1 || 1 || 1 || 1 || 1
|-
! 3
| 3 || 1−φ || φ || 3 || φ || 1−φ || 0 || 0 || −1
|-
! 3′
| 3 || φ || 1−φ || 3 || 1−φ || φ || 0 || 0 || −1
|-
! 4
| 4 || −1 || −1 || 4 || −1 || −1 || 1 || 1 || 0
|-
! 5
| 5 || 0 || 0 || 5 || 0 || 0 || −1 || −1 || 1
|-
! 2*
| 2 || φ−1 || −φ || −2 || φ || 1−φ || −1 || 1 || 0
|-
! 2′*
| 2 || −φ || φ−1 || −2 || 1−φ || φ || −1 || 1 || 0
|-
! 4*
| 4 || −1 || −1 || −4 || 1 || 1 || 1 || −1 || 0
|-
! 6*
| 6 || 1 || 1 || −6 || −1 || −1 || 0 || 0 || 0
|}
{| class="wikitable"
|+ Character table: icosahedral group
|-
!  !! 1 !! 20C2[3] !! 15C3[2] !! 12C4[5] !! 12C5[5]
|-
! 1
| 1 || 1 || 1 || 1 || 1
|-
! 3
| 3 || 0 || −1 || φ || 1−φ
|-
! 3′
| 3 || 0 || −1 || 1−φ || φ
|-
! 4
| 4 || 1 || 0 || −1 || −1
|-
! 5
| 5 || −1 || 1 || 0 || 0
|}
The tensor product rules for tensoring with the two-dimensional vector representation are encoded diagrammatically by the corresponding extended Dynkin diagram, with one vertex for each irreducible character. Thus, on labelling the vertices by irreducible characters, the result of multiplying by a given irreducible character equals the sum of all irreducible characters labelled by an adjacent vertex. The representation theory of SU(2) goes back to the nineteenth century and the theory of invariants of binary forms, with the figures of Alfred Clebsch and Paul Gordan prominent. The irreducible representations of SU(2) are indexed by non-negative half integers j. If V is the two-dimensional vector representation, then Vj = S2jV, the 2j-th symmetric power of V, a (2j + 1)-dimensional vector space. The compact group G = SU(2) acts irreducibly on each Vj, and the Vj satisfy the Clebsch–Gordan rules: Vj ⊗ Vk = Vj+k ⊕ Vj+k−1 ⊕ ⋯ ⊕ V|j−k|. In particular, for j > 0, Vj ⊗ V1/2 = Vj+1/2 ⊕ Vj−1/2, and V0 ⊗ V1/2 = V1/2. By definition, the matrix representing g in Vj is just S2j(g). Since every g in SU(2) is conjugate to a diagonal matrix with diagonal entries ζ and ζ−1 (the order being immaterial), in this case S2j(g) has diagonal entries ζ2j, ζ2j−2, ..., ζ−2j+2, ζ−2j. 
Summing these entries as a geometric series yields the character formula χj(g) = (ζ2j+1 − ζ−(2j+1)) / (ζ − ζ−1). Substituting ζ = eiθ, it follows that, if g has diagonal entries eiθ and e−iθ, then χj(g) = sin (2j + 1)θ / sin θ. The representation theory of SU(2), including that of SO(3), can be developed in many different ways: using the complexification Gc = SL(2,C) and the double coset decomposition Gc = B ⋅ w ⋅ B ∐ B, where B denotes the upper triangular matrices and w a representative of the non-trivial Weyl group element; using the infinitesimal action of the Lie algebras of SU(2) and SL(2,C), where the raising and lowering operators E, F of angular momentum in quantum mechanics appear: here F = E* and H = [E, F], so that [H, E] = 2E and [H, F] = −2F; or using integration of class functions over SU(2), identifying the unit quaternions with the 3-sphere and Haar measure with the volume form: this reduces to integration over the diagonal matrices, i.e. the circle group T. The properties of matrix coefficients or representative functions of the compact group SU(2) (and SO(3)) are well documented as part of the theory of special functions: the Casimir operator commutes with the action of the group and can be identified with the Laplacian, so that a matrix coefficient of Vj is an eigenfunction of the Laplacian with eigenvalue proportional to j(j + 1). The representative functions form a non-commutative algebra under convolution with respect to Haar measure. The analogue for a finite subgroup Γ̃ of SU(2) is the finite-dimensional group algebra C[Γ̃]. From the Clebsch–Gordan rules, the convolution algebra decomposes as a direct sum of matrix algebras, one summand for each irreducible representation; the matrix coefficients for each irreducible representation form a set of matrix units. For SU(2) this direct sum decomposition is the Peter–Weyl theorem; the corresponding result for C[Γ̃] is Maschke's theorem. These decompositions allow the computation of branching rules from SU(2) to Γ̃, so that each Vj can be decomposed as a direct sum of irreducible representations of Γ̃; the constructions that arise are examples of induced representations. History Georg Frobenius derived and listed in 1899 the character tables of the finite subgroups of SU(2), the double cover of the rotation group SO(3). In 1875, Felix Klein had already classified these finite "binary" subgroups into the cyclic groups, the binary dihedral groups, the binary tetrahedral group, the binary octahedral group and the binary icosahedral group. Alternative derivations of the character tables were given by Issai Schur and H. E. Jordan in 1907; further branching rules and tensor product formulas were also determined. In a 1929 article on the splitting of atomic terms in crystals, the physicist H. Bethe first coined the term "double group" (Doppelgruppe), a concept that allowed double-valued or spinor representations of finite subgroups of the rotation group to be regarded as ordinary linear representations of their double covers. In particular, Bethe applied his theory to relativistic quantum mechanics and crystallographic point groups, where a natural physical restriction to 32 point groups occurs. Subsequently, the non-crystallographic icosahedral case has also been investigated more extensively, resulting most recently in groundbreaking advances on carbon 60 and fullerenes in the 1980s and 90s. In 1982–1984, there was another breakthrough involving the icosahedral group, this time through materials scientist Dan Shechtman's remarkable work on quasicrystals, for which he was awarded the Nobel Prize in Chemistry in 2011. 
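Before turning to applications, the character formula and the Clebsch–Gordan rules above can be checked numerically. A minimal sketch, assuming NumPy; the grid of sample angles is arbitrary:

```python
import numpy as np

def chi(j, theta):
    """Character of the (2j+1)-dimensional SU(2) irrep at rotation angle theta."""
    return np.sin((2 * j + 1) * theta) / np.sin(theta)

theta = np.linspace(0.1, 3.0, 7)  # avoid theta = 0, where chi -> 2j + 1
j = 1.5

lhs = chi(j, theta) * chi(0.5, theta)            # character of V_j (x) V_1/2
rhs = chi(j + 0.5, theta) + chi(j - 0.5, theta)  # V_{j+1/2} (+) V_{j-1/2}
print(np.allclose(lhs, rhs))                     # True
```

Averaging such characters over a finite subgroup, weighted by class size, is how the branching rules from SU(2) to Γ̃ are computed in practice.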
Applications Magnetochemistry In magnetochemistry, the need for a double group arises in a very particular circumstance, namely, in the treatment of the magnetic properties of complexes of a metal ion in whose electronic structure there is a single unpaired electron (or its equivalent, a single vacancy) in the d- or f-shell. This occurs, for example, with the elements copper, silver and gold in the +2 oxidation state, where there is a single vacancy in the d-electron shell, with titanium(III), which has a single electron in the 3d shell, and with cerium(III), which has a single electron in the 4f shell. In group theory, the character χ(α) for rotation by an angle α of a wavefunction for half-integer angular momentum J is given by χ(α) = sin((J + 1/2)α) / sin(α/2), where the angular momentum J is the vector sum of spin and orbital momentum, J = L + S. This formula applies with angular momentum in general. In atoms with a single unpaired electron the character for a rotation through an angle of α + 2π is equal to minus the character for rotation through the angle α. The change of sign cannot be true for an identity operation in any point group. Therefore, a double group, in which rotation by α + 2π is classified as being distinct from the identity operation, is used. A character table for the double group D4′ is as follows. The new operation is labelled R in this example. The character table for the point group D4 is shown for comparison.
{| class="wikitable"
|+ Character table: double group D4′
|-
! D4′ !! E !! R !! C4 !! C43 !! C2 !! 2C2′ !! 2C2″
|-
!  !!  !!  !! C43R !! C4R !! C2R !! 2C2′R !! 2C2″R
|-
! A1′
| 1 || 1 || 1 || 1 || 1 || 1 || 1
|-
! A2′
| 1 || 1 || 1 || 1 || 1 || −1 || −1
|-
! B1′
| 1 || 1 || −1 || −1 || 1 || 1 || −1
|-
! B2′
| 1 || 1 || −1 || −1 || 1 || −1 || 1
|-
! E1′
| 2 || 2 || 0 || 0 || −2 || 0 || 0
|-
! E2′
| 2 || −2 || √2 || −√2 || 0 || 0 || 0
|-
! E3′
| 2 || −2 || −√2 || √2 || 0 || 0 || 0
|}
{| class="wikitable"
|+ Character table: point group D4
|-
! D4 !! E !! 2C4 !! C2 !! 2C2′ !! 2C2″
|-
! A1
| 1 || 1 || 1 || 1 || 1
|-
! A2
| 1 || 1 || 1 || −1 || −1
|-
! B1
| 1 || −1 || 1 || 1 || −1
|-
! B2
| 1 || −1 || 1 || −1 || 1
|-
! E
| 2 || 0 || −2 || 0 || 0
|}
In the table for the double group, symmetry operations that share a column, such as C4 and C43R, belong to the same class; the header is shown, for convenience, in two rows rather than listing the paired operations together in a single row. Character tables for the double groups T′, O′, Td′, D3h′, C6v′, D6′, D2d′, C4v′, D4′, C3v′, D3′, C2v′, D2′ and R(3)′ are given in standard reference texts. The need for a double group occurs, for example, in the treatment of the magnetic properties of 6-coordinate complexes of copper(II). The electronic configuration of the central Cu2+ ion can be written as [Ar]3d9. It can be said that there is a single vacancy, or hole, in the copper 3d-electron shell, which can contain up to 10 electrons. The hexaaqua ion [Cu(H2O)6]2+ is a typical example of a compound with this characteristic. Six-coordinate complexes of the Cu(II) ion, with the generic formula [CuL6]2+, are subject to the Jahn–Teller effect, so that the symmetry is reduced from octahedral (point group Oh) to tetragonal (point group D4h). Since d orbitals are centrosymmetric, the related atomic term symbols can be classified in the subgroup D4. To a first approximation spin–orbit coupling can be ignored, and the magnetic moment of a single unpaired electron is then predicted to be 1.73 Bohr magnetons, the so-called spin-only value. However, for a more accurate prediction spin–orbit coupling must be taken into consideration. This means that the relevant quantum number is the total angular momentum J, where J = L + S. When J is half-integer, the character for a rotation by an angle of α + 2π radians is equal to minus the character for rotation by an angle α. This cannot be true for an identity in a point group. 
Consequently, a group must be used in which rotations by α + 2π are classed as symmetry operations distinct from rotations by an angle α. This group is known as the double group, D4′. With species such as the square-planar complex of the silver(II) ion, [AgF4]2−, the relevant double group is also D4′; deviations from the spin-only value are greater here, as the magnitude of spin–orbit coupling is greater for silver(II) than for copper(II). A double group is also used for some compounds of titanium in the +3 oxidation state. Compounds of titanium(III) have a single electron in the 3d shell. The magnetic moments of octahedral complexes with the generic formula [TiL6]n+ have been found to lie in the range 1.63–1.81 B.M. at room temperature. The double group O′ is used to classify their electronic states. The cerium(III) ion, Ce3+, has a single electron in the 4f shell. The magnetic properties of octahedral complexes of this ion are treated using the double group O′. When a cerium(III) ion is encapsulated in a C60 cage, the formula of the endohedral fullerene is written as Ce@C60. Free radicals Double groups may be used in connection with free radicals. This has been illustrated for the species CH3F+ and CH3BF2+, which both contain a single unpaired electron. See also McKay graph ADE classification Molecular symmetry Point group Space group Notes References Further reading (web site) Group theory Molecular physics Theoretical chemistry Materials science
Finite subgroups of SU(2)
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
3,995
[ "Applied and interdisciplinary physics", "Molecular physics", "Materials science", "Group theory", "Theoretical chemistry", "Fields of abstract algebra", " molecular", "nan", "Atomic", " and optical physics" ]
75,164,026
https://en.wikipedia.org/wiki/Bandh%20Baretha
Bandh Baretha is a freshwater man-made wetland and wildlife sanctuary covering an area of 10 square kilometers. It is located approximately 50 kilometers south of Bharatpur city, in the Bayana tehsil of Bharatpur, India. The sanctuary serves as a significant winter resort for migratory birds and plays a crucial role in storing drinking water for the region. It is situated near the small Kakund River, which enters the south-western border of Bayana tehsil from the Karauli side; here, the river's waters are held in the Baretha reservoir. During low-rainfall years the population of water birds increases at this large, permanent and legally protected wetland. Bandh Baretha is home to a diverse avian population, with a total of 67 water bird species recorded, including six globally threatened species. It is an essential refuge for birds, especially when adverse conditions prevail in the wetlands of the nearby Keoladeo National Park. The aquatic vegetation in this sanctuary is similar to that found in Keoladeo National Park, further highlighting its ecological significance. References Bharatpur, Rajasthan Wetlands of India Constructed wetlands
Bandh Baretha
[ "Chemistry", "Engineering", "Biology" ]
231
[ "Bioremediation", "Constructed wetlands", "Environmental engineering" ]
75,164,134
https://en.wikipedia.org/wiki/Tropical%20Modernism
Tropical Modernism, or Tropical Modern, is a style of architecture that merges modernist principles with tropical vernacular traditions; it emerged in the mid-20th century. This movement responded to the unique climatic and cultural conditions of tropical regions, primarily in Asia, Africa, Latin America, and the Pacific Islands. Pioneering architects like Geoffrey Bawa in Sri Lanka and Charles Correa in India balanced modern architectural techniques with the traditional building practices of their respective regions. Tropical Modernism's legacy continues to influence contemporary architectural practice, especially in the quest for sustainable design solutions in tropical climates. Historical development Tropical Modernism originated in the mid-20th century, a period marked by post-war modernization and decolonization, which saw emerging national identities across the Global South. The movement was a response to the modernist architectural approaches of the time, aiming to adapt them to the unique environmental and cultural contexts of tropical regions. Origins and early pioneers The early pioneers of Tropical Modernism include architects like Geoffrey Bawa in Sri Lanka, whose work demonstrated a profound understanding of the local climate and culture, blending modernist principles with traditional vernacular architecture. Similarly, architects like Charles Correa in India contributed to the movement by integrating modern architectural forms with traditional Indian architectural elements. Post-war modernization The post-war era saw a surge in modernization efforts across many tropical countries. The need for new infrastructure and urban development provided fertile ground for the adaptation and evolution of modernist architectural principles in tropical contexts. Decolonization and national identity The period of decolonization in many tropical regions contributed to the rise of Tropical Modernism, as emerging nations sought to express their newly found national identities through architecture. The movement became a means to reflect a blend of modernity and tradition in architectural designs. Regional variations and evolution Tropical Modernism manifested differently across various regions, reflecting the unique cultural, political, and environmental conditions of each area. In West Africa, for instance, the movement was intertwined with political power and national identity. Similarly, in regions like Latin America and Southeast Asia, Tropical Modernism evolved to reflect distinct vernacular traditions and modernization agendas. Characteristics Tropical Modernism is characterized by its seamless integration of modernist principles with tropical vernacular architectures. The style places a significant emphasis on environmental responsiveness, often characterized by extensive use of local materials, passive cooling strategies, and a strong indoor-outdoor connection. Environmental responsiveness A defining characteristic of Tropical Modernism is its responsiveness to the local climate. The design approach often incorporates passive cooling strategies, such as natural ventilation, shading, and water features, to mitigate the harsh tropical climate. Buildings designed in this style are typically oriented to maximize natural ventilation and minimize solar heat gain, thereby reducing reliance on mechanical cooling systems. 
Use of local materials The use of local materials is a hallmark of Tropical Modernism, reflecting a commitment to sustainability and a respect for local traditions. Materials such as timber, stone, and thatch are commonly used, often in innovative ways that reflect both modernist and traditional craftsmanship. Indoor-outdoor connection One of the quintessential features of Tropical Modernism is the blurring of indoor and outdoor spaces to promote natural ventilation and a sense of openness. This is often achieved through the use of large openings, verandas, courtyards, and other transitional spaces, which encourage the flow of air and the extension of living spaces into the landscape. Architectural elements Tropical Modernism often incorporates architectural elements that are characteristic of the local vernacular, such as pitched roofs, wide eaves, and raised floor levels, which are adapted to modernist sensibilities. The juxtaposition of modern and traditional elements creates a distinctive architectural language that reflects a synthesis of global modernist trends with local building traditions. Notable practitioners Tropical Modernism has been significantly shaped by a number of architects who melded modern architectural principles with tropical vernacular designs. Some notable practitioners include: Maxwell Fry and Jane Drew. This pair of British architects was active in British West Africa (Ghana, Nigeria, Gambia and Sierra Leone), where they used new construction methods and innovative techniques of climate control (e.g., adjustable louvers, wide eaves and brise-soleils). They drew international attention to the principles of modernism as applied to the tropical context through the establishment of the Department of Tropical Architecture at the Architectural Association in 1954 and through their influential book Tropical Architecture in the Humid Zone (1956). Geoffrey Bawa: A Sri Lankan architect known for pioneering Tropical Modernism. His work exemplifies the integration of modernist design principles with the traditional architectural elements of Sri Lanka, creating a unique, locally adapted style of modern architecture. Vladimir Ossipoff: Known as the "master of Hawaiian architecture," Ossipoff's work prominently features the elements of Tropical Modernism. His designs emphasize natural ventilation, indoor-outdoor integration, and the use of local materials to create buildings suited to Hawaii's climate. Charles Correa: An Indian architect who significantly contributed to Tropical Modernism by integrating modern architectural forms with traditional Indian architectural elements. His design for the Gandhi Smarak Sangrahalaya in Ahmedabad is a notable example. Lúcio Costa and Oscar Niemeyer: These Brazilian architects were instrumental in the development of Tropical Modernism in Brazil, with their design for the city of Brasília showcasing modernist architectural principles adapted to the tropical climate. Exemplary projects Tropical Modernism is epitomized in various projects that showcase the movement's key characteristics of environmental responsiveness, use of local materials, and indoor-outdoor connectivity. Here are some exemplary projects: Kandalama Hotel, Sri Lanka: Designed by Geoffrey Bawa, this hotel is a quintessential example of Tropical Modernism. Its design incorporates the natural landscape, local materials, and modern architectural principles. 
Liljestrand House, Hawaii: Designed by Vladimir Ossipoff, this house exemplifies the seamless integration of indoor and outdoor spaces, a hallmark of Tropical Modernism.
Gandhi Smarak Sangrahalaya, Ahmedabad, India: This museum, designed by Charles Correa, reflects the principles of Tropical Modernism with its use of local materials, passive cooling techniques, and integration of indoor and outdoor spaces.
Palácio do Planalto, Brasília, Brazil: Designed by Lúcio Costa and Oscar Niemeyer, this presidential palace showcases Tropical Modernism with its modernist design adapted to the tropical climate.
The Salk Institute, La Jolla, California: Although not located in a tropical region, the design by Louis Kahn incorporates key principles of Tropical Modernism.
Pearl Bank Apartments, Singapore: Designed by Tan Cheng Siong, this residential high-rise is a landmark of Tropical Modernism in Southeast Asia.
Faculty of Architecture Building, Khon Kaen University, Thailand: This building is an example of how Tropical Modernism can be integrated into educational infrastructure.

Regional variations
Tropical Modernism, though rooted in modernist architectural principles, has been diversified and enlivened by its interaction with various regional vernacular traditions. Below are some regional variations:

Hawaii: In Hawaii, the style became prominent through the works of architects like Vladimir Ossipoff, who blended Modernism with local vernacular styles. His designs highlighted the importance of environmental responsiveness and cultural sensitivity, and are now considered seminal examples of Tropical Modernism in the Pacific region.
West Africa: The style was also adapted in West Africa, where it was used as a tool to assert a modern identity post-independence. Architects such as Maxwell Fry and Jane Drew utilized Tropical Modern principles to design buildings suited to the local climate while embodying a modern aesthetic.
Brazil: In Brazil, architects like Paulo Mendes da Rocha gained international recognition for sustainable designs embodying Tropical Modernism. This regional variant emphasized functionality, aesthetic appeal, and the incorporation of natural elements, reflecting a synthesis of Modernism and "Brasilidade", or Brazilian-ness.

Criticism and colonial legacy
Tropical Modernism has faced criticism for its colonial roots, particularly in regions such as West Africa. Initially, this architectural style was employed by colonial powers, representing a form of colonial imposition, especially in British West Africa. The design principles of Tropical Modernism were largely tailored to the comfort of colonial administrators, fostering the notion of a more comfortable and therefore more productive colonial subject as a counter to calls for independence. Despite its Eurocentric beginnings, post-independence leaders like Kwame Nkrumah recognized the potential of Tropical Modernism for nation-building, intertwining it with Pan-African ideologies to foster a sense of national identity and progress.

Perspectives surrounding Vladimir Ossipoff and Tropical Modernism in Hawaii are nuanced. Ossipoff, often dubbed the "master of Hawaiian architecture," played a pivotal role in bringing the essence of Tropical Modernism to the Hawaiian Islands. His work is known for its environmental sensitivity, cultural contextualization, and appropriateness to Hawaii's unique landscape, portraying a harmonious blend between modern architectural principles and local cultural and geographic contexts.
He was known for his conviction-driven, no-nonsense approach to architecture, waging what he called a "war on ugliness" against dismal architectural design and rampant over-development in the Hawaiian Islands. However, the term "Tropical Modernism" itself, as a broader movement beyond Ossipoff's work, has faced criticism for potentially carrying colonial or Eurocentric undertones, especially when applied in non-Western contexts such as Africa. Critics argue that the movement, while aiming to blend modernist and local vernacular architectures, might inadvertently perpetuate a form of architectural colonialism or exhibit a Eurocentric bias, often by dismissing or undervaluing local architectural traditions in favor of modernist principles.

Contemporary relevance
The contemporary relevance of Tropical Modernism lies in its ability to address climate-related challenges inherent to tropical regions. Several aspects underscore its modern-day significance:

Sustainable development: Approximately 50% of the world's population resides in the tropical belt, where the fastest-growing cities are located, along with 70% of the forests that help contain CO2 emissions. The principles of Tropical Modernism are crucial for designing coherent and adapted architecture in these regions, recognizing the values and specificities of tropicality.
Environmental responsiveness: The style emphasizes passive design elements to achieve thermal comfort, an approach that is critical in tropical climates characterized by high temperatures and humidity. Features such as sunshades, overhangs, and the use of local materials contribute to energy efficiency and environmental sustainability.
Regional architectural expressions: The resurgence of regional architectures, including Tropical Modernism, is noted in international architectural discourse. This style allows for the exploration of regionalized aesthetics, encouraging reflective design practices that contemplate environmental and human contingencies. It challenges globalized mainstream architectural aesthetics, promoting a more contextual and thoughtful architectural practice.

See also
Modern architecture
Vernacular architecture
Sustainable architecture
New Khmer Architecture

References

1950s architecture
20th-century architectural styles
Modernist architecture
Sustainable architecture
Low-energy building
Sustainable development
Postcolonialism
Tropical Modernism
[ "Engineering", "Environmental_science" ]
2,196
[ "Sustainable architecture", "Environmental social science", "Architecture" ]
75,166,991
https://en.wikipedia.org/wiki/Wrexham%20Police%20Station%20%281973%E2%80%932020%29
The Wrexham Police Station was a police station housed in a tall brutalist building, located on Bodhyfryd in Wrexham, Wales. Constructed in 1973 and demolished in 2020, the tower was the tallest building in Wrexham, overtaking St Giles' Church. The building served as a North Wales Police divisional headquarters and Wrexham's police station. Concerns over maintenance costs and the overall state of the building were raised in 2011. North Wales Police vacated the building in 2019. It was rejected for listed status by Cadw and in November 2020 was demolished in a controlled explosion, amid a national lockdown. Police officers relocated to Llay HQ and to a smaller station near the Wrexham Library. A Lidl supermarket has been built on the site.

Description
The station was built in the Brutalist architecture style between 1973 and 1975. It was designed by Eric Langford Lewis, the county architect, and Stuart Brown, the assistant county architect. The building was ten storeys tall, built with a reinforced concrete frame clad with pre-cast corrugated concrete panels. The building's defining aspect was its cantilevered tower, which emerged from the building's central stalk. The tower contained the main offices, briefing and interview rooms, and the Special Branch's highly specialised accommodation. When opened, the tower became Wrexham's tallest building, overtaking the tower of St Giles' Church. The building served as the divisional headquarters of North Wales Police until replaced by a facility in Llay, as well as Wrexham's police station, since replaced by a smaller one near Wrexham library.

Its Brutalist architecture style made it difficult for some to like, and it was once described as a "monstrosity". Cadw, the Welsh Government's historic environment service, in a letter explaining its refusal to list the building, said it was a "rare and unusual (possibly unique) example of slab and podium design in Wales which makes an expressive architectural statement", but that "in the handling of form, materials and design, the building does not compare favourably with other buildings of similar design which are notably more sophisticated and elegant". North Wales Police and Crime Commissioner Arfon Jones acknowledged that the tower defined the skyline of Wrexham for an era but said he could not get "very sentimental about the old HQ" and "always thought it was a bit of a dump".

History
Construction of the building began in 1973, and it opened in 1975. The building served as an integral part of the Bodhyfryd site alongside the Wrexham Memorial Hall, Wrexham Law Courts and Waterworld. The police station replaced the old Wrexham city centre police station, which was housed in County Buildings, now the Wrexham County Borough Museum.

In 2011, North Wales Police expressed concerns that the building could no longer be used, as it was very expensive to maintain. The cost of maintaining the building over the next ten years was estimated to be £6 million, and the police force suggested that "it is likely the tower will be removed", hinting at a possible demolition of some form. At the same time, the police station in Mold, Flintshire was also under review and was recommended to remain open while the Wrexham station's future remained uncertain. The force later announced it would build a new facility in Llay, on the outskirts of Wrexham.
No decision was made at the time on what the existing building should become, only that it was "no longer suitable" and "no longer fit for purpose" for the police force to use, due to the building's ageing facilities and high maintenance costs. In 2012, the police force cut opening times at its police stations, including in Wrexham, to save money.

A pair of peregrine falcons nested on the tower roof for a number of years, and a webcam monitored several chicks being hatched and fledged. The falcons had been relocated by August 2015, when the building was proposed for demolition.

Proposals for demolition
The building was originally set to be demolished in August 2016. Its proposed demolition raised concerns that the adjacent buildings, which form Wrexham's "civic area", namely a magistrates' court, Wrexham Waterworld, Wrexham Memorial Hall, the local cenotaph and the Crown Buildings, would also be under review. However, the council's recent abandonment of plans to demolish Waterworld was argued to have quelled fears of a "mass land sell off", although concerns remained over the possibility of moving the courts to Mold, and a councillor disapproved of the police station site being turned into homes. In November 2016, the building was put up for sale, with the expectation it would be vacated by 2018. The police station closed in January 2019, and police services temporarily relocated to Crown Buildings on Chester Street. A smaller police station opened in May 2019, conjoined with the Wrexham Library building, and a larger police headquarters facility opened in Llay in November 2018.

In February 2019, Cadw, the Welsh Government's historic environment service responsible for listed buildings in Wales, reviewed a request for the listing of the building to protect it from demolition. Police and Crime Commissioner Arfon Jones stated that if the building was not demolished, the proposed sale of the site to supermarket chain Lidl would fall through, negatively impacting police funding in North Wales. Jones had written a joint statement, with the North Wales Police chief constable Carl Foulkes, to Jason Thomas, the Welsh Government's director for Culture, Tourism and Sport, asking for clarity on the possible listing. Cadw confirmed the building would not be listed and so the demolition could proceed. Local people were split on the issue, some calling it an "iconic building" of Wrexham, while others stated it was an "eyesore".

Demolition
The planned demolition of the building was approved in March 2020. Removal of the tower's electrical equipment started in July 2020. Parts of the building were demolished through to October 2020, while the rest, including the tower, was demolished on 1 November 2020 in a controlled explosion streamed online during Wales' national lockdown. Some local roads were closed during the demolition. The demolition was featured in the TV show Scrap Kings.

Site
The entire site was expected to sell for £1.5 million, and to be available for homes, retail outlets and a hotel. In January 2017, the council announced it was assessing bids for the site. In August 2018, Lidl submitted an application to open a store on the site. The council conducted a retail assessment into the need for a new supermarket and later stated that Lidl had successfully demonstrated the need for a new store.
Concerns raised by neighbouring supermarket Asda over an excessive increase in traffic in the area were dismissed by planners, who stated the estimated five per cent increase was not significant, nor would a planned drive-through coffee shop increase traffic or have an impact on pollution. The council planning committee later unanimously approved the scheme.

Nine months prior to the demolition, a petition was submitted opposing the redevelopment of the site for a supermarket. The petition did not suggest alternatives, but criticised the redevelopment and stated that the data used to gauge demand for a new supermarket was outdated. Some of those signing the petition proposed a walk-in hospital or an indoor ski centre, while others said there were already too many supermarkets in the town. By October 2020, shortly before the full demolition, 365 people had signed the petition. Nonetheless, a Lidl supermarket was built on the site. The proposal for a drive-through coffee shop was not followed through.

References

Buildings and structures in Wrexham
1973 establishments in Wales
2020 disestablishments in Wales
Police stations in Wales
Demolished buildings and structures in Wales
Buildings and structures completed in 1973
Buildings and structures demolished in 2020
Buildings and structures demolished by controlled implosion
Wrexham Police Station (1973–2020)
[ "Engineering" ]
1,570
[ "Buildings and structures demolished by controlled implosion", "Architecture" ]
61,478,670
https://en.wikipedia.org/wiki/Compact%20Toroidal%20Hybrid
The Compact Toroidal Hybrid (CTH) is an experimental device at Auburn University that uses magnetic fields to confine high-temperature plasmas. CTH is a torsatron type of stellarator with an external, continuously wound helical coil that generates the bulk of the magnetic field for containing a plasma.

Background
Toroidal magnetic confinement fusion devices create magnetic fields that lie in a torus. These magnetic fields consist of two components: one component points in the direction that goes the long way around the torus (the toroidal direction), while the other points in the direction that is the short way around the torus (the poloidal direction). The combination of the two components creates a helically shaped field. (You might imagine taking a flexible stick of candy cane and connecting the two ends.) Stellarator-type devices generate all required magnetic fields with external magnetic coils. This is different from tokamak devices, where the toroidal magnetic field is generated by external coils and the poloidal magnetic field is produced by an electrical current flowing through the plasma.

The CTH device
The main magnetic field in CTH is generated by a continuously wound helical coil. An auxiliary set of ten coils produces a toroidal field much like that of a tokamak. This toroidal field is used to vary the rotational transform of the confining magnetic field structure. CTH typically operates at a magnetic field of 0.5 to 0.6 tesla at the center of the plasma.

CTH can be operated as a pure stellarator, but it also has an ohmic heating transformer system to drive electrical current in the plasma. This current produces a poloidal magnetic field that, in addition to heating the plasma, changes the rotational transform of the magnetic field. CTH researchers study how well the plasma is confined as they vary the source of rotational transform from external coils to plasma current. The CTH vacuum vessel is made of Inconel 625, which has a higher electrical resistance and lower magnetic permeability than stainless steel. Plasma formation and heating is achieved using 14 GHz, 10 kW electron cyclotron resonance heating (ECRH). A 200 kW gyrotron has recently been installed on CTH. Ohmic heating on CTH has an input power of 100 kW.

Operations
Plasma electron temperatures are typically up to 200 electronvolts, with electron densities up to 5 m−3. Plasmas last between 60 ms and 100 ms. It takes 6–7 min to store enough energy to power the magnet coils.

Subsystems
The following subsystems are needed for CTH operation:
a set of 10 GE752 motors with attached 1-ton flywheels to store energy and produce currents for magnetic field generation
two 18 GHz klystrons for electron cyclotron resonance heating
a gyrotron for 2nd-harmonic electron cyclotron resonance heating
a 2 kV, 50 μF capacitor bank and a 1 kV, 3 F capacitor bank to power the ohmic system
a 640-channel data acquisition system

Diagnostics
The CTH has a large set of diagnostics to measure properties of the plasma and magnetic fields. The following gives a list of major diagnostics.
4-channel interferometer for electron density measurements
two-color soft X-ray camera for tomography and temperature profile
soft X-ray spectrometer
hard X-ray detector
coils for measuring Mirnov oscillations in the plasma
Rogowski coils for determining plasma current
passive spectroscopy for temperature and density measurements, and tungsten erosion diagnostic measurements
Langmuir probe (triple)

V3FIT
V3FIT is a code to reconstruct the equilibrium between the plasma and confining magnetic field in cases where the magnetic field is toroidal in nature but not axisymmetric, as is the case with tokamak equilibria. Because stellarators are non-axisymmetric, the CTH group uses the V3FIT and VMEC codes for reconstructing equilibria. The V3FIT code uses as inputs the currents in the magnetic confinement coils, the plasma current, and data from the various diagnostics such as the Rogowski coils, SXR cameras, and interferometer. The output of the V3FIT code includes the structure of the magnetic field and profiles of the plasma current, density, and SXR emissivity. Data from the CTH experiment was, and continues to be, used as a testbed for the V3FIT code, which has also been used for equilibrium reconstruction on the Helically Symmetric eXperiment (HSX), Large Helical Device (LHD), and Wendelstein 7-X (W7-X) stellarators, and the Reversed-Field eXperiment (RFX) and Madison Symmetric Torus (MST) reversed field pinches.

Goals and major achievements
CTH has made and continues to make fundamental contributions to the physics of current-carrying stellarators. CTH researchers have studied disruption limits and characterizations as a function of the externally applied rotational transform (due to external magnet coils) for:
low safety factor (low-q) tokamak-like disruption avoidance
vertical displacement events (VDEs)

Ongoing experiments
CTH students and staff work on a number of experimental and computational research projects. Some of these are solely in house, while others are in collaboration with other universities and national laboratories in the United States and abroad. Current research projects include:
density limit studies as a function of the vacuum rotational transform
using spectroscopic techniques to measure tungsten erosion, with the DIII-D group
measuring plasma flows with a coherence imaging system on CTH and on the W7-X stellarator
heavy ion transport studies on the W7-X stellarator
studying transition regions between fully ionized and neutrally dominated plasmas
implementation of a 4th channel for the interferometer system
2nd-harmonic electron cyclotron resonance heating with a gyrotron

History
CTH is the third torsatron device to be built at Auburn University. Previous magnetic confinement devices built at the university were:

The Auburn Torsatron (1983–1990)
The Auburn Torsatron had an l=2, m=10 helical coil. The vacuum vessel had a major radius of Ro = 0.58 m and a minor radius of av = 0.14 m. The magnetic field strength was |B| ≤ 0.2 T, and plasmas were formed with ECRH using a 2.45 GHz magnetron taken from a microwave oven. The Auburn Torsatron was used to study basic plasma physics and diagnostics, and magnetic surface mapping techniques.

The Compact Auburn Torsatron (1990–2000)
The Compact Auburn Torsatron (CAT) had two helical coils, an l=1, m=5 and an l=2, m=5, whose currents could be controlled independently. Varying the relative currents between the helical coils modified the rotational transform. The vacuum vessel major radius was Ro = 0.53 m, with a plasma minor radius of av = 0.11 m.
The steady-state magnetic field strength was |B| ≤ 0.1 T. CAT plasmas were formed with ECRH using a low-ripple, 6 kW, 2.45 GHz magnetron source. CAT was used to study magnetic islands, magnetic island minimization, and driven plasma rotations.

Other stellarators
Below is a list of other stellarators in the US and around the world:
Wendelstein 7-X in Greifswald, Germany
The Large Helical Device (LHD) in Japan
The National Compact Stellarator Experiment (NCSX) - a device designed and partially built at Princeton Plasma Physics Laboratory (PPPL)
The Helically Symmetric Experiment at the University of Wisconsin-Madison
The Hybrid Illinois Device for Research and Applications (HIDRA) experiment at the University of Illinois
The Columbia Non-neutral Torus (CNT) at Columbia University in New York
The Heliotron J experiment in Japan
The TJ-II in Spain
The Stellarator of Costa Rica (SCR-1)
Uragan-2M in Ukraine

References

External links
CTH website
Physics Department Auburn University

Plasma physics facilities
Stellarators
Auburn University
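The trade-off between externally and internally generated rotational transform described above can be illustrated with a back-of-the-envelope estimate. The following Python sketch uses the large-aspect-ratio cylindrical approximation for the transform contributed by a plasma current and adds it to an assumed vacuum transform; the parameter values (major radius, minor radius, field strength, vacuum transform, current range) are illustrative assumptions for a CTH-scale device, not published machine specifications.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def iota_from_current(I_p, R0, a, B_t):
    """Rotational transform contributed by a plasma current I_p (A) in the
    large-aspect-ratio cylindrical approximation: iota = R0*B_theta/(a*B_t),
    with the edge poloidal field B_theta = mu0*I_p/(2*pi*a)."""
    B_theta = MU0 * I_p / (2 * np.pi * a)
    return (R0 * B_theta) / (a * B_t)

# Illustrative (assumed) parameters, not CTH design values:
R0, a, B_t = 0.75, 0.2, 0.5   # major radius (m), minor radius (m), toroidal field (T)
iota_vac = 0.1                 # assumed vacuum transform from the helical coil

for I_p in [0.0, 20e3, 40e3, 60e3]:   # plasma current in amperes
    iota_total = iota_vac + iota_from_current(I_p, R0, a, B_t)
    frac = iota_vac / iota_total
    print(f"I_p = {I_p/1e3:5.1f} kA: iota = {iota_total:.3f} "
          f"(vacuum fraction {frac:.2f})")
```

Running the sketch shows how a modest plasma current can quickly dominate the transform supplied by the external coils, which is exactly the knob CTH researchers vary when studying confinement and disruption limits.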
Compact Toroidal Hybrid
[ "Physics" ]
1,680
[ "Plasma physics facilities", "Plasma physics" ]
61,482,208
https://en.wikipedia.org/wiki/C17H17N3O2
The molecular formula C17H17N3O2 (molar mass: 295.336 g/mol, exact mass: 295.1321 u) may refer to:
Divaplon (RU-32698)
GYKI-52895

Molecular formulas
C17H17N3O2
[ "Physics", "Chemistry" ]
75
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
63,675,213
https://en.wikipedia.org/wiki/Atomic%20manipulation
Atomic manipulation is the process of moving single atoms on a substrate using a scanning tunneling microscope (STM). Atomic manipulation is a surface science technique usually used to create artificial objects on the substrate made out of atoms and to study the electronic behaviour of matter. These objects do not occur in nature and therefore need to be created artificially. The first demonstration of atomic manipulation was performed by IBM scientists in 1989, when they spelled out the letters "IBM" with 35 individual xenon atoms.

Vertical manipulation
Vertical manipulation is the process of transferring an atom from the substrate to the STM tip, repositioning the tip, and transferring the atom back at a desired position. The transfer from substrate to tip is done by placing the tip above the atom in constant-current mode, turning off the feedback loop and applying a high bias for a few seconds. In some cases it is also necessary to approach the tip slowly while applying the high bias. Sudden spikes or drops in the current during this process correspond either to the transfer or to the atom being pushed away from the given spot. As such, there is always some level of randomness in this process. Transferring an atom from the STM tip back to the substrate is done the same way but with the opposite bias applied.

Lateral manipulation
Lateral manipulation means moving an adsorbate on the surface by making a temporary chemical or physical bond between the STM tip and the adsorbate. A typical lateral manipulation sequence begins by positioning the tip close to the adsorbate, bringing the tip close to the surface by increasing the tunneling current setpoint, moving the tip along a desired route, and finally retracting the tip to normal scanning height. Lateral manipulation is typically applied to strongly bound adsorbates, such as metal adatoms on metal surfaces. The probability that the surface adsorbate moves the same distance traveled by the tip is strongly dependent on the tip conditions. Depending on the tip apex and the surface/adsorbate system, the lateral motion can occur by pushing, pulling or sliding of the adsorbate. These modes result in distinct tunneling current signals during the lateral motion. For example, periodic steps in the tunneling current indicate that the adsorbate is "jumping" between adsorption sites while following the tip: this means the tip pushes or pulls the adsorbate.

Notable experiments
Several groups have applied atomic manipulation techniques for artistic purposes to demonstrate control over adatom positions. These include various institutional logos and a movie called "A Boy and His Atom" composed of individual STM scans by IBM researchers. Several notable condensed matter physics experiments have been realized with atomic manipulation techniques. These include the demonstration of electron confinement in so-called quantum corrals by Michael F. Crommie et al., and the subsequent quantum mirage experiment, where the Kondo signature of an adatom was reflected from one focus to another in an elliptical quantum corral. Atomic manipulation has also sparked interest as a computation platform. Andreas J. Heinrich et al. built logic gates out of molecular cascades of CO adsorbates, and Kalff et al. demonstrated a rewritable kilobyte memory made of individual atoms. Recent experiments on artificial lattice structures have utilized atomic manipulation techniques to study the electronic properties of Lieb lattices, artificial graphene and Sierpiński triangles.

References

Surface science
Nanotechnology
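Because the manipulation modes are identified from the shape of the tunneling current trace, hop events can be picked out of a recorded trace with simple signal processing. The Python sketch below is a minimal illustration on synthetic data: the sawtooth model of a pulling-mode trace, the noise level, and the jump threshold are all assumptions chosen for demonstration, not the output of any particular instrument.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "pulling-mode" trace: the current grows as the tip moves away
# from the adsorbate, then drops abruptly each time the atom hops one
# adsorption site toward the tip (assumed sawtooth model).
site, n = 0.3, 600                    # site spacing (nm), number of samples
x = np.linspace(0.0, 3.0, n)          # tip displacement (nm)
phase = (x % site) / site             # position within the current hop cycle
current = 1.0 + 0.8 * phase + 0.02 * rng.standard_normal(n)  # nA, with noise

# Detect hops as large negative jumps between consecutive samples.
steps = np.diff(current)
threshold = -0.3                      # assumed jump threshold (nA)
hops = np.flatnonzero(steps < threshold)

print(f"detected {hops.size} hops")
print("tip positions at hops (nm):", np.round(x[hops], 2))
print("mean hop spacing (nm):", np.round(np.mean(np.diff(x[hops])), 3))
```

The recovered mean hop spacing matches the assumed adsorption-site spacing, which is the same reasoning used to infer that an adsorbate is following the tip site by site during a real manipulation sequence.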
Atomic manipulation
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
669
[ "Nanotechnology", "Condensed matter physics", "Surface science", "Materials science" ]
63,676,220
https://en.wikipedia.org/wiki/Okubo%E2%80%93Weiss%20parameter
In fluid mechanics, the Okubo–Weiss parameter (normally given by W) is a measure of the relative importance of deformation and rotation at a given point. It is calculated as the sum of the squares of the normal and shear strain minus the square of the relative vorticity. It is widely used in oceanography, particularly in identifying and describing oceanic eddies. For a horizontally non-divergent flow (u, v) in the ocean, the parameter is given by:

W = sn² + ss² − ω²

where:
sn = ∂u/∂x − ∂v/∂y is the normal strain,
ss = ∂v/∂x + ∂u/∂y is the shear strain,
ω = ∂v/∂x − ∂u/∂y is the relative vorticity.

References
Okubo, A., 1970: Horizontal dispersion of floatable particles in the vicinity of velocity singularities such as convergences. Deep-Sea Res., 17, 445–454.
Weiss, J., 1991: The dynamics of enstrophy transfer in two-dimensional hydrodynamics. Physica D, 48, 273–294.

Fluid dynamics
Continuum mechanics
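As a concrete illustration of the definition above, the following Python sketch evaluates W on a gridded horizontal velocity field using finite differences. The Gaussian-vortex test field is an assumption chosen only so that the rotation-dominated core (W < 0) is easy to see; with observational data one would pass gridded (u, v) fields instead.

```python
import numpy as np

def okubo_weiss(u, v, dx, dy):
    """Okubo-Weiss parameter W = s_n**2 + s_s**2 - omega**2 for a
    horizontal velocity field (u, v) sampled on a regular grid."""
    du_dy, du_dx = np.gradient(u, dy, dx)   # axis 0 is y, axis 1 is x
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    s_n = du_dx - dv_dy          # normal strain
    s_s = dv_dx + du_dy          # shear strain
    omega = dv_dx - du_dy        # relative vorticity
    return s_n**2 + s_s**2 - omega**2

# Test field: a Gaussian vortex (assumed example, not observational data).
x = np.linspace(-1.0, 1.0, 201)
y = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(x, y)
psi = np.exp(-(X**2 + Y**2) / 0.1)          # streamfunction
# Non-divergent flow: u = -dpsi/dy, v = dpsi/dx.
dpsi_dy, dpsi_dx = np.gradient(psi, y[1] - y[0], x[1] - x[0])
u, v = -dpsi_dy, dpsi_dx

W = okubo_weiss(u, v, x[1] - x[0], y[1] - y[0])
print("W at vortex core:", W[100, 100])          # negative: rotation dominates
print("fraction of domain with W < 0:", (W < 0).mean())
```

In eddy-detection studies this is typically followed by thresholding W at a small negative multiple of its standard deviation to outline vortex cores.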
Okubo–Weiss parameter
[ "Physics", "Chemistry", "Engineering" ]
193
[ "Continuum mechanics", "Chemical engineering", "Classical mechanics", "Piping", "Fluid dynamics stubs", "Fluid dynamics" ]
63,677,270
https://en.wikipedia.org/wiki/Spoof%20surface%20plasmon
Spoof surface plasmons, also known as spoof surface plasmon polaritons and designer surface plasmons, are surface electromagnetic waves in the microwave and terahertz regimes that propagate along planar interfaces with sign-changing permittivities. Spoof surface plasmons are a type of surface plasmon polariton, which ordinarily propagates along metal-dielectric interfaces at infrared and visible frequencies. Since surface plasmon polaritons cannot exist naturally at microwave and terahertz frequencies, owing to the dispersion properties of metals, spoof surface plasmons necessitate the use of artificially engineered metamaterials. Spoof surface plasmons share the natural properties of surface plasmon polaritons, such as dispersion characteristics and subwavelength field confinement. They were first theorized by John Pendry et al.

Theory
Surface plasmon polaritons (SPPs) result from the coupling of delocalized electron oscillations ("surface plasmon") to electromagnetic waves ("polariton"). SPPs propagate along the interface between a positive- and a negative-permittivity material. These waves decay perpendicularly from the interface ("evanescent field"). For a plasmonic medium that is stratified along the z-direction in Cartesian coordinates, the dispersion relation for SPPs can be obtained from solving Maxwell's equations:

kx = (ω/c) √(εm εd / (εm + εd))

where:
kx is the wave vector component that is parallel to the interface. It is in the direction of propagation.
ω is the angular frequency.
c is the speed of light.
εm and εd are the relative permittivities of the metal and the dielectric.

Per this relation, SPPs have shorter wavelengths than light in free space for a frequency band below the surface plasmon frequency; this property, as well as subwavelength confinement, enables new applications in subwavelength optics and systems beyond the diffraction limit. Nevertheless, for lower frequency bands such as microwave and terahertz, surface plasmon polariton modes are not supported; metals function approximately as perfect electrical conductors with imaginary dielectric functions in this regime. Per the effective medium approach, metal surfaces with subwavelength structural elements can mimic the plasma behaviour, resulting in artificial surface plasmon polariton excitations with similar dispersion behaviour. For the canonical case of a metamaterial medium that is formed by thin metallic wires on a periodic square lattice, the effective relative permittivity can be represented by the Drude model formula:

εeff(ω) = 1 − ωp² / [ω (ω + i ε0 a² ωp² / (π r² σ))], with ωp² = 2π c² / (a² ln(a/r))

where:
ωp is the effective plasma frequency of the medium.
ε0 is the vacuum permittivity.
a is the lattice period.
r is the radius of the constitutive wires.
σ is the electrical conductivity of the metal.

Methods and applications
The use of subwavelength structures to induce low-frequency plasmonic excitations was first theorized by John Pendry et al. in 1996; Pendry proposed that a periodic lattice of thin metallic wires with a radius of 1 μm could be used to support surface-bound modes, with a plasma cut-off frequency of 8.2 GHz. In 2004, Pendry et al. extended the approach to metal surfaces that are perforated by holes, terming the artificial SPP excitations "spoof surface plasmons". In 2006, terahertz pulse propagation in planar metallic structures with holes was shown via FDTD simulations. Martin-Cano et al. realized the spatial and temporal modulation of guided terahertz modes via metallic parallelepiped structures, which they termed "domino plasmons".
Designer spoof plasmonic structures were also tailored to improve the performance of terahertz quantum cascade lasers in 2010. Spoof surface plasmons were proposed as a possible solution for decreasing crosstalk in microwave integrated circuits, transmission lines and waveguides. In 2013, Ma et al. demonstrated a matched conversion from a coplanar waveguide with a characteristic impedance of 50 Ω to a spoof-plasmonic structure. In 2014, the integration of a commercial low-noise amplifier with spoof plasmonic structures was realized; the system reportedly worked from 6 to 20 GHz with a gain of around 20 dB. Kianinejad et al. also reported the design of a slow-wave spoof-plasmonic transmission line; conversion from quasi-TEM microstrip modes to TM spoof plasmon modes was also demonstrated. Khanikaev et al. reported nonreciprocal spoof surface plasmon modes in a structured conductor embedded in an asymmetric magneto-optical medium, which results in one-way transmission. Pan et al. observed the rejection of certain spoof plasmon modes upon introducing electrically resonant metamaterial particles into the spoof plasmonic strip. Localized spoof surface plasmons were also demonstrated for metallic disks at microwave frequencies.

See also
Photonic crystal
Plasmonic metamaterial
Split-ring resonator
Superlens
Terahertz metamaterial

References

Further reading

Plasmonics
Metamaterials
Microwave technology
Terahertz technology
Microtechnology
Surface waves
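The two formulas in the Theory section are straightforward to evaluate numerically. The Python sketch below computes the SPP wave vector for a Drude-model metal/air interface and the effective permittivity of a thin-wire medium; all material and geometry parameters (plasma frequency, damping rate, wire radius, lattice period, conductivity) are illustrative assumptions, not values from the literature cited above.

```python
import numpy as np

c = 2.998e8          # speed of light (m/s)
eps0 = 8.854e-12     # vacuum permittivity (F/m)

def k_spp(omega, eps_m, eps_d=1.0):
    """SPP wave vector along the interface: k = (w/c)*sqrt(em*ed/(em+ed))."""
    return (omega / c) * np.sqrt(eps_m * eps_d / (eps_m + eps_d) + 0j)

def eps_drude(omega, omega_p, gamma):
    """Drude-model relative permittivity of a metal."""
    return 1.0 - omega_p**2 / (omega * (omega + 1j * gamma))

def eps_wire_medium(omega, a, r, sigma):
    """Effective permittivity of a thin-wire lattice (Pendry-type model):
    geometric plasma frequency, damping set by the wire conductivity."""
    omega_p2 = 2 * np.pi * c**2 / (a**2 * np.log(a / r))
    gamma = eps0 * a**2 * omega_p2 / (np.pi * r**2 * sigma)
    return 1.0 - omega_p2 / (omega * (omega + 1j * gamma))

# Illustrative parameters (assumed): a silver-like metal in the visible...
omega_p, gamma = 1.4e16, 1.0e14                  # rad/s
for w in np.linspace(0.1, 0.6, 6) * omega_p:
    em = eps_drude(w, omega_p, gamma)
    k = k_spp(w, em)
    print(f"w/wp = {w/omega_p:.2f}: k_SPP/k0 = {(k * c / w).real:.3f}")

# ...and a microwave wire medium: 5 mm pitch, 20 um radius copper wires.
a, r, sigma = 5e-3, 2e-5, 5.9e7
w_uw = 2 * np.pi * 10e9                           # 10 GHz
print("wire-medium eps_eff at 10 GHz:", eps_wire_medium(w_uw, a, r, sigma))
```

With these assumed dimensions the wire medium's geometric plasma frequency lands near 10 GHz, so the printed effective permittivity is small and negative there, which is the regime in which the lattice supports spoof surface-plasmon modes.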
Spoof surface plasmon
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,086
[ "Plasmonics", "Physical phenomena", "Spectrum (physical sciences)", "Metamaterials", "Microtechnology", "Surface waves", "Electromagnetic spectrum", "Surface science", "Materials science", "Waves", "Condensed matter physics", "Nanotechnology", "Solid state engineering", "Terahertz technolo...
63,682,456
https://en.wikipedia.org/wiki/Ring%20and%20Ball%20Apparatus
The ring and ball apparatus is used to determine the softening point of bitumen, waxes, LDPE, HDPE/PP blend granules, rosin and solid hydrocarbon resins. The apparatus was first designed in the 1910s, and ASTM adopted a test method in 1916. The instrument is ideally suited to materials with softening points in the range of 30 °C to 157 °C.

Components
two brass rings
two steel balls
two ball guides to hold the balls in position
a support to hold the rings, balls and thermometer in position
a glass beaker
a thermometer
a hot plate
a magnetic stirrer
glycerol or water as the heating bath

Procedure
The solid sample is placed in a Petri dish and melted by heating it on a standard hot plate. The bubble-free liquefied sample is poured from the Petri dish and cast into the ring. The brass shouldered rings in this apparatus have a depth of 6.4 mm. The cast sample in the ring is left undisturbed for one hour to solidify. Excess material is removed with a hot knife. The ring is set up with the ball on top, positioned by the ball guides, on the grooved plate within the heating bath. As the temperature rises, the balls begin to sink through the rings, each carrying a portion of the softened sample with it. The temperature at which the steel balls touch the bottom plate is recorded as the softening point, in degrees Celsius.

References

Temperature
Polymer chemistry
Ring and Ball Apparatus
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
288
[ "Scalar physical quantities", "Temperature", "Thermodynamic properties", "Physical quantities", "SI base quantities", "Intensive quantities", "Materials science", "Thermodynamics", "Polymer chemistry", "Wikipedia categories named after physical quantities" ]
63,684,284
https://en.wikipedia.org/wiki/Nuclear%20operator
In mathematics, nuclear operators are an important class of linear operators introduced by Alexander Grothendieck in his doctoral dissertation. Nuclear operators are intimately tied to the projective tensor product of two topological vector spaces (TVSs).

Preliminaries and notation
Throughout let X, Y, and Z be topological vector spaces (TVSs) and L : X → Y be a linear operator (no assumption of continuity is made unless otherwise stated).

The projective tensor product of two locally convex TVSs X and Y is denoted by X ⊗π Y and the completion of this space will be denoted by X ⊗̂π Y.
L : X → Y is a topological homomorphism, or simply homomorphism, if it is linear, continuous, and L : X → Im L is an open map, where Im L, the image of L, has the subspace topology induced by Y. If S is a subspace of X then both the quotient map X → X/S and the canonical injection S → X are homomorphisms.
The set of continuous linear maps X → Z (resp. continuous bilinear maps X × Y → Z) will be denoted by L(X, Z) (resp. B(X, Y; Z)), where if Z is the underlying scalar field then we may instead write L(X) (resp. B(X, Y)).
Any linear map L : X → Y can be canonically decomposed as X → X/ker L → Im L → Y, where L₀(x + ker L) := L(x) defines a bijection L₀ : X/ker L → Im L called the canonical bijection associated with L.
X* or X′ will denote the continuous dual space of X. To increase the clarity of the exposition, we use the common convention of writing elements of X′ with a prime following the symbol (e.g. x′ denotes an element of X′ and not, say, a derivative, and the variables x and x′ need not be related in any way).
X# will denote the algebraic dual space of X (which is the vector space of all linear functionals on X, whether continuous or not).
A linear map L : H → H from a Hilbert space into itself is called positive if ⟨L(x), x⟩ ≥ 0 for every x ∈ H. In this case, there is a unique positive map r : H → H, called the square-root of L, such that L = r ∘ r.
If L : H₁ → H₂ is any continuous linear map between Hilbert spaces, then L* ∘ L is always positive. Now let R : H₁ → H₁ denote its positive square-root, which is called the absolute value of L. Define U first on Im R by setting U(R(x)) := L(x) for x ∈ H₁ and extending U continuously to the closure of Im R, and then define U on ker R by setting U := 0 on ker R and extend this map linearly to all of H₁. The restriction U : Im R → Im L is a surjective isometry and L = U ∘ R.
A linear map Λ : X → Y is called compact or completely continuous if there is a neighborhood U of the origin in X such that Λ(U) is precompact in Y.

In a Hilbert space, positive compact linear operators, say L : H → H, have a simple spectral decomposition discovered at the beginning of the 20th century by Fredholm and F. Riesz: There is a sequence of positive numbers r₁ > r₂ > ⋯, decreasing and either finite or else converging to 0, and a sequence of nonzero finite-dimensional subspaces Vᵢ of H (i = 1, 2, …) with the following properties: (1) the subspaces Vᵢ are pairwise orthogonal; (2) for every i and every x ∈ Vᵢ, L(x) = rᵢ x; and (3) the orthogonal of the subspace spanned by the Vᵢ is equal to the kernel of L.

Notation for topologies
σ(X, X′) denotes the coarsest topology on X making every map in X′ continuous, and Xσ denotes X endowed with this topology.
σ(X′, X) denotes the weak-* topology on X′, and X′σ denotes X′ endowed with this topology. Note that every x₀ ∈ X induces a map on X′ defined by x′ ↦ x′(x₀); σ(X′, X) is the coarsest topology on X′ making all such maps continuous.
b(X, X′) denotes the topology of bounded convergence on X, and Xb denotes X endowed with this topology.
b(X′, X) denotes the topology of bounded convergence on X′, or the strong dual topology on X′, and X′b denotes X′ endowed with this topology.
As usual, if X* is considered as a topological vector space but it has not been made clear what topology it is endowed with, then the topology will be assumed to be b(X′, X).

A canonical tensor product as a subspace of the dual of Bi(X, Y)
Let X and Y be vector spaces (no topology is needed yet) and let Bi(X, Y) be the space of all bilinear maps defined on X × Y and going into the underlying scalar field.

For every (x, y) ∈ X × Y, let χ(x, y) be the canonical linear form on Bi(X, Y) defined by χ(x, y)(u) := u(x, y) for every u ∈ Bi(X, Y). This induces a canonical map 𝜒 : X × Y → Bi(X, Y)#, defined by 𝜒(x, y) := χ(x, y), where Bi(X, Y)# denotes the algebraic dual of Bi(X, Y). If we denote the span of the range of 𝜒 by X ⊗ Y then it can be shown that X ⊗ Y together with 𝜒 forms a tensor product of X and Y (where x ⊗ y := 𝜒(x, y)). This gives us a canonical tensor product of X and Y.

If Z is any other vector space then the mapping Li(X ⊗ Y; Z) → Bi(X, Y; Z) given by u ↦ u ∘ 𝜒 is an isomorphism of vector spaces. In particular, this allows us to identify the algebraic dual of X ⊗ Y with the space of bilinear forms on X × Y. Moreover, if X and Y are locally convex topological vector spaces (TVSs) and if X ⊗ Y is given the π-topology then for every locally convex TVS Z, this map restricts to a vector space isomorphism from the space of continuous linear mappings onto the space of continuous bilinear mappings. In particular, the continuous dual of X ⊗ Y can be canonically identified with the space B(X, Y) of continuous bilinear forms on X × Y; furthermore, under this identification the equicontinuous subsets of B(X, Y) are the same as the equicontinuous subsets of (X ⊗π Y)′.

Nuclear operators between Banach spaces
There is a canonical vector space embedding I : X′ ⊗ Y → L(X; Y) defined by sending z := Σᵢ x′ᵢ ⊗ yᵢ to the map

x ↦ Σᵢ x′ᵢ(x) yᵢ.

Assuming that X and Y are Banach spaces, the map I : X′b ⊗π Y → Lb(X; Y) has norm 1 (to see that the norm is at most 1, note that ‖I(z)(x)‖ = ‖Σᵢ x′ᵢ(x) yᵢ‖ ≤ ‖x‖ Σᵢ ‖x′ᵢ‖ ‖yᵢ‖, so that ‖I(z)‖ ≤ ‖z‖π). Thus it has a continuous extension to a map Î : X′b ⊗̂π Y → Lb(X; Y), where it is known that this map is not necessarily injective. The range of this map is denoted by L¹(X; Y) and its elements are called nuclear operators. L¹(X; Y) is TVS-isomorphic to (X′b ⊗̂π Y)/ker Î, and the norm on this quotient space, when transferred to elements of L¹(X; Y) via the induced map, is called the trace-norm and is denoted by ‖·‖Tr. Explicitly, if T : X → Y is a nuclear operator then ‖T‖Tr is the infimum of ‖z‖π over all z ∈ X′b ⊗̂π Y with Î(z) = T.

Characterization
Suppose that X and Y are Banach spaces and that N : X → Y is a continuous linear operator. The following are equivalent:
N is nuclear.
There exists a sequence (x′ᵢ) in the closed unit ball of X′, a sequence (yᵢ) in the closed unit ball of Y, and a complex sequence (cᵢ) such that Σᵢ |cᵢ| < ∞ and N is equal to the mapping: N(x) = Σᵢ cᵢ x′ᵢ(x) yᵢ for all x ∈ X. Furthermore, the trace-norm ‖N‖Tr is equal to the infimum of the numbers Σᵢ |cᵢ| over the set of all representations of N as such a series.

If Y is reflexive then N : X → Y is nuclear if and only if its transpose ᵗN : Y′ → X′ is nuclear, in which case ‖ᵗN‖Tr = ‖N‖Tr.

Properties
Let X and Y be Banach spaces and let N : X → Y be a continuous linear operator.
If N : X → Y is a nuclear map then its transpose ᵗN : Y′b → X′b is a continuous nuclear map (when the dual spaces carry their strong dual topologies) and ‖ᵗN‖Tr ≤ ‖N‖Tr.

Nuclear operators between Hilbert spaces
Nuclear automorphisms of a Hilbert space are called trace class operators. Let X and Y be Hilbert spaces and let N : X → Y be a continuous linear map. Suppose that N = U ∘ R, where R : X → X is the square-root of N* ∘ N and U : X → Y is such that U : Im R → Im N is a surjective isometry. Then N is a nuclear map if and only if R is a nuclear map; hence, to study nuclear maps between Hilbert spaces it suffices to restrict one's attention to positive self-adjoint operators R.

Characterizations
Let X and Y be Hilbert spaces and let N : X → Y be a continuous linear map whose absolute value is R : X → X.
The following are equivalent:
N : X → Y is nuclear.
R : X → X is nuclear.
R : X → X is compact and Tr R is finite, in which case ‖N‖Tr = Tr R. Here, Tr R is the trace of R and it is defined as follows: since R is a continuous compact positive operator, there exists a (possibly finite) sequence of positive numbers r₁ > r₂ > ⋯, with corresponding non-trivial finite-dimensional and mutually orthogonal vector spaces V₁, V₂, …, such that the orthogonal (in X) of V₁ + V₂ + ⋯ is equal to ker R (and hence also to ker N) and, for all k, R(x) = rₖ x for all x ∈ Vₖ; the trace is defined as Tr R := Σₖ rₖ dim Vₖ.
The transpose ᵗN : Y′ → X′ is nuclear, in which case ‖ᵗN‖Tr = ‖N‖Tr.
There are two orthogonal sequences (xᵢ) in X and (yᵢ) in Y, and a sequence (λᵢ) in ℓ¹ such that, for all x ∈ X, N(x) = Σᵢ λᵢ ⟨x, xᵢ⟩ yᵢ.
N : X → Y is an integral map.

Nuclear operators between locally convex spaces
Suppose that U is a convex balanced closed neighborhood of the origin in X and B is a convex balanced bounded Banach disk in Y, with both X and Y locally convex spaces. Let pU be the Minkowski functional of U and let πU : X → X/pU⁻¹(0) be the canonical projection. One can define the auxiliary Banach space X̂U (the completion of X/pU⁻¹(0) for the norm induced by pU) with the canonical map π̂U : X → X̂U, whose image, X/pU⁻¹(0), is dense in X̂U, as well as the auxiliary space YB := span B, normed by the Minkowski functional pB of B, with the canonical map ιB : YB → Y being the (continuous) canonical injection. Given any continuous linear map T : X̂U → YB one obtains through composition the continuous linear map ιB ∘ T ∘ π̂U : X → Y; thus we have an injection L(X̂U; YB) → L(X; Y), and we henceforth use this map to identify L(X̂U; YB) as a subspace of L(X; Y).

Definition: Let X and Y be Hausdorff locally convex spaces. The union of all L¹(X̂U; YB), as U ranges over all closed convex balanced neighborhoods of the origin in X and B ranges over all bounded Banach disks in Y, is denoted by L¹(X; Y) and its elements are called nuclear mappings of X into Y.

When X and Y are Banach spaces, this new definition of nuclear mapping is consistent with the original one given for the special case where X and Y are Banach spaces.

Sufficient conditions for nuclearity
Let W, X, Y, and Z be Hausdorff locally convex spaces, N : X → Y a nuclear map, and M : W → X and P : Y → Z continuous linear maps. Then N ∘ M, P ∘ N, and P ∘ N ∘ M are nuclear, and if in addition W, X, Y, and Z are all Banach spaces then ‖P ∘ N ∘ M‖Tr ≤ ‖P‖ ‖N‖Tr ‖M‖.
If N : X → Y is a nuclear map between two Hausdorff locally convex spaces, then its transpose ᵗN : Y′b → X′b is a continuous nuclear map (when the dual spaces carry their strong dual topologies). If in addition X and Y are Banach spaces, then ‖ᵗN‖Tr ≤ ‖N‖Tr.
If N : X → Y is a nuclear map between two Hausdorff locally convex spaces and if X̂ is a completion of X, then the unique continuous extension N̂ : X̂ → Y of N is nuclear.

Characterizations
Let X and Y be Hausdorff locally convex spaces and let N : X → Y be a continuous linear operator. The following are equivalent:
N is nuclear.
(Definition) There exists a convex balanced neighborhood U of the origin in X and a bounded Banach disk B in Y such that N(U) ⊆ B and the induced map N₀ : X̂U → YB is nuclear, where N₀ is the unique continuous extension of the map induced by N on X/pU⁻¹(0), i.e. the unique map satisfying N = ιB ∘ N₀ ∘ π̂U, where ιB : YB → Y is the natural inclusion and π̂U : X → X̂U is the canonical projection.
There exist Banach spaces B₁ and B₂ and continuous linear maps f : X → B₁, n : B₁ → B₂, and g : B₂ → Y such that n : B₁ → B₂ is nuclear and N = g ∘ n ∘ f.
There exists an equicontinuous sequence (x′ᵢ) in X′, a bounded Banach disk B ⊆ Y, a sequence (yᵢ) in B, and a complex sequence (cᵢ) such that Σᵢ |cᵢ| < ∞ and N(x) = Σᵢ cᵢ x′ᵢ(x) yᵢ for all x ∈ X.

If X is barreled and Y is quasi-complete, then N is nuclear if and only if N has a representation of the form N(x) = Σᵢ cᵢ x′ᵢ(x) yᵢ with (x′ᵢ) bounded in X′b, (yᵢ) bounded in Y, and Σᵢ |cᵢ| < ∞.

Properties
The following is a type of Hahn–Banach theorem for extending nuclear maps:
If E : X → Z is a TVS-embedding and N : X → Y is a nuclear map, then there exists a nuclear map Ñ : Z → Y such that Ñ ∘ E = N. Furthermore, when X and Y are Banach spaces and E is an isometry, then for any ε > 0, Ñ can be picked so that ‖Ñ‖Tr ≤ ‖N‖Tr + ε.

Suppose that E : X → Z is a TVS-embedding whose image is closed in Z and let π : Z → Z/Im E be the canonical projection.
Suppose also that every compact disk in Z/Im E is the image under π of a bounded Banach disk in Z (this is true, for instance, if X and Z are both Fréchet spaces, or if Z is the strong dual of a Fréchet space and Im E is weakly closed in Z). Then for every nuclear map N : Y → Z/Im E there exists a nuclear map Ñ : Y → Z such that π ∘ Ñ = N. Furthermore, when X and Z are Banach spaces and E is an isometry, then for any ε > 0, Ñ can be picked so that ‖Ñ‖Tr ≤ ‖N‖Tr + ε.

Let X and Y be Hausdorff locally convex spaces and let N : X → Y be a continuous linear operator.
Any nuclear map is compact.
For every topology of uniform convergence on L(X; Y), the nuclear maps are contained in the closure of X′ ⊗ Y (when X′ ⊗ Y is viewed as a subspace of L(X; Y)).

See also

References

Bibliography

External links
Nuclear space at ncatlab

Topological vector spaces
Tensors
Operator theory
Topological tensor products
Linear operators
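In the finite-dimensional Hilbert-space case every linear map is nuclear and the trace-norm reduces to the sum of the singular values, which makes the Hilbert-space characterization above easy to verify numerically. The following Python sketch is a minimal illustration with NumPy; the random matrix is an arbitrary assumption standing in for the operator N.

```python
import numpy as np

rng = np.random.default_rng(1)
N = rng.standard_normal((4, 6))          # operator N : R^6 -> R^4

# Schmidt/SVD form: N(x) = sum_i s_i <x, v_i> u_i with orthonormal u_i, v_i.
U, s, Vt = np.linalg.svd(N, full_matrices=False)
trace_norm = s.sum()                      # sum of singular values

# Verify the representation N(x) = sum_i s_i <x, v_i> u_i on a test vector.
x = rng.standard_normal(6)
Nx_series = sum(s[i] * (Vt[i] @ x) * U[:, i] for i in range(len(s)))
assert np.allclose(N @ x, Nx_series)

# The absolute value R = sqrt(N^T N) is positive, and trace(R) = sum_i s_i.
R = Vt.T @ np.diag(s) @ Vt
assert np.allclose(R, R.T) and np.all(np.linalg.eigvalsh(R) >= -1e-12)
print("trace-norm ||N||_Tr =", trace_norm, "= trace(R) =", np.trace(R))
```

The orthonormal rows of Vt and columns of U play the role of the orthogonal sequences (xᵢ) and (yᵢ) in the characterization, with the singular values as the ℓ¹ sequence (λᵢ).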
Nuclear operator
[ "Mathematics", "Engineering" ]
2,631
[ "Functions and mappings", "Tensors", "Vector spaces", "Mathematical objects", "Linear operators", "Topological vector spaces", "Space (mathematics)", "Mathematical relations", "Topological tensor products" ]
70,867,509
https://en.wikipedia.org/wiki/Bimodal%20atomic%20force%20microscopy
Bimodal Atomic Force Microscopy (bimodal AFM) is an advanced atomic force microscopy technique characterized by the generation of high-spatial-resolution maps of material properties. Maps of topography, deformation, elastic modulus, viscosity coefficient or magnetic field might be generated. Bimodal AFM is based on the simultaneous excitation and detection of two eigenmodes (resonances) of a force microscope microcantilever.

History
Numerical and theoretical considerations prompted the development of bimodal AFM. The method was initially conceived to enhance topographic contrast in air environments. Three subsequent advances, namely the capability to detect non-topographic properties such as electrostatic and magnetic interactions, imaging in liquid and ultra-high vacuum, and its genuine quantitative features, set the stage for further developments and applications.

Principles of bimodal AFM
The interaction of the tip with the sample modifies the amplitudes, phase shifts and resonance frequencies of the excited modes. Those changes are detected and processed by the feedback of the instrument. Several features make bimodal AFM a very powerful surface characterization method at the nanoscale. (i) Resolution: atomic, molecular or nanoscale spatial resolution has been demonstrated. (ii) Simultaneity: maps of different properties are generated at the same time. (iii) Efficiency: a maximum of four data points per pixel is needed to generate material property maps. (iv) Speed: analytical solutions link observables with material properties.

Configurations
In AFM, feedback loops control the operation of the microscope by keeping fixed a parameter of the tip's oscillation. If the main feedback loop operates on the amplitude, the AFM mode is called amplitude modulation (AM). If it operates on the frequency shift, the AFM mode is called frequency modulation (FM). Bimodal AFM might be operated with several feedback loops. This gives rise to a variety of bimodal configurations. The configurations are termed AM-open loop, AM-FM and FM-FM. For example, bimodal AM-FM means that the first mode is operated with an amplitude modulation loop while the 2nd mode is operated with a frequency modulation loop. The configurations might not be equivalent in terms of sensitivity, signal-to-noise ratio or complexity.

Consider the AM-FM configuration. The first mode is excited to reach its free amplitude (no interaction) and the changes of its amplitude and phase shift are tracked by a lock-in amplifier. The main feedback loop keeps the amplitude constant at a certain set-point by modifying the tip's vertical position (AM). In a nanomechanical mapping experiment, the phase shift of the first mode must be kept below 90°, i.e., the AFM is operated in the repulsive regime. At the same time, an FM loop acts on the second eigenmode. A phase-locked loop regulates the excitation frequency by keeping the phase shift of the second mode at 90°. An additional feedback loop might be used to keep the amplitude of the second mode constant.

Theory
The theory of bimodal AFM operation encompasses several aspects. Among them are the approximations used to express the Euler–Bernoulli equation of a continuous cantilever beam in terms of the equations of the excited modes, the type of interaction forces acting on the tip, the theory of demodulation methods, and the introduction of finite-size effects.
In a nutshell, the tip displacement in AFM is approximated by a point-mass model for each excited mode,

(kᵢ/ω₀ᵢ²) z̈ᵢ + (kᵢ/(Qᵢω₀ᵢ)) żᵢ + kᵢzᵢ = F₀ᵢ cos(ωᵢt) + Fts

where ωᵢ, ω₀ᵢ, Qᵢ, kᵢ, F₀ᵢ, and Fts are, respectively, the driving frequency, the free resonant frequency, the quality factor, the stiffness, the driving force of the i-th mode, and the tip–sample interaction force. In bimodal AFM, the vertical motion of the tip (deflection) has two components, one for each mode,

z(t) = z₀ + z₁(t) + z₂(t) ≈ z₀ + A₁cos(ω₁t − φ₁) + A₂cos(ω₂t − φ₂)

with z₀, z₁, and z₂ as the static, the first, and the second mode deflections; Aᵢ, ωᵢ and φᵢ are, respectively, the amplitude, frequency and phase shift of mode i.

The theory that transforms bimodal AFM observables into material properties is based on applying the virial and energy dissipation theorems to the equations of motion of the excited modes. The following equations were derived for the virial Vᵢ and the energy dissipated per period Eᵢ of mode i driven at its free resonance,

Vᵢ = (1/T) ∫₀ᵀ Fts zᵢ dt = −(kᵢAᵢA₀ᵢ/(2Qᵢ)) cos φᵢ

Eᵢ = (πkᵢAᵢ/Qᵢ)(A₀ᵢ sin φᵢ − Aᵢ)

where T is a time over which the oscillations of both modes are periodic, A₀ᵢ the free amplitude, and Qᵢ the quality factor of mode i.

Bimodal AFM operation might involve any pair of eigenmodes. However, experiments are commonly performed by exciting the first two eigenmodes. The theory of bimodal AFM provides analytical expressions to link material properties with microscope observables. For example, for a paraboloid probe (radius R) and a tip–sample force given by the linear viscoelastic Kelvin–Voigt model, the effective elastic modulus of the sample, the viscous coefficient of compressibility, the loss tangent, and the retardation time can be expressed in terms of the bimodal observables. For an elastic material, the viscous contribution disappears because the viscous coefficient vanishes, and the expressions reduce to the determination of the effective elastic modulus alone. Other analytical expressions were proposed for the determination of the Hamaker constant and the magnetic parameters of a ferromagnetic sample.

Applications
Bimodal AFM is applied to characterize a large variety of surfaces and interfaces. Some applications exploit the sensitivity of bimodal observables to enhance spatial resolution. However, the full capabilities of bimodal AFM are shown in the generation of quantitative maps of material properties. This section is divided in terms of the achieved spatial resolution, atomic-scale or nanoscale.

Atomic and molecular-scale resolution
Atomic-scale images of graphene, semiconductor surfaces and adsorbed organic molecules were obtained in ultra-high vacuum. Angstrom-resolution images of hydration layers formed on proteins, and Young's modulus maps of a metal–organic framework, purple membrane and a lipid bilayer, were reported in aqueous solutions.

Material property applications
Bimodal AFM is widely used to provide high-spatial-resolution maps of material properties, in particular mechanical properties. Elastic and/or viscoelastic property maps of polymers, DNA, proteins, protein fibers, lipids or 2D materials were generated. Non-mechanical properties and interactions, including those of magnetic garnet crystals, electrostatic strain, superparamagnetic particles and high-density disks, were also mapped. Quantitative property mapping requires the calibration of the force constants of the excited modes.

References

Scanning probe microscopy
Intermolecular forces
Scientific techniques
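The virial and dissipation relations quoted in the Theory section can be turned directly into a small calculator that converts the observables of an amplitude-modulated mode (A₀, A, φ, k, Q) into the tip–sample virial and the energy dissipated per cycle. The Python sketch below assumes each mode is driven at its free resonance, as in those relations; the numerical parameter values are illustrative, not measurements.

```python
import numpy as np

def mode_energetics(k, Q, A0, A, phi_deg):
    """Virial V_ts and dissipated energy per cycle E_dis for an
    amplitude-modulated mode driven at its free resonance:
      V_ts  = -k*A*A0*cos(phi) / (2*Q)
      E_dis =  pi*k*A/Q * (A0*sin(phi) - A)
    (the relations quoted in the Theory section)."""
    phi = np.radians(phi_deg)
    v_ts = -k * A * A0 * np.cos(phi) / (2 * Q)
    e_dis = np.pi * k * A / Q * (A0 * np.sin(phi) - A)
    return v_ts, e_dis

# Illustrative first-mode parameters (assumed): k in N/m, amplitudes in m.
k1, Q1 = 40.0, 500.0
A01, A1, phi1 = 50e-9, 40e-9, 60.0    # free amplitude, set-point, phase (deg)

v1, e1 = mode_energetics(k1, Q1, A01, A1, phi1)
eV = 1.602e-19                         # joules per electronvolt
print(f"mode 1 virial: {v1/eV:9.1f} eV")
print(f"mode 1 dissipation per cycle: {e1/eV:9.1f} eV")
# The sign of the virial reflects which part of the force dominates the
# interaction, while a positive E_dis indicates net energy transfer from
# the oscillating tip to the sample.
```

In an AM-FM nanomechanical experiment the same bookkeeping is done per pixel, with the second-mode frequency shift supplying the additional observable needed to separate elastic from viscous contributions.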
Bimodal atomic force microscopy
[ "Chemistry", "Materials_science", "Engineering" ]
1,326
[ "Molecular physics", "Materials science", "Intermolecular forces", "Scanning probe microscopy", "Microscopy", "Nanotechnology" ]
70,875,524
https://en.wikipedia.org/wiki/Underwater%20survey
An underwater survey is a survey performed in an underwater environment or conducted remotely on an underwater object or region. Survey can have several meanings. The word originates in Medieval Latin, with meanings of looking over and detailed study of a subject. One meaning is the accurate measurement of a geographical region, usually with the intention of plotting the positions of features as a scale map of the region. This meaning is often used in scientific contexts, and also in civil engineering and mineral extraction. Another meaning, often used in a civil, structural, or marine engineering context, is the inspection of a structure or vessel to compare its actual condition with the specified nominal condition, usually with the purpose of reporting on the actual condition and compliance with, or deviations from, the nominal condition, for quality control, damage assessment, valuation, insurance, maintenance, and similar purposes. In other contexts it can mean inspection of a region to establish the presence and distribution of specified content, such as living organisms, either to establish a baseline or to compare with a baseline.

These types of survey may be done in or of the underwater environment, in which case they may be referred to as underwater surveys, which may include bathymetric, hydrographic, and geological surveys, archaeological surveys, ecological surveys, and structural or vessel safety surveys. In some cases they can be done by remote sensing, using a variety of tools, and sometimes by direct human intervention, usually by a professional diver. Underwater surveys are an essential part of the planning, and often of the quality control and monitoring, of underwater construction, dredging, mineral extraction, ecological monitoring, and archaeological investigations. They are often required as part of an ecological impact study.

Types
The types of underwater survey include, but are not necessarily restricted to, archaeological, bathymetric and hydrographic, ecological, geological, and construction site surveys, and inspection surveys of marine and coastal structures and vessels afloat. A survey of a vessel's structural condition and the adjacent site and hydrographic conditions would also be done when assessing proposed marine salvage operations.

Archaeological surveys
Archaeological surveys of underwater sites have traditionally been done by divers, but at sites where the depth is too great, sonar surveys have been done from surface and submersible vehicles, and photomosaic techniques have been carried out using ROUVs. Traditional methods include direct measurement from a baseline or grid set up at the site, and triangulation by direct measurement from marks of known position installed at the site, in the same way these would be used at a terrestrial site. Accuracy may be compromised by water conditions. This work is usually done by archaeologists who are qualified scientific divers.

Bathymetric and hydrographic surveys
Bathymetric surveys are traditionally done from the surface, by measuring depth (soundings) at measured positions along transect lines and later plotting the data onto a bathymetric chart, on which lines of constant depth (isobaths) may be drawn by interpolation of soundings. It is also conventional to provide a representative set of spot depths on the chart. Originally, soundings were made manually by measuring the length of a weighted line lowered to the bottom, but after the development of accurate and reliable echo-sounding equipment, echo sounding became the standard method.
Data recording was automated when the equipment became available, and later precise position data was integrated into the data sets. Multibeam sonar with GPS position data corrected for vessel motion and combined in real time is the state of the art in the early 21st century. Bathymetric surveys of some bodies of water have required different procedures, particularly for sinkholes, caverns and caves, where a significant portion of the bottom, walls, and in some cases ceilings, is not visible to the sounding equipment from the surface, and it has been necessary to use remotely operated underwater vehicles or divers to gather the data. One of the complications of this class of underwater survey is the relative difficulty of establishing a baseline, or an accurate position for the ROUV, as GPS signals do not propagate through water. In some cases a physical line has been used, but sometimes a baseline can be established using sonar transducers set up at accurately surveyed positions, with relative offsets measured from them.

Ecological surveys
Various techniques have been used for underwater ecological surveys. Divers are frequently used to collect data, either by direct observation and recording, or by photographic recording at recorded locations, which may be specified to a given precision depending on the requirements of the project and available location technology. One method uses geolocated photographs taken by divers following a route recorded by a towed surface GPS receiver on a float, kept above the camera by line tension. Date and time data are recorded concurrently by the camera and GPS unit, allowing position data for each photo to be extracted by post-processing or inspection. GPS precision may be augmented by the Wide Area Augmentation System (WAAS). Depth data may be captured on camera from dive computers or depth gauges carried by the divers or mounted in view of the camera. The photos may be viewed on a map or via a geographic information system (GIS) for analysis.

This method can also be used for spatial surveys of small areas, particularly in places where a survey vessel cannot go. To map an area, the diver tows the float along bottom contours and the GPS track is used to create a map using drafting or GIS software. Spot depths may also be taken, using a digital camera to record time and depth from a depth gauge or dive computer to synchronize with the track data. This procedure can be combined with photographic recording of the benthic communities at intervals along the contour or perimeter.

Surveys by professional divers tend to be relatively expensive, and some ecological monitoring and data-gathering programs have enlisted the aid of volunteer recreational divers to conduct data collection appropriate to their certification and, in some cases, further training, such as the Australian-based Reef Life Survey. Others, such as iNaturalist, have used the crowdsourcing system of uploaded digital photographic records of observations, with location data to whatever standard is available, which can vary considerably, thereby taking advantage of the thousands of amateur photographers who record their underwater surroundings anyway. In this way millions of observations from dive sites all over the world have been accumulated.

Types of ecological survey: Sometimes more than one type of observation is combined in a survey.
Types of ecological survey
Sometimes more than one type of observation is combined in a survey. For example, the Reef Life Survey procedure includes three components along the same transect: a visual count of fish, a visual count of benthic fauna, and photographs of the bottom at regular intervals.
Geological surveys
A geological survey is the systematic investigation of the geology beneath a given piece of ground for the purpose of creating a geological map or model. Underwater geological surveying employs techniques ranging from the underwater equivalent of a traditional walk-over survey, studying outcrops and landforms, to intrusive methods, such as boreholes, to the use of geophysical techniques and remote sensing methods. An underwater geological survey map typically superimposes the surveyed extent and boundaries of geological units on a bathymetric map, together with information at points (such as measurements of the orientation of bedding planes) and lines (such as the intersection of faults with the seabed surface). The map may include cross sections to illustrate the three-dimensional interpretation. Much of this work is done from surface vessels by remote sensing, but in some cases, such as in flooded caves, measurement and sampling require remotely operated underwater vehicles or direct intervention by divers. Reflection seismology techniques are used for shipborne subsurface remote sensing. Seismic sources include air guns, sparkers and boomers. Airborne geophysical methods include magnetic, electromagnetic, and gravity measurement.
Site surveys
Site surveys are inspections of an area where work is proposed, to gather information for a design. A site survey can determine a precise location, access, the best orientation for the site and the location of obstacles. The type of site survey and the best practices required depend on the nature of the project. In hydrocarbon exploration, for example, site surveys are run over the proposed locations of offshore exploration or appraisal wells. They consist typically of a tight grid of high resolution (high frequency) reflection seismology profiles to look for possible gas hazards in the shallow section beneath the seabed, and detailed bathymetric data to look for possible obstacles on the seafloor (e.g. shipwrecks, existing pipelines) using multibeam echosounders. A type of site survey is performed during marine salvage operations, to assess the structural condition of a stranded vessel and to identify aspects of the vessel, site and environment that may affect the operation. Such a survey may include investigation of hull structural and watertight integrity, extent of flooding, bathymetry and geology of the immediate vicinity, currents and tidal effects, hazards, and possible environmental impact of the salvage work.
Structural surveys
Structural surveys are structural integrity inspections of inland, coastal and offshore underwater structures, including bridges, dams, causeways, harbours, breakwaters, jetties, embankments, levees, petroleum and gas production platforms and infrastructure, pipelines, wellheads and moorings.
Vessel safety surveys
Vessel safety surveys are inspections of the structure and equipment of a vessel to assess the condition of the surveyed items and check that they comply with legal or classification society requirements for insurance and registration. They may occur at any time when there is reason to suspect that the condition has changed significantly since the previous survey, or as a condition of purchase, and the first survey is generally during construction (built under survey) or before first registration.
The criteria for acceptance are defined by the licensing or registration authority for a variety of equipment vital to the safe operation of the vessel, such as hull structure, static stability, propulsion machinery, auxiliary machinery, safety equipment, lifting equipment, rigging, ground tackle, etc. Some surveys must be done in dry dock, but this is expensive, and in some cases, for intermediate surveys, the underwater part of the external survey may be done afloat using divers or ROUVs to do the inspection, usually providing live video to the surveyor, or possibly video recording for later analysis. Live video has the advantage that the surveyor can instruct the diver to investigate further or provide views from other angles. Live video would normally also be recorded for the records.
Tools
Remote measurement through water
Single beam echosounders are used to measure the distance to a reflecting surface, like the seabed, by comparing the time between emission of a sound signal and first receiving the reflected signal back at the transceiver, using the speed of sound in water. They are usually used to make a series of spot depth measurements along the path of the transducer, which can be used to map the bottom profile.
Multibeam echosounders use beamforming to extract directional information from the returning sound waves, producing a swath of depth readings across the path of the transducer from a single ping. The rate of data acquisition is far greater than for single beam systems, but they are susceptible to shadowing effects from high-profile surfaces offset to the side of the transducer path. This can be compensated for by overlapping swaths. The data is processed to give a three-dimensional image of the bottom.
Acoustic Doppler current profilers (ADCP) are hydro-acoustic current meters, used to measure water current velocities over a depth range using the Doppler effect of sound waves scattered back from particles within the water column. The traveling time of the sound waves gives the distance, and the frequency shift of the echo is proportional to the water velocity along the acoustic path.
Lidar uses a laser light source and optical receiver to measure the range and direction of reflected signals, but is limited by the water transparency.
Side-scan sonar is used to efficiently create images of large areas of the sea floor, as seen from the point of view of the transducer.
Seismic sources such as sparkers and boomers are used in seismic reflection profiling, using sound pulse frequencies that effectively penetrate the solid seabed and are partially reflected by changes in acoustic impedance, often signifying a change in rock type. Boomers work in the 500 to 4000 Hz range, and sparkers in the 200 to 800 Hz range. Lower frequency will usually penetrate to greater depth, but with lower resolution.
Platforms
Dedicated survey vessels and vessels of opportunity.
Diving support vessels for surface-supplied diving operations, and dive boats for scuba surveys. DSVs are often fitted for ROV support and other underwater surveys.
Autonomous survey vessels are more economical to operate than crewed vessels, and can be sent into waters that are too shallow or confined or otherwise hazardous for larger crewed vessels.
Autonomous underwater vehicles are more economical than crewed vehicles. Researchers have focused on the development of AUVs for long-term data collection in oceanography and coastal management.
The oil and gas industry uses AUVs to make detailed maps of the seafloor before they start building subsea infrastructure. The AUV allows survey companies to conduct precise surveys of areas where traditional bathymetric surveys would be less effective or too costly. Post-lay pipe surveys, which include pipeline inspection, are also possible. The use of AUVs for pipeline inspection and inspection of underwater man-made structures is becoming more common. Scientists use AUVs to study lakes, the ocean, and the ocean floor. A variety of sensors can also be carried to measure the concentration of various elements or compounds in the water, the absorption or reflection of light, and the presence of microscopic life. Examples include conductivity-temperature-depth sensors (CTDs), chlorophyll fluorometers, and pH sensors.
Remotely operated underwater vehicles. Survey or inspection ROVs are generally smaller than workclass ROVs and are often sub-classified as either Class I: Observation only, or Class II: Observation with payload. They are used to assist with hydrographic survey, and also for inspection work. Survey ROVs, although smaller than workclass vehicles, often have comparable performance with regard to the ability to hold position in currents, and often carry similar tools and equipment - lighting, cameras, sonar, a USBL (ultra-short baseline) beacon, and a strobe flasher - depending on the payload capability of the vehicle and the needs of the user.
Underwater position measurement systems
Underwater acoustic positioning systems are systems for the tracking, navigation and location of underwater vehicles or divers by means of acoustic distance and/or direction measurements, and subsequent position triangulation. They are commonly used in a wide variety of underwater work, including oil and gas exploration, ocean sciences, salvage operations, marine archaeology, law enforcement and military activities.
Long baseline acoustic positioning systems (LBL systems) use networks of sea-floor mounted baseline transponders as reference points for navigation. These are generally deployed around the perimeter of a work site. The LBL technique results in very high positioning accuracy and position stability that is independent of water depth. It is generally better than 1 m, and can reach a few centimetres' accuracy. LBL systems are generally used for precision underwater survey work where the accuracy or position stability of ship-based short or ultra-short baseline positioning systems does not suffice.
Short baseline acoustic positioning systems (SBL systems) do not require any seafloor mounted transponders or equipment and are thus suitable for tracking underwater targets from boats or ships that are either anchored or under way. However, unlike USBL systems, which offer a fixed accuracy, SBL positioning accuracy improves with transducer spacing. Thus, where space permits, such as when operating from larger vessels or a dock, the SBL system can achieve a precision and position robustness that is similar to that of sea floor mounted LBL systems, making the system suitable for high-accuracy survey work. When operating from a smaller vessel where transducer spacing is limited (i.e. when the baseline is short), the SBL system will exhibit reduced precision.
An ultra-short baseline acoustic positioning system (USBL), also known as super short baseline (SSBL), consists of a transceiver, which is mounted under a ship, and a transponder or responder on the seafloor, on a towfish, or on an ROV.
A computer is used to calculate a position from the ranges and bearings measured by the transceiver. USBLs are also used in "inverted" (iUSBL) configurations, with the transceiver mounted on an autonomous underwater vehicle, and the transponder on the installation that launches it. In this case, the signal processing happens inside the vehicle to allow it to locate the transponder for applications such as automatic docking and target tracking.
Manual measurement underwater
Measurements can be made using a variety of instruments. Vertical position relative to the surface, also known as depth measurement, may use:
Depth gauges (using pressure as a proxy)
Dive computers (using pressure as a proxy)
Measuring tapes (direct linear measurement)
Pneumofathometers (using pressure as a proxy)
Length measurement in other directions may use:
Measuring tape
Surveyor's chain
Calibrated distance line, particularly in cave surveys
Towed GPS receivers on floats (by spherical trigonometry)
Inertial navigation systems, integrated from accelerometer output in three dimensions
Hand-held range-finding sonar
Vernier and plain calipers, for small dimensions
Length measurements may also be derived by triangulation from a baseline, angular measurement, and trigonometry. Angular measurements may be made using:
Magnetic compass
Protractor
Clinometer
Goniometer
or may be derived from GPS positions, from linear triangulation and trigonometry, and from inertial navigation position data.
Non-destructive testing measurements may include:
Ultrasonic thickness measurement
Ultrasonic crack detection
Measurements of visibility are made using Secchi discs and similar methods, and spot measurements of other physical and chemical characteristics may be made by local measurement or recording by a diver, or by sampling of the water and bottom composition.
Sampling and specimen collection
Samples of seafloor sediments and rock can be collected using grabs, coring devices, ROUVs and divers. Coring devices include core drills and impact penetrators. Divers and ROUV operators are more discriminating in their selection of samples than grabs and remotely operated coring devices. Biological samples can be collected by dredges, grabs, traps, or nets, but more directed sampling generally requires visual input and human intervention, and is commonly done by divers, ROUVs and crewed submersibles equipped for collection.
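As a numerical illustration of the acoustic range measurements and position triangulation described earlier under Tools, the following sketch converts two-way acoustic travel times into ranges (halving them, as for echo sounding) and then solves for a position by least squares against three transponders at known locations. All coordinates, travel times and the sound speed are invented for the example, not taken from any real system.

import numpy as np
from scipy.optimize import least_squares

SOUND_SPEED = 1500.0  # m/s, a nominal seawater value; real systems use a measured profile

# Known transponder positions (x, y, z) in metres, and measured two-way
# travel times (s) between the target and each transponder.
beacons = np.array([[0.0, 0.0, -30.0], [200.0, 0.0, -32.0], [0.0, 200.0, -31.0]])
two_way_times = np.array([0.20, 0.24, 0.22])
ranges = SOUND_SPEED * two_way_times / 2.0  # one-way distances

def residuals(p):
    # Difference between modelled and measured ranges at trial position p.
    return np.linalg.norm(beacons - p, axis=1) - ranges

estimate = least_squares(residuals, x0=np.array([100.0, 100.0, -10.0]))
print(estimate.x)  # estimated (x, y, z) of the target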
Recording and counting
Underwater photography: digital underwater cameras can conveniently be used to record an image, and the time at which the photo was taken. In some cases direction, inclination and depth are also available from the camera, or can be recorded by photographing the display of appropriate instruments. Jump cameras are cameras mounted on a frame that triggers an exposure when the frame hits the bottom. To operate, the frame is lowered until the rope slacks off, then lifted, and the boat moved to the next position.
Underwater videography is the branch of electronic underwater photography concerned with capturing underwater moving images, and live video feeds, which allow a remote operator to see the underwater environment from elsewhere.
Baited remote underwater video (BRUV) is a system used in marine biology research. By attracting fish into the field of view of a remotely controlled camera, the technique records fish diversity, abundance and behaviour of species. Sites are sampled by video recording the region surrounding a baited canister which is lowered to the bottom. The video can be transmitted directly to the surface by cable, or recorded for later analysis. Baited cameras are highly effective at attracting scavengers and subsequent predators, and are a non-invasive method of generating relative abundance indices for a number of marine species.
Checklists are useful when a reasonably small range of objects or types of object are to be recorded as being present, as they reduce the amount of writing that must be done underwater (legibility tends to suffer in cold water or in moving water).
Clipboard and pencil are used when sketches and measurements are to be recorded, and are versatile though not very efficient for data recording. Waterproof paper is available for use on clipboards, and can be printed with checklists.
Quadrat frames are used to establish a discrete area for examination, and can be visually examined, photographed, or both.
Presentation of results
Results of underwater surveys can be presented in several ways, depending on the target demographic and the intended use of the data. A common presentation format is a map indicating spatial distribution or general topography, often involving a depth dimension. Drawings, photographic images, graphs, tables, and text descriptions may also be used, often in conjunction with one or more maps. Maps may also be used to indicate variations over time in comparison with a baseline.
See also
References
Sonar Geodesy Underwater work Surveying
Underwater survey
[ "Mathematics", "Engineering" ]
4,226
[ "Applied mathematics", "Civil engineering", "Surveying", "Geodesy" ]
58,486,357
https://en.wikipedia.org/wiki/Soft%20configuration%20model
In applied mathematics, the soft configuration model (SCM) is a random graph model subject to the principle of maximum entropy under constraints on the expectation of the degree sequence of sampled graphs. Whereas the configuration model (CM) uniformly samples random graphs of a specific degree sequence, the SCM only retains the specified degree sequence on average over all network realizations; in this sense the SCM has very relaxed constraints relative to those of the CM ("soft" rather than "sharp" constraints). The SCM for graphs of size $n$ has a nonzero probability of sampling any graph of size $n$, whereas the CM is restricted to only graphs having precisely the prescribed connectivity structure.
Model formulation
The SCM is a statistical ensemble of random graphs $G$ having $n$ vertices ($n = |V(G)|$) labeled $\{v_j\}_{j=1}^n$, producing a probability distribution on $\mathbb{G}_n$ (the set of graphs of size $n$). Imposed on the ensemble are $n$ constraints, namely that the ensemble average of the degree $k_j$ of vertex $v_j$ is equal to a designated value $\widehat{k}_j$, for all $v_j \in V(G)$. The model is fully parameterized by its size $n$ and expected degree sequence $\{\widehat{k}_j\}_{j=1}^n$. These constraints are both local (one constraint associated with each vertex) and soft (constraints on the ensemble average of certain observable quantities), and thus yield a canonical ensemble with an extensive number of constraints. The conditions $\langle k_j \rangle = \widehat{k}_j$ are imposed on the ensemble by the method of Lagrange multipliers (see Maximum-entropy random graph model).
Derivation of the probability distribution
The probability $P_\text{SCM}(G)$ of the SCM producing a graph $G$ is determined by maximizing the Gibbs entropy $S[P] = -\sum_{G\in\mathbb{G}_n} P(G)\log P(G)$ subject to the constraints $\langle k_j \rangle = \widehat{k}_j$, $j = 1,\ldots,n$, and normalization $\sum_{G\in\mathbb{G}_n} P(G) = 1$. This amounts to optimizing the multi-constraint Lagrange function below:

$\mathcal{L}\left(\alpha, \{\psi_j\}_{j=1}^n\right) = -\sum_{G\in\mathbb{G}_n} P(G)\log P(G) + \alpha\left(1 - \sum_{G\in\mathbb{G}_n} P(G)\right) + \sum_{j=1}^n \psi_j\left(\widehat{k}_j - \sum_{G\in\mathbb{G}_n} P(G)\,k_j(G)\right),$

where $\alpha$ and $\{\psi_j\}_{j=1}^n$ are the $n+1$ multipliers to be fixed by the $n+1$ constraints (normalization and the expected degree sequence). Setting to zero the derivative of the above with respect to $P(G)$ for an arbitrary $G \in \mathbb{G}_n$ yields

$P_\text{SCM}(G) = \frac{1}{Z}\exp\left[-\sum_{j=1}^n \psi_j k_j(G)\right],$

the constant $Z := e^{\alpha+1}$ being the partition function normalizing the distribution; the above exponential expression applies to all $G \in \mathbb{G}_n$, and thus is the probability distribution. Hence we have an exponential family parameterized by $\{\psi_j\}_{j=1}^n$, which are related to the expected degree sequence $\{\widehat{k}_j\}_{j=1}^n$ by the following equivalent expressions:

$\langle k_q \rangle = \sum_{G\in\mathbb{G}_n} P_\text{SCM}(G)\,k_q(G) = \sum_{j \ne q} \frac{1}{e^{\psi_q + \psi_j} + 1} = \widehat{k}_q, \qquad q = 1,\ldots,n.$

References
Random graphs
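Because the distribution above factorizes over vertex pairs, each edge $(i,j)$ occurs independently with probability $p_{ij} = 1/(e^{\psi_i + \psi_j} + 1)$, so sampling a graph from the SCM is straightforward once the multipliers are known. A minimal sketch, with illustrative multiplier values:

import numpy as np

# Sample one graph from the SCM given Lagrange multipliers psi (one per vertex).
rng = np.random.default_rng(0)
psi = np.array([0.5, 0.5, 1.0, 2.0])  # illustrative values, not fitted to data
n = len(psi)

adj = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(i + 1, n):
        # Fermi-Dirac-type connection probability from the exponential family above.
        p_ij = 1.0 / (np.exp(psi[i] + psi[j]) + 1.0)
        adj[i, j] = adj[j, i] = int(rng.random() < p_ij)

print(adj.sum(axis=1))  # sampled degrees; these match the expected degrees on average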
Soft configuration model
[ "Physics", "Mathematics" ]
414
[ "Graph theory", "Mathematical relations", "Random graphs", "Statistical ensembles", "Statistical mechanics" ]
58,487,522
https://en.wikipedia.org/wiki/Preconditioned%20Crank%E2%80%93Nicolson%20algorithm
In computational statistics, the preconditioned Crank–Nicolson algorithm (pCN) is a Markov chain Monte Carlo (MCMC) method for obtaining random samples – sequences of random observations – from a target probability distribution for which direct sampling is difficult. The most significant feature of the pCN algorithm is its dimension robustness, which makes it well-suited for high-dimensional sampling problems. The pCN algorithm is well-defined, with non-degenerate acceptance probability, even for target distributions on infinite-dimensional Hilbert spaces. As a consequence, when pCN is implemented on a real-world computer in large but finite dimension N, i.e. on an N-dimensional subspace of the original Hilbert space, the convergence properties (such as ergodicity) of the algorithm are independent of N. This is in strong contrast to schemes such as Gaussian random walk Metropolis–Hastings and the Metropolis-adjusted Langevin algorithm, whose acceptance probability degenerates to zero as N tends to infinity. The algorithm as named was highlighted in 2013 by Cotter, Roberts, Stuart and White, and its ergodicity properties were proved a year later by Hairer, Stuart and Vollmer. In the specific context of sampling diffusion bridges, the method was introduced in 2008.
Description of the algorithm
Overview
The pCN algorithm generates a Markov chain $(X_n)_{n\in\mathbb{N}}$ on a Hilbert space $\mathcal{H}$ whose invariant measure is a probability measure $\mu$ of the form

$\mu(E) = \frac{1}{Z} \int_E \exp(-\Phi(x))\,\mu_0(\mathrm{d}x)$

for each measurable set $E \subseteq \mathcal{H}$, with normalising constant $Z$ given by

$Z = \int_\mathcal{H} \exp(-\Phi(x))\,\mu_0(\mathrm{d}x),$

where $\mu_0$ is a Gaussian measure on $\mathcal{H}$ with covariance operator $C_0$ and $\Phi \colon \mathcal{H} \to \mathbb{R}$ is some function. Thus, the pCN method applies to target probability measures that are re-weightings of a reference Gaussian measure. The Metropolis–Hastings algorithm is a general class of methods that try to produce such Markov chains $(X_n)_{n\in\mathbb{N}}$, and do so by a two-step procedure of first proposing a new state $X'_{n+1}$ given the current state $X_n$ and then accepting or rejecting this proposal, according to a particular acceptance probability, to define the next state $X_{n+1}$. The idea of the pCN algorithm is that a clever choice of (non-symmetric) proposal for a new state $X'_{n+1}$ given $X_n$ might have an associated acceptance probability function with very desirable properties.
The pCN proposal
The special form of this pCN proposal is to take

$X'_{n+1} := \sqrt{1-\beta^2}\,X_n + \beta\,\Xi_n, \qquad \Xi_n \sim \mathrm{N}(0, C_0),$

or, equivalently,

$X'_{n+1} \mid X_n \sim \mathrm{N}\left(\sqrt{1-\beta^2}\,X_n,\ \beta^2 C_0\right).$

The parameter $\beta \in (0, 1]$ is a step size that can be chosen freely (and even optimised for statistical efficiency). One then generates $\Xi_n \sim \mathrm{N}(0, C_0)$ and sets $X_{n+1} := X'_{n+1}$ if the proposal is accepted, and $X_{n+1} := X_n$ if it is rejected. The acceptance probability takes the simple form

$\alpha(X_n, X'_{n+1}) = \min\left(1, \exp\left(\Phi(X_n) - \Phi(X'_{n+1})\right)\right).$

It can be shown that this method not only defines a Markov chain that satisfies detailed balance with respect to the target distribution $\mu$, and hence has $\mu$ as an invariant measure, but also possesses a spectral gap that is independent of the dimension of $\mathcal{H}$, and so the law of $X_n$ converges to $\mu$ as $n \to \infty$. Thus, although one may still have to tune the step size parameter $\beta$ to achieve a desired level of statistical efficiency, the performance of the pCN method is robust to the dimension of the sampling problem being considered.
Contrast with symmetric proposals
This behaviour of pCN is in stark contrast to the Gaussian random walk proposal

$X'_{n+1} \mid X_n \sim \mathrm{N}(X_n, C)$

with any choice of proposal covariance $C$, or indeed any symmetric proposal mechanism. It can be shown using the Cameron–Martin theorem that for infinite-dimensional $\mathcal{H}$ this proposal has acceptance probability zero for $\mu$-almost all $X_n$ and $X'_{n+1}$.
In practice, when one implements the Gaussian random walk proposal in dimension $N$, this phenomenon can be seen in the way that, for a fixed step size $\beta$, the acceptance probability tends to zero as $N \to \infty$, and, for a fixed desired positive acceptance probability, $\beta \to 0$ as $N \to \infty$.
References
Monte Carlo methods Markov chain Monte Carlo Sampling techniques
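A minimal finite-dimensional sketch of the pCN update follows, assuming an identity reference covariance $C_0$ and an invented potential $\Phi$; the dimension, step size and chain length are illustrative only.

import numpy as np

rng = np.random.default_rng(1)
N = 50
C0 = np.eye(N)                        # reference covariance (identity for simplicity)
L = np.linalg.cholesky(C0)            # to draw xi ~ N(0, C0)
phi = lambda x: 0.5 * np.sum(x**4)    # example potential Phi (not from the references)
beta = 0.2                            # pCN step size

x = np.zeros(N)
samples = []
for _ in range(5000):
    xi = L @ rng.standard_normal(N)
    x_prop = np.sqrt(1.0 - beta**2) * x + beta * xi  # the pCN proposal
    # Accept with probability min(1, exp(Phi(x) - Phi(x_prop))).
    if np.log(rng.random()) < phi(x) - phi(x_prop):
        x = x_prop
    samples.append(x.copy())

Note that the proposal involves only draws from the reference Gaussian and evaluations of $\Phi$; the acceptance ratio does not involve the Gaussian densities themselves, which is what keeps it non-degenerate as the dimension grows.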
Preconditioned Crank–Nicolson algorithm
[ "Physics" ]
720
[ "Monte Carlo methods", "Computational physics" ]
58,490,342
https://en.wikipedia.org/wiki/Falck-Hillarp%20method%20of%20fluorescence
The Falck-Hillarp method of fluorescence (the F-H method) is a technique that makes it possible to demonstrate and study, with unique precision and sensitivity, certain monoamines, among them the three catecholamines dopamine, noradrenaline, and adrenaline, as well as serotonin and related substances. The method is based on the important and decisive discovery that these compounds are able to react with formaldehyde – in the near complete absence of water – to form fluorophores, i.e. molecules that, when irradiated with light invisible to the eye, will emit visible light. This happens in a "dry" state, without extracting the monoamines from the cells at any point in the procedure, which starts with the separation of a tissue sample and ends with a thin tissue slice that can be examined in a fluorescence microscope. The F-H method allowed the examiner, for the first time, to watch these monoamines light up in the microscope and to determine precisely in which cells they were present, and thereby to understand their functions. The method was developed by Bengt Falck and Nils-Åke Hillarp in the 1960s at the Department of Histology, University of Lund. It opened the way for intense neurobiological research: it became possible to demonstrate the presence of monoamines in nerve cells belonging to the central and peripheral nervous systems and, for the first time, to comprehend that these substances act as signal substances, i.e. transmitters. The initial publication, written as early as 1961, described a wide-ranging examination of nerves supplying a large number of organs in the body. This work validated the concept of the Nobel prize winner Ulf von Euler that noradrenaline is the signal substance in peripheral autonomic nerves. In the same year, this first publication was followed by an explanation of the chemical background of the F-H method. Very thin membranes, such as the rat iris or mesentery, do not have to be sectioned for microscopic studies but may simply be spread on glass, dried, and then exposed to gaseous formaldehyde for subsequent study with a fluorescence microscope. The publication on the chemical background was later named among "The 200 Most-Cited Papers of All Time". In 2012, the Faculty of Medicine at the University of Lund arranged a symposium, "From Nerve to Pills", celebrating the 50th anniversary of the initial publication of the F-H method.
References
External links
The Falck-Hillarp Fluorescence Method
Biochemistry methods
Falck-Hillarp method of fluorescence
[ "Chemistry", "Biology" ]
525
[ "Biochemistry methods", "Biochemistry" ]
78,142,100
https://en.wikipedia.org/wiki/Soterenol
Soterenol, also known as soterenol hydrochloride (developmental code name MJ-1992) in the case of the hydrochloride salt, is a drug of the phenethylamine family, described as an adrenergic, bronchodilator, and antiasthmatic, that was never marketed. It is an analogue of salbutamol and acts as a β-adrenergic receptor agonist. The drug was first developed in 1964 and was first described in the literature by 1967.
References
Abandoned drugs Amines Antiasthmatic drugs Beta-adrenergic agonists Bronchodilators Phenylethanolamines Sulfonamides Sympathomimetics Isopropylamino compounds
Soterenol
[ "Chemistry" ]
163
[ "Drug safety", "Functional groups", "Amines", "Bases (chemistry)", "Abandoned drugs" ]
78,142,608
https://en.wikipedia.org/wiki/Trecadrine
Trecadrine is a drug that was originally developed as an anti-ulcer agent but was found to act as a β3-adrenergic receptor agonist with potential anti-obesity and anti-diabetic properties. It is selective for the β3-adrenergic receptor, lacking activity at the β1- and β2-adrenergic receptors. The drug is orally active. Structurally, trecadrine is a substituted β-hydroxyamphetamine and a derivative of β-hydroxy-N-methylamphetamine (ephedrine, pseudoephedrine) with a tricyclic moiety attached at the amine.
References
Abandoned drugs Anti-diabetic drugs Anti-obesity drugs Beta3-adrenergic agonists Drugs for acid-related disorders Methamphetamines Sympathomimetics Tricyclic compounds Phenethylamines Ethanolamines Dibenzocycloheptenes
Trecadrine
[ "Chemistry" ]
206
[ "Drug safety", "Abandoned drugs" ]
78,147,827
https://en.wikipedia.org/wiki/Electrostatic%20solitary%20wave
In space physics, an electrostatic solitary wave (ESW) is a type of electromagnetic soliton occurring during short time scales (when compared to the general time scales of variations in the average electric field) in plasma. When a rapid change occurs in the electric field in a direction parallel to the orientation of the magnetic field, and this perturbation is caused by a unipolar or dipolar electric potential, it is classified as an ESW. Since the creation of ESWs is largely associated with turbulent fluid interactions, some experiments use them to compare how chaotic a measured plasma's mixing is. As such, many studies which involve ESWs are centered around turbulence, chaos, instabilities, and magnetic reconnection.
History
The discovery of solitary waves in general is attributed to John Scott Russell in 1834, with their first mathematical conceptualization being finalized in 1871 by Joseph Boussinesq (and later refined and popularized by Lord Rayleigh in 1876). However, these observations and solutions were for oscillations of a physical medium (usually water), and did not describe the behavior of non-particle waves (including electromagnetic waves). For solitary waves outside of media, which ESWs are classified as, the first major framework was likely developed by Louis de Broglie in 1927, though his work on the subject was temporarily abandoned and was not completed until the 1950s. Electrostatic structures were first observed near Earth's polar cusp by Donald Gurnett and Louis A. Frank using data from the Hawkeye 1 satellite in 1978. However, it is Michael Temerin, William Lotko, Forrest Mozer, and Keith Cerny who are credited with the first observation of electrostatic solitary waves in Earth's magnetosphere in 1982. Since then, a wide variety of magnetospheric satellites have observed and documented ESWs, allowing for analysis of them and the surrounding plasma conditions.
Detection
Electrostatic solitary waves, by their nature, are a phenomenon occurring in the electric field of a plasma. As such, ESWs are technically detectable by any instrument that can measure changes to the electric field during a sufficiently short time window. However, since a given plasma's electric field can vary widely depending on the properties of the plasma, and since ESWs occur in short time windows, detection of ESWs can require additional screening of the data in addition to the measurement of the electric field itself. One solution to this obstacle for detecting ESWs, implemented by NASA's Magnetospheric Multiscale Mission (MMS), is to use a digital signal processor to analyze the electric field data and isolate short-duration spikes as candidates for an ESW. Though the following detection algorithm is specific to MMS, other ESW-detecting algorithms function on similar principles. To detect an ESW, the data from a device measuring the electric field is sent to the digital signal processor. This data is analyzed across a short time window (in the case of MMS, 1 millisecond), taking both the average electric field magnitude and the largest electric field magnitude during that time window. If the peak field strength exceeds some multiple of the average field strength (4 times the average in MMS), then the time window is considered to contain an ESW. After this occurs, the ESW can be associated with the peak electric field strength and categorized accordingly.
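A minimal sketch of this windowed peak-to-average screening, assuming a uniformly sampled electric-field time series; the window length and multiplier play the roles described above, but the specific values and names here are illustrative, not from the MMS flight software.

import numpy as np

def detect_esw_windows(e_field, samples_per_window=128, multiplier=4.0):
    """Return indices of windows whose peak |E| exceeds multiplier x mean |E|."""
    hits = []
    n_windows = len(e_field) // samples_per_window
    for w in range(n_windows):
        seg = np.abs(e_field[w * samples_per_window:(w + 1) * samples_per_window])
        if seg.max() > multiplier * seg.mean():
            hits.append(w)
    return hits

# Synthetic field with one injected spike standing in for a candidate ESW.
e = np.random.default_rng(0).standard_normal(4096)
e[1000] = 40.0
print(detect_esw_windows(e))  # windows flagged as containing a candidate ESW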
These algorithms vary in success at detection, since both the time window and the detection multiplier are chosen by scientists based on the parameters they wish to detect. As such, these algorithms often have false positives and false negatives.
Interactions
One of the primary physical consequences of ESWs is their creation of electron phase-space holes, a type of structure which prevents low velocity electrons from remaining close to the source of the ESW. These phase-space holes, like the ESWs themselves, can travel stably through the surrounding plasma. Since most plasmas are overall electrically neutral, these phase-space holes often end up behaving as a positive pseudoparticle. In general, in order to form an electron phase-space hole, the electric potential energy associated with the ESW's potential needs to exceed the kinetic energy of electrons in the plasma (behavior analogous to potential hills). Research has shown that one possible set of situations where this occurs naturally is kinetic instabilities. One observed example of this is the increased occurrence of these holes near Earth's bow shock and magnetopause, where the incoming solar wind collides with Earth's magnetosphere to produce large amounts of turbulence in the plasma.
Forms
The definition of an ESW is broad enough that, on occasion, research distinguishes between different types:
Ion-acoustic solitary waves: a type of ESW that occurs when the electric potential that causes the ESW produces an ion acoustic wave.
Electron-acoustic solitary waves: a type of ESW that produces an acoustic wave associated with electrons. These tend to be substantially faster and higher frequency than ion-acoustic solitary waves.
Supersolitary waves: a type of ESW whose electric potential includes pulses on even smaller time scales than the ESW itself.
See also
Soliton Interplanetary magnetic field Solar wind Electric potential Turbulence Time domain electromagnetics
Notes
a. An ESW itself is strictly an electromagnetic phenomenon, and as such is technically non-dependent on media. However, this technicality should be observed with caution. Nearly all conditions that give rise to an ESW are theorized to be dependent on the plasma medium they reside in.
b. Though the identity of the other 3 co-authors is known for certain, the career of K. Cerny after the publishing of their paper is poorly documented. The first name, date, school, and major associated with graduation heavily suggest that Keith Cerny is the K. Cerny credited on the paper, but this is (as of yet) unconfirmed.
References
Physics Wave mechanics Space physics Solitons 1982 in science Waves in plasmas Quasiparticles
Electrostatic solitary wave
[ "Physics", "Materials_science", "Astronomy" ]
1,224
[ "Waves in plasmas", "Physical phenomena", "Matter", "Outer space", "Plasma phenomena", "Classical mechanics", "Waves", "Wave mechanics", "Condensed matter physics", "Quasiparticles", "Subatomic particles", "Space physics" ]
78,149,844
https://en.wikipedia.org/wiki/EfficientNet
EfficientNet is a family of convolutional neural networks (CNNs) for computer vision published by researchers at Google AI in 2019. Its key innovation is compound scaling, which uniformly scales all dimensions of depth, width, and resolution using a single parameter. EfficientNet models have been adopted in various computer vision tasks, including image classification, object detection, and segmentation.
Compound scaling
EfficientNet introduces compound scaling, which, instead of scaling one dimension of the network at a time, such as depth (number of layers), width (number of channels), or resolution (input image size), uses a compound coefficient $\phi$ to scale all three dimensions simultaneously. Specifically, given a baseline network, the depth, width, and resolution are scaled according to the following equations:

$\text{depth: } d = \alpha^\phi, \qquad \text{width: } w = \beta^\phi, \qquad \text{resolution: } r = \gamma^\phi,$

subject to $\alpha \beta^2 \gamma^2 \approx 2$ and $\alpha, \beta, \gamma \geq 1$. The condition $\alpha \beta^2 \gamma^2 \approx 2$ is such that increasing $\phi$ by 1 would increase the total FLOPs of running the network on an image approximately 2 times, so the cost grows roughly as $2^\phi$. The hyperparameters $\alpha$, $\beta$, and $\gamma$ are determined by a small grid search. The original paper suggested 1.2, 1.1, and 1.15, respectively. Architecturally, they optimized the choice of modules by neural architecture search (NAS), and found that the inverted bottleneck convolution (which they called MBConv) used in MobileNet worked well. The EfficientNet family is a stack of MBConv layers, with shapes determined by the compound scaling. The original publication consisted of 8 models, from EfficientNet-B0 to EfficientNet-B7, with increasing model size and accuracy. EfficientNet-B0 is the baseline network, and subsequent models are obtained by scaling the baseline network by increasing $\phi$.
Variants
EfficientNet has been adapted for fast inference on edge TPUs and centralized TPU or GPU clusters by NAS. EfficientNet V2 was published in June 2021. The architecture was improved by further NAS search with more types of convolutional layers. It also introduced a training method, which progressively increases image size during training, and uses regularization techniques like dropout, RandAugment, and Mixup. The authors claim this approach mitigates accuracy drops often associated with progressive resizing.
See also
Convolutional neural network SqueezeNet MobileNet You Only Look Once
References
External links
EfficientNet: Improving Accuracy and Efficiency through AutoML and Model Scaling (Google AI Blog)
Machine learning Computer vision Artificial neural networks Google software
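A small numerical sketch of the compound scaling rule, using the coefficients suggested in the original paper; the function name and printout are purely illustrative.

# Compound scaling: multipliers for depth, width and input resolution as a
# function of the compound coefficient phi, with the paper's grid-search values.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi):
    depth_mult = ALPHA ** phi        # more layers
    width_mult = BETA ** phi         # more channels
    resolution_mult = GAMMA ** phi   # larger input images
    return depth_mult, width_mult, resolution_mult

for phi in range(4):
    d, w, r = compound_scale(phi)
    # FLOPs grow roughly as (ALPHA * BETA**2 * GAMMA**2) ** phi, i.e. about 2 ** phi.
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")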
EfficientNet
[ "Engineering" ]
489
[ "Artificial intelligence engineering", "Packaging machinery", "Machine learning", "Computer vision" ]
78,151,021
https://en.wikipedia.org/wiki/Conditioned%20avoidance%20response%20test
The conditioned avoidance response (CAR) test, also known as the active avoidance test, is an animal test used to identify drugs with antipsychotic-like effects. It is most commonly employed as a two-way active avoidance test with rodents. The test assesses the conditioned ability of an animal to avoid an unpleasant stimulus. Drugs that selectively suppress conditioned avoidance responses without affecting escape behavior are considered to have antipsychotic-like activity. Variations of the test, like testing for enhancement of avoidance and escape responses, have also been used to assess other drug effects, like pro-motivational and antidepressant-like effects. Dopamine D2 receptor antagonists, like most classical antipsychotics, are active in the CAR test once occupancy of the dopamine D2 receptor reaches around 70%. Dopamine D2 receptor partial agonists like aripiprazole are likewise active in the test. Serotonin 5-HT2A receptor antagonists can enhance suppression of conditioned avoidance responses in the test. Various other types of drugs have also been found to be active in the CAR test. The effects of drugs that are active in the test are thought to be mediated by inhibition of signaling in the nucleus accumbens or ventral striatum of the mesolimbic pathway. This is a major brain area involved in behavioral activation and motivation. The CAR test was developed in the 1950s soon after the discovery of antipsychotics. It is one of the oldest animal tests of antipsychotic-like activity. Other animal tests that are used to evaluate antipsychotic-like activity include inhibition of drug-induced hyperactivity or stereotypy, reversal of drug-induced prepulse inhibition deficits, and restoration of latent inhibition. Description There are several variations of the CAR test. The most common form of the test is the two-way active avoidance test (also known as the two-way discriminated shuttle box procedure). Other variations of the test include the one-way active avoidance test (also known as the one-way discriminated pole jump procedure or the pole-jumping test) and the non-discriminated operant continuous avoidance procedure (also known as the continuous avoidance test, the Sidman avoidance test, or simply the Sidman procedure). In the two-way active avoidance test, an animal is placed in a two-compartment shuttle box with an open doorway. Then, the animal is trained to avoid an aversive stimulus (unconditioned stimulus), usually an electric footshock, on presentation of a neutral stimulus (conditioned stimulus), usually an auditory or visual stimulus like a tone or light, that shortly precedes it. The animal does this by performing a specific behavioral response, like moving to the other compartment of the box, and this response is referred to as "avoidance" or "conditioned avoidance". If the animal is late in performing the avoidance, the aversive stimulus is presented until the animal responds by moving to the compartment. This is referred to as "escape". If the animal does not escape within a certain amount of time, it is designated "escape failure". As such, there are three variables that can be measured in the CAR test: avoidance, escape, and escape failure. Drugs that are considered to show antipsychotic-like effects selectively suppress the avoidance response without affecting escape behavior. Conversely, drugs that are not considered to have antipsychotic-like effects either have no effect in the CAR test or suppress both avoidance behavior and escape behavior at the same doses. 
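The three outcome variables described above can be made concrete with a schematic scoring of a single two-way active avoidance trial; the window durations below are invented for the illustration and are not from any specific protocol.

# Schematic sketch of scoring one two-way active avoidance trial: a tone
# (conditioned stimulus) precedes a footshock, and the trial is classified
# from the latency of the crossing response. Durations are illustrative.
def score_trial(crossing_latency_s, cs_window_s=10.0, max_escape_s=20.0):
    """Classify a trial from the time (s) after tone onset at which the
    animal crossed to the other compartment (None = never crossed)."""
    if crossing_latency_s is not None and crossing_latency_s <= cs_window_s:
        return "avoidance"      # crossed during the tone, before the shock
    if crossing_latency_s is not None and crossing_latency_s <= max_escape_s:
        return "escape"         # crossed only after the shock began
    return "escape failure"    # never crossed within the allowed time

print(score_trial(4.2))   # avoidance
print(score_trial(14.0))  # escape
print(score_trial(None))  # escape failure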
Examples of drugs that inhibit both avoidance and escape responses include sedatives like benzodiazepines, barbiturates, and meprobamate and antidepressants like many tricyclic antidepressants (TCAs). The CAR test is considered to have high predictive validity in the identification of potential antipsychotics and is frequently used in drug development. However, its face validity and construct validity have been described as low or absent. Moreover, a described major limitation of the model is that drugs active in the test work by impairing a normal self-preservation function; that is, avoiding an unpleasant or painful stimulus. Another limitation of the CAR test is that selective suppression of avoidance responses by drugs is procedure-specific. In procedures besides the one-way discriminated pole jump procedure and the two-way active avoidance test, such as the Sidman procedure, antipsychotics block avoidance behavior and escapes at almost the same doses. Conversely, benzodiazepines selectively suppress avoidance behavior without affecting escape behavior in the Sidman procedure. This is opposite to what is generally described as reflecting antipsychotic-like activity. Hence, selective suppression of avoidance responses is not a specific predictor of antipsychotic efficacy, or at best, selective suppression of avoidance responses as a predictor of antipsychotic activity is dependent on the specific CAR procedure employed. Drugs affecting the test Active drugs The test can detect antipsychotic-like activity both in the case of dopamine D2 receptor antagonists and in the case of drugs lacking D2 receptor antagonism. The occupancy of the D2 receptor by antagonists of this receptor required to inhibit the CAR is around 65 to 80%, which is similar to the occupancy at which therapeutic antipsychotic effects occur in humans with these drugs. Both typical antipsychotics and atypical antipsychotics are active in the CAR test. Similarly to dopamine D2 receptor antagonists, dopamine depleting agents like reserpine and tetrabenazine suppress conditioned avoidance responses and hence are active in the CAR test. Selective serotonin 5-HT2A receptor antagonists like volinanserin (MDL-100907) and ritanserin can enhance the suppression of conditioned avoidance responses by dopamine D2 receptor antagonists. Serotonin 5-HT1A receptor agonism, for instance with buspirone, 8-OH-DPAT, or antipsychotics with concomitant 5-HT1A receptor agonism, may also enhance suppression of conditioned avoidance responses. Dopamine D2 receptor partial agonists like aripiprazole, brexpiprazole, and bifeprunox suppress conditioned avoidance responses in the CAR test similarly to dopamine D2 receptor antagonists. 
Other drugs that may produce or enhance suppression of conditioned avoidance responses include serotonin 5-HT2C receptor agonists like CP-809101, WAY-163909, and meta-chlorophenylpiperazine (mCPP); α1-adrenergic receptor antagonists like prazosin; α2-adrenergic receptor antagonists like idazoxan; norepinephrine reuptake inhibitors like reboxetine; acetylcholinesterase inhibitors (and hence indirect cholinergics) like galantamine; the muscarinic acetylcholine receptor agonist xanomeline (used clinically as xanomeline/trospium); κ-opioid receptor agonists like spiradoline; AMPA receptor antagonists like GYKI-52466 and tezampanel (LY-326325); metabotropic glutamate mGlu2 and mGlu3 receptor agonists like pomaglumetad (LY-404039); and phosphodiesterase inhibitors like the PDE4 inhibitor rolipram and the PDE10A inhibitors papaverine, mardepodect (PF-2545920), and balipodect (TAK-063). Dopamine D1 receptor antagonists have either shown no effect in the CAR test, for instance ecopipam (SCH-39166), or have inhibited both avoidance and escape responses at the same doses, such as SCH-23390. However, different findings have also been reported, for instance ecopipam being effective in the CAR test. In contrast to dopamine D2 receptor antagonists, clinical trials of dopamine D1 receptor antagonists, including ecopipam and NNC 01-0687, have found that they were ineffective in the treatment of psychosis.
Inactive drugs
Various antidepressants, like tricyclic antidepressants (TCAs) as well as the selective serotonin reuptake inhibitor (SSRI) fluoxetine, reduce both avoidance and escape responses in the CAR test and hence are not considered to be active, since they are not selective for avoidance responses.
Reversal agents
Dopaminergic agents, like the dopamine precursor levodopa (L-DOPA), the dopamine releasing agents amphetamine and methamphetamine, the dopamine reuptake inhibitors methylphenidate, bupropion, and nomifensine, the non-selective dopamine receptor agonist apomorphine, and the indirect dopaminergic agent amantadine, can all markedly reverse the effects of drugs like reserpine that are active in the CAR test and restore conditioned avoidance responses. Selective dopamine D1 receptor agonists, like SKF-38,393, and selective dopamine D2 receptor agonists, like quinpirole, are only weakly effective in reversing the effects of reserpine in suppressing conditioned avoidance responses when given individually. However, they are synergistic and robustly effective when administered in combination. Similarly, anticholinergics like atropine and scopolamine increase rates of conditioned avoidance responses. In contrast to dopaminergic agents, non-dopaminergic antidepressants, like many tricyclic antidepressants (TCAs), are generally ineffective in antagonizing agents that are active in the test.
Mechanism
The effects of drugs that are active in the CAR test, suppression of conditioned avoidance responses without affecting escape behavior, are thought to be mediated specifically by modulation of signaling in the nucleus accumbens shell or ventral striatum, part of the mesolimbic pathway. This area of the brain plays a major role in behavioral activation and in appetitive and aversive motivational processes. Drugs active in the CAR test may work by dampening behavioral responses to motivationally salient stimuli.
Some academics, such as Joanna Moncrieff and David Healy, maintain that antipsychotics do not actually directly treat psychotic symptoms or delusions, but rather simply induce a state of psychic indifference or blunted emotions and resultant behavioral suppression (e.g., of agitation), thereby helping to reduce the functional consequences of psychotic symptoms. This interpretation is notably consistent with the behavioral effects of antipsychotics in the CAR test, in which treated animals lose their interest or motivation in preemptively avoiding an unpleasant stimulus. History The CAR test was developed in the 1950s soon after the discovery of antipsychotics. It is one of the oldest and most classical tests of antipsychotic-like activity. The test was originally performed as the one-way active avoidance or pole-jumping test, but subsequently the two-way active avoidance test was introduced and became more commonly used. By 1998, the popularity of the CAR test had declined somewhat, but it continues to be frequently employed. Test of other drug effects The CAR test can additionally be used to assess behavioral activity or drive and associated learning. The dopamine depleting agent tetrabenazine can strongly and almost completely inhibit acquisition of conditioned avoidance responses in the shuttle box and also results in a very high rate of escape failures. Dopaminergic agents, like the catecholaminergic activity enhancers selegiline, phenylpropylaminopentane (PPAP), and benzofuranylpropylaminopentane (BPAP), can reverse the effects of tetrabenazine and enhance learning in this test. In addition, the CAR test, by testing the capacity of drugs to enhance escape responses and thereby reverse learned helplessness, has been used as a test of antidepressant-like activity. κ-Opioid receptor antagonists like norbinaltorphimine have been found to be active in this test. Acquisition of conditioned avoidance responses has been used as a test of anxiolytic and anxiogenic drug effects. Since there is a learning (acquisition) phase, there have also been attempts to use the CAR test to assess activity of drugs in enhancing learning and memory. However, there have been no consistent data for this use. In addition, the CAR test may be inducing more of a behavioral reflex rather than involving higher-order memory associated with areas like the prefrontal cortex. Other tests of antipsychotic-like activity Other animal tests used to evaluate antipsychotic-like activity of drugs include inhibition of drug-induced stereotypy, inhibition of drug-induced hyperlocomotion or climbing behavior, and reversal of drug-induced prepulse inhibition or startle response deficits. Drugs that induce such effects include dopaminergic agents like amphetamine and apomorphine and NMDA receptor antagonists like dizocilpine (MK-801). Another test of antipsychotic-like activity is restoration of latent inhibition. See also Animal model of schizophrenia Dopamine hypothesis of schizophrenia References Animal testing techniques Neuroscience of schizophrenia Psychology experiments Schizophrenia research
Conditioned avoidance response test
[ "Chemistry" ]
2,838
[ "Animal testing", "Animal testing techniques" ]
78,153,179
https://en.wikipedia.org/wiki/Examples%20of%20anonymous%20functions
Examples of anonymous functions
Numerous languages support anonymous functions, or something similar.
APL
Only some dialects support anonymous functions, either as dfns, in the tacit style, or a combination of both.

As a dfn:
      f←{⍵×⍵}
      f 1 2 3
1 4 9

As a tacit 3-train (fork):
      g←⊢×⊢
      g 1 2 3
1 4 9

As a derived tacit function:
      h←×⍨
      h 1 2 3
1 4 9

C (non-standard extension)
The anonymous function is not supported by the standard C programming language, but is supported by some C dialects, such as GCC and Clang.
GCC
The GNU Compiler Collection (GCC) supports anonymous functions, mixed by nested functions and statement expressions. It has the form:

( { return_type anonymous_functions_name (parameters) { function_body } anonymous_functions_name; } )

The following example works only with GCC. Because of how macros are expanded, the l_body cannot contain any commas outside of parentheses; GCC treats the comma as a delimiter between macro arguments. The argument l_ret_type can be removed if __typeof__ is available; in the example below using __typeof__ on array would return testtype *, which can be dereferenced for the actual value if needed.

#include <stdio.h>

/* this is the definition of the anonymous function */
#define lambda(l_ret_type, l_arguments, l_body)       \
({                                                    \
  l_ret_type l_anonymous_functions_name l_arguments   \
  l_body                                              \
  &l_anonymous_functions_name;                        \
})

#define forEachInArray(fe_arrType, fe_arr, fe_fn_body)                                   \
{                                                                                        \
  int i=0;                                                                               \
  for(;i<sizeof(fe_arr)/sizeof(fe_arrType);i++) { fe_arr[i] = fe_fn_body(&fe_arr[i]); }  \
}

typedef struct
{
  int a;
  int b;
} testtype;

void printout(const testtype * array)
{
  int i;
  for ( i = 0; i < 3; ++ i )
    printf("%d %d\n", array[i].a, array[i].b);
  printf("\n");
}

int main(void)
{
  testtype array[] = { {0,1}, {2,3}, {4,5} };

  printout(array);
  /* the anonymous function is given as function for the foreach */
  forEachInArray(testtype, array,
    lambda (testtype, (void *item),
    {
      int temp = (*( testtype *) item).a;
      (*( testtype *) item).a = (*( testtype *) item).b;
      (*( testtype *) item).b = temp;
      return (*( testtype *) item);
    }));
  printout(array);
  return 0;
}

Clang (C, C++, Objective-C, Objective-C++)
Clang supports anonymous functions, called blocks, which have the form:

^return_type ( parameters ) { function_body }

The type of the blocks above is return_type (^)(parameters). Using the aforementioned blocks extension and Grand Central Dispatch (libdispatch), the code could look simpler:

#include <stdio.h>
#include <dispatch/dispatch.h>

int main(void) {
  void (^count_loop)() = ^{
    for (int i = 0; i < 100; i++)
      printf("%d\n", i);
    printf("ah ah ah\n");
  };

  /* Pass as a parameter to another function */
  dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), count_loop);

  /* Invoke directly */
  count_loop();

  return 0;
}

The code with blocks should be compiled with -fblocks and linked with -lBlocksRuntime.
C++ (since C++11)
C++11 supports anonymous functions (technically function objects), called lambda expressions, which have the form:

[ captures ] ( params ) specs requires (optional) { body }

where "specs" is of the form "specifiers exception attr trailing-return-type", in that order; each of these components is optional. If it is absent, the return type is deduced from return statements as if for a function with declared return type auto. This is an example lambda expression:

[](int x, int y) { return x + y; }

C++11 also supports closures, here called captures. Captures are defined between square brackets [ and ] in the declaration of the lambda expression.
The mechanism allows these variables to be captured by value or by reference. The following table demonstrates this:

[]        // No captures, the lambda is implicitly convertible to a function pointer.
[x, &y]   // x is captured by value and y is captured by reference.
[&]       // Any external variable is implicitly captured by reference if used.
[=]       // Any external variable is implicitly captured by value if used.
[&, x]    // x is captured by value. Other variables will be captured by reference.
[=, &z]   // z is captured by reference. Other variables will be captured by value.

Variables captured by value are constant by default. Adding mutable after the parameter list makes them non-constant. C++14 and newer versions support init-capture, for example:

std::unique_ptr<int> ptr = std::make_unique<int>(42);
[ptr]{ /* ... */ };                  // copy assignment is deleted for a unique pointer
[ptr = std::move(ptr)]{ /* ... */ }; // ok

auto counter = [i = 0]() mutable { return i++; }; // mutable is required to modify 'i'
counter(); // 0
counter(); // 1
counter(); // 2

The following two examples demonstrate use of a lambda expression:

std::vector<int> some_list{ 1, 2, 3, 4, 5 };
int total = 0;
std::for_each(begin(some_list), end(some_list), [&total](int x) {
  total += x;
});
// Note that std::accumulate would be a way better alternative here...

This computes the total of all elements in the list. The variable total is stored as a part of the lambda function's closure. Since it is a reference to the stack variable total, it can change its value.

std::vector<int> some_list{ 1, 2, 3, 4, 5 };
int total = 0;
int value = 5;
std::for_each(begin(some_list), end(some_list), [&total, value, this](int x) {
  total += x * value * this->some_func();
});

This will cause total to be stored as a reference, but value will be stored as a copy. The capture of this is special. It can only be captured by value, not by reference. However in C++17, the current object can be captured by value (denoted by *this), or can be captured by reference (denoted by this). this can only be captured if the closest enclosing function is a non-static member function. The lambda will have the same access as the member that created it, in terms of protected/private members. If this is captured, either explicitly or implicitly, then the scope of the enclosed class members is also tested. Accessing members of this does not need explicit use of this-> syntax. The specific internal implementation can vary, but the expectation is that a lambda function that captures everything by reference will store the actual stack pointer of the function it is created in, rather than individual references to stack variables. However, because most lambda functions are small and local in scope, they are likely candidates for inlining, and thus need no added storage for references. If a closure object containing references to local variables is invoked after the innermost block scope of its creation, the behaviour is undefined. Lambda functions are function objects of an implementation-dependent type; this type's name is only available to the compiler. If the user wishes to take a lambda function as a parameter, the parameter type must be a template type, or they must create a std::function or a similar object to capture the lambda value.
The use of the auto keyword can help store the lambda function:

auto my_lambda_func = [&](int x) { /*...*/ };
auto my_onheap_lambda_func = new auto([=](int x) { /*...*/ });

Here is an example of storing anonymous functions in variables, vectors, and arrays; and passing them as named parameters:

#include <functional>
#include <iostream>
#include <vector>

double eval(std::function<double(double)> f, double x = 2.0) {
  return f(x);
}

int main() {
  std::function<double(double)> f0 = [](double x) { return 1; };
  auto f1 = [](double x) { return x; };
  decltype(f0) fa[3] = {f0, f1, [](double x) { return x * x; }};
  std::vector<decltype(f0)> fv = {f0, f1};
  fv.push_back([](double x) { return x * x; });

  for (size_t i = 0; i < fv.size(); i++) {
    std::cout << fv[i](2.0) << std::endl;
  }
  for (size_t i = 0; i < 3; i++) {
    std::cout << fa[i](2.0) << std::endl;
  }
  for (auto& f : fv) {
    std::cout << f(2.0) << std::endl;
  }
  for (auto& f : fa) {
    std::cout << f(2.0) << std::endl;
  }
  std::cout << eval(f0) << std::endl;
  std::cout << eval(f1) << std::endl;
  std::cout << eval([](double x) { return x * x; }) << std::endl;
}

A lambda expression with an empty capture specification ([]) can be implicitly converted into a function pointer with the same type as the lambda was declared with. So this is legal:

auto a_lambda_func = [](int x) -> void { /*...*/ };
void (* func_ptr)(int) = a_lambda_func;
func_ptr(4); //calls the lambda.

Since C++17, a lambda can be declared constexpr, and since C++20, consteval with the usual semantics. These specifiers go after the parameter list, like mutable. Starting from C++23, the lambda can also be static if it has no captures. The static and mutable specifiers are not allowed to be combined. Also since C++23 a lambda expression can be recursive through an explicit this as the first parameter:

auto fibonacci = [](this auto self, int n) {
  return n <= 1 ? n : self(n - 1) + self(n - 2);
};
fibonacci(7); // 13

In addition to that, C++23 modified the syntax so that the parentheses can be omitted in the case of a lambda that takes no arguments, even if the lambda has a specifier. It also made it so that an attribute specifier sequence that appears before the parameter list, lambda specifiers, or noexcept specifier (there must be one of them) applies to the function call operator or operator template of the closure type. Otherwise, it applies to the type of the function call operator or operator template. Previously, such a sequence always applied to the type of the function call operator or operator template of the closure type, making e.g. the [[noreturn]] attribute impossible to use with lambdas. The Boost library provides its own syntax for lambda functions as well, using the following syntax:

for_each(a.begin(), a.end(), std::cout << _1 << ' ');

Since C++14, the function parameters of a lambda can be declared with auto. The resulting lambda is called a generic lambda and is essentially an anonymous function template, since the rules for type deduction of the auto parameters are the rules of template argument deduction. As of C++20, template parameters can also be declared explicitly with the following syntax:

[ captures ] < tparams > requires (optional) ( params ) specs requires (optional) { body }

C#
In C#, support for anonymous functions has deepened through the various versions of the language compiler. The language v3.0, released in November 2007 with .NET Framework v3.5, has full support of anonymous functions.
C# names them lambda expressions, following the original version of anonymous functions, the lambda calculus. // the first int is the type of x // the second int is the return type Func<int,int> foo = x => x * x; Console.WriteLine(foo(7)); While the function is anonymous, it cannot be assigned to an implicitly typed variable, because the lambda syntax may be used for denoting an anonymous function or an expression tree, and the choice cannot automatically be decided by the compiler. E.g., this does not work: // will NOT compile! var foo = (int x) => x * x; However, a lambda expression can take part in type inference and can be used as a method argument, e.g. to use anonymous functions with the Map capability available with System.Collections.Generic.List (in the ConvertAll() method): // Initialize the list: var values = new List<int>() { 7, 13, 4, 9, 3 }; // Map the anonymous function over all elements in the list, return the new list var foo = values.ConvertAll(d => d * d); // the result stored in foo is of type System.Collections.Generic.List<Int32> Prior versions of C# had more limited support for anonymous functions. C# v1.0, introduced in February 2002 with the .NET Framework v1.0, provided partial anonymous function support through the use of delegates. This construct is somewhat similar to PHP delegates. In C# 1.0, delegates are like function pointers that refer to an explicitly named method within a class. (But unlike PHP, the name is unneeded at the time the delegate is used.) C# v2.0, released in November 2005 with the .NET Framework v2.0, introduced the concept of anonymous methods as a way to write unnamed inline statement blocks that can be executed in a delegate invocation. C# 3.0 continues to support these constructs, but also supports the lambda expression construct. This example will compile in C# 3.0, and exhibits the three forms: public class TestDriver { delegate int SquareDelegate(int d); static int Square(int d) { return d * d; } static void Main(string[] args) { // C# 1.0: Original delegate syntax needed // initializing with a named method. SquareDelegate A = new SquareDelegate(Square); System.Console.WriteLine(A(3)); // C# 2.0: A delegate can be initialized with // inline code, called an "anonymous method". This // method takes an int as an input parameter. SquareDelegate B = delegate(int d) { return d * d; }; System.Console.WriteLine(B(5)); // C# 3.0. A delegate can be initialized with // a lambda expression. The lambda takes an int, and returns an int. // The type of x is inferred by the compiler. SquareDelegate C = x => x * x; System.Console.WriteLine(C(7)); // C# 3.0. A delegate that accepts one input and // returns one output can also be implicitly declared with the Func<> type. System.Func<int,int> D = x => x * x; System.Console.WriteLine(D(9)); } } In the case of the C# 2.0 version, the C# compiler takes the code block of the anonymous function and creates a static private function. Internally, the function gets a generated name, of course; this generated name is based on the name of the method in which the Delegate is declared. But the name is not exposed to application code except by using reflection. In the case of the C# 3.0 version, the same mechanism applies.
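A short sketch of why the compiler cannot pick a target type on its own: the same lambda text can compile either to executable code (a delegate) or to a data structure describing that code (an expression tree), depending on the declared type. The class name below is illustrative:

using System;
using System.Linq.Expressions;

class AmbiguityDemo
{
    static void Main()
    {
        // Compiles to executable code.
        Func<int, int> asDelegate = x => x * x;

        // Compiles to an expression tree describing the computation.
        Expression<Func<int, int>> asTree = x => x * x;

        Console.WriteLine(asDelegate(5));       // 25
        Console.WriteLine(asTree);              // x => (x * x)
        Console.WriteLine(asTree.Compile()(5)); // 25
    }
}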
ColdFusion Markup Language (CFML) Using the function keyword: fn = function(){ // statements }; Or using an arrow function: fn = () => { // statements }; fn = () => singleExpression // singleExpression is implicitly returned. There is no need for the braces or the return keyword fn = singleParam => { // if the arrow function has only one parameter, there's no need for parentheses // statements } fn = (x, y) => { // if the arrow function has zero or multiple parameters, one needs to use parentheses // statements } CFML supports any statements within the function's definition, not simply expressions. CFML supports recursive anonymous functions: factorial = function(n){ return n > 1 ? n * factorial(n-1) : 1; }; CFML anonymous functions implement closure. D D uses inline delegates to implement anonymous functions. The full syntax for an inline delegate is return_type delegate(arguments){/*body*/} If unambiguous, the return type and the keyword delegate can be omitted. (x){return x*x;} delegate (x){return x*x;} // if more verbosity is needed (int x){return x*x;} // if parameter type cannot be inferred delegate (int x){return x*x;} // ditto delegate double(int x){return x*x;} // if return type must be forced manually Since version 2.0, D allocates closures on the heap unless the compiler can prove it is unnecessary; the scope keyword can be used for forcing stack allocation. Since version 2.058, it is possible to use shorthand notation: x => x*x; (int x) => x*x; (x,y) => x*y; (int x, int y) => x*y; An anonymous function can be assigned to a variable and used like this: auto sqr = (double x){return x*x;}; double y = sqr(4); Dart Dart supports anonymous functions. var sqr = (x) => x * x; print(sqr(5)); or print(((x) => x * x)(5)); Delphi Delphi introduced anonymous functions in version 2009. program demo; type TSimpleProcedure = reference to procedure; TSimpleFunction = reference to function(const x: string): Integer; var x1: TSimpleProcedure; y1: TSimpleFunction; begin x1 := procedure begin Writeln('Hello World'); end; x1; //invoke anonymous method just defined y1 := function(const x: string): Integer begin Result := Length(x); end; Writeln(y1('bar')); end. PascalABC.NET PascalABC.NET supports anonymous functions using lambda syntax: begin var n := 10000000; var pp := (1..n) .Select(x -> (Random, Random)) .Where(p -> Sqr(p[0]) + Sqr(p[1]) < 1) .Count / n * 4; Print(pp); end. Elixir Elixir uses the closure fn for anonymous functions. sum = fn(a, b) -> a + b end sum.(4, 3) #=> 7 square = fn(x) -> x * x end Enum.map [1, 2, 3, 4], square #=> [1, 4, 9, 16] Erlang Erlang uses a syntax for anonymous functions similar to that of named functions. % Anonymous function bound to the Square variable Square = fun(X) -> X * X end. % Named function with the same functionality square(X) -> X * X. Go Go supports anonymous functions. foo := func(x int) int { return x * x } fmt.Println(foo(10)) Haskell Haskell uses a concise syntax for anonymous functions (lambda expressions). The backslash is supposed to resemble λ. \x -> x * x Lambda expressions are fully integrated with the type inference engine, and support all the syntax and features of "ordinary" functions (except for the use of multiple definitions for pattern-matching, since the argument list is only specified once). map (\x -> x * x) [1..5] -- returns [1, 4, 9, 16, 25] The following are all equivalent: f x y = x + y f x = \y -> x + y f = \x y -> x + y Haxe In Haxe, anonymous functions are called lambda, and use the syntax function(argument-list) expression;
var f = function(x) return x*x; f(8); // 64 (function(x,y) return x+y)(5,6); // 11 Java Java supports anonymous functions, named Lambda Expressions, starting with JDK 8. A lambda expression consists of a comma-separated list of the formal parameters enclosed in parentheses, an arrow token (->), and a body. Data types of the parameters can always be omitted, as can the parentheses if there is only one parameter. The body can consist of one statement or a statement block. // with no parameter () -> System.out.println("Hello, world.") // with one parameter (this example is an identity function). a -> a // with one expression (a, b) -> a + b // with explicit type information (long id, String name) -> "id: " + id + ", name:" + name // with a code block (a, b) -> { return a + b; } // with multiple statements in the lambda body. It needs a code block. // This example also includes two nested lambda expressions (the first one is also a closure). (id, defaultPrice) -> { Optional<Product> product = productList.stream().filter(p -> p.getId() == id).findFirst(); return product.map(p -> p.getPrice()).orElse(defaultPrice); } Lambda expressions are converted to "functional interfaces" (defined as interfaces that contain only one abstract method in addition to one or more default or static methods), as in the following example: public class Calculator { interface IntegerMath { int operation(int a, int b); default IntegerMath swap() { return (a, b) -> operation(b, a); } } private static int apply(int a, int b, IntegerMath op) { return op.operation(a, b); } public static void main(String... args) { IntegerMath addition = (a, b) -> a + b; IntegerMath subtraction = (a, b) -> a - b; System.out.println("40 + 2 = " + apply(40, 2, addition)); System.out.println("20 - 10 = " + apply(20, 10, subtraction)); System.out.println("10 - 20 = " + apply(20, 10, subtraction.swap())); } } In this example, a functional interface called IntegerMath is declared. Lambda expressions that implement IntegerMath are passed to the apply() method to be executed. Default methods like swap define methods on functions. Java 8 introduced another mechanism named method reference (the :: operator) to create a lambda on an existing method. A method reference does not indicate the number or types of arguments because those are extracted from the abstract method of the functional interface. IntBinaryOperator sum = Integer::sum; In the example above, the functional interface IntBinaryOperator declares an abstract method int applyAsInt(int, int), so the compiler looks for a method int sum(int, int) in the class java.lang.Integer. Differences compared to Anonymous Classes Anonymous classes of lambda-compatible interfaces are similar, but not exactly equivalent, to lambda expressions. To illustrate, in the following example, anonymousClass and lambdaExpression are both instances of IntegerMath that add their two parameters: IntegerMath anonymousClass = new IntegerMath() { @Override public int operation(int a, int b) { return a + b; } }; IntegerMath lambdaExpression = (a, b) -> a + b; The main difference here is that the lambda expression does not necessarily need to allocate a new instance of the functional interface, and can return the same instance every time this code is run. Additionally, in the OpenJDK implementation at least, lambdas are compiled to invokedynamic instructions, with the lambda body inserted as a static method into the surrounding class, rather than generating a new class file entirely.
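A small, hypothetical probe of that allocation difference (the observed behavior is implementation-dependent; a stateless lambda may, but need not, be a shared instance):

import java.util.function.IntUnaryOperator;

public class AllocationProbe {
    static IntUnaryOperator fromLambda() { return x -> x + 1; }
    static IntUnaryOperator fromAnonymousClass() {
        return new IntUnaryOperator() {
            @Override public int applyAsInt(int x) { return x + 1; }
        };
    }
    public static void main(String[] args) {
        // On OpenJDK, a non-capturing lambda is typically a cached singleton...
        System.out.println(fromLambda() == fromLambda());                 // often true
        // ...while an anonymous class allocates a fresh instance on each call.
        System.out.println(fromAnonymousClass() == fromAnonymousClass()); // false
    }
}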
Java limitations Java 8 lambdas have the following limitations: Lambdas can throw checked exceptions, but such lambdas will not work with the interfaces used by the Collection API. Variables that are in-scope where the lambda is declared may only be accessed inside the lambda if they are effectively final, i.e. if the variable is not mutated inside or outside of the lambda scope. JavaScript JavaScript/ECMAScript supports anonymous functions. alert((function(x){ return x * x; })(10)); ES6 supports "arrow function" syntax, where a => symbol separates the anonymous function's parameter list from the body: alert((x => x * x)(10)); This construct is often used in Bookmarklets. For example, to change the title of the current document (visible in its window's title bar) to its URL, the following bookmarklet may seem to work. document.title=location.href; However, as the assignment statement returns a value (the URL itself), many browsers actually create a new page to display this value. Instead, an anonymous function, that does not return a value, can be used: (function(){document.title=location.href;})(); The function statement in the first (outer) pair of parentheses declares an anonymous function, which is then executed when used with the last pair of parentheses. This is almost equivalent to the following, which populates the environment with f unlike an anonymous function. var f = function(){document.title=location.href;}; f(); Use void() to avoid new pages for arbitrary anonymous functions: void(function(){return document.title=location.href;}()); or just: void(document.title=location.href); JavaScript has syntactic subtleties for the semantics of defining, invoking and evaluating anonymous functions. These subliminal nuances are a direct consequence of the evaluation of parenthetical expressions. The following constructs which are called immediately-invoked function expression illustrate this: (function(){ ... }()) and (function(){ ... })() Representing "function(){ ... }" by f, the form of the constructs are a parenthetical within a parenthetical (f()) and a parenthetical applied to a parenthetical (f)(). Note the general syntactic ambiguity of a parenthetical expression, parenthesized arguments to a function and the parentheses around the formal parameters in a function definition. In particular, JavaScript defines a , (comma) operator in the context of a parenthetical expression. It is no mere coincidence that the syntactic forms coincide for an expression and a function's arguments (ignoring the function formal parameter syntax)! If f is not identified in the constructs above, they become (()) and ()(). The first provides no syntactic hint of any resident function but the second MUST evaluate the first parenthetical as a function to be legal JavaScript. (Aside: for instance, the ()'s could be ([],{},42,"abc",function(){}) as long as the expression evaluates to a function.) Also, a function is an Object instance (likewise objects are Function instances) and the object literal notation brackets, {} for braced code, are used when defining a function this way (as opposed to using new Function(...)). In a very broad non-rigorous sense (especially since global bindings are compromised), an arbitrary sequence of braced JavaScript statements, {stuff}, can be considered to be a fixed point of (function(){( function(){( ... {( function(){stuff}() )} ... )}() )}() ) More correctly but with caveats, ( function(){stuff}() ) ~= A_Fixed_Point_of( function(){ return function(){ return ... 
{ return function(){stuff}() } ... }() }() ) Note the implications of the anonymous function in the JavaScript fragments that follow: function(){ ... }() without surrounding ()'s is generally not legal (f=function(){ ... }) does not "forget" f globally unlike (function f(){ ... }) Performance metrics to analyze the space and time complexities of function calls, call stack, etc. in a JavaScript interpreter engine can be implemented easily with these last anonymous function constructs. From the implications of the results, it is possible to deduce some of an engine's recursive versus iterative implementation details, especially tail-recursion. Julia In Julia, anonymous functions are defined using the syntax (arguments)->(expression), julia> f = x -> x*x; f(8) 64 julia> ((x,y)->x+y)(5,6) 11 Kotlin Kotlin supports anonymous functions with the syntax {arguments -> expression}, val sum = { x: Int, y: Int -> x + y } sum(5,6) // returns 11 val even = { x: Int -> x % 2 == 0 } even(4) // returns true Lisp Lisp and Scheme support anonymous functions using the "lambda" construct, which is a reference to lambda calculus. Clojure supports anonymous functions with the "fn" special form and #() reader syntax. (lambda (arg) (* arg arg)) Common Lisp Common Lisp has the concept of lambda expressions. A lambda expression is written as a list with the symbol "lambda" as its first element. The list then contains the argument list, documentation or declarations and a function body. Lambda expressions can be used inside lambda forms and with the special operator "function". (function (lambda (arg) (do-something arg))) "function" can be abbreviated as #'. Also, macro lambda exists, which expands into a function form: ; using sharp quote #'(lambda (arg) (do-something arg)) ; using the lambda macro: (lambda (arg) (do-something arg)) One typical use of anonymous functions in Common Lisp is to pass them to higher-order functions like mapcar, which applies a function to each element of a list and returns a list of the results. (mapcar #'(lambda (x) (* x x)) '(1 2 3 4)) ; -> (1 4 9 16) The lambda form in Common Lisp allows a lambda expression to be written in a function call: ((lambda (x y) (+ (sqrt x) (sqrt y))) 10.0 12.0) Anonymous functions in Common Lisp can also later be given global names: (setf (symbol-function 'sqr) (lambda (x) (* x x))) ; which allows us to call it using the name SQR: (sqr 10.0) Scheme Scheme's named functions are simply syntactic sugar for anonymous functions bound to names: (define (somename arg) (do-something arg)) expands (and is equivalent) to (define somename (lambda (arg) (do-something arg))) Clojure Clojure supports anonymous functions through the "fn" special form: (fn [x] (+ x 3)) There is also a reader syntax to define a lambda: #(+ % %2 %3) ; Defines an anonymous function that takes three arguments and sums them. Like Scheme, Clojure's "named functions" are simply syntactic sugar for lambdas bound to names: (defn func [arg] (+ 3 arg)) expands to: (def func (fn [arg] (+ 3 arg))) Lua In Lua (much as in Scheme) all functions are anonymous. A named function in Lua is simply a variable holding a reference to a function object. Thus, in Lua function foo(x) return 2*x end is just syntactical sugar for foo = function(x) return 2*x end An example of using anonymous functions for reverse-order sorting: table.sort(network, function(a,b) return a.name > b.name end) Wolfram Language, Mathematica The Wolfram Language is the programming language of Mathematica.
Anonymous functions are important in programming the latter. There are several ways to create them. Below are a few anonymous functions that increment a number. The first is the most common. #1 refers to the first argument and & marks the end of the anonymous function. #1+1& Function[x,x+1] x \[Function] x+1 So, for instance: f:= #1^2&;f[8] 64 #1+#2&[5,6] 11 Also, Mathematica has an added construct to make recursive anonymous functions. The symbol '#0' refers to the entire function. The following function calculates the factorial of its input: If[#1 == 1, 1, #1 * #0[#1-1]]& For example, 6 factorial would be: If[#1 == 1, 1, #1 * #0[#1-1]]&[6] 720 MATLAB, Octave Anonymous functions in MATLAB or Octave are defined using the syntax @(argument-list)expression. Any variables that are not found in the argument list are inherited from the enclosing scope and are captured by value. >> f = @(x)x*x; f(8) ans = 64 >> (@(x,y)x+y)(5,6) % Only works in Octave ans = 11 Maxima In Maxima anonymous functions are defined using the syntax lambda(argument-list,expression), f: lambda([x],x*x); f(8); 64 lambda([x,y],x+y)(5,6); 11 ML The various dialects of ML support anonymous functions. OCaml Anonymous functions in OCaml are functions without a declared name. Here is an example of an anonymous function that multiplies its input by two: fun x -> x*2 In the example, fun is a keyword indicating that the function is an anonymous function. We are passing in an argument x and -> to separate the argument from the body. F# F# supports anonymous functions, as follows: (fun x -> x * x) 20 // 400 Standard ML Standard ML supports anonymous functions, as follows: fn arg => arg * arg Nim Nim supports multi-line multi-expression anonymous functions. var anon = proc (var1, var2: int): int = var1 + var2 assert anon(1, 2) == 3 Multi-line example: var anon = func (x: int): bool = if x > 0: result = true else: result = false assert anon(9) Anonymous functions may be passed as input parameters of other functions: var cities = @["Frankfurt", "Tokyo", "New York"] cities.sort( proc (x, y: string): int = cmp(x.len, y.len) ) An anonymous function is basically a function without a name. Perl Perl 5 Perl 5 supports anonymous functions, as follows: (sub { print "I got called\n" })->(); # 1. fully anonymous, called as created my $squarer = sub { my $x = shift; $x * $x }; # 2. assigned to a variable sub curry { my ($sub, @args) = @_; return sub { $sub->(@args, @_) }; # 3. as a return value of another function } # example of currying in Perl programming sub sum { my $tot = 0; $tot += $_ for @_; $tot } # returns the sum of its arguments my $curried = curry \&sum, 5, 7, 9; print $curried->(1,2,3), "\n"; # prints 27 ( = 5 + 7 + 9 + 1 + 2 + 3 ) Other constructs take bare blocks as arguments, which serve a function similar to lambda functions of one parameter, but do not have the same parameter-passing convention as functions -- @_ is not set. my @squares = map { $_ * $_ } 1..10; # map and grep don't use the 'sub' keyword my @square2 = map $_ * $_, 1..10; # braces unneeded for one expression my @bad_example = map { print for @_ } 1..10; # values not passed like normal Perl function PHP Before 4.0.1, PHP had no anonymous function support. PHP 4.0.1 to 5.3 PHP 4.0.1 introduced the create_function which was the initial anonymous function support. 
This function call makes a new randomly named function and returns its name (as a string) $foo = create_function('$x', 'return $x*$x;'); $bar = create_function("\$x", "return \$x*\$x;"); echo $foo(10); The argument list and function body must be in single quotes, or the dollar signs must be escaped. Otherwise, PHP assumes "$x" means the variable $x and will substitute it into the string (despite possibly not existing) instead of leaving "$x" in the string. For functions with quotes or functions with many variables, it can get quite tedious to ensure the intended function body is what PHP interprets. Each invocation of create_function makes a new function, which exists for the rest of the program, and cannot be garbage collected, using memory in the program irreversibly. If this is used to create anonymous functions many times, e.g., in a loop, it can cause problems such as memory bloat. PHP 5.3 PHP 5.3 added a new class called Closure and magic method __invoke() that makes a class instance invocable. $x = 3; $func = function($z) { return $z * 2; }; echo $func($x); // prints 6 In this example, $func is an instance of Closure and echo $func($x) is equivalent to echo $func->__invoke($x). PHP 5.3 mimics anonymous functions but it does not support true anonymous functions because PHP functions are still not first-class objects. PHP 5.3 does support closures but the variables must be explicitly indicated as such: $x = 3; $func = function() use(&$x) { $x *= 2; }; $func(); echo $x; // prints 6 The variable $x is bound by reference so the invocation of $func modifies it and the changes are visible outside of the function. PHP 7.4 Arrow functions were introduced in PHP 7.4 $x = 3; $func = fn($z) => $z * 2; echo $func($x); // prints 6 Prolog's dialects Logtalk Logtalk uses the following syntax for anonymous predicates (lambda expressions): {FreeVar1, FreeVar2, ...}/[LambdaParameter1, LambdaParameter2, ...]>>Goal A simple example with no free variables and using a list mapping predicate is: | ?- meta::map([X,Y]>>(Y is 2*X), [1,2,3], Ys). Ys = [2,4,6] yes Currying is also supported. The above example can be written as: | ?- meta::map([X]>>([Y]>>(Y is 2*X)), [1,2,3], Ys). Ys = [2,4,6] yes Visual Prolog Anonymous functions (in general anonymous predicates) were introduced in Visual Prolog in version 7.2. Anonymous predicates can capture values from the context. If created in an object member, it can also access the object state (by capturing This). mkAdder returns an anonymous function, which has captured the argument X in the closure. The returned function is a function that adds X to its argument: clauses mkAdder(X) = { (Y) = X+Y }. Python Python supports simple anonymous functions through the lambda form. The executable body of the lambda must be an expression and can't be a statement, which is a restriction that limits its utility. The value returned by the lambda is the value of the contained expression. Lambda forms can be used anywhere ordinary functions can. However these restrictions make it a very limited version of a normal function. Here is an example: >>> foo = lambda x: x * x >>> foo(10) 100 In general, the Python convention encourages the use of named functions defined in the same scope as one might typically use an anonymous function in other languages. This is acceptable as locally defined functions implement the full power of closures and are almost as efficient as the use of a lambda in Python. 
In this example, the built-in power function can be said to have been curried: >>> def make_pow(n): ... def fixed_exponent_pow(x): ... return pow(x, n) ... return fixed_exponent_pow ... >>> sqr = make_pow(2) >>> sqr(10) 100 >>> cub = make_pow(3) >>> cub(10) 1000 R In R the anonymous functions are defined using the syntax function(argument-list)expression , which has shorthand since version 4.1.0 \, akin to Haskell. > f <- function(x)x*x; f(8) [1] 64 > (function(x,y)x+y)(5,6) [1] 11 > # Since R 4.1.0 > (\(x,y) x+y)(5, 6) [1] 11 Raku In Raku, all blocks (even those associated with if, while, etc.) are anonymous functions. A block that is not used as an rvalue is executed immediately. fully anonymous, called as created { say "I got called" }; assigned to a variable my $squarer1 = -> $x { $x * $x }; # 2a. pointy block my $squarer2 = { $^x * $^x }; # 2b. twigil my $squarer3 = { my $x = shift @_; $x * $x }; # 2c. Perl 5 style currying sub add ($m, $n) { $m + $n } my $seven = add(3, 4); my $add_one = &add.assuming(m => 1); my $eight = $add_one($seven); WhateverCode object my $w = * - 1; # WhateverCode object my $b = { $_ - 1 }; # same functionality, but as Callable block Ruby Ruby supports anonymous functions by using a syntactical structure called block. There are two data types for blocks in Ruby. Procs behave similarly to closures, whereas lambdas behave more analogous to an anonymous function. When passed to a method, a block is converted into a Proc in some circumstances. # Example 1: # Purely anonymous functions using blocks. ex = [16.2, 24.1, 48.3, 32.4, 8.5] => [16.2, 24.1, 48.3, 32.4, 8.5] ex.sort_by { |x| x - x.to_i } # Sort by fractional part, ignoring integer part. => [24.1, 16.2, 48.3, 32.4, 8.5] # Example 2: # First-class functions as an explicit object of Proc - ex = Proc.new { puts "Hello, world!" } => #<Proc:0x007ff4598705a0@(irb):7> ex.call Hello, world! => nil # Example 3: # Function that returns lambda function object with parameters def multiple_of?(n) lambda{|x| x % n == 0} end => nil multiple_four = multiple_of?(4) => #<Proc:0x007ff458b45f88@(irb):12 (lambda)> multiple_four.call(16) => true multiple_four[15] => false Rust In Rust, anonymous functions are called closures. They are defined using the following syntax: |<parameter-name>: <type>| -> <return-type> { <body> }; For example: let f = |x: i32| -> i32 { x * 2 }; With type inference, however, the compiler is able to infer the type of each parameter and the return type, so the above form can be written as: let f = |x| { x * 2 }; With closures with a single expression (i.e. a body with one line) and implicit return type, the curly braces may be omitted: let f = |x| x * 2; Closures with no input parameter are written like so: let f = || println!("Hello, world!"); Closures may be passed as input parameters of functions that expect a function pointer: // A function which takes a function pointer as an argument and calls it with // the value '5'. fn apply(f: fn(i32) -> i32) -> i32 { // No semicolon, to indicate an implicit return f(5) } fn main() { // Defining the closure let f = |x| x * 2; println!("{}", apply(f)); // 10 println!("{}", f(5)); // 10 } However, one may need complex rules to describe how values in the body of the closure are captured. They are implemented using the Fn, FnMut, and FnOnce traits: Fn: the closure captures by reference (&T). They are used for functions that can still be called if they only have reference access (with &) to their environment. 
FnMut: the closure captures by mutable reference (&mut T). They are used for functions that can be called if they have mutable reference access (with &mut) to their environment. FnOnce: the closure captures by value (T). They are used for functions that are only called once. With these traits, the compiler will capture variables in the least restrictive manner possible. They help govern how values are moved around between scopes, which is largely important since Rust follows a lifetime construct to ensure values are "borrowed" and moved in a predictable and explicit manner. The following demonstrates how one may pass a closure as an input parameter using the Fn trait: // A function that takes a value of type F (which is defined as // a generic type that implements the 'Fn' trait, e.g. a closure) // and calls it with the value '5'. fn apply_by_ref<F>(f: F) -> i32 where F: Fn(i32) -> i32 { f(5) } fn main() { let f = |x| { println!("I got the value: {}", x); x * 2 }; // Applies the function before printing its return value println!("5 * 2 = {}", apply_by_ref(f)); } // ~~ Program output ~~ // I got the value: 5 // 5 * 2 = 10 The previous function definition can also be shortened for convenience as follows: fn apply_by_ref(f: impl Fn(i32) -> i32) -> i32 { f(5) } Scala In Scala, anonymous functions use the following syntax: (x: Int, y: Int) => x + y In certain contexts, like when an anonymous function is a parameter being passed to another function, the compiler can infer the types of the parameters of the anonymous function and they can be omitted in the syntax. In such contexts, it is also possible to use a shorthand for anonymous functions using the underscore character to introduce unnamed parameters. val list = List(1, 2, 3, 4) list.reduceLeft( (x, y) => x + y ) // Here, the compiler can infer that the types of x and y are both Int. // Thus, it needs no type annotations on the parameters of the anonymous function. list.reduceLeft( _ + _ ) // Each underscore stands for a new unnamed parameter in the anonymous function. // This results in an even shorter equivalent to the anonymous function above. Smalltalk In Smalltalk anonymous functions are called blocks and they are invoked (called) by sending them a "value" message. If several arguments are to be passed, a "value:...value:" message with a corresponding number of value arguments must be used. For example, in GNU Smalltalk, st> f:=[:x|x*x]. f value: 8 . 64 st> [:x :y|x+y] value: 5 value: 6 . 11 Smalltalk blocks are technically closures, allowing them to outlive their defining scope and still refer to the variables declared therein. st> f := [:a|[:n|a+n]] value: 100 . a BlockClosure "returns the inner block, which adds 100 (captured in "a" variable) to its argument." st> f value: 1 . 101 st> f value: 2 . 102 Swift In Swift, anonymous functions are called closures. The syntax has following form: { (parameters) -> returnType in statement } For example: { (s1: String, s2: String) -> Bool in return s1 > s2 } For sake of brevity and expressiveness, the parameter types and return type can be omitted if these can be inferred: { s1, s2 in return s1 > s2 } Similarly, Swift also supports implicit return statements for one-statement closures: { s1, s2 in s1 > s2 } Finally, the parameter names can be omitted as well; when omitted, the parameters are referenced using shorthand argument names, consisting of the $ symbol followed by their position (e.g. 
$0, $1, $2, etc.): { $0 > $1 } Tcl In Tcl, applying the anonymous squaring function to 2 looks as follows: apply {x {expr {$x*$x}}} 2 # returns 4 This example involves two candidates for what it means to be a function in Tcl. The most generic is usually called a command prefix, and if the variable f holds such a function, then the way to perform the function application f(x) would be {*}$f $x where {*} is the expansion prefix (new in Tcl 8.5). The command prefix in the above example is apply {x {expr {$x*$x}}} Command names can be bound to command prefixes by means of the interp alias command. Command prefixes support currying. Command prefixes are very common in Tcl APIs. The other candidate for "function" in Tcl is usually called a lambda, and appears as the {x {expr {$x*$x}}} part of the above example. This is the part which caches the compiled form of the anonymous function, but it can only be invoked by being passed to the apply command. Lambdas do not support currying, unless paired with an apply to form a command prefix. Lambdas are rare in Tcl APIs. Vala In Vala, anonymous functions are supported as lambda expressions. delegate int IntOp (int x, int y); void main () { IntOp foo = (x, y) => x * y; stdout.printf("%d\n", foo(10,5)); } Visual Basic .NET Visual Basic .NET 2008 introduced anonymous functions through the lambda form. Combined with implicit typing, VB provides an economical syntax for anonymous functions. As with Python, in VB.NET, anonymous functions must be defined on one line; they cannot be compound statements. Further, an anonymous function in VB.NET must truly be a VB.NET Function - it must return a value. Dim foo = Function(x) x * x Console.WriteLine(foo(10)) Visual Basic.NET 2010 added support for multiline lambda expressions and anonymous functions without a return value. For example, a function for use in a Thread. Dim t As New System.Threading.Thread(Sub () For n As Integer = 0 To 10 'Count to 10 Console.WriteLine(n) 'Print each number Next End Sub ) t.Start() References Functions and mappings
Examples of anonymous functions
[ "Mathematics" ]
12,349
[ "Mathematical analysis", "Functions and mappings", "Mathematical relations", "Mathematical objects" ]
78,153,188
https://en.wikipedia.org/wiki/Carotenoid%20biosynthesis
Carotenoids are a class of natural pigments synthesized by various organisms, including plants, algae, and photosynthetic bacteria. They are characterized by their vibrant yellow, orange, and red colors, which contribute significantly to the coloration of fruits and vegetables. Carotenoids play essential roles in photosynthesis and offer various health benefits, such as antioxidant properties and serving as precursors to vitamin A. Biosynthetic pathway Carotenoid biosynthesis occurs primarily in the plastids of plant cells, particularly within chloroplasts and chromoplasts. The biosynthetic pathway initiates with the condensation of two molecules of geranylgeranyl pyrophosphate (GGPP), a 20-carbon isoprenoid precursor. The key steps in this pathway are as follows: Formation of phytoene: The enzyme phytoene synthase (PSY) catalyzes the condensation of two GGPP molecules to produce phytoene, a colorless carotenoid. Desaturation to lycopene: Phytoene undergoes a series of desaturation reactions facilitated by enzymes such as phytoene desaturase (PDS) and ζ-carotene isomerase (Z-ISO), resulting in the formation of lycopene, a red carotenoid. Cyclization to carotenoids: Lycopene is cyclized into various carotenoids, including α-carotene and β-carotene, through the action of lycopene cyclase (LCY), which catalyzes cyclization at the ends of the lycopene molecule. Further modifications: Subsequent modifications, such as hydroxylation and oxidation, lead to the formation of xanthophylls (e.g., lutein and zeaxanthin) and other derivatives. Key enzymes Several enzymes play critical roles in the carotenoid biosynthetic pathway: Phytoene synthase (PSY): Catalyzes the first committed step in carotenoid biosynthesis, converting GGPP into phytoene. Phytoene desaturase (PDS): Introduces double bonds into phytoene, facilitating its conversion into lycopene. Lycopene cyclase (LCY): Responsible for the cyclization of lycopene into α-carotene or β-carotene. Carotenoid hydroxylases: Enzymes such as lutein epoxide cyclase (LUT) introduce hydroxyl groups into carotenoids, leading to the formation of xanthophylls. Regulation The regulation of carotenoid biosynthesis is influenced by various factors, including: Gene Expression: Many carotenoid biosynthetic genes are upregulated by light, enhancing the expression of PSY and subsequently increasing carotenoid production. Hormonal Regulation: Phytohormones such as auxins and abscisic acid modulate carotenoid biosynthesis. Notably, abscisic acid enhances carotenoid accumulation under stress conditions. Environmental Factors: Stressors like drought or pathogen attack can trigger carotenoid accumulation as a protective response, thereby enhancing plant resilience. Significance In plants Carotenoids play roles in photosynthetic organisms by: Protecting chlorophyll from photodamage. Scavenging reactive oxygen species (ROS). Attracting pollinators and seed dispersers through their bright colors. In human health Carotenoids, especially provitamin A carotenoids such as β-carotene, are essential for human health. Their benefits include: Supporting vision, particularly in low-light conditions. Enhancing immune function. Contributing to skin health. Providing antioxidant properties that may reduce the risk of chronic diseases, including cardiovascular diseases and certain cancers. References Carotenoids Biosynthesis
Carotenoid biosynthesis
[ "Chemistry", "Biology" ]
849
[ "Biomarkers", "Carotenoids", "Biosynthesis", "Chemical synthesis", "Metabolism" ]
67,980,941
https://en.wikipedia.org/wiki/Coordination%20sequence
In crystallography and the theory of infinite vertex-transitive graphs, the coordination sequence of a vertex v is an integer sequence that counts how many vertices are at each possible distance from v. That is, it is a sequence (c0, c1, c2, ...) where each cn is the number of vertices that are n steps away from v. If the graph is vertex-transitive, then the sequence is an invariant of the graph that does not depend on the specific choice of v. Coordination sequences can also be defined for sphere packings, by using either the contact graph of the spheres or the Delaunay triangulation of their centers, but these two choices may give rise to different sequences. As an example, in a square grid, for each positive integer n, there are 4n grid points that are n steps away from the origin. Therefore, the coordination sequence of the square grid is the sequence 1, 4, 8, 12, 16, 20, ..., in which, except for the initial value of one, each number is a multiple of four. The concept was proposed by Georg O. Brunner and Fritz Laves and later developed by Michael O'Keefe. The coordination sequences of many low-dimensional lattices and uniform tilings are known. The coordination sequences of periodic structures are known to be quasi-polynomial. References Crystallography Infinite graphs Integer sequences
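The counts described above can be computed directly by breadth-first search on the graph; the following Python sketch (function and variable names are illustrative) reproduces the square-grid sequence:

from collections import deque

def coordination_sequence(neighbors, origin, n_max):
    """Count vertices at each graph distance 0..n_max from origin by BFS."""
    dist = {origin: 0}
    counts = [0] * (n_max + 1)
    counts[0] = 1
    queue = deque([origin])
    while queue:
        v = queue.popleft()
        if dist[v] == n_max:
            continue  # do not expand past the distance of interest
        for w in neighbors(v):
            if w not in dist:
                dist[w] = dist[v] + 1
                counts[dist[w]] += 1
                queue.append(w)
    return counts

square_grid = lambda v: [(v[0]+1, v[1]), (v[0]-1, v[1]), (v[0], v[1]+1), (v[0], v[1]-1)]
print(coordination_sequence(square_grid, (0, 0), 5))  # [1, 4, 8, 12, 16, 20]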
Coordination sequence
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
251
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Mathematical objects", "Infinity", "Infinite graphs", "Combinatorics", "Materials science", "Crystallography", "Condensed matter physics", "Numbers", "Number theory" ]
67,981,364
https://en.wikipedia.org/wiki/Redheffer%20star%20product
In mathematics, the Redheffer star product is a binary operation on linear operators that arises in connection to solving coupled systems of linear equations. It was introduced by Raymond Redheffer in 1959, and has subsequently been widely adopted in computational methods for scattering matrices. Given two scattering matrices from different linear scatterers, the Redheffer star product yields the combined scattering matrix produced when some or all of the output channels of one scatterer are connected to inputs of another scatterer. Definition Suppose A, B are the 2 × 2 block matrices A = (A11, A12; A21, A22) and B = (B11, B12; B21, B22), whose blocks Aij, Bij have the same shape when i = j. The Redheffer star product is then defined by: A ⋆ B = (B11(I − A12B21)⁻¹A11, B12 + B11(I − A12B21)⁻¹A12B22; A21 + A22(I − B21A12)⁻¹B21A11, A22(I − B21A12)⁻¹B22), assuming that (I − A12B21) and (I − B21A12) are invertible, where I is an identity matrix conformable to A12B21 or B21A12, respectively. This can be rewritten several ways making use of the so-called push-through identity (I − A12B21)⁻¹A12 = A12(I − B21A12)⁻¹. Redheffer's definition extends beyond matrices to linear operators on a Hilbert space H. By definition, the blocks are then linear endomorphisms of H, making A, B linear endomorphisms of H ⊕ H, where ⊕ is the direct sum. However, the star product still makes sense as long as the transformations are compatible, which is possible whenever the domains and codomains of the blocks match so that the compositions appearing in the definition are defined. Properties Existence (I − A12B21)⁻¹ exists if and only if (I − B21A12)⁻¹ exists. Thus when either exists, so does the Redheffer star product. Identity The star identity is the identity on H ⊕ H, or (I, 0; 0, I). Associativity The star product is associative, provided all of the relevant matrices are defined. Thus (A ⋆ B) ⋆ C = A ⋆ (B ⋆ C). Adjoint Provided either side exists, the adjoint of a Redheffer star product is (A ⋆ B)* = B* ⋆ A*. Inverse Under suitable one-sided invertibility assumptions, a left (or right) matrix inverse of A is also a star inverse of A. The star inverse equals the matrix inverse, and both can be computed with block inversion. Derivation from a linear system The star product arises from solving multiple linear systems of equations that share variables in common. Often, each linear system models the behavior of one subsystem in a physical process and by connecting the multiple subsystems into a whole, one can eliminate variables shared across subsystems in order to obtain the overall linear system. For instance, let a, b, c, d, e, f be elements of a Hilbert space such that (b, f) = A(a, e) and (c, e) = B(b, d), giving the following equations in six variables: b = A11 a + A12 e, f = A21 a + A22 e, c = B11 b + B12 d, e = B21 b + B22 d. By substituting the first equation into the last we find: e = (I − B21A12)⁻¹(B21A11 a + B22 d). By substituting the last equation into the first we find: b = (I − A12B21)⁻¹(A11 a + A12B22 d). Eliminating b and e by substituting the two preceding equations into those for c and f results in the Redheffer star product being the matrix such that: (c, f) = (A ⋆ B)(a, d). Connection to scattering matrices Many scattering processes take on a form that motivates a different convention for the block structure of the linear system of a scattering matrix. Typically a physical device that performs a linear transformation on inputs, such as linear dielectric media on electromagnetic waves or in quantum mechanical scattering, can be encapsulated as a system which interacts with the environment through various ports, each of which accepts inputs and returns outputs. It is conventional to use a different notation for the Hilbert space, Hi, whose subscript i labels a port on the device. Additionally, any element, ci±, has an additional superscript labeling the direction of travel (where + indicates moving from port i to i+1 and - indicates the reverse). The equivalent notation for a Redheffer transformation, R, used in the previous section is (c2+, c1−) = R(c1+, c2−).
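A direct numpy transcription of the block-matrix definition above can serve as a sanity check (a sketch under the stated invertibility assumptions; the function and variable names are illustrative):

import numpy as np

def redheffer_star(A, B, k):
    """Redheffer star product of two 2k x 2k matrices, viewed as 2x2 block matrices."""
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    I = np.eye(k)
    XA = np.linalg.inv(I - A12 @ B21)   # assumed invertible
    XB = np.linalg.inv(I - B21 @ A12)   # assumed invertible
    return np.block([
        [B11 @ XA @ A11,             B12 + B11 @ XA @ A12 @ B22],
        [A21 + A22 @ XB @ B21 @ A11, A22 @ XB @ B22],
    ])

# The 2k x 2k identity acts as the star identity: A * I = I * A = A.
k = 2
A = np.random.rand(2 * k, 2 * k)
I2k = np.eye(2 * k)
assert np.allclose(redheffer_star(A, I2k, k), A)
assert np.allclose(redheffer_star(I2k, A, k), A)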
The action of the S-matrix, S, is defined with an additional flip compared to Redheffer's definition: (c1−, c2+) = S(c1+, c2−), so S = JR, where J = (0, I; I, 0). Note that in order for the off-diagonal identity matrices to be defined, we require H1 and H2 to be the same underlying Hilbert space. (The subscript does not imply any difference, but is just a label for bookkeeping.) The star product, ⋆S, for two S-matrices, A and B, is given by A ⋆S B = (A11 + A12(I − B11A22)⁻¹B11A21, A12(I − B11A22)⁻¹B12; B21(I − A22B11)⁻¹A21, B22 + B21(I − A22B11)⁻¹A22B12), so that the combined matrix is again an S-matrix of the same block form. Properties These are analogues of the properties of ⋆ for ⋆S. Most of them follow from the correspondence S = JR. J, the exchange operator, is also the S-matrix star identity defined below. For the rest of this section, A and B are S-matrices. Existence A ⋆S B exists when either (I − B11A22)⁻¹ or (I − A22B11)⁻¹ exists. Identity The S-matrix star identity, J, is (0, I; I, 0). This means J ⋆S A = A ⋆S J = A. Associativity Associativity of ⋆S follows from associativity of ⋆ and of matrix multiplication. Adjoint From the correspondence between ⋆ and ⋆S, and the adjoint of ⋆, we have that (A ⋆S B)* = B* ⋆S A*. Inverse The matrix B that is the S-matrix star product inverse of A, in the sense that A ⋆S B = B ⋆S A = J, is B = JA⁻¹J, where A⁻¹ is the ordinary matrix inverse and J is as defined above. Connection to transfer matrices Observe that a scattering matrix can be rewritten as a transfer matrix, T, with action (c2+, c2−) = T(c1+, c1−), where T = (S21 − S22S12⁻¹S11, S22S12⁻¹; −S12⁻¹S11, S12⁻¹). Here the subscripts relate the different directions of propagation at each port. As a result, the star product of scattering matrices, A ⋆S B, is analogous to the following matrix multiplication of transfer matrices, TA⋆B = TB TA, where TA and TB are the transfer matrices of the respective scatterers, so the composite transfer matrix is an ordinary product. Generalizations Redheffer generalized the star product in several ways: Arbitrary bijections If there is a bijection M ↔ f(M) between matrices and a family of transformations that compose, then an associative star product can be defined by transporting composition through the bijection; the particular star product defined by Redheffer above is obtained from one such correspondence. 3x3 star product A star product can also be defined for 3x3 matrices. Applications to scattering matrices In physics, the Redheffer star product appears when constructing a total scattering matrix from two or more subsystems. If system A has a scattering matrix SA and system B has scattering matrix SB, then the combined system has scattering matrix SA ⋆ SB. Transmission line theory Many physical processes, including radiative transfer, neutron diffusion, circuit theory, and others are described by scattering processes whose formulation depends on the dimension of the process and the representation of the operators. For probabilistic problems, the scattering equation may appear in a Kolmogorov-type equation. Electromagnetism The Redheffer star product can be used to solve for the propagation of electromagnetic fields in stratified, multilayered media. Each layer in the structure has its own scattering matrix and the total structure's scattering matrix can be described as the star product between all of the layers. A free software program that simulates electromagnetism in layered media is the Stanford Stratified Structure Solver. Semiconductor interfaces Kinetic models of consecutive semiconductor interfaces can use a scattering matrix formulation to model the motion of electrons between the semiconductors. Factorization on graphs In the analysis of Schrödinger operators on graphs, the scattering matrix of a graph can be obtained as a generalized star product of the scattering matrices corresponding to its subgraphs. References Scattering theory Scattering, absorption and radiative transfer Hilbert spaces Matrices Mathematical physics
Redheffer star product
[ "Physics", "Chemistry", "Mathematics" ]
1,340
[ "Scattering theory", " absorption and radiative transfer (optics)", "Applied mathematics", "Theoretical physics", "Mathematical objects", "Quantum mechanics", "Matrices (mathematics)", "Scattering", "Hilbert spaces", "Mathematical physics" ]
67,984,129
https://en.wikipedia.org/wiki/Praseodymium%28IV%29%20fluoride
Praseodymium(IV) fluoride (also praseodymium tetrafluoride) is a binary inorganic compound, a highly oxidised metal salt of praseodymium and fluoride with the chemical formula PrF4. Synthesis Praseodymium(IV) fluoride can be prepared by the effect of krypton difluoride on praseodymium(IV) oxide: PrO2 + 2 KrF2 → PrF4 + 2 Kr + O2 Praseodymium(IV) fluoride can also be made by the dissolution of sodium hexafluoropraseodymate(IV), Na2PrF6, in liquid hydrogen fluoride: Na2PrF6 + 2 HF → PrF4 + 2 NaHF2 Properties Praseodymium(IV) fluoride forms light yellow crystals. The crystal structure is isotypic with that of uranium tetrafluoride, UF4, in which the metal atoms have a square-antiprismatic ("anticubic") coordination. It decomposes when heated: 2 PrF4 → 2 PrF3 + F2 Due to the high normal potential of the tetravalent praseodymium cation (Pr3+/Pr4+: +3.2 V), praseodymium(IV) fluoride decomposes in water, releasing oxygen, O2. See also Praseodymium(III) fluoride Uranium tetrafluoride References Fluorides Praseodymium compounds Inorganic compounds Lanthanide halides
Praseodymium(IV) fluoride
[ "Chemistry" ]
273
[ "Fluorides", "Inorganic compounds", "Salts" ]
67,986,120
https://en.wikipedia.org/wiki/HD%2026764
HD 26764, also known as HR 1314 or rarely 14 H. Camelopardalis, is a solitary white-hued star located in the northern circumpolar constellation Camelopardalis. It has an apparent magnitude of 5.19, making it faintly visible to the naked eye if viewed under good conditions. Gaia DR3 parallax measurements place the object at a distance of 266 light-years, and it is drifting closer, although its heliocentric radial velocity is poorly constrained. At its current distance, HD 26764's brightness is diminished by 0.26 magnitudes due to interstellar dust. HD 26764 has a stellar classification of either A2 Vn or A1 Vn. Both classes indicate that the object is an A-type main-sequence star with broad (nebulous) absorption lines due to rapid rotation. At present it has 2.74 times the mass of the Sun and 3.4 times the Sun's radius. It radiates 94 times the luminosity of the Sun from its photosphere. At the age of 388 million years, HD 26764 is a rather evolved dwarf star, having completed 91.2% of its main-sequence lifetime. Like many hot stars, it spins rapidly, with a large projected rotational velocity. X-ray emission has been detected around the star. A-type stars are not expected to produce X-rays, so it must be coming from an unseen companion. References A-type main-sequence stars 1314 019949 026764 Camelopardalis Durchmusterung objects
HD 26764
[ "Astronomy" ]
333
[ "Camelopardalis", "Constellations" ]
67,987,267
https://en.wikipedia.org/wiki/The%20Equidistribution%20of%20Lattice%20Shapes%20of%20Rings%20of%20Integers%20of%20Cubic%2C%20Quartic%2C%20and%20Quintic%20Number%20Fields
The Equidistribution of Lattice Shapes of Rings of Integers of Cubic, Quartic, and Quintic Number Fields: An Artist's Rendering is a mathematics book by Piper Harron (also known as Piper H and Piper Harris), based on her Princeton University doctoral thesis of the same title. It has been described as "feminist", "unique", "honest", "generous", and "refreshing". Thesis and reception Harron was advised by Fields Medalist Manjul Bhargava, and her thesis deals with the properties of number fields, specifically the shape of their rings of integers. Harron and Bhargava showed that, viewed as a lattice in real vector space, the ring of integers of a random number field does not have any special symmetries. Rather than simply presenting the proof, Harron intended for the thesis and book to explain both the mathematics and the process (and struggle) that was required to reach this result. The writing is accessible and informal, and the book features sections targeting three different audiences: laypeople, people with general mathematical knowledge, and experts in number theory. Harron intentionally departs from the typical academic format as she is writing for a community of mathematicians who "do not feel that they are encouraged to be themselves". Unusually for a mathematics thesis, Harron intersperses her rigorous analysis and proofs with cartoons, poetry, pop-culture references, and humorous diagrams. Science writer Evelyn Lamb, in Scientific American, expresses admiration for Harron for explaining the process behind the mathematics in a way that is accessible to non-mathematicians, especially "because as a woman of color, she could pay a higher price for doing it." Mathematician Philip Ording calls her approach to communicating mathematical abstractions "generous". Her thesis went viral in late 2015, especially within the mathematical community, in part because of the prologue which begins by stating that "respected research math is dominated by men of a certain attitude". Harron had left academia for several years, later saying that she found the atmosphere oppressive and herself miserable and verging on failure. She returned determined that, even if she did not do math the "right way", she "could still contribute to the community". Her prologue states that the community lacks diversity and discourages diversity of thought. "It is not my place to make the system comfortable with itself", she concludes. A concise proof was published in Compositio Mathematica in 2016. Author Harron earned her doctorate from Princeton in 2016. In the academic year 2023-2024 she was a teacher at Phillips Exeter Academy. She has changed her name to Piper Harris. In 2017 in an American Mathematical Society blog she asked white cisgender men to leave their jobs, because they are "taking up room that should go to someone else". References External links The Equidistribution of Lattice Shapes of Rings of Integers of Cubic, Quartic, and Quintic Number Fields (Harron's PhD thesis) The Liberated Mathematician AMS.Blogs 2021 non-fiction books English-language non-fiction books Birkhäuser books Feminist books Literature by African-American women Mathematical proofs Mathematics books Theses
The Equidistribution of Lattice Shapes of Rings of Integers of Cubic, Quartic, and Quintic Number Fields
[ "Mathematics" ]
669
[ "nan" ]
67,989,719
https://en.wikipedia.org/wiki/Crohns%20MAP%20Vaccine
The Crohns MAP Vaccine is an experimental viral vector vaccine intended to prevent or treat Crohn's disease by provoking an immune response to one possible causative agent of the disease, Mycobacterium avium subsp. paratuberculosis. The vaccine is about to begin Phase 2 of its development. One of the scientists involved with this research is Thomas Borody, known for his work in developing the 'Triple Therapy' for treating ulcers caused by Helicobacter pylori. References Vaccines
Crohns MAP Vaccine
[ "Biology" ]
115
[ "Vaccination", "Vaccines" ]
66,472,311
https://en.wikipedia.org/wiki/Vanadium%20cycle
The global vanadium cycle is controlled by physical and chemical processes that drive the exchange of vanadium between its two main reservoirs: the upper continental crust and the ocean. Anthropogenic processes such as coal and petroleum production release vanadium to the atmosphere. Sources Natural sources Vanadium is a trace metal that is relatively abundant in the Earth (~100 parts per million in the upper crust). Vanadium is mobilized from minerals through weathering and transported to the ocean. Vanadium can enter the atmosphere through wind erosion and volcanic emissions and will remain there until it is removed by precipitation. Anthropogenic sources Human activity has increased the amount of vanadium emissions to the atmosphere. Vanadium is abundant in fossil fuels because it is incorporated in porphyrins during organic matter degradation. Pollution from coal and petroleum processing releases significant vanadium to the atmosphere. Vanadium is also mined and used for industrial purposes, including steel reinforcement, electronics, and batteries. Sinks Vanadium is removed from the ocean by burial in marine sediments and by incorporation into iron oxides at hydrothermal vents. Biological processes Biological processes play a relatively minor role in the global vanadium cycle. Vanadium bromoperoxidase is present in some marine bacteria and algae. Vanadium can also take the place of molybdenum in alternative nitrogenases. References Vanadium Biogeochemical cycle
Vanadium cycle
[ "Chemistry" ]
291
[ "Biogeochemical cycle", "Biogeochemistry" ]
66,474,041
https://en.wikipedia.org/wiki/Supernova%20neutrinos
Supernova neutrinos are weakly interactive elementary particles produced during a core-collapse supernova explosion. A massive star collapses at the end of its life, emitting on the order of 10^58 neutrinos and antineutrinos in all lepton flavors. The luminosities of the different neutrino and antineutrino species are roughly the same. They carry away about 99% of the gravitational energy of the dying star as a burst lasting tens of seconds. The typical supernova neutrino energies are of the order of 10 MeV. Supernovae are considered the strongest and most frequent source of cosmic neutrinos in the MeV energy range. Since neutrinos are generated in the core of a supernova, they play a crucial role in the star's collapse and explosion. Neutrino heating is believed to be a critical factor in supernova explosions. Therefore, observation of neutrinos from supernovae provides detailed information about core collapse and the explosion mechanism. Further, neutrinos undergoing collective flavor conversions in a supernova's dense interior offer opportunities to study neutrino-neutrino interactions. The only supernova neutrino event detected so far is SN 1987A. Nevertheless, with current detector sensitivities, it is expected that thousands of neutrino events from a galactic core-collapse supernova would be observed. The next generation of experiments are designed to be sensitive to neutrinos from supernova explosions as far as Andromeda or beyond. The observation of supernovae will broaden our understanding of various astrophysical and particle physics phenomena. Further, coincident detection of supernova neutrinos in different experiments would provide an early alarm to astronomers about a supernova. History Stirling A. Colgate and Richard H. White, and independently W. David Arnett, identified the role of neutrinos in core collapse, which resulted in the subsequent development of the theory of the supernova explosion mechanism. In February 1987, the observation of supernova neutrinos experimentally verified the theoretical relationship between neutrinos and supernovae. The Nobel Prize-winning event, known as SN 1987A, was the collapse of the blue supergiant star Sanduleak -69° 202, in the Large Magellanic Cloud outside our Galaxy, 51 kpc away. About 10^58 lightweight weakly-interacting neutrinos were produced, carrying away almost all of the energy of the supernova. Two kiloton-scale water Cherenkov detectors, Kamiokande II and IMB, along with the smaller Baksan Observatory, detected a total of 25 neutrino events over a period of about 13 seconds. Only electron-type neutrinos were detected because neutrino energies were below the threshold of muon or tau production. The SN 1987A neutrino data, although sparse, confirmed the salient features of the basic supernova model of gravitational collapse and associated neutrino emission. It put strong constraints on neutrino properties such as charge and decay rate. The observation is considered a breakthrough in the field of supernova and neutrino physics. Properties Neutrinos are fermions, i.e. elementary particles with a spin of 1/2. They interact only through weak interaction and gravity. A core-collapse supernova emits a burst of ~10^58 neutrinos and antineutrinos on a time scale of tens of seconds. Supernova neutrinos carry away about 99% of the gravitational energy of the dying star in the form of kinetic energy. Energy is divided roughly equally between the three flavors of neutrinos and three flavors of antineutrinos. Their average energy is of the order 10 MeV.
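A back-of-the-envelope check of these numbers in Python (a sketch; the 3×10^46 J released-energy figure is a typical literature value for the gravitational binding energy, not taken from this article):

# Rough energy budget of a core-collapse supernova neutrino burst.
E_total = 3e46          # J, typical gravitational binding energy released
MeV = 1.602e-13         # J per MeV
E_mean = 10 * MeV       # J, mean energy per neutrino (~10 MeV)

N_neutrinos = E_total / E_mean
L_mean = E_total / 10.0  # W, mean luminosity over a ~10 s burst

print(f"{N_neutrinos:.1e} neutrinos")      # ~1.9e+58, i.e. of order 10^58
print(f"{L_mean:.1e} W mean luminosity")   # ~3e+45 W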
The neutrino luminosity of a supernova is typically of the order of 10^45 W (about 10^52 erg/s). Core-collapse events are the strongest and most frequent source of cosmic neutrinos in the MeV energy range. During a supernova, neutrinos are produced in enormous numbers inside the core. Therefore, they have a fundamental influence on the collapse and on supernova explosions. Neutrino heating is predicted to be responsible for the supernova explosion. Anisotropic neutrino emission during the collapse and explosion also generates gravitational wave bursts. Furthermore, neutrino interactions set the neutron-to-proton ratio, determining the nucleosynthesis outcome of heavier elements in the neutrino-driven wind. Production Supernova neutrinos are produced when a massive star collapses at the end of its life, ejecting its outer mantle in an explosion. Wilson's delayed neutrino explosion mechanism has been used for 30 years to explain core-collapse supernovae. Near the end of its life, a massive star is made up of onion-layered shells of elements with an iron core. During the early stage of the collapse, electron neutrinos are created through electron capture on protons bound inside iron nuclei: e− + p → n + νe. The above reaction produces neutron-rich nuclei, leading to the neutronization of the core. Therefore, this is known as the neutronization phase. Some of these nuclei undergo beta decay and produce electron antineutrinos: n → p + e− + ν̄e. The above processes reduce the core energy and its lepton density. Hence, the electron degeneracy pressure is unable to stabilize the stellar core against the gravitational force, and the star collapses. When the density of the central region of the collapse exceeds about 10^11–10^12 g/cm^3, the diffusion time of the neutrinos exceeds the collapse time. Therefore, the neutrinos become trapped inside the core. When the central region of the core reaches nuclear densities (~10^14 g/cm^3), the nuclear pressure causes the collapse to decelerate. This generates a shock wave in the outer core (the outer region of the iron core), which triggers the supernova explosion. The trapped electron neutrinos are released in the form of a neutrino burst in the first tens of milliseconds. It is found from simulations that the neutrino burst and iron photo-disintegration weaken the shock wave within milliseconds of its propagation through the iron core. The weakening of the shock wave results in mass infall, which forms a neutron star. This is known as the accretion phase and lasts between a few tens and a few hundred milliseconds. The high-density region traps neutrinos. When the temperature reaches 10 MeV, thermal photons generate electron–positron pairs, and neutrinos and antineutrinos are created through the weak interaction of electron–positron pairs: e+ + e− → ν + ν̄. The luminosity of the electron flavor is significantly higher than that of the non-electron flavors. As the neutrino temperature rises in the compressionally heated core, neutrinos energize the shock wave through charged-current reactions with free nucleons: νe + n → p + e− and ν̄e + p → n + e+. When the thermal pressure created by neutrino heating increases above the pressure of the infalling material, the stalled shock wave is rejuvenated, and neutrinos are released. The neutron star cools down as neutrino-pair production and neutrino release continue. Therefore, this is known as the cooling phase. The luminosities of the different neutrino and antineutrino species are roughly the same. The supernova neutrino luminosity drops significantly after several tens of seconds.
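The neutronization, accretion, and cooling phases described above are often summarized with a simple exponentially decaying light curve for the cooling stage. The following sketch is our own illustration, not a simulation from the literature; the binding energy E_B and decay time tau are assumed, order-of-magnitude values, and the factor of 6 reflects the equal division of energy among the six (anti)neutrino species noted above.

```python
import numpy as np

# Toy cooling-phase light curve: each of the six species carries E_B/6,
# radiated with luminosity L(t) = E_B / (6 * tau) * exp(-t / tau).
E_B = 3e53      # erg, total binding energy radiated in neutrinos (assumed)
tau = 3.0       # s, cooling time constant (assumed)

def luminosity(t):
    """Per-species neutrino luminosity in erg/s at time t (seconds)."""
    return E_B / (6.0 * tau) * np.exp(-t / tau)

for t in (0.0, 1.0, 3.0, 10.0, 30.0):
    print(f"t = {t:5.1f} s   L = {luminosity(t):.2e} erg/s")
# The luminosity starts near 10^52 erg/s and drops sharply after tens of
# seconds, matching the qualitative description of the cooling phase.
```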
Oscillation Knowledge of the flux and flavor content of the neutrinos behind the shock wave is essential for implementing the neutrino-driven heating mechanism in computer simulations of supernova explosions. Neutrino oscillations in dense matter are an active field of research. Neutrinos undergo flavor conversions after they thermally decouple from the proto-neutron star. Within the neutrino-bulb model, neutrinos of all flavors decouple at a single sharp surface near the surface of the star. Also, the neutrinos travelling in different directions are assumed to travel the same path length in reaching a certain distance R from the center. This assumption is known as the single-angle approximation, which, along with the spherical symmetry of the supernova, allows us to treat neutrinos emitted in the same flavor as an ensemble and to describe their evolution only as a function of distance. The flavor evolution of neutrinos for each energy mode E is described by a density matrix which, at the emission surface, is diagonal in the flavor basis with entries proportional to the number fluxes of the individual flavors: ρ_E ∝ Σ_α (L_α/⟨E_α⟩) f_α(E) |ν_α⟩⟨ν_α|. Here, L_α is the initial neutrino luminosity at the surface of the proto-neutron star, which drops exponentially. Assuming a decay time τ, the total energy emitted per unit time in a particular flavor can be given by L_α(t) = L_α(0) e^(−t/τ). ⟨E_α⟩ represents the average energy. Therefore, the fraction L_α/⟨E_α⟩ gives the number of neutrinos emitted per unit of time in that flavor. f_α(E) is the normalized energy distribution for the corresponding flavor. The same formula holds for antineutrinos too. The neutrino luminosity is found from the following relation: 6 ∫ L(t) dt = E_B. The integral is multiplied by 6 because the released binding energy E_B is divided equally between the three flavors of neutrinos and three flavors of antineutrinos. The evolution of the density operator is given by Liouville's equation: i (dρ_E/dr) = [H, ρ_E]. The Hamiltonian H covers vacuum oscillations, the charged-current interaction of neutrinos with the background electrons, as well as neutrino–neutrino interactions. Neutrino self-interactions are non-linear effects that result in collective flavor conversions. They are significant only when the interaction frequency exceeds the vacuum oscillation frequency. Typically, they become negligible after a few hundred kilometers from the center. Thereafter, Mikheyev–Smirnov–Wolfenstein resonances with the matter in the stellar envelope can describe the neutrino evolution. Detection There are several different ways to observe supernova neutrinos. Almost all of them involve the inverse beta decay reaction for the detection of electron antineutrinos. The reaction is a charged-current weak interaction, where an electron antineutrino interacts with a proton, producing a positron and a neutron: ν̄e + p → e+ + n. The positron retains most of the energy of the incoming neutrino. It produces a cone of Cherenkov light, which is detected by photomultiplier tubes (PMTs) arrayed on the walls of the detector. Neutrino oscillations in Earth matter may affect the supernova neutrino signals detected in experimental facilities. With current detector sensitivities, it is expected that thousands of neutrino events from a galactic core-collapse supernova would be observed. Large-scale detectors such as Hyper-Kamiokande or IceCube would register even larger event counts. Unfortunately, SN 1987A is the only supernova neutrino event detected so far. There has been no galactic supernova in the Milky Way in the last 120 years, despite the expected rate of 0.8-3 per century. Nevertheless, a supernova at a distance of 10 kpc would enable a detailed study of the neutrino signal, providing unique physics insights.
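To make the Liouville equation above concrete, here is a minimal two-flavor sketch that keeps only the vacuum term of the Hamiltonian; the matter and neutrino–neutrino terms that drive the collective effects discussed above are omitted, and the mixing angle and oscillation wavenumber are illustrative values rather than measured parameters.

```python
import numpy as np
from scipy.linalg import expm

# Two-flavor vacuum evolution of the density matrix: i d(rho)/dr = [H, rho].
# With H constant, the exact solution is rho(r) = U rho(0) U^dagger,
# with U = exp(-i H r). theta and omega are illustrative (assumed) values.
theta = 0.15                    # effective mixing angle
omega = 2 * np.pi / 100.0       # vacuum oscillation wavenumber, 1/km

H = 0.5 * omega * np.array([[-np.cos(2 * theta), np.sin(2 * theta)],
                            [ np.sin(2 * theta), np.cos(2 * theta)]])
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)   # pure nu_e at emission

for r in (0.0, 25.0, 50.0, 100.0):                 # km from the source
    U = expm(-1j * H * r)
    rho = U @ rho0 @ U.conj().T
    print(f"r = {r:5.1f} km   P(nu_e survives) = {rho[0, 0].real:.3f}")
# Reproduces P = 1 - sin^2(2 theta) sin^2(omega r / 2), the familiar
# two-flavor vacuum oscillation formula.
```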
Additionally, the next generation of underground experiments, like Hyper-Kamiokande, is designed to be sensitive to neutrinos from supernova explosions as far away as Andromeda or beyond. Further, they are expected to have good supernova pointing capability as well. Significance Since supernova neutrinos originate deep inside the stellar core, they are a relatively reliable messenger of the supernova mechanism. Due to their weakly interacting nature, the neutrino signals from a galactic supernova can give information about the physical conditions at the center of core collapse which would otherwise be inaccessible. Furthermore, they are the only source of information for core-collapse events which do not result in a supernova, or for cases when the supernova is in a dust-obscured region. Future observations of supernova neutrinos will constrain the different theoretical models of core collapse and the explosion mechanism by testing them against direct empirical information from the supernova core. Due to their weakly interacting nature, the near-light-speed neutrinos emerge promptly after the collapse. In contrast, there may be a delay of hours or days before the photon signal emerges from the stellar envelope. Therefore, a supernova will be observed in neutrino observatories before the optical signal, even after travelling millions of light years. The coincident detection of neutrino signals from different experiments would provide an early alarm to astronomers to direct telescopes to the right part of the sky to capture the supernova's light. The Supernova Early Warning System is a project which aims to connect neutrino detectors around the world and trigger the electromagnetic counterpart experiments in case of a sudden influx of neutrinos in the detectors. The flavor evolution of neutrinos propagating through the dense and turbulent interior of the supernova is dominated by the collective behavior associated with neutrino-neutrino interactions. Therefore, supernova neutrinos offer an opportunity to examine neutrino flavor mixing under high-density conditions. Being sensitive to the neutrino mass ordering, they can also provide information about fundamental neutrino properties. Further, they can act as a standard candle to measure cosmic distance, as the neutronization burst signal does not depend strongly on the progenitor. Diffuse supernova neutrino background The diffuse supernova neutrino background (DSNB) is a cosmic background of (anti)neutrinos formed by the accumulation of neutrinos emitted from all past core-collapse supernovae. Its existence was predicted even before the observation of supernova neutrinos. The DSNB can be used to study physics on cosmological scales. It provides an independent test of the supernova rate and can also give information about neutrino emission properties, stellar dynamics, and failed progenitors. Super-Kamiokande has placed an observational upper limit on the DSNB flux above 19.3 MeV of neutrino energy. The theoretically estimated flux is only about half this limit. Therefore, the DSNB signal is expected to be detected in the near future with detectors like JUNO and SuperK-Gd. Notes References Supernovae Neutrino astronomy
Supernova neutrinos
[ "Chemistry", "Astronomy" ]
2,902
[ "Supernovae", "Neutrino astronomy", "Astronomical events", "Explosions", "Astronomical sub-disciplines" ]
76,807,453
https://en.wikipedia.org/wiki/Mass%20injection%20flow
Mass injection flow (Limbach Flow) refers to inviscid, adiabatic flow through a constant area duct where the effect of mass addition is considered. For this model, the duct area remains constant, the flow is assumed to be steady and one-dimensional, and mass is added within the duct. Because the flow is adiabatic, unlike in Rayleigh flow, the stagnation temperature is a constant. Compressibility effects often come into consideration, although this flow model also applies to incompressible flow. For supersonic flow (an upstream Mach number greater than 1), deceleration occurs with mass addition to the duct, and the flow can become choked. Conversely, for subsonic flow (an upstream Mach number less than 1), acceleration occurs, and the flow can become choked given sufficient mass addition. Therefore, mass addition will cause both supersonic and subsonic Mach numbers to approach Mach 1, resulting in choked flow. Theory The 1D mass injection flow model begins with a mass-velocity relation derived for mass injection into a steady, adiabatic, frictionless, constant area flow of calorically perfect gas: dV/V = −(1/(M² − 1)) (dG/G), where G represents the mass flux, G = ṁ/A. This expression describes how velocity will change with a change in mass flux (i.e. how a change in mass flux dG drives a change in velocity dV). From this relation, two distinct modes of behavior are seen: When flow is subsonic (M < 1) the quantity M² − 1 is negative, so the right-hand side of the equation becomes positive. This indicates that increasing mass flux will increase subsonic flow velocity toward Mach 1. When flow is supersonic (M > 1) the quantity M² − 1 is positive, so the right-hand side of the equation becomes negative. This indicates that increasing mass flux will decrease supersonic flow velocity towards Mach 1. From the mass-velocity relation, an explicit mass-Mach relation may be derived: dG/G = ((1 − M²)/(1 + ((γ − 1)/2) M²)) (dM/M). Derivations Although Fanno flow and Rayleigh flow are covered in detail in many textbooks, mass injection flow is not. For this reason, derivations of fundamental mass flow properties are given here. In the following derivations, the constant R is used to denote the specific gas constant (i.e. the universal gas constant divided by the molar mass). Mass-Velocity Relation We begin by establishing a relationship between the differential enthalpy, pressure, and density of a calorically perfect gas (h = c_p T = (γ/(γ − 1)) p/ρ): dh = (γ/(γ − 1)) (dp/ρ − (p/ρ²) dρ). From the adiabatic energy equation (h + V²/2 = constant) we find: dh = −V dV. Substituting the enthalpy-pressure-density relation into the adiabatic energy relation yields (γ/(γ − 1)) (dp/ρ − (p/ρ²) dρ) = −V dV. Next, we find a relationship between differential density, mass flux (G = ρV), and velocity: dρ/ρ = dG/G − dV/V. Substituting the density-mass-velocity relation into the modified energy relation yields (γ/(γ − 1)) (dp/ρ − (p/ρ)(dG/G − dV/V)) = −V dV. Substituting the 1D steady flow momentum conservation equation (see also the Euler equations) of the form dp = −ρ V dV into this yields (γ/(γ − 1)) (−V dV − (p/ρ)(dG/G − dV/V)) = −V dV. From the ideal gas law we find p/ρ = RT, and from the definition of a calorically perfect gas we find c_p = γR/(γ − 1). Substituting these expressions into the combined equation yields γRT (dV/V − dG/G) = V² (dV/V). Using the speed of sound in an ideal gas (a² = γRT) and the definition of the Mach number (M = V/a) yields dV/V = −(1/(M² − 1)) (dG/G). This is the mass-velocity relationship for mass injection into a steady, adiabatic, frictionless, constant area flow of calorically perfect gas. Mass-Mach Relation To find a relationship between differential mass and Mach number, we will find an expression for dV/V solely in terms of the Mach number M. We can then substitute this expression into the mass-velocity relation to yield a mass-Mach relation.
We begin by relating differential velocity, Mach number, and speed of sound (V = M a): dV/V = dM/M + da/a. We can now re-express da in terms of dT: from a² = γRT, da/a = dT/(2T). Substituting this into the velocity relation yields dV/V = dM/M + dT/(2T). We can now re-express dT in terms of dV: from the adiabatic energy relation, c_p dT = −V dV, so dT/T = −(γ − 1) M² (dV/V). By substituting this into the previous relation, we can create an expression completely in terms of dV/V and dM/M. Performing this substitution and solving for dV/V yields dV/V = (dM/M)/(1 + ((γ − 1)/2) M²). Finally, this expression for dV/V in terms of dM/M may be substituted directly into the mass-velocity relation, giving dG/G = ((1 − M²)/(1 + ((γ − 1)/2) M²)) (dM/M). This is the mass-Mach relationship for mass injection into a steady, adiabatic, frictionless, constant area flow of calorically perfect gas. See Also Fanno flow Rayleigh flow Compressible flow Choked flow Inviscid flow Adiabatic process Gas dynamics References Fluid mechanics Fluid dynamics Aerodynamics Thermodynamic processes
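The derived mass-Mach relation can be integrated numerically to exhibit the choking behavior described in the Theory section. The sketch below is our own illustration, assuming γ = 1.4 and an arbitrary step in dG/G; it shows a subsonic and a supersonic inlet Mach number both being driven toward M = 1 by mass addition.

```python
# Integrate the mass-Mach relation,
#   dG/G = (1 - M^2) / (1 + (gamma - 1)/2 * M^2) * dM/M,
# by applying small fractional mass additions dG/G and solving for dM.
gamma = 1.4
step = 1e-5                       # fractional mass addition per iteration

for M0 in (0.3, 2.0):
    M, total_dG_over_G = M0, 0.0
    while abs(M - 1.0) > 1e-3 and total_dG_over_G < 2.0:
        # invert the relation: dM = M (1 + (gamma-1)/2 M^2) / (1 - M^2) dG/G
        M += M * (1 + 0.5 * (gamma - 1) * M * M) / (1 - M * M) * step
        total_dG_over_G += step
    print(f"inlet M = {M0}: reached M = {M:.3f} "
          f"after integrated dG/G = {total_dG_over_G:.3f}")
```

Both cases terminate near M = 1, reflecting the fact that mass addition, like friction in Fanno flow or heating in Rayleigh flow, pushes the flow toward the sonic condition.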
Mass injection flow
[ "Physics", "Chemistry", "Engineering" ]
874
[ "Chemical engineering", "Thermodynamic processes", "Aerodynamics", "Civil engineering", "Thermodynamics", "Aerospace engineering", "Piping", "Fluid mechanics", "Fluid dynamics" ]
69,378,275
https://en.wikipedia.org/wiki/Flag%20algebra
Flag algebras are an important computational tool in the field of graph theory which have a wide range of applications in homomorphism density and related topics. Roughly, they formalize the notion of adding and multiplying homomorphism densities and set up a framework to solve graph homomorphism inequalities with computers by reducing them to semidefinite programming problems. Originally introduced by Alexander Razborov in a 2007 paper, the method has since been used to solve numerous difficult, previously unresolved graph-theoretic questions. These include the question regarding the region of feasible (edge density, triangle density) pairs and the maximum number of pentagons in triangle-free graphs. Motivation The motivation for the theory of flag algebras is credited to John Adrian Bondy and his work on the Caccetta-Haggkvist conjecture, where he illustrated his main ideas via a graph-homomorphism-flavored proof of Mantel's Theorem. This proof is an adaptation of the traditional proof of Mantel via double counting, except phrased in terms of graph homomorphism densities, and it shows how much information can be encoded with just density relationships. Theorem (Mantel): The edge density in a triangle-free graph G is at most 1/2. In other words, d(K2; G) ≤ 1/2, where d(H; G) denotes the induced density of a subgraph H in G. As the graph is triangle-free, any 3 vertices in G can either form an independent set, a single induced edge e (an edge together with an isolated vertex), or a path of length 2, P2. Double counting edges over all 3-vertex subsets gives: d(K2; G) = (1/3) d(e; G) + (2/3) d(P2; G). Intuitively, one also expects d(P2; G) to be about 3 d(K2; G)², since a P2 just consists of two K2s connected together, and there are 3 ways to label the common vertex among a set of 3 points. In fact, this can be rigorously proven by double counting the number of induced P2s. Letting n denote the number of vertices of G and v a uniformly random vertex, we have: d(P2; G) = (3/n) Σ_v d(P2*; G^v) = 3 E_v[d(P2*; G^v)], where P2* is the path of length 2 with its middle vertex labeled, and d(P2*; G^v) represents the density of P2*s subject to the constraint that the labeled vertex is v, so that a P2 is counted as a proper induced subgraph only when its middle vertex coincides with v. Now, note that d(P2*; G^v) = d(K2*; G^v)² + o(1), where K2* is an edge with one labeled endpoint, since the probability of choosing two K2*s where the unlabeled vertices coincide is small (to be rigorous, a limit as n → ∞ should be taken, so d acts as a limit function on a sequence of larger and larger graphs G_n. This idea will be important for the actual definition of flag algebras.) To finish, apply the Cauchy–Schwarz inequality to get E_v[d(K2*; G^v)²] ≥ E_v[d(K2*; G^v)]² = d(K2; G)². Plugging this back into our original relation proves what was hypothesized intuitively: d(P2; G) ≥ 3 d(K2; G)² − o(1). Finally, note that d(e; G) ≥ 0, so d(K2; G) ≥ (2/3) d(P2; G) ≥ 2 d(K2; G)² − o(1), which gives d(K2; G) ≤ 1/2 + o(1). The important ideas from this proof which will be generalized in the theory of flag algebras are substitutions such as replacing d(P2*; G^v) with d(K2*; G^v)², the use of labeled graph densities, considering only the "limit case" of the densities, and applying Cauchy–Schwarz at the end to get a meaningful result. Definition Fix a collection F of forbidden subgraphs and consider the set of F-free graphs. Now, define a type of size k to be an F-free graph σ on the labeled vertices 1, 2, ..., k. The type of size 0 is typically denoted as ∅. First, we define a σ-flag, a partially labeled graph which will be crucial for the theory of flag algebras: Definition: A σ-flag is a pair (G, θ) where G is an underlying, unlabeled, F-free graph, while θ defines a labeled embedding of the type σ onto the vertices of G. Denote the set of σ-flags by ℱ^σ and the set of σ-flags of size n by ℱ^σ_n. As an example, K2* from the proof of Mantel's Theorem above is a σ-flag where σ is the type of size 1 corresponding to a single vertex.
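The two density relations used in the proof above can be checked empirically. The sketch below is our own illustration: it builds a random balanced bipartite graph (hence triangle-free, and close to the extremal example for Mantel's Theorem), verifies the exact double-counting identity, and shows that d(P2; G) ≈ 3 d(K2; G)² up to the O(1/n) error hidden in the o(1) terms.

```python
import itertools
import random

# Empirical check of:  d(K2) = (1/3) d(e) + (2/3) d(P2)   (exact), and
#                      d(P2) ~ 3 d(K2)^2                  (up to O(1/n)).
random.seed(0)
n = 100
adj = [[False] * n for _ in range(n)]
for u in range(n // 2):                       # bipartite, so triangle-free
    for v in range(n // 2, n):
        if random.random() < 0.7:
            adj[u][v] = adj[v][u] = True

edges = sum(adj[u][v] for u in range(n) for v in range(u + 1, n))
d_K2 = edges / (n * (n - 1) / 2)

counts = [0, 0, 0]                            # a 3-set spans 0, 1 or 2 edges
for a, b, c in itertools.combinations(range(n), 3):
    counts[adj[a][b] + adj[a][c] + adj[b][c]] += 1
total = sum(counts)
d_e, d_P2 = counts[1] / total, counts[2] / total

print(f"d(K2) = {d_K2:.4f}  vs  (1/3)d(e) + (2/3)d(P2) = {d_e/3 + 2*d_P2/3:.4f}")
print(f"d(P2) = {d_P2:.4f}  vs  3 d(K2)^2 = {3 * d_K2 ** 2:.4f}")
print(f"Mantel bound d(K2) <= 1/2 holds: {d_K2 <= 0.5}")
```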
For σ-flags F_1, ..., F_t and a σ-flag G satisfying |G| − |σ| ≥ Σ_i (|F_i| − |σ|), we can define the density of the σ-flags in the underlying graph in the following way: Definition: The density p(F_1, ..., F_t; G) of the σ-flags F_1, ..., F_t in G is defined to be the probability of successfully randomly embedding F_1, ..., F_t into G such that they are nonintersecting outside of σ and are all labeled in the exact same way as G is on σ. More precisely, choose pairwise disjoint sets U_1, ..., U_t of unlabeled vertices of G at random, with |U_i| = |F_i| − |σ|, and define p(F_1, ..., F_t; G) to be the probability that the σ-flag induced on U_i together with the labeled vertices is isomorphic to F_i for all i. Note that, when embedding a σ-flag F into a σ-flag G, this can be done by first embedding F into a σ-flag of size s and then embedding that flag into G, which gives the formula: p(F; G) = Σ_{H ∈ ℱ^σ_s} p(F; H) p(H; G). Extending this to sets of σ-flags gives the Chain Rule: Theorem (Chain Rule): If F_1, ..., F_t are σ-flags and s is a natural number such that F_1, ..., F_t fit in G, F_1, ..., F_k fit in a σ-flag of size s, and a σ-flag of size s combined with F_{k+1}, ..., F_t fits in G, then p(F_1, ..., F_t; G) = Σ_{H ∈ ℱ^σ_s} p(F_1, ..., F_k; H) p(H, F_{k+1}, ..., F_t; G). Recall that the previous proof of Mantel's Theorem involved linear combinations of terms of the form d(H; G). The relevant ideas were slightly imprecise in letting the number of vertices tend to infinity, but explicitly there is a sequence of graphs (G_n) such that p(F; G_n) converges to some φ(F) for all F, where φ is called a limit functional. Thus, all references to densities in the limit really refer to the limit functional φ. Now, graph homomorphism inequalities can be written as linear combinations of φ(F) with different Fs, but it would be convenient to express them as a single term. This motivates defining ℝℱ^σ, the set of formal linear combinations of σ-flags over ℝ, and now φ can be extended to a linear function over ℝℱ^σ. However, using the full space ℝℱ^σ is wasteful when investigating just limit functionals, since there exist nontrivial relations between densities of certain σ-flags. In particular, the Chain Rule shows that φ(F − Σ_{H ∈ ℱ^σ_s} p(F; H) H) = 0 is always true. Rather than dealing with all of these elements of the kernel, denote the set of expressions of the above form (i.e. those obtained from the Chain Rule with a single σ-flag) by K^σ and quotient them out in our final analysis. These ideas combine to form the definition of a flag algebra: Definition (Flag Algebras): A flag algebra is defined on the space ℝℱ^σ/K^σ of linear combinations of σ-flags equipped with the bilinear operator F_1 · F_2 = Σ_{H ∈ ℱ^σ_n} p(F_1, F_2; H) H for F_1, F_2 ∈ ℱ^σ and any natural n such that F_1 and F_2 fit in a σ-flag of size n, extending the operator linearly to ℝℱ^σ/K^σ. It remains to check that the choice of n does not matter for a pair F_1, F_2 provided it is large enough (this can be proven with the Chain Rule), as well as that if f ∈ K^σ then f · g ∈ K^σ, meaning that the operator respects the quotient and thus forms a well-defined algebra on the desired space. One important result of this definition for the operator is that multiplication is respected on limit functionals. In particular, for a limit functional φ, the identity φ(f · g) = φ(f) φ(g) holds true. For example, it was shown that d(P2*; G^v) ≈ d(K2*; G^v)² in our proof for Mantel's Theorem, and this result is just a corollary of this statement. More generally, the fact that φ is multiplicative means that all limit functionals are algebra homomorphisms between ℝℱ^σ/K^σ and ℝ. The downward operator The definition above provides a framework for dealing with σ-flags, which are partially labeled graphs. However, most of the time, unlabeled graphs, or ∅-flags, are of greatest interest. To get from the former to the latter, define the downward operator. The downward operator is defined in the most natural way: given a σ-flag F, let F̂ be the ∅-flag resulting from forgetting the labels assigned to σ. Now, to define a natural mapping between σ-flags and unlabeled graphs, let q_σ(F) be the probability that a random injective map from the vertices of σ into the vertices of F̂ yields a labeled copy of σ making the result isomorphic to F, and define ⟦F⟧ = q_σ(F) F̂. Extending ⟦·⟧ linearly to ℝℱ^σ gives a valid linear map which sends combinations of σ-flags to combinations of unlabeled ones.
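The unlabeled density and the single-flag chain rule defined above are easy to compute directly for small graphs. The following sketch is our own implementation, not code from the literature; it represents a graph as (number of vertices, edge list) and verifies one instance of the chain rule with s = 3, taking G to be the 5-cycle.

```python
import itertools
from math import comb

def induces(G_set, verts, F):
    """True if the vertex tuple `verts` induces a copy of F in G."""
    n_F, F_edges = F
    actual = {frozenset(e) for e in itertools.combinations(verts, 2)
              if frozenset(e) in G_set}
    return any({frozenset((verts[p[a]], verts[p[b]])) for a, b in F_edges} == actual
               for p in itertools.permutations(range(n_F)))

def density(F, G):
    """p(F; G): probability that |F| random vertices of G induce F."""
    n_G, G_edges = G
    G_set = {frozenset(e) for e in G_edges}
    hits = sum(induces(G_set, vs, F)
               for vs in itertools.combinations(range(n_G), F[0]))
    return hits / comb(n_G, F[0])

C5 = (5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])   # triangle-free 5-cycle
K2 = (2, [(0, 1)])
three_vertex = [(3, []), (3, [(0, 1)]),               # empty, single edge,
                (3, [(0, 1), (1, 2)]),                # path P2,
                (3, [(0, 1), (0, 2), (1, 2)])]        # triangle K3

lhs = density(K2, C5)
rhs = sum(density(K2, H) * density(H, C5) for H in three_vertex)
print(lhs, rhs)   # both 0.5: p(K2; C5) = sum_H p(K2; H) p(H; C5)
```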
The most important result regarding ⟦·⟧ is its averaging property. In particular, fix a σ-flag F and an unlabeled graph G with |F| ≤ |G|; then choosing an embedding θ of σ in G at random defines a random variable G^θ, the σ-flag obtained from G by labeling the image of θ. It can be shown that p(⟦F⟧; G) = E_θ[p(F; G^θ)], so the downward operator averages the labeled densities over all placements of the labels. Optimization with flag algebras All limit functionals φ are algebra homomorphisms ℝℱ^σ/K^σ → ℝ. Furthermore, by definition, φ(F) ≥ 0 for any σ-flag F, since φ(F) represents a density limit. Thus, say that a homomorphism is positive if and only if φ(F) ≥ 0 for all σ-flags F, and let Hom⁺ denote the set of positive homomorphisms. One can show that the set of limit functionals is exactly the set of positive homomorphisms Hom⁺, so it suffices to understand the latter definition of the set. In order for a linear combination f of flags to yield a valid graph homomorphism inequality, it needs to be nonnegative over all possible limit functionals, which will then imply that it is true for all graphs. With this in mind, define the semantic cone of type σ as the set C^σ = {f ∈ ℝℱ^σ/K^σ : φ(f) ≥ 0 for all φ ∈ Hom⁺}. Once again, σ = ∅ is the case of most interest, which corresponds to the case of unlabeled graphs. However, the downward operator sends σ-flags to ∅-flags, and it can be shown that the image of C^σ under ⟦·⟧ is a subset of C^∅, meaning that any results on the type-σ semantic cone readily generalize to unlabeled graphs as well. Just by naively manipulating elements of the flag algebra, numerous elements of the semantic cone can be generated. For example, since positive homomorphisms are nonnegative on σ-flags, any conical combination of σ-flags will yield an element of C^σ. Perhaps more non-trivially, any conical combination of squares of elements will also yield an element of the semantic cone. Though one can find squares of flags which sum to nontrivial results manually, it is often simpler to automate the process. In particular, it is possible to adapt the ideas in sum-of-squares optimization for polynomials to flag algebras. Define the degree of a vector f ∈ ℝℱ^σ to be the size of the largest flag with a nonzero coefficient in the expansion of f, and let the degree of an element of the quotient be the minimum degree of a vector over all choices of representative. Also, note that by the Chain Rule any flag F of size m can be rewritten as Σ_{H ∈ ℱ^σ_n} p(F; H) H for any n ≥ m; this canonical embedding sends F to itself in the quotient. These definitions give rise to the following flag-algebra analogue of sum-of-squares decompositions: Theorem: Given f of degree at most 2d, there exist g_1, ..., g_m such that f ≡ Σ_i g_i² (mod K^σ) for some m if and only if there is a positive semidefinite matrix Q, indexed by ℱ^σ_d, such that f ≡ v^T Q v, where v is the vector whose entries are the flags in ℱ^σ_d. With this theorem, graph homomorphism problems can now be relaxed into semidefinite programming ones which can be solved via computer. For example, Mantel's Theorem can be rephrased as finding the smallest c such that c · 1 − K2 ∈ C^∅, where 1 denotes the unit of the algebra. As C^∅ is poorly understood, it is difficult to make progress on the question in this form, but note that conic combinations of ∅-flags and squares of vectors lie in C^∅, so instead take a semidefinite relaxation. In particular, minimize c under the constraint that c · 1 − K2 = s + ⟦v^T Q v⟧, where s is a conic combination of ∅-flags and Q is positive semi-definite. This new optimization problem can be transformed into a semidefinite-programming problem which is then solvable with standard algorithms. Generalizations The method of flag algebras readily generalizes to numerous graph-like constructs. As Razborov wrote in his original paper, flags can be described with finite model theory instead. Instead of graphs, models of some nondegenerate universal first-order theory with equality, in a finite relational signature with only predicate symbols, can be used. A model M, which replaces our previous notion of a graph, has a ground set V(M), whose elements are called vertices.
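The sum-of-squares theorem above has a simple computational core: any positive semidefinite Q factors as Q = L Lᵀ, which rewrites the quadratic form vᵀQv as an explicit sum of squares. The sketch below is our own toy illustration with made-up numbers; in an actual flag-algebra computation, v would hold the flags of ℱ^σ_d and the squares would be squares of algebra elements.

```python
import numpy as np

# A PSD certificate Q turns v^T Q v into a sum of squares:
# Q = L L^T  =>  v^T Q v = sum_i (L[:, i] . v)^2 >= 0.
Q = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])              # assumed certificate matrix
assert np.linalg.eigvalsh(Q).min() >= 0         # positive (semi)definite

L = np.linalg.cholesky(Q)                       # Q = L @ L.T
v = np.array([0.3, 0.5, 0.2])                   # hypothetical flag densities

quadratic_form = v @ Q @ v
sum_of_squares = sum((L[:, i] @ v) ** 2 for i in range(len(v)))
print(quadratic_form, sum_of_squares)           # equal, and nonnegative
```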
Now, defining sub-models and model embeddings in an analogous way to subgraphs and graph embeddings, all of the definitions and theorems above can be nearly directly translated into the language of model theory. The fact that the theory of flag algebras generalizes well means that it can be used not only to solve problems in simple graphs, but also similar constructs such as, but not limited to, directed graphs and hypergraphs. References Graph theory
Flag algebra
[ "Mathematics" ]
2,281
[ "Discrete mathematics", "Mathematical relations", "Graph theory", "Combinatorics" ]
73,817,188
https://en.wikipedia.org/wiki/Global%20Human%20Settlement%20Layer
The Global Human Settlement Layer (GHSL) is a project from the European Commission that creates global geographical data about the evolution of human habitation on Earth, in the form of population density maps, built-up maps, and settlement maps. This information is produced using new geographic data-mining tools together with knowledge and analytics based on empirical data. The GHSL processing framework uses a range of data, including census data, archives of fine-scale global satellite imagery, and voluntarily provided geographic information. Data is processed automatically to produce analytics and knowledge that methodically and objectively describe the existence of people and developed infrastructure. The GHSL maps human presence on Earth using information spanning the period from 1975 up to 2030. Background In 2010–2011, the JRC Directorate E "Space, Security & Migration" developed the initial version of the GHSL concept, which was used to create the Atlases of the Human Planet. The JRC is currently supporting GHSL activities through its scientific working plans and is collaborating with the Directorate-General for Regional and Urban Policy (DG REGIO) and the Directorate-General for Defence Industry and Space (DG DEFIS) to develop a routine and operational monitoring system. References External links Official Website World Population Density Interactive Map of urban settlements stretching from Washington to Boston GHSL at developers.google.com Satellite imagery Global Human Settlement Layer Global Human Settlement Layer European Commission projects Population ecology Demography Data mapping
Global Human Settlement Layer
[ "Technology", "Engineering", "Environmental_science" ]
294
[ "Data mapping", "Data engineering", "Data", "Demography", "Geographic data and information", "Environmental social science" ]
73,820,067
https://en.wikipedia.org/wiki/St.%20Lucia%20Electricity%20Services
The St. Lucia Electricity Services Limited (LUCELEC) is an electric utility company of Saint Lucia. History The company was established in 1964. The company went public on 11 August 1994. Finance In 2022, the company's revenue reached EC$134,189,000. Power stations Cul De Sac Power Station References External links 1964 establishments in Saint Lucia Companies of Saint Lucia Electric power companies Electric power in Saint Lucia Public utilities established in 1964
St. Lucia Electricity Services
[ "Engineering" ]
90
[ "Electrical engineering organizations", "Electric power companies" ]
73,829,782
https://en.wikipedia.org/wiki/John%20J.%20Monaghan
John J. Monaghan is a British mass spectrometrist and former editor of Rapid Communications in Mass Spectrometry. Early life and career Monaghan attended the University of Glasgow, where he completed his undergraduate degree in chemistry. He then undertook a PhD with Durward Cruickshank involving the study of gas-phase electron diffraction. After completing his studies he moved to Imperial Chemical Industries' Blackley site, working under the direction of mass spectrometrist John Beynon and focusing on the analysis of textile dyestuffs. He was an early adopter and enthusiast of the fast atom bombardment technique developed at the nearby UMIST by Mickey Barber and Don Sedgwick. Other interests Monaghan is an active member of the British Mass Spectrometry Society (BMSS) and has been given life membership for making a significant contribution to the practice of mass spectrometry in the UK. In 2003 the BMSS made Monaghan its first President, with responsibility for promoting the work done by the Society, particularly on the international stage and beyond the core MS community. Monaghan has also been a member and president of the Peterloo Speakers Club in Manchester. He is also a keen cricketer and football referee. References Living people Year of birth missing (living people) British chemists Mass spectrometrists People associated with the University of Manchester People associated with the University of Glasgow Alumni of the University of Glasgow
John J. Monaghan
[ "Physics", "Chemistry" ]
288
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
73,829,878
https://en.wikipedia.org/wiki/Spherical%20collapse%20model
The spherical collapse model describes the evolution of nearly homogeneous matter in the early Universe into collapsed, virialized structures: dark matter halos. This model assumes that halos are spherical and dominated by gravity, which leads to an analytical solution for several of the halos' properties, such as density and radius over time. The framework for spherical collapse was first developed to describe the infall of matter into clusters of galaxies. At this time, in the early 1970s, astronomical evidence for dark matter was still being collected, and it was believed that the Universe was dominated by ordinary, visible matter. However, it is now thought that dark matter is the dominating species of matter. Derivation and key equations The simplest halo formation scenario involves taking a sufficiently overdense spherical patch, which we call a proto-halo (e.g., Desjacques et al. 2018), of the early Universe and tracking its evolution under the effect of its self-gravity. Once the proto-halo has collapsed and virialized, it becomes a halo. Since the matter outside this sphere is spherically symmetric, we can apply Newton's shell theorem or Birkhoff's theorem (for a more general description), so that external forces average to zero and we can treat the proto-halo as isolated from the rest of the Universe. The proto-halo has a density ρ, mass M, and radius R (given in physical coordinates) which are related by M = (4π/3) ρ R³. To model the collapse of the spherical region, we can either use Newton's law or the second Friedmann equation, giving d²R/dt² = −GM/R². The effect of the accelerated expansion of the Universe can be included if desired, but it is a subdominant effect. The above equation admits the explicit solution R(t) = R_max Q(1 − t/t_ff; 3/2, 1/2), where R_max is the maximum radius, assumed to occur at time t = 0, and Q(x; a, b) is the quantile function of the Beta distribution, also known as the inverse function of the regularized incomplete beta function I_x(a, b). The time t_ff = (π/2) (R_max³/(2GM))^(1/2) = (3π/(32 G ρ_max))^(1/2) is the free-fall time, where ρ_max is the density of the sphere at maximum radius. Long before the derivation of this explicit solution, the spherical collapse equation has been known to admit a parametric solution in terms of a parameter θ: R = A (1 − cos θ), t = B (θ − sin θ). The origin of time, t = 0, now occurs at a vanishing radius, and the time increases with increasing θ. The coefficients A and B are given by the energy contents of the sphere and satisfy A³ = G M B² (cf. equation 5.89 in Dodelson et al.). Initially the sphere expands at the rate of the Universe (θ ≪ 1), but then it slows down, turns around (θ = π), and ultimately collapses (θ = 2π). If we split the density into a background and perturbation by ρ = ρ̄ (1 + δ), we can solve for the fully nonlinear perturbation: 1 + δ = (9/2) (θ − sin θ)²/(1 − cos θ)³. Initially δ ≪ 1, at the turn-around point δ = 9π²/16 − 1 ≈ 4.55, and at collapse δ → ∞. Alternatively, if one considers linear perturbations, or equivalently small times θ ≪ 1, the above equation gives us an expression for linear perturbations: δ_lin = (3/20) (6t/B)^(2/3). We can then extrapolate the linear perturbation into nonlinear regimes (more on the usefulness of this below). At turn-around δ_lin = (3/20) (6π)^(2/3) ≈ 1.06, and at collapse we get the spherical collapse threshold δ_c = (3/20) (12π)^(2/3) ≈ 1.686. Although the halo does not physically have an overdensity of 1.69 at collapse, the above collapse threshold is nevertheless useful. It tells us that if we model the initial (linear) density field and extrapolate into the future, wherever δ_lin ≥ δ_c ≈ 1.69 can be thought of as a collapsing region that will form a halo. See also Press–Schechter formalism – A mathematical model used to predict the number of dark matter halos of a certain mass. References Galaxies Dark matter
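The parametric solution and the thresholds quoted above are easy to evaluate numerically; the constant B cancels out of every ratio, so no physical scales need to be assumed. The sketch below is our own illustration of how the nonlinear overdensity and its linear extrapolation behave at the three milestone values of θ.

```python
import numpy as np

# Nonlinear overdensity: 1 + delta = (9/2)(theta - sin theta)^2 / (1 - cos theta)^3
# Linear extrapolation:  delta_lin = (3/20)(6 t / B)^(2/3), with t = B(theta - sin theta)
for theta, label in [(1e-3, "early times"),
                     (np.pi, "turn-around"),
                     (2 * np.pi - 1e-3, "near collapse")]:
    s = theta - np.sin(theta)
    delta = 4.5 * s ** 2 / (1 - np.cos(theta)) ** 3 - 1
    delta_lin = 0.15 * (6 * s) ** (2 / 3)
    print(f"{label:>13}: delta = {delta:12.4g}   delta_lin = {delta_lin:.4g}")
# At turn-around: delta = 9 pi^2 / 16 - 1 ~ 4.55 while delta_lin ~ 1.06;
# near collapse delta diverges while delta_lin -> (3/20)(12 pi)^(2/3) ~ 1.686.
```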
Spherical collapse model
[ "Physics", "Astronomy" ]
703
[ "Dark matter", "Unsolved problems in astronomy", "Concepts in astronomy", "Galaxies", "Unsolved problems in physics", "Exotic matter", "Astronomical objects", "Physics beyond the Standard Model", "Matter" ]
65,231,607
https://en.wikipedia.org/wiki/MethaneSAT
MethaneSAT is an American-New Zealand space mission launched in 2024 aboard SpaceX's Transporter 10 rideshare mission. It is an Earth observation satellite that monitors and studies global methane emissions in order to combat climate change. The spacecraft carries a high-performance spectrometer methane-sensing system, which allows it to take high-resolution measurements of global methane emissions from roughly 50 major regions across Earth. The mission is jointly funded and operated by the Environmental Defense Fund (EDF), an American non-governmental organization, and the New Zealand Space Agency. It marks New Zealand's first space science mission. The Bezos Earth Fund, founded by Jeff Bezos, announced a US$100 million grant to EDF to support critical work, including completion and launch of MethaneSAT. Sara Mikaloff-Fletcher, a National Institute of Water and Atmospheric Research (NIWA) carbon cycle expert, has been named as the mission's lead scientist. History The MethaneSAT program was started by MethaneSAT, LLC, a wholly owned subsidiary of the Environmental Defense Fund (EDF), with the goal of providing global high-resolution data regarding methane emissions from oil and gas facilities. In January 2020, MethaneSAT announced that the spacecraft would be built using the Blue Canyon Technologies X-SAT satellite bus, with the spacecraft's methane-sensing spectrometer being provided by Ball Aerospace & Technologies. In November 2019, the New Zealand Space Agency (NZSA) joined the program, committing NZ$26 million to it. Rocket Lab will build and operate the mission control center for the flight in Auckland, New Zealand. The NZSA will also take part in launch operations and may contribute to the scientific payload. Ball Aerospace and Blue Canyon Technologies completed an intensive technology review of their respective contributions to the mission in early 2020. On 13 January 2021, the nonprofit MethaneSAT LLC announced that it had signed a contract with SpaceX to deliver the 350 kg MethaneSAT into orbit aboard a Falcon 9 Block 5 launch vehicle, with a launch window opening on 1 October 2022. By November 2022, the launch had been delayed to NET October 2023 by supply chain issues during the COVID-19 pandemic. The satellite launched as part of Transporter 10 on 4 March 2024. Results Early reports using MethaneAIR, the MethaneSAT instrument package mounted on a jet aircraft, showed that oil and gas producers in the U.S. are emitting methane into the atmosphere at over four times the rates estimated by the Environmental Protection Agency. The largest emitter is the Permian Basin in Texas, which is emitting 256 tons of methane every day, accounting for 1.9% of total gas production. This is in addition to the CO2 produced by extensive gas flaring in the region. Across the continental U.S., the aggregate methane loss rate across 12 basins amounts to 1.6% of gas produced, which is 8 times higher than the 0.2% emissions intensity target adopted by the Oil and Gas Climate Initiative. References Spacecraft launched in 2024 Methane Earth observation satellites Satellites of New Zealand Satellites monitoring GHG emissions
MethaneSAT
[ "Chemistry" ]
626
[ "Greenhouse gases", "Methane" ]
65,239,613
https://en.wikipedia.org/wiki/Copper%20%2864Cu%29%20oxodotreotide
Copper (64Cu) oxodotreotide or Copper Cu 64 dotatate, sold under the brand name Detectnet, is a radioactive diagnostic agent indicated for use with positron emission tomography (PET) for localization of somatostatin receptor positive neuroendocrine tumors (NETs) in adults. Common side effects include nausea, vomiting and flushing. It was approved for medical use in the United States in September 2020. History The U.S. Food and Drug Administration (FDA) approved copper 64Cu dotatate based on data from two trials that evaluated 175 adults. Trial 1 evaluated adults, some of whom had known or suspected NETs and some of whom were healthy volunteers. The trial was conducted at one site in the United States (Houston, TX). Both groups received copper 64Cu dotatate and underwent PET scan imaging. Trial 2 data came from a literature-reported trial of 112 adults, all of whom had a history of NETs and underwent PET scan imaging with copper 64Cu dotatate. The trial was conducted at one site in Denmark. In both trials, copper 64Cu dotatate images were compared to either biopsy results or other images taken by different techniques to detect the sites of a tumor. The images were read as either positive or negative for presence of NETs by three independent image readers who did not know participant clinical information. See also Copper-64 DOTA-TATE References External links Radiopharmaceuticals Orphan drugs DOTA (chelator) derivatives Copper complexes
Copper (64Cu) oxodotreotide
[ "Chemistry" ]
328
[ "Chemicals in medicine", "Radiopharmaceuticals", "Medicinal radiochemistry" ]