id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
73,998,007 | https://en.wikipedia.org/wiki/Oxidative%20coupling%20of%20phenols | Oxidative coupling of phenols is a chemical reaction wherein two phenolic compounds are coupled via an oxidative process. Oxidative phenol couplings are often catalyzed by transition metal complexes including V, Cr, Mn, Cu, Fe, among others. Such reactions often form C–C, or C–O bonds between the coupling partners and can be employed as either homo- or cross-couplings.
Mechanism
A representative example is the reaction of phenol with a solution of vanadium tetrachloride, which gives about a 60% yield of three isomeric dihydroxybiphenyl compounds. The isomer ratio and yields are unaffected by the reagent/substrate ratio. Vanadium tetrachloride is known to effect one-electron oxidations, which is invoked in this conversion.
Oxidative phenol couplings can occur through either inner sphere or outer sphere processes. In inner sphere processes, the phenolic substrate coordinates to the metal center to give a phenoxide complex. Oxidation to the phenoxide occurs via electron transfer or hydrogen atom abstraction. The resulting reactive intermediate can engage in downstream chemical processes which can occur via either coordinated (inner-sphere) or non-coordinated coupling partners.
Radical-radical reactions are simple to envision but unlikely, since they require the coexistence of two long-lived radicals. Instead, the phenol or phenoxy radical adds to another phenol or phenoxide. The initial C-C bond-forming process is followed by hydrogen atom abstraction and tautomerization.
Couplings where metal catalysts are not involved generally proceed via the radical-phenol mechanism.
Although select examples of unsymmetrical homocouplings are known, they are notoriously challenging to design and are often arrived at empirically.
Enantioselective phenol oxidative couplings are not yet well-established or general; however, there exist reports leveraging asymmetric vanadium catalysts to enantioselectively homocouple phenols. In contrast, much progress has been made in asymmetric 2-naphthol couplings using Ru, Cu, V, and Fe catalysts, which have had a large impact on the development of BINAP-type ligands used in asymmetric catalysis.
Scope
Lignin
Lignin, a polyphenol that is found in most plants, is a very abundant form of biomass that arises, in part, by oxidative coupling of phenols. Lignins are particularly important in the formation of cell walls, especially in wood and bark, because they lend rigidity and do not rot easily. Chemically, lignins are polymers made by cross-linking phenolic precursors.
Organic synthesis
The first example of an oxidative phenol coupling in synthetic chemistry can be traced to Julius Löwe’s 1868 synthesis of ellagic acid, accomplished by heating gallic acid with arsenic acid.
In the synthesis of complex organic compounds, oxidative phenol couplings are sometimes employed. These reactions are attractive for their atom economy because they avoid the pre-functionalized starting materials often required in traditional redox-neutral cross-couplings. Oxidative phenol couplings, however, often suffer from over-oxidation, especially since the intended coupled product is more oxidizable (has a lower oxidation potential) than the starting material. In such cases, the catalyst can be quenched or poisoned by engaging in off-cycle redox processes with the product. Additionally, the product may oxidize further, giving way to higher-order oligomers.
Selectivity issues may arise during oxidative phenol couplings between C–C coupled and C–O coupled products. Moreover, stereoselectivity is an important consideration if the resulting biphenol compound displays axial chirality or atropoisomerism. Selectivity between homo- and hetero-coupled products must be considered, and can often be addressed through transition-metal catalysis.
Intramolecular phenol couplings
Intramolecular oxidative phenol couplings have long been known. The most well-studied examples of such transformations are those yielding spirocyclic phenol-dienone coupled products. The coupling partners in an intramolecular coupling must approach in a near-parallel arrangement to allow for orbital overlap; these stringent geometric restraints on pre-cyclized compounds often render the process sluggish, if feasible at all.
C–O couplings
Laccases often effect oxidative couplings, sometimes forming C-O linkages.
Selective C–O coupling of phenols is represented by only a few examples in synthetic chemistry. In many cases, selective C–O coupling can only be achieved if all ortho and para positions on the arene are blocked. Poor C–O coupling selectivity is likely due to the lack of radical spin density on oxygen after phenol oxidation, resulting in kinetic trapping of C–C coupling products.
Nonphenolic arene couplings
Oxidative couplings have also been studied between phenols and nonphenolic compounds including anilines, beta-ketoesters/malonates/malononitriles, electron-rich arenes, olefins, and other functional groups.
References
Coupling reactions
Biphenyls
Organic oxidation reactions | Oxidative coupling of phenols | [
"Chemistry"
] | 1,127 | [
"Coupling reactions",
"Organic oxidation reactions",
"Organic reactions"
] |
74,004,225 | https://en.wikipedia.org/wiki/T%20centre | The T centre is a radiation damage centre in silicon composed of a carbon-carbon pair (C-C) sharing a substitutional site of the silicon lattice. Additionally, one of the substitutional carbon atoms is bonded with a hydrogen atom while the other carbon contains an unpaired electron in the ground state of a dangling bond. Much like the nitrogen-vacancy centres in diamond, the T centre contains spin-dependent optical transitions addressable through photoluminescence. These spin-dependent transitions, however, emit light within the technologically efficient telecommunication O-band. Consequently, the T centre is an intriguing candidate for quantum information technologies, with development of integrated quantum devices benefiting from techniques within the silicon photonics community.
Structure
The T centre is a radiation damage centre in silicon. It contains a substitutional carbon-carbon pair terminated by an additional hydrogen atom within the lattice. This structure also contains a dangling bond on the other substitutional carbon.
Historically, the structure of the T centre was uncovered using spectroscopic measurements. The presence of carbon as the main constituent within the lattice was hypothesized when a shift in the defect's zero phonon line (ZPL) was observed in samples enriched with 13C. Similarly, the presence of hydrogen was determined using a shift in the ZPL in a deuterium-diffused sample. The splitting of the local vibrational mode (LVM) lines from 2 into 4 upon 13C enrichment suggested the presence of a second carbon atom. The suggested formation mechanism is, therefore, the capture of an interstitial C-H pair onto a substitutional carbon with a dangling bond, as predicted by ab initio calculations.
External field perturbation measurements are used to determine the axial symmetry and orientation of luminescent transitions. Stress-dependent spectral line studies had previously suggested that the defect has rhombic I (C2v) symmetry; however, it was later shown to have monoclinic I (C1h) symmetry. Consequently, the defect is expected to have 24 orientations, which form 12 optically resolvable orientation pairs under a magnetic field. These have been studied using photoluminescence spectroscopy.
Formation
The current formation model for the T centre involves an interstitial carbon capturing a hydrogen atom before migrating to a substitutional site with another carbon during heat treatment between 350 and 600 °C. T centres have been observed in silicon semiconductors grown using the float-zone and Czochralski (CZ) techniques, as well as in Silicon-On-Insulator devices. They are produced by irradiating the sample followed by a thermal annealing process. It has been shown that plasma etching, as well as irradiation with either neutrons or electrons, may produce the desired radiation centre. Hydrogen may be introduced through water vapour or in its gaseous state, or it may already be present within the sample. An excess of hydrogen may, however, fill the dangling bond and render the radiation damage centre optically inert. Alternatively, rather than irradiating the sample and treating it with a subsequent thermal annealing process, T centres may be developed using only a thermal treatment in carbon-rich CZ-grown silicon.
Optical properties
The T centre's zero-phonon line photoluminescence feature is near 935 meV. This represents a transition from an unpaired electron in the ground state to a bound exciton within the first excited state. The 1.8 meV-split doublet is the result of two states within the same defect.
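As a quick consistency check (a back-of-the-envelope Python sketch, not taken from the source), the ZPL energy can be converted to a photon wavelength to verify that it falls within the telecommunication O-band (1260-1360 nm):

```python
# Convert the T centre's zero-phonon-line energy to a photon wavelength
# to check that it lies in the telecommunication O-band (1260-1360 nm).
PLANCK_EV_NM = 1239.842  # h*c in eV*nm

zpl_energy_ev = 0.935          # ~935 meV zero-phonon line
wavelength_nm = PLANCK_EV_NM / zpl_energy_ev
print(f"ZPL wavelength: {wavelength_nm:.0f} nm")          # ~1326 nm
print("Inside O-band:", 1260 <= wavelength_nm <= 1360)    # True
```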
The inhomogeneous linewidth for this feature reduces in isotopically pure silicon-28. Natural silicon contains a mixture of isotope masses, resulting in variations in both the local band gap and binding energies. Without these variations introduced by neighbouring 29Si nuclei, the linewidth reduces from 26.9(8) μeV to 0.25 μeV.
Energy level structure
The currently accepted model of the T centre proposes an unpaired electron in the ground state and an additional bound exciton in the excited states, labeled T and TX respectively. The two electrons in the excited state pair into a spin-0 singlet, and the remaining unpaired spin-3/2 hole state is split into two Kramers doublets, TX0 and TX1, by the internal stress of the defect. The TX centre is characterized as a pseudo-acceptor with effective-mass-like states labeled nΓ±, where the sign denotes even or odd parity, n represents the principal quantum number, and Γ indicates the symmetry group of the state. The TX ground state is, therefore, an acceptor-like fourfold degenerate Γ8+ state.
Fine structure behavior
Both the ground state electron and the first excited state hole are doubly degenerate and split under the Zeeman interaction when exposed to an external magnetic field. Due to the splitting of each state, each orientation subset of the T centre allows for 4 optical transitions from the ground state to TX0. Characterization of these transitions is essential for hyperpolarizing the electron into a chosen spin state in various state-manipulation protocols. Further hyperfine spin interactions between the electron and the hydrogen nucleus are resolved under electron paramagnetic resonance or read out using optically detected magnetic resonance signals.
State manipulation
For a centre composed of two 12C constituents subject to an external magnetic field $\vec{B}$, the spin Hamiltonian for the ground state is given by

$\hat{H} = \mu_B \vec{B} \cdot \mathbf{g} \cdot \vec{S} + \vec{S} \cdot \mathbf{A} \cdot \vec{I} - g_H \mu_N \vec{B} \cdot \vec{I}.$

This Hamiltonian describes the coupling between the unpaired electron and the hydrogen nucleus. The coefficient $\mu_B$ denotes the Bohr magneton. The electron spin vector and g-factor tensor are given by $\vec{S}$ and $\mathbf{g}$; the g-factor tensor is approximately isotropic. The hydrogen nuclear spin vector is given by $\vec{I}$, $g_H$ represents the hydrogen nuclear spin g-factor, and $\mu_N$ is the nuclear magneton. The hyperfine tensor $\mathbf{A}$ is specific to each optically resolvable orientation subset.
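As an illustration, the following Python sketch builds this Hamiltonian in the isotropic approximation for the electron (S = 1/2) and hydrogen nuclear (I = 1/2) spins and diagonalizes it. The g-factor and hyperfine constant below are placeholder values chosen for the sketch, not measured T centre parameters.

```python
import numpy as np

# Minimal sketch of the ground-state spin Hamiltonian above, in the
# isotropic approximation: electron Zeeman + hyperfine S.A.I + nuclear
# Zeeman. g_e and A_hf are illustrative placeholders, not measured
# T centre values.
sx = np.array([[0, 0.5], [0.5, 0]])
sy = np.array([[0, -0.5j], [0.5j, 0]])
sz = np.array([[0.5, 0], [0, -0.5]])
I2 = np.eye(2)

g_e = 2.0              # assumed isotropic electron g-factor
mu_B = 13.996e9        # Bohr magneton / h, Hz per tesla
gn_mu_N = 42.577e6     # proton gyromagnetic ratio, Hz per tesla
A_hf = 3.0e6           # hypothetical isotropic hyperfine constant, Hz
B = 0.1                # applied field along z, tesla

S = [np.kron(op, I2) for op in (sx, sy, sz)]   # electron spin operators
N = [np.kron(I2, op) for op in (sx, sy, sz)]   # nuclear spin operators

H = (g_e * mu_B * B * S[2]                        # electron Zeeman
     + A_hf * sum(S[k] @ N[k] for k in range(3))  # hyperfine coupling
     - gn_mu_N * B * N[2])                        # nuclear Zeeman

levels = np.sort(np.linalg.eigvalsh(H))
print("spin levels (GHz):", levels / 1e9)
```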
State preparation
Both the electron and nuclear spins can be hyperpolarized using optical, radio frequency (RF) and selectively resonant microwave driving. Continuous-wave electron paramagnetic resonance can be used to depolarize or mix the electron spin state, and spin-selective optical transitions are used for state preparation. Specifically, continuously driving one spin-selective transition prepares the state in the spin-up ground state following a subsequent decay through a spin-dependent transition; alternatively, driving the complementary transition hyperpolarizes the population into the spin-down state.
Coherence times
The T1 lifetimes for both the electron and nuclear spin states have been measured using nuclear magnetic resonance and have been shown to far exceed 16 seconds in 28Si. The averaged electron and nuclear Hahn-echo (T2) times are 2.1(1) ms and 0.28(1) s, respectively. A tighter lower bound for the nuclear coherence time was found by averaging the top 10% of measurements at each delay time, which yields a longer average maximum-magnitude nuclear coherence time.
See also
Silicon
Crystallographic defect
Notes
References
Crystallographic defects
Silicon compounds | T centre | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,454 | [
"Crystallographic defects",
"Crystallography",
"Materials degradation",
"Materials science"
] |
74,004,311 | https://en.wikipedia.org/wiki/Liquified%20gas%20electrolyte | A liquified gas electrolyte (LGE) is a battery/capacitor electrolyte made by compressing a gas that is at ambient pressure into liquid form. Candidate gases are those composed of reasonably polar molecules that can be liquified at pressures low enough to be accommodated in a standard battery can.
Research
One study reported on a liquified hydrofluorocarbon (HFC) electrolyte. HFCs feature relatively strong chemical bonds and a large electrochemical window that protect them from oxidation/reduction across charge/discharge cycles. The electrolyte combined a moderate relative permittivity with low viscosity to produce a dielectric-fluidity factor and conductivity higher than existing solvents. Because of its low melting point, it has the potential for improved operation at low temperatures. Difluoromethane (CH2F2) was able to operate at temperatures from −78 to +65 °C at 3.0 volts. Fluoromethane (CH3F) showed dendrite-free ~97% plating and stripping efficiency on lithium metal over hundreds of cycles. It further achieved good cycling and rate performance on a cathode, with discharge capacity retention of 60.6% at −60 °C. The study reported that conductivity was reversibly lost at high temperature as the salt precipitated near the supercritical point (~+40 to +80 °C), reducing the potential for thermal runaway. However, the material's high saturated vapor pressure was a fire risk.
A later study by the same lab reviewed nonflammable 1,1,1,2-tetrafluoroethane and pentafluoroethane and reported >3 mS cm−1 ionic conductivity from −78 to +80 °C. Lithium cycling maintained >99% coulombic efficiency for over 200 cycles at 3 mA cm−2 and 3 mAh cm−2. Li/NMC622 full batteries demonstrated stable cycling from −60 to +55 °C.
See also
Lithium ion battery
References
External links
Electrolytes
Gas technologies | Liquified gas electrolyte | [
"Chemistry"
] | 422 | [
"Electrochemistry",
"Electrolytes"
] |
78,371,044 | https://en.wikipedia.org/wiki/Buscemi%20nonlocality | Buscemi nonlocality, a concept proposed by Francesco Buscemi in 2012, refers to a type of quantum nonlocality that arises in Bell tests where the local measurement settings are determined not by classical programs but by quantum states. Such generalized tests are called semiquantum nonlocal games. While, as the counterexample of Werner states shows, Bell nonlocality is known not to be equivalent to quantum entanglement, the latter instead turns out to be equivalent to Buscemi nonlocality: a quantum state is "Buscemi nonlocal" if and only if it is entangled.
Semiquantum nonlocal tests constitute the basis for measurement-device-independent entanglement witnesses, and their feasibility has been experimentally verified several times. Buscemi nonlocality has been given an operational interpretation similar to that of standard Bell nonlocality in the framework of quantum resource theories. It also motivates the study of quantum entanglement based not on the LOCC framework, but rather on the Local Operations and Shared Randomness (LOSR) framework.
References
Quantum mechanics | Buscemi nonlocality | [
"Physics"
] | 235 | [
"Quantum measurement",
"Quantum mechanics"
] |
78,373,815 | https://en.wikipedia.org/wiki/PKS%201510-089 | PKS 1510-089 is a blazar located in the constellation of Libra, categorized as a highly polarized quasar showing fast variations in polarization angles, with a redshift (z) of 0.361. It was first discovered as an astronomical radio source during the Parkes Observatory survey in 1966. The radio spectrum of the source appears flat, making it a flat-spectrum radio quasar (FSRQ).
Description
PKS 1510-089 is violently variable across the electromagnetic spectrum. It shows variations in all wavebands, from radio to gamma rays, as well as in optical brightness. This makes it a key target of several observation campaigns, including by the MAGIC Florian Goebel Telescopes and the High Energy Stereoscopic System (HESS). It also shows outbursts; one was detected in 1979 by astronomers using the 46-meter telescope at the Algonquin Radio Observatory. During this period, the flux density of PKS 1510-089 drastically increased from a low value of 1.5 jansky (Jy) in 1978 to 4.80 Jy by January 1979, the highest flux density recorded during the 12-year observation period.
In March 2009, PKS 1510-089 showed extreme gamma ray activity as observed by the AGILE satellite, with the emission reaching an average flux of (311 ± 21) x 10−8 photons cm−2 s−1 above 100 MeV. This was followed by a flaring episode detected in both ultraviolet and near-infrared wavebands. PKS 1510-089 was also observed by Fermi-LAT from August 2008 up to May 2012, showing several flares during which its daily 0.1-300 GeV gamma ray flux exceeded 10−5 photons cm−2 s−1. A short but significant flare was observed in September 2013, although it was not as high as the 2009 activity. Between its three quiescent states in 2015, it showed four flares.
A powerful, complex gamma ray flare was detected in PKS 1510-089 in July 2015. Based on multi-frequency optical, radio and gamma ray light curves of the object taken from 2013 to 2018, together with jet kinematics and linear polarization data from the Very Long Baseline Array, a radio flare was discovered trailing the gamma ray flares. This radio flare showed an optically thick spectrum at the start that became optically thin over time. In addition, two separate emission knots emerging from the radio core during the flaring period, and linear polarization located near the core, were also detected, prompting astronomers led by Jongho Park to conclude that gamma ray flares might arise through the compression of knots by shockwaves inside the core. A near-infrared flare was also detected in 2019.
In 2021, PKS 1510-089 entered a peculiar new state, showing a decrease in optical flux, in MeV-band high-energy gamma ray flux, and in optical polarization degree, the latter reaching zero in 2022. However, the X-ray and GeV-band high-energy gamma ray fluxes remained constant through the two years.
Radio structure
According to Very Long Baseline Interferometry radio imaging at both 6 and 20 cm, PKS 1510-089 shows an unresolved core with a secondary component located 8" to the southeast. When viewed at 1.67 GHz, a dominant component is found lying to the north, suggesting the core is faint at this frequency.
Astrophysical jet
The jet of PKS 1510-089 moves at superluminal speeds. It is made up of a milli-arcsecond jet located at a position angle of -28° and an arcsecond jet with an initial position angle of 55°. The jet is also turbulent, with its components moving at different speeds; their interactions create plasma shocks. A counter jet located 0.3 mas from the core appears to be dominated by shocked emission with a perfectly aligned magnetic field. A bright knot of emission detected in January 2010 was found moving down the jet at a speed of 22c while emitting strong gamma ray energy as the outburst in PKS 1510-089 increased.
Quasi-periodic oscillation
Quasi-periodic oscillation signals have been detected from PKS 1510-089. One signal was detected during the 2009 outburst, lasting for five cycles with a 3.6-day period. A second signal occurred in 2018 with a period of 92 days, until the period evolved to around 650 days in 2020. In light of the shifting oscillation periods, scientists established a model to explain the oscillation behavior of PKS 1510-089, suggesting a binary black hole system in which a secondary black hole with non-axisymmetric instability revolves around a central black hole near the innermost orbit. The presence of nearly equidistant magnetic islands in the inner part of the jet, as well as a geometric model involving a plasma blob moving helically in a curved jet, seems to fit the observations, meaning the period shift was probably caused by a highly eccentric orbit of the secondary black hole.
Supermassive black hole
Black hole mass
By measuring the hydrogen spectral series and iron emission lines, scientists were able to characterize the broad-line region of the object, the region whose gas produces the broad emission lines. According to close-up spectroscopies, the observed-frame region size is 61.1 (+4.0/−3.2) light-days (64.7 (+27.1/−10.6) light-days), with an intrinsic line width of 1262 ± 247 km s−1. By combining the two values through the laws of gravitation, they identified a black hole mass of 5.71 (+0.62/−0.58) x 10⁷ Mʘ. However, one study estimated the mass of the black hole to be 1.37 x 10⁹ Mʘ, while another study calculated the mass as 5.4 x 10⁸ Mʘ from the blazar's recorded isotropic luminosity of 2 x 10⁴⁸ erg s−1.
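The order of magnitude of this mass can be illustrated with the standard single-epoch virial relation M = f·R·(Δv)²/G, which is not written out explicitly in the text. The virial factor f depends on the geometry of the broad-line region; f ≈ 3 is assumed below purely for the sketch.

```python
import math

# Back-of-the-envelope virial mass estimate from the broad-line-region
# size and line width quoted above: M = f * R * v**2 / G. The virial
# factor f ~ 3 is an assumption made for illustration only.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
LIGHT_DAY = 2.59e13      # one light-day, m

R = 61.1 * LIGHT_DAY     # BLR radius, m
v = 1.262e6              # intrinsic line-width velocity, m/s
f = 3.0                  # assumed virial factor

M_bh = f * R * v**2 / G
print(f"M_BH ~ {M_bh / M_SUN:.2e} solar masses")  # ~5.7e7
```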
Secondary black hole
Based on current measurements, it is proposed that PKS 1510-089 hosts a secondary black hole, orbiting the primary black hole with a period of 336 ± 14 days at a projected separation of 0.1 parsecs. The mass of the secondary black hole is 1.37 x 10⁷ Mʘ.
References
External links
PKS 1510-089 on SIMBAD
PKS 1510-089 on NASA/IPAC Database
Blazars
Quasars
Libra (constellation)
Astronomical objects discovered in 1966
OVV quasars
Active galaxies
BL Lacertae objects
2828331
Supermassive black holes | PKS 1510-089 | [
"Physics",
"Astronomy"
] | 1,414 | [
"Black holes",
"Libra (constellation)",
"Unsolved problems in physics",
"Supermassive black holes",
"Constellations"
] |
63,924,633 | https://en.wikipedia.org/wiki/IBTS%20Greenhouse | The IBTS ("Integrated Biotectural System") greenhouse is a biotectural, urban development project suited for hot arid deserts. It was part of the Egyptian strategy for the afforestation of desert lands from 2011 until spring of 2015, when geopolitical changes, such as the activity of the Islamic State of Iraq and the Levant – Sinai Province in Egypt, forced the project to a halt. The project began in spring 2007 as an academic study in urban development and desert greening. It was further developed by N. Berdellé and D. Voelker as a private project until 2011. Afterwards, the LivingDesert Group, including Prof. Abdel Ghany El Gindy and Dr. Mosaad Kotb from the Central Laboratory for Agricultural Climate in Egypt, forestry scientist Hany El-Kateb, agroecologist Wil van Eijsden and permaculturist Sepp Holzer, was created to introduce the finished project in Egypt.
The IBTS Greenhouse, together with the programme for the afforestation of desert lands in Egypt, became part of relocation strategies. These play a role in Egypt as urbanization of the Nile Delta is a problem for the agricultural sector and because of infrastructural problems like traffic congestion in Cairo.
The IBTS features sea-water farming, but inside a large greenhouse, so that all of the evaporated water can be harvested. Generating liquid water from the atmosphere inside the IBTS requires large amounts of cooling power, which is provided by the incoming sea-water. The cooling requirement and the cooling power are thus always balanced.
The IBTS relies on a new quality of systems integration including architectural, technological and natural elements. It combines food production and residence, as well as desalination of sea water, or brackish groundwater. A CAE demonstration project using real weather-, soil and economic conditions proved feasibility under hyperarid conditions.
The relevance of the IBTS is its capacity for water desalination with an efficiency of 0.45 kWh per cubic metre of distillate. This matters because operational costs for desalination utilities far outweigh initial building costs over time, and because the energy requirements of desalination plants reach into the gigawatt range. The dependence on large amounts of fossil energy leaves water provision from industrial plants insecure.
Through this high efficiency, desalination has become financially and ecologically viable for large-scale agriculture, forestry and aquaculture.
Another point of relevance is the creation of a bio-diverse landscape and many jobs instead of smoking chimneys and factories along the valuable waterfront.
Particular relevance also lies in the applicability inland, although inland deployment would exclude the high desalination capacity.
The building has its roots in construction engineering and construction physics, in contrast to most greenhouses, which are rooted in food production. It is fundamentally different from seawater greenhouses, chiefly in its desalination performance. Alternative desalination technologies, air-to-water utilities and desalination greenhouses in testing require a multiple of its energy for fresh-water production.
The significance of the term "integration" lies in the efficiency that systems integration can achieve by imitating natural systems, especially closed cycles. The establishment of closed water cycles is the most crucial of all, given the increasing severity of the global water crisis, particularly in hot desert climates.
Industrial-scale desalination of this kind is bound to hot climates because it requires high amounts of solar thermal power. It has turned out to be suitable for mitigating the sinking of water tables in agricultural areas of the MENA region and beyond. In future versions, the IBTS could be deployed in cold climates using additional heat energy sources like compact fusion or small modular reactors.
Charging the watercycle
The IBTS can be charged with seawater, which is turned into freshwater by evaporation. This is the primary charging type: seawater is unlimited, so the IBTS can produce excess water for sale.
At the beginning of the saltwater charging lies the seawater farming operation inside the IBTS Greenhouse. This only requires small amounts of seawater. Most of the water flows through the food-production system and is then processed in the full-desalination utility.
The IBTS can also be charged by a continuous inflow of organic matter for the workers, animals, and later residents. The organic matter, which first enters as food and drink, is regained through waste treatment. The waste-water treatment is part of the ordinary water cycle. The organic matter is partly infiltrated underground into the root zones of the plants and partly processed in septic tanks and then applied as topsoil in the forestry. This concept has been implemented inside residential homes (a common type is an Earthship).
In general, it is possible to build the IBTS as solids and liquids waste treatment sites for settlements, hotels, or cities.
The water cycle can also be charged by a single rain event, which does occur in the desert and can be counted on. Lastly, it is possible to charge the water cycle by pumping saline or contaminated groundwater and to some extent by atmospheric water generation.
The volume of water inside the water cycle is not critical, as it is a quasi-closed cycle: evaporation from the soil and moisture exhaled by people are captured under the roof.
Losses occur due to the export of food and in case of a leak in the roof. Leaks would occur frequently under normal conditions; the Skyroof is therefore maintained with a special refurbishment and replacement system that can deal with harsh weather and objects landing on the thin foil.
Charging the nutrient cycle
The nutrient cycle is connected to the watercycle. Charging it mainly means the practice of building up soil fertility and soil organic matter. This can entail import of biomass through organic waste, but mainly by biowaste from the production of food inside the IBTS.
In sea-water systems the biomass is created from salt-tolerant plants called halophytes. Biomass yields of up to 52 tons per hectare per year have been recorded.
Moreover, the biomass generated by roots is important for carbon sequestration, contributing up to an extra 35 t/ha per year. The IBTS Greenhouse is a blue carbon project.
A third source of biomass is external seawater farms, which do not require the pricey space under the roof of the IBTS. These can be on land or at sea. Most noteworthy are seaweed farms.
Just as the nutrient cycle has to be charged with biomass, there is an option to charge the atmosphere inside the IBTS, or the seaweed water-ponds, with CO2. This would increase the biomass yield. This process has certain limits; one is the availability of trace elements like phosphorus required by any organism. As the best source for charging with additional CO2 would be industrial waste gases, this is another way in which the IBTS can function as a waste treatment site.
Performance
The energy of operation is 0.45 kWh per cubic metre of distilled water in the full-scale version. This energy requirement is more than 10 times lower than the records set by desalination plants in Dubai and Perth, according to official numbers given by the respective authorities. The IBTS is based on a modular concept, with a core size of 1 hectare. This is the minimum size for construction and for self-sufficiency, but the circular, architectural modules can be built 10 hectares large, or more. Each module is based on sub-modules allowing for immediate commencement of operation and generation of profit (like a re-afforestation site generating profit in its early stages). Best efficiency and full capacity can be provided with a superstructure approximately 100 modules large. 10 km2 have the capacity of an industrial desalination plant, which is 0.5 million cubic meters of water per day.
Since the first version of the IBTS, the atmospheric water generation has evolved through a series of hygrothermal models and can now be operated at 0.45 kWh/m3 according to the developer. The IBTS works with natural processes in closed cycles, hosted in a building. Therefore, it never hits the natural or physical limits to growth that desalination technology in the Persian Gulf has already reached because of brine discharge and temperature rise.
Primary energy
The IBTS is operated with electrical and thermal energy produced on-site from windpower and concentrated solar power (in a proprietary process). This means that the energy requirement and the use of primary energy can be considered the same, which is not the case for common desalination plants.
Common desalination plants depend on power plants using fossil fuels. Accounting for energy loss during the energy transformation in the power plant, common desalination plants use 2-3 times more energy than stated in the usual performance data. These are common energy-conversion loss factors for the combustion engines used in the desalination industry.
Taking this into account, the IBTS uses less than 5% of the energy of the current efficiency world record. This industrial record is about 3.5 kWh/m3, plus ca. 1.0 kWh/m3 for seawater pumping and other factors not accounted for, multiplied by the factor for primary energy use: together 9-14 kWh/m3.
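The arithmetic behind this comparison can be made explicit. The sketch below simply combines the numbers quoted above; the conversion-loss factor range and the 0.45 kWh/m3 figure are the article's own claims, not independently verified values.

```python
# Reproduce the primary-energy comparison sketched above: the quoted
# plant-level record (~3.5 kWh/m^3) plus auxiliary pumping (~1.0 kWh/m^3),
# scaled by the claimed fossil-fuel conversion-loss factor of 2-3, and
# compared against the 0.45 kWh/m^3 claimed for the IBTS.
plant_record = 3.5 + 1.0                      # kWh per m^3, electrical
primary = [plant_record * k for k in (2, 3)]  # ~9-13.5 kWh/m^3 primary
ibts = 0.45                                   # kWh/m^3, on-site renewable

print(f"primary energy range: {primary[0]:.0f}-{primary[1]:.1f} kWh/m^3")
print(f"IBTS share of record: {ibts / primary[0]:.1%}")  # ~5%
```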
The term primary energy should be combined with energy quality for a realistic understanding. Energy quality, in the context of desalination, gives a new picture of overall efficiency: not only of the physical process of desalination, but of the overall economic efficiency of the IBTS using proprietary renewable energy.
Design
The maximum of 500 m³ of freshwater production per day and hectare multiplies to 0.5 million m³ on 1,000 ha, equaling the output of the largest industrial desalination plants in the world. It is reached by heat recovery from the hot fresh water. This recovered energy is used to heat the brine leaving the mariculture in the IBTS, doubling the daily evaporation of 100 m³ and generating salt for sale. The recovered energy is also used to preheat incoming salt water for the mariculture. The chosen breed of fish needs warm water, and the warm water also increases the natural evaporation inside the greenhouse. The design points arose out of the computational engineering of the physical model as well as the financial plan in an iterative process.
Economic implications
Because of its independence from primary energy and material resources, its efficiency of water production and its scalable, modular design, the IBTS Greenhouse is sustainable. A strategic, national infrastructure project like the IBTS allows for a successful energy transition into a sustainable economy.
This can be understood by a comparison of GDP growth, the generation of real values and a weighted GDP.
An example of the infrastructure services of the IBTS Greenhouse is water purification. Wastewater is percolated into the ground and provides water and nutrients for the growth of trees; this is not as easy with food crops, for hygienic reasons. Thus the IBTS can provide sewage treatment in countries or areas that lack treatment plants.
The IBTS Greenhouse is an open concept compatible with most other technologies and practices for water, energy and food production. It is plugin-ready for upcoming technologies like nuclear power from compact fusion, the traveling wave reactor, or breeder reactors. When these energy sources become available, they can be integrated into existing IBTS infrastructure and generate even more fresh water without brine discharge into natural water bodies and the attendant environmental problems. For infrastructure developments taking decades for roll-out and upscaling, it is crucial to design for future-readiness, a key engineering principle.
The manufacturing process of the IBTS is designed for automation, which requires more electricity than common construction sites or manufacturing processes. This platform design is also future-ready for more available energy. An example is the large roof of the IBTS, which needs to be observed and cleaned continuously and refurbished several times over the lifecycle of the IBTS. At the scale for which the IBTS was developed, as a national desert-greening strategy for reclaiming and regreening entire regions, this can only be done by special bots or drones.
Examples of other biotecture
The most famous example is the Biosphere 2, a research project and demonstration site integrating residential areas into a new type of greenhouse. It was designed to be self-sufficient including food production in an ecosystemic context. Another example for Biotecture, which is foremost a residential home, is an Earthship. Earthships incorporate water-purification and reuse on multiple levels.
Since 2010, urban developments labeled Forest Cities, drawing from the IBTS and other pioneer projects, have been created. The Gardens by the Bay, which uses core design elements of the TSPC Forest City from 2008 such as artificial trees with spherical buildings on top, is an outstanding example. The Liuzhou Forest City is one of many examples of green architecture, or rather green urban developments of new cities with many green areas, including the facades of buildings.
The international efforts to create Forest Cities are another level of implication. China is going forward with the introduction of several hundred designated Forest Cities. One of the latest examples is Shenzhen.
See also
References
External links
Effects of a Solar Desalination Module integrated in a Greenhouse Roof on Light Transmission and Crop Growth by M.Thameur. Chaibi
Water desalination
Drinking water
Greenhouses
Water technology
Water conservation | IBTS Greenhouse | [
"Chemistry"
] | 2,718 | [
"Water treatment",
"Water technology",
"Water desalination"
] |
63,931,134 | https://en.wikipedia.org/wiki/C16H12ClN3O3 |
The molecular formula C16H12ClN3O3 (molar mass: 329.738 g/mol, exact mass: 329.0567 u) may refer to:
Meclonazepam
Ro05-4082
Molecular formulas | C16H12ClN3O3 | [
"Physics",
"Chemistry"
] | 74 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
63,932,134 | https://en.wikipedia.org/wiki/Helga%20E.%20Rafelski | Helga Ernestine Rafelski (née Betz) (3 September 1949 – 5 November 2000) was a German particle physicist. She received her first professional degree from Goethe-Universität Frankfurt am Main, her master's degree from the University of Illinois at Chicago in 1977, and her PhD from the University of Cape Town in 1988 under the advisement of Raoul D. Viollier. She studied muon-catalysed fusion and relativistic heavy-ion collisions.
Returning from South Africa, Rafelski held a visiting position at Goethe-Universität Frankfurt am Main where she worked with Berndt Müller. In-between other appointments she was a scientific associate at CERN in the Data Handling Division.
Her last position was as a computer science teacher at Catalina Foothills High School in Tucson.
Helga Rafelski died of cancer in November 2000 in Tucson, Arizona.
References
1949 births
2000 deaths
Particle physicists
Women physicists
University of Cape Town alumni
People associated with CERN
University of Illinois Chicago alumni | Helga E. Rafelski | [
"Physics"
] | 208 | [
"Particle physicists",
"Particle physics"
] |
63,937,042 | https://en.wikipedia.org/wiki/TRiC%20%28complex%29 | T-complex protein Ring Complex (TRiC), otherwise known as Chaperonin Containing TCP-1 (CCT), is a multiprotein complex and the chaperonin of eukaryotic cells. Like the bacterial GroEL, the TRiC complex aids in the folding of ~10% of the proteome, and actin and tubulin are some of its best known substrates. TRiC is an example of a biological machine that folds substrates within the central cavity of its barrel-like assembly using the energy from ATP hydrolysis.
Subunits
The human TRiC complex is formed by two rings containing 8 similar but non-identical subunits, each with molecular weights of ~60 kDa. The two rings are stacked in an asymmetrical fashion, forming a barrel-like structure with a molecular weight of ~1 MDa.
Molecular weight of human subunits.
Counterclockwise from the exterior, each ring is made of the subunits in the following order: 6-8-7-5-2-4-1-3.
Evolution
The CCT evolved from the archaeal thermosome ~2 Gya, with the two ancestral subunit types diversifying into multiple types. The CCT changed from having one type of subunit to having two, three, five, and finally eight types.
See also
Chaperone
Chaperonin
Heat shock protein
Notes
References
Gene expression
Protein complexes | TRiC (complex) | [
"Chemistry",
"Biology"
] | 295 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
66,749,514 | https://en.wikipedia.org/wiki/Maritime%20sociology | Maritime sociology is a sub-discipline of sociology studying the relationship of human societies and cultures to the oceans and the marine environment as well as related social processes. Subjects studied by maritime sociology are human activities at and with the sea such as seafaring, fisheries, maritime and coastal tourism, off-shore extraction, deep-sea mining, or marine environmental conservation. Institutions and discourses related to those activities are also studied by the sub-discipline. Another area of study is the societal-natural relations in the marine realm such as, for instance, the problem of over-fishing or the social consequences of climate change. In sum, maritime sociology conceptualizes the oceans as a social rather than a merely natural space.
Relation to other sociological disciplines
Maritime sociological research is often closely related, uses theories and methods of and collaborates with other sub-disciplines such as the sociology of work or environmental sociology.
Schools and institutions
Although it is almost as old as the sociological discipline itself, maritime sociology has not been institutionalized to any great extent to date and is practiced by various more or less independent schools around the world. Currently, there are efforts within the research community to establish maritime sociology as an independent sub-discipline.
The Polish universities of Szczecin, Gdansk, and Poznan are national centers of research, mainly in the sociology of maritime professions. After the Second World War, when the Polish coastline had increased significantly as an outcome of the war, maritime matters became an important subject of political and scientific discourse in Poland, leading to the establishment of maritime sociology.
From 1985 to 1992, there was a working group at the Institute of Sociology at Christian-Albrechts-University in Kiel, Germany that aimed to establish maritime sociology in Germany.
Several Chinese universities (Ocean University, Shanghai Ocean University) conduct research in and operate institutes of maritime sociology. The discipline was also institutionalized in the Chinese academic landscape by founding a professional committee in the Chinese Sociological Association in 2010.
Since 2009, research streams on maritime sociology have taken place regularly at the biannual conference of the European Sociological Association.
Centers in Canada are the Memorial University in Newfoundland, Dalhousie University, St. Francis Xavier University, University of British Columbia and the University of Toronto. These Canadian research efforts mainly focus on environmental aspects of the human-ocean relation and fisheries research.
Theoretical positions
To date, there is no established theoretical framework or overarching paradigm of maritime sociology. Sociological studies of maritime subjects share the identity of dealing with subjects related to the sea rather than a common theoretical ground. With the exception of Janiszewski's concept of marinization, maritime sociologists borrow theoretical approaches from other sociologies and apply them to their field.
Marinization
Beginning in the 1970s, the Polish sociologist Ludwik Janiszewski developed the theory of "marinization" (Polish: "marynizacja"). Analogous to ideas like industrialization, urbanization, or digitalization, the notion describes a historical process or tendency of increasing entanglement of terrestrial societies with the maritime realm, or a tendency of growing importance of relations with and the use of the sea for human societies.
Critical political economy of oceans
Scholars using the critical political economy approach draw on theories about the interaction of the capitalist mode of production with the ocean in the Marxist tradition. One strand of this approach is concerned with environmental and sustainability issues, using social-ecological theories such as social metabolic analysis. John Hannigan criticizes its proponents for failing to conceptualize the ocean as a distinct social space, instead remaining in a terrestrial bias and applying the categories of land-based society to the maritime realm.
Another strand under the umbrella of critical political economy focuses on the role of the sea with regard to international trade, migratory movements of people, and relations of maritime labor, and piracy. An early publication of this research direction was Steinberg's book "The Social Construction of the Ocean". A recent publication exploring the sociology of the oceans from a critical political economy perspective is presented by Liam Campling and Alejandro Colás: "[...] we aim to demonstrate how capitalism has transformed the spatial relationship between land and sea in ways that has made them both increasingly interdependent and resolutely differentiated."
Posthumanist and postmodern theories
Recently, a number of maritime sociology researchers draw on theories of posthumanism and postmodernism to conceptualize human-ocean relations. In the sense of Actor-Network-Theory and the works of feminist scholar Donna Haraway, they argue for transcending the separation of nature and society, conceptualizing the oceans as a "hybrid" instead of viewing it separately as a social and natural space.
Fields of research
Examples of maritime-sociological research and theory building include:
Ferdinand Tönnies' empirical study of the social situation of dockworkers and seafarers in Northern-German seaports.
Norbert Elias' work on the origins of the naval profession in England
The academic controversy over whether the ship is a total institution. While most maritime scholars studying seafaring subscribe to the notion, there is also critique of the application of the concept, arguing that a merchant ship is not a disciplinary institution and therefore cannot be adequately described in these terms.
The works of the Seafarers International Research Centre (SIRC) at Cardiff University. The members of the research center conduct research in the social situation of global seafarers in the world's merchant fleets.
Related Journals
Currently, there are no scientific journals exclusively dedicated to the subject of maritime sociology. Below is a list of interdisciplinary journals covering the subject:
Asia-Pacific Journal of Marine Science & Education
Constanta Maritime University Annals
Marine Policy
Maritime Policy & Management
Maritime Studies
Pomorstvo. Scientific Journal of Maritime Research
Roczniki Socjologii Morskiej. Annals of Maritime Sociology (1986-2016)
WMU Journal of Maritime Affairs
References
Oceanography
Environmental sociology | Maritime sociology | [
"Physics",
"Environmental_science"
] | 1,189 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Oceanography",
"Environmental sociology",
"Environmental social science"
] |
66,755,839 | https://en.wikipedia.org/wiki/Abiological%20nitrogen%20fixation%20using%20homogeneous%20catalysts | Abiological nitrogen fixation describes chemical processes that fix (react with) N2, usually with the goal of generating ammonia. The dominant technology for abiological nitrogen fixation is the Haber process, which uses iron-based heterogeneous catalysts and H2 to convert N2 to NH3. This article focuses on homogeneous (soluble) catalysts for the same or similar conversions.
Transition metals
Vol'pin and Shur
An early influential discovery of abiological nitrogen fixation was made by Vol'pin and co-workers in Russia in 1970. Aspects are described in an early review:
"using a non-protic Lewis acid, aluminium tribromide, were able to demonstrate the truly catalytic effect of titanium by treating dinitrogen with a mixture of titanium tetrachloride, metallic aluminium, and aluminium tribromide at 50 °C, either in the absence or in the presence of a solvent, e.g. benzene. As much as 200 mol of ammonia per mol of was obtained after hydrolysis.…" These results led to many studies on dinitrogen complexes of titanium and zirconium.
Mo- and Fe-based systems
Because Mo and Fe are found at the active site of the most common and most active form of nitrogenase, these metals have been the focus of particular attention for homogeneous catalysis. Most catalytic systems operate according to the following stoichiometry:
N2 + 6H+ + 6e− → 2NH3
The reductive protonation of metal dinitrogen complexes was popularized by Chatt and coworkers, using Mo(N2)2(dppe)2 as substrate. Treatment of this complex with acid gave substantial amounts of ammonium. This work revealed the existence of several intermediates, including hydrazido complexes (Mo=N-NH2). Catalysis was not demonstrated. Schrock developed a related system based on the amido Mo(III) complex Mo[(HIPTN)3N]. With this complex, catalytic nitrogen fixation occurred, albeit with only a few turnovers.
Intense effort has focused on a family of pincer-ligand-supported Mo(0)-N2 complexes. In terms of their donor set and oxidation state, these pincer complexes are similar to Chatt's complexes. Their advantage is that they catalyze the hydrogenation of dinitrogen. A Mo-PCP (PCP = phosphine-NHC-phosphine) complex exhibits >1000 turnovers when the reducing agent is samarium(II) iodide and the proton source is methanol.
Iron complexes of N2 are numerous. Derivatives of Fe(0) with C3-symmetric ligands catalyze nitrogen fixation.
Photolytic routes
Photolytic nitrogen splitting is also considered.
p-Block systems
Although nitrogen fixation is usually associated with transition metal complexes, a boron-based system has been described. One molecule of dinitrogen is bound by two transient Lewis-base-stabilized borylene species. The resulting dianion was subsequently oxidized to a neutral compound, and reduced using water.
Nitriding
Certain metals can react with nitrogen gas to give nitrides, a process called nitriding. For example, metallic lithium burns in an atmosphere of nitrogen, giving lithium nitride. Hydrolysis of the resulting nitride gives ammonia. In a related process, trimethylsilyl chloride, lithium and nitrogen react in the presence of a catalyst to give tris(trimethylsilyl)amine, which can be further elaborated. Processes that involve oxidising the lithium metal are, however, of little practical interest, since they are non-catalytic and re-reducing the lithium ion residue is difficult. The hydrogenation of Li3N to produce ammonia has seen some exploration, since the resulting lithium hydride can be thermally decomposed back to lithium metal.
Some Mo(III) complexes also cleave N2:
2Mo(NR2)3 + N2 → 2N≡Mo(NR2)3
This and related terminal nitrido complexes have been used to make nitriles.
See also
Nitrogenase: enzymes used by organisms to fix nitrogen
Transition metal dinitrogen complex
Metal nitrido complex
References
Homogeneous catalysis | Abiological nitrogen fixation using homogeneous catalysts | [
"Chemistry"
] | 892 | [
"Catalysis",
"Homogeneous catalysis"
] |
77,042,503 | https://en.wikipedia.org/wiki/Dissociative%20adsorption | Dissociative adsorption is a process in which a molecule adsorbs onto a surface and simultaneously dissociates into two or more fragments. This process is the basis of many applications, particularly in heterogeneous catalysis reactions. The dissociation involves cleaving of the molecular bonds in the adsorbate, and formation of new bonds with the substrate.
Breaking the atomic bonds of the dissociating molecule requires a large amount of energy, thus dissociative adsorption is an example of chemisorption, where strong adsorbate-substrate bonds are created. These bonds can be atomic, ionic or metallic in nature. In contrast to dissociative adsorption, in molecular adsorption the adsorbate stays intact as it bonds with the surface. Often, a molecular adsorption state can act as a precursor in the adsorption process, after which the molecule can dissociate only after sufficient additional energy is available.
A dissociative adsorption process may be homolytic or heterolytic, depending on how the electrons participating in the molecular bond are divided in the dissociation process. In homolytic dissociative adsorption, electrons are divided evenly between the fragments, while in heterolytic dissociation, both electrons of a bond are transferred to one fragment.
Kinetic theory
In the Langmuir model
The Langmuir model of adsorption assumes
The maximum coverage is one adsorbate molecule per substrate site.
Independent and equivalent adsorption sites.
This model is the simplest useful approximation that still retains the dependence of the adsorption rate on the coverage, and in the simplest case, precursor states are not considered. For dissociative adsorption to be possible, each incident molecule requires n available adsorption sites, where n is the number of dissociated fragments. When the existing coverage is $\theta$ and the dissociative products are mobile on the surface, the probability of an incident molecule impacting a site with a valid configuration has the form

$P = (1 - \theta)^n$.

The order of the kinetics for the process is n, which carries over to the sticking coefficient

$s(\theta) = s_0 (1 - \theta)^n$,

where $s_0$ denotes the initial sticking coefficient, i.e. the sticking coefficient at zero coverage. The adsorption kinetics are then given by

$\frac{d\theta}{dt} = \frac{s_0 \Phi}{N_0} (1 - \theta)^2$ (for n = 2),

where $\Phi$ is the impinging flux of molecules on the surface and $N_0$ is the areal density of adsorption sites. The shape of the coverage function over time is different for each kinetic order, so assuming desorption is negligible, dissociative adsorption for a system following the Langmuir model can be determined by monitoring the adsorption rate as a function of time under a constant impinging flux.
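For example, the following Python sketch integrates the n = 2 rate law under a constant flux, whose analytic solution is θ(t) = kt/(1 + kt), and contrasts it with first-order (molecular) kinetics; the lumped rate constant k is arbitrary.

```python
import numpy as np

# Compare first-order (molecular, n=1) and second-order (dissociative,
# n=2) Langmuir adsorption under a constant impinging flux. The rate
# constant k lumps s0, flux and site density; its value is arbitrary.
k = 1.0                        # effective rate constant, 1/s
t = np.linspace(0.0, 10.0, 6)  # time points, s

theta_n1 = 1.0 - np.exp(-k * t)     # solution of d(theta)/dt = k(1-theta)
theta_n2 = k * t / (1.0 + k * t)    # solution of d(theta)/dt = k(1-theta)**2

for ti, a, b in zip(t, theta_n1, theta_n2):
    print(f"t={ti:5.1f}  theta(n=1)={a:.3f}  theta(n=2)={b:.3f}")
```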
Precursor states
Often the adsorbing molecule does not dissociate directly upon contact with the surface, but is instead first bound to an intermediate precursor state. The molecule can then attempt to dissociate to the final state through fluctuations. The precursor molecules can be intrinsic, meaning they occupy an empty site, or extrinsic, meaning they are bound on top of an already occupied site. The energies of these states can also be different, resulting in different forms of the overall sticking coefficient $s(\theta)$. If extrinsic and intrinsic sites are assumed energetically equivalent and the adsorption rate to the precursor state is assumed to follow the Langmuir model, the following expression for the coverage dependence of the overall sticking coefficient is obtained:

$s(\theta) = s_0 \left[ 1 + \frac{1}{K} \frac{\theta}{1 - \theta} \right]^{-1}$,

where $K$ is the ratio between the rate constants of the dissociation and desorption reactions of the precursor.
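A short sketch of this expression shows how the coverage dependence softens as the dissociation-to-desorption ratio K grows; the values of K below are illustrative only.

```python
import numpy as np

# Precursor-mediated sticking coefficient from the expression above,
# s(theta) = s0 / (1 + theta / (K * (1 - theta))), for several values
# of K = k_diss / k_des. Large K (fast dissociation relative to
# desorption) keeps sticking high up to near-saturation coverage.
s0 = 1.0
theta = np.linspace(0.0, 0.95, 5)

for K in (0.1, 1.0, 10.0):
    s = s0 / (1.0 + theta / (K * (1.0 - theta)))
    print(f"K={K:5.1f}:", np.round(s, 3))
```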
Temperature dependence
The behaviour of the sticking coefficient as a function of temperature is governed by the shape of the potential energy surface of adsorption. For the direct mechanism, the sticking coefficient is almost temperature independent. When a precursor state is involved, thermal fluctuations determine the probability of the weakly bound precursor either dissociating into the final state or escaping the surface. The initial sticking coefficient is related to the energy barriers for dissociation $E_{dis}$ and desorption $E_{des}$ of the precursor, and the corresponding rate constants $k_{dis}$ and $k_{des}$, as

$s_0 = \left( 1 + \frac{k_{des}}{k_{dis}} \right)^{-1} \propto \left( 1 + A \exp\left[ \frac{E_{dis} - E_{des}}{k_B T} \right] \right)^{-1}$.

From this arise two distinct cases for the temperature dependence:
When $E_{dis} > E_{des}$, the sticking coefficient increases with substrate temperature. This is the case of activated adsorption.
When $E_{dis} < E_{des}$, the sticking coefficient decreases as substrate temperature increases. This is the behaviour of non-activated adsorption.
By measuring the sticking coefficient at different temperatures, it is then possible to extract the value of $E_{dis} - E_{des}$.
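Assuming equal attempt prefactors for dissociation and desorption (so that only the barrier difference matters, A = 1), the two regimes can be illustrated numerically; the barrier values below are arbitrary examples.

```python
import numpy as np

# Illustrate the two temperature regimes listed above using
# s0(T) = 1 / (1 + exp((E_dis - E_des) / (kB * T))), i.e. assuming
# equal attempt prefactors. Barrier values are arbitrary examples.
kB = 8.617e-5                      # Boltzmann constant, eV/K
T = np.array([100.0, 300.0, 600.0])  # substrate temperatures, K

for E_dis, E_des, label in [(0.30, 0.20, "activated"),
                            (0.20, 0.30, "non-activated")]:
    s0 = 1.0 / (1.0 + np.exp((E_dis - E_des) / (kB * T)))
    print(f"{label:>13}: s0(T) =", np.round(s0, 3))
# "activated" rises with T; "non-activated" falls with T.
```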
Experimental techniques
The measurement of adsorption properties relies on controlling and measuring the surface coverage and conditions, including the substrate temperature and the impinging molecular flux or partial pressure. To detect dissociation on the surface, additional techniques are required that can distinguish surface ordering due to the interaction of dissociated fragments, identify desorbed particles, determine the order of kinetics, or measure the chemical bond energies of the adsorbed species. In many experiments, a combination of multiple methods that probe different surface properties is used to form a complete picture of the adsorbed species. Comparisons between the experimental adsorption energy and simulated energies for dissociative and molecular adsorption can also indicate the type of adsorption for a system.
For measurement of adsorption isotherms, a controlled gas pressure and temperature determine the coverage when adsorption and desorption rates are in balance. The coverage can then be measured with various surface sensitive methods like AES or XPS. Often, the coverage can also be related to a change in the surface work function, which can enable faster measurements in otherwise challenging conditions. The shape of the isotherms is sensitive to the order of kinetics of the adsorption and desorption processes, and though the exact forms can be difficult to find, simulations have been used to find general functional forms for isotherms of dissociative adsorption for specific systems.
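As an illustration of how kinetic order shows up in equilibrium measurements, the sketch below compares the standard molecular Langmuir isotherm with the dissociative form for a diatomic. This is the textbook result, with K and P in arbitrary consistent units.

```python
import numpy as np

# Equilibrium Langmuir isotherms: molecular adsorption follows
# theta = K*P / (1 + K*P), while dissociative adsorption of a diatomic
# follows theta = sqrt(K*P) / (1 + sqrt(K*P)); the half-order pressure
# dependence is one signature of dissociation.
K = 1.0
P = np.logspace(-3, 3, 7)  # pressures spanning six decades

theta_mol = K * P / (1.0 + K * P)
theta_dis = np.sqrt(K * P) / (1.0 + np.sqrt(K * P))
for p, m, d in zip(P, theta_mol, theta_dis):
    print(f"P={p:8.3f}  molecular={m:.3f}  dissociative={d:.3f}")
```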
XPS is a surface sensitive method that allows the direct probing of the chemical bonds of the surface atoms, thus being capable of differentiating bond energies corresponding to intact molecules or dissociated fragments. A challenge with this method is that the incident photons can induce surface modifications that are difficult to separate from the effects to be measured. LEED patterns are often combined with other measurements to verify surface structure and recognize ordering of the adsorbates.
Temperature programmed desorption (TPD or TDS) can be used to measure the properties of desorption, namely the desorption energy, order of desorption kinetics and the initial surface coverage. The desorption order contains information about the mechanisms like recombination required for the desorption process. As TPD also measures the masses of the desorbed particles, it can be used to detect individually desorbed dissociated fragments or their different combinations. Presence of masses different from the original molecules, or the detection of additional desorption peaks with higher order kinetics can indicate that the adsorption is dissociative.
Modeling
Density functional theory (DFT) can be used to calculate the change in energy caused by the adsorption and dissociation of molecules. The activation energy is calculated as the highest energy point on the optimal molecular paths of the fragments as they transform from the initial molecular state to the dissociated state. This is the saddle point of the potential energy surface of the process.
Another approach for considering the stretching and dissociation of adsorbates is through the charge-transfer between the electron bands near the Fermi surface using molecular orbital (MO) theory. A strong charge transfer caused by overlap of unoccupied and occupied orbitals weakens the molecular bonds, which lowers or eliminates the barrier for dissociation. The charge transfer can be local or delocalized in terms of the substrate electrons, depending on which orbitals participate in the interaction. The simplest method used for approximating the electronic structure of systems using MO theory is Hartree-Fock self-consistent field, which can be extended to include electron correlations through various approximations.
Applications and examples
Water and transition metals
In atmospheric conditions, the adsorption of water and oxygen on transition metal surfaces is a well-studied phenomenon. It has also been found that dissociated oxygen content on a surface lowers the activation energy for the dissociation of water, which on a clean metal surface can have a high barrier for dissociation. This is explained by the oxygen atoms binding with one hydrogen of the adsorbing water molecule to form an energetically favourable hydroxyl group. Likewise, molecular pre-adsorbed water can be used to lower the barrier for dissociation of oxygen that is needed in metal-catalyzed oxidation reactions. The relevant effects for this promoting role are hydrogen bonding between the water molecule and oxygen, and the electronic modification of the surface by the adsorbed water.
On clean close-packed surfaces of Ag, Au, Pt, Rh and Ni, dissociated oxygen prefers adsorption to hollow sites. Hydroxyl and molecular water prefer to adsorb on low-coordination top sites, while the dissociated hydrogen atoms prefer hollow sites for most transition metals. A typical dissociation pathway on these metals is that as a top-site adsorbed molecule dissociates, at least one fragment migrates to a bridge or hollow site.
The formation and dissociation of water on transition metals like palladium has important applications in reactions for obtaining hydrogen and for the operation of proton-exchange membrane fuel cells, and much research has been conducted to understand the phenomenon. The rate-determining reaction for water formation is the creation of adsorbed OH. However, details of the specific adsorption sites and preferred reaction pathways for water formation have been difficult to determine. From kinetic Monte Carlo simulations combined with DFT calculations of the reaction energetics, it has been found that water formation on Pd(111) is dominated by step-edges through a combination of reactions:
O + H → OH
O2 + H → OOH
OOH → OH + O
OH + OH → H2O + O
OH + H → H2O
At low temperatures and low relative pressure of H2, the dominant reaction path for hydroxyl group formation is the direct association of O and H, and the ratios of each reaction path vary significantly in different conditions.
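As a toy illustration only: the cited work used kinetic Monte Carlo simulations with DFT-derived energetics, whereas the mean-field rate equations below use arbitrary placeholder rate constants and initial coverages, simply to show how the listed reaction network can be integrated.

```python
# Toy mean-field integration of the water-formation network above.
# Rate constants k and initial coverages c are illustrative placeholders.
k = {'r1': 1.0,   # O  + H  -> OH
     'r2': 0.5,   # O2 + H  -> OOH
     'r3': 0.8,   # OOH     -> OH + O
     'r4': 0.3,   # OH + OH -> H2O + O
     'r5': 1.2}   # OH + H  -> H2O

c = {'O': 0.2, 'H': 0.4, 'OH': 0.0, 'O2': 0.1, 'OOH': 0.0, 'H2O': 0.0}
dt, steps = 1e-3, 20000
for _ in range(steps):  # forward-Euler time stepping
    r1 = k['r1'] * c['O'] * c['H']
    r2 = k['r2'] * c['O2'] * c['H']
    r3 = k['r3'] * c['OOH']
    r4 = k['r4'] * c['OH'] ** 2
    r5 = k['r5'] * c['OH'] * c['H']
    c['O']   += dt * (-r1 + r3 + r4)
    c['H']   += dt * (-r1 - r2 - r5)
    c['OH']  += dt * (r1 + r3 - 2 * r4 - r5)
    c['O2']  += dt * (-r2)
    c['OOH'] += dt * (r2 - r3)
    c['H2O'] += dt * (r4 + r5)

print({species: round(v, 4) for species, v in c.items()})
```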
Metal-catalyzed oxidation
The oxidation of carbon monoxide in catalytic converters utilizes a transition metal surface as a catalyst in the reaction
2CO + O2 → 2CO2.
This system has been extensively studied to minimize the emissions of toxic CO from internal combustion engines, and there is a trade-off in the preparation of the Pt catalyst surface between the dissociative adsorption of oxygen and the sticking of CO to the metal surface. A larger step density increases the dissociation of oxygen, but at the same time decreases the probability of CO oxidation. The optimal configuration for the reaction is with a CO on a flat terrace and a dissociated O at a step edge.
Hydrogen economy
The most prevalent method for hydrogen production, steam reforming, relies on transition metal catalysts which dissociatively adsorb the initial molecules of the reaction to form intermediates, which then can recombine to form gaseous hydrogen. Kinetic models of the possible dissociative adsorption paths have been used to simulate the properties of the reaction.
A method for hydrogen purification involves passing the gas through a thin film of Pd-Ag alloy between two gas vessels. The hydrogen gas dissociates on the surface of the film, after which the individual atoms are able to diffuse through the metal and recombine into molecular hydrogen, producing a higher-purity hydrogen atmosphere inside the low-pressure receiving vessel.
A challenge with hydrogen storage and transport through conventional steel vessels is hydrogen-induced cracking, where hydrogen atoms enter the container walls through dissociative adsorption. If enough partial pressure builds up inside the material, this can cause cracks, blistering or embrittlement of the walls.
See also
Adsorption
Chemisorption
References
Surface science
Chemical physics
Physical chemistry
Catalysis | Dissociative adsorption | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,473 | [
"Catalysis",
"Applied and interdisciplinary physics",
"Surface science",
"Condensed matter physics",
"nan",
"Chemical kinetics",
"Physical chemistry",
"Chemical physics"
] |
77,051,679 | https://en.wikipedia.org/wiki/National%20Aerospace%20Standard | National Aerospace Standards (NAS) are U.S. industry standards for the aerospace industry. They are created and maintained by the Aerospace Industries Association (AIA). The Federal Aviation Administration recognizes National Aerospace Standards as "traditional standards" for the purposes of parts approval.
The primary AIA committee responsible for developing standards is the National Aerospace Standards Committee (NASC). Since 1938, NASC has developed more than 2,600 standards for aerospace fasteners and other mechanical parts. Personnel from the defense services, Defense Industrial Supply Center and Defense Electronics Supply Center participate in the preparation of NAS standards, and liaison is maintained with the FAA, NASA, AIA Canada, and the airlines. NAS standards are developed on the basis of user requirements, although coordination is accomplished with suppliers and other materially affected interests.
References
Standards of the United States
Aerospace | National Aerospace Standard | [
"Physics"
] | 170 | [
"Spacetime",
"Space",
"Aerospace"
] |
71,079,573 | https://en.wikipedia.org/wiki/Markov%20chain%20tree%20theorem | In the mathematical theory of Markov chains, the Markov chain tree theorem is an expression for the stationary distribution of a Markov chain with finitely many states. It sums up terms for the rooted spanning trees of the Markov chain, with a positive combination for each tree. The Markov chain tree theorem is closely related to Kirchhoff's theorem on counting the spanning trees of a graph, from which it can be derived. It was first stated by , for certain Markov chains arising in thermodynamics, and proved in full generality by , motivated by an application in limited-memory estimation of the probability of a biased coin.
A finite Markov chain consists of a finite set of states, and a transition probability $p_{i,j}$ for changing from state $i$ to state $j$, such that for each state the outgoing transition probabilities sum to one. From an initial choice of state (which turns out to be irrelevant to this problem), each successive state is chosen at random according to the transition probabilities from the previous state. A Markov chain is said to be irreducible when every state can reach every other state through some sequence of transitions, and aperiodic if, for every state, the possible numbers of steps in sequences that start and end in that state have greatest common divisor one. An irreducible and aperiodic Markov chain necessarily has a stationary distribution, a probability distribution on its states that describes the probability of being on a given state after many steps, regardless of the initial choice of state.
The Markov chain tree theorem considers spanning trees for the states of the Markov chain, defined to be trees, directed toward a designated root, in which all directed edges are valid transitions of the given Markov chain. If a transition from state $i$ to state $j$ has transition probability $p_{i,j}$, then a tree $T$ with edge set $E(T)$ is defined to have weight equal to the product of its transition probabilities:
$$w(T) = \prod_{(i,j) \in E(T)} p_{i,j}.$$
Let $\mathcal{T}_q$ denote the set of all spanning trees having state $q$ at their root. Then, according to the Markov chain tree theorem, the stationary probability $\pi_q$ for state $q$ is proportional to the sum of the weights of the trees rooted at $q$. That is,
$$\pi_q = \frac{1}{Z} \sum_{T \in \mathcal{T}_q} w(T),$$
where the normalizing constant $Z$ is the sum of $w(T)$ over all spanning trees.
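The theorem is straightforward to verify numerically for a small chain. The brute-force sketch below (illustrative, not from the source) enumerates parent assignments for a hypothetical 3-state transition matrix, keeps those that form a spanning tree directed toward the root, and compares the normalized tree weights with the stationary distribution obtained as the left eigenvector of the transition matrix.

```python
import itertools
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])  # each row sums to one
n = P.shape[0]

def tree_weight_sum(root):
    """Sum of weights of all spanning trees directed toward `root`."""
    total = 0.0
    others = [i for i in range(n) if i != root]
    for parents in itertools.product(range(n), repeat=len(others)):
        parent = dict(zip(others, parents))
        ok = True
        for i in others:  # every non-root state must reach the root
            seen, j = set(), i
            while j != root:
                if j in seen or parent[j] == j:  # cycle or self-loop
                    ok = False
                    break
                seen.add(j)
                j = parent[j]
            if not ok:
                break
        if ok:
            w = 1.0
            for i in others:
                w *= P[i, parent[i]]  # product of tree-edge probabilities
            total += w
    return total

weights = np.array([tree_weight_sum(q) for q in range(n)])
print("tree theorem :", np.round(weights / weights.sum(), 6))

vals, vecs = np.linalg.eig(P.T)  # left eigenvectors of P
pi = np.real(vecs[:, np.argmax(np.real(vals))])
print("eigenvector  :", np.round(pi / pi.sum(), 6))
```

Both lines print the same distribution, as the theorem asserts.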
References
Markov processes
Spanning tree | Markov chain tree theorem | [
"Mathematics"
] | 456 | [
"Theorems about stochastic processes",
"Theorems in probability theory"
] |
71,083,305 | https://en.wikipedia.org/wiki/Quantum%20boomerang%20effect | The quantum boomerang effect is a quantum mechanical phenomenon whereby wavepackets launched through disordered media return, on average, to their starting points, as a consequence of Anderson localization and the inherent symmetries of the system. At early times, the initial parity asymmetry of the nonzero momentum leads to asymmetric behavior: nonzero displacement of the wavepackets from their origin. At long times, inherent time-reversal symmetry and the confining effects of Anderson localization lead to correspondingly symmetric behavior: both zero final velocity and zero final displacement.
History
In 1958, Philip W. Anderson introduced the eponymous model of disordered lattices which exhibits localization, the confinement of the electrons' probability distributions within some small volume. In other words, if a wavepacket were dropped into a disordered medium, it would spread out initially but then approach some maximum range. On the macroscopic scale, the transport properties of the lattice are reduced as a result of localization, turning what might have been a conductor into an insulator. Modern condensed matter models continue to study disorder as an important feature of real, imperfect materials.
In 2019, theorists considered the behavior of a wavepacket not merely dropped, but actively launched through a disordered medium with some initial nonzero momentum, predicting that the wavepacket's center of mass would asymptotically return to the origin at long times — the quantum boomerang effect. Shortly after, quantum simulation experiments in cold atom settings confirmed this prediction by simulating the quantum kicked rotor, a model that maps to the Anderson model of disordered lattices.
Description
Consider a wavepacket with initial momentum $k_0$ which evolves under the general Hamiltonian of a Gaussian, uncorrelated, disordered medium:
$$H = \frac{p^2}{2m} + V(x),$$
where $\overline{V(x)} = 0$ and $\overline{V(x)V(x')} = \gamma\,\delta(x - x')$, and the overbar notation indicates an average over all possible realizations of the disorder.
The classical Boltzmann equation predicts that this wavepacket should slow down and localize at some new point — namely, the terminus of its mean free path. However, when accounting for the quantum mechanical effects of localization and time-reversal symmetry (or some other unitary or antiunitary symmetry), the probability density distribution exhibits off-diagonal, oscillatory elements in its eigenbasis expansion that decay at long times, leaving behind only diagonal elements independent of the sign of the initial momentum. Since the direction of the launch does not matter at long times, the wavepacket must return to the origin.
The same destructive interference argument used to justify Anderson localization applies to the quantum boomerang. The Ehrenfest theorem states that the variance (i.e. the spread) of the wavepacket evolves thus:
where the use of the Wigner function allows the final approximation of the particle distribution into two populations of positive and negative velocities, with centers of mass denoted $\langle x_+ \rangle$ and $\langle x_- \rangle$.
A path contributing to $\langle x_- \rangle$ at some time must have negative momentum by definition; since every part of the wavepacket originated at the same positive initial momentum $k_0$, this path from the origin to its endpoint, and from initial momentum $k_0$ to a final negative momentum, can be time-reversed and translated to create another path from the endpoint back to the origin with the same initial and final momenta. This second, time-reversed path is equally weighted in the calculation of $\langle x_- \rangle$ and ultimately results in $\langle x_- \rangle = 0$. The same logic does not apply to $\langle x_+ \rangle$ because there is no initial population in the momentum state $-k_0$. Thus, the wavepacket variance only has the first term:
This yields long-time behavior
where $\ell$ and $\tau$ are the scattering mean free path and scattering mean free time, respectively. The exact form of the boomerang can be approximated using the diagonal Padé approximants extracted from a series expansion derived with the Berezinskii diagrammatic technique.
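A rough numerical illustration, not taken from the cited works: a split-step integration of a Gaussian wavepacket launched with momentum $k_0$ into uncorrelated disorder, with the center of mass averaged over disorder realizations. Every parameter below is an arbitrary illustrative choice, and a clean boomerang (rise and decay of the mean position back toward zero) requires substantially more averaging than shown.

```python
import numpy as np

hbar = m = 1.0
N, L = 2048, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
k0, sigma = 2.0, 5.0            # launch momentum and packet width
dt, steps, realizations = 0.05, 400, 50

mean_x = np.zeros(steps)
rng = np.random.default_rng(0)
for _ in range(realizations):
    V = rng.normal(0.0, 0.3, N) / np.sqrt(dx)        # uncorrelated disorder
    psi = np.exp(-x**2 / (2 * sigma**2) + 1j * k0 * x)
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
    expV = np.exp(-1j * V * dt / hbar)               # potential part of step
    expT = np.exp(-1j * hbar * k**2 * dt / (2 * m))  # kinetic part of step
    for t in range(steps):
        psi = np.fft.ifft(expT * np.fft.fft(expV * psi))
        mean_x[t] += np.sum(x * np.abs(psi)**2) * dx

mean_x /= realizations
print("early <x>:", mean_x[:3].round(3), "late <x>:", mean_x[-3:].round(3))
```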
References
Condensed matter physics
Quantum mechanics | Quantum boomerang effect | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 775 | [
"Theoretical physics",
"Phases of matter",
"Quantum mechanics",
"Materials science",
"Condensed matter physics",
"Matter"
] |
69,557,760 | https://en.wikipedia.org/wiki/Flight-time%20equivalent%20dose | Flight-time equivalent dose (FED) is an informal unit of measurement of ionizing radiation exposure. Expressed in units of flight-time (i.e., flight-seconds, flight-minutes, flight-hours), one unit of flight-time is approximately equivalent to the radiological dose received during the same unit of time spent in an airliner at cruising altitude. FED is intended as a general educational unit to enable a better understanding of radiological dose by converting dose typically presented in sieverts into units of time. FED is only meant as an educational exercise and is not a formally adopted dose measurement.
History
The flight-time equivalent dose concept is the creation of Ulf Stahmer, a Canadian professional engineer working in the field of radioactive materials transport. It was first presented in the poster session at the 18th International Symposium of the Packaging and Transport of Radioactive Materials (PATRAM) held in Kobe, Hyogo, Japan where the poster received an Aoki Award for distinguished poster presentation. In 2018, an article on FED appeared in the peer-reviewed journal The Physics Teacher.
Usage
Flight-time equivalent dose is an informal measurement, so any equivalences are necessarily approximate. It has been found useful to provide context for radiological doses received from various everyday activities and medical procedures.
Dose calculation
FED corresponds to the time spent in an airliner flying at altitude required to receive a corresponding radiological dose. FED is calculated by taking a known dose (typically in millisieverts) and dividing it by the average dose rate (typically in millisieverts per hour) at an altitude of 10,000 m, a typical cruising altitude for a commercial airliner.
While radiological dose at cruising altitudes varies with latitude, for FED calculations, the radiological dose rate at an altitude of 10,000 m has been standardized to be 0.004 mSv/h, about 15 times greater than the average dose rate at the Earth's surface. Using this technique, the FED received from a 0.01 mSv panoramic dental x-ray is approximately equivalent to 2.5 flight-hours; the FED received from eating one banana is approximately equal to 1.5 flight-minutes; and the FED received each year from naturally occurring background radiation (2.4 mSv/year) is approximately equivalent to 600 flight-hours.
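A minimal conversion sketch, assuming only the standardized 0.004 mSv per flight-hour rate given above; the three example doses reproduce the figures quoted in this section.

```python
FED_DOSE_RATE = 0.004  # mSv per flight-hour at 10,000 m (standardized above)

def flight_time_equivalent_hours(dose_mSv):
    """Convert a dose in millisieverts to flight-hours."""
    return dose_mSv / FED_DOSE_RATE

for label, dose in [("panoramic dental x-ray", 0.01),
                    ("one banana", 0.0001),
                    ("annual background", 2.4)]:
    h = flight_time_equivalent_hours(dose)
    print(f"{label}: {dose} mSv = {h:g} flight-hours ({h * 60:g} flight-minutes)")
```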
Radiological exposures and limits
For comparison, a list of activities (including common medical procedures) and their estimated radiological exposures are tabulated below. Regulatory occupational dose limits for the public and radiation workers are also included. Items on this list are represented pictorially in the accompanying illustrations.
See also
Background radiation
Background radiation equivalent time
Banana equivalent dose
List of unusual units of measurement
References
Radioactivity quantities
Background radiation
Equivalent units | Flight-time equivalent dose | [
"Physics",
"Chemistry",
"Mathematics"
] | 562 | [
"Equivalent quantities",
"Units of measurement",
"Physical quantities",
"Quantity",
"Equivalent units",
"Radioactivity quantities",
"Radioactivity"
] |
69,557,765 | https://en.wikipedia.org/wiki/Complex%20hyperbolic%20space | In mathematics, hyperbolic complex space is a Hermitian manifold which is the equivalent of the real hyperbolic space in the context of complex manifolds. The complex hyperbolic space is a Kähler manifold, and it is characterised by being the only simply connected Kähler manifold whose holomorphic sectional curvature is constant equal to -1. Its underlying Riemannian manifold has non-constant negative curvature, pinched between -1 and -1/4 (or -4 and -1, according to the choice of a normalization of the metric): in particular, it is a CAT(-1/4) space.
Complex hyperbolic spaces are also the symmetric spaces associated with the Lie groups $PU(n,1)$. They constitute one of the three families of rank one symmetric spaces of noncompact type, together with real and quaternionic hyperbolic spaces, a classification to which must be added one exceptional space, the Cayley plane.
Construction of the complex hyperbolic space
Projective model
Let $h$ be a pseudo-Hermitian form of signature $(1,n)$ in the complex vector space $\mathbb{C}^{n+1}$. The projective model of the complex hyperbolic space is the projectivized space of all negative vectors for this form:
$$\mathbf{H}^n_{\mathbb{C}} = \{ [v] \in \mathbb{CP}^n : h(v,v) < 0 \}.$$
As an open set of the complex projective space, this space is endowed with the structure of a complex manifold. It is biholomorphic to the unit ball of $\mathbb{C}^n$, as one can see by noting that a negative vector must have nonzero first coordinate, and therefore has a unique representative with first coordinate equal to 1 in the projective space. Writing this representative as $(1, z_1, \ldots, z_n)$, the condition $h(v,v) < 0$ is equivalent to $\sum_{i=1}^n |z_i|^2 < 1$. The map sending the point $(z_1, \ldots, z_n)$ of the unit ball of $\mathbb{C}^n$ to the point $[1 : z_1 : \cdots : z_n]$ of the projective space thus defines the required biholomorphism.
This model is the equivalent of the Poincaré disk model. Unlike the real hyperbolic space, the complex hyperbolic space cannot be defined as a sheet of the hyperboloid $h(v,v) = -1$, because the projection of this hyperboloid onto the projective model has connected fiber $U(1)$ (the fiber being $\{\pm 1\}$ in the real case).
A Hermitian metric is defined on $\mathbf{H}^n_{\mathbb{C}}$ in the following way: if $v$ belongs to the cone of negative vectors and is normalized so that $h(v,v) = -1$, then the restriction of $h$ to the orthogonal space $v^{\perp}$ defines a positive definite Hermitian product on this space, and because the tangent space of $\mathbf{H}^n_{\mathbb{C}}$ at the point $[v]$ can be naturally identified with $v^{\perp}$, this defines a Hermitian inner product on the tangent space. As can be seen by computation, this inner product does not depend on the choice of the representative $v$. In order to have holomorphic sectional curvature equal to -1 and not -4, one needs to renormalize this metric by a constant factor. This metric is a Kähler metric.
Siegel model
The Siegel model of the complex hyperbolic space is the set of points $(z_1, \ldots, z_n)$ of $\mathbb{C}^n$ such that
$$\operatorname{Im}(z_n) > \sum_{k=1}^{n-1} |z_k|^2.$$
It is biholomorphic to the unit ball in $\mathbb{C}^n$ via a Cayley transform.
Boundary at infinity
In the projective model, the complex hyperbolic space identifies with the complex unit ball of dimension $n$, and its boundary can be defined as the boundary of the ball, which is diffeomorphic to the sphere of real dimension $2n-1$. This is equivalent to defining:
$$\partial \mathbf{H}^n_{\mathbb{C}} = \{ [v] \in \mathbb{CP}^n : h(v,v) = 0 \}.$$
As a CAT(0) space, the complex hyperbolic space also has a boundary at infinity. This boundary coincides with the boundary just defined.
The boundary of the complex hyperbolic space naturally carries a CR structure. This structure is also the standard contact structure on the (odd dimensional) sphere.
Group of holomorphic isometries and symmetric space
The group of holomorphic isometries of the complex hyperbolic space is the Lie group $PU(n,1)$. This group acts transitively on the complex hyperbolic space, and the stabilizer of a point is isomorphic to the unitary group $U(n)$. The complex hyperbolic space is thus homeomorphic to the homogeneous space $PU(n,1)/U(n)$. The stabilizer is the maximal compact subgroup of $PU(n,1)$.
As a consequence, the complex hyperbolic space is the Riemannian symmetric space $U(n,1)/(U(n) \times U(1))$, where $U(n,1)$ is the pseudo-unitary group.
The group of holomorphic isometries of the complex hyperbolic space also acts on the boundary of this space, and acts thus by homeomorphisms on the closed disk $\mathbf{H}^n_{\mathbb{C}} \cup \partial \mathbf{H}^n_{\mathbb{C}}$. By Brouwer's fixed point theorem, any holomorphic isometry of the complex hyperbolic space must fix at least one point in this closed disk. There is a classification of isometries into three types:
An isometry is said to be elliptic if it fixes a point in the complex hyperbolic space.
An isometry is said to be parabolic if it does not fix a point in the complex hyperbolic space and fixes a unique point in the boundary.
An isometry is said to be hyperbolic (or loxodromic) if it does not fix a point in the complex hyperbolic space and fixes exactly two points in the boundary.
The Iwasawa decomposition of $PU(n,1)$ is the decomposition $PU(n,1) = KAN$, where $K \cong U(n)$ is the unitary group, $A \cong \mathbb{R}$ is the additive group of real numbers and $N$ is the Heisenberg group of real dimension $2n-1$. Such a decomposition depends on the choice of:
A point $\xi$ in the boundary of the complex hyperbolic space ($N$ is then the group of unipotent parabolic elements fixing $\xi$)
An oriented geodesic line going to $\xi$ at infinity ($A$ is then the group of hyperbolic elements acting as a translation along this geodesic and with no rotational part around it)
The choice of an origin for this geodesic, i.e. a unit speed parametrization whose image is the geodesic ($K$ is then the group of elliptic elements fixing this origin)
For any such decomposition of $PU(n,1)$, the action of the subgroup $AN$ is free and transitive, hence induces a diffeomorphism $AN \simeq \mathbf{H}^n_{\mathbb{C}}$. This diffeomorphism can be seen as a generalization of the Siegel model.
Curvature
The group of holomorphic isometries acts transitively on the tangent complex lines of the hyperbolic complex space. This is why this space has constant holomorphic sectional curvature, which can be computed to be equal to -4 (with the above normalization of the metric). This property characterizes the hyperbolic complex space: up to isometric biholomorphism, there is only one simply connected complete Kähler manifold of given constant holomorphic sectional curvature.
Furthermore, when a Hermitian manifold has constant holomorphic sectional curvature equal to $c$, the sectional curvature of every real tangent plane $P$ is completely determined by the formula:
$$K(P) = \frac{c}{4}\left(1 + 3\cos^2\alpha(P)\right),$$
where $\alpha(P)$ is the angle between $P$ and $iP$, i.e. the infimum of the angles between a vector in $P$ and a vector in $iP$. This angle equals 0 if and only if $P$ is a complex line, and equals $\pi/2$ if and only if $P$ is totally real. Thus the sectional curvature of the complex hyperbolic space varies from -4 (for complex lines) to -1 (for totally real planes).
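A small numerical check of the pinching formula reconstructed above (the function name and sampling are illustrative): with $c = -4$, the sectional curvature interpolates between $-4$ for a complex line ($\alpha = 0$) and $-1$ for a totally real plane ($\alpha = \pi/2$).

```python
import numpy as np

def sectional_curvature(alpha, c=-4.0):
    """K(P) = (c/4) * (1 + 3 cos^2(alpha)) for a real 2-plane P,
    where alpha is the angle between P and iP."""
    return (c / 4.0) * (1.0 + 3.0 * np.cos(alpha) ** 2)

for alpha in np.linspace(0.0, np.pi / 2, 5):
    print(f"alpha = {alpha:.3f} rad -> K = {sectional_curvature(alpha):+.3f}")
# alpha = 0 (complex line):    K = -4
# alpha = pi/2 (totally real): K = -1
```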
In complex dimension 1, every real plane in the tangent space is a complex line: thus the hyperbolic complex space of dimension 1 has constant curvature equal to -1, and by the uniformization theorem, it is isometric to the real hyperbolic plane. Hyperbolic complex spaces can thus be seen as another high-dimensional generalization of the hyperbolic plane, less standard than the real hyperbolic spaces. A third possible generalization is the homogeneous space $SL(n,\mathbb{R})/SO(n)$, which for $n = 2$ again coincides with the hyperbolic plane, but becomes a symmetric space of rank greater than 1 when $n \geq 3$.
Totally geodesic subspaces
Every totally geodesic submanifold of the complex hyperbolic space of dimension n is one of the following:
a copy of a complex hyperbolic space of smaller dimension
a copy of a real hyperbolic space of real dimension smaller than or equal to $n$
In particular, there is no codimension 1 totally geodesic subspace of the complex hyperbolic space.
Link with other metrics on the ball
On the unit ball, the complex hyperbolic metric coincides, up to some scalar renormalization, with the Bergman metric. This implies that every biholomorphism of the ball is actually an isometry of the complex hyperbolic metric.
The complex hyperbolic metric also coincides with the Kobayashi metric.
Up to renormalization, the complex hyperbolic metric is Kähler-Einstein, which means that its Ricci curvature is a multiple of the metric.
See also
Hyperbolic space
Quaternionic hyperbolic space
References
Lie groups
Homogeneous spaces
Complex manifolds
Hyperbolic geometry | Complex hyperbolic space | [
"Physics",
"Mathematics"
] | 1,663 | [
"Lie groups",
"Mathematical structures",
"Group actions",
"Homogeneous spaces",
"Space (mathematics)",
"Topological spaces",
"Algebraic structures",
"Geometry",
"Symmetry"
] |
75,418,219 | https://en.wikipedia.org/wiki/The%20Politics%20of%20Large%20Numbers | The Politics of Large Numbers:A History of Statistical Reasoning is a book by French statistician, sociologist and historian of science, Alain Desrosières, which was originally published in French in 1993. The English translation, by Camille Naish, was published in 1998 by Harvard University Press.
Synopsis
Alain Desrosières's ambition is to reconcile an “Internal” history of the field, focusing on theory building and data collection, with an “External” history, examining the social conditions where and why a discipline develops. In his words, applying a science-in-the-making perspective “the distinction between technical and social objects—underlying the separation between internal and external history—disappears” (p. 5).
The work of Desrosières mobilizes the French style of social analysis of cognitive forms, looking at statistics as the ensemble of concepts, methods, and practices concerned with "making up things that hold".
A central part of the book explores how the socio-political structures of France, Britain, Germany, and the United States affect the establishment and evolution of the national statistical offices in these countries. The author discusses in depth how the activity of categorization, allocating individuals to classes, provides the encoding necessary for the realization of statistical constructs, following Durkheim's motto to 'treat social facts as things', thus creating new entities such as poverty or unemployment. This project, which Desrosières names 'objectification', is also offered by the author as a way to reconcile objective and subjective visions of probabilities, a dichotomy he retraces to the fourteenth-century confrontation between realists and nominalists.
Reactions
Among the critiques of this work is that it reads more as a work of sociology and political economy than as a technical account of how statistical operations developed. Another concerns the balance Desrosières needs to maintain between defending the necessity and legitimacy of critical attacks on statistical concepts and methods in the name of sociopolitical progress, and the stated need for "durably solidified forms" of statistical technique and concepts.
Related readings
Ian Hacking, 2006. The Emergence of Probability : A Philosophical Study of Early Ideas about Probability, Induction and Statistical Inference. Cambridge University Press.
Theodore M. Porter, 1988. The Rise of Statistical Thinking, 1820–1900. Reprint edition. Princeton, NJ: Princeton University Press.
Stephen M. Stigler, 1986. The History of Statistics: The Measurement of Uncertainty before 1900. Cambridge, Mass.: Belknap Press.
See also
Sociology of quantification
References
1993 non-fiction books
French non-fiction books
History books about science
History of probability and statistics
Science and technology studies | The Politics of Large Numbers | [
"Mathematics",
"Technology"
] | 545 | [
"Probability and statistics",
"Science and technology studies",
"History of probability and statistics"
] |
75,419,268 | https://en.wikipedia.org/wiki/Second-order%20Jahn-Teller%20distortion%20in%20main-group%20element%20compounds | Second-order Jahn-Teller distortion (commonly known as pseudo Jahn-Teller distortion) is a singular, general, and powerful approach rigorously based in first-principle vibronic coupling interactions. It enables prediction and explication of molecular geometries that are not necessarily satisfactorily or even correctly explained by semi-empirical theories such as Walsh diagrams, atomic state hybridization, valence shell electron pair repulsion (VSEPR), softness-hardness-based models, aromaticity and antiaromaticity, hyperconjugation, etc.
The application to main-group element compounds utilizes principles of group theory and symmetry. A molecule will distort in order to maximize symmetry-allowed interactions between the highest occupied molecular orbitals and lowest unoccupied molecular orbitals, and thereby stabilize the HOMOs and destabilize the LUMOs (resulting in the overall stabilization of the molecule). The extent of second-order Jahn-Teller distortion is inversely proportional to the energy difference between orbitals. Direct products are used to determine the allowedness of a given interaction: the interaction is allowed if the product of the symmetry of the first molecular orbital, the symmetry of the vibration, and the symmetry of the second molecular orbital contains the totally symmetric irreducible representation of the molecule’s point group. For heavier main-group compounds, molecular orbital interactions are larger due to the decreasing bond strength resulting in a smaller energy difference between the interacting orbitals.
Geometries of heavier group 13 and 14 analogues of multiply bound species
Group 14 analogues of alkenes and alkynes have previously been prepared. Moving down the group, the compounds experience increasing geometric distortion, becoming increasingly trans-bent from the original linear geometry and displaying increasingly limited shortening of the multiple bond. These patterns are also observed in group 13 multiply-bonded compounds. These geometry trends are rationalized below.
Semiempirical approaches
Hybridization
This trend can be rationalized with hybridization – moving down a group, the gap between the ns and np orbitals widens and there is an increasing mismatch between valence orbital sizes. The mismatch leads to lower hybridization – that is, increased nonbonding character on each of the heavier group 13 or 14 atoms involved in multiple bonding, which manifests as increased deviation from the typically expected linear and planar geometries. This rationalization is not especially consistent with the typical approach to multiple bonds in organic chemistry – that is, a single σ-bond and one or two π-bonds.
Double donor-acceptor bonding
This rationalization is simple and preserves the double-bond nature of the group 13 or 14 atom interaction. The multiple bond is not exactly a typical σ+π interaction; rather the two halves of the alkyne analogue are treated as singlet bent monomers and the multiple bond is treated as an aggregation between them, with the spxpy-hybridized filled orbital on one group 13 or 14 atom donating to the vacant pz of the other.
Valence bond resonating lone pair
This rationalization is consistent with valence bond theory and suggests a weakened E-E multiple bond. The electron pair is described as resonating between the two group 13 or 14 atoms, and the resonance is favored by occupation of the empty (but not mandatorily vacant) orbital.
Second-order Jahn-Teller distortion approach
Second-order Jahn-Teller distortion provides a rigorous and first-principles approach to the distortion problem. The interactions between the HOMOs and LUMOs to afford a new set of molecular orbitals is an example of second-order Jahn-Teller distortion.
Cis- and trans-pyramidalization of alkene analogues
The trans-pyramidalization distortion is taken as an example. The frontier molecular orbitals of the undistorted alkene possessing D2h symmetry have symmetries ag (HOMO-1), b2u (HOMO), b1g (LUMO), and b3u (LUMO+1). The symmetry of the trans-pyramidalization vibration is b1g. A triple product of ground state, vibrational mode, and excited state that can be taken is b2u (HOMO) x b1g (trans-pyramidalizing vibrational mode) x b3u (LUMO+1) = ag. Since ag is the totally symmetric representation, the b2u and b3u molecular orbitals participate in an allowed interaction through the trans-pyramidalizing vibrational mode. The molecule will distort in a trans-pyramidal fashion (into C2h symmetry) in order to enable this interaction, which produces a more stabilized HOMO and more destabilized LUMO.
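This symmetry bookkeeping can be automated with the standard D2h character table; the sketch below is illustrative (the function and its names are not from the source). It forms the product of the characters of the two orbitals and the vibrational mode and projects onto the totally symmetric irrep ag, confirming that b2u and b3u couple through the b1g mode while the HOMO-LUMO pair (b2u, b1g) does not.

```python
import numpy as np

# D2h character table; operations: E, C2z, C2y, C2x, i, s(xy), s(xz), s(yz)
chars = {
    'ag':  [1,  1,  1,  1,  1,  1,  1,  1],
    'b1g': [1,  1, -1, -1,  1,  1, -1, -1],
    'b2g': [1, -1,  1, -1,  1, -1,  1, -1],
    'b3g': [1, -1, -1,  1,  1, -1, -1,  1],
    'au':  [1,  1,  1,  1, -1, -1, -1, -1],
    'b1u': [1,  1, -1, -1, -1, -1,  1,  1],
    'b2u': [1, -1,  1, -1, -1,  1, -1,  1],
    'b3u': [1, -1, -1,  1, -1,  1,  1, -1],
}

def triple_product_allowed(orb1, vib, orb2):
    """True if orb1 x vib x orb2 contains the totally symmetric irrep ag."""
    prod = np.multiply(np.multiply(chars[orb1], chars[vib]), chars[orb2])
    return int(np.dot(prod, chars['ag'])) // 8 > 0  # projection onto ag

print(triple_product_allowed('b2u', 'b1g', 'b3u'))  # True: coupling allowed
print(triple_product_allowed('b2u', 'b1g', 'b1g'))  # False: HOMO-LUMO mixing forbidden
```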
This treatment can be repeated for all other combinations of HOMO-1, HOMO and LUMO, LUMO+1. Notably, it is found that the HOMO and LUMO are symmetry-disallowed to mix.
Cis- and trans-bending of alkyne analogues
This distortion can be treated in the same fashion, using the triple product to determine whether or not the distortion from the undistorted linear D∞h symmetry will produce a symmetry-allowed interaction (and therefore, whether or not the distortion will occur).
Pyramidalization and inversion of trivalent group 15 compounds and group 14 radicals
The pyramidalization and energies of inversion of group 15 :MR3 (M = N, P, As, Sb, Bi) and group 14 •MR3 molecules can also be predicted and rationalized using a second-order Jahn-Teller distortion treatment. The “parent” planar molecule possessing D3h symmetry has frontier orbitals of a2” (HOMO) and a1’ (LUMO) symmetries. The pyramidalizing vibration mode has symmetry a2”. The triple product yields the totally symmetric representation a1’, indicating that the molecule will indeed pyramidalize into C3v symmetry.
The energies of inversion can also be predicted and compared. Due to lower energy overlap between the 3p and 1s orbitals in PH3 (versus between 2p and 1s in NH3), the HOMO-LUMO energy gap in PH3 will be smaller than that of NH3. This allows for a stronger interaction between the HOMO and LUMO in second-order Jahn-Teller fashion. The distortion stabilizes the HOMO and destabilizes the LUMO, resulting in a larger barrier to inversion in PH3.
Tetrahedral geometry of tetravalent second- and third-row main-group-element hydrides
Tetravalent main-group-element hydrides of form APH4 (AP = B−, C, N+, O2+, Al−, Si, P+, and S2+, where AP is a tetravalent atom or ion) are known to distort from the square planar to tetrahedral geometry. For all APH4 systems in D4h symmetry, the ground state is a1g. The exact electronic configuration, however, is dependent on the electronegativity of the main group element. The distortion to tetrahedral geometry has b2u symmetry. For these APH4 systems, the a2u→b1g* and eu→eg* one-electron charge-transfer transitions are most active in the b2u mode.
References
Condensed matter physics
Solid-state chemistry | Second-order Jahn-Teller distortion in main-group element compounds | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,522 | [
"Phases of matter",
"Materials science",
"Condensed matter physics",
"nan",
"Matter",
"Solid-state chemistry"
] |
75,421,556 | https://en.wikipedia.org/wiki/Assembloid | An assembloid is an in vitro model that combines two or more organoids, spheroids, or cultured cell types to recapitulate structural and functional properties of an organ. They are typically derived from induced pluripotent stem cells. Assembloids have been used to study cell migration, neural circuit assembly, neuro-immune interactions, metastasis, and other complex tissue processes. The term "assembloid" was coined by Sergiu P. Pașca's lab in 2017.
Generation of assembloids
Assembloids were described in 2017 in a study from a laboratory at Stanford to model forebrain development. Assembloids joining ventral and dorsal forebrain neural organoids demonstrated that cortical interneurons migrate and integrate into synaptically connected cortical microcircuits. This was confirmed by multiple research groups applying similar approaches to model regionalized organoid interactions and study interneuron migration. Assembloids have subsequently been generated to model projections between brain regions, such as cortico-striatal, cortico-spinal, or retino-thalamic. Methods such as Cre recombination combined with G-deleted rabies tracing can be used to identify cells projecting within assembloids; additionally, optogenetic stimulation can demonstrate the assembly of functional neural circuits in vitro.
Assembloid formation starts with the generation of organoids. Initially, human induced pluripotent stem (hiPS) cells are aggregated to generate regionalized organoids through directed differentiation. There are multiple ways in which organoids can be assembled. Regionalized organoids can be put in close proximity resulting in their fusion to generate multi-region assembloids. Alternatively, organoids can be assembled by co-culture with other cell lineages, such as microglia or endothelial cells, or with tissue samples from animal dissection, leading to multi-lineage assembloids. Lastly, organoids can be assembled with morphogenic or organizer-like cells, thus generating polarized assembloids.
The assembloid type depends on the scientific question and the accessibility of cell types required. Major biological fields utilizing the assembloid technique include cancer, gastroenterology, cardiology, and neuroscience. For instance, there are liver assembloids, kidney assembloids, pericytes assembloids to study SARS-COVID2, endometrium assembloids, stomach and colon assembloids, and bladder assembloids.
Types
Assembloids are composed of at least two organoids and/or cells derived from stem cells or primary tissue. They can be assembled to form multi-region or multi-lineage assembloids, as described above.
A. Multi-region assembloids of the nervous system
There are techniques to guide organoid differentiation into specific regions of the nervous system. For example, fusion of thalamic and cortical neural organoids models thalamo-cortical projections of ascending sensory input while cortico-striatal assembloids generate the initial projections of motor planning circuits. Forebrain assembloids model interneuron migration into the cerebral cortex. Cortico-motor assembloids can reconstitute aspects of the cortico-spinal-muscle circuit in vitro. Finally, retinal organoids can be combined with thalamic and cortical organoids to model aspects of the ascending visual pathway.
B. Multi-lineage assembloids of the nervous system
Some cell types of interest are challenging to differentiate within organoids but can be isolated from tissue explants or derived in monolayer culture. These tissue samples or enriched cell populations can then be integrated with organoid(s) of interest to study their interaction. For example, one current limitation of organoids and assembloids is their lack of functional vasculature, which hinders the supply of nutrients and trophic factors. In a technical advancement, researchers have been able to achieve vascularization by combining neural organoids with endothelial organoids and mesenchymal cells or human embryonic stem cell-derived vascular organoids. Next, microglia-like cells derived from hiPS cells can be introduced into midbrain neural organoids to model neuro-immune interactions. Similarly, oligodendrocytes can be generated in neural organoids and then migrate from the ventral forebrain to the dorsal forebrain. Lastly, combining hiPS cell-derived intestinal organoids with neural crest cells can derive assembloids of the enteric nervous system.
Additionally, assembloids can be categorized as inter-individual or inter-species, depending on whether the organoids are combined from different stem cell lines (e.g., control with disease-associated lines) or different species, respectively. These combinations help determine what aspects of development are cell-autonomous.
Disease models and applications
Assembloids help determine the complex pathophysiology of developmental disorders. For example, Timothy syndrome, which affects L-type calcium channels, was modeled in neural assembloid experiments. When dorsal and ventral forebrain organoids were integrated into an assembloid, interneurons migrated into the dorsal cortical neurons. Timothy syndrome-derived interneurons showed impaired migration. The resulting assembloids developed hypersynchronous neuronal activity, hypothesized to be due to abnormal interneuron integration into circuits. Next, Phelan-McDermid syndrome, also known as 22q13.3 deletion syndrome, is a neurodevelopmental disorder with a high risk of autism spectrum disorder that was modeled in assembloids containing cortical and striatal organoids. This research demonstrated increased striatal medium spiny neuron activity in Phelan-McDermid-derived assembloids after fusion of striatal and cortical organoids but not in isolated striatal organoids. Rett syndrome-derived assembloids displayed hypersynchronous activity perhaps due to an increase in calretinin interneurons. Alzheimer's disease risk allele APOE4, which increases the risk of dementia, has been modeled in assembloids. APOE4-derived assembloids of neural organoids combined with microglia demonstrated increased amyloid-beta-42 secretion, a known Alzheimer biomarker. APOE4 microglia in assembloids had a more complex morphology than in two-dimensional culture and had limited amyloid-beta-42 clearance.
Limitations
Despite the research benefits of assembloids, as for any model system, they have limitations. First, assembloids, like organoids, lack vascularisation, which impairs nutrient diffusion to the surface and eventually leads to necrosis in the core, thus limiting their growth. One way to address this limitation is through transplantation. Grafting cortical organoids into the brains of laboratory rats leads to improved growth and neural development. Another critique of both assembloids and organoids is the lack of sensory input, which is important for the maturation and shaping of circuits during embryonic development. Assembloids and organoids do not currently have a blood brain barrier or immune cells, limiting the biological validity for drug screening or disease modeling. There is a temporal limitation on the investigation of clinically relevant pathophysiology; organoids most closely model initial developmental stages corresponding to fetal and infant neurodevelopment and thus may not accurately model later-onset psychiatric disorders or degenerative conditions. Future directions to address this limitation include studies to understand and accelerate developmental clocks. Next, organoids and assembloids have batch-to-batch variability. Guided differentiation methods reduce variability significantly, yet reproducibility still requires optimization. Finally, the derivation and maintenance of organoids and assembloids require expertise and can be time-intensive and expensive.
See also
Organoid
Myelinoid
Scaffold-free 3D cell culture
Induced pluripotent stem cells
References
Biological models
Stem cells
Cell culture techniques | Assembloid | [
"Chemistry",
"Biology"
] | 1,698 | [
"Biochemistry methods",
"Cell culture techniques",
"Biological models"
] |
75,423,232 | https://en.wikipedia.org/wiki/Bismuth%20subhalides | Bismuth-containing solid-state compounds pose an interest to both the physical inorganic chemists as well as condensed matter physicists due to the element's massive spin-orbit coupling, stabilization of lower oxidation states, and the inert pair effect. Additionally, the stabilization of the Bi in the +1 oxidation state gives rise to a plethora of subhalide compounds with interesting electronics and 3D structures.
Overview of subhalide bismuth solid-state chemistry
Topological insulators and the relationship to bismuth solid-state chemistry
Bismuth subhalides, such as Bi4Br4 and β-Bi4I4, have been recently reported as topological insulators. Topological insulators have caught the attention of physical inorganic chemists as well as condensed matter physicists due to the unique physicochemical properties emerging upon transition from bulk to surface states. While the bulk exhibits the energy band gap of a classic insulator, the edge/surface states of the material acquire dissipationless electric transport. The subject has been investigated by condensed matter physicists as well as mathematicians to provide a link between the emerging experimental properties and the modeled topology. Broadly, the material's physics pertains to the Quantum Hall effect, relying upon two pillars: time-reversal symmetry and spin-orbit coupling, the latter dependent on the elemental material composition. Bismuth's heavy pnictogen nature yields a large spin-orbit coupling. Additionally, when bound to heavy halogens, bismuth subhalides give rise to a low-dimensional van der Waals bonded structure, exfoliatable into nanowires.
Structure of β-Bi4I4
Low dimensional van der Waals bonded materials display a fundamental material unit, usually depicted as the simplest molecular formula obeying stoichiometry. A series of such fundamental units align in the bulk material phase due to weak van der Waals interactions. Overall, key advantages conferred by the chemical structure are the ease to scale the materials down to nanostructures under simultaneous conservation of the bulk structure and the reduction in defects amount.
Belonging to the larger class of quasi 1-dimensional van der Waals bonded materials, β-Bi4I4 has been recently reported as a novel topological insulator. The binary bismuth-iodine family class includes the known bismuth(III) iodide along with additional representatives such as α-Bi4I4, Bi14I4, Bi16I4, and Bi18I4. Having the same stoichiometric chemical formula, α-Bi4I4 and β-Bi4I4 show similar solid-state structures yet critically different physicochemical properties. Specifically, α-Bi4I4 represents the trivial insulator phase, while stacking of the bismuth atoms along the b crystallographic axis in the β-Bi4I4 phase yields a different topological insulator phase. Both polymorphs crystallize in the C2/m space group, with α-Bi4I4 having a unit cell volume almost double that of its topological insulator counterpart. The β crystallographic angle is higher in β-Bi4I4: 107.87° vs 92.96°, making β-Bi4I4 more tilted (see images above).
Synthesis
Crystal growth of β-Bi4I4 was achieved through a solid-state reaction between Bi and HgI2 in a ratio of 1:2. The mixture of solid-state precursors was sealed under dynamic vacuum in a quartz ampoule and subjected to a temperature gradient of 250 °C - 210 °C in a two-zone furnace for 20 days. Needle-like blue crystals were obtained, up to a few mm in length and tenths of a mm in diameter.
DFT Calculations
Key to modeling the topology of a material are the special points along the k-vector of the Brillouin zone, accounting for the accurate depiction of the density of states emerging from the electronics of the material. Density functional theory (DFT) analyses predicted an indirect band gap of 0.158 eV in the β-Bi4I4 phase with the valence band maximum and conduction band minimum localized at the Γ and M k-space points, respectively. Interestingly enough, the major contributors to the band structure around the Fermi level are bismuth's p orbitals of even and odd parity, thus giving the gerade and ungerade points of symmetry.
ARPES measurements
The allowed electron energies in the topological insulator were probed with the well-employed angle-resolved photoemission spectroscopy (ARPES). Γ and M space points were found to exhibit binding energies of 0.3 eV and 0.8 eV, respectively. ARPES also probed the Fermi electron velocities along the x and y axes to be 0.1(1)×10⁶ m·s⁻¹ and 0.60(4)×10⁶ m·s⁻¹. The emerging non-trivial states of the topological insulator are expected to show at the space point where the conductive and valence bands almost cross or, in other words, display the smallest band gap. This point indeed showed a binding energy of 0.06 eV as measured by ARPES. ARPES measurements on a different β-Bi4Br4 topological insulator phase show similarity to its iodine counterpart.
Subhalide complexes
A ternary rhodium-centered bipyramidal dibismuth complex is an example of subhalide complexes with interesting geometry and unusual electronic properties, particularly in what has been reported as an example of Möbius aromaticity. The complex exhibits a 5-centered-4-electron bond in the central plane occupied by a Bi5 equatorial pentagon with the rhodium center in the middle. Based on the electronic analysis carried out by Ruck (2003), the bismuth bonding consists of 2-centered-2-electron bonds, namely Bi–Rh and Bi–Br bonds (see structure on the right).
The electronic analysis was carried out starting with counting the available skeletal electrons. Each of the 7 bismuth atoms contributes 3 electrons, for a total of 3 × 7 = 21, while Rh gives all of its 9 electrons and the 8 bridging bromide atoms yield 3 electrons each. The total skeletal electron count is thus 54. The total skeletal electron count gets distributed as follows: 2 electrons per each of the 16 2c-2e− Bi-Br bonds, 2 electrons per each of the 7 2c-2e− Rh-Bi metallic bonds, 2 rhodium lone pairs remaining on the Rh centre (total of 4e−), and 4 electrons for the 5c-4e− bond pertaining to the central pentagon. The sum of electrons used in bonding is therefore 54. Hence, the subhalide complex is electron-precise, i.e., with all of its skeletal electrons involved in chemical bonding.
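The bookkeeping in this paragraph can be checked with a few lines of arithmetic (purely illustrative):

```python
# Skeletal electron count for the Rh-centered Bi/Br cluster described above
available = 7 * 3 + 9 + 8 * 3      # 7 Bi x 3e, Rh x 9e, 8 bridging Br x 3e
used = (16 * 2                     # sixteen 2c-2e Bi-Br bonds
        + 7 * 2                    # seven 2c-2e Rh-Bi bonds
        + 2 * 2                    # two lone pairs on the Rh centre
        + 4)                       # one 5c-4e bond in the Bi5 plane
print(available, used, available == used)  # 54 54 True -> electron-precise
```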
The bonding in such a system was compared to the aromatic cyclopentadienyl anion. Contrary to the π-type all in-phase orbital overlap exhibited by the organic cyclopentadienyl anion, σ-type bonding of the RhBi5 unit yields a phase change for an orbital pair (see figure).
The relative orbital energy diagram is rationalized for each of the systems relying on the Frost-Musulin mnemonic. The two lone pairs stemming from the rhodium metallic center are localized on the lowest-lying doubly degenerate set of molecular orbitals, consistent with the Möbius-type aromaticity. For reference, the electronics of the aromatic organic cyclopentadienyl unit is shown to the right of the rhodium-centered pentagonal Bi5 unit. As can be seen, Hückel rules dictate the molecular orbital splitting is inverted compared to its metallic counterpart, the highest-occupied molecular orbitals this time being doubly degenerate.
See also
Solid-state chemistry
Topological insulator
Bismuth compounds
References
Bismuth compounds
Halides
Cluster chemistry | Bismuth subhalides | [
"Chemistry"
] | 1,657 | [
"Cluster chemistry",
"Organometallic chemistry"
] |
75,424,903 | https://en.wikipedia.org/wiki/Phosphetene | A phosphetene is an unsaturated four-membered organophosphorus heterocycle containing one phosphorus atom. It is a heavier analog of an azetine, or dihydroazete. The first synthesis of a stable, isolable phosphetene was reported in 1985 by Mathey et al. via ring expansion of a phosphirene-metal carbonyl complex. Since then, other synthesis routes for phosphetenes have been developed such as cyclization of phosphabutadienes, [2 + 2] cycloaddition, intramolecular arrangement, addition, and through organometallic intermediates.
The latter route is of particular interest for applications, as synthesis through organometallic intermediates led to colored phosphetene compounds that were able to be incorporated in OLED devices. Phosphetenes can also participate in reactions involving the lone pair of electrons at the phosphorus, ring opening, or ring expansion.
Nomenclature
According to the extended Hantzsch-Widman system of naming heterocyclic parent hydrides, the saturated four-membered ring containing one phosphorus atom is called a phosphetane. The presence of one double bond gives the name phosphetene. Alternately, it can be called a dihydrophosphete, as a phosphete is the cyclobutadiene-like structure containing two double bonds.
There are different constitutional isomers possible, depending on whether the double bond is attached to the phosphorus or not. The phosphorus is assigned position number 1 of the ring, and then the other structural features can be numbered relative to it. They can be distinguished based on where the double bond is in the phosphetene (Δ1-phosphetene vs Δ2-phosphetene) or where the double bond is missing as compared with the phosphete (1,2-dihydrophosphete vs 3,4-dihydrophosphete, the latter also called 2,3-dihydrophosphete).
Synthesis
Via ring expansion
Isolation of the first phosphetene was reported by Mathey et al. in 1985 by the thermolysis of phosphirene-metal pentacarbonyl complexes in the presence of carbon monoxide. This resulted in a 34% yield of an orange solid phosphorus analogue of unsaturated β-lactams, in which the phosphorus atom was coordinated to the metal pentacarbonyl complex. Mathey et al. then proceeded to decomplex the metal pentacarbonyl complex from the phosphetene, but oxidation at the phosphorus takes place spontaneously, resulting in a yellow solid λ5σ4 1,2-dihydrophosphete oxide.
The reaction of phosphatriafulvenes with azides resulted in the ring expansion into 1H-2-iminophosphetes as reported by Regitz et al. in 1994. However, in the presence of excess azide, the Staudinger reaction can take place, which transforms the yellow λ3σ3 1H-2-iminophosphete into a λ5σ4 iminophosphorane product.
Streubel et al. (1999) explored the reaction of phosphirene-metal complexes with various reactants and found that a 1,2-dihydro-1-phosphet-2-one complex can be obtained, in a mixture with other compounds, from the reaction of phosphirene-metal complexes with diethylamine.
Via cyclization of phosphabutadienes
Ring formation from phosphabutadienes was observed when Nielson et al. (1987) obtained 1,2-dihydrophosphetes from the reaction of 1-[bis(trimethylsilyl)amino]phosphadiene with Me3SiN3 or elemental sulfur through a 3-coordinate (methylene)phosphorane intermediate.
Grützmacher et al. (1993) reacted halogenated ylides with AlCl3 to form dihydrophosphetium salts in an intramolecular cyclization reaction. Aromaticity is restored by subsequent reaction with pyridine, then a strong base, sodium bis(trimethylsilyl)amide, to form a neutral λ5-phosphete with a highly polarized P=C bond.
An air-stable, colorless, isolable 1,2-dihydrophosphete intermediate was discovered by Angelici et al. in 1993 during the synthesis of phosphaalkynes from phosphaalkenes.
Streubel et al. (1997) synthesized a η1-3,4-dihydrophosphete ligand complexed to metal pentacarbonyl from η1-2-phosphabutadiene complexes, which was in turn synthesized from the reaction of metal carbene complexes with a chlorophosphane.
Via [2 + 2] cycloaddition reactions
The formation of 1,2-dihydrophosphetes from [2 + 2] cycloaddition reactions involves the reaction of metal phosphaalkane complexes with alkenes or alkynes as reported by various groups, such as Mathey et al. (1990) and Boese et al. (1994).
Via intramolecular rearrangement
Niecke et al. (1994) reacted three equivalents of iminophosphoranes with a diyne, which resulted in a 1,2-dihydrophosphete attached to a diphosphole ring, with the formation of intermediate diphosphetene compounds.
In 1995, Fluck et al. obtained 3,4-dihydrophosphetes from the reaction of 1,3-diphosphetes with CS2, COS, or CO2.
Via addition reaction
The 2,4-diphosphoniodihydrophosphetide cation is an unusually stable synthetic intermediate discovered by Schmidpeter et al. in 1996 from the addition reaction of 1,3-diphosphoniopropenide with chlorophosphines.
Via organometallic intermediates
The synthesis of 1,2-dihydrophosphetes by Doxsee et al. in 1989 from diphenyltitanacyclobutene (prepared from Tebbe's reagent) and dichlorophenylphosphine was remarkable for its clean synthesis and workup; the white 1,2-dihydrophosphetes obtained were inert toward oxidation and formed in high yield (66%). Mathey et al. (1996) also used Tebbe's reagent to prepare a novel bidentate ligand consisting of phosphorus heterocycles of 1-phosphinine-1,2-dihydrophosphetes.
Majoral et al. (1997) were able to synthesize a 1,2-dihydrophosphete-zirconium complex from intramolecular coupling of dialkynyl phosphane and zirconocene-benzyne.
The complex was then treated with an acid to yield π-extended dihydrophosphetes. This method was applied by Hissler et al. in 2014 to study the optical and redox properties of π-extended dihydrophosphetes. With the introduction of various electron-rich substituents on the π-backbone, the dihydrophosphetes displayed color shifts in the visible region, varying from blue to green, and were tested in a multilayer OLED device. In 2023, Cummins et al. synthesized free, uncomplexed phosphet-2-ones in high yield using a phosphinidene transfer agent derived from anthracene. The reaction of the phosphet-2-ones with stabilized Wittig reagents at 100 °C gave, in high yield, 1,2-dihydrophosphete products with a backbone structure resembling Hissler's polyunsaturated dihydrophosphetes used in the OLED devices.
Reactivity
Reactivity at the phosphorus atom
Phosphetenes that have a lone pair at the phosphorus atom behave as two-electron P-donors, for example coordinating to metals. Doxsee et al. (1991) found that structural changes in phosphetene–metal complexes are consistent with increased s-character at the phosphorus.
Ring-opening followed by cycloaddition
Ring-opening of phosphetenes leads to phosphabutadiene derivatives that can react further. Mathey et al. reacted 1,2-dihydrophosphete tungsten pentacarbonyl complexes at high temperatures with dienophiles in [4 + 2] cycloadditions, via a masked 1-phosphabutadiene intermediate, to obtain six-membered phosphorus heterocycles. X-ray crystal structure analysis showed that the bond between the phosphorus and the sp3 carbon is long and weak (1.904 Å), which allows the 1,2-dihydrophosphete to equilibrate with 1-phosphabutadiene. Doxsee et al. likewise observed Michael addition of various compounds to 1,3,4-triphenyl-1,2-dihydrophosphete to form six-membered phosphorus heterocycles.
Ring expansion
Ring strain in four-membered phosphetene rings can be relieved by ring expansion to five-membered phosphole rings, studied by Mathey et al. for the incorporation of O, S, Se, and Pt, and by Schmidpeter et al. for the incorporation of a phosphorus atom.
References
Wikipedia Student Program
Phosphorus heterocycles
Four-membered rings
Unsaturated compounds | Phosphetene | [
"Chemistry"
] | 2,137 | [
"Organic compounds",
"Unsaturated compounds"
] |
75,427,858 | https://en.wikipedia.org/wiki/Imidazole-4-acetaldehyde | Imidazole-4-acetaldehyde is a metabolite of histamine in biological species.
Biological inactivation of histamine
The process of histamine inactivation in biological species involves its metabolism through the oxidative deamination of its primary amino group. This reaction is catalyzed by the enzyme diamine oxidase (DAO). The metabolite produced from this reaction is imidazole-4-acetaldehyde.
Imidazole-4-acetaldehyde is then further oxidized by a NAD-dependent aldehyde dehydrogenase, leading to imidazole-4-acetic acid.
Synthesis under prebiotic conditions
Under prebiotic conditions, imidazole-4-acetaldehyde can be synthesized from erythrose, formamidine, formaldehyde, and ammonia.
Role in fungal amine oxidase and bacterial aldehyde oxidase
In a study of the coupling reaction of a fungal amine oxidase with a bacterial aldehyde oxidase for histamine elimination, imidazole-4-acetaldehyde was not detected in the reaction mixture. Its absence implies that the coupled reaction likely proceeded directly from histamine to imidazole-4-acetic acid, with an apparent yield of 100%, without intermediate accumulation of imidazole-4-acetaldehyde.
Postoperative opioid prediction
A 2022 observational study aimed to identify preoperative serum metabolites that could predict postoperative opioid consumption. Imidazole-4-acetaldehyde was among the metabolites that showed different trends between gastric cancer patients with high postoperative opioid consumption and those with low consumption. The results suggest that imidazole-4-acetaldehyde, along with other metabolites, differed significantly between the two groups and may serve as a potential biomarker for predicting postoperative opioid consumption in gastric cancer patients; the results of this study are, however, inconclusive.
References
Biogenic amines
Amines
Imidazoles
Histamine
Metabolism
Aldehydes | Imidazole-4-acetaldehyde | [
"Chemistry",
"Biology"
] | 554 | [
"Biomolecules by chemical classification",
"Biogenic amines",
"Functional groups",
"Cellular processes",
"Biochemistry",
"Amines",
"Bases (chemistry)",
"Metabolism"
] |
72,592,334 | https://en.wikipedia.org/wiki/Collaborative%20combat%20aircraft | Collaborative combat aircraft (CCA) is a US program for unmanned combat air vehicles (UCAVs) that is considered broadly equivalent to a loyal wingman. CCAs are intended to operate in collaborative teams with the next generation of manned combat aircraft, including sixth-generation fighters and bombers such as the Northrop Grumman B-21 Raider. Unlike the conventional UCAVs, the CCA incorporates artificial intelligence (AI), denoted an "autonomy package", increasing its survivability on the battlefield. It is still expected to cost much less than a manned aircraft with similar capabilities. The US Air Force plans to spend more than $8.9 billion on its CCA programs from fiscal years 2025 to 2029, with an additional $661 million planned for fiscal year 2024. The success of the CCA program may lessen the need for additional manned squadrons.
Characteristics
A CCA is a military drone with an onboard AI control system and capability to carry and deliver a significant military weapons load. Its AI system is envisaged as being significantly lighter and lower-cost than a human pilot with their associated life support systems, but offering comparable capability in flying the aircraft and in mission execution.
Role
The principal application is to elevate the role of human pilots to mission commanders, leaving AIs to operate under their tactical control as high-skill operators of relatively low-cost robotic craft.
CCAs can perform other missions as well, as "a sensor, as a shooter, as a weapons carrier, as a cost reducer".
Capabilities
Although a CCA will be a fraction of the cost of a manned fighter, they would not be considered expendable or even attritable. A CCA would have sufficient intelligence and onboard defense systems to survive on the battlefield. US Air Force Secretary Frank Kendall has described them as playing perhaps "100 roles": remotely controlled versions of targeting pods, electronic warfare pods, or weapons carriers that provide additional sensors and munitions, balancing affordability and capability.
The price point of a CCA will determine how many types of missions a single airframe can perform, with more expensive designs able to be multirole aircraft, while cheaper designs could be modular, performing different tasks on different days, and could afford to be lost in combat. Two increments are planned: increment 1 CCAs will have sensor and targeting systems and focus on carrying additional munitions for manned aircraft; increment 2 CCAs will have greater stealth and autonomy to perform missions including EW and SEAD, and potentially to act as decoys. It is possible two distinct solutions could emerge from this stage, one high-end and "exquisite" and the other more basic, inexpensive, and oriented around a single mission. Service officials started out developing the increment 2 CCA as a high-end, stealthy platform, but wargames showing that large numbers of low-end aircraft would be more effective than small numbers of high-end versions in a simulated Pacific conflict influenced them to rethink their approach.
The USAF is seeking CCAs with greater thrust than the current MQ-28 and the XQ-58.
History
The concept of the CCA arose in the early 2000s. CCA programs include the USAF Next Generation Air Dominance (NGAD) program. The US Navy and USAF plan to be able to control the CCAs and NGADs of either service. The CCA is being developed in collaborative fashion by multiple commands of the USAF: MG Heather L. Pringle of the Air Force Research Laboratory (AFRL); MG R. Scott Jobe of Air Combat Command (ACC); LTG Dale R. White, program executive officer (PEO) for fighters and advanced aircraft; and BG Joseph Kunkel, DCS, Plans and Programs. All four generals agreed on the need to put CCAs into the Joint Simulation Environment.
Defense policy expert Heather Penney has identified five key elements for the collaborative development of crewed-uncrewed teaming of autonomous loyal wingmen, remote pilots of unmanned aerial vehicles (UAVs), and pilots flying separately in manned aircraft (also called manned-unmanned teaming).
Create concepts that will maximize the strengths of both CCA and piloted aircraft working as a team.
Include operators in CCA development to ensure they understand how they will perform in the battlespace.
Warfighters must be able to depend on CCA autonomy.
Warfighters must have assured control over CCA in highly dynamic operations.
Human workloads must be manageable.
The Autonomous Core System, Skyborg's autonomy package, was shown to be portable across multiple airframes; this has led Skyborg to become a Program of Record with a Program Executive Officer (PEO) for acquisition. Skyborg will continue to serve as a science and technology platform.
Most UAVs are remotely piloted, but, as Heather Penney has noted, an AI program piloting a collaborative combat aircraft would still need a mission commander for crewed-uncrewed teaming. In 2020, the Defense Advanced Research Projects Agency (DARPA) AlphaDogfight test program established that AI programs that fly fighter aircraft can overmatch human pilots, to the extent that the AI agents even flew with fine motor control. An autonomy package on the VISTA testbed has demonstrated dogfighting capability, and US Air Force Secretary Frank Kendall flew in the X-62A VISTA while it was under AI control. The NGAD is anticipated to use loyal wingmen (CCAs). Kendall envisions these uncrewed aircraft as performing parts of a larger mission; CCA development can be conducted in parallel with NGAD development, which has to take into account a larger set of requirements. Up to five autonomous CCAs would operate with an NGAD.
Air Force Research Laboratory (AFRL) will test their Skyborg manned-unmanned programs such as Autonomous Air Combat Operations (AACO), and DARPA will test its Air Combat Evolution (ACE) artificial intelligence program. The System for Autonomous Control of Simulation (SACS) software for human interface is being developed by Calspan.
DARPA's LongShot is an air-launched UAV meant to extend the range of a mission and reduce the risk to manned aircraft, which could then remain at standoff range; if LongShot were to use Air Combat Evolution (ACE), missiles launched from a LongShot could more effectively select targets. On March 6, 2023, DARPA chose General Atomics Aeronautical Systems (GA-ASI) to carry the design of the air-launched LongShot drone through Critical Design Review (CDR); a LongShot would itself carry an AMRAAM or Sidewinder missile, greatly extending the range of these missiles. In this way, a Boeing F-15EX Eagle II or similar 4th-generation fighter armed with a LongShot can greatly increase its survivability. GA-ASI is developing a core package (Gambit) for the CCA market.
On 9 December 2022, the Air Force Test Pilot School tested its General Dynamics X-62 VISTA, a modified F-16 Fighting Falcon which can fly autonomously, with 2 different AI packages. By 16 December 2022 the VISTA had flown eight sorties using ACE, and six sorties using AACO, at a rate of two sorties per day. Six F-16s from Eglin AFB will be fitted with autonomy agents, to establish the foundation of the Collaborative Combat Aircraft (CCA) program. The CCA lines of effort were:
Developing the Collaborative combat aircraft platform itself,
developing the autonomy package that will fly a CCA, and
figuring out how to organize, train, equip, and supply the CCA program
On 24 January 2024, the US Air Force awarded contracts to five contractor teams led by Anduril, Boeing, General Atomics, Lockheed Martin, and Northrop Grumman for the development of collaborative combat aircraft.
On 24 April 2024, the US Air Force announced that they had eliminated Boeing, Lockheed Martin, and Northrop Grumman from the Increment I competition and that the Anduril Fury and General Atomics Gambit would be moving forward with development. The Air Force expects to make a final decision between the two companies' offerings by 2026. As the CCA program is expected to result in multiple types of aircraft with varying capabilities and costs, all companies are expected to bid again for follow-on Increments.
On 19 September 2024, General Atomics displayed a full-scale model of a CCA. One such CCA version is a 'missile truck', which would augment the capabilities of a crewed/uncrewed mission. Anduril, a competing CCA vendor also displayed a full-scale model.
Funding
A CCA is estimated to cost between one-quarter and one-half as much as an $80 million Lockheed Martin F-35 Lightning II; the desired cost is between $25 million and $30 million per airframe. US Air Force Secretary Frank Kendall is aiming for an initial fleet of 1,000 CCAs. As elements of a crewed-uncrewed team, two CCAs could be teamed with an NGAD or F-35 (say, two for each of the 200 NGAD platforms and two for each of 300 F-35s) in order to work out concepts for integrating them into the service, but the full inventory could be twice that size. As of 3 July 2024, the Air Force requested reprogramming an additional $150 million for CCA development in 2024, a roughly 40% increase over the $392 million budget previously requested; the FY2025 budget request will reflect an additional increment, and the money for NGAD was adjusted accordingly.
The 26th Secretary of the US Air Force listed CCAs among his top seven priorities in the fiscal year (FY) 2024 budget request. Collaborative combat aircraft entered the FY2024 presidential budget request, with CCA projects estimated at $500 million for perhaps "100 roles" in USAF missions in FY2024. The US Air Force plans to spend more than $6 billion on its CCA programs over the five years from 2023 to 2028.
List of CCAs
Several CCAs are or have been under development.
Examples include:
GA-ASI Gambit
General Atomics XQ-67
General Dynamics X-62 VISTA
Kratos XQ-58 Valkyrie
Skyborg Vanguard program entrants.
Boeing MQ-28 Ghost Bat
See also
Loyal wingman
Index of aviation articles
Notes and references
Unmanned military aircraft of the United States
Robotics
Command and control | Collaborative combat aircraft | [
"Engineering"
] | 2,181 | [
"Robotics",
"Automation"
] |
72,592,525 | https://en.wikipedia.org/wiki/Floating%20cable-stayed%20bridge | A floating cable-stayed bridge is a type of cable-stayed bridge where the towers float on tension-leg submerged material, tethered to the seabed for buoyancy. No floating cable-stayed bridge has been made or planned yet, a floating suspension bridge has been planned in Norway. This bridge could be more stable horizontally across the bridge than floating suspension bridges, the lateral movement force from the wind and current in the water is a problem trying to be resolved by placing the tethered cables at different angles from the floating platform to the seabed.
See also
Cable-stayed suspension bridge
Floating suspension bridge
List of cable-stayed bridges in the United States
List of longest cable-stayed bridge spans
List of longest suspension bridge spans
List of straits
References
Bridges by structural type
Structural engineering | Floating cable-stayed bridge | [
"Engineering"
] | 159 | [
"Structural engineering",
"Civil engineering",
"Construction"
] |
72,596,889 | https://en.wikipedia.org/wiki/Hydrogen%20ozonide | Hydrogen ozonide () is a radical molecule consisting of a hydrogen atom covalently bonded to an ozonide unit.
It is possibly produced in the reaction of the hydroxyl radical with dioxygen: OH• + O2 → HO3•.
It has been detected in a mass spectrometry experiment using HO3+ (protonated ozone) as a precursor.
References
Further reading
Oxoacids
Ozonides | Hydrogen ozonide | [
"Chemistry"
] | 87 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
72,597,265 | https://en.wikipedia.org/wiki/Liposome%20extruder | A liposome extruder is a device that prepares cell membranes, exosomes and also generates nanoscale liposome formulations. The liposome extruder employs the track-etched membrane to filter huge particles and achieve sterile filtration.
Function
A liposome is a spherical vesicle made up of phospholipid bilayers. Phospholipid bilayers have both hydrophobic and hydrophilic regions, an important characteristic of cell membranes. In aqueous solution, the hydrophobic ends of the phospholipid molecules are constrained, often associating with each other, which drives the formation of spherical liposomes. Liposomes are prepared using a liposome extruder, which is characterized by the uniform, narrow size distribution of its output and a highly precise particle-size control mechanism. Complex, toxic, injectable products, such as the antifungal liposomal Amphotericin B or the liposomal cytotoxic anticancer agents doxorubicin, paclitaxel, irinotecan, Adriamycin, and cytarabine, contain liposomes that are prepared using a liposome extruder.
Liposome extrusion relies on the performance and structural characteristics of the lipid bilayers formed by the liposomal phospholipids. When the operating temperature is raised slightly above the transition temperature of the phospholipids, an external extrusion force pushes the large liposome vesicles through polycarbonate membranes with specific pore sizes. Large, multi-compartment liposomes rupture as they pass through the membrane pores and re-assemble into smaller liposomes. Because the polycarbonate membranes have a uniform, vertical distribution of nanopores on their surface, repeatedly passing the large vesicles through a membrane of a specified nanopore size yields liposomes of uniform size, determined by the pore size of the polycarbonate membrane.
Application
Liposome extruders are applied in the formulation of liposomes of homogeneous size distributions.
Types
Hand-Driven liposome extruders
This type of liposome extruder is primarily used in research laboratories, as it can process small sample volumes between 0.25 mL and 2.5 mL. Hand-driven liposome extruders are further categorized into extruders with a thermal-jacketed option and extruders operating at ambient temperature. They are operated manually by pushing a plunger. Extruders operating at ambient temperature can be fitted with a cooling jacket to regulate temperature during liposome extrusion.
Jacketed liposome extruders
Jacketed liposome extruders are used in laboratories and in pilot-scale research. They process volumes between 2 mL and 3 L. The jacketed extruders are fitted with barrels that regulate the temperature of the samples. The extruder is driven by a compressed nitrogen cylinder.
Online liposome extruders
Online liposome extruders process volumes of between 2 mL and 20 L. They are driven by a high-pressure electric pump, making them appropriate for pilot-scale liposome production.
Multiple liposome extruder system
A multiple liposome extruder system is fitted with pressure and temperature sensors and a control panel to regulate liposome production. It processes capacities of between 1 L and 200 L.
References
Membrane biology
Drug delivery devices
Dosage forms
Applied genetics | Liposome extruder | [
"Chemistry"
] | 818 | [
"Pharmacology",
"Membrane biology",
"Drug delivery devices",
"Molecular biology"
] |
74,007,634 | https://en.wikipedia.org/wiki/Operational%20design%20domain | Operational design domain (ODD) is a term for a particular operating context for an automated system, often used in the field of autonomous vehicles. The context is defined by a set of conditions, including environmental, geographical, time of day, and other conditions. For vehicles, traffic and roadway characteristics are included. Manufacturers use ODD to indicate where/how their product operates safely. A given system may operate differently according to the immediate ODD.
The concept presumes that automated systems have limitations. Relating system function to the ODD it supports is important for developers and regulators to establish and communicate safe operating conditions. Systems should operate within those limitations. Some systems recognize the ODD and modify their behavior accordingly. For example, an autonomous car might recognize that traffic is heavy and disable its automated lane change feature.
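As a minimal illustration of such ODD-dependent behavior, the gating of features by sensed conditions can be sketched as follows (Python; the feature names and thresholds are hypothetical and not taken from any specific vehicle or standard):

# Minimal sketch of ODD-dependent feature gating (hypothetical names and thresholds).
def allowed_features(conditions):
    """Return the set of automated features permitted in the sensed context."""
    features = {"lane_keeping", "adaptive_cruise", "lane_change"}
    # Heavy traffic: treat as outside the assumed ODD for automated lane changes.
    if conditions.get("traffic_density", 0.0) > 0.7:
        features.discard("lane_change")
    # Poor visibility: restrict the system further.
    if conditions.get("visibility_m", 10000) < 150:
        features.discard("lane_change")
        features.discard("adaptive_cruise")
    return features

print(allowed_features({"traffic_density": 0.9, "visibility_m": 5000}))
# lane_change is withheld; lane_keeping and adaptive_cruise remain available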
ODD is used for cars, for ships, trains, agricultural robots, and other robots.
Definitions
Various regulators have offered definitions of related terms.
Examples
In 2022, Mercedes-Benz announced a product with an ODD of Level 3 autonomous driving at 130 km/h.
See also
Scenario (vehicular automation)
References
Vehicular automation
Technical specifications
Robotics engineering | Operational design domain | [
"Technology",
"Engineering"
] | 231 | [
"Computer engineering",
"Robotics engineering",
"Vehicular automation",
"Automation",
"nan"
] |
74,009,456 | https://en.wikipedia.org/wiki/Prognosis%20of%20autism | There is currently no evidence of a cure for autism. The degree of symptoms can decrease, occasionally to the extent that people lose their diagnosis of autism; this occurs sometimes after intensive treatment and sometimes not. It is not known how often this outcome happens, with reported rates in unselected samples ranging from 3% to 25%. Although core difficulties tend to persist, symptoms often become less severe with age. Acquiring language before age six, having an IQ above 50, and having a marketable skill all predict better outcomes; independent living is unlikely in autistic people with higher support needs.
Developmental course
There are two possible developmental courses of autism. One course of development is more gradual in nature, with symptoms appearing fairly early in life and persisting. A second course of development is characterized by normal or near-normal development before onset of regression or loss of skills, which is known as regressive autism.
Gradual autism development
Most parents report that the onset of autism features appear within the first or second year of life. This course of development is fairly gradual, in that parents typically report concerns in development over the first two years of life and diagnosis can be made around 3–4 years of age. Overt features gradually begin after the age of six months, become established by age two or three years, and tend to continue through adulthood, although often in more muted form. Some of the early signs of autism in this course include decreased attention at faces, failure to obviously respond when name is called, failure to show interests by showing or pointing, and delayed imaginative play.
Regressive autism development
Regressive autism occurs when a child appears to develop typically but then starts to lose speech and social skills and is subsequently diagnosed with ASD. Other terms used to describe regression in children with autism are autism with regression, autistic regression, setback-type autism, and acquired autistic syndrome.
Within the regressive autism developmental course, there are two patterns. The first pattern is when developmental losses occur in the first 15 months to 3 years. The second pattern, childhood disintegrative disorder (a diagnosis now included under ASD in the DSM, but not the ICD), is characterized by regression after normal development in the first 3 to 4, or even up to 9 years of life.
After the regression, the child follows the standard pattern of autistic neurological development. The term regressive autism refers to the appearance that neurological development has reversed; it is actually only the affected developmental skills, rather than the neurology as a whole, that regresses.
Usually, the apparent onset of regressive autism can be surprising and distressing to parents, who often initially suspect severe hearing loss. Attribution of regression to environmental stress factors may result in a delay in diagnosis.
There is no standard definition for regression. Some children show a mixture of features, with some early delays and some later losses; and there is evidence of a continuous spectrum of behaviors, rather than, or in addition to, a black-and-white distinction, between autism with and without regression. There are several intermediate types of development, which do not neatly fit into either the traditional early onset or the regressive categories, including mixtures of early deficits, failures to progress, subtle diminishment, and obvious losses.
Regression may occur in a variety of domains, including communication, social, cognitive, and self-help skills; however, the most common regression is loss of language. Some children lose social development instead of language; some lose both. Skill loss may be quite rapid, or may be slow and preceded by a lengthy period of no skill progression; the loss may be accompanied by reduced social play or increased irritability. The temporarily acquired skills typically amount to a few words of spoken language, and may include some rudimentary social perception.
The prevalence of regression varies depending on the definition used. If regression is defined strictly to require loss of language, it is less common; if defined more broadly, to include cases where language is preserved but social interaction is diminished, it is more common. Although regressive autism is often thought to be a less common (compared with gradual course of autism onset described above), this remains an area of ongoing debate; some evidence suggests that a pattern of regressive autism may be more common than previously thought. There are some who believe that regressive autism is simply early-onset autism which was recognized at a later date. Researchers have conducted studies to determine whether regressive autism is a distinct subset of ASD, but the results of these studies have contradicted one another.
Differential outcomes
There continues to be a debate over the differential outcomes based on these two developmental courses. Some studies suggest that regression is associated with poorer outcomes and others report no differences between those with early gradual onset and those who experience a regression period. While there is conflicting evidence surrounding language outcomes in autism, some studies have shown that cognitive and language abilities at an early age may help predict language proficiency and production after age 5. Overall, the literature stresses the importance of early intervention in achieving positive longitudinal outcomes.
Academic performance
The number of students identified and served as eligible for autism services in the United States has increased from 5,413 children in 1991–1992 to 370,011 children in the 2010–2011 academic school year. The United States Department of Health and Human Services reported approximately 1 in 68 children are diagnosed with autism at age 8, and onset is typically between ages 2 and 4.
The increasing number of students diagnosed with autism in the schools presents significant challenges to teachers, school psychologists, and other school professionals. These challenges include developing consistent practices that best support the social and cognitive development of the increasing number of autistic students. Although there is considerable research addressing assessment, identification, and support services for autistic children, there is a need for further research focused on these topics within the school context. Further research on appropriate support services for students with ASD will provide school psychologists and other education professionals with specific directions for advocacy and service delivery that aim to enhance school outcomes for students with ASD.
Attempts to identify and use best intervention practices for students with autism also pose a challenge, due to overdependence on popular or well-known interventions and curricula. Some evidence suggests that although these interventions work for some students, there remains a lack of specificity about which type of student, under what environmental conditions (one-on-one, specialized instruction, or general education), and for which targeted deficits they work best. More research is needed to identify what assessment methods are most effective for identifying the level of educational needs for students with ASD. Additionally, children living in higher-resource settings in the United States tend to experience earlier ASD interventions than children in lower-resource settings (e.g. rural areas).
A difficulty for academic performance in students with autism is the tendency to generalize learning. Learning is different for each student, which is the same for students with autism. To assist in learning, accommodations are commonly put into place for students with differing abilities. The existing schema of these students works in different ways and can be adjusted to best support the educational development for each student.
The cost of educating a student with autism in the US would be about $20,600 while educating an average student would be about $12,000.
Though much of the focus on early childhood intervention for autism has centered on high-income countries like the United States, some of the most significant unmet needs for autistic individuals are in low- and middle-income countries. In these contexts, research has been more limited but there is evidence to suggest that some comprehensive care plans can be successfully delivered by non-specialists in schools and in the community.
Adulthood
Many autistic people face significant obstacles in transitioning to adulthood. Autistic people may face socialization issues, which may impact relationships such as community participation, employment, independent living, friendships, dating and marriage, and having children. Some autistic adults are unable to live independently.
Employment
The majority of the economic burden of autism is caused by lost productivity in the job market. Compared to the general population, autistic people are more likely to be unemployed and to have never had a job. About half of people in their 20s with autism are not employed.
In various developed countries, the autism unemployment rate can range from 62% to as high as 85%, although in some it can be as low as 25%. While employers cite hiring concerns about productivity and supervision, experienced employers of autistic people give positive reports of above-average memory and detail orientation, as well as a high regard for rules and procedure, in autistic employees.
From the perspective of the social model of disability, much of this unemployment is caused by the lack of understanding from employers and coworkers. Adding content related to autism in existing diversity training can clarify misconceptions, support employees, and help provide new opportunities for autistic people. As of 2021, new autism employment initiatives by major employers in the United States continue to grow, as the initiative "Autism at Work" grew to 20 of the largest companies in the United States. However, special hiring programs remain largely limited to entry-level technology positions, such as software testing, and exclude those who have talents outside of technology. An alternative approach is systemic neurodiversity inclusion. Developing organizational systems with enough flexibility and fairness to include autistic employees improves the work experience of all employees.
References
Developmental psychology
Learning disabilities | Prognosis of autism | [
"Biology"
] | 1,928 | [
"Behavioural sciences",
"Behavior",
"Developmental psychology"
] |
74,012,932 | https://en.wikipedia.org/wiki/Macromolecular%20Crystallographic%20Information%20File | The Macromolecular Crystallographic Information File (mmCIF) also known as PDBx/mmCIF is a standard text file format for representing macromolecular structure data, developed by the International Union of Crystallography (IUCr) and the Protein Data Bank It is an extension of the Crystallographic Information File (CIF), specifically for macromolecular data, such as proteins and nucleic acids, incorporating elements from the PDB file format.
mmCIF is intended as an alternative to the Protein Data Bank (PDB) format and is now the default format used by the Protein Data Bank.
mmCIF was designed to address limitations of the PDB format in terms of capacity and flexibility, especially with the increasing size and complexity of macromolecular structures being determined.
The format is part of the larger Crystallographic Information Framework, a system of exchange protocols based on data dictionaries and relational rules expressible in different machine-readable manifestations, including, but not restricted to, the original Crystallographic Information File and XML.
Example
An example of the mmCIF file format in key-value style is:
_cell.entry_id 4HHB
_cell.length_a 63.150
_cell.length_b 83.590
_cell.length_c 53.800
_cell.angle_alpha 90.00
_cell.angle_beta 99.34
_cell.angle_gamma 90.00
_cell.Z_PDB 4
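In addition to key-value pairs, the CIF syntax underlying mmCIF expresses tabular data with loop_ constructs, in which a list of item names is followed by rows of values. A schematic atom_site loop might look like the following (the coordinate values here are made up for illustration; real entries carry many more items per atom):

loop_
_atom_site.group_PDB
_atom_site.id
_atom_site.type_symbol
_atom_site.Cartn_x
_atom_site.Cartn_y
_atom_site.Cartn_z
ATOM 1 N 11.104 6.134 -6.504
ATOM 2 C 9.967 6.197 -5.569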
External links
International Union of Crystallography
wwPDB: mmCIF Resources
PDBx/mmCIF conversion service
References
Chemical file formats
Computer file formats
Crystallography
Biological sequence format | Macromolecular Crystallographic Information File | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering",
"Biology"
] | 351 | [
"Materials science stubs",
"Chemistry software",
"Materials science",
"Crystallography stubs",
"Bioinformatics",
"Crystallography",
"Condensed matter physics",
"Biological sequence format",
"Chemical file formats"
] |
74,014,106 | https://en.wikipedia.org/wiki/IEEE%20P80 | IEEE standard P80 is a technical standard of the Institute of Electrical and Electronics Engineers (IEEE), governing outdoor AC substations (although under special circumstances it may also be applied to indoor AC substations). It was last approved on the 28th September, 2017. The standard governs requirements for the grounding and insulation of substations for safety purposes. The standard, along with IEEE P81, is widely used within the industry in power applications.
Specifications
For AC currents at the frequency used for power grids (i.e. 50 or 60 Hz), the threshold of lethality for current passing through the human body is only 0.1 A, although this value can be much higher for short surges, or for higher frequencies. As a result, the standard recommends an emphasis on fast fault clearing time in order to reduce both the probability and duration of any potential exposure of humans to dangerous fault current.
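IEEE Std 80 quantifies this trade-off using Dalziel's equation for tolerable body current, I_B = k/√t_s, where t_s is the shock duration in seconds and k ≈ 0.116 for a 50 kg body weight (k ≈ 0.157 for 70 kg), valid roughly for exposures of 0.03 to 3.0 s. A short sketch of the relationship follows (Python; illustrative only, not a substitute for the standard):

import math

def tolerable_current_amps(t_s, k=0.116):
    """Dalziel's equation: tolerable RMS body current for a 50 kg person
    exposed for t_s seconds (use k=0.157 for a 70 kg person)."""
    return k / math.sqrt(t_s)

for t in (0.1, 0.5, 1.0):
    print(f"clearing time {t:.1f} s -> about {tolerable_current_amps(t) * 1000:.0f} mA tolerable")
# Faster fault clearing permits a larger tolerable body current,
# which is why the standard emphasizes fast fault clearing times.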
In general, a system of horizontal grid conductors and vertical rods and electrodes is recommended. This reduces the aforementioned fault clearing time and ensures that there are multiple paths for a high fault current to dissipate, so that ground potential gradients dangerous to those standing near the substation do not occur. An example implementation consists of copper rods buried 0.3–0.5 m below ground and spaced 3–7 m apart. In situations where space is at a premium, or other difficulties prevent the construction of a proper grounding grid, ground rods may be driven in deeper, and a wire mat may also be used.
References
IEEE standards | IEEE P80 | [
"Technology"
] | 325 | [
"Computer standards",
"IEEE standards"
] |
74,017,634 | https://en.wikipedia.org/wiki/Thioquinanthrene | Thioquinanthrene, also known as thiochinathren, is an aromatic organic chemical compound. It has the chemical formula C18H10N2S2 and reacts with alcoholates or alkoxides. One of the key uses is to act as a catalyst poison in the Rosenmund reduction. It has the IUPAC name of 2,13-dithia-10,21-diazapentacyclo[12.8.0.03,12.04,9.015,20]docosa-1(14),3(12),4,6,8,10,15,17,19,21-decaene.
Rosenmund catalyst poison
In the Rosenmund reaction, an acid chloride is reduced to an aldehyde. Continuing the reduction produces an alcohol. This further reaction is undesirable, as the alcohol will react with the acyl chloride to produce an unwanted ester product. To prevent this over-reduction, the catalyst needs to be poisoned. Thioquinanthrene was used initially, although other materials have been used since.
References
Aromatic compounds
Catalysis
Heterocyclic compounds with 5 rings
Dithiins
Nitrogen heterocycles
Quinolines | Thioquinanthrene | [
"Chemistry"
] | 264 | [
"Catalysis",
"Aromatic compounds",
"Chemical kinetics",
"Organic compounds"
] |
74,020,014 | https://en.wikipedia.org/wiki/Vector%20database | A vector database, vector store or vector search engine is a database that can store vectors (fixed-length lists of numbers) along with other data items. Vector databases typically implement one or more Approximate Nearest Neighbor algorithms, so that one can search the database with a query vector to retrieve the closest matching database records.
Vectors are mathematical representations of data in a high-dimensional space. In this space, each dimension corresponds to a feature of the data, with the number of dimensions ranging from a few hundred to tens of thousands, depending on the complexity of the data being represented. A vector's position in this space represents its characteristics. Words, phrases, or entire documents, as well as images, audio, and other types of data, can all be vectorized.
These feature vectors may be computed from the raw data using machine learning methods such as feature extraction algorithms, word embeddings or deep learning networks. The goal is that semantically similar data items receive feature vectors close to each other.
Vector databases can be used for similarity search, semantic search, multi-modal search, recommendation engines, large language models (LLMs), object detection, etc.
Vector databases are also often used to implement retrieval-augmented generation (RAG), a method to improve domain-specific responses of large language models. The retrieval component of a RAG can be any search system, but is most often implemented as a vector database. Text documents describing the domain of interest are collected, and for each document or document section, a feature vector (known as an "embedding") is computed, typically using a deep learning network, and stored in a vector database. Given a user prompt, the feature vector of the prompt is computed, and the database is queried to retrieve the most relevant documents. These are then automatically added into the context window of the large language model, and the large language model proceeds to create a response to the prompt given this context.
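A minimal sketch of this retrieval step is shown below (Python with NumPy; the toy bag-of-words embed() function stands in for a real learned embedding model, and a brute-force dot product stands in for an approximate index):

import numpy as np

VOCAB = ["liposome", "membrane", "substation", "grounding", "snail", "egg"]

def embed(text):
    # Toy bag-of-words embedding; a real system would use a learned model.
    t = text.lower()
    v = np.array([float(w in t) for w in VOCAB])
    n = np.linalg.norm(v)
    return v / n if n else v

docs = ["Snail egg proteins.", "Substation grounding grids.", "Liposome membrane extrusion."]
index = np.stack([embed(d) for d in docs])   # one row (unit vector) per document

query = embed("How is a liposome membrane formed?")
scores = index @ query                       # cosine similarity (unit vectors)
best = int(np.argmax(scores))
print(docs[best])                            # -> "Liposome membrane extrusion."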
Techniques
The most important techniques for similarity search on high-dimensional vectors include:
Hierarchical Navigable Small World (HNSW) graphs
Locality-sensitive hashing (LSH) and sketching (illustrated in the example after this list)
Product Quantization (PQ)
Inverted Files
and combinations of these techniques.
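Of the techniques above, locality-sensitive hashing is perhaps the simplest to illustrate: in random-hyperplane LSH, each vector is hashed to the sign pattern of its projections onto a few random directions, so that vectors at a small angle to each other tend to land in the same bucket. A minimal NumPy sketch (illustrative, not a production index):

import numpy as np

rng = np.random.default_rng(0)
dim, n_planes = 64, 8
planes = rng.standard_normal((n_planes, dim))   # random hyperplane normals

def lsh_key(v):
    # Sign of the projection onto each hyperplane -> an 8-bit bucket key.
    bits = (planes @ v) > 0
    return bits.tobytes()

x = rng.standard_normal(dim)
y = x + 0.05 * rng.standard_normal(dim)          # small perturbation of x
z = rng.standard_normal(dim)                     # unrelated vector
print(lsh_key(x) == lsh_key(y))  # often True: nearby vectors share a bucket
print(lsh_key(x) == lsh_key(z))  # usually False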
In recent benchmarks, HNSW-based implementations have been among the best performers. Conferences such as the International Conference on Similarity Search and Applications (SISAP) and the Conference on Neural Information Processing Systems (NeurIPS) host competitions on vector search in large databases.
Implementations
See also
References
External links
Machine learning
Types of databases | Vector database | [
"Engineering"
] | 512 | [
"Artificial intelligence engineering",
"Machine learning"
] |
65,460,149 | https://en.wikipedia.org/wiki/Ovorubin | Ovorubin (PcOvo or PcPV1) is the most abundant perivitellin (>60 % total protein) of the perivitelline fluid from Pomacea canaliculata snail eggs. This glyco-lipo-caroteno protein complex is a approx. 300 kDa multimer of a combination of multiple copies of six different ~30 kDa subunits.
Together with the other perivitellins of Pomacea canaliculata eggs, ovorubin serves as a nutrient source for developing embryos, notably in the intermediate and late stages. Moreover, after hatching, the protein is still detected in the lumen of the digestive gland, ready to be endocytosed, and therefore acts as a nutrient source for the newly hatched snail.
Ovorubin contains carbohydrates and carotenoid pigments as its main prosthetic groups, which relate to several physiological roles in the aerial egg-laying strategy of Pomacea. Given that carbohydrates tend to retain water, the high glycosylation of ovorubin (~17% w/w) has been proposed as an embryo defense against water loss. The carotenoid pigments stabilized by ovorubin also provide the eggs with antioxidant and photoprotective capacities, crucial for coping with the harsh conditions of the aerial environment. The carotenoid pigments are also responsible for the bright reddish coloration of ovorubin, and therefore of the snail eggs, which has been interpreted as a warning coloration (aposematism) advertising the presence of deterrents to predators. In fact, field evidence of egg unpalatability is provided by the fact that most animals foraging in habitats where the apple snails live ignore these eggs.
Like most other studied perivitellins from Pomacea snails, ovorubin is highly stable in a wide range of pH values and withstands gastrointestinal digestion, characteristics associated with an antinutritive defense system that deters predation by lowering the nutritional value of the eggs.
References
Proteins | Ovorubin | [
"Chemistry"
] | 437 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
65,466,889 | https://en.wikipedia.org/wiki/Crude%20oil%20stabilisation | Crude oil stabilisation (or stabilization) is a partial distillation process that renders crude oil suitable for storage in atmospheric tanks, or of a quality suitable for sales or pipeline transportation. Stabilization is achieved by subjecting ‘live’ crude to temperature and pressure conditions in a fractionation vessel, which drives off light hydrocarbon components to form a ‘dead’ or stabilized crude oil with a lower vapor pressure.
Specification
Typically, live crude from an oil production installation has a vapor pressure of 120 psia at 100 °F (726 kPa at 37.8 °C) or 125 psig at 60 °F (862 kPa at 15.5 °C). After stabilisation, dead crude has a Reid vapor pressure of 9–10 psig at 100 °F (62–69 kPa at 37.8 °C).
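Since the quoted figures mix absolute (psia) and gauge (psig) units, note that gauge pressure equals absolute pressure minus atmospheric pressure (about 14.7 psi at sea level). A small conversion sketch (Python; constants rounded):

PSI_TO_KPA = 6.89476          # 1 psi in kilopascals
ATM_PSI = 14.696              # standard atmosphere in psi

def psi_to_kpa(psi):
    return psi * PSI_TO_KPA

print(f"120 psia = {psi_to_kpa(120):.0f} kPa absolute")   # ~827 kPa absolute, ~726 kPa gauge
print(f"125 psig = {psi_to_kpa(125):.0f} kPa gauge")      # ~862 kPa gauge
print(f"10 psig  = {psi_to_kpa(10):.0f} kPa gauge")       # ~69 kPa gauge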
The stabilization process
Live crude is heated in a furnace or heat exchanger to an elevated temperature. The crude oil is fed to a stabilizer which is typically a tray or packed tower column that achieves a partial fractionation or distillation of the oil. The heavier components, pentane (C5H12), hexane (C6H14), and higher hydrocarbons (C7+), flow as liquid down through the column where the temperature is increasingly higher. At the bottom of the column, some of the liquid is withdrawn and circulated through a reboiler which adds heat to the tower. Here the lighter fractions are finally driven off as a gas, which rises up through the column. At each tray or stage, the rising gas strips the light ends from heavy ends, and the rising gas becomes richer in the light components and leaner in the heavy ends.
Alternatively, if a finer separation is required the column may be provided with an upper section reflux system making it similar to a distillation column. As the reflux liquid flows down through the column it becomes leaner in light components and richer in heavy ends. Overhead gas from the stabilizer passes through a back pressure control valve that maintains the pressure in the stabilizer.
The stabilised crude oil, comprising pentane and higher hydrocarbons (C5+), is drawn from the base of the stabilizer and is cooled. This may be by heat exchange with the incoming live crude and by cooling water in a heat exchanger. The dead, stabilized crude flows to tanks for storage or to a pipeline for transport to customers such as an oil refinery.
The stabilization tower may typically operate at approximately 50 to 200 psig (345 – 1378 kPa). Where the crude oil contains high levels of hydrogen sulphide (H2S) a sour stabilization is undertaken. This entails operating the stabilizer at the lower end of the pressure range, whereas sweet (low H2S) stabilization would take place at a higher pressure.
Gas processing
The light hydrocarbons stripped from the crude are usually processed to yield useful products. Gas from the top of the stabilizer column is compressed and fed to a de-methanizer column. This column separates the lightest hydrocarbons, methane (CH4) and ethane (C2H6), from the heavier components. Methane and ethane are withdrawn from the top of the column and are used as fuel gas in the plant. Excess gas may be flared.
Liquid from the base of the de-methanizer is routed to the de-ethanizer. Gas from the top is principally ethane and is compressed and returned to the de-methanizer.
Liquid from the base of the de-ethanizer is routed to the de-propanizer. Gas from the top is principally propane (C3H8) and is compressed or chilled for storage and sales.
Liquid from the base of the de-propanizer is principally butane (C4H10) and some heavier components. Butane is stored and sold, and the heavier fraction is sold or spiked into the stabilized crude.
See also
Oil production plant
Oil platform
Upstream (oil industry)
Oil industry
Flotta oil terminal
Sullom Voe terminal
References
Petroleum technology
Oil refining
Distillation | Crude oil stabilisation | [
"Chemistry",
"Engineering"
] | 849 | [
"Separation processes",
"Petroleum engineering",
"Petroleum technology",
"Distillation",
"Oil refining"
] |
65,470,508 | https://en.wikipedia.org/wiki/Quadratic%20Fourier%20transform | In mathematical physics and harmonic analysis, the quadratic Fourier transform is an integral transform that generalizes the fractional Fourier transform, which in turn generalizes the Fourier transform.
Roughly speaking, the Fourier transform corresponds to a change of variables from time to frequency (in the context of harmonic analysis) or from position to momentum (in the context of quantum mechanics). In phase space, this is a 90 degree rotation. The fractional Fourier transform generalizes this to any angle rotation, giving a smooth mixture of time and frequency, or of position and momentum. The quadratic Fourier transform extends this further to the group of all linear symplectic transformations in phase space (of which rotations are a subgroup).
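The claim that rotations form a subgroup of the linear symplectic transformations can be checked numerically: a 2×2 matrix S acting on phase space is symplectic when SᵀJS = J for the standard symplectic form J. A short NumPy sketch (illustrative only):

import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # standard symplectic form on phase space

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

for theta in (np.pi / 2, 0.3, 1.7):        # pi/2 corresponds to the ordinary Fourier transform
    S = rotation(theta)
    assert np.allclose(S.T @ J @ S, J)     # every rotation is symplectic

# A non-rotation symplectic map: a shear with determinant 1 also satisfies S^T J S = J.
shear = np.array([[1.0, 0.5], [0.0, 1.0]])
assert np.allclose(shear.T @ J @ shear, J)
print("rotations and shears are both symplectic")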
More specifically, for every member of the metaplectic group (which is a double cover of the symplectic group) there is a corresponding quadratic Fourier transform.
References
Fourier analysis
Integral transforms
Time–frequency analysis | Quadratic Fourier transform | [
"Physics"
] | 187 | [
"Spectrum (physical sciences)",
"Time–frequency analysis",
"Frequency-domain analysis",
"Quantum mechanics",
"Quantum physics stubs"
] |
65,472,779 | https://en.wikipedia.org/wiki/Kovner%E2%80%93Besicovitch%20measure | In plane geometry the Kovner–Besicovitch measure is a number defined for any bounded convex set describing how close to being centrally symmetric it is. It is the fraction of the area of the set that can be covered by its largest centrally symmetric subset.
Properties
This measure is one for a set that is centrally symmetric, and less than one for sets whose closure is not centrally symmetric. It is invariant under affine transformations of the plane. If a point c is the center of symmetry of the largest centrally symmetric set within a given convex body K, then that centrally symmetric set is the intersection of K with its reflection across c.
Minimizers
The convex sets with the smallest possible Kovner–Besicovitch measure are the triangles, for which the measure is 2/3. The result that triangles are the minimizers of this measure is known as Kovner's theorem or the Kovner–Besicovitch theorem, and the inequality bounding the measure above 2/3 for all convex sets is the Kovner–Besicovitch inequality. The curve of constant width with the smallest possible Kovner–Besicovitch measure is the Reuleaux triangle.
Computational complexity
The Kovner–Besicovitch measure of any given convex polygon with n vertices can be computed efficiently, by determining a translation of the polygon's reflection that has the largest possible overlap with the unreflected polygon.
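The triangle value can be checked numerically by reflecting a triangle through its centroid (which, for triangles, is the optimal center) and measuring the overlap; a sketch assuming the shapely geometry library:

from shapely.geometry import Polygon
from shapely.affinity import scale

tri = Polygon([(0, 0), (1, 0), (0, 1)])
cx, cy = tri.centroid.x, tri.centroid.y

# Point reflection through the centroid = scaling by -1 in both axes about it.
reflected = scale(tri, xfact=-1, yfact=-1, origin=(cx, cy))

overlap = tri.intersection(reflected)   # largest centroid-symmetric subset
print(overlap.area / tri.area)          # -> 0.666..., i.e. 2/3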
History
Branko Grünbaum writes that the Kovner–Besicovitch theorem was first published in Russian, in a 1935 textbook on the calculus of variations by Mikhail Lavrentyev and Lazar Lyusternik, where it was credited to the Soviet mathematician and geophysicist Kovner. Additional proofs were given by Abram Samoilovitch Besicovitch and by István Fáry, who also proved that every minimizer of the Kovner–Besicovitch measure is a triangle.
See also
Estermann measure, a measure of central symmetry defined using supersets in place of subsets
References
External links
A Measure of Central Symmetry, Tanya Khovanova's Math Blog, September 2, 2012
Euclidean symmetries | Kovner–Besicovitch measure | [
"Physics",
"Mathematics"
] | 447 | [
"Functions and mappings",
"Euclidean symmetries",
"Mathematical objects",
"Mathematical relations",
"Symmetry"
] |
78,385,053 | https://en.wikipedia.org/wiki/Keller%27s%20reagent%20%28metallurgy%29 | In metallurgy, Keller's reagent is a mixture of nitric acid, hydrochloric acid, and hydrofluoric acid, used to etch aluminum alloys to reveal their grain boundaries and orientations. It is also sometimes called Dix–Keller reagent, after E. H. Dix, Jr., and Fred Keller of the Aluminum Corporation of America, who pioneered the use of this technique in the late 1920s and early 1930s.
Safety
Keller's reagent contains oxidizing nitric acid and toxic hydrofluoric acid. The reagent and its fumes may be lethal via contact, inhalation of its fumes, etc. Hydrogen produced on contact with some metals may pose a fire hazard.
See also
Aqua regia
References
Metallurgy
Chemical mixtures | Keller's reagent (metallurgy) | [
"Chemistry",
"Materials_science",
"Engineering"
] | 169 | [
"Metallurgy",
"Chemical mixtures",
"Materials science",
"nan"
] |
63,944,266 | https://en.wikipedia.org/wiki/Differentiable%20vector%E2%80%93valued%20functions%20from%20Euclidean%20space | In the mathematical discipline of functional analysis, a differentiable vector-valued function from Euclidean space is a differentiable function valued in a topological vector space (TVS) whose domains is a subset of some finite-dimensional Euclidean space.
It is possible to generalize the notion of derivative to functions whose domain and codomain are subsets of arbitrary topological vector spaces (TVSs) in multiple ways.
But when the domain of a TVS-valued function is a subset of a finite-dimensional Euclidean space, many of these notions become logically equivalent, resulting in a much more limited number of generalizations of the derivative; in addition, differentiability is also better behaved than in the general case.
This article presents the theory of k-times continuously differentiable functions on an open subset Ω of Euclidean space ℝⁿ, which is an important special case of differentiation between arbitrary TVSs.
This importance stems partially from the fact that every finite-dimensional vector subspace of a Hausdorff topological vector space is TVS-isomorphic to a Euclidean space, so that, for example, this special case can be applied to any function whose domain is an arbitrary Hausdorff TVS by restricting it to finite-dimensional vector subspaces.
All vector spaces will be assumed to be over the field 𝔽, where 𝔽 is either the real numbers ℝ or the complex numbers ℂ.
Continuously differentiable vector-valued functions
A map f : X → Y between two topological spaces is said to be of class C^0 if it is continuous. A topological embedding may also be called a C^0-embedding.
Curves
Differentiable curves are an important special case of differentiable vector-valued (i.e. TVS-valued) functions which, in particular, are used in the definition of the Gateaux derivative. They are fundamental to the analysis of maps between two arbitrary topological vector spaces and so also to the analysis of TVS-valued maps from Euclidean spaces, which is the focus of this article.
A continuous map f : I → X from a subset I ⊆ ℝ that is valued in a topological vector space X is said to be differentiable at a point t ∈ I if the following limit in X exists:
f′(t) := lim_{s→t} (f(s) − f(t)) / (s − t),
where, in order for this limit to even be well-defined, t must be an accumulation point of I.
If f is differentiable then it is said to be continuously differentiable or C^1 if its derivative, which is the induced map f′ : I → X, is continuous.
Using induction on k, the map is k-times continuously differentiable or C^k if its derivative f′ is (k − 1)-times continuously differentiable, in which case the k-th derivative of f is the map f^(k) := (f^(k−1))′.
It is called smooth, C^∞, or infinitely differentiable if it is k-times continuously differentiable for every integer k.
For k ∈ ℕ, it is called k-times differentiable if it is (k − 1)-times continuously differentiable and f^(k−1) is differentiable.
A continuous function f : I → X from a non-empty and non-degenerate interval I ⊆ ℝ into a topological space X is called a curve or a C^0 curve in X.
A path in X is a curve in X whose domain is compact, while an arc or C^0-arc in X is a path in X that is also a topological embedding.
For any k, a curve f valued in a topological vector space X is called a C^k-embedding if it is a topological embedding and a C^k curve such that f′(t) ≠ 0 for every t ∈ I; it is called a C^k-arc if it is also a path (or equivalently, also a C^0-arc) in addition to being a C^k-embedding.
Differentiability on Euclidean space
The definitions given above for curves are now extended from functions defined on subsets of ℝ to functions defined on open subsets of finite-dimensional Euclidean spaces.
Throughout, let Ω be an open subset of ℝⁿ, where n ≥ 1 is an integer.
Suppose t₀ ∈ Ω and f : Dom f → Y is a function such that t₀ ∈ Dom f, with t₀ an accumulation point of Dom f. Then f is differentiable at t₀ if there exist n vectors e₁, …, eₙ in Y, called the partial derivatives of f at t₀, such that
lim_{t→t₀} [f(t) − f(t₀) − Σᵢ₌₁ⁿ (tᵢ − t₀ᵢ) eᵢ] / ‖t − t₀‖ = 0 in Y,
where t = (t₁, …, tₙ) and t₀ = (t₀₁, …, t₀ₙ).
If f is differentiable at a point then it is continuous at that point.
If f is differentiable at every point in some subset of its domain then f is said to be differentiable in that subset, where if the subset is not mentioned then this means that it is differentiable at every point in its domain.
If f is differentiable and if each of its partial derivatives is a continuous function, then f is said to be continuously differentiable or C^1.
For k ∈ ℕ, having defined what it means for a function f to be C^k (or k-times continuously differentiable), say that f is C^(k+1), or that f is (k + 1)-times continuously differentiable, if f is continuously differentiable and each of its partial derivatives is C^k.
Say that f is C^∞, smooth, or infinitely differentiable if f is C^k for all k ∈ ℕ.
The support of a function f is the closure (taken in its domain Ω) of the set {x ∈ Ω : f(x) ≠ 0}.
Spaces of Ck vector-valued functions
In this section, the space of smooth test functions and its canonical LF-topology are generalized to functions valued in general complete Hausdorff locally convex topological vector spaces (TVSs). After this task is completed, it is revealed that the topological vector space that was constructed could (up to TVS-isomorphism) have instead been defined simply as the completed injective tensor product of the usual space of smooth test functions with Y.
Throughout, let Y be a Hausdorff topological vector space (TVS), let k ∈ {0, 1, …, ∞}, and let Ω be either:
an open subset of ℝⁿ, where n ≥ 1 is an integer, or else
a locally compact topological space, in which case k can only be 0.
Space of Ck functions
For any k, let C^k(Ω; Y) denote the vector space of all C^k Y-valued maps defined on Ω, and let C^k_c(Ω; Y) denote the vector subspace of C^k(Ω; Y) consisting of all maps in C^k(Ω; Y) that have compact support.
Let C^k(Ω) denote C^k(Ω; 𝔽) and let C^k_c(Ω) denote C^k_c(Ω; 𝔽).
Give C^k(Ω; Y) the topology of uniform convergence of the functions, together with their derivatives of order ≤ k, on the compact subsets of Ω.
Suppose Ω₁ ⊆ Ω₂ ⊆ ⋯ is a sequence of relatively compact open subsets of Ω whose union is Ω and that satisfy cl(Ωᵢ) ⊆ Ωᵢ₊₁ for all i.
Suppose that (U_α)_{α∈A} is a basis of neighborhoods of the origin in Y. Then for any integer ℓ ≤ k, the sets
𝒰(i, ℓ, α) := { f ∈ C^k(Ω; Y) : ∂^q f(x) ∈ U_α for all x ∈ cl(Ωᵢ) and all multi-indices q with |q| ≤ ℓ }
form a basis of neighborhoods of the origin for C^k(Ω; Y) as i, ℓ, and α vary in all possible ways.
If Ω is a countable union of compact subsets and Y is a Fréchet space, then so is C^k(Ω; Y).
Note that 𝒰(i, ℓ, α) is convex whenever U_α is convex.
If Y is metrizable (resp. complete, locally convex, Hausdorff) then so is C^k(Ω; Y).
If is a basis of continuous seminorms for then a basis of continuous seminorms on is:
as and vary in all possible ways.
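For instance, in the classical scalar case Y = 𝔽 with the absolute value as its single seminorm, this family reduces to the usual seminorms defining the topology of C^k(Ω) (a specialization written out here for illustration):
\[
\mu_{i,\ell}(f) = \sup_{x \in \operatorname{cl}(\Omega_i),\; |p| \le \ell} \bigl| (\partial^p f)(x) \bigr| .
\]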
Space of Ck functions with support in a compact subset
The definition of the topology of the space of test functions is now duplicated and generalized.
For any compact subset K ⊆ Ω, let C^k(K;Y) denote the set of all f in C^k(Ω;Y) whose support lies in K (in particular, if f ∈ C^k(K;Y) then the domain of f is Ω rather than K) and give it the subspace topology induced by C^k(Ω;Y).
If K is a compact space and Y is a Banach space, then C^0(K;Y) becomes a Banach space normed by ‖f‖ := sup_(x∈K) ‖f(x)‖.
Let C^k(K) denote C^k(K;𝔽).
For any two compact subsets K ⊆ L ⊆ Ω, the inclusion
In^L_K : C^k(K;Y) → C^k(L;Y)
is an embedding of TVSs, and the union of all C^k(K;Y), as K varies over the compact subsets of Ω, is C_c^k(Ω;Y).
Space of compactly supported Ck functions
For any compact subset K ⊆ Ω, let
In_K : C^k(K;Y) → C_c^k(Ω;Y)
denote the inclusion map and endow C_c^k(Ω;Y) with the strongest topology making all In_K continuous, which is known as the final topology induced by these maps.
The spaces C^k(K;Y) and maps In^L_K form a direct system (directed by the compact subsets of Ω) whose limit in the category of TVSs is C_c^k(Ω;Y) together with the injections In_K.
The spaces C^k(cl(Ωᵢ);Y) and the corresponding inclusion maps also form a direct system (directed by the total order on ℕ) whose limit in the category of TVSs is C_c^k(Ω;Y) together with the injections In_(cl(Ωᵢ)).
Each embedding In_K is an embedding of TVSs.
A subset S of C_c^k(Ω;Y) is a neighborhood of the origin in C_c^k(Ω;Y) if and only if S ∩ C^k(K;Y) is a neighborhood of the origin in C^k(K;Y) for every compact K ⊆ Ω.
This direct limit topology (i.e. the final topology) on C_c^∞(Ω) is known as the canonical LF topology.
If Y is a Hausdorff locally convex space, T is a TVS, and u : C_c^k(Ω;Y) → T is a linear map, then u is continuous if and only if for all compact K ⊆ Ω the restriction of u to C^k(K;Y) is continuous. The statement remains true if "all compact K ⊆ Ω" is replaced with "all K := cl(Ωᵢ)".
Properties
Identification as a tensor product
Suppose henceforth that Y is Hausdorff.
Given a function f ∈ C^k(Ω) and a vector y ∈ Y, let f ⊗ y denote the map f ⊗ y : Ω → Y defined by (f ⊗ y)(x) := f(x)y.
This defines a bilinear map ⊗ : C^k(Ω) × Y → C^k(Ω;Y) into the space of functions whose image is contained in a finite-dimensional vector subspace of Y;
this bilinear map turns this subspace into a tensor product of C^k(Ω) and Y, which we will denote by C^k(Ω) ⊗ Y.
Furthermore, if C_c^k(Ω) ⊗ Y denotes the vector subspace of C^k(Ω) ⊗ Y consisting of all functions with compact support, then C_c^k(Ω) ⊗ Y is a tensor product of C_c^k(Ω) and Y.
If Ω is locally compact then C_c^0(Ω) ⊗ Y is dense in C^0(Ω;Y), while if Ω is an open subset of ℝⁿ then C_c^∞(Ω) ⊗ Y is dense in C^k(Ω;Y).
See also
Notes
Citations
References
Banach spaces
Differential calculus
Euclidean geometry
Functions and mappings
Generalizations of the derivative
Topological vector spaces | Differentiable vector–valued functions from Euclidean space | [
"Mathematics"
] | 1,649 | [
"Mathematical analysis",
"Functions and mappings",
"Vector spaces",
"Calculus",
"Mathematical objects",
"Space (mathematics)",
"Topological vector spaces",
"Mathematical relations",
"Differential calculus"
] |
63,945,981 | https://en.wikipedia.org/wiki/Isolation%20pod | An isolation pod is a capsule which is used to provide medical isolation for a patient. Examples include the Norwegian EpiShuttle and the USAF's Transport Isolation System (TIS) or Portable Bio-Containment Module (PBCM), which are used to provide isolation when transporting patients by air.
Isolation devices were developed in the 1970s for the aerial evacuation of patients with Lassa fever. In 2015, Human Stretcher Transit Isolator (HSTI) pods were used for the aerial evacuation of health workers during the Ebola virus epidemic in Guinea.
Isolation pods can provide a high level of protection (up to biosafety level 4) for frontline health workers. During the 2020–21 COVID-19 outbreak in India, the Ahmedabad-based company Edithheathcare.in developed such pods to isolate infectious patients.
A review of 14 relevant studies concluded that the use of isolation pods for the transport of COVID-19 patients would not normally be appropriate as the use of oxygen masks and other, less-demanding precautions would be adequate. During the COVID-19 pandemic, the UK's NHS hospitals set up separate reception areas, which were called isolation pods, but these were typically temporary accommodation such as a portacabin or tent, without special technical features, just being located at a distance from the permanent facilities.
See also
References
Containment efforts related to the COVID-19 pandemic
Medical transport devices | Isolation pod | [
"Physics"
] | 283 | [
"Physical systems",
"Transport",
"Transport stubs"
] |
63,948,433 | https://en.wikipedia.org/wiki/Quasi-complete%20space | In functional analysis, a topological vector space (TVS) is said to be quasi-complete or boundedly complete if every closed and bounded subset is complete.
This concept is of considerable importance for non-metrizable TVSs.
Properties
Every quasi-complete TVS is sequentially complete.
In a quasi-complete locally convex space, the closure of the convex hull of a compact subset is again compact.
In a quasi-complete Hausdorff TVS, every precompact subset is relatively compact.
If X is a normed space and Y is a quasi-complete locally convex TVS then the set of all compact linear maps of X into Y is a closed vector subspace of L_b(X;Y).
Every quasi-complete infrabarrelled space is barreled.
If is a quasi-complete locally convex space then every weakly bounded subset of the continuous dual space is strongly bounded.
Every quasi-complete nuclear space has the Heine–Borel property.
Examples and sufficient conditions
Every complete TVS is quasi-complete.
The product of any collection of quasi-complete spaces is again quasi-complete.
The projective limit of any collection of quasi-complete spaces is again quasi-complete.
Every semi-reflexive space is quasi-complete.
The quotient of a quasi-complete space by a closed vector subspace may fail to be quasi-complete.
Counter-examples
There exists an LB-space that is not quasi-complete.
See also
References
Bibliography
Functional analysis | Quasi-complete space | [
"Mathematics"
] | 295 | [
"Functional analysis",
"Mathematical objects",
"Functions and mappings",
"Mathematical relations"
] |
63,950,387 | https://en.wikipedia.org/wiki/Infrabarrelled%20space | In functional analysis, a discipline within mathematics, a locally convex topological vector space (TVS) is said to be infrabarrelled (also spelled infrabarreled) if every bounded barrel is a neighborhood of the origin.
Similarly, quasibarrelled spaces are topological vector spaces (TVS) for which every bornivorous barrelled set in the space is a neighbourhood of the origin.
Quasibarrelled spaces are studied because they are a weakening of the defining condition of barrelled spaces, for which a form of the Banach–Steinhaus theorem holds.
Definition
A subset B of a topological vector space (TVS) X is called bornivorous if it absorbs all bounded subsets of X;
that is, if for each bounded subset S of X there exists some scalar r such that S ⊆ rB.
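As a simple illustration (an example added here, not from the source): in a normed space X the closed unit ball B := {x ∈ X : ‖x‖ ≤ 1} is a barrel, and it is bornivorous, since every bounded set S lies in some ball of radius r:
\[
S \subseteq \{ x \in X : \|x\| \le r \} = r B .
\]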
A barrelled set or a barrel in a TVS is a set which is convex, balanced, absorbing and closed.
A quasibarrelled space is a TVS for which every bornivorous barrelled set in the space is a neighbourhood of the origin.
Characterizations
If X is a Hausdorff locally convex space then the canonical injection from X into its bidual is a topological embedding if and only if X is infrabarrelled.
A Hausdorff topological vector space X is quasibarrelled if and only if every bounded closed linear operator from X into a complete metrizable TVS is continuous.
By definition, a linear operator is called closed if its graph is a closed subset of the product of its domain and codomain.
For a locally convex space X with continuous dual X′ the following are equivalent:
X is quasibarrelled.
Every bounded lower semi-continuous semi-norm on X is continuous.
Every β(X′, X)-bounded subset of the continuous dual space X′ is equicontinuous.
If X is a metrizable locally convex TVS then the following are equivalent:
The strong dual of X is quasibarrelled.
The strong dual of X is barrelled.
The strong dual of X is bornological.
Properties
Every quasi-complete infrabarrelled space is barrelled.
A locally convex Hausdorff quasibarrelled space that is sequentially complete is barrelled.
A locally convex Hausdorff quasibarrelled space is a Mackey space, quasi-M-barrelled, and countably quasibarrelled.
A locally convex quasibarrelled space that is also a σ-barrelled space is necessarily a barrelled space.
A locally convex space is reflexive if and only if it is semireflexive and quasibarrelled.
Examples
Every barrelled space is infrabarrelled.
A closed vector subspace of an infrabarrelled space is, however, not necessarily infrabarrelled.
Every product and locally convex direct sum of any family of infrabarrelled spaces is infrabarrelled.
Every separated quotient of an infrabarrelled space is infrabarrelled.
Every Hausdorff barrelled space and every Hausdorff bornological space is quasibarrelled.
Thus, every metrizable TVS is quasibarrelled.
Note that there exist quasibarrelled spaces that are neither barrelled nor bornological.
There exist Mackey spaces that are not quasibarrelled.
There exist distinguished spaces, DF-spaces, and σ-barrelled spaces that are not quasibarrelled.
The strong dual space of a Fréchet space X is distinguished if and only if X is quasibarrelled.
Counter-examples
There exists a DF-space that is not quasibarrelled.
There exists a quasibarrelled DF-space that is not bornological.
There exists a quasibarrelled space that is not a σ-barrelled space.
See also
References
Bibliography
Functional analysis
Topological vector spaces | Infrabarrelled space | [
"Mathematics"
] | 761 | [
"Functions and mappings",
"Functional analysis",
"Vector spaces",
"Mathematical objects",
"Space (mathematics)",
"Topological vector spaces",
"Mathematical relations"
] |
63,951,372 | https://en.wikipedia.org/wiki/Strong%20dual%20space | In functional analysis and related areas of mathematics, the strong dual space of a topological vector space (TVS) is the continuous dual space of equipped with the strong (dual) topology or the topology of uniform convergence on bounded subsets of where this topology is denoted by or The coarsest polar topology is called weak topology.
The strong dual space plays such an important role in modern functional analysis, that the continuous dual space is usually assumed to have the strong dual topology unless indicated otherwise.
To emphasize that the continuous dual space X′ has the strong dual topology, X′_b or X′_β may be written.
Strong dual topology
Throughout, all vector spaces will be assumed to be over the field 𝔽 of either the real numbers ℝ or the complex numbers ℂ.
Definition from a dual system
Let (X, Y, ⟨·,·⟩) be a dual pair of vector spaces over the field 𝔽 of real numbers ℝ or complex numbers ℂ.
For any B ⊆ X and any y ∈ Y, define
\[
|y|_B = \sup_{x \in B} |\langle x, y \rangle| .
\]
Neither X nor Y has a topology, so say a subset B ⊆ X is said to be bounded by an element y ∈ Y if |y|_B < ∞.
So a subset B ⊆ X is called bounded if and only if sup_(x∈B) |⟨x, y⟩| < ∞ for all y ∈ Y.
This is equivalent to the usual notion of bounded subsets when X is given the weak topology induced by Y, which is a Hausdorff locally convex topology.
Let 𝓑 denote the family of all subsets B ⊆ X bounded by elements of Y; that is, 𝓑 is the set of all subsets B ⊆ X such that for every y ∈ Y, |y|_B = sup_(x∈B) |⟨x, y⟩| < ∞.
Then the strong topology β(Y, X, ⟨·,·⟩) on Y, also denoted by b(Y, X, ⟨·,·⟩) or simply β(Y, X) or b(Y, X) if the pairing ⟨·,·⟩ is understood, is defined as the locally convex topology on Y generated by the seminorms of the form
\[
|y|_B = \sup_{x \in B} |\langle x, y \rangle|, \qquad y \in Y, \quad B \in \mathcal{B} .
\]
The definition of the strong dual topology now proceeds as in the case of a TVS.
Note that if X is a TVS whose continuous dual space X′ separates points on X, then X is part of a canonical dual system (X, X′, ⟨·,·⟩),
where ⟨x, x′⟩ := x′(x).
In the special case when X is a locally convex space, the strong topology on the (continuous) dual space X′ (that is, on the space of all continuous linear functionals f : X → 𝔽) is defined as the strong topology β(X′, X), and it coincides with the topology of uniform convergence on bounded sets in X, i.e. with the topology on X′ generated by the seminorms of the form
\[
|f|_B = \sup_{x \in B} |f(x)|,
\]
where B runs over the family of all bounded sets in X.
The space X′ with this topology is called the strong dual space of the space X and is denoted by X′_β.
Definition on a TVS
Suppose that X is a topological vector space (TVS) over the field 𝔽.
Let 𝓑 be any fundamental system of bounded sets of X;
that is, 𝓑 is a family of bounded subsets of X such that every bounded subset of X is a subset of some B ∈ 𝓑;
the set of all bounded subsets of X forms a fundamental system of bounded sets of X.
A basis of closed neighborhoods of the origin in X′_β is given by the polars:
\[
B^{\circ} := \left\{ x' \in X' \;:\; \sup_{x \in B} |x'(x)| \le 1 \right\}
\]
as B ranges over 𝓑.
This is a locally convex topology that is given by the set of seminorms on X′:
\[
|x'|_B := \sup_{x \in B} |x'(x)|
\]
as B ranges over 𝓑.
If X is normable then so is X′_β, which will in fact be a Banach space.
If X is a normed space with norm ‖·‖ then X′ has a canonical norm (the operator norm) given by
\[
\|x'\| := \sup_{\|x\| \le 1} |x'(x)| ;
\]
the topology that this norm induces on X′ is identical to the strong dual topology.
Bidual
The bidual or second dual of a TVS X, often denoted by X′′, is the strong dual of the strong dual of X:
X′′ := (X′_β)′,
where X′_β denotes X′ endowed with the strong dual topology β(X′, X).
Unless indicated otherwise, the vector space X′′ is usually assumed to be endowed with the strong dual topology induced on it by X′_β, in which case it is called the strong bidual of X; that is,
X′′ := (X′_β)′_β,
where the vector space X′′ is endowed with the strong dual topology β(X′′, X′_β).
Properties
Let X be a locally convex TVS.
A convex balanced weakly compact subset of X′ is bounded in X′_β.
Every weakly bounded subset of X′ is strongly bounded.
If X is a barreled space then X's topology is identical to the strong dual topology β(X, X′) and to the Mackey topology on X.
If X is a metrizable locally convex space, then the strong dual of X is a bornological space if and only if it is an infrabarreled space, if and only if it is a barreled space.
If X is a Hausdorff locally convex TVS then X′_β is metrizable if and only if there exists a countable set 𝓑 of bounded subsets of X such that every bounded subset of X is contained in some element of 𝓑.
If X is locally convex, then this topology is finer than all other 𝒢-topologies on X′ when considering only families 𝒢 whose sets are subsets of X.
If X is a bornological space (e.g. metrizable or an LF-space) then X′_β is complete.
If X is a barrelled space, then its topology coincides with the strong topology β(X, X′) on X and with the Mackey topology generated by the pairing (X, X′).
Examples
If X is a normed vector space, then its (continuous) dual space X′ with the strong topology coincides with the Banach dual space; that is, with the space X′ endowed with the topology induced by the operator norm. Conversely, the β(X′, X)-topology on X′ is identical to the topology induced by the operator norm on X′.
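A one-line justification (an illustrative remark added here): every bounded subset B of a normed space lies in some ball of radius r, so each strong-topology seminorm is dominated by a multiple of the operator norm, while the closed unit ball is itself bounded:
\[
\sup_{x \in B} |x'(x)| \;\le\; r \sup_{\|x\| \le 1} |x'(x)| \qquad \text{whenever } B \subseteq \{x \in X : \|x\| \le r\} .
\]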
See also
References
Bibliography
Functional analysis
Topology of function spaces
Linear functionals | Strong dual space | [
"Mathematics"
] | 950 | [
"Functional analysis",
"Functions and mappings",
"Mathematical relations",
"Mathematical objects"
] |
63,953,698 | https://en.wikipedia.org/wiki/The%20Mathematics%20of%20Chip-Firing | The Mathematics of Chip-Firing is a textbook in mathematics on chip-firing games and abelian sandpile models. It was written by Caroline Klivans, and published in 2018 by the CRC Press.
Topics
A chip-firing game, in its most basic form, is a process on an undirected graph, with each vertex of the graph containing some number of chips. At each step, a vertex with more chips than incident edges is selected, and one of its chips is sent to each of its neighbors. If a single vertex is designated as a "black hole", meaning that chips sent to it vanish, then the result of the process is the same no matter what order the other vertices are selected. The stable states of this process are the ones in which no vertex has enough chips to be selected; two stable states can be added by combining their chips and then stabilizing the result. A subset of these states, the so-called critical states, form an abelian group under this addition operation. The abelian sandpile model applies this model to large grid graphs, with the black hole connected to the boundary vertices of the grid; in this formulation, with all eligible vertices selected simultaneously, it can also be interpreted as a cellular automaton. The identity element of the sandpile group often has an unusual fractal structure.
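The stabilization process just described is easy to simulate. The following is a minimal illustrative sketch in Python (the graph, chip counts, and the convention that a vertex fires once it holds at least as many chips as its degree are chosen for the example, not taken from the book):

```python
def stabilize(graph, chips, sink):
    """Fire vertices until no vertex other than the sink can fire.
    graph: dict mapping each vertex to the list of its neighbors.
    chips: dict mapping each vertex to its number of chips.
    sink:  the designated "black hole"; chips sent to it vanish."""
    chips = dict(chips)
    while True:
        # A vertex is ready when it holds at least as many chips as its degree.
        ready = [v for v in graph
                 if v != sink and chips[v] >= len(graph[v])]
        if not ready:
            return chips  # stable configuration reached
        v = ready[0]  # abelian property: any firing order yields the
                      # same stable result
        chips[v] -= len(graph[v])
        for w in graph[v]:
            if w != sink:
                chips[w] += 1

# A 4-cycle with vertex 0 designated as the black hole.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(stabilize(cycle, {0: 0, 1: 5, 2: 0, 3: 0}, sink=0))
```

Because of the abelian property discussed above, replacing `ready[0]` by any other choice of ready vertex leaves the printed stable configuration unchanged.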
The book covers these topics, and is divided into two parts. The first of these parts covers the basic theory outlined above, formulating chip-firing in terms of algebraic graph theory and the Laplacian matrix of the given graph. It describes an equivalence between states of the sandpile group and the spanning trees of the graph, and the group action on spanning trees, as well as similar connections to other combinatorial structures, and applications of these connections in algebraic combinatorics. And it studies chip-firing games on other classes of graphs than grids, including random graphs.
The second part of the book has four chapters devoted to more advanced topics in chip-firing. The first of these generalizes chip-firing from Laplacian matrices of graphs to M-matrices, connecting this generalization to root systems and representation theory. The second considers chip-firing on abstract simplicial complexes instead of graphs. The third uses chip-firing to study graph-theoretic analogues of divisor theory and the Riemann–Roch theorem. And the fourth applies methods from commutative algebra to the study of chip-firing.
The book includes many illustrations, and ends each chapter with a set of exercises making it suitable as a textbook for a course on this topic.
Audience and reception
Although the book may be readable by some undergraduate mathematics students, reviewer David Perkinson suggests that its main audience should be graduate students in mathematics, for whom it could be used as the basis of a graduate course or seminar. He calls it "a thorough introduction to an exciting and growing subject", with "clear and concise exposition". Reviewer Paul Dreyer calls it a "deep dive" into "incredibly deep mathematics".
Another book on the same general topic, published at approximately the same time, is Divisors and Sandpiles: An Introduction to Chip-Firing by Corry and Perkinson (American Mathematical Society, 2018). It is written at a lower level aimed at undergraduate students, covering mainly the material from the first part of The Mathematics of Chip-Firing, and framed more in terms of algebraic geometry than combinatorics.
References
Graph theory
Cellular automata
Critical phenomena
Mathematics textbooks
2018 non-fiction books
CRC Press books | The Mathematics of Chip-Firing | [
"Physics",
"Materials_science",
"Mathematics"
] | 730 | [
"Physical phenomena",
"Discrete mathematics",
"Recreational mathematics",
"Critical phenomena",
"Graph theory",
"Cellular automata",
"Combinatorics",
"Mathematical relations",
"Condensed matter physics",
"Statistical mechanics",
"Dynamical systems"
] |
69,571,689 | https://en.wikipedia.org/wiki/Thulium%20phosphide | Thulium phosphide is an inorganic compound of thulium and phosphorus with the chemical formula TmP.
Synthesis
Reaction of thulium metal with phosphorus:
4 Tm + P4 → 4 TmP
Physical properties
A dense phosphide film prevents further reaction of the underlying metal. After etching gallium arsenide, an epitaxial layer of thulium phosphide can be grown on the surface to obtain a TmP/GaAs heterostructure.
The compound forms crystals of the cubic system, space group Fm3m. TmP crystallizes in a NaCl-type (rock salt) structure at ambient pressure.
Uses
The compound is a semiconductor used in high-power, high-frequency applications and in laser diodes and other photodiodes.
References
Phosphides
Thulium compounds
Semiconductors
Rock salt crystal structure | Thulium phosphide | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 174 | [
"Electrical resistance and conductance",
"Physical quantities",
"Semiconductors",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
69,573,115 | https://en.wikipedia.org/wiki/Thyroid%20Feedback%20Quantile-based%20Index | The Thyroid Feedback Quantile-based Index (TFQI) is a calculated parameter for thyrotropic pituitary function. It was defined to be more robust to distorted data than established markers including Jostel's TSH index (JTI) and the thyrotroph thyroid hormone sensitivity index (TTSI).
How to determine the TFQI
The TFQI can be calculated with
\[
\mathrm{TFQI} = \Phi_{\mathrm{FT4}}(\mathrm{FT4}) - \bigl( 1 - \Phi_{\mathrm{TSH}}(\mathrm{TSH}) \bigr)
\]
from quantiles of FT4 and TSH concentration (as determined based on their cumulative distribution functions Φ). Per definition, the TFQI has a mean of 0 and a standard deviation of 0.37 in a reference population. This explains the reference range of −0.74 to +0.74.
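A minimal sketch of this calculation in Python (assuming Gaussian reference distributions for FT4 and log-transformed TSH; the reference means and standard deviations below are placeholders chosen for the example, not values from the source):

```python
from math import erf, log, sqrt

def normal_cdf(x, mean, sd):
    """Cumulative distribution function of a normal distribution."""
    return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))

def tfqi(ft4, tsh, ft4_mean, ft4_sd, log_tsh_mean, log_tsh_sd):
    """TFQI = quantile of FT4 minus (1 - quantile of TSH)."""
    q_ft4 = normal_cdf(ft4, ft4_mean, ft4_sd)
    q_tsh = normal_cdf(log(tsh), log_tsh_mean, log_tsh_sd)
    return q_ft4 - (1.0 - q_tsh)

# Placeholder reference-population parameters, not values from the source.
print(tfqi(ft4=1.2, tsh=2.0,
           ft4_mean=1.15, ft4_sd=0.15,
           log_tsh_mean=0.4, log_tsh_sd=0.6))
```

Values near 0 indicate a typical central set point; persistently positive values indicate higher FT4 together with non-suppressed TSH, i.e. reduced pituitary sensitivity to thyroid hormone.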
Reference range
Clinical significance
Higher values of TFQI are associated with obesity, metabolic syndrome, impaired renal function, diabetes, and diabetes-related mortality. In a large population of community-dwelling euthyroid subjects the thyroid feedback quantile-based index predicted all-cause mortality, even after adjustment for other established risk factors and comorbidities.
A cross-sectional study from Spain observed increased prevalence of type 2 diabetes, atrial fibrillation, ischemic heart disease and hypertension in persons with elevated PTFQI.
Serum concentrations of adipocyte fatty acid-binding protein (A-FABP) are significantly correlated with TFQI, suggesting some form of cross-talk between adipose tissue and the HPT axis.
TFQI results are also elevated in takotsubo syndrome, potentially reflecting type 2 allostatic load in the situation of psychosocial stress. Reductions have been observed in subjects with schizophrenia after initiation of therapy with oxcarbazepine and quetiapine, potentially reflecting declining allostatic load.
Despite its positive association with metabolic syndrome and type 2 allostatic load, a large population-based study failed to identify an association with the risks of dyslipidemia and non-alcoholic fatty liver disease (NAFLD).
See also
Thyroid function tests
Thyroid's secretory capacity
Sum activity of peripheral deiodinases
Jostel's TSH index
Thyrotroph Thyroid Hormone Sensitivity Index
References
Chemical pathology
Blood tests
Endocrine procedures
Thyroidological methods
Thyroid homeostasis
Structure parameters of thyroid function
Static endocrine function tests | Thyroid Feedback Quantile-based Index | [
"Chemistry",
"Biology"
] | 468 | [
"Biochemistry",
"Blood tests",
"Chemical pathology",
"Structure parameters of thyroid function"
] |
71,095,298 | https://en.wikipedia.org/wiki/Arsenide%20telluride | Arsenide tellurides or telluride arsenides are compounds containing anions composed of telluride (Te2−) and arsenide (As3−). They can be considered as mixed anion compounds. Related compounds include the arsenide sulfides, arsenide selenides, antimonide tellurides, and phosphide tellurides. Some are in the category of arsenopyrite-type compounds with As:Te of 1:1. Yet others are layered with As:Te of 1:2.
Arsenide telluride compounds can be made by heating the elements together.
List
References
Arsenides
Tellurides
Mixed anion compounds | Arsenide telluride | [
"Physics",
"Chemistry"
] | 147 | [
"Ions",
"Matter",
"Mixed anion compounds"
] |
71,097,631 | https://en.wikipedia.org/wiki/Plant%20Molecular%20Biology | The Plant Molecular Biology is a peer-reviewed scientific journal covering all aspects of plant molecular biology. It was established in 1981 and is published by Springer Science+Business Media. The editor-in-chief is Motoaki Seki.
According to the Journal Citation Reports, the journal has a 2020 impact factor of 4.076.
References
External links
Academic journals established in 1981
Biochemistry journals
English-language journals
Springer Science+Business Media academic journals | Plant Molecular Biology | [
"Chemistry"
] | 90 | [
"Biochemistry journals",
"Biochemistry literature"
] |
71,107,103 | https://en.wikipedia.org/wiki/Stability%20of%20matter | In physics, the stability of matter refers to the ability of a large number of charged particles, such as electrons and protons, to form macroscopic objects without collapsing or blowing apart due to electromagnetic interactions. Classical physics predicts that such systems should be inherently unstable due to attractive and repulsive electrostatic forces between charges, and thus the stability of matter was a theoretical problem that required a quantum mechanical explanation.
The first solution to this problem was provided by Freeman Dyson and Andrew Lenard in 1967–1968, but a shorter and more conceptual proof was found later by Elliott Lieb and Walter Thirring in 1975 using the Lieb–Thirring inequality. The stability of matter is partly due to the uncertainty principle and the Pauli exclusion principle.
Description of the problem
In statistical mechanics, the existence of macroscopic objects is usually explained in terms of the behavior of the energy or the free energy with respect to the total number N of particles. More precisely, the ground-state energy should be a linear function of N for large values of N.
In fact, if the ground-state energy behaves proportionally to N^a for some a > 1, then pouring two glasses of water together would release an additional energy proportional to (2^a − 2)N^a, which is enormous for large N. A system is called stable of the second kind or thermodynamically stable when the free energy is bounded from below by a linear function of N. Upper bounds are usually easy to show in applications, and this is why scientists have worked more on proving lower bounds.
Neglecting other forces, it is reasonable to assume that ordinary matter is composed of negative and positive non-relativistic charges (electrons and ions), interacting solely via the Coulomb's interaction. A finite number of such particles always collapses in classical mechanics, due to the infinite depth of the electron-nucleus attraction, but it can exist in quantum mechanics thanks to Heisenberg's uncertainty principle. Proving that such a system is thermodynamically stable is called the stability of matter problem and it is very difficult due to the long range of the Coulomb potential. Stability should be a consequence of screening effects, but those are hard to quantify.
Let us denote by
\[
H_{N,K} = - \sum_{i=1}^{N} \frac{\Delta_{x_i}}{2} - \sum_{k=1}^{K} \frac{\Delta_{R_k}}{2 m_k} - \sum_{i=1}^{N} \sum_{k=1}^{K} \frac{z_k}{|x_i - R_k|} + \sum_{1 \le i < j \le N} \frac{1}{|x_i - x_j|} + \sum_{1 \le k < l \le K} \frac{z_k z_l}{|R_k - R_l|}
\]
the quantum Hamiltonian of N electrons and K nuclei of charges z₁, …, z_K and masses m₁, …, m_K in atomic units. Here Δ denotes the Laplacian, which is the quantum kinetic energy operator. At zero temperature, the question is whether the ground state energy (the minimum of the spectrum of H_{N,K}) is bounded from below by a constant times the total number of particles:
\[
H_{N,K} \ge -C\,(N + K). \tag{1}
\]
The constant C can depend on the largest number q of spin states for each particle as well as the largest value z of the charges z_k. It should ideally not depend on the masses so as to be able to consider the infinite mass limit, that is, classical nuclei.
History
19th century physics
At the end of the 19th century it was understood that electromagnetic forces held matter together. However, two problems co-existed. Earnshaw's theorem from 1842 proved that no charged body can be in a stable equilibrium under the influence of electrostatic forces alone. The second problem was that James Clerk Maxwell had shown that an accelerated charge produces electromagnetic radiation, which in turn damps its motion. In 1900, Joseph Larmor suggested the possibility of an electromagnetic system with electrons in orbits inside matter. He showed that if such a system existed, it could be scaled down by scaling distances and vibration times; however, this suggested a modification to Coulomb's law at the level of molecules. Classical physics was thus unable to explain the stability of matter, which could only be explained with quantum mechanics, developed at the beginning of the 20th century.
Dyson–Lenard solution
Freeman Dyson showed in 1967 that if all the particles are bosons, then the inequality (1) cannot be true and the system is thermodynamically unstable. It was in fact later proved that in this case the energy goes like −N^(7/5) instead of being linear in N.
It is therefore important that either the positive or negative charges are fermions. In other words, stability of matter is a consequence of the Pauli exclusion principle. In real life electrons are indeed fermions, but finding the right way to use Pauli's principle and prove stability turned out to be remarkably difficult. Michael Fisher and David Ruelle formalized the conjecture in 1966. According to Dyson, Fisher and Ruelle offered a bottle of Champagne to anybody who could prove it. Dyson and Lenard found the proof of (1) a year later and therefore got the bottle.
Lieb–Thirring inequality
As was mentioned before, stability is a necessary condition for the existence of macroscopic objects, but it does not immediately imply the existence of thermodynamic functions. One should really show that the energy really behaves linearly in the number of particles. Based on the Dyson–Lenard result, this was solved in an ingenious way by Elliott Lieb and Joel Lebowitz in 1972.
According to Dyson himself, the Dyson–Lenard proof is "extraordinarily complicated and difficult" and relies on deep and tedious analytical bounds. The obtained constant in (1) was also very large. In 1975, Elliott Lieb and Walter Thirring found a simpler and more conceptual proof, based on a spectral inequality, now called the Lieb–Thirring inequality.
They got a constant which was several orders of magnitude smaller than the Dyson–Lenard constant and had a realistic value.
They arrived at the final inequality
\[
H_{N,K} \ge -C(z, q)\,(N + K),
\]
where z is the largest nuclear charge and q is the number of electronic spin states, which is 2. Since N + K is the total number of particles, this yields the desired linear lower bound (1).
The Lieb–Thirring idea was to bound the quantum energy from below in terms of the Thomas–Fermi energy. The latter is always stable due to a theorem of Edward Teller, which states that atoms can never bind in the Thomas–Fermi model.
The Lieb–Thirring inequality was used to bound the quantum kinetic energy of the electrons in terms of the Thomas–Fermi kinetic energy, which is proportional to the integral of ρ^(5/3). Teller's no-binding theorem was in fact also used to bound from below the total Coulomb interaction in terms of the simpler Hartree energy appearing in Thomas–Fermi theory. Speaking about the Lieb–Thirring proof, Dyson wrote later
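The kinetic-energy form of the Lieb–Thirring inequality that makes this comparison possible can be stated as follows (a standard formulation added here for illustration): for a normalized antisymmetric wavefunction ψ of N fermions with q spin states and one-particle density ρ_ψ,
\[
\Bigl\langle \psi, \sum_{i=1}^{N} (-\Delta_{x_i})\, \psi \Bigr\rangle \;\ge\; \frac{K}{q^{2/3}} \int_{\mathbb{R}^3} \rho_\psi(x)^{5/3} \, \mathrm{d}x
\]
for a universal constant K > 0, which is exactly the form of the Thomas–Fermi kinetic energy.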
Further work
The Lieb–Thirring approach has generated many subsequent works and extensions. (Pseudo-)relativistic systems, magnetic fields, quantized fields, and two-dimensional fractional statistics (anyons) have, for instance, been studied since the Lieb–Thirring paper.
The form of the bound (1) has also been improved over the years. For example, one can obtain a constant independent of the number K of nuclei.
Bibliography
The Stability of Matter: From Atoms to Stars. Selecta of Elliott H. Lieb. Edited by W. Thirring, and with a preface by F. Dyson. Fourth edition. Springer, Berlin, 2005.
Elliott H. Lieb and Robert Seiringer, The Stability of Matter in Quantum Mechanics. Cambridge Univ. Press, 2010.
Elliott H. Lieb, The stability of matter: from atoms to stars. Bull. Amer. Math. Soc. (N.S.) 22 (1990), no. 1, 1-49.
References
Mathematical physics
Statistical mechanics theorems | Stability of matter | [
"Physics",
"Mathematics"
] | 1,484 | [
"Theorems in dynamical systems",
"Applied mathematics",
"Theoretical physics",
"Statistical mechanics theorems",
"Theorems in mathematical physics",
"Statistical mechanics",
"Mathematical physics",
"Physics theorems"
] |
68,237,099 | https://en.wikipedia.org/wiki/Sum%20rules%20%28quantum%20field%20theory%29 | In quantum field theory, a sum rule is a relation between a static quantity and an integral over a dynamical quantity. Therefore, they have a form such as:
where is the dynamical quantity, for example a structure function characterizing a particle, and is the static quantity, for example the mass or the charge of that particle.
Quantum field theory sum rules should not be confused with sum rules in quantum chromodynamics or quantum mechanics.
Properties
Many sum rules exist. The validity of a particular sum rule can be sound if its derivation is based on solid assumptions; on the contrary, some sum rules have been shown experimentally to be incorrect, due to unwarranted assumptions made in their derivation. The list of sum rules below illustrates this.
Sum rules are usually obtained by combining a dispersion relation with the optical theorem, using the operator product expansion or current algebra.
Quantum field theory sum rules are useful in a variety of ways. They permit testing of the theory used to derive them, e.g. quantum chromodynamics, or of an assumption made for the derivation, e.g. Lorentz invariance. They can be used to study a particle, e.g. how the spins of partons make up the spin of the proton. They can also be used as a measurement method. If the static quantity s is difficult to measure directly, measuring the dynamical quantity f and integrating it offers a practical way to obtain s (providing that the particular sum rule linking s to f is reliable).
Although in principle s is a static quantity, the denomination of sum rule has been extended to the case where s is a probability amplitude, e.g. the probability amplitude of Compton scattering, see the list of sum rules below.
List of sum rules
(The list is not exhaustive)
Adler sum rule. This sum rule relates the charged current structure function F₂ of the proton (here, x is the Bjorken scaling variable and Q² is the square of the absolute value of the four-momentum transferred between the scattering neutrino and the proton) to the Cabibbo angle θ_C. It states that in the limit Q² → ∞, the integral over x of the difference of the antineutrino-proton and neutrino-proton structure functions is a constant fixed by θ_C. The antineutrino and neutrino superscripts indicate whether F₂ relates to antineutrino-proton or neutrino-proton deep inelastic scattering, respectively.
Baldin sum rule. This is the unpolarized equivalent of the GDH sum rule (see below). It relates the probability that a photon absorbed by a particle results in the production of hadrons (this probability is called the photo-production cross-section) to the electric and magnetic polarizabilities of the absorbing particle. The sum rule reads
\[
\alpha + \beta = \frac{1}{2\pi^2} \int_{\nu_0}^{\infty} \frac{\sigma(\nu)}{\nu^2}\, \mathrm{d}\nu ,
\]
where ν is the photon energy, ν₀ is the minimum value of energy necessary to create the lightest hadron (i.e. a pion), σ is the photo-production cross-section, and α and β are the particle electric and magnetic polarizabilities, respectively. Assuming its validity, the Baldin sum rule provides important information on our knowledge of electric and magnetic polarizabilities, complementary to their direct calculations or measurements. (See e.g. Fig. 3 in the article.)
Bjorken sum rule (polarized). This sum rule is the prototypical QCD spin sum rule. It states that in the Bjorken scaling domain, the integral of the spin structure function of the proton minus that of the neutron is proportional to the axial charge of the nucleon. Specifically:
\[
\int_0^1 \bigl[ g_1^{p}(x) - g_1^{n}(x) \bigr] \mathrm{d}x = \frac{g_A}{6},
\]
where x is the Bjorken scaling variable, g₁ is the first spin structure function of the proton (p) or neutron (n), and g_A is the nucleon axial charge that characterizes the neutron β-decay. Outside of the Bjorken scaling domain, the Bjorken sum rule acquires QCD scaling corrections that are known up to the 5th order in precision. The sum rule was experimentally verified within better than a 10% precision.
Bjorken sum rule (unpolarized). The sum rule is, at leading order in perturbative QCD: where and are the first structure functions for the proton-neutrino, proton-antineutrino and neutron-neutrino deep inelastic scattering reactions, is the square of the 4-momentum exchanged between the nucleon and the (anti)neutrino in the reaction, and is the QCD coupling.
Burkhardt–Cottingham sum rule. The sum rule was experimentally verified. The sum rule is "superconvergent", meaning that its form is independent of Q². The sum rule is:
\[
\int_0^1 g_2(x, Q^2)\, \mathrm{d}x = 0,
\]
where g₂ is the second spin structure function of the object studied.
sum rule.
Efremov–Teryaev–Leader sum rule.
Ellis–Jaffe sum rule. The sum rule was shown to not hold experimentally, suggesting that the strange quark spin contributes non-negligibly to the proton spin. The Ellis–Jaffe sum rule provides an example of how the violation of a sum rule teaches us about a fundamental property of matter (in this case, the origin of the proton spin).
Forward spin polarizability sum rule.
Fubini–Furlan–Rossetti Sum Rule.
Gerasimov–Drell–Hearn sum rule (GDH, sometimes DHG sum rule). This is the polarized equivalent of the Baldin sum rule (see above). The sum rule is:
\[
\int_{\nu_0}^{\infty} \frac{\sigma_P(\nu) - \sigma_A(\nu)}{\nu}\, \mathrm{d}\nu = 4 \pi^2 S \alpha \frac{\kappa^2}{M^2},
\]
where ν₀ is the minimal energy required to produce a pion once the photon is absorbed by the target particle, σ_P − σ_A is the difference between the photon absorption cross-sections when the photon spin is aligned and anti-aligned with the target spin, ν is the photon energy, α is the fine-structure constant, and κ, S and M are the anomalous magnetic moment, spin quantum number and mass of the target particle, respectively. The derivation of the GDH sum rule assumes that the theory that governs the structure of the target particle (e.g. QCD for a nucleon or a nucleus) is causal (that is, one can use dispersion relations or equivalently for GDH, the Kramers–Kronig relations), unitary and Lorentz and gauge invariant. These three assumptions are very basic premises of Quantum Field Theory. Therefore, testing the GDH sum rule tests these fundamental premises. The GDH sum rule was experimentally verified (within a 10% precision).
Generalized GDH sum rule. Several generalized versions of the GDH sum rule have been proposed. The first and most common one relates the integral over the Bjorken scaling variable x of the first spin structure function g₁ of the target particle, at a given virtuality Q² of the photon (equivalently, the square of the absolute value of the four-momentum transferred between the beam particle that produced the virtual photon and the target particle), to the first forward virtual Compton scattering amplitude. It can be argued that calling this relation a sum rule is improper, since that amplitude is not a static property of the target particle nor a directly measurable observable. Nonetheless, the denomination sum rule is widely used.
Gottfried sum rule. The sum rule states that the integral, weighted by 1/x, of the unpolarized structure function F₂ of the proton minus that of the neutron is related to the flavor asymmetry of the sea quarks:
\[
\int_0^1 \bigl[ F_2^{p}(x) - F_2^{n}(x) \bigr] \frac{\mathrm{d}x}{x} = \frac{1}{3} + \frac{2}{3} \int_0^1 \bigl[ \bar{u}(x) - \bar{d}(x) \bigr] \mathrm{d}x .
\]
Assuming a flavor symmetric sea yields the Gottfried sum rule proper, with the integral equal to 1/3, which has been ruled out by measurements, yielding the first clear evidence for flavor asymmetry in the nucleon sea.
Gross–Llewellyn Smith sum rule. It states that in the Bjorken scaling domain, the integral of the structure function F₃ of the nucleon is equal to the number of valence quarks composing the nucleon, i.e., equal to 3. Specifically:
\[
\int_0^1 F_3(x)\, \mathrm{d}x = 3 .
\]
Outside of the Bjorken scaling domain, the Gross–Llewellyn Smith sum rule acquires QCD scaling corrections that are identical to those of the Bjorken sum rule.
Momentum sum rule: It states that the sum of the momentum fractions of all the partons (quarks, antiquarks and gluons) inside a hadron is equal to 1.
Ji Sum rule: Relates the integral of generalized parton distributions to the angular momentum carried by the quarks or by the gluons.
Proton mass sum rule: It decomposes the proton mass into four terms: quark energy, quark mass, gluon energy and quantum anomalous energy, with each of these terms an integral over 3-dimensional coordinate space.
Schwinger sum rule. The Schwinger sum rule is a theoretical result involving the scattering of polarized leptons off polarized target particles. It relates an integral over the spin structure functions g₁ and g₂ of the target particle to its anomalous magnetic moment, where M is the mass of the target particle, Q² the square of the absolute value of the four-momentum transferred to the target particle during the scattering process, x the Bjorken scaling variable, and x₀ the value of x for the minimal energy required to produce a pion off the target particle. The limit is for Q² → 0, with κ the anomalous magnetic moment of the target particle and e its charge. The integrand of the sum rule can also be expressed with the x-weighted transverse-longitudinal interference cross-section σ_LT. This makes it similar to the generalized GDH sum rule. Interestingly, the sum rule involves longitudinal photons that do not exist in the Q² → 0 limit, where the sum rule applies, since real photons have only transverse spin projections. Therefore, one expects σ_LT → 0 in the Q² → 0 limit. However, despite this, the integral over the ratio appearing in the sum rule is expected to be finite and non-zero in this limit, according to the sum rule. The sum rule was experimentally tested for the neutron, and although experimental uncertainties exist, it was found to hold, provided the GDH sum rule also holds.
Wandzura–Wilczek sum rule.
See also
Quantum chromodynamics
Proton spin crisis
References
Quantum field theory
Quantum chromodynamics
Nuclear physics | Sum rules (quantum field theory) | [
"Physics"
] | 2,033 | [
"Quantum field theory",
"Matter",
"Hadrons",
"Quantum mechanics",
"Nuclear physics",
"Subatomic particles"
] |
68,241,562 | https://en.wikipedia.org/wiki/B%C3%BCchi-Elgot-Trakhtenbrot%20theorem | In formal language theory, the Büchi–Elgot–Trakhtenbrot theorem states that a language is regular if and only if it can be defined in monadic second-order logic (MSO): for every MSO formula, we can find a finite-state automaton defining the same language, and for every finite-state automaton, we can find an MSO formula defining the same language. The theorem is due to Julius Richard Büchi, Calvin Elgot, and Boris Trakhtenbrot.
See also
Trakhtenbrot's theorem
Courcelle's theorem
References
Formal languages
Mathematical logic | Büchi-Elgot-Trakhtenbrot theorem | [
"Mathematics"
] | 135 | [
"Foundations of mathematics",
"Formal languages",
"Mathematical logic",
"Mathematical problems",
"Mathematical theorems",
"Theorems in the foundations of mathematics"
] |
68,242,618 | https://en.wikipedia.org/wiki/Group%20analysis%20of%20differential%20equations | Group analysis of differential equations is a branch of mathematics that studies the symmetry properties of differential equations with respect to various transformations of independent and dependent variables. It includes methods and applied aspects of differential geometry, Lie groups and algebras theory, calculus of variations and is, in turn, a powerful research tool in theories of ODEs, PDEs, mathematical and theoretical physics.
Motivation
References
Group theory
Differential geometry
Lie groups
Lie algebras
Differential equations
Mathematical physics | Group analysis of differential equations | [
"Physics",
"Mathematics"
] | 90 | [
"Lie groups",
"Mathematical structures",
"Applied mathematics",
"Theoretical physics",
"Mathematical objects",
"Differential equations",
"Equations",
"Group theory",
"Fields of abstract algebra",
"Algebraic structures",
"Geometry",
"Geometry stubs",
"Mathematical physics"
] |
68,245,955 | https://en.wikipedia.org/wiki/Cantor%27s%20isomorphism%20theorem | In order theory and model theory, branches of mathematics, Cantor's isomorphism theorem states that every two countable dense unbounded linear orders are order-isomorphic. For instance, Minkowski's question-mark function produces an isomorphism (a one-to-one order-preserving correspondence) between the numerical ordering of the rational numbers and the numerical ordering of the dyadic rationals.
The theorem is named after Georg Cantor, who first published it in 1895, using it to characterize the (uncountable) ordering on the real numbers. It can be proved by a back-and-forth method that is also sometimes attributed to Cantor but was actually published later, by Felix Hausdorff. The same back-and-forth method also proves that countable dense unbounded orders are highly symmetric, and can be applied to other kinds of structures. However, Cantor's original proof only used the "going forth" half of this method. In terms of model theory, the isomorphism theorem can be expressed by saying that the first-order theory of unbounded dense linear orders is countably categorical, meaning that it has only one countable model, up to logical equivalence.
One application of Cantor's isomorphism theorem involves temporal logic, a method for using logic to reason about time. In this application, the theorem implies that it is sufficient to use intervals of rational numbers to model intervals of time: using irrational numbers for this purpose will not lead to any increase in logical power.
Statement and examples
Cantor's isomorphism theorem is stated using the following concepts:
A linear order or total order is defined by a set of elements and a comparison operation that gives an ordering to each pair of distinct elements and obeys the transitive law. The familiar numeric orderings on the integers, rational numbers, and real numbers are all examples of linear orders.
Unboundedness means that the ordering does not contain a minimum or maximum element. This is different from the concept of a bounded set in a metric space. For instance, the open interval (0,1) is unbounded as an ordered set, even though it is bounded as a subset of the real numbers, because neither its infimum 0 nor its supremum 1 belongs to the interval. The integers, rationals, and reals are also unbounded as ordered sets.
An ordering is dense when every pair of elements has another element between them. This is different from being a topologically dense set within the real line. The rational numbers and real numbers are dense in this sense, as the arithmetic mean of any two numbers belongs to the same set and lies between them, but the integers are not dense because there is no other integer between any two consecutive integers.
The integers and rational numbers both form countable sets, but the real numbers do not, by a different result of Cantor, his proof that the real numbers are uncountable.
Two linear orders are order-isomorphic when there exists a one-to-one correspondence between them that preserves their ordering. For instance, the integers and the even numbers are order-isomorphic, under a bijection that multiplies each integer by two.
With these definitions in hand, Cantor's isomorphism theorem states that every two unbounded countable dense linear orders are order-isomorphic.
Within the rational numbers, certain subsets are also countable, unbounded, and dense. The rational numbers in the open unit interval are an example. Another example is the set of dyadic rational numbers, the numbers that can be expressed as a fraction with an integer numerator and a power of two as the denominator. By Cantor's isomorphism theorem, the dyadic rational numbers are order-isomorphic to the whole set of rational numbers. In this example, an explicit order isomorphism is provided by Minkowski's question-mark function. Another example of a countable unbounded dense linear order is given by the set of real algebraic numbers, the real roots of polynomials with integer coefficients. In this case, they are a superset of the rational numbers, but are again countable. It is also possible to apply the theorem to other linear orders whose elements are not defined as numbers. For instance, the binary strings that end in a 1, in their lexicographic order, form another isomorphic ordering.
Proofs
One proof of Cantor's isomorphism theorem, in some sources called "the standard proof", uses the back-and-forth method. This proof builds up an isomorphism between any two given orders, using a greedy algorithm, in an ordering given by a countable enumeration of the two orderings. In more detail, the proof maintains two order-isomorphic finite subsets A and B of the two given orders, initially empty. It repeatedly increases the sizes of A and B by adding a new element from one order, the first missing element in its enumeration, and matching it with an order-equivalent element of the other order, proven to exist using the density and lack of endpoints of the order. The two orderings switch roles at each step: the proof finds the first missing element of the first order, adds it to A, matches it with an element of the second order, and adds it to B; then it finds the first missing element of the second order, adds it to B, matches it with an element of the first order, and adds it to A, etc. Every element of each ordering is eventually matched with an order-equivalent element of the other ordering, so the two orderings are isomorphic.
Although the back-and-forth method has also been attributed to Cantor, Cantor's original publication of this theorem in 1895–1897 used a different proof. In an investigation of the history of this theorem by the logician Charles L. Silver, the earliest instance of the back-and-forth proof found by Silver was in a 1914 textbook by Felix Hausdorff.
Instead of building up order-isomorphic subsets A and B by going "back and forth" between the enumeration for the first order and the enumeration for the second order, Cantor's original proof only uses the "going forth" half of the back-and-forth method. It repeatedly augments the two finite sets A and B by adding to A the first missing element of the first order's enumeration, and adding to B the order-equivalent element that is first in the second order's enumeration. This naturally finds an equivalence between the first ordering and a subset of the second ordering, and Cantor then argues that the entire second ordering is included.
The back-and-forth proof has been formalized as a computer-verified proof using Coq, an interactive theorem prover. This formalization process led to a strengthened result that when two computably enumerable linear orders have a computable comparison predicate, and computable functions representing their density and unboundedness properties, then the isomorphism between them is also computable.
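The construction is indeed effective, and can be sketched in a few lines of Python (an illustrative program added here, not from the source; the enumerations, step count, and helper names are chosen for the example):

```python
from fractions import Fraction

def back_and_forth(enum_a, enum_b, steps):
    """Run finitely many rounds of the back-and-forth construction,
    returning a finite partial order-isomorphism as sorted (a, b) pairs.
    enum_a, enum_b: lists enumerating two dense unbounded orders."""
    iso = []  # matched pairs, order-consistent in both coordinates

    def match(x, side, pool):
        # Find the first element of `pool` occupying the same position,
        # relative to the already-matched elements, as x does on its side.
        lo = max((p[1 - side] for p in iso if p[side] < x), default=None)
        hi = min((p[1 - side] for p in iso if p[side] > x), default=None)
        for y in pool:
            if any(p[1 - side] == y for p in iso):
                continue  # already used
            if (lo is None or y > lo) and (hi is None or y < hi):
                return y
        raise ValueError("enumeration too short to witness density")

    for step in range(steps):
        if step % 2 == 0:  # "forth": next unmatched element of order A
            a = next(x for x in enum_a if all(p[0] != x for p in iso))
            iso.append((a, match(a, 0, enum_b)))
        else:              # "back": next unmatched element of order B
            b = next(y for y in enum_b if all(p[1] != y for p in iso))
            iso.append((match(b, 1, enum_a), b))
    return sorted(iso)

# Rationals vs. dyadic rationals (finite initial enumerations).
rationals = [Fraction(n, d) for d in range(1, 20) for n in range(-40, 41)]
dyadics = [Fraction(n, 2**k) for k in range(0, 10) for n in range(-512, 513)]
print(back_and_forth(rationals, dyadics, 8))
```

Running the full (infinite) process over complete enumerations would exhaust both orders and produce the isomorphism whose existence the theorem asserts.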
Model theory
One way of describing Cantor's isomorphism theorem uses the language of model theory. The first-order theory of unbounded dense linear orders consists of sentences in mathematical logic concerning variables that represent the elements of an order, with a binary relation used as the comparison operation of the ordering. Here, a sentence means a well-formed formula that has no free variables. These sentences include both axioms, formulating in logical terms the requirements of a dense linear order, and all other sentences that can be proven as logical consequences from those axioms. The axioms of this system can be expressed symbolically in this first-order language.
A model of this theory is any system of elements and a comparison relation that obeys all of the axioms; it is a countable model when the system of elements forms a countable set. For instance, the usual comparison relation on the rational numbers is a countable model of this theory. Cantor's isomorphism theorem can be expressed by saying that the first-order theory of unbounded dense linear orders is countably categorical: it has only one countable model, up to logical equivalence. However, it is not categorical for higher cardinalities: for any higher cardinality, there are multiple inequivalent dense unbounded linear orders with the same cardinality.
A method of quantifier elimination in the first-order theory of unbounded dense linear orders can be used to prove that it is a complete theory. This means that every logical sentence in the language of this theory is either a theorem, that is, provable as a logical consequence of the axioms, or the negation of a theorem. This is closely related to being categorical (a sentence is a theorem if it is true of the unique countable model; see the Łoś–Vaught test) but there can exist multiple distinct models that have the same complete theory. In particular, both the ordering on the rational numbers and the ordering on the real numbers are models of the same theory, even though they are different models. Quantifier elimination can also be used in an algorithm for deciding whether a given sentence is a theorem.
Related results
The same back-and-forth method used to prove Cantor's isomorphism theorem also proves that countable dense linear orders are highly symmetric. Their symmetries are called order automorphisms, and consist of order-preserving bijections from the whole linear order to itself. By the back-and-forth method, every countable dense linear order has order automorphisms that map any set of points to any other set of points. This can also be proven directly for the ordering on the rationals, by constructing a piecewise linear order automorphism with breakpoints at the given points. This equivalence of all sets of points is summarized by saying that the group of symmetries of a countable dense linear order is "highly homogeneous". However, there is no order automorphism that maps an ordered pair of points to its reverse, so these symmetries do not form a 2-transitive group.
The isomorphism theorem can be extended to colorings of an unbounded dense countable linear ordering, with a finite or countable set of colors, such that each color is dense, in the sense that a point of that color exists between any other two points of the whole ordering. The subsets of points with each color partition the order into a family of unbounded dense countable linear orderings. Any partition of an unbounded dense countable linear ordering into subsets, with the property that each subset is unbounded (within the whole set, not just in itself) and dense (again, within the whole set) comes from a coloring in this way. Each two colorings with the same number of colors are order-isomorphic, under any permutation of their colors. One source gives as an example the partition of the rational numbers into the dyadic rationals and their complement; these two sets are dense in each other, and their union has an order isomorphism to any other pair of unbounded linear orders that are countable and dense in each other. Unlike Cantor's isomorphism theorem, the proof needs the full back-and-forth argument, and not just the "going forth" half.
Cantor used the isomorphism theorem to characterize the ordering of the real numbers, an uncountable set. Unlike the rational numbers, the real numbers are Dedekind-complete, meaning that every subset of the reals that has a finite upper bound has a real least upper bound. They contain the rational numbers, which are dense in the real numbers. By applying the isomorphism theorem, Cantor proved that whenever a linear ordering has the same properties of being Dedekind-complete and containing a countable dense unbounded subset, it must be order-isomorphic to the real numbers. Suslin's problem asks whether orders having certain other properties of the order on the real numbers, including unboundedness, density, and completeness, must be order-isomorphic to the reals; the truth of this statement is independent of Zermelo–Fraenkel set theory with the axiom of choice (ZFC).
Although uncountable unbounded dense orderings may not be order-isomorphic, it follows from the back-and-forth method that any two such orderings are elementarily equivalent. Another consequence of Cantor's proof is that every finite or countable linear order can be embedded into the rationals, or into any unbounded dense ordering. Calling this a "well known" result of Cantor, Wacław Sierpiński proved an analogous result for higher cardinality: assuming the continuum hypothesis, there exists a linear ordering of cardinality ℵ₁ into which all other linear orderings of cardinality ℵ₁ can be embedded. Baumgartner's axiom, formulated by James Earl Baumgartner in 1973 to study the continuum hypothesis, concerns ℵ₁-dense sets of real numbers, unbounded sets with the property that every two elements are separated by exactly ℵ₁ other elements. It states that each two such sets are order-isomorphic, providing in this way another higher-cardinality analogue of Cantor's isomorphism theorem (ℵ₁ is defined as the cardinality of the set of all countable ordinals). Baumgartner's axiom is consistent with ZFC and the negation of the continuum hypothesis, and implied by the proper forcing axiom, but independent of Martin's axiom.
In temporal logic, various formalizations of the concept of an interval of time can be shown to be equivalent to defining an interval by a pair of distinct elements of a dense unbounded linear order. This connection implies that these theories are also countably categorical, and can be uniquely modeled by intervals of rational numbers.
Sierpiński's theorem stating that any two countable metric spaces without isolated points are homeomorphic can be seen as a topological analogue of Cantor's isomorphism theorem, and can be proved using a similar back-and-forth argument.
References
Model theory
Order theory
Georg Cantor
Theorems in the foundations of mathematics | Cantor's isomorphism theorem | [
"Mathematics"
] | 2,805 | [
"Order theory",
"Foundations of mathematics",
"Mathematical logic",
"Model theory",
"Mathematical problems",
"Mathematical theorems",
"Theorems in the foundations of mathematics"
] |
68,250,445 | https://en.wikipedia.org/wiki/Uranium%20ditelluride | Uranium ditelluride is an inorganic compound with the formula UTe2. It was discovered to be an unconventional superconductor in 2018.
Superconductivity
Superconductivity in UTe2 appears to be a consequence of spin-triplet electron pairing. The material acts as a topological superconductor, stably conducting electricity without resistance even in high magnetic fields. It has a superconducting transition temperature of Tc ≈ 2 K.
Charge density waves (CDW) and pair density waves (PDW) have been described in UTe2, with the latter being the first time a PDW has been described in a p-wave superconductor.
See also
Distrontium ruthenate, a p-wave triplet-state superconductor candidate.
Helium-3, a spin-triplet superfluid.
Ferromagnetic superconductor, spin-triplet pairing with coexisting superconductivity and ferromagnetic phases.
Reentrant superconductivity, an effect similar to ferromagnetic superconductivity.
References
Uranium(IV) compounds
Tellurides
Superconductors | Uranium ditelluride | [
"Chemistry",
"Materials_science",
"Engineering"
] | 242 | [
"Materials science stubs",
"Inorganic compounds",
"Superconductivity",
"Materials science",
"Inorganic compound stubs",
"Superconductors",
"Electromagnetism stubs"
] |
74,025,383 | https://en.wikipedia.org/wiki/Haldane%27s%20sieve | Haldane's sieve is a concept in population genetics named after the British geneticist J. B. S. Haldane. It refers to the fact that dominant advantageous alleles are more likely to fix in the population than recessive alleles. Haldane's sieve is particularly relevant in situations where the effects of natural selection are strong and the beneficial mutations have a significant impact on an organism's fitness.
According to Haldane's sieve, when a new advantageous mutation arises in a population, it initially occurs as a single copy (a de novo mutation), borne by a heterozygous individual.
Genetic dominance is therefore important for estimating the fate of new mutations, that is, whether they will fix or go extinct.
Dominant alleles are more readily exposed to directional selection from the moment they arise, while they are still rare, and thus they are more likely to fix as a result of a "hard sweep".
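The sieve can be illustrated numerically with a minimal Wright–Fisher sketch (the population size, selection coefficient, and number of trials are arbitrary choices for the example, not values from the literature), comparing how often a single new mutation fixes when it is dominant versus recessive:

```python
import numpy as np

rng = np.random.default_rng(0)

def fixation_rate(N=200, s=0.05, h=1.0, trials=2000):
    """Fraction of new mutations that fix in a Wright-Fisher population of
    N diploids; genotype fitnesses are aa = 1, Aa = 1 + h*s, AA = 1 + s."""
    fixed = 0
    for _ in range(trials):
        p = 1 / (2 * N)                # one de novo copy in a heterozygote
        while 0 < p < 1:
            q = 1 - p
            w_bar = p * p * (1 + s) + 2 * p * q * (1 + h * s) + q * q
            p_sel = (p * p * (1 + s) + p * q * (1 + h * s)) / w_bar
            p = rng.binomial(2 * N, p_sel) / (2 * N)   # genetic drift
        fixed += p == 1
    return fixed / trials

# Dominant mutations (h = 1) fix at roughly Haldane's classical rate of 2s;
# recessive ones (h = 0) are "sieved out" and fix far more rarely.
print(fixation_rate(h=1.0), fixation_rate(h=0.0))
```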
The term "sieve" in Haldane's sieve metaphorically represents this filtering effect of natural selection.
When adaptation stems from the species pool of standing genetic variation, a "soft sweep", the rationale does not apply, because the allele is no longer rare at the beginning of the sweep. In fact, recessive alleles are more likely to sweep than dominant alleles when alleles were previously maintained in the population.
Limited dispersal and population structure can reduce the effects of Haldane's sieve. In subdivided populations, limited dispersal increases inbreeding and homozygosity, allowing recessive alleles to express their beneficial effects more frequently and thus accelerating their fixation. This effect is most pronounced when dispersal is strongly limited.
Haldane's sieve has important implications for understanding the dynamics of adaptation and evolution in diploid populations. It highlights the role of natural selection in driving genetic changes in the presence of genetic dominance.
See also
Population genetics
Selective sweep
Natural selection
Adaptive evolution
References
Population genetics
Genetics concepts
Classical genetics
Selection
Evolutionary processes
Evolutionary biology | Haldane's sieve | [
"Biology"
] | 423 | [
"Evolutionary biology",
"Evolutionary processes",
"Selection",
"Genetics concepts"
] |
74,026,151 | https://en.wikipedia.org/wiki/Molybdenum%20oxytetrafluoride | Molybdenum oxytetrafluoride is the inorganic compound with the formula MoOF4. It is a white, diamagnetic solid. According to X-ray crystallography, it is a coordination polymer consisting of a linear chain of alternating Mo and F atoms. Each Mo center is octahedral, the coordination sphere being defined by oxide, three terminal fluorides, and two bridging fluorides. In contrast to this motif, tungsten oxytetrafluoride crystallizes as a tetramer, again with bridging fluoride ligands.
Reactions
The acetonitrile adduct of molybdenum oxytetrafluoride can be prepared by treating molybdenum hexafluoride with hexamethyldisiloxane in acetonitrile:
MoF6 + O(Si(CH3)3)2 + CH3CN → MoOF4(NCCH3) + 2 FSi(CH3)3
Molybdenum oxytetrafluoride is susceptible to hydrolysis to give molybdenum difluoride dioxide.
References
Metal halides
Molybdenum(VI) compounds
Oxyfluorides
Coordination polymers | Molybdenum oxytetrafluoride | [
"Chemistry"
] | 230 | [
"Inorganic compounds",
"Metal halides",
"Salts"
] |
74,030,200 | https://en.wikipedia.org/wiki/Non-invertible%20symmetry | In physics, a non-invertible symmetry is a symmetry of a quantum field theory that is not described by a group, and which in particular does not have an inverse.
Non-invertible symmetries were first studied in 2-dimensional conformal field theory, where fusion categories govern the fusion rules, rather than a group.
Four-dimensional examples of non-invertible symmetries can be obtained from Maxwell theory with topological theta term, via a combination of its SL(2,Z) duality and a discrete subgroup of its electric or magnetic 1-form symmetry.
References
External links
"A New Kind of Symmetry Shakes Up Physics" by Kevin Hartnett, Quanta Magazine
"Non-Invertible Symmetries and their Representations", video lecture by Sahand Seifnashri at Institute for Advanced Study
Quantum field theory
Quantum mechanics
Mathematical physics | Non-invertible symmetry | [
"Physics",
"Mathematics"
] | 178 | [
"Quantum field theory",
"Applied mathematics",
"Theoretical physics",
"Quantum mechanics",
"Theoretical physics stubs",
"Mathematical physics"
] |
74,033,110 | https://en.wikipedia.org/wiki/Kim%20Guldstrand%20Larsen | Kim Guldstrand Larsen R (born 1957) is a Danish scientist and professor of computer science at Aalborg University, Denmark. His field of research includes modeling, validation and verification, performance analysis, and synthesis of real-time, embedded, and cyber-physical systems, utilizing and contributing to concurrency theory and model checking. Within this domain, he has been instrumental in the invention and continuous development of one of the most widely used verification tools, and has received several awards and honors for his work.
Education
Larsen has an MSc in mathematics from Aalborg University, 1982. In 1986, he received his PhD in Computer Science from University of Edinburgh, advised by Robin Milner.
Career
Since 1993, Larsen has been a professor in Computer Science at Aalborg Universitet. He has also been a visiting professor at several places around the world, including the National Institute for Research in Digital Science and Technology (INRIA) (as an international chair 2016-2020).
Larsen heads the Center for Embedded Software Systems (CISS). From 2007 to 2011, he was director of the university-industry consortium Danish Network of Embedded Systems (DaNES), and from 2011 to 2017, he was the Danish co-lead of the Danish-Chinese center IDEA4CPS: Foundations for Cyber-Physical Systems, established by the Danish National Research Foundation and the Natural Science Foundation of China (NSFC).
In addition, he was director of the Danish ICT Innovation Network (InfinIT) from 2009 to 2020, director of the Center for Data-Intensive Cyber-Physical Systems (DiCyPS), funded by Innovation Fund Denmark, from 2015 to 2021, and head of the Learning, Analysis, Synthesis, and Optimization of Cyber-Physical Systems (LASSO) project from 2015 to 2020, funded by an ERC Advanced Grant.
Larsen is one of the key figures behind the award-winning tool UPPAAL, which is one of the most widely used tools for the verification of real-time models. "UPPAAL in a Nutshell," written by Larsen and colleagues, is one of the most cited papers in The Journal Software Tools for Technology Transfer, published by Springer (citation rank in the 99th percentile).
He is a member of the Royal Danish Academy of Sciences and Letters and an elected fellow and digital expert (vismand). He has served as the national expert for the Information and Communication Technology theme under the EU's 7th Framework Programme (FP7-ICT), and currently he is a member of the Digital, Industry, and Space reference group that serves the Danish Ministry of Higher Education and Science in connection with the EU Horizon Europe program.
Awards and honors (selected)
Honorary Doctor (Honoris causa), Uppsala University, 1999
Honorary Doctor (Honoris causa), École normale supérieure Paris-Saclay (formerly École normale supérieure de Cachan), Paris, 2007
Thomson Scientific Award as the most cited Danish computer scientist 1990-2004
Knight of the Order of the Dannebrog, 2007
Member of Academia Europaea
CAV Award 2013
ERC Advanced Grant, 2015
2016
Foreign Expert of China, Distinguished Professor, Northeastern University, 2018
Villum Investigator 2021 (30 M DKK) from Villum Foundation
CONCUR Test of Time award 2022
Selected works
Larsen has published six books (monographs) and more than 400 peer-reviewed papers and he has been cited many times (Google Scholar Citation Tracker). Selected works:
UPPAAL in a Nutshell, 1997
References
External links
Profile at Aalborg University
UPPAAL an integrated tool environment for modeling, validation and verification of real-time systems modeled as networks of timed automata
Living people
1957 births
Danish computer scientists
Software engineering
Formal methods
Automata (computation)
Embedded systems
Real-time computing
Systems theory
Concurrency (computer science)
Model checkers
Aalborg University alumni
Members of the Royal Danish Academy of Sciences and Letters | Kim Guldstrand Larsen | [
"Mathematics",
"Technology",
"Engineering"
] | 793 | [
"Systems engineering",
"Real-time computing",
"Computer engineering",
"Embedded systems",
"Computer systems",
"Model checkers",
"Software engineering",
"Computer science",
"Information technology",
"Formal methods",
"Mathematical software"
] |
74,035,291 | https://en.wikipedia.org/wiki/Markov%20operator | In probability theory and ergodic theory, a Markov operator is an operator on a certain function space that conserves the mass (the so-called Markov property). If the underlying measurable space is topologically sufficiently rich, then the Markov operator admits a kernel representation. Markov operators can be linear or non-linear. Closely related to Markov operators is the Markov semigroup.
The definition of Markov operators is not entirely consistent in the literature. Markov operators are named after the Russian mathematician Andrey Markov.
Definitions
Markov operator
Let be a measurable space and a set of real, measurable functions .
A linear operator on is a Markov operator if the following conditions hold (a finite-state sketch is given after the list):
maps bounded, measurable functions to bounded, measurable functions.
Let be the constant function , then holds. (conservation of mass / Markov property)
If then . (conservation of positivity)
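On a finite state space these conditions can be checked concretely: a linear Markov operator is then just a row-stochastic kernel matrix acting on functions. A minimal sketch (the state space and matrix entries are arbitrary illustrative choices):

```python
import numpy as np

# Kernel matrix: each row is a probability distribution over 3 states.
K = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.3, 0.7]])

def P(f):
    """Markov operator (Pf)(x) = sum_y K(x, y) f(y)."""
    return K @ f

f = np.array([2.0, -1.0, 4.0])        # a bounded function on 3 points
ones = np.ones(3)

assert np.allclose(P(ones), ones)     # conservation of mass: P1 = 1
assert (P(np.abs(f)) >= 0).all()      # positivity: f >= 0 implies Pf >= 0
print(P(f))
```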
Alternative definitions
Some authors define the operators on the Lp spaces as and replace the first condition (bounded, measurable functions on such) with the property
Markov semigroup
Let be a family of Markov operators defined on the set of bounded, measurable functions on . Then is a Markov semigroup when the following is true
.
for all .
There exists a σ-finite measure on that is invariant under , meaning that for all bounded, positive and measurable functions and every the following holds
.
Dual semigroup
Each Markov semigroup induces a dual semigroup through
If is invariant under then .
Infinitesimal generator of the semigroup
Let be a family of bounded, linear Markov operators on the Hilbert space , where is an invariant measure. The infinitesimal generator of the Markov semigroup is defined as
and the domain is the -space of all such functions where this limit exists and is in again.
The carré du champ operator measures how far is from being a derivation.
Kernel representation of a Markov operator
A Markov operator has a kernel representation
with respect to some probability kernel , if the underlying measurable space has the following sufficient topological properties:
Each probability measure can be decomposed as , where is the projection onto the first component and is a probability kernel.
There exists a countable family that generates the σ-algebra .
If one now defines a σ-finite measure on , then it is possible to prove that every Markov operator admits such a kernel representation with respect to .
Literature
References
Probability theory
Ergodic theory
Linear operators | Markov operator | [
"Mathematics"
] | 502 | [
"Functions and mappings",
"Mathematical objects",
"Linear operators",
"Ergodic theory",
"Mathematical relations",
"Dynamical systems"
] |
74,037,011 | https://en.wikipedia.org/wiki/Tsippy%20Tamiri | Tsippy Tamiri (1952 – 2017) was an Israeli mass spectrometrist, specialized in the analysis of explosives, drugs, and poisons.
Early life and education
Tsippy Tamiri grew up in Rishon LeZion. She enlisted in the Israel Defense Forces at eighteen. After discharge, she obtained a bachelor's degree in pharmacy from the Hebrew University of Jerusalem. She received a master's degree in chemistry from the Hebrew University of Jerusalem and a PhD in chemistry from Technion – Israel Institute of Technology.
Career
Tamiri worked at the forensic laboratory of the Israel Police, where she was the head of the mass spectrometry department. She conducted research in the analysis of explosives, drugs and poisons. She published on the preparation, characterization, and analysis of urea nitrate.
Tamiri served as the president of the Israeli Society of Mass Spectrometry. She served on the organizing committee of the second Middle Eastern and Mediterranean Sea Region Countries Mass Spectrometry – MASSA 2013 Workshop in 2013.
Selected publications
Book chapter
Analysis of Explosives by Mass Spectrometry, Tsippy Tamiri and Shmuel Zitrin, in Forensic Investigation of Explosions, A. Beveridge (Ed), 2011.
GC/MS Analysis of PETN and NG in Post-Explosion Residues, T. Tamiri, S. Zitrin, S. Abramovich-Bar, Y. Bamberger and J. Sterling, in Advances in Analysis and Detection of Explosives, J. Yinon (Ed), 1993.
Awards
Tamiri received the Yehuda Yinon Award in 2011.
References
Mass spectrometrists
Hebrew University of Jerusalem alumni
Technion – Israel Institute of Technology alumni
Israeli women scientists
Forensic scientists | Tsippy Tamiri | [
"Physics",
"Chemistry"
] | 361 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
72,606,747 | https://en.wikipedia.org/wiki/International%20Bathymetric%20Chart%20of%20the%20Southern%20Ocean | The International Bathymetric Chart of the Southern Ocean (IBCSO) is a regional mapping initiative of the General Bathymetric Chart of the Oceans (GEBCO). IBCSO receives support from the Nippon Foundation – GEBCO Seabed 2030 Project.
Background
IBCSO is a joint project by the International Hydrographic Organization, the Scientific Committee on Antarctic Research, the General Bathymetric Chart of the Oceans and the Seabed 2030 Project. The project aims to identify and pool all bathymetry data in the Southern Ocean and use that data to produce gridded bathymetric maps of the seafloor.
The extent of the project is bound by 50°S, stretching from the southern tip of South America to the coastal waters of Antarctica. The IBCSO project is currently hosted by the bathymetry department at the Alfred Wegener Institute for Polar and Marine Research in Bremerhaven.
Description
Bathymetric data from all data holders are reviewed and pooled at 1 meter resolution to form the basis of the bathymetric data set. This includes all seafloor mapping sources from modern echo sounding methods such as multibeam echosounders and singlebeam echosounders to historic lead line measurements. A weighted blockmedian filter is run across the pooled data set to create a spatial map of high and low quality data.
The low quality data is processed using a spline interpolation algorithm and used as a background layer. The high quality data from e.g. modern multibeam echosounder data is then added on top and incorporated in the background surface using a bending algorithm. Regions with no data coverage are padded by bathymetric data from the dataset collected by the Shuttle Radar Topography Mission.
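A toy version of the block-median pooling step is sketched below (an illustration only: the actual IBCSO pipeline uses weighted block medians, spline interpolation, and a bending algorithm, none of which are reproduced here; the function name and sample values are invented for the example):

```python
import numpy as np

def blockmedian(x, y, z, cell=500.0):
    """Pool soundings (x, y in metres, z = depth) into square grid cells
    and keep one median depth per cell."""
    cells = {}
    for xi, yi, zi in zip(x, y, z):
        key = (int(xi // cell), int(yi // cell))
        cells.setdefault(key, []).append(zi)
    return {k: float(np.median(v)) for k, v in cells.items()}

grid = blockmedian([10.0, 120.0, 640.0], [5.0, 30.0, 900.0],
                   [-4200.0, -4180.0, -3900.0])
print(grid)   # {(0, 0): -4190.0, (1, 1): -3900.0}
```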
The gridded product is made available to the public at a 500 m x 500 m resolution in a polar stereographic projection (EPSG: 9354), with either bedrock data of the Antarctic continent based on BedMachine or data for the ice surface topography derived from various sources such as REMA. The most recent version is also incorporated into the annual release of the General Bathymetric Chart of the Oceans grid.
The first version of IBCSO was published in 2013, covering the Southern Ocean south of 60°S. More than 4,200 million ocean soundings of diverse types and quality were incorporated.
IBCSO became associated with and has been supported by the Nippon Foundation – Seabed 2030 Project since 2017. IBCSO version 2 was published in 2022 and increased the extent of the bathymetric map to 50°S, increasing the area covered by a factor of 2.5 compared to IBCSO version 1. 92.7% of map data originate from multibeam data, 6.7% originate from singlebeam data, and the remaining ~1% comes from mixed sources (seismic reflection, lidar, etc.).
Versions
IBCSO Version 1
500x500 meter resolution
coverage up to 60°S
IBCSO version 2
500x500 meter resolution
coverage up to 50°S
References
External links
IBCSO Version 1
IBCSO version 2
IBCSO Products
SCAR website of the IBCSO project
Current coverage of the world's oceans by SEABED2030 Project (hosted by the University of Stockholm, Sweden)
Oceanography
World maps
Hydrography | International Bathymetric Chart of the Southern Ocean | [
"Physics",
"Environmental_science"
] | 673 | [
"Hydrography",
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
72,610,476 | https://en.wikipedia.org/wiki/Cable-stayed%20suspension%20bridge | A cable-stayed suspension bridge or CSS bridge merges the designs of cable-stayed bridges and suspension bridges. The suspension bridge's architecture is better at handling the load in the middle of the bridge, while the cable-stayed bridge is better suited to handling the load closest to the tower. Combining these two architectural engineering ideas into a hybrid has been done in Istanbul with the Yavuz Sultan Selim Bridge, and in New York City with the Brooklyn Bridge. A bridge over the Krishna River in India, approved in October 2022, will use a CSS bridge design.
Yavuz Sultan Selim Bridge
In Turkey the Yavuz Sultan Selim Bridge over the Bosporus Strait opened in August 2016. The main span is 1,408 m long and is the 13th-longest bridge span in the world.
See also
List of longest cable-stayed bridge spans
List of longest suspension bridge spans
List of cable-stayed bridges in the United States
List of bridge types
Floating cable-stayed bridge
Floating suspension bridge
References
Structural engineering
Building engineering | Cable-stayed suspension bridge | [
"Engineering"
] | 207 | [
"Structural engineering",
"Building engineering",
"Construction",
"Civil engineering",
"Architecture"
] |
63,964,488 | https://en.wikipedia.org/wiki/Rapid%20voltage%20change | A rapid voltage change or RVC is one of the power-quality (PQ) issues related to voltage disturbance. According to the IEC 61000-4-30, Ed. 3 standard, an RVC is defined as "a quick transition in root mean square (r.m.s.) voltage occurring between two steady-state conditions, and during which the r.m.s. voltage does not exceed the dip/swell thresholds." Switching processes such as motor starting, capacitor bank on/off switching, load switching, or transformer tap-changer operations can all create RVCs. Moreover, they can also be induced by sudden load variations or by disturbances in power output from distributed energy sources such as solar or wind power systems. The main known effect of rapid voltage changes is light flicker, but other non-flicker effects have also been reported.
Rapid voltage change effect
The RVC voltage disturbance level is not as large as that of sags/dips and swells. While RVC events generally are not destructive for electronic equipment, they can be annoying for end users, as they may cause light flicker.
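A schematic of how RVC events might be flagged from a sequence of r.m.s. values is sketched below (an illustration only: the thresholds, window length, and steady-state rule are simplified placeholders, not the exact IEC 61000-4-30 procedure):

```python
import numpy as np

def detect_rvc(urms, u_nom, rvc_threshold=0.03, dip=0.90, swell=1.10, window=100):
    """Flag indices where the r.m.s. voltage moves quickly away from the
    preceding steady state while staying inside the dip/swell band."""
    events = []
    for i in range(window, len(urms)):
        steady = urms[i - window:i].mean()          # preceding steady state
        delta = abs(urms[i] - steady) / u_nom
        inside = dip * u_nom < urms[i] < swell * u_nom
        if delta > rvc_threshold and inside:        # quick change, no dip/swell
            events.append(i)
    return events

u = np.full(1000, 230.0)
u[500:] = 223.0                                     # a ~3% step change
print(detect_rvc(u, 230.0)[:3])
```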
References
Electronics
Voltage stability
Electric power | Rapid voltage change | [
"Physics",
"Engineering"
] | 232 | [
"Physical quantities",
"Electrical engineering",
"Power (physics)",
"Electric power",
"Voltage",
"Voltage stability"
] |
75,433,719 | https://en.wikipedia.org/wiki/Dronabinol/acetazolamide | Dronabinol/acetazolamide (investigational name IHL-42X) is a combination therapy under investigation for sleep apnea. It is developed by Incannex Healthcare.
References
Combination drugs | Dronabinol/acetazolamide | [
"Chemistry"
] | 45 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
75,434,340 | https://en.wikipedia.org/wiki/Protein%20therapeutics | Protein therapeutics are proteins used as experimental or approved therapies for disease states. They include "monoclonal antibodies (mAbs), peptide hormones, growth factors, plasma proteins, enzymes, and hemolytic factors". While proteins can be more specific and flexible in their mechanism of action compared to small-molecule drugs, duration of action and drug delivery can be a challenge.
References
Further reading
Proteins | Protein therapeutics | [
"Chemistry"
] | 83 | [
"Biomolecules by chemical classification",
"Pharmacology",
"Medicinal chemistry stubs",
"Molecular biology",
"Proteins",
"Pharmacology stubs"
] |
75,446,827 | https://en.wikipedia.org/wiki/Volga%20149.200 | According to an analysis by Snopes, an American fact-checking website for myths and rumors, Russian-owned websites copy-paste pro-Russian propaganda about Russia's alleged aim "to prevent pointless bloodshed among Ukrainian soldiers", mentioning 149.200 'Volga', a frequency which is said to offer Ukrainian soldiers a chance to surrender to Russian forces. In 2024, when French President Emmanuel Macron made statements about the possibility of French troops being deployed to Ukraine, posters began to appear near the French Embassy in Moscow and at public transport stations around the embassy. The posters depicted an image of the commander of the French SS division Charlemagne, Edgar Puaud, and text in both French and Russian which read: "Frenchmen, do not repeat the mistakes of your ancestors; their fate is well-known", followed by "Call Volga 149.200".
References
Radio spectrum
Russian invasion of Ukraine
Ukrainian prisoners of war | Volga 149.200 | [
"Physics"
] | 182 | [
"Radio spectrum",
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
75,449,252 | https://en.wikipedia.org/wiki/Locally%20recoverable%20code | Locally recoverable codes are a family of error correction codes that were introduced first by D. S. Papailiopoulos and A. G. Dimakis and have been widely studied in information theory due to their applications related to distributed and cloud storage systems.
An LRC is an linear code such that there is a function that takes as input and a set of other coordinates of a codeword different from , and outputs .
Overview
Erasure-correcting codes, or simply erasure codes, for distributed and cloud storage systems, are becoming more and more popular as a result of the present spike in demand for cloud computing and storage services. This has inspired researchers in the fields of information and coding theory to investigate new facets of codes that are specifically suited for use with storage systems.
It is well known that an LRC is a code in which every symbol of a codeword can be restored by accessing only a limited set of other symbols. This idea is very important for distributed and cloud storage systems, since the most common error case is the failure (erasure) of a single storage node. The main objective is to restore a failed node while accessing as few additional storage nodes as possible. Hence, locally recoverable codes are crucial for such systems.
The following definition of the LRC follows from the description above: an -Locally Recoverable Code (LRC) of length is a code that produces an -symbol codeword from information symbols, and for any symbol of the codeword, there exist at most other symbols such that the value of the symbol can be recovered from them. The locality parameter satisfies because the entire codeword can be found by accessing symbols other than the erased symbol. Furthermore, Locally Recoverable Codes, having the minimum distance , can recover erasures.
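The locality idea can be seen in a toy single-parity construction (a simple illustration, not an optimal LRC in the sense defined below; the function names are invented for the example): each symbol lies in a repair group of size r + 1, and any single erasure is repaired from the r surviving symbols of its group.

```python
def encode(data, r):
    """Split k data symbols into groups of size r and append one XOR
    parity symbol per group."""
    groups = [data[i:i + r] for i in range(0, len(data), r)]
    out = []
    for g in groups:
        parity = 0
        for b in g:
            parity ^= b
        out.append(list(g) + [parity])
    return out

def recover(group, erased_index):
    """Recover one erased symbol from the r others in its locality set."""
    val = 0
    for i, b in enumerate(group):
        if i != erased_index:
            val ^= b
    return val

codeword = encode(b"locality", r=4)
group = codeword[0][:]
lost = group[2]
group[2] = None          # erase one symbol
assert recover(group, 2) == lost
```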
Definition
Let be a linear code. For , let us denote by the minimum number of other coordinates we have to look at to recover an erasure in coordinate . The number is said to be the locality of the -th coordinate of the code. The locality of the code is defined as
An locally recoverable code (LRC) is an linear code with locality .
Let be an -locally recoverable code. Then an erased component can be recovered linearly, i.e. for every , the space of linear equations of the code contains elements of the form , where .
Optimal locally recoverable codes
Theorem Let and let be an -locally recoverable code having disjoint locality sets of size . Then the minimum distance satisfies d ≤ n − k − ⌈k/r⌉ + 2, where n is the length, k the dimension, and r the locality.
An -LRC is said to be optimal if the minimum distance of satisfies d = n − k − ⌈k/r⌉ + 2.
Tamo–Barg codes
Let be a polynomial and let be a positive integer. Then is said to be (, )-good if
• has degree ,
• there exist distinct subsets of such that
– for any , for some , i.e., is constant on ,
– ,
– for any .
We say that {} is a splitting covering for .
Tamo–Barg construction
The Tamo–Barg construction utilizes good polynomials.
• Suppose that a -good polynomial over is given with splitting covering .
• Let be a positive integer.
• Consider the following -vector space of polynomials
• Let .
• The code is an -optimal locally coverable code, where denotes evaluation of at all points in the set .
Parameters of Tamo–Barg codes
• Length. The length is the number of evaluation points. Because the sets are disjoint for , the length of the code is .
• Dimension. The dimension of the code is , for ≤ , as each has degree at most , covering a vector space of dimension , and by the construction of , there are distinct .
• Distance. The distance is given by the fact that , where , and the obtained code is the Reed-Solomon code of degree at most , so the minimum distance equals .
• Locality. After the erasure of the single component, the evaluation at , where , is unknown, but the evaluations for all other are known, so at most evaluations are needed to uniquely determine the erased component, which gives us the locality of .
To see this, restricted to can be described by a polynomial of degree at most thanks to the form of the elements in (i.e., thanks to the fact that is constant on , and the 's have degree at most ). On the other hand , and evaluations uniquely determine a polynomial of degree . Therefore can be constructed and evaluated at to recover .
Example of Tamo–Barg construction
We will use to construct -LRC. Notice that the degree of this polynomial is 5, and it is constant on for , where , , , , , , , and : , , , , , , , . Hence, is a -good polynomial over by the definition. Now, we will use this polynomial to construct a code of dimension and length over . The locality of this code is 4, which will allow us to recover a single server failure by looking at the information contained in at most 4 other servers.
Next, let us define the encoding polynomial: , where . So, .
Thus, we can use the obtained encoding polynomial if we take our data to encode as the row vector . Encoding the vector to a length 15 message vector by multiplying by the generator matrix
For example, the encoding of information vector gives the codeword .
Observe that we constructed an optimal LRC; therefore, using the Singleton bound, we have that the distance of this code is 7. Thus, we can recover any 6 erasures from our codeword by looking at no more than 8 other components.
Locally recoverable codes with availability
A code has all-symbol locality and availability if every code symbol can be recovered from disjoint repair sets of other symbols, each set of size at most symbols. Such codes are called -LRC.
Theorem The minimum distance of -LRC having locality and availability satisfies the upper bound
If the code is systematic and locality and availability apply only to its information symbols, then the code has information locality and availability , and is called -LRC.
Theorem The minimum distance of an linear -LRC satisfies the upper bound
References
Cryptography
Information theory
Error detection and correction | Locally recoverable code | [
"Mathematics",
"Technology",
"Engineering"
] | 1,263 | [
"Cybersecurity engineering",
"Cryptography",
"Telecommunications engineering",
"Reliability engineering",
"Applied mathematics",
"Error detection and correction",
"Computer science",
"Information theory"
] |
66,796,710 | https://en.wikipedia.org/wiki/Symposium%20on%20Experimental%20Algorithms | The International Symposium on Experimental Algorithms (SEA), previously known as Workshop on Experimental Algorithms (WEA), is a computer science conference in the area of algorithm engineering.
Notes
Computer science conferences | Symposium on Experimental Algorithms | [
"Mathematics",
"Technology"
] | 40 | [
"Algorithms",
"Mathematical logic",
"Computer science conferences",
"Applied mathematics",
"Computer science"
] |
66,805,270 | https://en.wikipedia.org/wiki/Engineered%20cellular%20magmatic | Engineered cellular magmatics (ECMs) are synthetic stone of glass and ceramic. ECMs replicate rare, naturally occurring volcanic materials, and exhibit useful structural and chemical properties of those materials. The US Department of Energy has recognized ECMs as an advanced material, funding further research into the manufacture and application of ECMs through ARPA-E and Savannah River National Laboratory.
Properties
ECMs can be engineered to include a broad range of silicate species, with various reactivity. Their physical structure can range from closed to open cell, resembling pumice or porous ceramic. They can be composed of internal pore and vesicular structures with individual cross sections that can measure from millimeter down to nanometer scale. Open cell varieties exhibit extensive surface areas which amplify ion exchange capabilities (both cationic and anionic). These features make them well suited for various cement, construction, filtration, and remediation applications. They typically contain both amorphous and crystalline structures.
Application
Known uses for ECMs include air and water filtration, biological and chemical remediation, microbial habitat, soil and cementitious amendments. They can also be used in the manufacture of various forms of zeolite due to the resulting silicate lattice, and in various reactors for chemical separation. They meet and exceed the ASTM Standard Specification for Lightweight Aggregates for Structural Concrete, and exceed ASTM standards for vegetative green roof media.
History
ECMs share a history with foam glass, but are engineered for specific chemical reactivity and structural properties not generally considered the domain of foam glass. The term engineered cellular magmatic was adopted to describe the material in late 2019 by inventor Robert Hust. Other named inventors include Gene Ramsey, Cory Trivelpiece, Gert Nielsen, and Philip Galland.
Manufacture
ECMs can be manufactured from raw materials (minerals with high silica content) and a range of waste and recycled materials that, by their utilization, represent significant savings in both energy and carbon emissions. ECMs have been successfully created utilizing various waste streams, upcycling glass waste, municipal incinerator waste ash, carbon fiber waste, and wastes from various types of mining. The production process is similar to that of sintered foam glass or ceramics, consisting of 1) grinding the input materials to a powder, 2) firing the material at various levels of thermal exposure (500–2000 °C) as it 3) travels through a 36-meter (120 ft) long furnace.
See also
Silicate
Ceramic foam
Foam glass
High Performance Concrete
Materials science
Environmental technology
Upcycling
References
Materials science | Engineered cellular magmatic | [
"Physics",
"Materials_science",
"Engineering"
] | 533 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
66,810,834 | https://en.wikipedia.org/wiki/Extended%20natural%20numbers | In mathematics, the extended natural numbers is a set which contains the values 0, 1, 2, … and ∞ (infinity). That is, it is the result of adding a maximum element to the natural numbers. Addition and multiplication work as normal for finite values, and are extended by the rules a + ∞ = ∞, 0 · ∞ = 0, and a · ∞ = ∞ for a ≠ 0.
With addition and multiplication, is a semiring but not a ring, as ∞ lacks an additive inverse. The set can be denoted by , or . It is a subset of the extended real number line, which extends the real numbers by adding +∞ and −∞.
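This arithmetic is easy to model directly (a minimal sketch using Python's float infinity as the maximum element; the 0 · ∞ = 0 case must be handled explicitly, since IEEE floating-point arithmetic would return NaN):

```python
import math

INF = math.inf   # stands in for the added maximum element

def ext_add(a, b):
    """a + b on the extended naturals: a + INF = INF + a = INF."""
    return a + b                       # float('inf') already saturates

def ext_mul(a, b):
    """a * b with the semiring convention 0 * INF = INF * 0 = 0."""
    if a == 0 or b == 0:
        return 0                       # plain 0 * math.inf would give nan
    return a * b

assert ext_add(3, INF) == INF
assert ext_mul(0, INF) == 0 and ext_mul(2, INF) == INF
```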
Applications
In graph theory, the extended natural numbers are used to define distances in graphs, with ∞ being the distance between two unconnected vertices. They can be used to show the extension of some results, such as the max-flow min-cut theorem, to infinite graphs.
In topology, the topos of right actions on the extended natural numbers is a category PRO of projection algebras.
In constructive mathematics, the extended natural numbers are a one-point compactification of the natural numbers, yielding the set of non-increasing binary sequences i.e. such that . The sequence represents , while the sequence represents . It is a retract of and the claim that implies the limited principle of omniscience.
Notes
References
Further reading
External links
Number theory | Extended natural numbers | [
"Mathematics"
] | 260 | [
"Discrete mathematics",
"Number theory"
] |
78,404,878 | https://en.wikipedia.org/wiki/ARKA%20descriptors%20in%20QSAR | One of the most commonly used in silico approaches for assessing new molecules' activity/property/toxicity is the Quantitative Structure-Activity/Property/Toxicity Relationship (QSAR/QSPR/QSTR) approach, which generates predictive models for efficiently predicting query compounds. QSAR/QSPR/QSTR uses numerical chemical information in the form of molecular descriptors and correlates these to the response activity/property/toxicity using statistical techniques. While QSAR is essentially a similarity-based approach, the occurrence of activity/property cliffs may greatly reduce the predictive accuracy of the developed models. The novel Arithmetic Residuals in K-groups Analysis (ARKA) approach is a supervised dimensionality reduction technique that can easily identify activity cliffs in a data set. Activity cliffs are compounds that are similar in their structures but differ considerably in their activity. The basic idea of the ARKA descriptors is to group the conventional QSAR descriptors based on a predefined criterion and then assign a weightage to each descriptor in each group. ARKA descriptors have also been used to develop classification-based and regression-based QSAR models with acceptable quality statistics.
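The grouping-and-weighting idea can be sketched as follows (an illustrative simplification only, not the published ARKA algorithm: the grouping criterion, the weights, and the function name are assumptions made for this example):

```python
import numpy as np

def arka_descriptors(X, y):
    """X: (n_molecules, n_descriptors) array; y: 0/1 activity labels.
    Assumes every descriptor is non-constant and each group is non-empty."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    Z = (X - X.mean(axis=0)) / X.std(axis=0)          # autoscale descriptors
    diff = Z[y == 1].mean(axis=0) - Z[y == 0].mean(axis=0)
    arka = []
    for group in (diff > 0, diff <= 0):               # the grouping criterion
        w = np.abs(diff[group])                       # per-descriptor weight
        arka.append((Z[:, group] * w).sum(axis=1) / w.sum())
    return np.column_stack(arka)                      # (n_molecules, 2)

X = [[1.0, 5.0, 0.1], [2.0, 3.0, 0.4], [3.0, 1.0, 0.3], [4.0, 2.0, 0.9]]
y = [0, 0, 1, 1]
print(arka_descriptors(X, y))
```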
References
Cheminformatics
Dimension reduction | ARKA descriptors in QSAR | [
"Chemistry"
] | 249 | [
"Computational chemistry",
"nan",
"Cheminformatics"
] |
78,406,458 | https://en.wikipedia.org/wiki/Difenoxuron | Difenoxuron (commercially known as Lironion) is a phenylurea herbicide used to control annual broad-leaved weeds and grasses in allium crops (predominantly onions), carrots, jojoba, and celery.
Production
Difenoxuron may be synthesized from 4-chloroaniline, 4-methoxyphenol, dimethylamine, and phosgene. It is stereochemically achiral.
Mechanism of action
Difenoxuron is a member of the phenylurea class of herbicides. Phenylureas inhibit photosynthesis at photosystem II by binding to the serine 264 residue of the D1 protein, occupying the Qb (secondary plastoquinone) binding site and hence halting electron transfer from the primary acceptor Qa to the secondary acceptor Qb. This prevents CO2 fixation and energy production.
Moreover, this blockade prevents chlorophyll from transferring energy to Qa, increasing production of triplet-state chlorophyll, which reacts with molecular oxygen to form singlet oxygen, a highly reactive species that oxidatively damages the pigments, lipids and proteins of the photosynthetic thylakoid membrane.
Herbicidal activity
Liming in Boddington soil has been shown by a 1976 study to increase the herbicidal toxicity of difenoxuron by two to three times compared to soil without the additional level of liming.
Toxicology
Difenoxuron's hazards include acute toxicity caused by oral ingestion and acute toxicity by inhalation. There are very few studies about the genotoxicity of difenoxuron, and these studies are discordant, but there appears to be a dose-dependent relationship between the concentration of difenoxuron and the rate of observed chromosomal aberrations.
References
Herbicides
Ureas
Diphenyl ethers | Difenoxuron | [
"Chemistry",
"Biology"
] | 397 | [
"Organic compounds",
"Herbicides",
"Biocides",
"Ureas"
] |
77,069,742 | https://en.wikipedia.org/wiki/Gower%27s%20distance | In statistics, Gower's distance between two mixed-type objects is a similarity measure that can handle different types of data within the same dataset and is particularly useful in cluster analysis or other multivariate statistical techniques. Data can be binary, ordinal, or continuous variables. It works by normalizing the differences between each pair of variables and then computing a weighted average of these differences. The distance was defined in 1971 by Gower and it takes values between 0 and 1 with smaller values indicating higher similarity.
Definition
For two objects and having descriptors, the similarity is defined as:
where the are non-negative weights usually set to 1 and is the similarity between the two objects regarding their -th variable. If the variable is binary or ordinal, the values of are 0 or 1, with 1 denoting equality. If the variable is continuous, with being the range of the -th variable and thus ensuring . As a result, the overall similarity between two objects is the weighted average of the similarities calculated for all their descriptors.
In its original exposition, the distance does not treat ordinal variables in a special manner. In the 1990s, first Kaufman and Rousseeuw and later Podani suggested extensions where the ordering of an ordinal feature is used. For example, Podani obtains relative rank differences as with being the ranks corresponding to the ordered categories of the -th variable.
Software implementations
Many programming languages and statistical packages, such as R, Python, etc., include implementations of Gower's distance. The implementations may follow Kaufman and Rousseeuw's extensions, which change the similarity for continuous variables to
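The original 1971 definition translates directly into code. The sketch below implements it for mixed data (variable kinds, ranges, and weights must be supplied by the caller; the Kaufman–Rousseeuw variant mentioned above is not reproduced):

```python
import numpy as np

def gower_similarity(x, y, kinds, ranges, weights=None):
    """Gower similarity between two objects described by mixed variables.

    kinds[k]  : 'cat' for binary/ordinal treated as categorical, 'num' for continuous
    ranges[k] : range R_k of the k-th variable (used for 'num' only)
    weights   : non-negative weights w_k, all 1 by default
    """
    p = len(x)
    w = np.ones(p) if weights is None else np.asarray(weights, float)
    s = np.empty(p)
    for k in range(p):
        if kinds[k] == 'cat':
            s[k] = 1.0 if x[k] == y[k] else 0.0
        else:                      # continuous: s_k = 1 - |x_k - y_k| / R_k
            s[k] = 1.0 - abs(x[k] - y[k]) / ranges[k]
    return (w * s).sum() / w.sum()   # the distance is 1 minus this value

x = ['red', 1, 3.2]
y = ['blue', 1, 4.0]
print(gower_similarity(x, y, kinds=['cat', 'cat', 'num'], ranges=[None, None, 5.0]))
```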
References
Statistical distance
Similarity measures | Gower's distance | [
"Physics"
] | 342 | [
"Similarity measures",
"Physical quantities",
"Statistical distance",
"Distance"
] |
77,073,430 | https://en.wikipedia.org/wiki/Helffer%E2%80%93Sj%C3%B6strand%20formula | The Helffer–Sjöstrand formula is a mathematical tool used in spectral theory and functional analysis to represent functions of self-adjoint operators. Named after Bernard Helffer and Johannes Sjöstrand, this formula provides a way to calculate functions of operators without requiring the operator to have a simple or explicitly known spectrum. It is especially useful in quantum mechanics, condensed matter physics, and other areas where understanding the properties of operators related to energy or observables is important.
Background
If f ∈ C_0^∞(ℝ), then we can find a function f̃ ∈ C_0^∞(ℂ) such that f̃|ℝ = f, and for each N ≥ 0, there exists a constant C_N such that
|∂̄f̃(z)| ≤ C_N |Im z|^N.
Such a function f̃ is called an almost analytic extension of f.
The formula
If f ∈ C_0^∞(ℝ) and A is a self-adjoint operator on a Hilbert space, then
f(A) = (1/π) ∫_ℂ ∂̄f̃(z) (z − A)^(−1) dx dy,
where f̃ is an almost analytic extension of f, ∂̄ = (∂_x + i∂_y)/2, and z = x + iy.
See also
Cauchy's integral formula
References
Further reading
Lecture notes on Weyl's law
Spectral Measures: Helffer-Sjöstrand
Functional analysis | Helffer–Sjöstrand formula | [
"Mathematics"
] | 195 | [
"Functional analysis",
"Functions and mappings",
"Mathematical relations",
"Mathematical objects"
] |
77,078,531 | https://en.wikipedia.org/wiki/Cyberattacks%20against%20infrastructure | Once a cyberattack has been initiated, certain targets need to be attacked to cripple the opponent. Certain infrastructures have been highlighted as critical in times of conflict, as attacking them can severely cripple a nation. Control systems, energy resources, finance, telecommunications, transportation, and water facilities are seen as critical infrastructure targets during conflict. A report on industrial cybersecurity problems, produced by the British Columbia Institute of Technology and the PA Consulting Group using data from as far back as 1981, reportedly found a 10-fold increase in the number of successful cyberattacks on infrastructure Supervisory Control and Data Acquisition (SCADA) systems since 2000. Cyberattacks that have an adverse physical effect are known as cyber-physical attacks.
Control systems
Control systems are responsible for activating and monitoring industrial or mechanical controls. Many devices are integrated with computer platforms to control valves and gates to certain physical infrastructures. Control systems are usually designed as remote telemetry devices that link to other physical devices through internet access or modems. Little security can be offered when dealing with these devices, enabling many hackers or cyberterrorists to seek out systematic vulnerabilities. Paul Blomgren, manager of sales engineering at a cybersecurity firm, explained how his people drove to a remote substation, saw a wireless network antenna and immediately plugged in their wireless LAN cards. They took out their laptops and connected to the system because it wasn't using passwords. "Within 10 minutes, they had mapped every piece of equipment in the facility," Blomgren said. "Within 15 minutes, they mapped every piece of equipment in the operational control network. Within 20 minutes, they were talking to the business network and had pulled off several business reports. They never even left the vehicle."
Energy
Energy is seen as the second infrastructure that could be attacked. It is broken down into two categories, electricity and natural gas. Electric grids power cities, regions, and households; they power machines and other mechanisms used in day-to-day life. Using the US as an example, in a conflict cyberterrorists can access data through the Daily Report of System Status that shows power flows throughout the system and can pinpoint the busiest sections of the grid. By shutting those grids down, they can cause mass hysteria, backlog, and confusion, and can locate critical areas of operation for further, more direct attacks. Cyberterrorists can access instructions on how to connect to the Bonneville Power Administration, which helps direct them on how to avoid faulting the system in the process. This is a major advantage that can be exploited in cyberattacks, because foreign attackers with no prior knowledge of the system can attack with high accuracy without drawbacks. Cyberattacks on natural gas installations go much the same way as attacks on electrical grids. Cyberterrorists can shut down these installations, stopping the flow, or they can even reroute gas flows to another section that can be occupied by one of their allies. There was a case in Russia with the gas supplier Gazprom: it lost control of its central switchboard, which routes gas flow, after an inside operator and a Trojan horse program bypassed security.
The 2021 Colonial Pipeline cyberattack caused a sudden shutdown of the pipeline that carried 45% of the gasoline, diesel, and jet fuel consumed on the East Coast of the United States.
Wind farms, both onshore and offshore, are also at risk from cyber attacks. In February 2022, a German wind turbine maker, Enercon, lost remote connection to some 5,800 turbines following a large-scale disruption of satellite links. In April 2022, another company, Deutsche Windtechnik, also lost control of roughly 2,000 turbines because of a cyber-attack. While the wind turbines were not damaged during these incidents, these attacks illustrate just how vulnerable their computer systems are.
Finance
Financial infrastructure could be hit hard by cyberattacks, as the financial system is linked by computer systems. Money is constantly being exchanged in these institutions, and if cyberterrorists were to attack, rerouting transactions and stealing large amounts of money, financial industries would collapse and civilians would be without jobs and security. Operations would stall from region to region, causing nationwide economic degradation. In the U.S. alone, the average daily volume of transactions hit $3 trillion, and 99% of it is non-cash flow. Disrupting that amount of money for one day or for a period of days can cause lasting damage, making investors pull out of funding and eroding public confidence.
A cyberattack on a financial institution or transactions may be referred to as a cyber heist. These attacks may start with phishing that targets employees, using social engineering to coax information from them. They may allow attackers to hack into the network and put keyloggers on the accounting systems. In time, the cybercriminals are able to obtain password and keys information. An organization's bank accounts can then be accessed via the information they have stolen using the keyloggers. In May 2013, a gang carried out a US$40 million cyber heist from the Bank of Muscat.
Transportation
Transportation infrastructure mirrors telecommunication facilities: by impeding transportation for individuals in a city or region, the economy will slightly degrade over time. Successful cyberattacks can impact scheduling and accessibility, creating a disruption in the economic chain. Carrying methods will be impacted, making it hard for cargo to be sent from one place to another. In January 2003, during the "slammer" virus outbreak, Continental Airlines was forced to shut down flights due to computer problems. Cyberterrorists can target railroads by disrupting switches, target flight software to impede airplanes, and target road usage to impede more conventional transportation methods. In May 2015, Chris Roberts, a cyber consultant, revealed to the FBI that he had repeatedly, from 2011 to 2014, managed to hack into Boeing and Airbus flight controls via the onboard entertainment system, and had allegedly at least once ordered a flight to climb. The FBI, after detaining him in April 2015 in Syracuse, interviewed him about the allegations.
Water
Water as an infrastructure could be one of the most critical infrastructures to be attacked. It is seen as one of the greatest security hazards among all of the computer-controlled systems. There is the potential to have massive amounts of water unleashed into an area which could be unprotected, causing loss of life and property damage. Even water supplies could be attacked; sewer systems can be compromised too. There was no calculation given to the cost of damages, but the estimated cost to replace critical water systems could be in the hundreds of billions of dollars. Most of these water infrastructures are well developed, making it hard for cyberattacks to cause significant damage; at most, equipment failure can occur, causing service to be disrupted for a short time.
In 2024, multiple US water facilities had their industrial equipment compromised by hackers to display anti-Israel messages. Although no major damage was inflicted, the incidents revealed that US water facilities lack the funding and resources to patch security vulnerabilities in their infrastructure.
Waste management
In addition to water facilities, waste management facilities can also be and have been targets of cyberattacks.
In 2023, the Radioactive Waste Management (RWM) company, owned by the United Kingdom government, experienced an unsuccessful cybersecurity breach attempt conducted through LinkedIn. The attack attempted to identify and access the people who are part of the business.
In 2023, Sellafield, the UK's largest and most hazardous nuclear waste disposal site, had been targeted by foreign hackers, linked to Russia and China. Sleeper malware was discovered inside of the site's networks, and it is unknown how long it had been installed or if it had been fully removed. The full extent of the weak security was exposed when staff found they could access Sellafield's servers from outside the site. Reports in 2012 and 2015 reported that the company and senior management have been aware of the security vulnerabilities but failed to report or spend resources to address these vulnerabilities. Sellafield's sensitive documents, such as foreign attack or disaster emergency defense plans and radioactive waste management, may have been compromised.
It is possible for smaller-scale electronics in e-waste to become targets of cyberattacks. PwC estimates that globally by 2030, the number of Internet of Things (IoT) devices owned around the world will reach over 25 billion, and 70 million tonnes of e-waste will be generated and disposed of. Although only based on anecdotal evidence, it is estimated that the majority of this e-waste is improperly disposed of, allowing the components of these devices to retain sensitive information and personal data. Cybercriminals may target the e-waste of individuals or organizations to gain access to sensitive data that is not as securely guarded as on active devices.
Hospitals and medical facilities
Hospitals as infrastructure are among the major assets to have been impacted by cyberattacks. These attacks could "directly lead to deaths." The cyberattacks are designed to deny hospital workers access to critical care systems. Recently, there has been a major increase in cyberattacks against hospitals amid the COVID-19 pandemic. Hackers lock up a network and demand ransom to return access to these systems. The ICRC and other human rights groups have urged law enforcement to take "immediate and decisive action" to punish such cyberattackers.
Hospitals and medical facilities have seen an increase in ransomware attacks, in which criminals encrypt Protected Health Information (PHI) and other private identifiable information. When the ransom is paid, the money is exchanged for a key to decrypt the information and return the stolen data. Access points into hospital infrastructure are often through third-party companies that hospitals may contract jobs through. The HIPAA Omnibus Rule created in 2013 requires that all businesses contracted to perform work for a hospital where patient information could be involved be held to the same standards of security. An increasingly common access point has been through camera and security systems that are being added to hospital networks. As more outside companies and devices become connected through the internet, the risk of cyberattacks increases. During the COVID-19 pandemic an increase in attacks was noted. Researchers concluded that this was the result of increased remote work, in which hospital staff had more devices connected to networks, increasing potential areas of vulnerability. One tactic that has been effective in preventing cyberattacks in the healthcare industry is the Zero Trust method. In this model, all users, known and unknown, are viewed as a potential threat, and everyone is required to verify their identity with the appropriate credentials.
With the increased use of Electronic Medical Records (EMR) comes an increased need for security to protect patient information and privacy. When a hospital experiences a data breach in the United States, the facility is required to report the breach to the people impacted under the Health Information Technology for Economic and Clinical Health Act (HITECH Act), which contains the Breach Notification Rule. The rule states that facilities are required to report data breaches if the facility provides patient care under HIPAA guidelines. The Health Insurance Portability and Accountability Act protects patients' right to privacy regarding their Protected Health Information (PHI). Accessing PHI can be very lucrative for cybercriminals, as this information can contain home addresses, social security numbers, banking information, and other personally identifiable information.
References
Cyberattacks
Infrastructure | Cyberattacks against infrastructure | [
"Engineering"
] | 2,376 | [
"Construction",
"Infrastructure"
] |
77,080,272 | https://en.wikipedia.org/wiki/Grokking%20%28machine%20learning%29 | In machine learning, grokking, or delayed generalization, is a transition to generalization that occurs many training iterations after the interpolation threshold, after many iterations of seemingly little progress, as opposed to the usual process where generalization occurs slowly and progressively once the interpolation threshold has been reached.
Grokking was introduced in January 2022 by OpenAI researchers investigating how neural networks perform calculations. It derives from the word grok, coined by Robert Heinlein in his novel Stranger in a Strange Land.
Grokking can be understood as a phase transition during the training process. While grokking has been thought of as largely a phenomenon of relatively shallow models, grokking has been observed in deep neural networks and non-neural models and is the subject of active research.
One potential explanation is that weight decay (a component of the loss function that penalizes higher values of the neural network parameters, also called regularization) slightly favors the general solution, which involves lower weight values but is also harder to find. According to Neel Nanda, the process of learning the general solution may actually be gradual, even though the transition to the general solution occurs more suddenly later.
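For reference, weight decay in plain SGD amounts to the following update (a generic sketch, not tied to any particular grokking experiment):

```python
def sgd_step(weights, grads, lr=1e-3, weight_decay=1e-2):
    """One SGD step with L2 weight decay: the decay term steadily shrinks
    parameter norms, nudging training toward low-norm solutions."""
    return [w - lr * (g + weight_decay * w) for w, g in zip(weights, grads)]
```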
References
See also
Deep double descent
Machine learning
Phenomena | Grokking (machine learning) | [
"Engineering"
] | 254 | [
"Artificial intelligence engineering",
"Machine learning"
] |
71,135,585 | https://en.wikipedia.org/wiki/Olga%20Garc%C3%ADa%20Manche%C3%B1o | Olga García Mancheño is an organic chemistry professor at the University of Münster in Germany. García Mancheño directs an organic chemistry research group at University of Münster that focuses on development of new catalytic methods with the goal of developing sustainable synthetic routes to accomplish carbon-hydrogen functionalization, organic chemical rearrangements, and photocatalyzed chemical reactions.
Academic career
García Mancheño earned her bachelor's degree in 2001 from the Faculty of Sciences of the Autonomous University of Madrid in Madrid, Spain. She continued at the Autonomous University of Madrid to earn her Ph.D. in 2005 under the mentorship of Juan Carlos Carretero. She continued her training in organic chemistry as a postdoctoral researcher in the lab of Carsten Bolm at RWTH Aachen University in Aachen, Germany. She completed her habilitation at the University of Münster, mentored by Frank Glorius, and then worked in a temporary professorship at the University of Göttingen in the city of Göttingen, Germany, before acquiring her first permanent faculty position. She was an assistant professor of organic chemistry at the University of Regensburg in Bavaria, Germany, from 2013 to 2017. In 2017, García Mancheño became a professor of organic chemistry at the University of Münster, in Münster, North Rhine-Westphalia, Germany.
Research
García Mancheño is head of a research group at the University of Münster that focuses on developing new catalysts to accomplish organic chemical transformations. She has authored several review articles in peer-reviewed journals on topics in organocatalytic chemistry, and is the editor of a textbook on anion-binding catalysts.
Mentoring
García Mancheño was successful in acquiring funding from the European Research Council in 2017 to start her research program at the University of Münster. She has been a speaker at several training events that help other early-career scientists in Germany acquire funding for their research programs. In 2018 she was a speaker at the Interactive Information Event: ERC Consolidator Grant at the University of Münster to share advice about applying for that specific grant opportunity. She was invited by the German Fulbright Association and Research Corporation for Science Advancement to speak at workshops that aim to prepare university professors in Germany to be successful. She spoke at the Fulbright-Cottrell Junior Faculty Professional Development Workshops in 2018 (Berlin) and in 2019 (Göttingen).
Honors and awards
García Mancheño has received the following honors and awards during her career:
2019 invited speaker at Fulbright-Cottrell Junior Faculty Professional Development Workshop in Göttingen
2018 invited speaker at Fulbright-Cottrell Junior Faculty Professional Development Workshop in Berlin
2017 European Research Council Consolidator Grant (CoG). Frontiers in Catalytic Anion-Binding Chemistry (Max funding of €1,997,763)
2016 ORCHEM Prize from the Liebig-Vereinigung für Organische Chemie of the Gesellschaft Deutscher Chemiker
References
Organic chemists
Academic staff of the University of Münster
Autonomous University of Madrid alumni
Spanish women academics
Living people
Place of birth missing (living people)
Year of birth missing (living people)
Academic staff of the University of Regensburg
Spanish expatriates in Germany | Olga García Mancheño | [
"Chemistry"
] | 642 | [
"Organic chemists"
] |
71,139,409 | https://en.wikipedia.org/wiki/Tetraboric%20acid | Tetraboric acid or pyroboric acid is a chemical compound with empirical formula . It is a colourless water-soluble solid formed by the dehydration or polymerization boric acid.
Tetraboric acid is formally the parent acid of the tetraborate anion, [B4O7]2−.
Preparation
Tetraboric acid can be obtained by heating orthoboric acid above about 170 °C:
4 H3BO3 → H2B4O7 + 5 H2O
References
Borates
Inorganic polymers | Tetraboric acid | [
"Chemistry"
] | 93 | [
"Inorganic polymers",
"Inorganic compounds",
"Inorganic compound stubs"
] |
71,140,054 | https://en.wikipedia.org/wiki/Orthoborate | In inorganic chemistry, an orthoborate is a polyatomic anion with formula or a salt containing the anion; such as trisodium orthoborate . It is one of several boron oxyanions, or borates.
The name is also used in organic chemistry for the trivalent functional group B(OR)3, or any compound (ester) that contains it, such as triethyl orthoborate B(OC2H5)3.
Structure
The orthoborate ion is known in the solid state, for example, in calcium orthoborate Ca3(BO3)2, where it adopts a nearly trigonal planar structure. It is a structural analogue of the carbonate anion CO3(2−), with which it is isoelectronic. Simple bonding theories point to the trigonal planar structure. In terms of valence bond theory, the bonds are formed by using sp2 hybrid orbitals on boron.
Some compounds termed orthoborates do not necessarily contain the trigonal planar ion. For example, gadolinium orthoborate contains the planar ion only at high temperatures; otherwise it contains the cyclic polyborate anion (B3O9)9−.
Reactions
Solution in water
When orthoborate salts are dissolved in water, the anion converts mostly to boric acid and other hydrogen-containing borate anions, mainly tetrahydroxyborate B(OH)4−. The reactions of orthoborate in solution are therefore mostly those of these compounds.
In particular, these reactions include the condensation of tetrahydroxyborate with cis-vicinal diols such as mannitol, sorbitol, glucose and glycerol, to form relatively stable anionic esters. This reaction is used in analytical chemistry to determine the concentration of borate anions.
See also
Metaborate
Tetraborate
References
Inorganic compounds
Boron compounds
Boron oxyanions | Orthoborate | [
"Chemistry"
] | 389 | [
"Inorganic compounds"
] |
68,252,674 | https://en.wikipedia.org/wiki/Time%20resolved%20microwave%20conductivity | Time resolved microwave conductivity (TRMC) is an experimental technique used to evaluate the electronic properties of semiconductors. Specifically, it is used to evaluate a proxy for charge carrier mobility and a representative carrier lifetime from light-induced changes in conductance. The technique works by photo-generating electrons and holes in a semiconductor, allowing these charge carriers to move under a microwave field, and detecting the resulting changes in the electric field. TRMC systems cannot be purchased as a single unit, and are generally "home-built" from individual components. One advantage of TRMC over alternative techniques is that it does not require direct physical contact to the material.
History
While semiconductors have been studied using microwave radiation since the 1950s, it was not until the late 1970s and early 1980s that John Warman at the Delft University of Technology exploited microwaves for time-resolved measurements of photoconductivity. The first reports used electrons, and later photons, to generate charges in fluids. The technique was later refined to study semiconductors by Kunst and Beck at the Hahn Meitner Institute in Berlin.
Delft remains a significant center for TRMC; however, the technique is now used at a number of institutions around the world, notably the National Renewable Energy Laboratory and Kyoto University.
Operating principles
The experiment relies upon the interaction between optically-generated charge carriers and microwave frequency electromagnetic radiation. The most common approach is to use a resonant cavity. An oscillating voltage is produced using a signal generator such as a voltage controlled oscillator or a Gunn diode. The oscillating current is incident on an antenna, resulting in the emission of microwaves of the same frequency. These microwaves are then directed into a resonant cavity. Because they can transmit microwaves with lower loss than cables, metallic waveguides are often used to form the circuit. With the appropriate cavity dimensions and microwave frequency, a standing wave can be formed with one full wavelength filling the cavity.
The sample to be studied is placed at a maximum of the electric field component of the standing wave. Because metals act as cavity walls, the sample needs to have a relatively low free carrier concentration in the dark to be measurable. TRMC is hence best suited to the study of intrinsic or lightly doped semiconductors. Electrons and holes are generated by illuminating the sample with above band gap optical photons. Optical access to the sample is provided by a cavity wall which is both electrically conducting and optically transparent; for example a metallic grating or a transparent conducting oxide.
The photo-generated charge carriers move under the influence of the electric field component of the standing wave, resulting in a change in intensity of microwaves that leave the cavity. The intensity of microwaves out of the cavity is measured as a function of time using an appropriate detector and an oscilloscope. Knowledge of the properties of the cavity can be used to evaluate photoconductance from changes in microwave intensity.
Theory
The reflection coefficient is determined by the coupling between the cavity and the waveguide. When the microwave frequency matches the resonant frequency, the reflectance, $R_0$, of the cavity is expressed as follows:
$$R_0 = \left(\frac{1/Q_0 - 1/Q_{ex}}{1/Q_0 + 1/Q_{ex}}\right)^{2}$$
Here $Q_0$ is the quality factor of the cavity including the sample, and $Q_{ex}$ is the quality factor of the external coupling, which is generally adjusted by an iris. The total loaded quality factor of the cavity, $Q_L$, is defined as follows:
$$\frac{1}{Q_L} = \frac{1}{Q_0} + \frac{1}{Q_{ex}}$$
The photo-generated charge carriers reduce the quality factor of the cavity. When the change in quality factor is very small, the change in reflected microwave power is approximately proportional to the change in the dissipation factor of the cavity. Furthermore, the dissipation factor of the cavity is mainly determined by the conductivity of the interior space, including the sample. Consequently, the change in the conductivity, $\Delta\sigma$, of the cavity contents is proportional to the relative change in microwave intensity:
$$\frac{\Delta P}{P} = -K\,A\,\Delta\sigma$$
Here $P$ is the background (unperturbed) microwave power measured coming out of the cavity and $\Delta P$ is the change in microwave power as a result of the change in cavity conductance. $K$ is the sensitivity factor determined by the quality of the cavity, and $A$ is the geometry factor of the sample. $K$ can be derived by Taylor expansion of the reflectance equation:
Here $f_0$ is the resonant frequency of the cavity in hertz, $\varepsilon_0$ is the vacuum permittivity, and $\varepsilon_r$ is the relative permittivity of the medium inside the cavity. The relative permittivity needs to be considered only when the cavity is filled with a solvent; when the sample is inserted into a dry cavity, the vacuum permittivity alone should be used, because most of the interior space is filled by air. The sign of $K$ depends on whether the cavity is in the under-coupled (lower) or over-coupled (upper) regime: a negative signal is detected in the over-coupled regime, $Q_{ex} < Q_0$, whereas a positive signal is detected in the under-coupled regime, $Q_{ex} > Q_0$. No signal can be detected at the critical coupling condition, $Q_{ex} = Q_0$.
The geometry factor $A$ is determined by the overlap between the electric field and the sample position:
Here $E$ is the electric field in the cavity, and $V_c$ and $V_s$ denote the total interior volume of the cavity and the volume of photo-generated carriers, respectively. If the sample is sufficiently thin (below several μm), the electric field across the photo-generated carriers is effectively uniform. In this condition, $A$ is approximately proportional to the thickness of the sample. The conductivity equation above can then be expressed as follows:
Here $e$ is the elementary charge, $T$ is the transmittance of the sample at the excitation wavelength, $I_0$ is the incident laser fluence, $\varphi$ is the quantum yield of photo-carrier generation per absorbed photon, $\Sigma\mu$ is the sum of the electron and hole mobilities, and $d$ is the thickness of the sample. Because the geometry factor is linearly proportional to the thickness, only the fractional absorbance $F_A$ of the semiconductor (between 0 and 1) needs to be measured additionally (e.g. using ultraviolet–visible spectroscopy) to determine the TRMC figure of merit:
$$\varphi\,\Sigma\mu = \frac{\Delta\sigma\, d}{e\, I_0\, F_A}$$
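A minimal numerical sketch of this analysis chain, assuming the sign and symbol conventions introduced above; the function name and all numbers in the usage line are hypothetical:

```python
import numpy as np

e = 1.602176634e-19  # elementary charge, C

def trmc_figure_of_merit(dP_over_P, K, A, d, I0, FA):
    """phi * Sum(mu) from a TRMC power transient (assumed conventions).

    dP_over_P : relative change in microwave power (time trace)
    K, A      : cavity sensitivity factor and sample geometry factor
    d         : sample thickness, m
    I0        : incident photon fluence, photons/m^2 per pulse
    FA        : fractional absorbance at the excitation wavelength (0..1)
    """
    d_sigma = -dP_over_P / (K * A)                      # conductivity change, S/m
    return np.max(np.abs(d_sigma)) * d / (e * I0 * FA)  # m^2 / (V s)

# hypothetical example: 0.1% peak power drop, K*A = 50, a 500 nm film,
# 1e15 photons/m^2 per pulse, 80% of the light absorbed
phi_sum_mu = trmc_figure_of_merit(np.array([-1e-3]), K=50.0, A=1.0,
                                  d=500e-9, I0=1e15, FA=0.8)
```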
Applications
Knowledge of charge carrier mobility in semiconductors is important for understanding the electronic and materials properties of a system. It is also valuable in device design and optimization. This is particularly true for thin film solar cells and thin film transistors, where charge extraction and amplification, respectively, are highly dependent upon mobility. TRMC has been used to study electron and hole dynamics in hydrogenated amorphous silicon, organic semiconductors, metal halide perovskites, metal oxides, dye sensitized systems, quantum dots, carbon nanotubes, chalcogenides, metal organic frameworks, and the interfaces between various systems.
Because charges are normally generated using a green (~2.3 eV) or ultraviolet (~3 eV) laser, this restricts materials to those with comparable or smaller bandgaps. The technique is hence well suited to the study of solar absorbers, but not to wide bandgap semiconductors such as metal oxides.
While it is very similar and has the same dimensions, the parameter $\varphi\Sigma\mu$ is not the same as a charge carrier mobility. $\Sigma\mu$ contains contributions from both holes and electrons, which cannot conventionally be resolved using TRMC. This is in contrast to Hall measurements or transistor measurements, where hole and electron mobilities can easily be separated. Additionally, the mobility is not directly extracted from the measurements; it is measured multiplied by the carrier generation yield, $\varphi$. The carrier generation yield is the number of electron–hole pairs generated per absorbed photon. Because some absorbed photons can lead to bound neutral excitons, not all absorbed photons will lead to detectable free carriers. This can make interpretation of $\varphi\Sigma\mu$ more complicated than that of mobility. However, generally both the mobility and $\varphi$ are parameters which one wishes to maximize when developing solar cells.
As a time-resolved technique, TRMC also provides information on the timescale of carrier recombination in solar cells. Unlike time resolved photoluminescence measurements, TRMC is not sensitive to the lifetime of excitons.
See also
Electron mobility
Transient photocurrent
Terahertz time-domain spectroscopy
References
Laboratory techniques in condensed matter physics
Materials science
Semiconductors
Spectroscopy
Solar cells | Time resolved microwave conductivity | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,617 | [
"Electrical resistance and conductance",
"Applied and interdisciplinary physics",
"Spectrum (physical sciences)",
"Physical quantities",
"Molecular physics",
"Semiconductors",
"Instrumental analysis",
"Materials science",
"Laboratory techniques in condensed matter physics",
"Materials",
"Electro... |
68,253,197 | https://en.wikipedia.org/wiki/Revive%20%26%20Restore | Revive & Restore is a nonprofit wildlife conservation organization focused on use of biotechnology in conservation. Headquartered in Sausalito, California, the organization's mission is to enhance biodiversity through the genetic rescue of endangered and extinct species. The organization was founded by Stewart Brand and his wife, Ryan Phelan.
Revive & Restore has created a "Genetic Rescue Toolkit" for wildlife conservation – a suite of biotechnology tools adapted from human medicine and commercial agriculture that can improve wildlife conservation outcomes. The toolkit includes biobanking and cell culturing, genetic sequencing, and advanced reproductive technologies, such as cloning. The toolkit complements traditional conservation practices, such as captive breeding and habitat restoration.
Revive & Restore has caused controversy. In particular, Brand's work in de-extinction has been characterized as "playing god" and criticized for taking time and money away from traditional conservation efforts. In addition, many are concerned about the concept of cloning, even in the context of conservation.
History
Revive & Restore was co-founded in 2012 by Stewart Brand and Ryan Phelan with the idea of bringing biotechnology solutions to conservation. The group was incubated by the Long Now Foundation until 2017, when it became an independent 501(c)(3) organization.
In 2013 Revive & Restore organized the first public meeting on de-extinction. Their founding projects include the de-extinction of the passenger pigeon, heath hen, and woolly mammoth. Since then, Revive & Restore has established partnerships with research institutions, governmental agencies, and conservation organizations on a broad range of genetic rescue programs.
Revive & Restore is a member of the International Union for Conservation of Nature (IUCN) and has long-standing partnerships with the US Fish & Wildlife Service, The San Diego Zoo Wildlife Alliance, Morris Animal Foundation, and ViaGen Pets & Equine, among others.
Programs
Advanced Coral Toolkit
The Advanced Coral Toolkit supports research teams in the development and field testing of biotechnologies that benefit coral reef management and restoration efforts. Projects include coral cryopreservation methods for large scale biobanking and fieldable devices for measuring genetic information or molecular signals associated with coral stress. Launched in 2019, the program has funded 10 research teams.
Wild Genomes
Wild Genomes is a funding program to provide genomic tools to field scientists, wildlife managers, and citizens working to protect their local biodiversity. As of 2023, Wild Genomes has funded 30 individual projects. Program categories include Terrestrial Species, Marine Species, Amphibians, and Kelp Ecosystems.
Cloning for conservation
To help mitigate inbreeding depression for two endangered species, the black-footed ferret (Mustela nigripes) and Przewalski's horse (Equus ferus przewalskii), Revive & Restore facilitates on-going efforts to clone individuals from historic cell lines stored at the San Diego Zoo Wildlife Alliance Frozen Zoo.
On December 10, 2020, the world's first cloned black-footed ferret was born. This ferret, named Elizabeth Ann, marked the first time a U.S. endangered species was successfully cloned.
On August 6, 2020, the world's first cloned Przewalski’s horse was born. Since the oocyte used was from a domestic horse, this was an example of interspecies somatic cell nuclear transfer (SCNT). In 2022, the horse, named Kurt, was paired with a female Przewalski's horse at the San Diego Zoo Wildlife Safari Park to learn the behaviors of his species. On February 17, 2023, a second cloned Przewalski's horse was born from the same historic cell line. Kurt and the new foal are genetic twins that may become the first cloned animals to restore lost genetic variation to their species.
Intended Consequences Initiative
In 2020, Revive & Restore developed a campaign around the concept of "Intended Consequences" – focusing on the benefits of conservation interventions, as opposed to focusing on the fears of unintended consequences. That year, Revive & Restore hosted a virtual workshop that resulted in the publication of a special issue in the journal Conservation Science and Practice.
References
External links
Nature conservation organizations based in the United States
Non-profit organizations based in California
Environmental organizations based in California
Environmental organizations based in the San Francisco Bay Area
Conservation and restoration organizations
Rewilding advocates
Synthetic biology
Genetics
Conservation biology
Climate change mitigation
Extinction
Marine conservation
Bird conservation organizations | Revive & Restore | [
"Engineering",
"Biology"
] | 905 | [
"Synthetic biology",
"Biological engineering",
"Bioinformatics",
"Molecular genetics",
"Conservation biology"
] |
72,623,626 | https://en.wikipedia.org/wiki/Universal%20Taylor%20series | A universal Taylor series is a formal power series , such that for every continuous function on , if , then there exists an increasing sequence of positive integers such thatIn other words, the set of partial sums of is dense (in sup-norm) in , the set of continuous functions on that is zero at origin.
Statements and proofs
Fekete proved that a universal Taylor series exists.
References
Mathematical series | Universal Taylor series | [
"Mathematics"
] | 84 | [
"Sequences and series",
"Mathematical structures",
"Series (mathematics)",
"Calculus"
] |
72,628,540 | https://en.wikipedia.org/wiki/James%20A.%20Rafferty | James A. Rafferty, Vice President, Officers' Committee member, Director, and member of the executive committee of Union Carbide, was an important figure in the petrochemical industry. Rafferty guided Union Carbide's effort in developing the new industry of synthetic aliphatic chemicals (aliphatic compounds are one of the two main branches within organic chemistry) and was instrumental in the development of the liquid oxygen industry. Rafferty directed Union Carbide's collaboration with the United States government for the Manhattan Project and with the War Production Board for the synthetic rubber program during World War II.
Rafferty was born in Chicago, Illinois on May 4, 1886, and studied engineering and chemistry at the Illinois Institute of Technology (where Rafferty would later become a Trustee). After graduation in 1908, Rafferty worked for the People's Gas, Light, and Coke Company and then in 1917 joined the Linde Air Products Company, which later merged with three other companies to become Union Carbide.
Rafferty became general manager of the newly formed Union Carbide subsidiary, the Carbide and Carbon Chemicals Corporation (CCCC) in 1920. He became vice president in 1924, President in 1929, and chairman of the board in 1944. He was made president of the Bakelite Corporation in 1939 and Chairman of Bakelite in 1944. Under Rafferty's leadership, Carbide and Carbon Chemicals Corporation went on to become the second largest chemical company in the United States by 1948.
Rafferty became a Vice President of Union Carbide, the parent company of the CCCC, in 1938, a Director in 1941, and a member of the executive committee in 1944. Rafferty served as Chairman of the Union Carbide's new product development committee until his death on December 19, 1951.
As a result of lifetime achievements, Rafferty was awarded the Chemical Industry Medal in 1948.
The Manhattan Project
Under the auspices of the Manhattan Project, Rafferty directed Union Carbide's efforts to enrich uranium (some of which was mined by another Union Carbide subsidiary: the United States Vanadium Corporation) for use in an atomic bomb. This effort culminated in Union Carbide designing (along with the Kellex Corporation), building, and operating the massive K-25 gaseous diffusion plant at Oak Ridge.
General Leslie Groves wrote of Rafferty: "No one outside the project can ever appreciate how much we depended on you and how well you performed your well-nigh impossible task."
"Few men were more important in the production of the atomic bomb than he was."
- General Leslie R. Groves
Stéphane Groueff, in his book chronicling the Manhattan Project, had this to say about Rafferty:
In every company he dealt with, [Leslie] Groves had a general rule: always try to deal directly with the person who could issue an order that nobody else could countermand. And this did not necessarily mean the president or the chairman of the board. Every company was run in a different way, and often it took some inquiring to find out who was the driving spirit, the executive with the real power of authority in a large corporation.
However, at Union Carbide, even though it was a huge organization with several, nearly autonomous subsidiaries, Groves did not have to search very far. It was common knowledge at Union Carbide that one of the driving forces behind the company's spectacular growth was the executive vice-president, James A. Rafferty.
…
"In recognition of his being 'the workhorse of the Carbide backfield,' " read an article in Chemical and Engineering News, "Rafferty was made vice-president of Union Carbide and Carbon Corporation, the parent corporation of all Carbide units, in 1939. This did little to destroy the idea held by some, however, that over 100 men named James A. Rafferty worked for Union Carbide and its many units."
An executive with drive and vision, Rafferty contributed tremendously to the birth and fantastic growth of a new industry in America: synthetics made from petroleum rather than from coal, as they had been formerly. In his field, Rafferty was hailed as one of the great "makers of the chemical industry." To him the word synthetic denoted something worthy: a material of uniform quality, designed for a particular purpose; a man-made product far superior to a natural material.
When Leslie Groves was ushered into Rafferty's oak-paneled office, the two men liked each other at sight. Groves realized immediately that he was talking to someone who loved action and efficiency, a man who would push things ahead. As for Rafferty, he was a firm believer in the American system of free enterprise and the importance of industry's participation in the war effort. It was not difficult for the general to convince Rafferty that his corporation should help the Manhattan Project. "The American chemical industry thus far has benefited tremendously from the stimulating atmosphere of American free enterprise," Rafferty used to say. "During several visits to Europe, I studied industries abroad and compared them with our own. I feel that the well-being and national security of a nation is in proportion to the success and extent of its industries." Rafferty was impressed by Groves's earnest persuasion; he promised to discuss the matter with the top people of those Carbide units that would be involved.
- Groueff, Stéphane. Manhattan Project: The Untold Story of the Making of the Atomic Bomb. [1st ed.] Boston, Little, Brown, 1967.
On June 1, 1945, prominent industrialists, including Rafferty, were invited to the second meeting of the Interim Committee. The industrialists were introduced to the committee as follows:
Mr. George H. Bucher, President of Westinghouse - manufacture of equipment for the electromagnetic process.
Mr. Walter S. Carpenter, President of Du Pont Company - construction of the Hanford Project.
Mr. James Rafferty, Vice President of Union Carbide - construction and operation of gas diffusion plant in Clinton.
Mr. James White, President of Tennessee Eastman - production of basic chemicals and construction of the RDX plant at Holston, Tennessee.
Tennessee Eastman also managed and operated the Y-12 facility at Oak Ridge. In 1947 Union Carbide took over the operation of Y-12.
The graphite for the Hanford B Reactor as well as the Oak Ridge X-10 Reactor was produced by another Union Carbide subsidiary National Carbon Company.
World War II: Synthetic Rubber Program
After Pearl Harbor, the United States was effectively cut off from its supply of natural rubber and a large scale synthetic rubber production process needed to be invented and commercialized. Rafferty led Union Carbide's efforts in producing butadiene which would then be polymerized in a synthetic rubber production process.
The Baruch Committee in 1942 had reported to President Roosevelt: "Of all critical and strategic materials, rubber is the one which presents the greatest threat to the safety of our nation and the success of the Allied cause. If we fail to secure a large new rubber supply quickly, our war effort and domestic economy will collapse."
William M. Jeffers, leader of Union Pacific Railroad and first Rubber Director for the War Production Board, had this to say of Rafferty:
As my final contribution to the rubber program, I want to say to you with all sincerity that had it not been for you and your great organization, the people who look upon rubber as tires would have been forced to the conclusion that the rubber program was more or less of a failure and so I feel it is fair to say and it is accurate to say that had it not been for the contribution of Carbide and Carbon Chemicals Corporation, this program could not have succeeded.
- William M. Jeffers
Bradley Dewey, co-founder of the Dewey & Almy Chemical Company and the second Rubber Director for the War Production Board, said of Rafferty:
Before winding up my affairs with the synthetic rubber program, I wish to express my appreciation for the magnificent performance of your organization in the production of raw materials for the GR-S [general-purpose synthetic rubbers formed by copolymerization of emulsions of styrene and butadiene; used in tires and other rubber products; previously also known as Buna-S, currently known as SBR (styrene-butadiene rubber)]. The Carbide and Carbon Chemicals Corporation should be extremely proud of the part it has played in the success of the synthetic rubber program. Without you this country might well have met with disaster.
- Bradley Dewey
References
1886 births
1951 deaths
Chemical engineering
20th-century American businesspeople
20th-century American chemists
Petrochemicals
Plastics
20th-century industrialists | James A. Rafferty | [
"Physics",
"Chemistry",
"Engineering"
] | 1,825 | [
"Products of chemical industry",
"Petrochemicals",
"Chemical engineering",
"Unsolved problems in physics",
"nan",
"Amorphous solids",
"Plastics"
] |
72,635,333 | https://en.wikipedia.org/wiki/Nicolson%E2%80%93Ross%E2%80%93Weir%20method | Nicolson–Ross–Weir method is a measurement technique for determination of complex permittivities and permeabilities of material samples for microwave frequencies. The method is based on insertion of a material sample with a known thickness inside a waveguide, such as a coaxial cable or a rectangular waveguide, after which the dispersion data is extracted from the resulting scattering parameters. The method is named after A. M. Nicolson and G. F. Ross, and W. B. Weir, who developed the approach in 1970 and 1974, respectively.
The technique is one of the most common procedures for material characterization in microwave engineering.
Method
The method uses the scattering parameters of a material sample embedded in a waveguide, namely $S_{11}$ and $S_{21}$, to calculate permittivity and permeability data. $S_{11}$ and $S_{21}$ correspond to the cumulative reflection and transmission coefficients of the sample, each referenced to the respective sample end: these parameters account for the multiple internal reflections inside the sample, which is considered to have a thickness of $d$. The reflection coefficient of the bulk sample is:
$$\Gamma = K \pm \sqrt{K^{2} - 1}$$
where
$$K = \frac{S_{11}^{2} - S_{21}^{2} + 1}{2\,S_{11}}.$$
The sign of the root for the reflection coefficient is chosen appropriately to ensure its passivity ($|\Gamma| \le 1$). Similarly, the transmission coefficient of the bulk sample can be written as:
$$T = \frac{S_{11} + S_{21} - \Gamma}{1 - (S_{11} + S_{21})\,\Gamma}.$$
Thus, the effective permeability ($\mu_{\mathrm{eff}}$) and permittivity ($\varepsilon_{\mathrm{eff}}$) of the material can be written as:
$$\mu_{\mathrm{eff}} = \frac{\lambda_{g}\,(1 + \Gamma)}{\Lambda\,(1 - \Gamma)}, \qquad \varepsilon_{\mathrm{eff}} = \frac{\lambda_{0}^{2}}{\mu_{\mathrm{eff}}}\left(\frac{1}{\lambda_{c}^{2}} + \frac{1}{\Lambda^{2}}\right),$$
where
$$\frac{1}{\Lambda^{2}} = -\left[\frac{1}{2\pi d}\,\ln\!\left(\frac{1}{T}\right)\right]^{2}$$
and
$\lambda_{0}$ is the free-space wavelength,
$\lambda_{g} = \left(1/\lambda_{0}^{2} - 1/\lambda_{c}^{2}\right)^{-1/2}$ is the guided mode wavelength of the unfilled transmission line,
$\lambda_{c}$ is the cutoff wavelength of the unfilled transmission line.
The constitutive relation for $\Lambda$ admits an infinite number of solutions due to the branches of the complex logarithm. The ambiguity regarding its result can be resolved by taking the group delay into account.
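A minimal numerical sketch of this extraction, restricted to the principal branch of the logarithm; the S-parameter values, frequency, and thickness in the usage line are hypothetical:

```python
import numpy as np

c0 = 299_792_458.0  # speed of light in vacuum, m/s

def nrw_extract(s11, s21, f, d, lam_c=np.inf):
    """Nicolson-Ross-Weir extraction of (eps_eff, mu_eff).

    s11, s21 : complex S-parameters referenced to the sample faces
    f        : frequency, Hz
    d        : sample thickness, m
    lam_c    : cutoff wavelength of the empty line, m (np.inf for a TEM coaxial line)
    """
    K = (s11**2 - s21**2 + 1.0) / (2.0 * s11)
    gamma = K + np.sqrt(K**2 - 1.0)
    if abs(gamma) > 1.0:                      # choose the root with |Gamma| <= 1
        gamma = K - np.sqrt(K**2 - 1.0)
    T = (s11 + s21 - gamma) / (1.0 - (s11 + s21) * gamma)
    lam0 = c0 / f
    inv_lam_g = np.sqrt(1.0 / lam0**2 - 1.0 / lam_c**2)    # 1 / lambda_g
    inv_Lambda = 1j * np.log(1.0 / T) / (2.0 * np.pi * d)  # principal branch only
    mu = (1.0 + gamma) / ((1.0 - gamma) * inv_lam_g) * inv_Lambda
    eps = lam0**2 * (1.0 / lam_c**2 + inv_Lambda**2) / mu
    return eps, mu

# hypothetical example at 10 GHz for a 2 mm slab in a TEM line
eps, mu = nrw_extract(0.35 + 0.45j, 0.55 - 0.25j, f=10e9, d=2e-3)
```

The principal-branch restriction is why, as noted below, the sample should be kept thinner than half a guided wavelength or the branch resolved via group delay.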
Limitations and extensions
In the case of low material loss, the Nicolson–Ross–Weir method is known to be unstable for sample thicknesses at integer multiples of one half wavelength due to resonance phenomenon. Improvements over the standard algorithm have been presented in engineering literature to alleviate this effect. Furthermore, complete filling of a waveguide with sample material may pose a particular challenge: presence of gaps during the filling of the waveguide section would excite higher-order modes, which may yield errors in scattering parameter results. In such cases, more advanced methods based on the rigorous modal analysis of partially-filled waveguides or optimization methods can be used. A modification of the method for single-port measurements was also reported.
In addition to homogeneous materials, extensions of the method have been developed to obtain the constitutive parameters of isotropic and bianisotropic metamaterials.
See also
Fourier-transform spectroscopy
Microwave radiometer
Reflection seismology
Spectroscopy
Time-domain reflectometer
Vector network analyzer
References
Further reading
Microwave technology
Spectroscopy
Electric and magnetic fields in matter | Nicolson–Ross–Weir method | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 564 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Electric and magnetic fields in matter",
"Materials science",
"Condensed matter physics",
"Spectroscopy"
] |
72,637,308 | https://en.wikipedia.org/wiki/Soft%20graviton%20theorem | In physics, the soft graviton theorem, first formulated by Steven Weinberg in 1965, allows calculation of the S-matrix, used in calculating the outcome of collisions between particles, when low-energy (soft) gravitons come into play.
Specifically, suppose n incoming particles collide to produce m outgoing particles, with the outcome of the collision described by a certain S matrix. By adding one or more soft gravitons to the n + m particles, the resulting S matrix (call it S') differs from the initial S only by a factor that depends on the momenta, but in no way on the type, of the particles to which the gravitons couple.
The theorem also holds by putting photons in place of gravitons, thus obtaining a corresponding soft photon theorem.
The theorem is used in the context of attempts to formulate a theory of quantum gravity in the form of a perturbative quantum theory, that is, as an approximation of a possible, as yet unknown, exact theory of quantum gravity.
In 2014 Andrew Strominger and Freddy Cachazo expanded the soft graviton theorem, gauge invariant under translation, to the subleading term of the series, obtaining the gauge invariance under rotation (implying global angular momentum conservation) and connected this to the gravitational spin memory effect.
Formulation
Given particles whose interaction is described by a certain initial S matrix, adding a soft graviton (i.e., one whose energy is negligible compared to the energy of the other particles) that couples to one of the incoming or outgoing particles gives a resulting S' matrix which is, leaving off some kinematic factors,
$$S' = S\,\frac{\eta\,\epsilon_{\mu\nu}\, p^{\mu} p^{\nu}}{p\cdot p_{G} - i\,\eta\,\varepsilon} + O\!\left(p_{G}^{0}\right),$$
where p is the momentum of the particle interacting with the graviton, ϵμν is the graviton polarization, pG is the momentum of the graviton, ε is an infinitesimal real quantity which helps to shape the integration contour, and the factor η is equal to 1 for outgoing particles and -1 for incoming particles.
The formula comes from a power series and the last term with the big O indicates that terms of higher order are not considered. Although the series differs depending on the spin of the particle coupling to the graviton, the lowest-order term shown above is the same for all spins.
In the case of multiple soft gravitons involved, the factor in front of S is the sum of the factors due to each individual graviton.
If a soft photon (whose energy is negligible compared to the energy of the other particles) is added instead of the graviton, the resulting matrix S' is
$$S' = S\,\frac{\eta\, q\,\epsilon_{\mu}\, p^{\mu}}{p\cdot p_{\gamma} - i\,\eta\,\varepsilon} + O\!\left(p_{\gamma}^{0}\right),$$
with the same parameters as before but with pγ momentum of the photon, ϵ is its polarization, and q the charge of the particle coupled to the photon.
As for the graviton, in case of more photons, a sum over all the terms occurs.
Subleading order expansion
The expansion of the formula to the subleading term of the series for the graviton was calculated by Andrew Strominger and Freddy Cachazo:
$$S' = S\left[\frac{\eta\,\epsilon_{\mu\nu}\, p^{\mu} p^{\nu}}{p\cdot p_{G} - i\,\eta\,\varepsilon} + \frac{\epsilon_{\mu\nu}\, p^{\mu}\,(p_{G})_{\lambda}\, J^{\lambda\nu}}{p\cdot p_{G} - i\,\eta\,\varepsilon}\right] + O\!\left(p_{G}\right),$$
where $J^{\lambda\nu}$ represents the angular momentum of the particle interacting with the graviton.
This formula is gauge invariant under rotation and is connected to the gravitational spin memory effect.
See also
Pasterski–Strominger–Zhiboedov triangle
References
Quantum field theory
Bosons
Hypothetical elementary particles | Soft graviton theorem | [
"Physics"
] | 691 | [
"Quantum field theory",
"Matter",
"Unsolved problems in physics",
"Quantum mechanics",
"Bosons",
"Hypothetical elementary particles",
"Physics beyond the Standard Model",
"Subatomic particles"
] |
74,041,037 | https://en.wikipedia.org/wiki/Radium%20azide | Radium azide is an inorganic compound of radium and nitrogen with the chemical formula .
Synthesis
Radium azide can be prepared by dissolving radium carbonate in aqueous hydrazoic acid and evaporating the resulting solution.
Physical properties
Radium azide forms a white crystalline solid.
Chemical properties
The compound decomposes when heated to 180–250 °C:
Ra(N3)2 → Ra + 3 N2
References
Azides
Radium compounds | Radium azide | [
"Chemistry"
] | 83 | [
"Explosive chemicals",
"Azides"
] |
74,041,415 | https://en.wikipedia.org/wiki/Sterimol%20parameter | A sterimol parameter is a set of vectors which describes the steric occupancy of a molecule. First developed by Verloop in the 1970s. Sterimol parameters found extensive application in quantitative structure-activity relationship (QSAR) studies for drug discovery. Introduction of Sterimol parameters into organic synthesis was pioneered by the Sigman research group in the 2010s. Benefiting from the multi-dimensional values that they carry, sterimol parameters give more accurate predictions for the enantioselectivity of asymmetric catalytic reactions than its counterparts, especially in cases when structurally complicated ligands are used.
Definition
Sterimol parameters are built upon the Corey-Pauling-Koltun atomic models, which take into consideration the Van der Waals radii of each atom in the molecule. Unlike most other steric parameters such as A-value, Taft parameters and Tolman cone angle, which group all the spatial information into a single cumulative value, Sterimol parameters consist of three sub-parameters: one length parameter (L), and two width parameters (B1, B5). The three parameters add together to profile the 3-dimensional spatial information of a molecule.
In order to define the Sterimol parameters of a molecule, an axis needs to be defined at first. Since Sterimol parameters are usually applied for describing the bulkiness of a certain substituent which is attached to the substrate, the default choice of the axis is the one that passes through the atoms which link the substrate and substituent together. This axis is defined as the X-axis.
Once the X-axis has been defined, the Sterimol parameters can be assigned. Take the 1,2-dimethylpropyl group as an example (Figure 1). The length parameter (L) refers to the farthest extension of the substituents in the direction parallel to the X axis (shown in Figure 1, left). The width parameters can be assigned from the point of view which is perpendicular to the X axis. The width parameter B1 refers to the minimal profile width of the substituents on the linking atom from the X axis, while parameter B5 refers to the maximal width from the same axis (shown in Figure 1, right).
Sterimol B2–B4 parameters were initially used for obtaining the maximal width. However, in his second generation Sterimol approach, Verloop pointed out that due to their directional dependence on Sterimol B1, discrepancies arose when computing those three parameters in cases where B1 can point to multiple directions. Since Sterimol B2 and B3 hardly contributed significantly to any regression functions obtained, and Sterimol B4 was practically equal to B5, the parameters B2–B4 were omitted.
Sterimol B1 parameter demonstrates the steric effects imposed by branching at the linking atom of a substituent. The more branches the linking atom bears, the larger Sterimol B1 value the substituent has. On the other hand, Sterimol B5 parameter is more susceptible to the steric effects of the substituent's terminus. In general, Sterimol B1 represents vicinal steric effects of the substituent, while Sterimol B5 represents remote steric effects.
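To make these geometric definitions concrete, the sketch below computes L, B1, and B5 directly from atomic coordinates and van der Waals radii by scanning directions perpendicular to the X-axis. The coordinate convention (attachment point at the origin, X-axis through the linking atom) follows the description above; the brute-force direction scan is an illustrative choice, not any published program's reference algorithm:

```python
import numpy as np

def sterimol_L_B1_B5(coords, radii, n_dirs=3600):
    """Approximate Sterimol parameters of a substituent.

    coords : (n, 3) atom positions, with the attachment point at the origin
             and the X-axis running through the linking atom
    radii  : (n,) van der Waals radii of the substituent atoms
    """
    coords = np.asarray(coords, dtype=float)
    radii = np.asarray(radii, dtype=float)
    # L: farthest extension along +X, including each atom's vdW sphere
    L = np.max(coords[:, 0] + radii)
    # Widths: in each direction u perpendicular to X, the profile width is
    # the largest reach of any vdW sphere; B1 and B5 are the extremes.
    angles = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
    widths = np.empty(n_dirs)
    for i, t in enumerate(angles):
        u = np.array([0.0, np.cos(t), np.sin(t)])
        widths[i] = np.max(coords @ u + radii)
    return L, widths.min(), widths.max()  # L, B1, B5
```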
Several open-source programs have already included the feature of calculating Sterimol parameters, such as Morfeus, Kallisto, and dbstep.
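For instance, a short sketch with morfeus, assuming its documented Sterimol interface with 1-indexed atoms; the input file name is hypothetical:

```python
from morfeus import Sterimol, read_xyz

# atom 1 is the dummy/attachment atom, atom 2 the linking atom (defines the X-axis)
elements, coordinates = read_xyz("substituent.xyz")
sterimol = Sterimol(elements, coordinates, 1, 2)
print(sterimol.L_value, sterimol.B_1_value, sterimol.B_5_value)
```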
Application in asymmetric catalysis
In the 2010s, machine learning emerged as a powerful tool for guiding catalyst discovery. More specifically, machine learning models such as multivariate linear regression have been applied to study the linear free energy relationships (LFERs) in catalytic asymmetric organic reactions. These relationships describe the effects that ligand substituents have on reaction outcomes, namely enantioselectivity, and can be extrapolated to predict the performance of ligands outside the known dataset. However, machine learning approaches require well-defined molecular descriptors for the steric and electronic properties of ligands in order to make accurate predictions. Sterimol parameters emerged as a good candidate for quantifying the steric environment induced by ligands.
In Matthew Sigman's seminal work published in 2012, Sterimol parameters were implemented in asymmetric catalysis for the first time in the analysis of an asymmetric Nozaki-Hiyama-Kishi reaction (Figure 2). In initial ligand screening the team found that the steric hindrance of the ester substituent on the oxazoline-proline-based ligand scaffold was pertinent to the overall enantioselectivity of the reaction. When attempting to use the Charton modification of Taft's parameters for probing the LFERs, they observed breaks in linearity with respect to several "isopropyl-like" substituents with large Charton values (Figure 3, left). However, this break did not exist when the Sterimol B1 parameter was used instead. All of the substituents studied demonstrated good linear correlation between their Sterimol B1 value and the reaction enantioselectivity (Figure 3, right).
Sigman attributed the superiority of Sterimol B1 in this prediction over Charton values to the inherent limitations of the experimentally based Charton values. He noted that the Charton model assumes that the substituent can rapidly rotate around the X-axis. However, in the context of asymmetric catalysis, only one conformation of the substituent provides the transition state with lowest energy, which leads to the formation of the major enantiomer. Therefore, Charton values tend to overestimate the steric effects of substituents that are non-symmetrical around the X-axis, because they can only describe the net conformer of a certain substituent. Sterimol parameters, in contrast, are not derived from experimental results, which are sometimes idiosyncratic as a result of distinct mechanisms. By virtue of their origin, namely quantum chemical calculations, Sterimol parameters can more accurately interpret the steric effects of a substituent in its static form. Sterimol B1, in particular, can approximate the steric repulsive effect of the exact conformer with the lowest energy. Table 1 demonstrates the differences of the two parameters. For example, while they have the same Sterimol B1 values, the Charton value of the isopropyl-like CHPr2 substituent is significantly larger than that of i-Pr due to overestimation. This explains why better correlation was obtained with Sterimol B1.
To date, the Sigman lab has applied Sterimol parameters in the analysis of several catalytic asymmetric reactions. Sterimol parameters are also utilized by chemists worldwide to improve the enantioselectivity of various catalytic reactions, such as conjugate addition, the Tsuji-Trost reaction, C–H activation, cyclopropanation, etc.
Weighted sterimol parameters
Following Sigman's work, the Paton lab developed a revised form of Sterimol parameters in 2019. Termed "weighted Sterimol" (wSterimol), this new depiction of Sterimol parameters considers the influence of conformational effects. Paton stated that enantioselectivity is a macroscopic observable, and multiple conformations should not be overlooked when generating descriptor values, especially for substituents with greater conformational flexibility. With this in mind, Paton designed the Python-based program "wSterimol", which combines conformational search with Sterimol parameter calculation. In a fully automated fashion, the program performs conformer generation, geometry optimization, filtering, and Sterimol computation. Finally, the program outputs weighted Sterimol values wB1, wB5, and wL, which are generalized based on the Boltzmann distribution. This user-friendly program has been applied in the studies of several asymmetric catalytic systems.
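A minimal sketch of the Boltzmann-weighting step that this approach applies to per-conformer values; the conformer energies and B1 values below are hypothetical:

```python
import numpy as np

def boltzmann_weighted(values, rel_energies_kcal, T=298.15):
    """Boltzmann-weighted average of per-conformer descriptor values."""
    R = 0.0019872041  # gas constant, kcal/(mol K)
    w = np.exp(-np.asarray(rel_energies_kcal) / (R * T))
    w /= w.sum()
    return float(np.dot(w, np.asarray(values)))

# e.g. a weighted B1 (wB1) from three conformers
wB1 = boltzmann_weighted(values=[1.90, 2.10, 2.04],
                         rel_energies_kcal=[0.0, 0.6, 1.3])
```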
References
Stereochemistry
Physical organic chemistry | Sterimol parameter | [
"Physics",
"Chemistry"
] | 1,675 | [
"Stereochemistry",
"Space",
"nan",
"Physical organic chemistry",
"Spacetime"
] |
74,042,213 | https://en.wikipedia.org/wiki/Baobab-K | The Baobab-K is a Polish automated mine-laying system mounted on the Jelcz 8×8 truck chassis.
History
The Baobab-K's research and development phase began in 2017.
On 14 June 2023, Polish deputy prime minister Mariusz Błaszczak granted approval for delivery contracts of the Baobab-K mine-laying vehicles and their corresponding mines by Huta Stalowa Wola. During the approval process, it was disclosed that one of the contracts entails the supply of 24 Baobab-K mine-laying vehicles for a total value of PLN 510 million. The delivery of the vehicles is scheduled to take place between 2026 and 2028.
Additionally, the contracts involve the purchase of mines and cassettes amounting to PLN 566 million.
Specifications
The vehicle operates with a two-person crew and has a maximum capacity of up to 600 mines. It is capable of conducting mine-laying operations at speeds ranging from 5 to 25 km/h, covering an area approximately 1,800 meters long and 180 meters wide.
References
Minelayers
Mine warfare
Military equipment introduced in the 2020s | Baobab-K | [
"Engineering"
] | 239 | [
"Military engineering",
"Mine warfare"
] |
74,046,335 | https://en.wikipedia.org/wiki/Obi%20Peter%20Adigwe | Obi Peter Adigwe is a Nigerian pharmacist and a doctor of pharmaceutical policy. He was appointed the Director General of the National Institute for Pharmaceutical Research and Development (NIPRD) with effect from 10 August 2018 by the former president of the Federal Republic of Nigeria, Muhammadu Buhari. Before his appointment as NIPRD boss, he was the Executive Secretary of the Pharmaceutical Manufacturers Group of the Manufacturers' Association of Nigeria and holds a Doctor of Pharmaceutical Policy at the University of Leeds, United Kingdom.
Dr. Adigwe was the pioneer Head of the Health Policy Research and Development (HPRD) Unit at the National Assembly (Nigeria), where he formulated research and development strategies in Health Policy with a view to developing an Evidence-Based approach to Legislation and Policy formulation in Nigeria. He also developed innovative and contextual capacity-building modules for Healthcare Professionals, as well as coordinated research that contributed to Health System strengthening. He has a significant number of peer-reviewed publications including the first Knowledge Attitudes and Perceptions study on Ebola in Nigeria, as well as a seminal paper on Rational Use of Medicines. While in the United Kingdom, he was the lead author of the article “Rewrite the Script for Non-medical Prescribing” which contributed to prescribing policy reforms in the Parliament of the United Kingdom.
Throughout his career, Adigwe has pioneered a significant number of innovative research and developmental projects. He led teams that secured various high-profile grants including a Clinical Trials Grant from the ECOWAS, an API Grant from African Export–Import Bank, a Drug Development Mega Grant from Tertiary Education Trust Fund and a European Union / Government of Bulgaria Grant supporting Local Production of Vaccines in the Nigerian setting. Based on the NIPRD study led by Dr Adigwe, the European Union (EU) announced the award of an €18m grant to Nigeria as a catalyst for vaccines research.
Dr. Obi Peter Adigwe has headed and served on numerous committees and expert working groups at both national and international levels, including the D8, the United Nations, the World Health Organisation (WHO), the African Union and ECOWAS. Dr. Adigwe was conferred with Nigeria's highest productivity award by the President of Nigeria in recognition of his hard work, productivity and excellence in national developmental initiatives. Adigwe is a Fellow of the Nigerian Academy of Pharmacy, as well as of the West African Institute of Public Health.
Education
Obi Adigwe attended University of Jos, Jos, obtaining a bachelor's degree in pharmacy in 2000. He proceeded to the University of Edinburgh, UK and obtained MSc in Global Health and Public Policy in 2008 and a Doctorate degree (PhD) in Pharmaceutical Policy from the University of Leeds, UK, in 2012.
Career
Obi Adigwe started his career as a pharmacist at the National Assembly (Nigeria)'s Pharmacy Department, FCT, Abuja Nigeria, in 2005 and remained there till 2007. He proceeded to become the Senior Project Officer at the Pharmacy Department Unit at University of Leeds, United Kingdom, from 2010 to 2012.
From 2012 to 2015, he was the head of the Health Policy Research and Development Unit at the National Assembly (Nigeria) (NASS), FCT, Abuja, Nigeria, where he formulated research and development strategies across multiple policy areas and also authored write-ups to disseminate findings from research projects.
On 14 June 2022, Adigwe was reappointed as the Director General of National Institute for Pharmaceutical Research and Development (NIPRD).
Adigwe has mentored over 80 PhD and MSc candidates, and 100 pharmacists. Adigwe has given more than 82 scientific presentations. He chaired and contributed to the COVID-19 pandemic response, providing an internationally acclaimed analysis that was recognized by the Nigerian Government and informed the country's position on the Madagascan Covid Organics preparation.
References
Living people
Nigerian pharmacists
Pharmaceutical industry
Year of birth missing (living people) | Obi Peter Adigwe | [
"Chemistry",
"Biology"
] | 814 | [
"Pharmaceutical industry",
"Pharmacology",
"Life sciences industry"
] |
74,046,549 | https://en.wikipedia.org/wiki/Lithium%20periodate | Lithium periodate is an inorganic compound of lithium, iodine, and oxygen with the chemical formula .
Physical properties
The compound forms a white powder. It also forms hydrates and is soluble in water.
References
Periodates
Lithium compounds | Lithium periodate | [
"Chemistry"
] | 47 | [
"Periodates",
"Lithium salts",
"Oxidizing agents",
"Salts"
] |
75,452,067 | https://en.wikipedia.org/wiki/Tavis%E2%80%93Cummings%20model | In quantum optics, fhe Tavis–Cummings model is a theoretical model to describe an ensemble of identical two-level atoms coupled symmetrically to a single-mode quantized bosonic field. The model extends the Jaynes–Cummings model to larger spin numbers that represent collections of multiple atoms. It differs from the Dicke model in its use of the rotating-wave approximation to conserve the number of excitations of the system.
Originally introduced by Michael Tavis and Fred Cummings in 1968 to unify representations of atomic gases in electromagnetic fields under a single fully quantum Hamiltonian — as Robert Dicke had done previously using perturbation theory — the Tavis–Cummings model's restriction to a single field-mode with negligible counterrotating interactions simplifies the system's mathematics while preserving the breadth of its dynamics.
The model demonstrates superradiance, bright and dark states, Rabi oscillations and spontaneous emission, and other features of interest in quantum electrodynamics, quantum control and computation, atomic and molecular physics, and many-body physics. The model has been experimentally tested to determine the conditions of its viability, and realized in semiconducting and superconducting qubits.
Hamiltonian
The Tavis–Cummings model assumes that for the purposes of electromagnetic interactions, atomic structures are dominated by their dipole, as they are for distant neutral atoms in the weak-field limit. Thus the only atomic quantity under consideration is its angular momentum, not its position or fine electronic structure. Furthermore, the model asserts the atoms to be sufficiently distant that they do not interact with each other, only with the electromagnetic field, modeled as a bosonic field (since photons are the gauge bosons of electromagnetism).
Formal derivation
For two atomic-electronic states separated by a Bohr frequency $\omega_0$, transitions between the ground and excited states $|g\rangle$ and $|e\rangle$ are mediated by the Pauli operators $\sigma_+ = |e\rangle\langle g|$, $\sigma_- = |g\rangle\langle e|$, and $\sigma_z$, and the Hamiltonian separating these energy states in the $i$th atom is $\tfrac{\omega_0}{2}\,\sigma_z^{(i)}$. With $N$ independent atoms each subject to this energy gap, the total atomic Hamiltonian is thus $H_A = \omega_0\, S_z$, with total spin operators $S_z = \tfrac{1}{2}\sum_{i=1}^{N} \sigma_z^{(i)}$ and $S_{\pm} = \sum_{i=1}^{N} \sigma_{\pm}^{(i)}$.
Similarly, in a free field with no modal restrictions, creation and annihilation operators dictate the presence of photons in each mode, characterized by a wave number, a polarization, and a frequency. If the dynamics occur within a sufficiently small cavity, only one mode (the cavity's resonant mode) will couple to the atoms, thus the field Hamiltonian simplifies to $H_F = \omega\, a^{\dagger}a$, just as in the Jaynes-Cummings and Dicke models.
[Figure: Schematic of the Tavis–Cummings physical model, representing an ensemble of two-level atoms interacting symmetrically with a single-mode photonic field, isolated within a cavity; the atomic level separation, the cavity's resonant frequency, and the atom-field coupling strength are indicated.]
Finally, the interaction between the atoms and the field is determined by the atomic dipole, rendered quantum mechanically as an operator, together with the similarly expressed electric field at the atoms' centers (assuming the field is the same at each atom's position), giving a term which acts on both the qubit and bosonic degrees of freedom. The dipole operator couples the excited and ground states of each atom, while the electric free-field solution is:
, which at a static point evaluates as:
, thus the interaction Hamiltonian expands as
.
Here, $\lambda$ specifies the coupling strength of the total dipole to the electric field mode, functioning as a Rabi frequency that scales with ensemble size due to the Pythagorean addition of single-atom dipoles. Then, in the rotating frame, the interaction results in corotating terms $a\,S_+$ (representing photon absorption causing atomic excitation) and $a^{\dagger}S_-$ (representing spontaneous emission), and counterrotating terms $a\,S_-$ and $a^{\dagger}S_+$ (representing second-order effects like self-interaction and Lamb shifts). Close to resonance in a weak field, the corotating terms accumulate phase very slowly, while the counterrotating terms accumulate phase too fast to significantly affect time-ordered integrals, thus the rotating-wave approximation allows the counterrotating terms to be dropped in the rotating frame. The cavity permits only one field mode with energy sufficiently close to the Bohr energy, so the final form of the interaction Hamiltonian is $H_{AF} = \lambda\left(a\,S_+\, e^{i\delta t} + a^{\dagger}S_-\, e^{-i\delta t}\right)$ for dephasing $\delta = \omega_0 - \omega$.
In total, the Tavis–Cummings Hamiltonian includes the atomic and photonic self-energies and the atom-field interaction (with $\hbar = 1$):
$H = H_A + H_F + H_{AF}$,
$H_A = \omega_0\, S_z$,
$H_F = \omega\, a^{\dagger} a$,
$H_{AF} = \lambda\left(a\, S_+ + a^{\dagger} S_-\right)$.
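A minimal sketch constructing this Hamiltonian numerically with the open-source QuTiP library; the Fock-space cutoff, atom number, and parameter values are arbitrary choices for the example:

```python
from qutip import tensor, qeye, destroy, sigmaz, sigmap

N_ph, N_at = 6, 3   # photon cutoff, number of two-level atoms
w0 = wc = 1.0       # Bohr and cavity frequencies (on resonance)
lam = 0.05          # collective atom-field coupling strength

def embed(op_single, k):
    """Place a single-atom operator on atom k; the cavity mode comes first."""
    ops = [qeye(N_ph)] + [qeye(2)] * N_at
    ops[k + 1] = op_single
    return tensor(ops)

a = tensor([destroy(N_ph)] + [qeye(2)] * N_at)      # cavity annihilation
Sz = sum(0.5 * embed(sigmaz(), k) for k in range(N_at))
Sp = sum(embed(sigmap(), k) for k in range(N_at))   # collective raising
Sm = Sp.dag()                                       # collective lowering

H = w0 * Sz + wc * a.dag() * a + lam * (a.dag() * Sm + a * Sp)
```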
Symmetries
The Tavis–Cummings model as described above exhibits two symmetries, arising from the Hamiltonian's commutation with the excitation number $\hat{N} = a^{\dagger}a + S_z$ (up to an additive constant) and the angular momentum magnitude $S^2$. Since $[H, \hat{N}] = [H, S^2] = 0$, it is possible to find simultaneous eigenstates $|E, c, j\rangle$ such that:
$H\,|E, c, j\rangle = E\,|E, c, j\rangle$,
$\hat{N}\,|E, c, j\rangle = c\,|E, c, j\rangle$,
$S^{2}\,|E, c, j\rangle = j(j+1)\,|E, c, j\rangle$.
The quantum number $j$ is bounded by $|m| \le j \le N/2$, but due to the infinity of Fock space, the excitation number is unbounded above, unlike angular momentum projection quantum numbers. Just as the Jaynes-Cummings Hamiltonian block-diagonalizes into infinite blocks of constant excitation number, the Tavis–Cummings Hamiltonian block-diagonalizes into infinite blocks of size up to $N + 1$ with constant excitation number $c$, and within these larger blocks, further block-diagonalizes into (usually degenerate) blocks of size $\min(2j, c) + 1$, with constant cooperation number $j$. The size of each of these smallest blocks (irreps of SU(2)) determines the bounds of the final quantum number $k$ that specifies the eigenenergy, $0 \le k \le \min(2j, c)$, with $k = 0$ signifying the ground state of each irrep, and $k = \min(2j, c)$ the maximally excited state.
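Continuing the QuTiP sketch above, the excitation-number symmetry can be verified numerically (the additive constant in the excitation operator is dropped, since a constant cannot affect the commutator):

```python
# total excitations: photons plus excited atoms, up to an additive constant
N_exc = a.dag() * a + Sz
print((H * N_exc - N_exc * H).norm())  # ~1e-15, so H conserves excitations
```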
Dynamics
Under the simplifications of a real coupling $\lambda$ and quasistatically small detuning $\delta$, the Hamiltonian becomes $H = \omega_0 S_z + \omega\, a^{\dagger}a + \lambda\,(a\,S_+ + a^{\dagger}S_-)$, whose matrix elements one can express in a joint Dicke and Fock basis $|j, m\rangle \otimes |n\rangle$ such that $S_z\,|j, m\rangle = m\,|j, m\rangle$ and $a^{\dagger}a\,|n\rangle = n\,|n\rangle$. Necessarily, $m + n$ is conserved, so the matrix elements are as follows:
$\langle j, m; n|\,H\,|j, m; n\rangle = \omega_0\, m + \omega\, n$,
$\langle j, m+1; n-1|\,H\,|j, m; n\rangle = \lambda\,\sqrt{n}\,\sqrt{(j - m)(j + m + 1)}$,
and the matrix element vanishes if $\Delta j \neq 0$, $\Delta(m + n) \neq 0$, or $|\Delta m| > 1$.
From these elements, one can express Schrödinger equations of motion that demonstrate the photon field's ability to mediate entanglement formation between atoms without direct atom-atom interactions; the fine-tuned, multivariate dependence on the quantum numbers demonstrates the difficulty of solving the Tavis–Cummings model's eigensystem. Here, a few approximate methods, and an exact solution involving Stark shifts and Kerr nonlinearities, follow.
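Because of the block structure, exact numerical diagonalization is nonetheless cheap: each symmetry block is a small tridiagonal matrix. The sketch below builds one block at fixed cooperation number j and excitation number c from the matrix elements above (ħ = 1; the basis ordering and symbol names are choices made for the example):

```python
import numpy as np

def tc_block(j, c, w0=1.0, wc=1.0, lam=0.05):
    """One Tavis-Cummings block: states |j, m = -j + k> with n = c - k photons."""
    dim = int(min(2 * j, c)) + 1
    H = np.zeros((dim, dim))
    for k in range(dim):
        m, n = -j + k, c - k                  # spin projection, photon number
        H[k, k] = w0 * m + wc * n
        if k + 1 < dim:
            # <k+1| H |k> = lam * sqrt(n) * sqrt((j - m)(j + m + 1))
            H[k + 1, k] = H[k, k + 1] = lam * np.sqrt(n * (j - m) * (j + m + 1))
    return H

energies = np.linalg.eigvalsh(tc_block(j=1.5, c=4))  # the 4 eigenvalues of this block
```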
Spectrum approximations
In 1969, Tavis and Cummings found approximate eigenenergies and eigenstates for a nondimensionalized Hamiltonian in three different regimes of approximation: first, near the ground state of each irrep; second, when each atom sees a highly saturated "averaged" field; third, with sparse excitations. In all solutions, eigenstates are expressed as superpositions of Dicke-Fock joint states, with coefficients solved from the Hamiltonian spectrum.
For excitations close to the ground state of each irrep, a differential approach provides the eigenvalues in terms of the average photon number and a differential coefficient.
For an averaged photon field in a large, photon-rich system, the off-diagonal matrix elements above are replaced by their field-averaged values, and each two-level atom interacts independently with a photon field that conveys no information about the other atoms. For each of these atoms, there are two dressed eigenstates coupled to "pseudophoton" number states, each with its own single-atom eigenenergy. Superpositions of these single-atom dressed states construct the full eigenstates according to the addition of angular momenta, mediated by Clebsch-Gordan coefficients, and the full eigenvalues are approximately the sums of the single-atom eigenenergies.
When instead the atomic ensemble, and not the photon field, is averaged, the off-diagonal elements are approximated by smearing over the atomic degrees of freedom. Eigenstates are constructed as weighted superpositions of photon number states coupled to a spin state, with correspondingly modified eigenvalues.
Bethe ansatz
In 1996, Nikolay Bogoliubov (son of the 1992 Dirac Medalist of the same name), Robin Bullough, and Jussi Timonen found that adding quadratic excitation-dependent terms to the Tavis–Cummings Hamiltonian allowed for an exact analytic eigensystem. In the limit where these Kerr and Stark shifts vanish, this solution can recover the eigensystem of the unmodified Tavis–Cummings system.
Including a Kerr term or, equivalently, a Stark term results in a new Hamiltonian which obeys the same operator symmetries (above) as the unmodified Tavis–Cummings Hamiltonian, and which reduces to Tavis–Cummings in the limit of vanishing shifts. Thus the transformation preserves the dynamics and shares joint eigenvectors with the untransformed Hamiltonian. The transformed Hamiltonian, written in terms of two new parameters, is integrable using quantum inverse methods. Separating the dynamics into two complex-parametrized operator matrices (that is, matrices whose elements are operators), one acting on the bosonic degrees of freedom and the other on the spin degrees of freedom, produces a monodromy matrix whose determinant, trace, and trace's parametric derivative are proportional to the model's conserved quantities. Manipulating the monodromy matrix allows its spectral parameter to determine the Hamiltonian eigenstates and eigenenergies as the complex roots of a Bethe ansatz. Each root must satisfy the following Bethe equations:
,
then the Hamiltonian eigenvalues arise from the roots as:
.
In the limit of vanishing Kerr and Stark shifts, the above Bethe equations and the eigenenergies simplify, recovering the spectrum of the unmodified Tavis–Cummings model. Eigenstates follow similarly.
Experiments
The Tavis–Cummings model has seen numerous experimental implementations verifying its phenomena, including several since 2009 virtually realizing the model on quantum computational platforms like superconducting qubits and circuit QED. Such experiments utilize the Tavis–Cummings Hamiltonian's ability to generate superradiance, wherein the artificial atoms emit and absorb light from the field coherently, as though they were a single atom with a large total angular momentum. Superradiance, the scaling of the collective dipole-interaction strength with ensemble size, and other features allow Tavis–Cummings-type dynamics to manifest quantum-computationally and metrologically desirable states, such as Dicke states (joint eigenstates of $S^2$ and $S_z$) through global interactions, as was explored in the 2003 paper by Tessier et al.
One realization by Tuchman et al., in 2006, used a stream of ultracold rubidium-87 atoms and observed a cooperation number of 12% of its maximum possible value, indicating very high interatomic coherence relative to the experimental capabilities of the time. This experiment also confirmed the collective scaling of the dipole interaction; at the level of single atoms, dipole interactions are much weaker than monopolar interactions, so the ability of Tavis–Cummings dynamics to counteract the weakness of the single-atom dipole coupling through quadrature addition of dipoles makes neutral-atom control more feasible.
Circuit QED
A seminal result from Fink et al. in 2009 involved three transmons as virtual "atoms" with qubit-dependent Bohr frequencies for controllable Josephson energy and experimentally determined single-electron charging energy, inside a microwave waveguide resonator which supplies a standing electric field at gigahertz frequency. To ensure symmetric coupling of the qubits to the field, each transmon was placed at an antinode of the standing wave, and to best conserve excitations by minimizing photon leakage, the resonator was kept ultracold (20 mK), which ensured a high quality factor. Manipulating each qubit's Bohr frequency so that only one qubit resonated with the field, the team measured each single-qubit coupling strength, then reintroduced the other qubits to compare the total coupling strength with the average strength of the resonating qubits, confirming the expected √N scaling of the collective coupling. In addition, the team observed bright and dark states, characterized by high emission rates and zero emission respectively, for 2 and 3 active qubits, with the 3-qubit bright and dark states each being degenerate.
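The √N collective enhancement reported by Fink et al. can be reproduced numerically from the model itself: on resonance, the two one-excitation eigenstates of the Tavis–Cummings Hamiltonian are split by 2g√N. Below is a rough sketch reusing the QuTiP construction above; the parameter values are illustrative assumptions, not the experimental values.

```python
import numpy as np
from qutip import tensor, qeye, destroy, jmat

def doublet_splitting(N, g=0.05, w=1.0, n_max=4):
    """Splitting of the one-excitation dressed doublet for N atoms."""
    j = N / 2
    a  = tensor(destroy(n_max), qeye(int(2 * j + 1)))
    Jz = tensor(qeye(n_max), jmat(j, 'z'))
    Jm = tensor(qeye(n_max), jmat(j, '-'))
    H = w * a.dag() * a + w * Jz + g * (a.dag() * Jm + a * Jm.dag())
    e = np.sort(H.eigenenergies())
    # e[0] is the ground state; e[1], e[2] form the one-excitation doublet
    return e[2] - e[1]

for N in (1, 2, 3):
    print(N, doublet_splitting(N) / (2 * 0.05 * np.sqrt(N)))  # ratios ~ 1.0
```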
In addition to superconducting qubits, semiconducting qubits have also been a platform for Tavis–Cummings dynamics, such as in a 2018 investigation by van Woerkom et al. at ETH Zürich, in which two qubits constructed from double quantum dots (DQDs) were coupled to a SQUID resonator, with the two DQDs separated by a distance of 42 μm. The micrometer regime is a far greater distance than that over which semiconducting qubits had previously achieved entanglement, and the difficulty of long-range interactions in semiconducting qubits was at the time a major weakness compared to other quantum computing platforms, for which the Tavis–Cummings model's ability to form entanglement through global atom-field interactions is one solution. By observing the reflection amplitude of field waves between the SQUID array and the DQDs, the team isolated the photon number states as they smoothly coupled to the first qubit to form superpositional Jaynes–Cummings eigenstates when the first qubit was tuned to the resonator. Similarly, they observed these hybrid states shift into a pair of bright states and a dark state (which did not interact with the light, and thus did not cause a dip in the reflection amplitude) when the second qubit was tuned to resonance. In addition to physical photons mediating the long-range entanglement on resonance, the team found similar energy shifts at detunings signalling qubit interactions with "virtual" photons, measured by the phase shift of the field rather than the reflection amplitude.
Limitations
Recent investigations by Johnson, Blaha, et al. have verified and explained two major regimes where the Tavis–Cummings model fails to predict physical reality, both following from system parameters approaching or exceeding the free spectral range set by the waveguide length. The violating quantities are the coupling strength and the rate of photon loss from atomic emissions into non-cavity modes, the latter set by the single-atom spontaneous-emission rate into all modes. When both quantities are small compared to the free spectral range, the Tavis–Cummings model describes the system well, since the atom-light interactions are suppressed for all but one mode, and the field intensity is not significantly attenuated by atomic emissions into other modes. However, when the coupling strength becomes comparable to the free spectral range, the coupling enters the so-called "superstrong" regime and atom-light interactions must consider multiple field modes. More severely, when the rate of photon loss becomes comparable to the free spectral range, the atomic ensemble becomes optically thick, and the model must consider time-ordered interactions between the field and each atom, as atoms at the front of the ensemble will experience a more intense photonic wavefront than those at the back, due to the frontal atoms' absorptions and non-cavity-mode emissions. This has the effect of interactions and correlations cascading successively across multiple atoms. As photons cross the waveguide and interact sequentially with the atoms in the ensemble, they accumulate phase at phenomenon-dependent rates. The total phase accumulated by electromagnetic waves in one round trip of the waveguide may manifest resonances causing high transmission rates under specific dephasings and emission rates, and the locations of these resonances differ between the standard Tavis–Cummings model and the team's proposed "cascade" model. Using an ensemble of ultracold cesium atoms surrounding a nanofiber section of a 30 m fiber-ring resonator, the team coupled the atoms to the light passing through the nanofiber via an evanescent field, measuring the light's transmission as the coupling strength and emission rate were varied into the superstrong coupling and cascade regimes. The data from the nanofiber–cesium experiment agreed better with the cascade model's predictions than with the Tavis–Cummings model's, specifically in the parametrically violating regimes above.
References
Quantum models
Quantum optics | Tavis–Cummings model | [
"Physics"
] | 3,293 | [
"Quantum models",
"Quantum optics",
"Quantum mechanics"
] |
75,453,455 | https://en.wikipedia.org/wiki/Cyclin%20E/Cdk2 | The Cyclin E/Cdk2 complex is a structure composed of two proteins, cyclin E and cyclin-dependent kinase 2 (Cdk2). Similar to other cyclin/Cdk complexes, the cyclin E/Cdk2 dimer plays a crucial role in regulating the cell cycle, with this specific complex peaking in activity during the G1/S transition. Once the cyclin and Cdk subunits join together, the complex gets activated, allowing it to phosphorylate and bind to downstream proteins to ultimately promote cell cycle progression. Although cyclin E can bind to other Cdk proteins, its primary binding partner is Cdk2, and the majority of cyclin E activity occurs when it exists as the cyclin E/Cdk2 complex.
G1/S transition
Across eukaryotic cell types, the cell cycle is highly conserved, and the cyclin/Cdk complexes are consistently essential in driving the entire process forwards. Shortly before the end of G1 phase, cyclin E joins with Cdk2 to activate Cdk2's serine-threonine kinase activity and thus promote entry into S phase.
Eukaryotic cells possess two types of cyclin E, cyclin E1 and cyclin E2, with the protein sequences sharing 69.3% similarity in humans despite being encoded by two different genes. While there is significant overlap in function between the two cyclin Es, there are distinct differences in the roles and regulation of each cyclin E type. For example, in Xenopus laevis embryos only cyclin E1 is necessary for viability.
In living cells, over-expression (an excess amount) of either cyclin E type results in an earlier activation of the cyclin E/Cdk2 complex and the subsequent shortening of G1 phase and thus accelerated movement into S phase. The cyclin E/Cdk2 complex is not only important in regulating the G1/S transition, but in fact necessary and sufficient, as cells lacking functional cyclin E are unable to enter S phase, remaining forever arrested in G1.
Complex activation
The cyclin E protein contains a section called the cyclin box, which interacts with the PSTAIRE helix on Cdk2 to enact a conformational change in Cdk2's T loop. The resulting exposure of Cdk2's catalytic site enables Cdk-activating kinase (CAK) to phosphorylate Cdk2, allowing full activation of the cyclin E/Cdk2 complex. Once the protein dimer is formed and activated, it phosphorylates several important proteins including "proteins involved in centrosome duplication (NPM, CP110, Mps1), DNA synthesis (Cdt1), DNA repair (Brca1, Ku70), histone gene transcription (p220/NPAT, CBP/p300, HIRA) and Cdk inhibitors p21Waf1/Cip1 or p27Kip1." The complex interacts with its substrates via two distinct regions of the cyclin E protein: the MRAIL and VDCLE domains. MRAIL is located at the N-terminus of cyclin E's cyclin box and interacts with proteins containing an RLX sequence (arginine-leucine-any amino acid) such as Rb and p27Kip1. VDCLE is located at cyclin E's C-terminal region and interacts with proteins of the retinoblastoma family, including Rb1, p107, and p130.
Localization
Cyclin E is predominantly found in the cell nucleus, and although it shuttles between the nucleus and the cytoplasm, it typically appears as a nuclear protein in images as its nuclear import is more rapid than its export. Cyclin E's nuclear localization sequence (NLS) allows the cyclin E/Cdk2 complex to readily enter the nucleus, although other mechanisms are believed to help the complex localize to the region as well. Cyclin E also contains a centrosome localization sequence (CLS) that plays a key role in allowing the cyclin E/Cdk2 complex to control centrosome duplication during early S phase.
Retinoblastoma protein
Background–phosphorylation
The retinoblastoma tumor suppressor protein (Rb) plays a key regulatory role in several cellular activities, such as the G1 restriction checkpoint, the DNA damage checkpoint, cell cycle exit, and cellular differentiation. As its full name suggests, cells containing mutations in pathways upstream of Rb, or (more rarely) in the protein itself, are often cancerous. In fact, the majority of human cancer cells contain mutations in proteins responsible for phosphorylating Rb, such as deletions (p16) or over-expressions (cyclin D, Cdk4, Cdk6).
Within its structure, Rb contains 16 possible sites for phosphorylation by other proteins. Surprisingly, however, it exists in only 3 possible states: un-phosphorylated (no sites phosphorylated), mono-phosphorylated (one site phosphorylated), or hyper-phosphorylated (all available sites phosphorylated). In G0 phase, Rb exists solely in its un-phosphorylated form, but in early G1 phase, the cyclin D:Cdk4/6 complex adds one phosphate group, and the protein remains in its mono-phosphorylated form until late G1, when it is rapidly hyper-phosphorylated by the cyclin E/Cdk2 complex.
Cell cycle progression
The key mechanism through which the cyclin E/Cdk2 complex is able to promote S phase progression is through Rb and E2F transcription factors. Transcription factors (TF) regulate the rate at which specific target genes are transcribed from DNA to RNA, i.e. transcription. At the end of G1, cells move through the restriction point–essentially "the point of no return" as cells that pass through are irreversibly committed to division and extracellular signals are no longer required for cell cycle progression. The rapid accumulation and activation of the cyclin E/Cdk 2 complex through positive feedback loops drives the cell forward through G1.
After phosphorylation by cyclin D:Cdk4/6, mono-phosphorylated Rb binds to E2F family proteins, preventing their target genes from being transcribed; interestingly, one of the target genes is cyclin E. The rate-limiting, switch-like step that initially activates the cyclin E/Cdk2 complex after Rb mono-phosphorylation is currently unknown, but it is hypothesized that the activation is regulated by an unidentified metabolic sensor, such that once the necessary metabolic threshold has been exceeded, the sensor activates cyclin E/Cdk2. The metabolic sensor's activation of the cyclin E/Cdk2 complex initiates the process of Rb hyper-phosphorylation.
Mono-phosphorylated Rb inactivates E2F TFs, but hyper-phosphorylation of Rb results in Rb inactivation, causing the release of E2F proteins from the Rb binding cleft and consequent activation of the E2F family proteins to initiate transcription of their target genes. As a result, more cyclin E is transcribed and more cyclin E/Cdk2 complex is formed and activated. Thus, since cyclin E/Cdk2 activates its transcription factors, cyclin E/Cdk2 can facilitate its own activation, leading to a rapid accumulation of the complex and simultaneous rapid hyper-phosphorylation (i.e. inactivation) of Rb. The rapid inactivation of Rb causes a sudden, switch-like transition through the late G1 restriction point and into S phase. In summary, cyclin E/Cdk2's inactivation of Rb activates E2F, which activates more cyclin E (and thus the cyclin E/Cdk2 complex), creating a strong positive feedback loop that results in sudden inactivation of Rb and the irreversible push out of G1 and into S phase.
References
Cell cycle regulators | Cyclin E/Cdk2 | [
"Chemistry"
] | 1,792 | [
"Cell cycle regulators",
"Signal transduction"
] |
75,459,758 | https://en.wikipedia.org/wiki/Cyanosulfidic%20prebiotic%20synthesis | Cyanosulfidic prebiotic synthesis is a proposed mechanism for the origin of the key chemical building blocks of life. It involves a systems chemistry approach to synthesize the precursors of amino acids, ribonucleotides, and lipids using the same starting reagents and largely the same plausible early Earth conditions. Cyanosulfidic prebiotic synthesis was developed by John Sutherland and co-workers at the Laboratory of Molecular Biology in Cambridge, England.
Challenges
Prebiotic synthesis of amino acids, nucleobases, lipids, and other building blocks of protocells and metabolisms is still poorly understood. Reactions have been proposed that produce individual components, such as the Strecker synthesis of amino acids, the formose reaction for the production of sugars, and prebiotic syntheses for the production of nucleobases. These syntheses often rely on different starting reagents and different conditions (temperature, pH, catalysts, etc.), and often interfere with each other. These challenges have made determining the conditions for the origin of life difficult. Researchers have turned to systems chemistry approaches to help overcome some of these challenges. Systems chemistry approaches form multiple products from a single synthesis under the same conditions and tend to be more similar to biological processes in that they exhibit emergent properties, self-organization, and autocatalysis. Cyanosulfidic prebiotic synthesis is a systems chemistry approach.
Mechanism
The starting reactants for these reactions are hydrogen cyanide (HCN), HCN derivatives, and acetylene. Both are hypothesized to have been present on the early Earth. The reactions occur at a relatively moderate temperature of 35 °C and in anoxic (oxygen-free) conditions. The early Earth was anoxic before the Great Oxidation Event, making these conditions plausible. In the laboratory synthesis, a neutral phosphate buffer was used to maintain a stable, neutral pH. Hydrogen sulfide (H2S) is used as a reductant in these reactions. The reactions are driven forward by ultraviolet radiation and catalyzed by Cu(I)-Cu(II) photoredox cycling. Some compounds in the system perform multiple roles. For example, phosphate serves as a buffer to maintain a neutral pH, acts as a catalyst in the synthesis of 2-aminooxazole and urea, and serves as a reagent in the formation of glycerol-3-phosphate and ribonucleotides. The mechanisms involved in these reactions include reductive homologation processes that build larger, more complex molecules from the simple starting materials. The products of this reaction network include the precursors of many amino acids, the precursors of lipids, and ribonucleotides. It is worth noting that most of the prebiotic monomers are not synthesized in their entirety by these reactions, only their precursors; the amino acids, for instance, would then be produced from these precursors by Strecker synthesis reactions. Cyanosulfidic chemistry does, however, produce the precursors of both purine and pyrimidine ribonucleotides simultaneously. Many of the compounds produced are also intermediates in one-carbon metabolism.
Geochemical context
Sutherland and collaborators proposed a geochemical scenario to argue that cyanosulfidic synthesis was a plausible process on the early Earth. Their scenario starts with a meteorite impact that leads to the production of HCN and phosphate. The meteorite fragments also supply the necessary sulfide for the reaction. As ponds and lakes containing these reagents experience wet-dry cycles, ferrocyanide, sodium, and potassium salts precipitate out of solution into evaporites, concentrating and storing reactants for future chemistry. These evaporites can then be thermally altered through additional impacts or geothermal heating, producing all the necessary components for the proposed syntheses. Rain and runoff create streams that transport compounds along geochemical gradients, introducing new reactants along the way and causing new syntheses to occur. The streams are also exposed to ultraviolet radiation, providing energy for the reactions. The conditions described here support an evaporative lake or terrestrial hydrothermal pond scenario for the origin of life. The proposed geochemical scenario also relies on flow chemistry concepts to introduce new reactants throughout the process so that additional chemical reactions and syntheses occur.
Limitations
Cyanosulfidic chemistry has several limitations. While the products are all formed from the same starting materials, many of the reactions require the periodic delivery of new reagents, which complicates the syntheses. The chemical synthesis is therefore not truly "one-pot" chemistry, which would require all reactants to be provided at the beginning with no further alterations. Sutherland and colleagues argue that a "flow-chemistry" approach, featuring the movement of compounds through a stream experiencing different geochemical conditions, makes their proposed system plausible.
Variants
Another challenge of the cyanosulfidic prebiotic synthesis approach is that the reductant, sulfide, has low solubility in water except under alkaline conditions, and the main catalyst, copper, has a relatively low abundance in Earth's crust. To address these problems, an alternative scheme for prebiotic systems chemistry called cyanosulfitic prebiotic synthesis has been proposed. This set of reactions relies on sulfite instead of sulfide, and on ferrocyanide to catalyze reactions when exposed to ultraviolet light. The products of these reactions arise from chemistry similar to the cyanosulfidic mechanisms, such as reductive homologation, and include similar products such as amino acid precursors as well as sugars and hydroxy acids. Both sulfite (from sulfur dioxide released by volcanoes) and ferrous iron (Fe(II)) are hypothesized to have been present in high quantities on the early Earth, suggesting that this is potentially a much more feasible set of reactions.
References
Prebiotic chemistry | Cyanosulfidic prebiotic synthesis | [
"Chemistry",
"Biology"
] | 1,208 | [
"Biological hypotheses",
"Origin of life",
"Prebiotic chemistry"
] |
75,460,161 | https://en.wikipedia.org/wiki/Linear%20biochemical%20pathway | A linear biochemical pathway is a chain of enzyme-catalyzed reaction steps where the product of one reaction becomes the substrate for the next reaction. The molecules progress through the pathway sequentially from the starting substrate to the final product. Each step in the pathway is usually facilitated by a different specific enzyme that catalyzes the chemical transformation. An example includes DNA replication, which connects the starting substrate and the end product in a straightforward sequence.
Biological cells consume nutrients to sustain life. These nutrients are broken down to smaller molecules. Some of the molecules are used in the cells for various biological functions, and others are reassembled into more complex structures required for life. The breakdown and reassembly of nutrients is called metabolism. An individual cell contains thousands of different kinds of small molecules, such as sugars, lipids, and amino acids. The interconversion of these molecules is carried out by catalysts called enzymes. For example, the most widely studied bacterium, E. coli strain K-12, is able to produce about 2,338 metabolic enzymes. These enzymes collectively form a complex web of reactions comprising pathways by which substrates (including nutrients and intermediates) are converted to products (other intermediates and end-products).
Consider a four-step pathway with intermediates $S_1$, $S_2$ and $S_3$:

$$X_o \xrightarrow{e_1} S_1 \xrightarrow{e_2} S_2 \xrightarrow{e_3} S_3 \xrightarrow{e_4} X_1$$

To sustain a steady state, the boundary species $X_o$ and $X_1$ are fixed. Each step is catalyzed by an enzyme, $e_i$.
Linear pathways follow a step-by-step sequence, where each enzymatic reaction results in the transformation of a substrate into an intermediate product. This intermediate is processed by subsequent enzymes until the final product is synthesized.
A linear pathway can be studied in various ways. Multiple computer simulations can be run to try to understand the pathway's behavior. Another way to understand the properties of a linear pathway is to take a more analytical approach. Analytical solutions can be derived for the steady-state if simple mass-action kinetics are assumed. Analytical solutions for the steady-state when assuming Michaelis-Menten kinetics can be obtained but are quite often avoided. Instead, such models are linearized. The three approaches that are usually used are therefore:
Computer simulation
Analytical solutions using a linear mathematical model
Linearization of a non-linear model
Computer simulation
It is possible to build a computer simulation of a linear biochemical pathway. This can be done by building a simple model that describes each intermediate through a differential equation. The differential equations can be written by invoking mass conservation. For example, for the linear pathway:

$$X_o \rightarrow S_1 \rightarrow S_2 \rightarrow S_3 \rightarrow X_1$$

where $X_o$ and $X_1$ are fixed boundary species, the non-fixed intermediate $S_1$ can be described using the differential equation:

$$\frac{dS_1}{dt} = v_1 - v_2$$

The rate of change of the non-fixed intermediates $S_2$ and $S_3$ can be written in the same way:

$$\frac{dS_2}{dt} = v_2 - v_3, \qquad \frac{dS_3}{dt} = v_3 - v_4$$

To run a simulation, the rates $v_i$ need to be defined. If mass-action kinetics are assumed for the reaction rates, so that $v_i = k_i S_{i-1}$, then the differential equation for $S_1$ can be written as:

$$\frac{dS_1}{dt} = k_1 X_o - k_2 S_1$$

If values are assigned to the rate constants, $k_i$, and the fixed species $X_o$ and $X_1$, the differential equations can be solved.
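As a concrete illustration, the four-step pathway above can be integrated numerically with SciPy. This is a minimal sketch assuming irreversible mass-action rates $v_i = k_i S_{i-1}$ and hypothetical values for the rate constants and the fixed boundary species; the intermediates relax to a steady state at which all four rates equal the pathway flux.

```python
from scipy.integrate import solve_ivp

# Hypothetical fixed boundary concentration and rate constants k1..k4
Xo = 10.0
k1, k2, k3, k4 = 1.0, 0.8, 0.6, 0.4

def odes(t, s):
    S1, S2, S3 = s
    v1 = k1 * Xo   # Xo -> S1
    v2 = k2 * S1   # S1 -> S2
    v3 = k3 * S2   # S2 -> S3
    v4 = k4 * S3   # S3 -> X1
    return [v1 - v2, v2 - v3, v3 - v4]   # dS1/dt, dS2/dt, dS3/dt

sol = solve_ivp(odes, (0.0, 100.0), [0.0, 0.0, 0.0], rtol=1e-8)
print(sol.y[:, -1])  # approaches [k1*Xo/k2, k1*Xo/k3, k1*Xo/k4]
```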
Analytical solutions
Computer simulations can only yield so much insight, as one would be required to run simulations on a wide range of parameter values, which can be unwieldy. A generally more powerful way to understand the properties of a model is to solve the differential equations analytically.
Analytical solutions are possible if simple mass-action kinetics on each reaction step are assumed:

$$v = k_1 S - k_2 P$$

where $k_1$ and $k_2$ are the forward and reverse rate-constants, respectively, $S$ is the substrate and $P$ the product. If the equilibrium constant for this reaction is:

$$q = \frac{k_1}{k_2}$$

the mass-action kinetic equation can be modified to be:

$$v = k_1 \left( S - \frac{P}{q} \right)$$

Given the reaction rates, the differential equations describing the rates of change of the species can be written. For example, for the three-step pathway $X_o \rightarrow S_1 \rightarrow S_2 \rightarrow X_1$, the rate of change of $S_1$ will equal:

$$\frac{dS_1}{dt} = k_1 \left( X_o - \frac{S_1}{q_1} \right) - k_2 \left( S_1 - \frac{S_2}{q_2} \right)$$

By setting the differential equations to zero, the steady-state concentrations for the species can be derived. From here, the pathway flux equation can be determined. For the three-step pathway, the steady-state concentrations of $S_1$ and $S_2$ are given by:

$$S_1 = \frac{q_1 \left( X_o q_2 q_3/k_2 + X_o q_3/k_3 + X_1/k_1 \right)}{q_1 q_2 q_3/k_1 + q_2 q_3/k_2 + q_3/k_3}, \qquad S_2 = \frac{q_2 \left( X_o q_1 q_3/k_3 + X_1 q_1/k_1 + X_1/k_2 \right)}{q_1 q_2 q_3/k_1 + q_2 q_3/k_2 + q_3/k_3}$$

Inserting either $S_1$ or $S_2$ into one of the rate laws will give the steady-state pathway flux, $J$:

$$J = \frac{X_o q_1 q_2 q_3 - X_1}{q_1 q_2 q_3/k_1 + q_2 q_3/k_2 + q_3/k_3}$$

A pattern can be seen in this equation such that, in general, for a linear pathway of $n$ steps, the steady-state pathway flux is given by:

$$J = \frac{X_o \prod_{i=1}^{n} q_i - X_1}{\sum_{i=1}^{n} \left( \dfrac{1}{k_i} \prod_{j=i}^{n} q_j \right)}$$

Note that the pathway flux is a function of all the kinetic and thermodynamic parameters. This means there is no single parameter that determines the flux completely. If $k_i$ is equated with the activity of enzyme $e_i$, then every enzyme in the pathway has some influence over the flux.
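The closed-form flux above can be checked symbolically. A sketch with SymPy for the three-step chain, using the reversible rate laws $v_i = k_i (S_{i-1} - S_i/q_i)$ as defined above (the print statement yields 0 if the closed form matches the solved steady state):

```python
import sympy as sp

k1, k2, k3, q1, q2, q3, Xo, X1, S1, S2 = sp.symbols(
    'k1 k2 k3 q1 q2 q3 Xo X1 S1 S2', positive=True)

# Reversible mass-action rate laws for Xo -> S1 -> S2 -> X1
v1 = k1 * (Xo - S1 / q1)
v2 = k2 * (S1 - S2 / q2)
v3 = k3 * (S2 - X1 / q3)

# Steady state: dS1/dt = v1 - v2 = 0 and dS2/dt = v2 - v3 = 0
ss = sp.solve([sp.Eq(v1, v2), sp.Eq(v2, v3)], [S1, S2], dict=True)[0]
J = sp.simplify(v1.subs(ss))

J_closed = (Xo*q1*q2*q3 - X1) / (q1*q2*q3/k1 + q2*q3/k2 + q3/k3)
print(sp.simplify(J - J_closed))  # prints 0
```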
Linearized model: deriving control coefficients
Given the flux expression, it is possible to derive the flux control coefficients by differentiation and scaling of the flux expression. This can be done for the general case of $n$ steps:

$$C^J_i = \frac{\partial J}{\partial e_i} \frac{e_i}{J} = \frac{\dfrac{1}{k_i} \prod_{j=i}^{n} q_j}{\sum_{m=1}^{n} \left( \dfrac{1}{k_m} \prod_{j=m}^{n} q_j \right)}$$
This result yields two corollaries:
The sum of the flux control coefficients is one. This confirms the summation theorem.
The value of an individual flux control coefficient in a linear reaction chain is greater than zero and less than one:

$$0 < C^J_i < 1$$
For the three-step linear chain, the flux control coefficients are given by:

$$C^J_1 = \frac{q_1 q_2 q_3}{k_1 D}, \qquad C^J_2 = \frac{q_2 q_3}{k_2 D}, \qquad C^J_3 = \frac{q_3}{k_3 D}$$

where $D$ is given by:

$$D = \frac{q_1 q_2 q_3}{k_1} + \frac{q_2 q_3}{k_2} + \frac{q_3}{k_3}$$
Given these results, there are some patterns:
If all three steps have large equilibrium constants, that is $q_i \gg 1$, then $C^J_1$ tends to one and the remaining coefficients tend to zero.
If the equilibrium constants are smaller, control tends to get distributed across all three steps.
With more moderate equilibrium constants, perturbations can travel upstream as well as downstream. For example, a perturbation at the last step is better able to influence the reaction rates upstream, which results in an alteration in the steady-state flux.
An important result can be obtained if all the $k_i$ are set equal to each other. Under these conditions, the flux control coefficient is proportional to its numerator. That is:

$$C^J_i \propto \prod_{j=i}^{n} q_j$$

If it is assumed that the equilibrium constants are all greater than 1.0, then, as earlier steps have more $q$ terms in this product, earlier steps will, in general, have larger flux control coefficients. In a linear chain of reaction steps, flux control will tend to be biased towards the front of the pathway. From a metabolic engineering or drug-targeting perspective, preference should be given to targeting the earlier steps in a pathway, since they have the greatest effect on pathway flux. Note that this rule only applies to pathways without negative feedback loops.
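These tendencies can be illustrated numerically. Below is a small sketch with hypothetical rate and equilibrium constants that computes the flux control coefficients of an n-step chain by finite-difference perturbation of each $k_i$, treated as a proxy for the activity of enzyme $e_i$:

```python
import numpy as np

def flux(k, q, Xo=10.0, X1=0.0):
    """Steady-state flux of an n-step linear chain (reversible mass action)."""
    denom = sum(np.prod(q[i:]) / k[i] for i in range(len(k)))
    return (Xo * np.prod(q) - X1) / denom

k = np.array([1.0, 1.0, 1.0])   # hypothetical rate constants (enzyme activities)
q = np.array([5.0, 5.0, 5.0])   # hypothetical equilibrium constants

C = []
for i in range(len(k)):
    dk = 1e-6 * k[i]
    kp, km = k.copy(), k.copy()
    kp[i] += dk
    km[i] -= dk
    # Scaled sensitivity: (dJ/dk_i) * (k_i / J), by central differences
    C.append((flux(kp, q) - flux(km, q)) / (2 * dk) * k[i] / flux(k, q))

print(np.round(C, 3), round(sum(C), 3))  # earlier steps dominate; sum is 1
```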
References
Metabolic pathways
Biochemistry
Enzyme kinetics
Metabolism | Linear biochemical pathway | [
"Chemistry",
"Biology"
] | 1,282 | [
"Biochemistry",
"Enzyme kinetics",
"Cellular processes",
"nan",
"Metabolic pathways",
"Chemical kinetics",
"Metabolism"
] |
75,463,818 | https://en.wikipedia.org/wiki/Quasi-isodynamic%20stellarator | A quasi-isodynamic (QI) stellarator is a type of stellarator (a magnetic confinement fusion reactor) that satisfies the property of omnigeneity, avoids the potentially hazardous toroidal bootstrap current, and has minimal neoclassical transport in the collisionless regime.
Wendelstein 7-X, the largest stellarator in the world, was designed to be roughly quasi-isodynamic (QI).
In contrast to quasi-symmetric fields, exactly QI fields on flux surfaces cannot be expressed analytically. However, it has been shown that nearly-exact QI can be extremely well approximated through mathematical optimization, and that the resulting fields enjoy the aforementioned properties.
In a QI field, level curves of the magnetic field strength on a flux surface close poloidally (the short way around the torus), and not toroidally (the long way around), causing the stellarator to resemble a series of linked magnetic mirrors.
References
Fusion power
Nuclear reactors by type
Physics | Quasi-isodynamic stellarator | [
"Physics",
"Chemistry"
] | 205 | [
"Nuclear fusion",
"Fusion power",
"Plasma physics"
] |
75,464,679 | https://en.wikipedia.org/wiki/Dyakis%20dodecahedron | In geometry, the dyakis dodecahedron /ˈdʌɪəkɪsˌdəʊdɪkəˈhiːdrən/ or diploid is a variant of the deltoidal icositetrahedron with pyritohedral symmetry, transforming the kite faces into chiral quadrilaterals. The name diploid derives from the Greek word διπλάσιος (diplásios), meaning twofold since it has 2-fold symmetry along its 6 octahedral vertices. It has the same number of faces, edges, and vertices as the deltoidal icositetrahedron as they are topologically identical.
Construction
The dyakis dodecahedron can be constructed by enlarging 24 of the 48 faces of the disdyakis dodecahedron, within which it is inscribed; it thus exists as a hemihedral form of the disdyakis dodecahedron with indices {hkl}. It can be transformed into two non-regular pentagonal dodecahedra, the pyritohedron and the tetartoid. The transformation to the pyritohedron can be made by combining two adjacent trapezoids that share a long edge into one hexagonal face; the short edges of the hexagon can then be combined to finally obtain the pentagon. The transformation to the tetartoid can be made by enlarging 12 of the dyakis dodecahedron's 24 faces.
Properties
Since the quadrilaterals are chiral and non-regular, the dyakis dodecahedron is a non-uniform polyhedron, a type of polyhedron that is not vertex-transitive and does not have regular polygon faces. It is an isohedron, meaning that it is face transitive.
The dual polyhedron of a dyakis dodecahedron is the cantic snub octahedron.
In crystallography
The dyakis dodecahedron occurs in only one mineral, pyrite. Pyrite takes other forms besides the dyakis dodecahedron, including tetrahedra, octahedra, cubes and pyritohedra. Though the cube and octahedron are in the cubic crystal system, the dyakis dodecahedron and the pyritohedron are in the isometric crystal system, and the tetrahedron is in the tetrahedral crystal system. Although the dyakis dodecahedron has 3-fold axes like the pyritohedron and the cube, it does not have 4-fold axes; rather, it has order-4 vertices. When the dyakis dodecahedron is rotated 90° or 270° about an order-4 vertex, it does not coincide with itself: the order-4 vertices act as 2-fold axes, so the polyhedron looks the same as before only when rotated 180° or a full turn.
References
Polyhedra
Crystallography
Pyrite group | Dyakis dodecahedron | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 599 | [
"Crystallography",
"Condensed matter physics",
"Materials science"
] |
69,578,346 | https://en.wikipedia.org/wiki/Bismuth%20phosphide | Bismuth phosphide is a proposed inorganic compound with the chemical formula BiP. The structure of this material is unknown.
Synthesis
One route entails the reaction of sodium phosphide and bismuth trichloride in toluene (0 °C):

Na3P + BiCl3 → BiP + 3 NaCl
Another method uses tris(trimethylsilyl)phosphine in place of the sodium phosphide.
Physical properties
When heated in air, bismuth phosphide burns.
When heated in an atmosphere of carbon dioxide, a gradual volatilization of phosphorus is observed.
Chemical properties
This compound is oxidized when boiled in water.
All strong acids dissolve it.
References
Phosphides
Bismuth compounds
Semiconductors | Bismuth phosphide | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 149 | [
"Electrical resistance and conductance",
"Physical quantities",
"Semiconductors",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
69,579,497 | https://en.wikipedia.org/wiki/Ivermectin%20during%20the%20COVID-19%20pandemic | Ivermectin is an antiparasitic drug that is well established for use in animals and people. The World Health Organization (WHO), the European Medicines Agency (EMA), the United States Food and Drug Administration (FDA), and the Infectious Diseases Society of America (IDSA) all advise against using ivermectin in an attempt to treat or prevent COVID-19.
Early in the COVID-19 pandemic, laboratory research suggested ivermectin might have a role in preventing or treating COVID-19. Online misinformation campaigns and advocacy boosted the drug's profile among the public. While scientists and physicians largely remained skeptical, some nations adopted ivermectin as part of their pandemic-control efforts. Some people, desperate to use ivermectin without a prescription, took veterinary preparations, which led to shortages of supplies of ivermectin for animal treatment. The FDA responded to this situation by saying "You are not a horse" in a tweet to draw attention to the issue, for which they were later sued by three ivermectin-prescribing doctors.
Subsequent research failed to confirm the utility of ivermectin for COVID-19, and in 2021 it emerged that many of the studies demonstrating benefit were faulty, misleading, or fraudulent. Nevertheless, misinformation about ivermectin continued to be propagated on social media and the drug remained a cause célèbre for anti-vaccinationists and conspiracy theorists.
Research
Some in vitro drug screening studies early in the pandemic showed that ivermectin has antiviral effects against several distinct positive-sense single-strand RNA viruses, including SARS-CoV-2. Subsequent studies found that ivermectin could inhibit replication of SARS-CoV-2 in monkey kidney cell culture with an IC50 of 2.2–2.8 μM.
However, doses much higher than the maximum approved or safely achievable for use in humans would be required for an antiviral effect while treating COVID-19. Aside from practical difficulties, such high doses are not covered by current human-use approvals of the drug and may be toxic, as the antiviral mechanism of action is believed to be via the suppression of a host cellular process, specifically the inhibition of nuclear transport by importin α/β1. Several other drugs which inhibit importin α/β1 at therapeutic doses have failed clinical trials due to systemic toxicity and a narrow therapeutic window.
To resolve uncertainties from previous small or poor-quality studies, large-scale trials were underway in the United States and the United Kingdom. A large randomised controlled trial, ACTIV-6, published in October 2022, found ivermectin was not effective as a COVID-19 treatment.
Research limitations, ethics and fraud
Many studies on ivermectin for COVID‑19 have serious methodological limitations, resulting in very low evidence certainty. Several publications that supported the efficacy of ivermectin for COVID‑19 have been retracted due to errors, unverifiable data, and ethical concerns.
Several high-profile publications purporting to demonstrate reduced mortality in COVID-19 patients were later retracted due to suspected data falsification. This only added to confusion among the media and lay public, as these publications had been widely cited by ivermectin supporters and included in meta-analyses.
In January 2022, 22 inmates at the Washington County Detention Center in Arkansas filed a lawsuit over hundreds of ivermectin pills given to them as "vitamins" in 2020.
In February 2022, the American Journal of Therapeutics issued expressions of concern against two positive systematic reviews of ivermectin for COVID-19 which it had published in 2021, because of suspicions about the underlying data that would undermine these papers' findings of benefit.
In Mexico City the government distributed ivermectin widely as a COVID-19 treatment and published the observed results on the SocArXiv archive as a research paper. The paper was subsequently withdrawn by the archive citing concerns that it was unethical, as it effectively was an experiment carried out on people without gaining informed consent. Philip N. Cohen of the SocArXiv steering committee said "the article is of very poor quality or deliberately false and misleading" and that its removal was justified to prevent public harm.
Clinical guidance
In February 2021, Merck, the developer of the drug, issued a statement saying that there is no good evidence ivermectin is effective against COVID‑19 and that attempting such use may be unsafe.
After reviewing the evidence on ivermectin, the European Medicines Agency (EMA) advised against its use for prevention or treatment of COVID‑19 and that "the available data do not support its use for COVID‑19 outside well-designed clinical trials." Consequently, ivermectin is not authorized for use to treat COVID‑19 within the European Union.
Ivermectin is not approved by the U.S. Food and Drug Administration (FDA) for use in treating any viral illness, and the U.S. National Institutes of Health COVID‑19 Treatment Guidelines state that there is insufficient evidence for ivermectin to allow for a recommendation for or against its use.
In the United Kingdom, the national COVID‑19 Therapeutics Advisory Panel determined that the evidence base and plausibility of ivermectin as a COVID‑19 treatment were insufficient to pursue further investigations.
In November 2023, the WHO updated its treatment guidelines to recommend strongly against the use of ivermectin as a COVID-19 treatment, due to a lack of research evidence or biological plausibility.
The Brazilian Health Regulatory Agency, Brazilian Society of Infectious Diseases, and Brazilian Thoracic Society issued position statements advising against the use of ivermectin for prevention or treatment of early-stage COVID‑19.
COVID-19 and strongyloidiasis
There is one very specific circumstance in which ivermectin may be useful in the management of COVID-19. People infected with the Strongyloides stercoralis parasite are at risk for strongyloides hyperinfection syndrome (SHS) — a condition with a mortality rate as high as 90% — if given corticosteroids to treat COVID-19. Strongyloidiasis affects as many as 370 million people worldwide, and it is usually subclinical or even asymptomatic. However, it can become fatal in the setting of SHS, which can be triggered by the immunosuppression that results from the administration of corticosteroids. In fact, multiple cases of SHS have been reported after the use of corticosteroids in the management of COVID-19 pneumonia. For this reason, the World Health Organization (WHO), the European Centre for Disease Prevention and Control (ECDC), the Public Health Agency of Canada (PHAC) and the United States Centers for Disease Control and Prevention (CDC) all recommend presumptive treatment for strongyloidiasis with ivermectin in people at high or moderate risk of SHS before or in conjunction with corticosteroids in the management of COVID-19. People who were born, resided, or had long-term travel in Southeast Asia, Oceania, sub-Saharan Africa, South America, or the Caribbean are considered to be at high risk for SHS, while people from Central America, Eastern Europe, the Mediterranean, Mexico, Middle East, North Africa, and the Indian subcontinent are considered to be at moderate risk. In such cases, ivermectin is a treatment for strongyloidiasis, not for COVID-19.
Regulatory status and off-label use
Misinformation, lower degrees of trust, and a sense of despair over increasing case and death counts have led to an increase in ivermectin's use in Central and Eastern Europe, Latin America, and South Africa. A black market has also developed in many of these countries where official approval has not been granted.
The viral social media misinformation about ivermectin has gained particular attention in South Africa where an anti-vaccination group called "South Africa Has A Right To Ivermectin" has been lobbying for the drug to be made available for prescription. Another group, the "Ivermectin Interest Group" launched a court case against the South African Health Products Regulatory Authority (SAHPRA), and as a result a compassionate use exemption was granted. SAHPRA stated in April 2021 that "At present, there are no approved treatments for COVID-19 infections." In September 2021, SAHPRA repeated warnings against fake news and misinformation and took up the FDA's stance about ivermectin. Due to lacking evidence of efficacy and growing body of retracted pro-ivermectin papers, SAHPRA revoked the compassionate use program in May 2022.
Despite the absence of high-quality evidence to suggest any efficacy and advice to the contrary, some governments have allowed its off-label use for the prevention and treatment of COVID‑19. Countries that have granted such official approval for ivermectin include the Czech Republic, Slovakia, Mexico, Peru (later rescinded), India (later rescinded), and the Philippines. Cities that have launched campaigns of massive distribution of ivermectin include Cali, Colombia; and Itajai, Brazil.
In Arkansas in 2021, a prison doctor prescribed ivermectin for inmates without their consent. A legal action brought on the inmates' behalf by the American Civil Liberties Union (ACLU) was settled with the prison authorities paying compensation. The ACLU said the outcome was "victory for civil rights and medical ethics".
Ivermectin is not approved by the U.S. Food and Drug Administration (FDA) for use in treating any viral illness and is not authorized for use to treat COVID-19 within the European Union. After reviewing the evidence on ivermectin, the EMA said that "the available data do not support its use for COVID-19 outside well-designed clinical trials". The World Health Organization also said that ivermectin should not be used to treat COVID-19 except in a clinical trial. The Brazilian Health Regulatory Agency, Brazilian Society of Infectious Diseases, and Brazilian Thoracic Society issued position statements advising against the use of ivermectin for prevention or treatment of early-stage COVID-19.
Several Latin American government health organizations recommended ivermectin as a COVID-19 treatment based, in part, on preprints and anecdotal evidence; these recommendations were later denounced by the Pan American Health Organization.
In the United States, an analysis of prescribing data suggested the influence of political affiliation, as Republican-voting areas saw a pronounced surge in ivermectin (and hydroxychloroquine) prescription in 2020.
Human use of veterinary products
As people began using veterinary preparations of ivermectin for personal use, stocks began to decline, requiring vendors to ration their sales and raise prices. In the United States, supplies of horse dewormer paste began to run low as people used it for themselves; some vendors required their customers to show a picture of themselves and their horses together, to provide assurance they were purchasing the paste for animal use.
In August 2021 the CDC issued a health alert prompted by a sharp rise in calls to poison control centres about ivermectin poisoning. The CDC described two cases requiring hospitalization; in one, a person had drunk an injectable ivermectin product intended for use in cattle.
In August 2021, the FDA tweeted "You are not a horse. You are not a cow. Seriously, y'all. Stop it". Following a legal challenge from ivermectin-prescribing doctors, in August 2023 a US court found the FDA had exceeded its authority by posting the tweet, which they said amounted to medical advice, and that doctors could prescribe whatever they wanted. Remarks made during the legal proceedings were misrepresented on social media to claim that the FDA had somehow reversed its position on ivermectin and COVID-19, which in reality remained unchanged. In March 2024 the FDA settled outstanding litigation and removed all social media posts that could be construed as giving medical advice and thus exceeding its statutory authority, while re-iterating that its position remained unchanged and that "currently available clinical trial data do not demonstrate that ivermectin is effective against COVID-19".
Intellectual property and economics
As the patent on ivermectin has expired, generic drug manufacturers have been able to enjoy significantly increased revenue prompted by the spike in demand. One Brazilian company, Vitamedic Industria Farmaceutica, saw its annual revenue from ivermectin sales increase to $85 million in 2020, a more than fivefold increase.
In Australia in 2020 Thomas Borody, a professor and gastroenterologist, announced that he had discovered a "cure" for COVID-19: a combination of ivermectin, doxycycline and zinc. In a media interview Borody stated "The biggest thing about this is no one will make money from this". It later emerged that Topelia Australia, Borody's company, had filed a patent for the drug combination. Borody was accused of not adequately disclosing his conflict of interest.
In October 2021 a large network of companies selling hydroxychloroquine and ivermectin was disclosed in the US, targeting primarily right-wing and vaccine-hesitant groups through social media and conspiracy videos by anti-vaccine activists such as Simone Gold. The network had 72,000 customers who collectively paid $15 million for consultations and medications.
Misinformation and advocacy
Ivermectin became a cause célèbre for right-wing figures promoting it as a supposed COVID treatment. Misinformation about ivermectin's efficacy spread widely on social media, fueled by publications that have since been retracted, misleading "meta-analysis" websites with substandard methods, and conspiracy theories about efforts by governments and scientists to "suppress the evidence."
Social media advocacy
Ivermectin has been championed by a number of social media influencers.
American podcaster and author Bret Weinstein took ivermectin during a livestream video and said both he and his wife Heather Heying had not been vaccinated because of their fears concerning COVID-19 vaccines. In response, YouTube demonetized the channel.
In the United Kingdom, retired nurse educator and YouTuber John Campbell has posted videos carrying false claims about the use of ivermectin in Japan as a possible cause of a "miracle" decline in cases. In reality, there is no evidence of ivermectin use in Japan and it is not approved as a COVID-19 treatment. In February 2022, reports also appeared falsely claiming that the Japanese company Kowa had been able to evidence the efficacy of ivermectin in a phase III trial.
Misleading meta-analysis websites
During the pandemic, several misleading websites appeared purporting to show meta-analyses of clinical evidence in favor of ivermectin's use in treating COVID-19. The sites in question had anonymous owners, multiple domains which redirected to the same content, and used many colourful, but misleading, graphics to communicate their point. The web servers used for these sites are the same as those previously used to spread misinformation about hydroxychloroquine.
While these sites gained traction among many non-scientists on social media, they also violated many of the basic norms of meta-analysis methodology. Notably, many of these sites included studies with widely different dosages of the treatment, an open-label design (in which experimenters and participants both know who is in the control group), poor-quality control groups (such as another untested treatment which may worsen outcomes), or no control group at all. Another issue is the inclusion of multiple ad-hoc unpublished trials which did not undergo peer-review, and which had different incompatible outcome measures. Such methodological problems are known to distort the findings of meta-analyses and cause spurious or false findings. The misinformation communicated by these sites created confusion among the public and policymakers.
Fake endorsements
On Twitter, a tweet spread with a photograph of William C. Campbell, the co-inventor of ivermectin, alongside a fabricated quotation saying that he endorsed ivermectin as a COVID treatment. Campbell reacted by saying "I utterly despise and deny the remarks attributed to me on social media" adding that his field of expertise was not virology so he would never comment in such a way.
In February 2022 a report was broadcast by Australia's Nine Network about Queen Elizabeth II having COVID-19. The segment featured Mukesh Haikerwal and included an intercut image of a box of ivermectin tablets, leading antivaxxers to spread the idea via social media that ivermectin was being specially used, as a "treatment fit for a queen". Haikerwal stated that he rejected ivermectin as a COVID-19 treatment, and the network issued an apology to him, saying the ivermectin image has been included "as a result of human error".
Scientists targeted
In July 2021 Andrew Hill, a senior research fellow at Liverpool University, published a meta-analysis of ivermectin use for COVID which suggested it may be beneficial. However, as research fraud subsequently emerged in some studies included in the meta-analysis, Hill revised his analysis to discount the suspect evidence, and found the apparent success of ivermectin evaporated as a result. Writing for The Guardian, Hill recounted how the revision led to him being attacked on social media as being supposedly in the pay of Bill Gates, and how he was sent photos of coffins and hanged nazis.
Epidemiologist Gideon Meyerowitz-Katz has identified ivermectin as being one of the most politicized topics in the pandemic, alongside vaccination. Meyerowitz-Katz has used social media to publicize flaws in ivermectin research and as a result, he says, has received more death threats than for any other topic he has engaged with.
Front Line COVID-19 Critical Care Alliance
In December 2020, the chair of the US Senate Homeland Security Committee, Ron Johnson, used a Senate hearing to promote fringe theories about, and unproven treatments for, COVID-19, including ivermectin. Among the witnesses was Pierre Kory, a pulmonary and critical care doctor, who erroneously described ivermectin as "miraculous" and a "wonder drug" to be used against COVID-19. Video footage of his statements went viral on social media, receiving over one million views as of 11 December 2020.
In the United States, the use of ivermectin for COVID-19 is championed by an organization led by Kory called the Front Line COVID-19 Critical Care Alliance (FLCCC), which promotes "the global movement to move #Ivermectin into the mainstream". The effort went viral on social media, where it was adopted by COVID deniers, anti-vaccination proponents, and conspiracy theorists. A review article by FLCCC members on the efficacy of ivermectin, which had been provisionally accepted by the journal Frontiers in Pharmacology, was subsequently rejected on account of what the publisher called "a series of strong, unsupported claims based on studies with insufficient statistical significance", meaning that the article did "not offer an objective [or] balanced scientific contribution to the evaluation of ivermectin as a potential treatment for COVID-19". David Gorski wrote that the narrative of ivermectin as a "miracle cure" for COVID-19 is a "metastasized" version of a similar conspiracy theory around the drug hydroxychloroquine, in which unspecified powers are thought to be suppressing news of the drug's effectiveness for their own profit.
Pfizer's drug development
Conspiracy theorists on the internet have claimed that Pfizer's anti-COVID-19 drug Paxlovid is merely "repackaged ivermectin". Their claims are based on a narrative that Pfizer is suppressing the true benefits of ivermectin, and rely on superficial correspondences between the drugs and a misunderstanding of their respective pharmacokinetics. Paxlovid is a combination drug of two small-molecule antiviral compounds (nirmatrelvir and ritonavir) which have no connection to ivermectin.
Aftermath
The widespread misconduct found in ivermectin/COVID-19 research has prompted introspection within the scientific community.
Australian epidemiologist Gideon Meyerowitz-Katz wrote, "There are no two ways about it: Science is flawed". Meyerowitz-Katz estimates that as of December 2021, credence in flawed research had led to ivermectin being perhaps the most used medication worldwide during the pandemic and that the scale of the problem suggested a radical rethink was needed of how medical research was assessed.
See also
Big Pharma conspiracy theories
COVID-19 drug repurposing research
COVID-19 misinformation
Chloroquine and hydroxychloroquine during the COVID-19 pandemic
References
Communication of falsehoods
Conspiracy theories
COVID-19 misinformation
Fake news
Impact of the COVID-19 pandemic on journalism
Health-related conspiracy theories
Misinformation
Pseudoscience
Vaccine hesitancy
fr:Développement et recherche de médicaments contre la Covid-19#Ivermectine | Ivermectin during the COVID-19 pandemic | [
"Technology"
] | 4,635 | [
"Health-related conspiracy theories",
"Science and technology-related conspiracy theories"
] |
69,585,537 | https://en.wikipedia.org/wiki/FK962 | FK962 is a compound which acts as an enhancer of somatostatin release. It stimulates nerve growth and neurite elongation, and has been researched in animal models for potential applications in the treatment of conditions such as Alzheimer's disease and retinal neuropathy.
See also
Octreotide
Pasireotide
Sunifiram (structural similarity)
References
Pharmacology
Acetamides
Benzamides
Piperidines
4-Fluorophenyl compounds | FK962 | [
"Chemistry"
] | 102 | [
"Pharmacology",
"Medicinal chemistry"
] |
77,089,548 | https://en.wikipedia.org/wiki/N-Propylmagnesium%20bromide | n-Propylmagnesium bromide, often referred to as simply propylmagnesium bromide, is an organomagnesium compound with the chemical formula . As the Grignard reagent derived from 1-bromopropane, it is used for the n-propylation of electrophiles in organic synthesis.
Properties
Like all Grignard reagents, propylmagnesium bromide is a strong nucleophile, sensitive to both water and air.
The propylmagnesium halides are the simplest Grignard reagents to exhibit isomerism. Isopropylmagnesium chloride is the primary synthetic equivalent of the isopropyl group.
n-Propylmagnesium bromide is soluble in ether, tetrahydrofuran, and toluene.
Synthesis
Synthesis is analogous to other saturated alkyl Grignard reagents. A solution of 1-bromopropane in an ethereal solvent, typically diethyl ether or tetrahydrofuran, is treated with magnesium, which inserts itself into the organohalogen bond:

CH3CH2CH2Br + Mg → CH3CH2CH2MgBr

As both the magnesium metal and the product are sensitive to water, the reaction must take place under anhydrous conditions.
While the product is often portrayed as simply CH3CH2CH2MgBr, in reality it will quickly form a tetrahedral coordination complex with the Lewis-basic solvent, centred on the magnesium atom; in diethyl ether, for example, the solvate can be written CH3CH2CH2MgBr(OEt2)2.
Applications
Propylmagnesium bromide is used in the Grignard reaction to introduce n-propyl groups to electrophiles.
References
Organomagnesium compounds | N-Propylmagnesium bromide | [
"Chemistry"
] | 332 | [
"Organomagnesium compounds",
"Reagents for organic chemistry"
] |